bfast
cran
R
Package ‘bfast’ October 12, 2022
Version 1.6.1
Title Breaks for Additive Season and Trend
Description Decomposition of time series into trend, seasonal, and remainder components with methods for detecting and characterizing abrupt changes within the trend and seasonal components. 'BFAST' can be used to analyze different types of satellite image time series and can be applied to other disciplines dealing with seasonal or non-seasonal time series, such as hydrology, climatology, and econometrics. The algorithm can be extended to label detected changes with information on the parameters of the fitted piecewise linear models. 'BFAST' monitoring functionality is described in Verbesselt et al. (2010) <doi:10.1016/j.rse.2009.08.014>. 'BFAST monitor' provides functionality to detect disturbance in near real-time based on 'BFAST'-type models, and is described in Verbesselt et al. (2012) <doi:10.1016/j.rse.2012.02.022>. The 'BFAST Lite' approach is a flexible method that handles missing data without interpolation, and will be described in an upcoming paper. Furthermore, different models can now be used to fit the time series data and detect structural changes (breaks).
Depends R (>= 3.0.0), strucchangeRcpp
Imports graphics, stats, zoo, forecast, Rcpp (>= 0.12.7), Rdpack (>= 0.7)
Suggests MASS, sfsmisc, stlplus, raster
License GPL (>= 2)
URL https://bfast2.github.io/
BugReports https://github.com/bfast2/bfast/issues
LazyLoad yes
LazyData yes
LinkingTo Rcpp
RoxygenNote 7.1.1
RdMacros Rdpack
NeedsCompilation yes
Author <NAME> [aut], <NAME> [aut, cre] (<https://orcid.org/0000-0001-5654-1277>), <NAME> [aut], <NAME> [ctb], <NAME> [aut], <NAME> [ctb], <NAME> [ctb] (<https://orcid.org/0000-0003-3654-2090>), <NAME> [ctb], Dongdong Kong [ctb] (<https://orcid.org/0000-0003-1836-8172>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-05-10 14:12:11 UTC

R topics documented: bfast-package, .bfast_cpp_closestfrom, bfast, bfast01, bfast01classify, bfastlite, bfastmonitor, bfastpp, bfastts, create16dayts-deprecated, dates, harvest, modisraster, ndvi, plot.bfast, plot.bfastlite, setoptions, simts, som

bfast-package Breaks For Additive Season and Trend (BFAST)

Description
BFAST integrates the decomposition of time series into trend, seasonal, and remainder components with methods for detecting and characterizing abrupt changes within the trend and seasonal components. BFAST can be used to analyze different types of satellite image time series and can be applied to other disciplines dealing with seasonal or non-seasonal time series, such as hydrology, climatology, and econometrics. The algorithm can be extended to label detected changes with information on the parameters of the fitted piecewise linear models.
Additionally, monitoring disturbances in BFAST-type models at the end of time series (i.e., in near real-time) is available: based on a model for stable historical behaviour, abnormal changes within newly acquired data can be detected. Different models are available for modeling the stable historical behavior. A season-trend model (with harmonic seasonal pattern) is used as a default in the regression modelling.

Details
The package contains:
• bfast(): Main function for iterative decomposition and break detection as described in Verbesselt et al. (2010a,b).
• bfastlite(): Lightweight and fast detection of all breaks in a time series using a single iteration with all components at once.
• bfastmonitor(): Monitoring approach for detecting disturbances in near real-time (see Verbesselt et al. 2012).
• bfastpp(): Data pre-processing for BFAST-type modeling.
• Functions for plotting and printing, see bfast().
• simts: Artificial example data set.
• harvest: NDVI time series of a P. radiata plantation that is harvested.
• som: NDVI time series of locations in the south of Somalia to illustrate the near real-time disturbance approach.

Package options
bfast uses the following options to modify the default behaviour:
• bfast.prefer_matrix_methods: logical value defining whether methods should try to use the design matrix instead of the formula and a data frame whenever possible. This can avoid expensive repeated calls of model.matrix and model.frame and make model fitting faster using lm.fit.
• bfast.use_bfastts_modifications: logical value defining whether a faster version of bfastts() should be used.
• strucchange.use_armadillo: logical value defining whether to use C++ optimised code paths in strucchangeRcpp.
By default, all three are enabled. See set_fallback_options() for a convenient interface for setting them all off for debugging purposes.

References
<NAME>, <NAME>, <NAME> (2012). “Near real-time disturbance detection using satellite image time series.” Remote Sensing of Environment, 123, 98–108. ISSN 0034-4257, doi: 10.1016/j.rse.2012.02.022, https://doi.org/10.1016/j.rse.2012.02.022.
<NAME>, <NAME>, <NAME>, <NAME> (2010). “Detecting trend and seasonal changes in satellite image time series.” Remote Sensing of Environment, 114(1), 106–115. ISSN 0034-4257, doi: 10.1016/j.rse.2009.08.014, https://doi.org/10.1016/j.rse.2009.08.014.
<NAME>, <NAME>, <NAME>, <NAME> (2010). “Phenological change detection while accounting for abrupt and gradual trends in satellite image time series.” Remote Sensing of Environment, 114(12), 2970–2980. ISSN 0034-4257, doi: 10.1016/j.rse.2010.08.003, https://doi.org/10.1016/j.rse.2010.08.003.

.bfast_cpp_closestfrom For all elements of a vector a, find the closest elements in a vector b and return the resulting indexes

Description
For all elements of a vector a, find the closest elements in a vector b and return the resulting indexes.

Usage
.bfast_cpp_closestfrom(a, b, twosided)

Arguments
a numeric vector, ordered
b numeric vector, ordered
twosided logical value; if false, indexes will always point to elements in b that are less than or equal to the corresponding elements in a, never greater.

Value
integer vector of the same size as a, with elements representing indexes pointing to the closest values in b

bfast Break Detection in the Seasonal and Trend Component of a Univariate Time Series

Description
Iterative break detection in the seasonal and trend component of a time series. bfast() combines the iterative decomposition of time series into trend, seasonal and remainder components with significant break detection in the decomposed components of the time series.

Usage
bfast(
  Yt,
  h = 0.15,
  season = c("dummy", "harmonic", "none"),
  max.iter = 10,
  breaks = NULL,
  hpc = "none",
  level = 0.05,
  decomp = c("stl", "stlplus"),
  type = "OLS-MOSUM",
  ...
)

Arguments
Yt univariate time series to be analyzed. This should be an object of class "ts" with a frequency greater than one.
h minimal segment size between potentially detected breaks in the trend model, given as a fraction relative to the sample size (i.e. the minimal number of observations in each segment divided by the total length of the time series).
season the seasonal model used to fit the seasonal component and detect seasonal breaks (i.e. significant phenological change). There are three options: "dummy", "har- monic", or "none" where "dummy" is the model proposed in the first Remote Sensing of Environment paper and "harmonic" is the model used in the second Remote Sensing of Environment paper (See paper for more details) and where "none" indicates that no seasonal model will be fitted (i.e. St = 0 ). If there is no seasonal cycle (e.g. frequency of the time series is 1) "none" can be selected to avoid fitting a seasonal model. max.iter maximum amount of iterations allowed for estimation of breakpoints in seasonal and trend component. breaks integer specifying the maximal number of breaks to be calculated. By default the maximal number allowed by h is used. hpc A character specifying the high performance computing support. Default is "none", can be set to "foreach". Install the "foreach" package for hpc support. level numeric; threshold value for the sctest.efp test; if a length 2 vector is passed, the first value is used for the trend, the second for the seasonality decomp "stlplus" or "stl": the function to use for decomposition. stl can handle sparse time series (1 < frequency < 4), stlplus can handle NA values in the time series. type character, indicating the type argument to efp ... additional arguments passed to stlplus::stlplus(), if its usage has been en- abled, otherwise ignored. Details The algorithm decomposes the input time series Yt into three components: trend, seasonality and remainder, using the function defined by the decomp parameter. Then each component is checked for at least one significant break using strucchangeRcpp::efp(), and if there is one, strucchangeRcpp::breakpoints() is run on the component. The result allows differentiating between breaks in trend and seasonality. Value An object of the class "bfast" is a list with the following elements: Yt equals the Yt used as input. output is a list with the following elements (for each iteration): Tt the fitted trend component St the fitted seasonal component Nt the noise or remainder component Vt equals the deseasonalized data Yt - St for each iteration bp.Vt output of the breakpoints function for the trend model. Note that the output breakpoints are index numbers of na.o ci.Vt output of the breakpoints confint function for the trend model Wt equals the detrended data Yt - Tt for each iteration bp.Wt output of the breakpoints function for the seasonal model. Note that the output breakpoints are index numbers of n ci.Wt output of the breakpoints confint function for the seasonal model nobp is a list with the following elements: nobp.Vt logical, TRUE if there are no breakpoints detected nobp.Wt logical, TRUE if there are no breakpoints detected Magnitude magnitude of the biggest change detected in the trend component Time timing of the biggest change detected in the trend component Author(s) <NAME> References <NAME>, <NAME>, <NAME>, Culvenor D (2010). “Detecting trend and seasonal changes in satellite image time series.” Remote Sensing of Environment, 114(1), 106–115. ISSN 0034-4257, doi: 10.1016/j.rse.2009.08.014, https://doi.org/10.1016/j.rse.2009.08.014. <NAME>, <NAME>, <NAME>, <NAME> (2010). “Phenological change detection while accounting for abrupt and gradual trends in satellite image time series.” Remote Sensing of Envi- ronment, 114(12), 2970–2980. ISSN 0034-4257, doi: 10.1016/j.rse.2010.08.003, https://doi. org/10.1016/j.rse.2010.08.003. 
See Also plot.bfast for plotting of bfast() results. breakpoints for more examples and background information about estimation of breakpoints in time series. Examples ## Simulated Data plot(simts) # stl object containing simulated NDVI time series datats <- ts(rowSums(simts$time.series)) # sum of all the components (season,abrupt,remainder) tsp(datats) <- tsp(simts$time.series) # assign correct time series attributes plot(datats) fit <- bfast(datats, h = 0.15, season = "dummy", max.iter = 1) plot(fit, sim = simts) fit # prints out whether breakpoints are detected # in the seasonal and trend component ## Real data ## The data should be a regular ts() object without NA's ## See Fig. 8 b in reference plot(harvest, ylab = "NDVI") # MODIS 16-day cleaned and interpolated NDVI time series (rdist <- 10/length(harvest)) # ratio of distance between breaks (time steps) and length of the time series fit <- bfast(harvest, h = rdist, season = "harmonic", max.iter = 1, breaks = 2) plot(fit) ## plot anova and slope of the trend identified trend segments plot(fit, ANOVA = TRUE) ## plot the trend component and identify the break with ## the largest magnitude of change plot(fit, type = "trend", largest = TRUE) ## plot all the different available plots plot(fit, type = "all") ## output niter <- length(fit$output) # nr of iterations out <- fit$output[[niter]] # output of results of the final fitted seasonal and trend models and ## #nr of breakpoints in both. ## running bfast on yearly data t <- ts(as.numeric(harvest), frequency = 1, start = 2006) fit <- bfast(t, h = 0.23, season = "none", max.iter = 1) plot(fit) fit ## handling missing values with stlplus (NDVIa <- as.ts(zoo::zoo(som$NDVI.a, som$Time))) fit <- bfast(NDVIa, season = "harmonic", max.iter = 1, decomp = "stlplus") plot(fit) fit bfast01 Checking for one major break in the time series Description A function to select a suitable model for the data by choosing either a model with 0 or with 1 breakpoint. Usage bfast01( data, formula = NULL, test = "OLS-MOSUM", level = 0.05, aggregate = all, trim = NULL, bandwidth = 0.15, functional = "max", order = 3, lag = NULL, slag = NULL, na.action = na.omit, reg = c("lm", "rlm"), stl = "none", sbins = 1 ) Arguments data A time series of class ts, or another object that can be coerced to such. The time series is processed by bfastpp. A time series of class ts can be prepared by a convenience function bfastts in case of daily, 10 or 16-daily time series. formula formula for the regression model. The default is intelligently guessed based on the arguments order/lag/slag i.e. response ~ trend + harmon, i.e., a linear trend and a harmonic season component. Other specifications are possible using all terms set up by bfastpp, i.e., season (seasonal pattern with dummy vari- ables), lag (autoregressive terms), slag (seasonal autoregressiv terms), or xreg (further covariates). See bfastpp for details. test character specifying the type of test(s) performed. Can be one or more of BIC, supLM, supF, OLS-MOSUM, ..., or any other test supported by sctest.formula level numeric. Significance for the sctest.formula performed. aggregate function that aggregates a logical vector to a single value. This is used for ag- gregating the individual test decisions from test to a single one. trim numeric. The mimimal segment size passed to the from argument of the Fstats function. bandwidth numeric scalar from interval (0,1), functional. The bandwidth argument is passed to the h argument of the sctest.formula. 
functional arguments passed on to sctest.formula order numeric. Order of the harmonic term, defaulting to 3. lag numeric. Order of the autoregressive term, by default omitted. slag numeric. Order of the seasonal autoregressive term, by default omitted. na.action arguments passed on to bfastpp reg whether to use OLS regression lm() or robust regression MASS::rlm(). stl argument passed on to bfastpp sbins argument passed on to bfastpp Details bfast01 tries to select a suitable model for the data by choosing either a model with 0 or with 1 breakpoint. It proceeds in the following steps: 1. The data is preprocessed with bfastpp using the arguments order/lag/slag/na.action/stl/sbins. 2. A linear model with the given formula is fitted. By default a suitable formula is guessed based on the preprocessing parameters. 3. The model with 1 breakpoint is estimated as well where the breakpoint is chosen to minimize the segmented residual sum of squares. 4. A sequence of tests for the null hypothesis of zero breaks is performed. Each test results in a decision for FALSE (no breaks) or TRUE (structural break(s)). The test decisions are then aggregated to a single decision (by default using all() but any() or some other function could also be used). Available methods for the object returned include standard methods for linear models (coef, fit- ted, residuals, predict, AIC, BIC, logLik, deviance, nobs, model.matrix, model.frame), standard methods for breakpoints (breakpoints, breakdates), coercion to a zoo series with the decomposed components (as.zoo), and a plot method which plots such a zoo series along with the confidence interval (if the 1-break model is visualized). All methods take a ’breaks’ argument which can either be 0 or 1. By default the value chosen based on the ’test’ decisions is used. Note that the different tests supported have power for different types of alternatives. Some tests (such as supLM/supF or BIC) assess changes in all coefficients of the model while residual-based tests (e.g., OLS-CUSUM or OLS-MOSUM) assess changes in the conditional mean. See Zeileis (2005) for a unifying view. Value bfast01 returns a list of class "bfast01" with the following elements: call the original function call. data the data preprocessed by "bfastpp". formula the model formulae. breaks the number of breaks chosen based on the test decision (either 0 or 1). test the individual test decisions. breakpoints the optimal breakpoint for the model with 1 break. model A list of two ’lm’ objects with no and one breaks, respectively. Author(s) <NAME>, <NAME> References <NAME>, <NAME>, <NAME>, <NAME> (2013). “Shifts in Global Vegetation Activity Trends.” Remote Sensing, 5(3), 1117–1133. ISSN 2072-4292, doi: 10.3390/rs5031117, https: //doi.org/10.3390/rs5031117. Zeileis A (2005). “A Unified Approach to Structural Change Tests Based on ML Scores, F Statis- tics, and OLS Residuals.” Econometric Reviews, 24(4), 445–466. ISSN 0747-4938, doi: 10.1080/ 07474930500406053, https://doi.org/10.1080/07474930500406053. 
See Also bfastmonitor, breakpoints Examples library(zoo) ## define a regular time series ndvi <- as.ts(zoo(som$NDVI.a, som$Time)) ## fit variations bf1 <- bfast01(ndvi) bf2 <- bfast01(ndvi, test = c("BIC", "OLS-MOSUM", "supLM"), aggregate = any) bf3 <- bfast01(ndvi, test = c("OLS-MOSUM", "supLM"), aggregate = any, bandwidth = 0.11) ## inspect test decisions bf1$test bf1$breaks bf2$test bf2$breaks bf3$test bf3$breaks ## look at coefficients coef(bf1) coef(bf1, breaks = 0) coef(bf1, breaks = 1) ## zoo series with all components plot(as.zoo(ndvi)) plot(as.zoo(bf1, breaks = 1)) plot(as.zoo(bf2)) plot(as.zoo(bf3)) ## leveraged by plot method plot(bf1, regular = TRUE) plot(bf2) plot(bf2, plot.type = "multiple", which = c("response", "trend", "season"), screens = c(1, 1, 2)) plot(bf3) bfast01classify Change type analysis of the bfast01 function Description A function to determine the change type Usage bfast01classify( object, alpha = 0.05, pct_stable = NULL, typology = c("standard", "drylands") ) Arguments object bfast01 object, i.e. the output of the bfast01 function. alpha threshold for significance tests, default 0.05 pct_stable threshold for segment stability, unit: percent change per unit time (0-100), de- fault NULL typology classification legend to use: standard refers to the original legend as used in De Jong et al. (2013), drylands refers to the legend used in Bernardino et al. (2020). Details bfast01classify Value bfast01classify returns a data.frame with the following elements: flag_type Type of shift: (1) monotonic increase, (2) monotonic decrease, (3) monotonic increase (with positive break), (4) monotonic decrease (with negative break), (5) interruption: increase with negative break, (6) interruption: decrease with posi- tive break, (7) reversal: increase to decrease, (8) reversal: decrease to increase flag_significance SIGNIFICANCE FLAG: (0) both segments significant (or no break and signif- icant), (1) only first segment significant, (2) only 2nd segment significant, (3) both segments insignificant (or no break and not significant) flag_pct_stable STABILITY FLAG: (0) change in both segments is substantial (or no break and substantial), (1) only first segment substantial, (2) only 2nd segment substantial (3) both segments are stable (or no break and stable) and also significance and percentage of both segments before and after the potentially detected break: "p_segment1", "p_segment2", "pct_segment1", "pct_segment2". Author(s) <NAME>, <NAME> References <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2020). “Global- scale characterization of turning points in arid and semi-arid ecosystem functioning.” Global Ecol- ogy and Biogeography, 29(7), 1230–1245. doi: 10.1111/geb.13099, https://doi.org/10.1111/ geb.13099. <NAME>, <NAME>, <NAME>, <NAME> (2013). “Shifts in Global Vegetation Ac- tivity Trends.” Remote Sensing, 5(3), 1117–1133. ISSN 2072-4292, doi: 10.3390/rs5031117, https://doi.org/10.3390/rs5031117. See Also bfast01 Examples library(zoo) ## define a regular time series ndvi <- as.ts(zoo(som$NDVI.a, som$Time)) ## fit variations bf1 <- bfast01(ndvi) bfast01classify(bf1, pct_stable = 0.25) bfastlite Detect multiple breaks in a time series Description A combination of bfastpp and breakpoints to do light-weight detection of multiple breaks in a time series while also being able to deal with NA values by excluding them via bfastpp. 
Usage bfastlite( data, formula = response ~ trend + harmon, order = 3, breaks = "LWZ", lag = NULL, slag = NULL, na.action = na.omit, stl = c("none", "trend", "seasonal", "both"), decomp = c("stl", "stlplus"), sbins = 1, ... ) bfast0n( data, formula = response ~ trend + harmon, order = 3, breaks = "LWZ", lag = NULL, slag = NULL, na.action = na.omit, stl = c("none", "trend", "seasonal", "both"), decomp = c("stl", "stlplus"), sbins = 1, ... ) Arguments data A time series of class ts, or another object that can be coerced to such. For seasonal components, a frequency greater than 1 is required. formula a symbolic description for the model in which breakpoints will be estimated. order numeric. Order of the harmonic term, defaulting to 3. breaks either a positive integer specifying the maximal number of breaks to be calcu- lated, or a string specifying the information criterion to use to automatically determine the optimal number of breaks (see also logLik. "all" means the maximal number allowed by h is used. NULL is treated as the default of the breakpoints function (i.e. BIC). lag numeric. Orders of the autoregressive term, by default omitted. slag numeric. Orders of the seasonal autoregressive term, by default omitted. na.action function for handling NAs in the data (after all other preprocessing). stl character. Prior to all other preprocessing, STL (season-trend decomposition via LOESS smoothing) can be employed for trend-adjustment and/or season- adjustment. The "trend" or "seasonal" component or both from stl are re- moved from each column in data. By default ("none"), no STL adjustment is used. decomp "stlplus" or "stl": use the NA-tolerant decomposition package or the reference package (which can make use of time series with 2-3 observations per year) sbins numeric. Controls the number of seasonal dummies. If integer > 1, sets the number of seasonal dummies to use per year. If <= 1, treated as a multiplier to the number of observations per year, i.e. ndummies = nobs/year * sbins. ... Additional arguments to breakpoints. Value An object of class bfastlite, with two elements: breakpoints output from breakpoints, containing information about the estimated break- points. data_pp preprocessed data as output by bfastpp. Author(s) <NAME>, <NAME> Examples plot(simts) # stl object containing simulated NDVI time series datats <- ts(rowSums(simts$time.series)) # sum of all the components (season,abrupt,remainder) tsp(datats) <- tsp(simts$time.series) # assign correct time series attributes plot(datats) # Detect breaks bp = bfastlite(datats) # Default method of estimating breakpoints bp[["breakpoints"]][["breakpoints"]] # Plot plot(bp) # Custom method of estimating number of breaks (request 2 breaks) strucchangeRcpp::breakpoints(bp[["breakpoints"]], breaks = 2) # Plot including magnitude based on RMSD for the cos1 component of harmonics plot(bp, magstat = "RMSD", magcomp = "harmoncos1", breaks = 2) bfastmonitor Near Real-Time Disturbance Detection Based on BFAST-Type Models Description Monitoring disturbances in time series models (with trend/season/regressor terms) at the end of time series (i.e., in near real-time). Based on a model for stable historical behaviour abnormal changes within newly acquired data can be detected. Different models are available for modeling the stable historical behavior. A season-trend model (with harmonic seasonal pattern) is used as a default in the regresssion modelling. 
Usage bfastmonitor( data, start, formula = response ~ trend + harmon, order = 3, lag = NULL, slag = NULL, history = c("ROC", "BP", "all"), type = "OLS-MOSUM", h = 0.25, end = 10, level = c(0.05, 0.05), hpc = "none", verbose = FALSE, plot = FALSE, sbins = 1 ) Arguments data A time series of class ts, or another object that can be coerced to such. For seasonal components, a frequency greater than 1 is required. start numeric. The starting date of the monitoring period. Can either be given as a float (e.g., 2000.5) or a vector giving period/cycle (e.g., c(2000, 7)). formula formula for the regression model. The default is response ~ trend + harmon, i.e., a linear trend and a harmonic season component. Other specifications are possible using all terms set up by bfastpp, i.e., season (seasonal pattern with dummy variables), lag (autoregressive terms), slag (seasonal autoregressive terms), or xreg (further covariates). See bfastpp for details. order numeric. Order of the harmonic term, defaulting to 3. lag numeric. Order of the autoregressive term, by default omitted. slag numeric. Order of the seasonal autoregressive term, by default omitted. history specification of the start of the stable history period. Can either be a charac- ter, numeric, or a function. If character, then selection is possible between reverse-ordered CUSUM ("ROC", default), Bai and Perron breakpoint estima- tion ("BP"), or all available observations ("all"). If numeric, the start date can be specified in the same form as start. If a function is supplied it is called as history(formula, data) to compute a numeric start date. type character specifying the type of monitoring process. By default, a MOSUM process based on OLS residuals is employed. See mefp for alternatives. h numeric scalar from interval (0,1) specifying the bandwidth relative to the sam- ple size in MOSUM/ME monitoring processes. end numeric. Maximum time (relative to the history period) that will be monitored (in MOSUM/ME processes). Default is 10 times the history period. level numeric vector. Significance levels of the monitoring and ROC (if selected) procedure, i.e., probability of type I error. hpc character specifying the high performance computing support. Default is "none", can be set to "foreach". See breakpoints for more details. verbose logical. Should information about the monitoring be printed during computa- tion? plot logical. Should the result be plotted? sbins numeric. Number of seasonal dummies, passed to bfastpp. Details bfastmonitor provides monitoring of disturbances (or structural changes) in near real-time based on a wide class of time series regression models with optional season/trend/autoregressive/covariate terms. See Verbesselt at al. (2011) for details. Based on a given time series (typically, but not necessarily, with frequency greater than 1), the data is first preprocessed for regression modeling. Trend/season/autoregressive/covariate terms are (optionally) computed using bfastpp. Second, the data is split into a history and monitoring period (starting with start). Third, a subset of the history period is determined which is considered to be stable (see also below). Fourth, a regression model is fitted to the preprocessed data in the stable history period. Fifth, a monitoring procedure is used to determine whether the observations in the monitoring period conform with this stable regression model or whether a change is detected. The regression model can be specified by the user. 
The default is to use a linear trend and a harmonic season: response ~ trend + harmon. However, all other terms set up by bfastpp can also be omitted/added, e.g., response ~ 1 (just a constant), response ~ season (seasonal dummies for each period), etc. Further terms precomputed by bfastpp can be lag (autoregressive terms of specified order), slag (seasonal autoregressive terms of specified order), xreg (covariates, if data has more than one column). For determining the size of the stable history period, various approaches are available. First, the user can set a start date based on subject-matter knowledge. Second, data-driven methods can be employed. By default, this is a reverse-ordered CUSUM test (ROC). Alternatively, breakpoints can be estimated (Bai and Perron method) and only the data after the last breakpoint are employed for the stable history. Finally, the user can also supply a function for his/her own data-driven method. Value bfastmonitor returns an object of class "bfastmonitor", i.e., a list with components as follows. data original "ts" time series, tspp preprocessed "data.frame" for regression modeling, model fitted "lm" model for the stable history period, mefp fitted "mefp" process for the monitoring period, history start and end time of history period, monitor start and end time of monitoring period, breakpoint breakpoint detected (if any). magnitude median of the difference between the data and the model prediction in the mon- itoring period. Author(s) <NAME>, <NAME> References <NAME>, <NAME>, <NAME> (2012). “Near real-time disturbance detection using satellite image time series.” Remote Sensing of Environment, 123, 98–108. ISSN 0034-4257, doi: 10.1016/ j.rse.2012.02.022, https://doi.org/10.1016/j.rse.2012.02.022. See Also monitor, mefp, breakpoints Examples NDVIa <- as.ts(zoo::zoo(som$NDVI.a, som$Time)) plot(NDVIa) ## apply the bfast monitor function on the data ## start of the monitoring period is c(2010, 13) ## and the ROC method is used as a method to automatically identify a stable history mona <- bfastmonitor(NDVIa, start = c(2010, 13)) mona plot(mona) ## fitted season-trend model in history period summary(mona$model) ## OLS-based MOSUM monitoring process plot(mona$mefp, functional = NULL) ## the pattern in the running mean of residuals ## this illustrates the empirical fluctuation process ## and the significance of the detected break. 
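## Sketch, not from the original manual: 'history' can also be a custom function.
## Per the Details above it is called as history(formula, data) and must return a
## numeric start date; here we assume (an assumption, not documented API) that
## 'data' is the bfastpp-preprocessed data frame with a 'time' column, see ?bfastpp.
fixedHistory <- function(formula, data) {
  max(min(data$time), max(data$time) - 4)  # use at most the last four years
}
## illustrative only, uncomment to try:
## bfastmonitor(NDVIa, start = c(2010, 13), history = fixedHistory)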
NDVIb <- as.ts(zoo(som$NDVI.b, som$Time)) plot(NDVIb) monb <- bfastmonitor(NDVIb, start = c(2010, 13)) monb plot(monb) summary(monb$model) plot(monb$mefp, functional = NULL) ## set the stable history period manually and use a 4th order harmonic model bfastmonitor(NDVIb, start = c(2010, 13), history = c(2008, 7), order = 4, plot = TRUE) ## just use a 6th order harmonic model without trend mon <- bfastmonitor(NDVIb, formula = response ~ harmon, start = c(2010, 13), order = 6, plot = TRUE) summary(mon$model) AIC(mon$model) ## use a custom number of seasonal dummies (11/yr) instead of harmonics mon <- bfastmonitor(NDVIb, formula = response ~ season, start = c(2010, 13), sbins = 11, plot = TRUE) summary(mon$model) AIC(mon$model) ## Example for processing raster bricks (satellite image time series of 16-day NDVI images) f <- system.file("extdata/modisraster.grd", package = "bfast") library("raster") modisbrick <- raster::brick(f) data <- as.vector(modisbrick[1]) ndvi <- bfastts(data, dates, type = c("16-day")) plot(ndvi/10000) ## derive median NDVI of a NDVI raster brick medianNDVI <- raster::calc(modisbrick, fun = function(x) median(x, na.rm = TRUE)) raster::plot(medianNDVI) ## helper function to be used with the calc() function xbfastmonitor <- function(x, timestamps = dates) { ndvi <- bfastts(x, timestamps, type = c("16-day")) ndvi <- window(ndvi, end = c(2011, 14))/10000 ## delete end of the time to obtain a dataset similar to RSE paper (Verbesselt et al.,2012) bfm <- bfastmonitor(data = ndvi, start = c(2010, 12), history = c("ROC")) return(c(breakpoint = bfm$breakpoint, magnitude = bfm$magnitude)) } ## apply on one pixel for testing ndvi <- bfastts(as.numeric(modisbrick[1])/10000, dates, type = c("16-day")) plot(ndvi) bfm <- bfastmonitor(data = ndvi, start = c(2010, 12), history = c("ROC")) bfm$magnitude plot(bfm) xbfastmonitor(modisbrick[1], dates) ## helper function applied on one pixel ## apply the bfastmonitor function onto a raster brick timeofbreak <- raster::calc(modisbrick, fun=xbfastmonitor) raster::plot(timeofbreak) ## time of break and magnitude of change raster::plot(timeofbreak,2) ## magnitude of change bfastpp Time Series Preprocessing for BFAST-Type Models Description Time series preprocessing for subsequent regression modeling. Based on a (seasonal) time series, a data frame with the response, seasonal terms, a trend term, (seasonal) autoregressive terms, and covariates is computed. This can subsequently be employed in regression models. Usage bfastpp( data, order = 3, lag = NULL, slag = NULL, na.action = na.omit, stl = c("none", "trend", "seasonal", "both"), decomp = c("stl", "stlplus"), sbins = 1 ) Arguments data A time series of class ts, or another object that can be coerced to such. For seasonal components, a frequency greater than 1 is required. order numeric. Order of the harmonic term, defaulting to 3. lag numeric. Orders of the autoregressive term, by default omitted. slag numeric. Orders of the seasonal autoregressive term, by default omitted. na.action function for handling NAs in the data (after all other preprocessing). stl character. Prior to all other preprocessing, STL (season-trend decomposition via LOESS smoothing) can be employed for trend-adjustment and/or season- adjustment. The "trend" or "seasonal" component or both from stl are re- moved from each column in data. By default ("none"), no STL adjustment is used. 
decomp "stlplus" or "stl": use the NA-tolerant decomposition package or the reference package (which can make use of time series with 2-3 observations per year) sbins numeric. Controls the number of seasonal dummies. If integer > 1, sets the number of seasonal dummies to use per year. If <= 1, treated as a multiplier to the number of observations per year, i.e. ndummies = nobs/year * sbins. Details To facilitate (linear) regression models of time series data, bfastpp facilitates preprocessing and setting up regressor terms. It returns a data.frame containing the first column of the data as the response while further columns (if any) are used as covariates xreg. Additionally, a linear trend, seasonal dummies, harmonic seasonal terms, and (seasonal) autoregressive terms are provided. Optionally, each column of data can be seasonally adjusted and/or trend-adjusted via STL (season- trend decomposition via LOESS smoothing) prior to preprocessing. The idea would be to capture season and/or trend nonparametrically prior to regression modelling. Value If no formula is provided, bfastpp returns a "data.frame" with the following variables (some of which may be matrices). time numeric vector of time stamps, response response vector (first column of data), trend linear time trend (running from 1 to number of observations), season factor indicating season period, harmon harmonic seasonal terms (of specified order), lag autoregressive terms (or orders lag, if any), slag seasonal autoregressive terms (or orders slag, if any), xreg covariate regressor (all columns of data except the first, if any). If a formula is given, bfastpp returns a list with components X, y, and t, where X is the design matrix of the model, y is the response vector, and t represents the time of observations. X will only contain variables that occur in the formula. Columns of X have names as decribed above. Author(s) <NAME> References <NAME>, <NAME>, <NAME> (2012). “Near real-time disturbance detection using satellite image time series.” Remote Sensing of Environment, 123, 98–108. ISSN 0034-4257, doi: 10.1016/ j.rse.2012.02.022, https://doi.org/10.1016/j.rse.2012.02.022. See Also bfastmonitor Examples ## set up time series ndvi <- as.ts(zoo::zoo(cbind(a = som$NDVI.a, b = som$NDVI.b), som$Time)) ndvi <- window(ndvi, start = c(2006, 1), end = c(2009, 23)) ## parametric season-trend model d1 <- bfastpp(ndvi, order = 2) d1lm <- lm(response ~ trend + harmon, data = d1) summary(d1lm) # plot visually (except season, as it's a factor) plot(zoo::read.zoo(d1)[,-3], # Avoid clipping plots for pretty output ylim = list(c(min(d1[,2]), max(d1[,2])), c(min(d1[,3]), max(d1[,3])), c(-1, 1), c(-1, 1), c(-1, 1), c(-1, 1), c(min(d1[,6]), max(d1[,6])) )) ## autoregressive model (after nonparametric season-trend adjustment) d2 <- bfastpp(ndvi, stl = "both", lag = 1:2) d2lm <- lm(response ~ lag, data = d2) summary(d2lm) ## use the lower level lm.fit function d3 <- bfastpp(ndvi, stl = "both", lag = 1:2) d3mm <- model.matrix(response ~ lag, d3) d3lm <- lm.fit(d3mm, d3$response) d3lm$coefficients bfastts Create a regular time series object by combining data and date infor- mation Description Create a regular time series object by combining measurements (data) and time (dates) information. Usage bfastts(data, dates, type = c("irregular", "16-day", "10-day")) Arguments data A data vector or matrix where columns represent variables dates Optional input of dates for each measurement in the ’data’ variable. 
In case the data is a irregular time series, a vector with ’dates’ for each measurement can be supplied using this ’dates’ variable. The irregular data will be linked with the dates vector to create daily regular time series with a frequency = 365. Extra days in leap years might cause problems. Please be careful using this option as it is experimental. Feedback is welcome. type ("irregular") indicates that the data is collected at irregular dates and as such will be converted to a daily time series. ("16-day") indicates that data is col- lected at a regular time interval (every 16-days e.g. like the MODIS 16-day data products). ("10-day") indicates that data is collected at a 10-day time interval of the SPOT VEGETATION (S10) product. Warning: Only use this function for the SPOT VEGETATION S10 time series, as for other 10-day time series a different approach might be required. Details bfastts create a regular time series Value bfastts returns an object of class "ts", i.e., a list with components as follows. zz a regular "ts" time series with a frequency equal to 365 or 23 i.e. 16-day time series. Author(s) <NAME>, <NAME> See Also monitor, mefp, breakpoints Examples # 16-day time series (i.e. MODIS) timedf <- data.frame(y = som$NDVI.b, dates = dates[1:nrow(som)]) bfastts(timedf$y, timedf$dates, type = "16-day") # Irregular head(bfastts(timedf$y, timedf$dates, type = "irregular"), 50) ## Not run: # Example of use with a raster library("raster") f <- system.file("extdata/modisraster.grd", package="bfast") modisbrick <- brick(f) ndvi <- bfastts(as.vector(modisbrick[1]), dates, type = c("16-day")) ## data of pixel 1 plot(ndvi/10000) # Time series of 4 pixels modis_ts = t(as.data.frame(modisbrick))[,1:4] # Data with multiple columns, 2-4 are external regressors ndvi <- bfastts(modis_ts, dates, type = c("16-day")) plot(ndvi/10000) ## End(Not run) create16dayts-deprecated A helper function to create time series Description A deprecated alias to bfastts. Please use bfastts(type="16-day") instead. Usage create16dayts(data, dates) Arguments data Passed to bfastts. dates Passed to bfastts. Author(s) <NAME>, <NAME> See Also bfastmonitor bfast-deprecated dates A vector with date information (a Datum type) to be linked with each NDVI layer within the modis raster brick (modisraster data set) Description dates is an object of class "Date" and contains the "Date" information to create a 16-day time series object. Source <NAME>, <NAME>, <NAME> (2012). “Near real-time disturbance detection using satellite image time series.” Remote Sensing of Environment, 123, 98–108. ISSN 0034-4257, doi: 10.1016/ j.rse.2012.02.022, https://doi.org/10.1016/j.rse.2012.02.022. Examples ## see ?bfastmonitor for examples harvest 16-day NDVI time series for a Pinus radiata plantation. Description A univariate time series object of class "ts". Frequency is set to 23 – the approximate number of observations per year. Source <NAME>, <NAME>, <NAME>, <NAME> (2010). “Detecting trend and seasonal changes in satellite image time series.” Remote Sensing of Environment, 114(1), 106–115. ISSN 0034-4257, doi: 10.1016/j.rse.2009.08.014, https://doi.org/10.1016/j.rse.2009.08.014. Examples plot(harvest,ylab='NDVI') # References citation("bfast") modisraster A raster brick of 16-day satellite image NDVI time series for a small subset in south eastern Somalia. Description A raster brick containing 16-day NDVI satellite images (MOD13C1 product). Source <NAME>, <NAME>, <NAME> (2012). 
“Near real-time disturbance detection using satellite image time series.” Remote Sensing of Environment, 123, 98–108. ISSN 0034-4257, doi: 10.1016/j.rse.2012.02.022, https://doi.org/10.1016/j.rse.2012.02.022.

Examples
## see ?bfastmonitor

ndvi A random NDVI time series

Description
A univariate time series object of class "ts". Frequency is set to 24.

Examples
plot(ndvi)

plot.bfast Methods for objects of class "bfast".

Description
Plot methods for objects of class "bfast".

Usage
## S3 method for class 'bfast'
plot(
  x,
  type = c("components", "all", "data", "seasonal", "trend", "noise"),
  sim = NULL,
  largest = FALSE,
  main,
  ANOVA = FALSE,
  ...
)

Arguments
x bfast object
type Indicates the type of plot. See details.
sim Optional stl object containing the original components used when simulating x.
largest If TRUE, show the largest jump in the trend component.
main an overall title for the plot.
ANOVA if TRUE, derive slope and significance values for each identified trend segment.
... further arguments passed to the plot function.

Details
This function creates various plots to demonstrate the results of a bfast decomposition. The type of plot shown depends on the value of type.
• components Shows the final estimated components with breakpoints.
• all Plots the estimated components and breakpoints from all iterations.
• data Just plots the original time series data.
• seasonal Shows the seasonal component including breakpoints.
• trend Shows the trend component including breakpoints.
• noise Plots the noise component along with its acf and pacf.
If sim is not NULL, the components used in simulation are also shown on each graph.

Value
No return value, called for side effects.

Author(s)
<NAME>, <NAME> and <NAME>

Examples
## See \code{\link[bfast]{bfast}} for examples.

plot.bfastlite Plot the time series and results of BFAST Lite

Description
The black line represents the original input data, the green line is the fitted model, the blue lines are the detected breaks, and the whiskers denote the magnitude (if magstat is specified).

Usage
## S3 method for class 'bfastlite'
plot(x, breaks = NULL, magstat = NULL, magcomp = "trend", ...)

Arguments
x bfastlite object from bfastlite()
breaks number of breaks or optimal break selection method, see strucchangeRcpp::breakpoints()
magstat name of the magnitude column to plot (e.g. RMSD, MAD, diff), see the Mag component of strucchangeRcpp::magnitude.breakpointsfull()
magcomp name of the component (i.e. column in x$data_pp) to plot magnitudes of
... other parameters to pass to plot()

Value
Nothing, called for side effects.

setoptions Set package options with regard to computation times

Description
These functions set options of the bfast and strucchangeRcpp packages to enable faster computations. By default (set_default_options), these optimizations are enabled. Notice that only some functions of the bfast package make use of these options. set_fast_options is an alias for set_default_options.

Usage
set_default_options()
set_fast_options()
set_fallback_options()

Value
A list of modified options and their new values.
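A minimal sketch (not part of the package manual), assuming the three names listed under "Package options" in the bfast-package help page are ordinary R options that can be read with getOption() and set with options():

## inspect the current values
getOption("bfast.prefer_matrix_methods")
getOption("bfast.use_bfastts_modifications")
getOption("strucchange.use_armadillo")
## e.g. disable only the design-matrix shortcut while debugging a model fit
options(bfast.prefer_matrix_methods = FALSE)

set_fallback_options() remains the convenient way to switch all of them off at once.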
Examples
# run bfastmonitor with different options and compare computation times
library(zoo)
NDVIa <- as.ts(zoo(som$NDVI.a, som$Time))

set_default_options()
## Not run:
system.time(replicate(100, bfastmonitor(NDVIa, start = c(2010, 13))))
## End(Not run)

set_fallback_options()
## Not run:
system.time(replicate(100, bfastmonitor(NDVIa, start = c(2010, 13))))
## End(Not run)

simts Simulated seasonal 16-day NDVI time series

Description
simts is an object of class "stl" and consists of seasonal, trend (equal to 0) and noise components. The simulated noise is typical for remotely sensed satellite data.

Source
<NAME>, <NAME>, <NAME>, <NAME> (2010). “Detecting trend and seasonal changes in satellite image time series.” Remote Sensing of Environment, 114(1), 106–115. ISSN 0034-4257, doi: 10.1016/j.rse.2009.08.014, https://doi.org/10.1016/j.rse.2009.08.014.

Examples
plot(simts)
# References
citation("bfast")

som Two 16-day NDVI time series from the south of Somalia

Description
som is a data frame containing time and two NDVI time series to illustrate how the monitoring approach works.

Source
<NAME>, <NAME>, <NAME> (2012). “Near real-time disturbance detection using satellite image time series.” Remote Sensing of Environment, 123, 98–108. ISSN 0034-4257, doi: 10.1016/j.rse.2012.02.022, https://doi.org/10.1016/j.rse.2012.02.022.

Examples
## first define the data as a regular time series (i.e. ts object)
library(zoo)
NDVI <- as.ts(zoo(som$NDVI.b, som$Time))
plot(NDVI)
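## A short continuation, not part of the original data page: feed the series to
## bfastmonitor, mirroring the call used in the ?bfastmonitor examples above
mon <- bfastmonitor(NDVI, start = c(2010, 13))
plot(mon)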
tapper_absinthe_plug
hex
Erlang
Tapper Absinthe Integration
===

Works in concert with [`Tapper.Plug.Trace`](https://github.com/Financial-Times/tapper_plug) to propagate the Tapper Id into the Absinthe context.

[![Hex pm](http://img.shields.io/hexpm/v/tapper_absinthe_plug.svg?style=flat)](https://hex.pm/packages/tapper_absinthe_plug) [![Inline docs](http://inch-ci.org/github/Financial-Times/tapper_absinthe_plug.svg)](http://inch-ci.org/github/Financial-Times/tapper_absinthe_plug) [![Build Status](https://travis-ci.org/Financial-Times/tapper_absinthe_plug.svg?branch=master)](https://travis-ci.org/Financial-Times/tapper_absinthe_plug)

Synopsis
---

Using this plug, you can access the Tapper Id via a resolver’s `info` (`%Absinthe.Resolution{}`) parameter, using [`Tapper.Plug.Absinthe.get/1`](Tapper.Plug.Absinthe.html#get/1).

In your router:

```
plug Tapper.Plug.Trace     # pick up the trace
plug Tapper.Plug.Absinthe  # copy the id into the Absinthe context
```

In your resolver:

```
def resolve(args, info) do
  # pick up the id from info.context
  tapper_id = Tapper.Plug.Absinthe.get(info)

  tapper_id = Tapper.start_span(tapper_id, name: "my-resolver") # etc.
  ...
  Tapper.finish_span(tapper_id)
end
```

### See also

* [Absinthe Guide - Context and Authentication](http://absinthe-graphql.org/guides/context-and-authentication/)
* [`Tapper.Plug`](https://github.com/Financial-Times/tapper_plug)

The API documentation can be found at <https://hexdocs.pm/tapper_absinthe_plug>.

Helpers
---

Since you’ll probably want to wrap a span around every resolver call, we provide [`Tapper.Absinthe.Helper.in_span/2`](Tapper.Absinthe.Helper.html#in_span/2), which wraps a new child span around a function call.

Using this in your Absinthe schema or type definition looks something like:

```
import Tapper.Absinthe.Helper, only: [in_span: 2]

query do
  @desc "Get a Thing by UUID"
  field :thing, type: :thing do
    @desc "A Thing UUID"
    arg :id, non_null(:id)

    resolve fn(%{id: thing_id}, info) ->
      in_span(info, fn(tapper_id) ->
        # call real resolver function, passing %Tapper.Id{} etc.
        MyApp.ThingResolver.thing(thing_id, tapper_id)
      end)
    end
  end
end
```

When the resolver is called, `in_span/2` will start a child span using the `Tapper.Id` from the Absinthe context, apply the function, passing it the child span id so you can add annotations and pass it on to other functions, and return the result. It will take care of finishing the child span, even in exception situations. The name of the child span will be the schema node name, in this case `thing`.

Installation
---

For the latest pre-release (and unstable) code, add the github repo to your mix dependencies:

```
def deps do
  [{:tapper_absinthe_plug, github: "Financial-Times/tapper_absinthe_plug"}]
end
```

For release versions, the package can be installed by adding `tapper_absinthe_plug` to your list of dependencies in `mix.exs`:

```
def deps do
  [{:tapper_absinthe_plug, "~> 0.2"}]
end
```

Ensure that the `:tapper` application is present in your mix project’s applications:

```
# Configuration for the OTP application.
#
# Type `mix help compile.app` for more information.
def application do
  [
    mod: {MyApp, []},
    applications: [:tapper]
  ]
end
```

Tapper Absinthe Plug v0.2.1

Tapper.Absinthe.Helper
===

Support functions for using Tapper in Absinthe.
Summary
===

[Types](#types)
---

[resolver_ret()](#t:resolver_ret/0)

[Functions](#functions)
---

[in_span(info, fun)](#in_span/2) Wrap a resolver function call in a span

Types
===

resolver_ret()

```
resolver_ret() :: {:ok, any()} | {:error, any()}
```

Functions
===

in_span(info, fun)

```
in_span(info :: [Absinthe.Resolution.t](https://hexdocs.pm/absinthe/1.3.0/Absinthe.Resolution.html#t:t/0)(), fun :: ([Tapper.Id.t](https://hexdocs.pm/tapper/0.2.0/Tapper.Id.html#t:t/0)() -> [resolver_ret](#t:resolver_ret/0)())) :: [resolver_ret](#t:resolver_ret/0)()
```

Wrap a resolver function call in a span, e.g.

```
import Tapper.Absinthe.Helper, only: [in_span: 2]

query do
  @desc "Get a Thing by UUID"
  field :thing, type: :thing do
    @desc "A Thing UUID"
    arg :id, non_null(:id)

    resolve fn(%{id: thing_id}, info) ->
      in_span(info, fn(tapper_id) ->
        # call real resolver function, passing %Tapper.Id{} etc.
        MyApp.ThingResolver.thing(thing_id, tapper_id)
      end)
    end
  end
end
```

Tapper Absinthe Plug v0.2.1

Tapper.Plug.Absinthe
===

Works in concert with [`Tapper.Plug.Trace`](https://github.com/Financial-Times/tapper_plug) to propagate the Tapper Id into the Absinthe context.

You can then access the Tapper id via a resolver’s `info` (`%Absinthe.Resolution{}`) parameter, using this module’s [`get/1`](#get/1) function.

In your router:

```
plug Tapper.Plug.Trace     # pick up the trace
plug Tapper.Plug.Absinthe  # copy the id into the Absinthe context
```

In your resolver:

```
def resolve(args, info) do
  # pick up the id from info.context
  tapper_id = Tapper.Plug.Absinthe.get(info)

  tapper_id = Tapper.start_span(tapper_id, name: "my-resolver") # etc.
  ...
  Tapper.finish_span(tapper_id)
end
```

Summary
===

[Functions](#functions)
---

[call(conn, config)](#call/2) Callback implementation for [`Plug.call/2`](https://hexdocs.pm/plug/1.3.5/Plug.html#c:call/2)

[get(info)](#get/1) Get the tapper id from the Absinthe Resolution info if present, or return `:ignore`

[init(opts)](#init/1) Callback implementation for [`Plug.init/1`](https://hexdocs.pm/plug/1.3.5/Plug.html#c:init/1)

Functions
===

call(conn, config)

```
call(conn :: [Plug.Conn.t](https://hexdocs.pm/plug/1.3.5/Plug.Conn.html#t:t/0)(), config :: any()) :: [Plug.Conn.t](https://hexdocs.pm/plug/1.3.5/Plug.Conn.html#t:t/0)()
```

Callback implementation for [`Plug.call/2`](https://hexdocs.pm/plug/1.3.5/Plug.html#c:call/2).

get(info)

```
get(resolution :: %Absinthe.Resolution{acc: term(), adapter: term(), arguments: term(), context: term(), definition: term(), errors: term(), extensions: term(), middleware: term(), parent_type: term(), path: term(), private: term(), root_value: term(), schema: term(), source: term(), state: term(), value: term()} | map()) :: [Tapper.Id.t](https://hexdocs.pm/tapper/0.2.0/Tapper.Id.html#t:t/0)()
```

Get the tapper id from the Absinthe Resolution info if present, or return `:ignore`.

init(opts)

Callback implementation for [`Plug.init/1`](https://hexdocs.pm/plug/1.3.5/Plug.html#c:init/1).
eve-srp
readthedoc
Unknown
EVE SRP Documentation
Release
<NAME>
October 04, 2017

Contents: 1.1 Quick Start, 1.2 External API, 2.1 Authentication, 2.2 Killmail Handling, 2.3 Views, 2.4 Models, 2.5 Javascript

EVE-SRP is designed to facilitate a ship replacement (SRP) or reimbursement program in the game EVE Online. It features a pluggable authentication setup so it can integrate with existing authentication systems, and comes with built-in support for TEST Alliance’s Auth and Brave’s Core systems. It also features a configurable killmail source system, with built-in support for zKillboard-based killboards and the recent ESI killmail endpoint. Again, this is an extensible system, so if you have a custom killboard, as long as there’s some sort of programmatic access, you can probably write a custom adapter.

For the users, EVE-SRP offers quick submission and an easy way to check your pending SRP requests. On the administrative side, EVE-SRP uses the concept of divisions, with different users and groups of users being able to submit requests, review them (set payouts and approve or reject requests), and finally pay out approved requests. This separation allows the labor-intensive, low-risk task of evaluating requests to be kept apart from the high privilege of paying out requests from a central wallet. This also means different groups can have different reviewing and paying teams. For example, you may wish for capital losses to be reviewed by a special team that is aware of your capital group’s fitting requirements, and in lieu of payouts you may have someone hand out replacement hulls.

CHAPTER 1
User Guide

Quick Start

Logging in and Submitting Requests

When you first access a website running EVE-SRP, you will be asked to log in. Select the appropriate login option if you are presented with multiple choices, enter your credentials and log in. Once you have logged in, you will be able to see which reimbursement divisions you have been granted permissions in, as well as all of the requests you have submitted.

To submit a request, click the “Submit” button at the top of the screen. The button will only be present if you have been granted the submit privilege within a division. In the form, enter a killmail URL and any details your organization normally requires. What kinds of killmail URLs are acceptable is up to your organization, but common choices are zKillboard-based killboards or CREST killmail URLs from an in-game killmail. Click the “Submit” button once you are done entering the information.

You will be redirected to the request detail page once you have submitted your request. Via this page you can add comments for reviewers, or update the details to correct problems.

Reviewing Requests

If you have the review permission in a division and are logged in, you can click on the “Pending” link at the top of the screen to see a list of requests that are not in a final (paid or rejected) state, and are thus able to be reviewed. The number of requests that are in the “Evaluating” state is displayed in the number badge next to the “Pending” button. In the list of requests, unevaluated requests have a yellow background, incomplete and rejected requests have a red background, approved (pending payout) requests have a blue one, and paid requests have a green background. To open a request, click the Request ID link (blue text).
In addition to the controls available to a normal user, reviewers have a few extra controls available. The base payout can be set by entering a value (in millions of ISK) and clicking the “Set” button.

To apply bonuses and/or deductions, enter an amount in the “Add Modifier” form, enter a reason for the modifier, and then select the type of modifier from the dropdown button labeled “Type”. Absolute modifiers (adding or subtracting a set amount of ISK) are applied first, followed by percentage deductions/bonuses.

If you make a mistake on a modifier and the request is still in the evaluating state, you can void the modifier by clicking the small “X”. Once you have applied all the modifiers you want/need, you can change the status of the request using the same interface used for commenting. Enter a reason for the status change in the comment box, click the dropdown button to the right of the “Comment” button, and finally click the new status you want applied. If you missed something and need to add or void a modifier, or even change the base payout, you can set approved (but not yet paid) requests back to evaluating.

Paying Out Requests

If you have the payer permission for a division, you can mark requests as paid. Typically this is handled by someone with access to the in-game wallet used to hold the SRP money. The number of requests pending payout is displayed in the number badge to the right of the “Pay Outs” button. This button is only visible if you have the payer permission. Click the button to see a list of approved requests.

This list tries to make paying out requests as quick as possible. Clicking one of the white buttons (under the “Pilot”, “Request ID (copy)”, or “Payout” columns) will copy the text within to your clipboard, making it quicker to enter the information in-game. The clipboard functionality requires Flash, so it should be done using an out-of-game browser. The workflow should be something like this:
1. Copy the pilot name from the app using a standard web browser.
2. Paste the name in a search box for transferring money (either from a corp wallet or a personal wallet). Select the user and bring up the Give/Transfer ISK dialog box.
3. Copy the payout amount from the app.
4. Paste the payout amount into the amount box in-game.
5. Copy the request ID from the app.
6. Paste the request ID into the reason box in-game. Click the OK button to transfer the money.
7. Once the transfer has completed, click the green “Paid” button. This will mark the request as paid.
If you need to go back and fix something in a request, or to review requests beforehand, you can click the request ID text (the blue link).

Administering Divisions

A fresh installation of EVE-SRP will not have any divisions configured, so one of the first actions after installation should be to configure divisions. If you have either the site administrator permission or the administrator permission in a division, you will have an “Admin” button at the top of the screen. Clicking it will list all of the divisions you can administer. If you are a site administrator you will also see a button for creating divisions. To add a division, click the “Add Division” button, enter a name on the form, then click the “Create Division” button.
After creating a new division or clicking one of the links in the division listing, you will see the administration page for that division. To grant a permission to a user or group, start typing the name of that user or group in the text box corresponding to that permission. The app will autocomplete the name if it knows about it (i.e. if they have logged in before, or a user in that group has logged in before). Either click the correct entry, or finish typing it out and click the "Add" button. To revoke privileges from a user or group, click the "X" in the "Remove" column.

Divisions can be configured so that certain request attributes are turned into links. This is covered in more detail in the (TODO) transformers section.

External API

EVE-SRP provides external applications with read-only access to the requests you can access. Responses can be formatted as XML or JSON depending on your requirements. The URLs for the API are the same ones you access normally in a web browser, just in a different format.

API Keys

The first step to using the external API is to create an API key. Click the "Create API Key" button, and a key will be generated. You can revoke API keys at any time by clicking the "X" in the "Remove" column. The key is the string of letters and numbers, and can be copied to your clipboard by clicking on its button (requires Flash). To use the API key, provide it as a parameter in the query string along with the desired format. The parameter name for the key is apikey and the field name for the format is fmt; valid values are json or xml.

Lists of Requests

You can retrieve lists of up to 200 requests per page through the API. Filtering and sorting options are applied the same way they are when viewing the lists as HTML. In addition to the personal, pending, pay and completed lists exposed in the UI, there is an all route that will list all requests you have access to. As with the other lists that show requests other than your own, you must have a permission greater than 'submitter' granted to you in a division to access those lists.
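The short sketch below (not part of the original manual) shows how the apikey and fmt query-string parameters fit together when calling the API from Python with the third-party requests library. The base URL and key are placeholders; substitute your own installation's address and a key generated on your API Keys page. The JSON keys used at the end match the example response shown next.

import requests

BASE_URL = "http://example.com"        # placeholder installation address
API_KEY = "your-api-key-here"          # placeholder API key

def get_personal_requests(fmt="json"):
    """Fetch the list of your own requests in the requested format."""
    resp = requests.get(
        BASE_URL + "/request/personal/",
        params={"apikey": API_KEY, "fmt": fmt},
    )
    resp.raise_for_status()
    return resp.json() if fmt == "json" else resp.text

if __name__ == "__main__":
    data = get_personal_requests()
    for req in data["requests"]:
        print(req["id"], req["pilot"], req["payout_str"])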
JSON

In response to http://example.com/request/personal/?apikey=<KEY>:

{
  "api_keys": [
    {"id": 6, "key": "<KEY>", "timestamp": "Thu, 10 Jul 2014 06:48:51 GMT"}
  ],
  "requests": [
    {
      "alliance": "Test Alliance Please Ignore",
      "base_payout": "40000000.00",
      "base_payout_str": "40,000,000.00",
      "corporation": "Dreddit",
      "details": "I literally forgot how to broadcast for armor.",
      "division": {"href": "/api/division/1/", "id": 1, "name": "Test Alliance"},
      "href": "/request/39861569/",
      "id": 39861569,
      "kill_timestamp": "Wed, 02 Jul 2014 19:26:00 GMT",
      "killmail_url": "https://zkillboard.com/kill/39861569/",
      "payout": "40000000.00",
      "payout_str": "40,000,000.00",
      "pilot": "Paxswill",
      "status": "paid",
      "submit_timestamp": "Wed, 09 Jul 2014 19:43:58 GMT",
      "submitter": {"href": "/api/user/1/", "id": 1, "name": "paxswill"}
    },
    {
      "alliance": "Test Alliance Please Ignore",
      "base_payout": "9158708.44",
      "base_payout_str": "9,158,708.44",
      "corporation": "Dreddit",
      "details": "crest mail?",
      "division": {"href": "/api/division/1/", "id": 1, "name": "Test Alliance"},
      "href": "/request/39697412/",
      "id": 39697412,
      "kill_timestamp": "Mon, 23 Jun 2014 16:06:00 GMT",
      "killmail_url": "https://zkillboard.com/kill/39697412/",
      "payout": "9158708.44",
      "payout_str": "9,158,708.44",
      "pilot": "Paxswill",
      "status": "paid",
      "submit_timestamp": "Wed, 09 Jul 2014 09:12:19 GMT",
      "submitter": {"href": "/api/user/1/", "id": 1, "name": "paxswill"}
    }
  ]
}

XML

In response to http://example.com/request/personal/?apikey=<KEY>&fmt=xml:

<?xml version="1.0" encoding="UTF-8" ?>
<response user="paxswill">
  <apikeys>
    <apikey id="6">
      <key><KEY></key>
      <timestamp>2014-07-10T06:48:51.167054</timestamp>
    </apikey>
  </apikeys>
  <requests>
    <request id="39861569" status="paid">
      <payout>
        <base pretty="40,000,000.00">40000000.00</base>
        <computed pretty="40,000,000.00">40000000.00</computed>
      </payout>
      <details>I literally forgot how to broadcast for armor.</details>
      <pilot>
        <alliance>Test Alliance Please Ignore</alliance>
        <corporation>Dreddit</corporation>
        <name>Paxswill</name>
      </pilot>
      <submit-timestamp>2014-07-09T19:43:58.126158</submit-timestamp>
      <kill-timestamp>2014-07-02T19:26:00</kill-timestamp>
      <division id="1" name="Test Alliance" />
      <submitter id="1" name="paxswill" />
      <killmail-url>https://zkillboard.com/kill/39861569/</killmail-url>
      <url>/request/39861569/</url>
      <ship>Guardian</ship>
      <location>
        <system>WD-VTV</system>
        <constellation>UX3-N2</constellation>
        <region>Catch</region>
      </location>
    </request>
    <request id="39697412" status="paid">
      <payout>
        <base pretty="9,158,708.44">9158708.44</base>
        <computed pretty="9,158,708.44">9158708.44</computed>
      </payout>
      <details>crest mail?</details>
      <pilot>
        <alliance>Test Alliance Please Ignore</alliance>
        <corporation>Dreddit</corporation>
        <name>Paxswill</name>
      </pilot>
      <submit-timestamp>2014-07-09T09:12:19.250893</submit-timestamp>
      <kill-timestamp>2014-06-23T16:06:00</kill-timestamp>
      <division id="1" name="Test Alliance" />
      <submitter id="1" name="paxswill" />
      <killmail-url>https://zkillboard.com/kill/39697412/</killmail-url>
      <url>/request/39697412/</url>
      <ship>Tristan</ship>
      <location>
        <system>Hikkoken</system>
        <constellation>Ishaga</constellation>
        <region>Black Rise</region>
      </location>
    </request>
  </requests>
</response>

RSS

An RSS feed for requests in a list is available by adding /rss.xml to the end of a list URL.
For example, the URL for the feed of pending requests would be http://example.com/request/pending/rss.xml?apikey=<KEY>

Request Details

If you need details beyond those provided in the lists of requests, or want to look up information on a specific request, you can access a request's URL through the API. For example, the request for killmail #39861569 in JSON format could be retrieved with the URL http://example.com/request/39861569/?apikey=<KEY>&fmt=json. The path for an individual request is also returned as part of the response in request listings.

JSON

{
  "actions": [
    {
      "id": 2,
      "note": "",
      "timestamp": "Thu, 10 Jul 2014 06:37:09 GMT",
      "type": "paid",
      "user": {"href": "/api/user/1/", "id": 1, "name": "paxswill"}
    },
    {
      "id": 1,
      "note": "Good to go.",
      "timestamp": "Wed, 09 Jul 2014 19:58:56 GMT",
      "type": "approved",
      "user": {"href": "/api/user/1/", "id": 1, "name": "paxswill"}
    }
  ],
  "alliance": "Test Alliance Please Ignore",
  "base_payout": "40000000.00",
  "base_payout_str": "40,000,000.00",
  "corporation": "Dreddit",
  "current_user": {"href": "/api/user/1/", "id": 1, "name": "paxswill"},
  "details": "I literally forgot how to broadcast for armor.",
  "division": {"href": "/api/division/1/", "id": 1, "name": "Test Alliance"},
  "href": "/request/39861569/",
  "id": 39861569,
  "kill_timestamp": "Wed, 02 Jul 2014 19:26:00 GMT",
  "killmail_url": "https://zkillboard.com/kill/39861569/",
  "modifiers": [
    {
      "id": 1,
      "note": "You're awesome!",
      "timestamp": "Wed, 09 Jul 2014 19:50:10 GMT",
      "user": {"href": "/api/user/1/", "id": 1, "name": "paxswill"},
      "value": 0.15,
      "value_str": "15.0% bonus",
      "void": {
        "timestamp": "Wed, 09 Jul 2014 19:58:00 GMT",
        "user": {"href": "/api/user/1/", "id": 1, "name": "paxswill"}
      }
    }
  ],
  "payout": "40000000.00",
  "payout_str": "40,000,000.00",
  "pilot": "Paxswill",
  "status": "paid",
  "submit_timestamp": "Wed, 09 Jul 2014 19:43:58 GMT",
  "submitter": {"href": "/api/user/1/", "id": 1, "name": "paxswill"},
  "valid_actions": ["approved", "evaluating"]
}

XML

<?xml version="1.0" encoding="UTF-8" ?>
<response user="paxswill">
  <request id="39861569" status="paid">
    <payout>
      <base pretty="40,000,000.00">40000000.00</base>
      <computed pretty="40,000,000.00">40000000.00</computed>
    </payout>
    <details>I literally forgot how to broadcast for armor.</details>
    <pilot>
      <alliance>Test Alliance Please Ignore</alliance>
      <corporation>Dreddit</corporation>
      <name>Paxswill</name>
    </pilot>
    <submit-timestamp>2014-07-09T19:43:58.126158</submit-timestamp>
    <kill-timestamp>2014-07-02T19:26:00</kill-timestamp>
    <division id="1" name="Test Alliance" />
    <submitter id="1" name="paxswill" />
    <killmail-url>https://zkillboard.com/kill/39861569/</killmail-url>
    <url>/request/39861569/</url>
    <ship>Guardian</ship>
    <location>
      <system>WD-VTV</system>
      <constellation>UX3-N2</constellation>
      <region>Catch</region>
    </location>
    <actions>
      <action id="2" type="paid">
        <note></note>
        <timestamp>2014-07-10T06:37:09.242568</timestamp>
        <user id="1" name="paxswill" />
      </action>
      <action id="1" type="approved">
        <note>Good to go.</note>
        <timestamp>2014-07-09T19:58:56.524278</timestamp>
        <user id="1" name="paxswill" />
      </action>
    </actions>
    <modifiers>
      <modifier id="1">
        <note>You&#39;re awesome!</note>
        <user id="1" name="paxswill" />
        <value>15.0% bonus</value>
        <timestamp>2014-07-09T19:50:10.909394</timestamp>
        <void id="1" name="paxswill">
          <timestamp>2014-07-09T19:58:00.069323</timestamp>
        </void>
      </modifier>
    </modifiers>
  </request>
</response>
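As a worked example of how the numbers in a detail response fit together (not part of the original manual): absolute modifiers are added to the base payout first, and the result is then multiplied by one plus the sum of the relative (percentage) modifiers; voided modifiers are skipped. The snippet below recomputes a payout from the JSON structure shown above. How absolute modifiers and non-voided modifiers are represented in the response is an assumption here, since the example above only contains a single, voided percentage modifier.

from decimal import Decimal

def compute_payout(request_json):
    base = Decimal(request_json["base_payout"])
    absolute = Decimal(0)
    relative = Decimal(0)
    for modifier in request_json["modifiers"]:
        if modifier.get("void"):
            continue  # voided modifiers no longer affect the payout
        value = Decimal(str(modifier["value"]))
        if "%" in modifier["value_str"]:
            relative += value   # e.g. 0.15 for a "15.0% bonus" (assumed marker)
        else:
            absolute += value   # flat ISK bonus or deduction (assumed shape)
    return (base + absolute) * (Decimal(1) + relative)

For the request above, the only modifier is voided, so the computed payout equals the base payout of 40,000,000.00 ISK, matching the "payout" field.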
Chapter 2: Developers Guide

Authentication

Authentication in EVE-SRP was designed from the start to allow for multiple different authentication systems and to make it easy to integrate with an existing authentication system. As an exercise in how to write your own authentication plugin, let's write one that doesn't rely on an external service. We'll need to subclass two classes for this: AuthMethod and User.

Let's start with subclassing User. This class is mapped to an SQL table using SQLAlchemy's declarative extension (more specifically, the Flask-SQLAlchemy plugin for Flask). The parent class automatically sets up the table name and inheritance mapper arguments for you, so all you need to do is provide the id attribute that links your class with the parent class and an attribute to store the password hash. In the example below, we're using the simple-pbkdf2 package to provide the password hashing. We also add a checking method to make life easier for us later.

import os

from evesrp import db
from evesrp.auth.models import User
from pbkdf2 import pbkdf2_bin

class LocalUser(User):
    id = db.Column(db.Integer, db.ForeignKey(User.id), primary_key=True)
    password = db.Column(db.LargeBinary(24), nullable=False)
    salt = db.Column(db.LargeBinary(24), nullable=False)

    def __init__(self, name, password, authmethod, **kwargs):
        self.salt = os.urandom(24)
        self.password = pbkdf2_bin(password.encode('utf-8'), self.salt,
                                   iterations=10000)
        super(LocalUser, self).__init__(name, authmethod, **kwargs)

    def check_password(self, password):
        key = pbkdf2_bin(password.encode('utf-8'), self.salt,
                         iterations=10000)
        # Constant-time comparison of the derived key against the stored hash.
        matched = 0
        for a, b in zip(self.password, key):
            matched |= ord(a) ^ ord(b)
        return matched == 0

AuthMethod subclasses have four methods they can implement to customize their behavior:

• AuthMethod.form() returns a Form subclass that represents the necessary fields.
• AuthMethod.login() performs the actual login process. As part of this, it is passed an instance of the class given by AuthMethod.form() with the submitted data via the form argument.
• For those authentication methods that require a secondary view/route, the AuthMethod.view() method can be implemented to handle requests made to login/safe_name, where safe_name is the output of AuthMethod.safe_name.
• Finally, the initializer can be overridden to provide a default name other than "Base Authentication" and to handle specialized configuration.
With these in mind, let's implement our AuthMethod subclass:

from evesrp import db
from evesrp.auth import AuthMethod
from flask import flash, redirect, url_for, render_template, request
from flask_wtf import Form
from sqlalchemy.orm.exc import NoResultFound
from wtforms.fields import StringField, PasswordField, SubmitField
from wtforms.validators import InputRequired, EqualTo

class LocalLoginForm(Form):
    username = StringField('Username', validators=[InputRequired()])
    password = PasswordField('Password', validators=[InputRequired()])
    submit = SubmitField('Log In')

class LocalCreateUserForm(Form):
    username = StringField('Username', validators=[InputRequired()])
    password = PasswordField('Password', validators=[InputRequired(),
            EqualTo('password_repeat', message='Passwords must match')])
    password_repeat = PasswordField('Repeat Password',
            validators=[InputRequired()])
    submit = SubmitField('Log In')

class LocalAuth(AuthMethod):
    def form(self):
        return LocalLoginForm

    def login(self, form):
        # form has already been validated, we just need to process it.
        try:
            user = LocalUser.query.filter_by(name=form.username.data).one()
        except NoResultFound:
            flash("No user found with that username.", 'error')
            return redirect(url_for('login.login'))
        if user.check_password(form.password.data):
            self.login_user(user)
            return redirect(request.args.get('next') or url_for('index'))
        else:
            flash("Incorrect password.", 'error')
            return redirect(url_for('login.login'))

    def view(self):
        form = LocalCreateUserForm()
        if form.validate_on_submit():
            # The AuthMethod's name is stored as the new user's authmethod.
            user = LocalUser(form.username.data, form.password.data, self.name)
            db.session.add(user)
            db.session.commit()
            self.login_user(user)
            return redirect(url_for('index'))
        # form.html is a template included in EVE-SRP that renders all
        # elements of a form.
        return render_template('form.html', form=form)

That's all that's necessary for a very simple AuthMethod. This example cuts some corners and isn't ready for production-level use, but it serves as a quick example of what's necessary to write a custom authentication method. Feel free to look at the sources for the included AuthMethods below to gather ideas on how to use more complicated mechanisms.

Included Authentication Methods

Brave Core

class evesrp.auth.bravecore.BraveCore(client_key, server_key, identifier, url='https://core.braveineve.com', **kwargs)
    Bases: evesrp.auth.AuthMethod

    __init__(client_key, server_key, identifier, url='https://core.braveineve.com', **kwargs)
        Authentication method using a Brave Core instance. Uses the native Core API to authenticate users. Currently only supports a single character at a time due to limitations in Core's API.
        Parameters:
        • client_key (str) – The client's private key in hex form.
        • server_key (str) – The server's public key for this app in hex form.
        • identifier (str) – The identifier for this app in Core.
        • url (str) – The URL of the Core instance to authenticate against. Default: 'https://core.braveineve.com'.
        • name (str) – The user-facing name for this authentication method. Default: 'Brave Core'.

TEST Legacy

class evesrp.auth.testauth.TestAuth(api_key=None, **kwargs)
    Bases: evesrp.auth.AuthMethod

    __init__(api_key=None, **kwargs)
        Authentication method using TEST Auth's legacy (a.k.a. v1) API.
        Parameters:
        • api_key (str) – (optional) An Auth API key. Without this, only primary characters are able to be accessed/used.
        • name (str) – The user-facing name for this authentication method. Default: 'Test Auth'.
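For reference (this snippet is not from the original documentation), constructing the included authentication methods with the parameters documented above looks like the following. The key values are placeholders, and how the resulting instances are handed to the application (normally through its configuration) is not shown here.

from evesrp.auth.bravecore import BraveCore
from evesrp.auth.testauth import TestAuth

core_auth = BraveCore(
    client_key='0123abcd...',   # placeholder: your app's private key, hex encoded
    server_key='4567efab...',   # placeholder: Core's public key for your app
    identifier='my-srp-app',    # placeholder: the identifier Core assigned to your app
)

# Without an API key, only primary characters can be used.
test_auth = TestAuth(api_key=None, name='Test Auth')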
OAuth

A number of external authentication services provide an OAuth endpoint for external applications to use with their API. To facilitate usage of these services, an OAuthMethod class has been provided for easy integration. Subclasses will need to implement the get_user(), get_pilots() and get_groups() methods. Additionally, implementations for J4LP's provider and TEST's provider are included as a reference.

class evesrp.auth.oauth.OAuthMethod(**kwargs)

    __init__(**kwargs)
        Abstract AuthMethod for OAuth-based login methods. Implementing classes need to implement get_user(), get_pilots(), and get_groups().
        In addition to the keyword arguments from AuthMethod, this initializer accepts the following arguments that will be used in the creation of the OAuthMethod.oauth object (see the documentation for OAuthRemoteApp for more details):
        • client_id
        • client_secret
        • scope
        • access_token_url
        • refresh_token_url
        • authorize_url
        • access_token_params
        • method
        As a convenience, the key and secret keyword arguments will be treated as consumer_key and consumer_secret respectively. The name argument is used both for AuthMethod and for OAuthRemoteApp. Subclasses for providers that may be used by more than one entity are encouraged to provide their own defaults for the above arguments.
        The redirect URL for derived classes is based off of the safe_name of the implementing AuthMethod, specifically the URL for view(). For example, the default redirect URL for TestOAuth is similar to https://example.com/login/test_oauth/ (note the trailing slash; it is significant).
        Parameters: default_token_expiry (int) – The default time (in seconds) access tokens are valid for. Defaults to 5 minutes.

    get_groups()
        Returns a list of Groups for the given token. Like get_user() and get_pilots(), this method is to be implemented by OAuthMethod subclasses to return a list of Groups associated with the account for the given access token.
        Return type: list of Groups

    get_pilots()
        Returns a list of Pilots for the given token. Like get_user(), this method is to be implemented by OAuthMethod subclasses to return a list of Pilots associated with the account for the given access token.
        Return type: list of Pilots

    get_user()
        Returns the OAuthUser instance for the current token. This method is to be implemented by subclasses of OAuthMethod to use whatever APIs they have access to to get the user account given an access token.
        Return type: OAuthUser

    is_admin(user)
        Returns whether this user should be treated as a site-wide administrator. The default implementation checks if the user's name is contained within the list of administrators supplied as an argument to OAuthMethod.
        Parameters: user (OAuthUser) – The user to check.
        Return type: bool

    refresh(user)
        Refreshes the current user's information. Attempts to refresh the pilots and groups for the given user. If the current access token has expired, the refresh token is used to get a new access token.

    view()
        Handle creating and/or logging in the user and updating their Pilots and Groups.

EVE SSO

class evesrp.auth.evesso.EveSSO(singularity=False, **kwargs)
    Bases: evesrp.auth.oauth.OAuthMethod

    get_groups()
        Set the user's groups for their pilot. At this time, EVE SSO only gives us character access, so the groups are simply set to the pilot's corporation and, if they have one, their alliance as well. In the future, this method may also add groups for mailing lists.
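A skeleton for a custom OAuthMethod subclass is sketched below; it is not part of the original documentation. The provider endpoints and scope are invented for illustration, and the three required methods are left as stubs because their bodies depend entirely on the provider's API.

from evesrp.auth.oauth import OAuthMethod

class ExampleOAuth(OAuthMethod):

    def __init__(self, **kwargs):
        # Provide defaults for this (hypothetical) provider while still
        # letting deployments override them.
        kwargs.setdefault('name', 'Example OAuth')
        kwargs.setdefault('authorize_url', 'https://example.org/oauth/authorize')
        kwargs.setdefault('access_token_url', 'https://example.org/oauth/token')
        kwargs.setdefault('scope', ['auth_info', 'characters', 'groups'])
        super(ExampleOAuth, self).__init__(**kwargs)

    def get_user(self):
        # Return the OAuthUser for the current access token.
        raise NotImplementedError

    def get_pilots(self):
        # Return a list of Pilots associated with the current access token.
        raise NotImplementedError

    def get_groups(self):
        # Return a list of Groups associated with the current access token.
        raise NotImplementedError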
J4OAuth

class evesrp.auth.j4oauth.J4OAuth(base_url='https://j4lp.com/oauth/api/v1/', **kwargs)
    Bases: evesrp.auth.oauth.OAuthMethod

    __init__(base_url='https://j4lp.com/oauth/api/v1/', **kwargs)
        AuthMethod for using J4OAuth as an authentication source.
        Parameters:
        • authorize_url (str) – The URL to request OAuth authorization tokens. Default: 'https://j4lp.com/oauth/authorize'.
        • access_token_url (str) – The URL for OAuth token exchange. Default: 'https://j4lp.com/oauth/token'.
        • base_url (str) – The base URL for API requests. Default: 'https://j4lp.com/oauth/api/v1/'.
        • request_token_params (dict) – Additional parameters to include with the authorization token request. Default: {'scope': ['auth_info', 'auth_groups', 'characters']}.
        • access_token_method (str) – HTTP method to use for exchanging authorization tokens for access tokens. Default: 'GET'.
        • name (str) – The name for this authentication method. Default: 'J4OAuth'.

TestOAuth

class evesrp.auth.testoauth.TestOAuth(devtest=False, **kwargs)
    Bases: evesrp.auth.oauth.OAuthMethod

    __init__(devtest=False, **kwargs)
        AuthMethod using TEST Auth's OAuth-based API for authentication and authorization.
        Parameters:
        • admins (list) – Two types of values are accepted in this list: either a string specifying a user's primary character's name, or their Auth ID as an integer.
        • devtest (bool) – Testing parameter that changes the default domain for URLs from 'https://auth.pleaseignore.com' to 'https://auth.devtest.pleaseignore.com'. Default: False.
        • authorize_url (str) – The URL to request OAuth authorization tokens. Default: 'https://auth.pleaseignore.com/oauth2/authorize'.
        • access_token_url (str) – The URL for OAuth token exchange. Default: 'https://auth.pleaseignore.com/oauth2/access_token'.
        • base_url (str) – The base URL for API requests. Default: 'https://auth.pleaseignore.com/api/v3/'.
        • request_token_params (dict) – Additional parameters to include with the authorization token request. Default: {'scope': 'private-read'}.
        • access_token_method (str) – HTTP method to use for exchanging authorization tokens for access tokens. Default: 'POST'.
        • name (str) – The name for this authentication method. Default: 'Test OAuth'.

Low-Level API

class evesrp.auth.PermissionType
    Enumerated type for the types of permissions available.
    elevated – Returns a frozenset of the permissions above submit.
    all – Returns a frozenset of all possible permission values.
    admin – Division-level administrator permission.
    audit – A special permission for allowing read-only elevated access.
    pay – Permission for payers in a Division.
    review – Permission for reviewers of requests in a Division.
    submit – Permission allowing submission of Requests to a Division.

class evesrp.auth.AuthMethod(admins=None, name='Base Authentication', **kwargs)
    Represents an authentication mechanism for users.

    __init__(admins=None, name='Base Authentication', **kwargs)
        Parameters:
        • admins (list) – A list of usernames to treat as site-wide administrators. Useful for initial setup.
        • name (str) – The user-facing name for this authentication method.

    form()
        Return a flask_wtf.Form subclass to login with.

    login(form)
        Process a validated login form. You must return a valid response object.

    static login_user(user)
        Signal to the authentication systems that a new user has logged in. Handles calling flask_login.login_user() and any other related housekeeping functions for you.
        Parameters: user (User) – The user that has been authenticated and is logging in.

    refresh(user)
        Refresh a user's information (if possible). The AuthMethod should attempt to refresh the given user's information as if they were logging in for the first time.
        Parameters: user (User) – The user to refresh.
        Returns: Whether or not the refresh attempt succeeded.
        Return type: bool

    safe_name
        Normalizes a string to be a valid Python identifier (along with a few other things). Specifically, all letters are lower-cased, and non-ASCII characters and whitespace are replaced by underscores.
        Returns: The normalized string.
        Return type: str

    view()
        Optional method for providing secondary views. evesrp.views.login.auth_method_login() is configured to allow both GET and POST requests, and will call this method as soon as it is known which auth method is meant to be called. The path for this view is /login/self.safe_name/, and can be generated with url_for('login.auth_method_login', auth_method=self.safe_name). The default implementation redirects to the main login view.

class evesrp.auth.models.Entity(name, authmethod, **kwargs)
    Private class for shared functionality between User and Group. This class defines a number of helper methods used indirectly by User and Group subclasses, such as automatically defining the table name and mapper arguments. This class should not be inherited from directly; instead either User or Group should be used.
    authmethod – The name of the AuthMethod for this entity.
    entity_permissions – Permissions associated specifically with this entity.

    has_permission(permissions, division_or_request=None)
        Returns whether this entity has been granted a permission in a division. If division_or_request is None, this method checks if this entity has the given permission in any division.
        Parameters:
        • permissions (iterable) – The series of permissions to check.
        • division_or_request – The division to check. May also be None or an SRP request.
        Return type: bool

    name – The name of the entity. Usually a nickname.

class evesrp.auth.models.User(name, authmethod, **kwargs)
    Bases: evesrp.auth.models.Entity
    User base class. Represents users who can submit, review and/or pay out requests. It also supplies a number of convenience methods for subclasses.
    actions – Actions this user has performed on requests.
    admin – If the user is an administrator. This allows the user to create and administer divisions.
    get_id() – Part of the interface for Flask-Login.
    groups – Groups this user is a member of.
    is_active – Part of the interface for Flask-Login.
    is_anonymous – Part of the interface for Flask-Login.
    is_authenticated – Part of the interface for Flask-Login.
    pilots – Pilots associated with this user.
    requests – Requests this user has submitted.

    submit_divisions()
        Get a list of the divisions this user is able to submit requests to.
        Returns: A list of tuples. The tuples are in the form (division.id, division.name).
        Return type: list

class evesrp.auth.models.Pilot(user, name, id_)
    Represents an in-game character.

    __init__(user, name, id_)
        Create a new Pilot instance.
        Parameters:
        • user (User) – The user this character belongs to.
        • name (str) – The name of this character.
        • id_ (int) – The CCP-given characterID number.

    name – The name of the character.
    requests – The Requests filed with lossmails from this character.
    user – The User this character belongs to.

class evesrp.auth.models.APIKey(user)
    Represents an API key for use with the External API.
    hex_key – The key data in a modified base-64 format safe for use in URLs.
    key – The raw key data.
    user – The User this key belongs to.

class evesrp.auth.models.Note(user, noter, note)
    A note about a particular User.
    content – The actual contents of this note.
    noter – The author of this note.
    user – The User this note refers to.

class evesrp.auth.models.Group(name, authmethod, **kwargs)
    Bases: evesrp.auth.models.Entity
    Base class for a group of users. Usable for granting permissions to submit, evaluate and pay.
    permissions – Synonym for entity_permissions.
    users – Users that belong to this group.

class evesrp.auth.models.Permission(division, permission, entity)

    __init__(division, permission, entity)
        Create a Permission object granting an entity access to a division.

    division – The division this permission is granting access to.
    entity – The Entity being granted access.
    permission – The permission being granted.

class evesrp.auth.models.Division(name)
    A reimbursement division. A division has (possibly non-intersecting) groups of people that can submit requests, review requests, and pay out requests.
    division_permissions – All Permissions associated with this division.
    name – The name of this division.
    permissions – The permission objects for this division, mapped via their permission names.
    requests – Requests filed under this division.
    transformers – A mapping of attribute names to Transformer instances.

class evesrp.auth.models.TransformerRef(**kwargs)
    Stores associations between Transformers and Divisions.
    attribute_name – The attribute this transformer is applied to.
    division – The division the transformer is associated with.
    transformer – The transformer instance.

Killmail Handling

EVE-SRP relies on outside sources for its killmail information. Whether that source is ESI, zKillboard, or some private killboard does not matter; there just has to be some sort of access to the information. The interface for Killmail is fairly simple. It provides a number of attributes, and for those that correspond to in-game entities, it also provides their ID number. The default implementation has all values set to None. If a killmail is invalid in some way, this can be signaled either by raising a ValueError or LookupError in the killmail's __init__() method, or by defining a Killmail.verified property and returning False from it when the killmail is invalid.

Two implementations for creating a Killmail from a URL are included: ESIMail is created from an ESI external killmail link, and ZKillmail is created from a zKillboard details link.

Extension Examples

The reasoning behind handling killmails in a separate class was to allow administrators to customize behavior. Here are a few snippets that may be useful for your situation.

Restricting Valid zKillboards

ZKillmail will by default accept any link that looks and acts like a zKillboard instance. It does not restrict itself to any particular domain name, but it makes allowances for this common requirement:

from evesrp.killmail import ZKillmail

class OnlyZKillboard(ZKillmail):
    def __init__(self, *args, **kwargs):
        super(OnlyZKillboard, self).__init__(*args, **kwargs)
        if self.domain != 'zkillboard.com':
            raise ValueError(u"This killmail is from the wrong killboard.")

Submitting ESI Links to zKillboard

To streamline the process for users, you can accept ESI killmail links, submit them to zKillboard.com, and use the resulting zKillboard.com link as the canonical URL for the request.
from decimal import Decimal
from flask import Markup
from evesrp.killmail import ESIMail

class SubmittedESIZKillmail(ESIMail):
    """Accepts and validates ESI killmail links, but submits them to
    zKillboard and substitutes the zKB link in as the canonical link."""

    def __init__(self, url, **kwargs):
        # Let ESIMail validate the ESI link
        super(self.__class__, self).__init__(url, **kwargs)
        # Submit the ESI URL to zKillboard
        resp = self.requests_session.post('https://zkillboard.com/post/',
                data={'killmailurl': url})
        # Use the URL we get from zKillboard as the new URL (if it's successful).
        if str(self.kill_id) in resp.url:
            self.url = resp.url
        else:
            # Leave the ESI URL as-is and finish
            return
        # Grab zKB's data from their API
        api_url = ('https://zkillboard.com/api/no-attackers/'
                   'no-items/killID/{}').format(self.kill_id)
        zkb_api = self.requests_session.get(api_url)
        retrieval_error = LookupError(u"Error retrieving killmail data (zKB): {}"
                .format(zkb_api.status_code))
        if zkb_api.status_code != 200:
            raise retrieval_error
        try:
            json = zkb_api.json()
        except ValueError as e:
            raise retrieval_error
        try:
            json = json[0]
        except IndexError as e:
            raise LookupError(u"Invalid killmail: {}".format(url))
        # Recent versions of zKillboard calculate a loss's value.
        try:
            self.value = Decimal(json[u'zkb'][u'totalValue'])
        except KeyError:
            self.value = Decimal(0)

    description = Markup(u'An ESI external killmail link that will be '
                         u'automatically submitted to <a href="https://'
                         u'zkillboard.com">zKillboard.com</a>.')

Setting Base Payouts from a Spreadsheet

If you have standardized payout values in a Google spreadsheet, you can set Request.base_payout to the values in this spreadsheet. This assumes your spreadsheet is set up with ship hull names in one column and payouts in another column. Both columns need to have a header ('Hull' and 'Payout' in the example below). This uses the Google Data Python Client, which only supports Python 2 and can be installed with pip install gdata.

import gdata.spreadsheets.client
from decimal import Decimal
from evesrp.killmail import ZKillmail

# Patch the spreadsheet client to use the public feeds.
gdata.spreadsheets.client.PRIVATE_WORKSHEETS_URL = \
    gdata.spreadsheets.client.WORKSHEETS_URL
gdata.spreadsheets.client.WORKSHEETS_URL = ('https://spreadsheets.google.com/'
        'feeds/worksheets/%s/public/full')
gdata.spreadsheets.client.PRIVATE_LISTS_URL = \
    gdata.spreadsheets.client.LISTS_URL
gdata.spreadsheets.client.LISTS_URL = ('https://spreadsheets.google.com/feeds/'
        'list/%s/%s/public/full')

class SpreadsheetPayout(ZKillmail):

    # The spreadsheet's key
    # (https://docs.google.com/spreadsheets/d/THE_PART_HERE/edit).
    # Make sure the spreadsheet has been published (File->Publish to web...)
    spreadsheet_key = 'THE_PART_HERE'

    # The name of the worksheet with the payouts
    worksheet_name = 'Payouts'

    # The header for the hull column (always lowercase, the Google API
    # lowercases it).
    hull_key = 'hull'

    # And the same for the payout column
    payout_key = 'payout'

    client = gdata.spreadsheets.client.SpreadsheetsClient()

    @property
    def value(self):
        # Find the worksheet
        sheets = self.client.get_worksheets(self.spreadsheet_key)
        for sheet in sheets.entry:
            if sheet.title.text == self.worksheet_name:
                worksheet_id = sheet.get_worksheet_id()
                break
        else:
            return Decimal('0')
        # Read the worksheet's data
        lists = self.client.get_list_feed(self.spreadsheet_key, worksheet_id,
                query=gdata.spreadsheets.client.ListQuery(sq='{}={}'.format(
                        self.hull_key, self.ship)))
        for entry in lists.entry:
            return Decimal(entry.get_value(self.payout_key))
        return Decimal('0')

Developer API

class evesrp.killmail.Killmail(**kwargs)
    Base killmail representation.
    kill_id – The ID integer of this killmail. Used by most killboards and by CCP to refer to killmails.
    ship_id – The typeID integer for the ship lost for this killmail.
    ship – The human-readable name of the ship lost for this killmail.
    pilot_id – The ID number of the pilot who lost the ship. Referred to by CCP as characterID.
    pilot – The name of the pilot who lost the ship.
    corp_id – The ID number of the corporation the pilot belonged to at the time this kill happened.
    corp – The name of the corporation referred to by corp_id.
    alliance_id – The ID number of the alliance corp belonged to at the time of this kill, or None if the corporation wasn't in an alliance at the time.
    alliance – The name of the alliance referred to by alliance_id.
    url – A URL for viewing this killmail's information later. Typically an online killboard such as zKillboard, but other kinds of links may be used.
    value – The estimated ISK loss for the ship destroyed in this killmail. This is an optional attribute, and is None if unsupported. If this attribute is set, it should be a Decimal, or at least a type that can be used as the value for the Decimal constructor.
    timestamp – The date and time that this kill occurred, as a datetime.datetime object (with a UTC timezone).
    verified – Whether or not this killmail has been API verified (or, more accurately, whether it is to be trusted when making a Request).
    system – The name of the system where the kill occurred.
    system_id – The ID of the system where the kill occurred.
    constellation – The name of the constellation where the kill occurred.
    region – The name of the region where the kill occurred.

    __init__(**kwargs)
        Initialize a Killmail with None for all attributes. All subclasses of this class (and all mixins designed to be used with it) must call super().__init__(**kwargs) to ensure all initialization is done.
        Parameters: keyword arguments corresponding to attributes.

    __iter__()
        Iterate over the attributes of this killmail. Yields tuples in the form ('<name>', <value>). This is used by Request.__init__ to initialize its data quickly. <name> in the returned tuples is the name of the attribute on the Request.

    description = u'A generic Killmail. If you see this text, you need to configure your application.'
        A user-facing description of what kind of killmails this Killmail validates/handles. This text is displayed below the text field for a killmail URL to let users know what kinds of links are acceptable.

class evesrp.killmail.ZKillmail(url, **kwargs)
    Bases: evesrp.killmail.ESIMail
    domain – The domain name of this killboard.

class evesrp.killmail.ESIMail(url, **kwargs)
    Bases: evesrp.killmail.Killmail, evesrp.killmail.RequestsSessionMixin, evesrp.killmail.LocationMixin
    A killmail with data sourced from an ESI killmail link.
    __init__(url, **kwargs)
        Create a killmail from an ESI killmail link.
        Parameters: url (str) – the ESI killmail URL.
        Raises:
        • ValueError – if url is not an ESI URL.
        • LookupError – if the ESI API response is in an unexpected format.

class evesrp.killmail.RequestsSessionMixin(requests_session=None, **kwargs)
    Mixin for providing a requests.Session. The shared session allows HTTP user agents to be set properly, and for possible connection pooling.
    requests_session – A Session for making HTTP requests.

    __init__(requests_session=None, **kwargs)
        Set up a Session for making HTTP requests. If an existing session is not provided, one will be created.
        Parameters: requests_session – an existing session to use.

class evesrp.killmail.ShipNameMixin
    Killmail mixin providing Killmail.ship from Killmail.ship_id.
    ship – Looks up the ship name using Killmail.ship_id.

class evesrp.killmail.LocationMixin
    Killmail mixin for providing solar system, constellation and region names from Killmail.system_id.
    constellation – Provides the constellation name using Killmail.system_id.
    region – Provides the region name using Killmail.system_id.
    system – Provides the solar system name using Killmail.system_id.

Views

evesrp.views.index()
    The index page for EVE-SRP.

Login

evesrp.views.login.auth_method_login(auth_method)
    Trampoline for AuthMethod-specific views. See AuthMethod.view for more details.

evesrp.views.login.login()
    Presents the login form and processes responses from that form. When a POST request is received, this function passes control to the appropriate login method.

evesrp.views.login.login_loader(userid)
    Pull a user object from the database. This is used for loading users from existing sessions.

evesrp.views.login.logout()
    Logs the current user out. Redirects to index().

Divisions

evesrp.views.divisions.add_division()
    Present a form for adding a division and also process that form. Only accessible to administrators.

evesrp.views.divisions.get_division_details(division_id=None, division=None)
    Generate a page showing the details of a division. Shows which groups and individuals have been granted permissions to each division. Only accessible to administrators.
    Parameters: division_id (int) – The ID number of the division.

evesrp.views.divisions.list_transformers(division_id, attribute=None)
    API method to get a list of transformers for a division.
    Parameters:
    • division_id (int) – the ID of the division to look up.
    • attribute (str) – a specific attribute to look up. Optional.
    Returns: JSON

evesrp.views.divisions.modify_division(division_id)
    Dispatches modification requests to the specialized view function for that operation.

evesrp.views.divisions.permissions()
    Show a page listing all divisions.

evesrp.views.divisions.transformer_choices(attr)
    List of tuples enumerating attributes that can be transformed/linked. Mainly used as the choices argument to SelectField.

Requests

class evesrp.views.requests.PayoutListing
    A special view made for quickly processing payouts for requests.

class evesrp.views.requests.PermissionRequestListing(permissions, statuses, title=None)
    Show all requests that the current user has permissions to access. This is used for the various permission-specific views.

    __init__(permissions, statuses, title=None)
        Create a PermissionRequestListing for the given permissions and statuses.
        Parameters:
        • permissions (tuple) – The permissions to filter by.
        • statuses (tuple) – A tuple of valid statuses for requests to be in.

class evesrp.views.requests.PersonalRequests
    Shows a list of all personally submitted requests and divisions the user has permissions in. It will show all requests the current user has submitted.

class evesrp.views.requests.RequestListing
    Abstract class for lists of Requests. Subclasses will be able to respond to both normal HTML requests as well as to API requests with JSON.

    decorators = [login_required, vary_decorator]
        Decorators to apply to the view functions.

    dispatch_request(filters='', **kwargs)
        Returns the response to requests. Part of the flask.views.View interface.

    requests(filters)
        Returns a list of Requests belonging to the specified Division, or all divisions if None.
        Returns: Requests
        Return type: iterable

    template = 'requests_list.html'
        The template to use for listing requests.

class evesrp.views.requests.ValidKillmail(mail_class, **kwargs)
    Custom Field validator that checks if any Killmail accepts the given URL.

evesrp.views.requests.get_killmail_validators()
    Get a list of ValidKillmails for each killmail source. This method is used to delay accessing current_app until we're in a request context.
    Returns: a list of ValidKillmails
    Return type: list

evesrp.views.requests.get_request_details(request_id=None, srp_request=None)
    Handles responding to all of the Request detail functions. The various modifier functions all depend on this function to create the actual response content. Only one of the arguments is required. The srp_request argument is a convenience to other functions calling this function that have already retrieved the request.
    Parameters:
    • request_id (int) – the ID of the request.
    • srp_request (Request) – the request.

evesrp.views.requests.modify_request(request_id)
    Handles POST requests that modify Requests. Because of the numerous possible forms, this function bounces execution to a more specific function based on the form's "id_" field.
    Parameters: request_id (int) – the ID of the request.

evesrp.views.requests.register_perm_request_listing(app, endpoint, path, permissions, statuses, title=None)
    Utility function for creating PermissionRequestListing views.
    Parameters:
    • app (flask.Flask) – The application to add the view to.
    • endpoint (str) – The name of the view.
    • path (str) – The URL path for the view.
    • permissions (tuple) – Passed to PermissionRequestListing.__init__().
    • statuses (iterable) – Passed to PermissionRequestListing.__init__().

evesrp.views.requests.submit_request()
    Submit a Request. Displays a form for submitting a request and then processes the submitted information. Verifies that the user has the appropriate permissions to submit a request for the chosen division and that the killmail URL given is valid. Also enforces that the user submitting the request controls the character from the killmail, and prevents duplicate requests.

evesrp.views.requests.url_for_page(pager, page_num)
    Utility method used in Jinja templates.
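The sketch below (not part of the original documentation) shows one way register_perm_request_listing might be used to wire up an additional request list. The endpoint name, the URL path, and the exact place this would be called from during application setup are assumptions made for illustration.

from evesrp.views.requests import register_perm_request_listing
from evesrp.auth import PermissionType
from evesrp.models import ActionType

def add_audit_listing(app):
    # Show reviewers and auditors every request that is not yet finalized.
    register_perm_request_listing(
        app,
        'audit',                                        # endpoint name (hypothetical)
        '/request/audit/',                              # URL path (hypothetical)
        (PermissionType.review, PermissionType.audit),  # who may see the list
        ActionType.pending,                             # non-final statuses
        title='Audit')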
Models

class evesrp.models.ActionType
    An Enum for representing the types of Actions performed on a Request, in addition to the status of a Request.
    statuses – A frozenset of all of the single ActionType members that also double as statuses for Requests.
    finalized – A frozenset of the ActionTypes that are terminal states for a Request (paid and rejected).
    pending – A frozenset of ActionTypes for Requests that require further action to be put in a finalized state.
    approved – Status for a request that has been evaluated and is awaiting payment.
    comment – A special type of Action representing a comment made on the request.
    evaluating – Status for a request being evaluated.
    incomplete – Status for a request that is missing details and needs further action.
    paid – Status for a request that has been paid. This is a terminating state.
    rejected – Status for a request that has been rejected. This is a terminating state.

exception evesrp.models.ActionError
    Error raised for invalid state changes for a Request.

class evesrp.models.Action(request, user, note=None, type_=None)
    Bases: flask_sqlalchemy.Model, evesrp.util.models.AutoID, evesrp.util.models.Timestamped, evesrp.util.models.AutoName
    Actions change the state of a Request. Requests enforce permissions when actions are added to them. If the user adding the action does not have the appropriate Permissions in the request's Division, an ActionError will be raised. With the exception of the comment action (which just adds text to a request), actions change the status of a Request.
    note – Any additional notes for this action.
    request – The Request this action applies to.
    type_ – The action being taken. See ActionType for possible values.
    user – The User who made this action.

exception evesrp.models.ModifierError
    Error raised when a modification is attempted on a Request while it is in an invalid state.

class evesrp.models.Modifier(request, user, note, value)
    Bases: flask_sqlalchemy.Model, evesrp.util.models.AutoID, evesrp.util.models.Timestamped, evesrp.util.models.AutoName
    Modifiers apply bonuses or penalties to Requests. This is an abstract base class for the pair of concrete implementations. Modifiers can be voided at a later date; the user who voided a modifier and when it was voided are recorded. Requests enforce permissions when modifiers are added. If the user adding a modifier does not have the appropriate Permissions in the request's Division, a ModifierError will be raised.
    voided – Boolean of whether this modifier has been voided or not. This property is available as a hybrid_property, so it can be used natively in SQLAlchemy queries.
    note – Any notes explaining this modification.
    request – The Request this modifier applies to.
    user – The User who added this modifier.

    void(user)
        Mark this modifier as void.
        Parameters: user (User) – The user voiding this modifier.

    voided_timestamp – If this modifier has been voided, this will be the timestamp of when it was voided.
    voided_user – The User who voided this modifier, if it has been voided.

class evesrp.models.AbsoluteModifier(request, user, note, value)
    Subclass of Modifier for representing absolute modifications. Absolute modifications are those that are not dependent on the value of Request.base_payout.
    value – How much ISK to add to or remove from the payout.

class evesrp.models.RelativeModifier(request, user, note, value)
    Subclass of Modifier for representing relative modifiers. Relative modifiers depend on the value of Request.base_payout to calculate their effect.
    value – What percentage of the payout to add or remove.

class evesrp.models.Request(submitter, details, division, killmail, **kwargs)
    Bases: flask_sqlalchemy.Model, evesrp.util.models.AutoID, evesrp.util.models.Timestamped, evesrp.util.models.AutoName
    Requests represent SRP requests.
    payout – The total payout of this request, taking all active modifiers into account. In calculating the total payout, all absolute modifiers along with the base_payout are summed. This is then multiplied by the sum of all of the relative modifiers plus 1. This property is a read-only hybrid_property, so it can be used natively in SQLAlchemy queries.
    finalized – Boolean of whether this request is in a finalized state. Also a read-only hybrid_property, so it can be used natively in SQLAlchemy queries.

    __init__(submitter, details, division, killmail, **kwargs)
        Create a Request.
        Parameters:
        • submitter (User) – The user submitting this request.
        • details (str) – Supporting details for this request.
        • division (Division) – The division this request is being submitted to.
        • killmail (Killmail) – The killmail this request pertains to.

    actions – A list of Actions that have been applied to this request, sorted in the order they were applied.
    alliance – The alliance of the pilot at the time of the killmail.
    base_payout – The base payout for this request. This value is clamped to a lower limit of 0. It can only be changed when this request is in an evaluating state, or else a ModifierError will be raised.
    constellation – The constellation this loss occurred in.
    corporation – The corporation of the pilot at the time of the killmail.
    details – Supporting information for the request.
    division – The Division this request was submitted to.
    kill_timestamp – The date and time of when the ship was destroyed.
    killmail_url – The URL of the source killmail.
    modifiers – A list of all Modifiers that have been applied to this request, regardless of whether they have been voided or not. They are sorted in the order they were added.
    payout – The payout for this request, taking into account all active modifiers.
    pilot – The Pilot who was the victim in the killmail.
    region – The region this loss occurred in.
    ship_type – The type of ship that was destroyed.
    status – This attribute is automatically kept in sync as Actions are added to the request. It should not be set otherwise. At the time an Action is added to this request, the type of action is checked and the state diagram below is enforced. If the action is invalid, an ActionError is raised.
        [State diagram: transitions between the evaluating, incomplete, approved, rejected, and paid statuses, annotated with who may make each change.] In the diagram, R means a reviewer can make that change, S means the submitter can make that change, and P means payers can make that change. Solid borders are terminal states.
    submitter – The User who submitted this request.
    system – The solar system this loss occurred in.
    transformed – Get a special HTML representation of an attribute. Divisions can have a transformer defined on various attributes that outputs a URL associated with that attribute. This property provides easy access to the output of any transformed attributes on this request.

    valid_actions(user)
        Get the valid actions (besides comment) the given user can perform.

Javascript

The following documentation is directed towards people developing the front-end for EVE-SRP. These functions should not be used by end users, and are purely an implementation detail.

Utilities

month(month_int)
    Convert an integer representing a month to the three-letter abbreviation.
    Arguments:
    • month_int (int) – An integer (0-11) representing a month.
    Returns: The three-letter abbreviation for that month.
    Return type: string

padNum(num, width)
    Pad a number with leading 0s to the given width.
    Arguments:
    • num (int) – The number to pad.
    • width (int) – The width to pad num to.
    Returns: num padded to width with 0s.
    Return type: string

pageNumbers(num_pages, current_page[, options])
    Return an array of page numbers, skipping some of them as configured by the options argument. This function should be functionally identical to Flask-SQLAlchemy's Pagination.iter_pages (including in default arguments). One deviation is that this function uses 0-indexed page numbers instead of 1-indexed, to ease compatibility with PourOver. Skipped numbers are represented by null.
    Arguments:
    • num_pages (int) – The total number of pages.
    • current_page (int) – The index of the current page.
    • options – An object with configuration values for where to skip numbers. Keys are left_edge, left_current, right_current, and right_edge. The default values are 2, 2, 5 and 2 respectively.
    Returns: The page numbers to be shown, in order.
    Return type: An array of integers (and null).

pager_a_click(ev)
    Event callback for pager links. It intercepts the event and changes the current PourOver view to reflect the new page.
    Arguments:
    • ev (event) – The event object.

PourOver

class RequestsView(name, collection)
    An extension of PourOver.View with a custom render function recreating a table of Requests with the associated pager.

addRequestSorts(collection)
    Add sorts for Request attributes to the given PourOver.Collection.
    Arguments:
    • collection (PourOver.Collection) – A collection of requests.

addRequestFilters(collection)
    Add filters for Request attributes to the given PourOver.Collection.
    Arguments:
    • collection (PourOver.Collection) – A collection of requests.
# neuroptica API

Submodules: components, component_layers, initializers, layers, losses, models, nonlinearities, optimizers, utils

# components

The `components` submodule contains functionality for simulating individual optical components, such as a single phase shifter, a beamsplitter, or an MZI. Components are combined in a `ComponentLayer`, which describes the arrangement of the components on-chip.

`Beamsplitter(m: int, n: int)`

* Simulation of a perfect 50:50 beamsplitter.

`MZI(m: int, n: int, theta: float = None, phi: float = None, phase_uncert=0.0)`

* Simulation of a programmable phase-shifting Mach-Zehnder interferometer.

* `__init__(m: int, n: int, theta: float = None, phi: float = None, phase_uncert=0.0)`

  Parameters:
  * m – first waveguide index
  * n – second waveguide index
  * theta – phase shift value for the inner phase shifter; assigned randomly between [0, 2pi) if unspecified
  * phi – phase shift value for the outer phase shifter; assigned randomly between [0, 2pi) if unspecified
  * phase_uncert – optional uncertainty to add to the phase shifters; the effective phase is computed as self.(theta, phi) + np.random.normal(0, self.phase_uncert) if add_uncertainties is set to True during simulation

* `get_partial_transfer_matrices(backward=False, cumulative=True, add_uncertainties=False) → np.ndarray` (method name reconstructed; it was lost in extraction)

  Compute the partial transfer matrices of each "column" of the MZI: after the first beamsplitter, after the first phase shifter, after the second beamsplitter, and after the second phase shifter.
  * backward – if true, compute the reverse transfer matrices in backward order
  * cumulative – if true, each partial transfer matrix represents the total transfer matrix up to that point in the device
  * add_uncertainties – whether to include uncertainties in the partial transfer matrix computation
  * returns – numpy array of partial transfer matrices

`OpticalComponent`

* Bases: `object`

  Base class for an on-chip optical component. Initialize the component with:
  * ports (list) – list of ports the component is connected to
  * tunable (bool) – whether the component is tunable or static
  * dof (int) – number of degrees of freedom the component has
  * id (int) – optional identifier for the component

`PhaseShifter(m: int, phi: float = None)`

* Single-mode phase shifter.

* `__init__(m: int, phi: float = None)`

  Parameters:
  * m – waveguide index
  * phi – optional phase shift value; assigned randomly between [0, 2pi) if unspecified

# component_layers

The `component_layers` submodule contains functionality for assembling optical components on a simulated chip and computing their transfer operations. A ComponentLayer represents a physical "column" of optical components which acts on an input in parallel. ComponentLayers can be assembled into an OpticalMesh or put into a NetworkLayer.

`ComponentLayer(N: int, components: List[Type[neuroptica.components.OpticalComponent]])`

* Bases: `object`

  Base class for a single physical column of optical components which acts on inputs in parallel.
* `__init__(N: int, components: List[Type[neuroptica.components.OpticalComponent]])`

  Initialize the ComponentLayer.
  * N (int) – number of waveguides in the ComponentLayer
  * components (list[OpticalComponent]) – list of components in the layer

* `__iter__() → Iterable[Type[neuroptica.components.OpticalComponent]]`

* `all_tunable_params() → Iterable[float]`

`MZILayer(N: int, mzis: List[neuroptica.components.MZI])`

* Represents a physical column of MZIs attached to an ensemble of waveguides.

* `__init__(N: int, mzis: List[neuroptica.components.MZI])`

  Parameters:
  * N – number of waveguides in the system the MZI layer is embedded in
  * mzis – list of MZIs in the column (can be fewer than N)

* `__iter__() → Iterable[neuroptica.components.MZI]`

* classmethod `from_waveguide_indices(N: int, waveguide_indices: List[int])`

  Create an MZI layer from a list of an even number of input/output indices. Each pair of waveguides in the iteration order will be assigned to an MZI.
  * N – size of the MZILayer
  * waveguide_indices – list of waveguides the layer attaches to
  * returns – MZILayer class instance with size N and MZIs attached to waveguide_indices

* `get_partial_transfer_matrices(backward=False, cumulative=True, add_uncertainties=False) → np.ndarray` (method name reconstructed; it was lost in extraction)

  Return a list of 4 partial transfer matrices for the entire MZI layer, corresponding to (1) after the first BS in each MZI, (2) after the theta shifter, (3) after the second BS, and (4) after the phi shifter. The order is reversed in the backward case.
  * backward – whether to compute the backward partial transfer matrices
  * cumulative – if true, each partial transfer matrix represents the total transfer matrix up to that point in the device
  * add_uncertainties – whether to include uncertainties in the transfer matrix computation
  * returns – numpy array of partial transfer matrices

* `get_partial_transfer_vectors(backward=False, cumulative=True, add_uncertainties=False) → np.ndarray`

  Parameters: backward, cumulative, add_uncertainties.

* `get_transfer_matrix(add_uncertainties=False) → np.ndarray`

`OpticalMesh(N: int, layers: List[Type[neuroptica.component_layers.ComponentLayer]])`

* Bases: `object`

  Represents an optical "mesh" consisting of several layers of optical components, e.g.
`OpticalMesh` (N: int, layers: List[Type[neuroptica.component_layers.ComponentLayer]])
* Bases: `object`
  Represents an optical "mesh" consisting of several layers of optical components, e.g. a rectangular MZI mesh.
* `__init__` (N, layers)
  Initialize the OpticalMesh
  :param N: number of waveguides in the system the mesh is embedded in
  :param layers: list of ComponentLayers that the mesh contains (enumerates the columns of components)
* `adjoint_optimize` (forward_field, adjoint_field, update_fn, accumulator=np.mean, dry_run=False, cache_fields=False, use_partial_vectors=False)
  Compute the loss gradient as described in Hughes, et al. (2018), "Training of photonic neural networks through in situ backpropagation and gradient measurement", and adjust the phase-shifting parameters accordingly.
  :param forward_field: forward-propagating input electric field at the beginning of the optical mesh
  :param adjoint_field: backward-propagating output electric field at the end of the optical mesh
  :param update_fn: a float => float function to compute how to update parameters given a gradient
  :param accumulator: an array => float function to compute a gradient from a batch of gradients; np.mean is used by default
  :param dry_run: if True, parameters will not be adjusted and a dictionary of parameter gradients for each ComponentLayer will be returned instead
  :param cache_fields: if True, forward and adjoint fields within the mesh will be cached
  :param use_partial_vectors: if True, uses the partial vectors method to speed up transfer matrix computation
  :return: None, or (if dry_run==True) a dictionary of parameter gradients for each ComponentLayer
* `compute_adjoint_phase_shifter_fields` (delta, align='right', use_partial_vectors=False) → List[List[np.ndarray]]
  Compute the backward-propagating (adjoint) electric fields at the left/right of each phase shifter in the mesh.
  :param delta: input adjoint field to the mesh
  :param align: whether to align the fields at the beginning or end of each column
  :return: a list of (list of field values to the left/right of each phase shifter in a layer) for each layer. The ordering of the list is the opposite of compute_phase_shifter_fields().
* `compute_gradients` (forward_field, adjoint_field, cache_fields=False, use_partial_vectors=False) → Dict[Type[neuroptica.components.OpticalComponent], np.ndarray]
  Compute the gradients for each optical component within the mesh, without adjusting the parameters.
  :param forward_field: forward-propagating input electric field at the beginning of the optical mesh
  :param adjoint_field: backward-propagating output electric field at the end of the optical mesh
  :param cache_fields: if True, forward and adjoint fields within the mesh will be cached
  :param use_partial_vectors: if True, uses the partial vectors method to speed up transfer matrix computation
* `compute_phase_shifter_fields` (X, align='right', use_partial_vectors=False, include_bs=False) → List[List[np.ndarray]]
  Compute the forward-propagating electric fields at the left/right of each phase shifter in the mesh.
  :param X: input field to the mesh
  :param align: whether to align the fields at the beginning or end of each column
  :param use_partial_vectors: can speed up the computation if set to True
  :param include_bs: if true, also compute the phase shifter fields before/after each beamsplitter in the mesh
  :return: a list of (list of field values to the left/right of each phase shifter in a layer) for each layer
* `get_partial_transfer_matrices` (backward=False, cumulative=True) → List[np.ndarray]
  Return the partial transfer matrices for the optical mesh after each column of components.
  :param backward: whether to compute the backward partial transfer matrices
  :param cumulative: if true, each partial transfer matrix represents the total transfer matrix up to that point in the device
  :return: list of partial transfer matrices
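A minimal sketch of assembling a mesh from component layers, using the constructors and method names documented above. The 4-waveguide layout is illustrative, and the input field is assumed here to be a length-N complex vector.

```python
import numpy as np
from neuroptica.component_layers import MZILayer, OpticalMesh, PhaseShifterLayer

N = 4
mesh = OpticalMesh(N, [
    MZILayer.from_waveguide_indices(N, [0, 1, 2, 3]),  # column 1: MZIs on (0, 1) and (2, 3)
    MZILayer.from_waveguide_indices(N, [1, 2]),        # column 2: a single MZI on (1, 2)
    PhaseShifterLayer(N),                              # final column of single-mode phase shifters
])

# Cumulative transfer matrices after each column of the mesh
partials = mesh.get_partial_transfer_matrices(cumulative=True)

# Forward fields next to each phase shifter for a uniform input field
X = np.ones(N, dtype=complex) / np.sqrt(N)
fields = mesh.compute_phase_shifter_fields(X, align="right")
```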
`PhaseShifterLayer` (N: int, phase_shifters: List[neuroptica.components.PhaseShifter] = None)
* Represents a column of N single-mode phase shifters
* `__init__` (N, phase_shifters=None)
  Parameters:
  * N – number of waveguides the column is embedded in
  * phase_shifters – list of phase shifters in the column (can be fewer than N)
* `__iter__` () → Iterable[neuroptica.components.PhaseShifter]

# initializers

[Incomplete module] The `initializers` submodule includes methods for initializing parameters (such as phase shifter values) throughout a NetworkLayer.

`Initializer`
* Bases: `object`
  Base initializer class

`RandomPhaseInitializer`

# layers

The `layers` submodule contains functionality for implementing a logical "layer" in the simulated optical neural network. The API for this module is based loosely on Keras.

* class `neuroptica.layers.Activation` (nonlinearity: neuroptica.nonlinearities.Nonlinearity)
  Represents a (nonlinear) activation layer. Note that in this layer, the usage of X and Z is reversed! (Z is the input, X is the output, which is the input for the next linear layer.)
  * `__init__` (nonlinearity) – Initialize the activation layer
    :param nonlinearity: a Nonlinearity instance
  * `backward_pass` (gamma) → np.ndarray

* class `neuroptica.layers.ClementsLayer` (N: int, M=None, include_phase_shifter_layer=True, initializer=None)
  Performs a unitary NxM operator with MZIs arranged in a Clements decomposition. If M=N, the layer can perform any arbitrary unitary operator.
  * `backward_pass` (delta, cache_fields=False, use_partial_vectors=False) → np.ndarray
  * `forward_pass` (X, cache_fields=False, use_partial_vectors=False) → np.ndarray

* class `neuroptica.layers.DropMask` (N: int, keep_ports=None, drop_ports=None)
  Drop specified ports entirely, reducing the size of the network for the next layer.
  * `__init__` (N, keep_ports=None, drop_ports=None)
    Parameters:
    * N – number of input ports to the DropMask layer
    * keep_ports – list or iterable of which ports to keep (drop_ports must be None if keep_ports is specified)
    * drop_ports – list or iterable of which ports to drop (keep_ports must be None if drop_ports is specified)
  * `backward_pass` (delta) → np.ndarray

* class `neuroptica.layers.NetworkLayer` (input_size: int, output_size: int, initializer=None)
  Bases: `object`
  Represents a logical layer in a simulated optical neural network.
  A NetworkLayer is different from a ComponentLayer, but it may contain a ComponentLayer or an OpticalMesh to compute the forward and backward logic.
  * `__init__` (input_size, output_size, initializer=None) – Initialize the NetworkLayer
    :param input_size: number of input ports
    :param output_size: number of output ports (usually the same as input_size, unless DropMask is used)
    :param initializer: optional initializer method (WIP)
  * `backward_pass` (delta) → np.ndarray

* class `neuroptica.layers.OpticalMeshNetworkLayer` (input_size, output_size, initializer=None)
  Base class for any network layer consisting of an optical mesh of phase shifters and MZIs.
  * `__init__` (input_size, output_size, initializer=None) – Initialize the OpticalMeshNetworkLayer
    :param input_size: number of input waveguides
    :param output_size: number of output waveguides
    :param initializer: optional initializer method (WIP)
  * `backward_pass` (delta, cache_fields=False, use_partial_vectors=False) → np.ndarray
  * `forward_pass` (X, cache_fields=False, use_partial_vectors=False) → np.ndarray
    Compute the forward pass of input fields into the network layer
    :param X: input fields to the NetworkLayer
    :return: transformed output fields to feed into the next layer of the ONN

* class `neuroptica.layers.ReckLayer` (N: int, include_phase_shifter_layer=True, initializer=None)
  Performs a unitary NxN operator with MZIs arranged in a Reck decomposition
  * `backward_pass` (delta) → np.ndarray

* class `neuroptica.layers.StaticMatrix` (matrix)
  Multiplies inputs by a static matrix (this is an aphysical layer)
  * `__init__` (matrix)
    Parameters:
    * matrix – matrix to multiply inputs by
  * `backward_pass` (delta)

# losses

The `losses` submodule contains classes for computing common loss functions.

* class `neuroptica.losses.CategoricalCrossEntropy`
  Bases: `neuroptica.losses.Loss`
  Represents categorical cross entropy with a softmax layer implicitly applied to the outputs
  * static `L` (X, T) → np.ndarray
    The scalar, real-valued loss function (vectorized over multiple X, T inputs)
    :param X: the output of the network
    :param T: the target output
    :return: loss function for each X
  * static `dL` (X, T) → np.ndarray
    The derivative of the loss function dL/dX_L used for backpropagation (vectorized over multiple X)
    :param X: the output of the network
    :param T: the target output
    :return: dL/dX_L for each X
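A small sketch of evaluating the loss and its derivative directly. The (n_outputs, n_samples) column layout mirrors the (n_features, n_samples) convention used by the optimizers below and is an assumption here, as is the use of real-valued detected powers as network outputs.

```python
import numpy as np
from neuroptica.losses import CategoricalCrossEntropy

# Three output ports, two samples; targets are one-hot columns
X = np.abs(np.random.randn(3, 2))          # e.g. detected output powers
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

loss_per_sample = CategoricalCrossEntropy.L(X, T)
grad = CategoricalCrossEntropy.dL(X, T)    # dL/dX_L, suitable for feeding into a backward pass
```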
# models

The `models` submodule contains classes to implement Keras-style Models, which combine several NetworkLayers to simulate a full optical neural network. Currently, only sequential models are supported, but more may be added in the future.

* class `neuroptica.models.BaseModel`
  Bases: `object`
  Base class for all models

* class `neuroptica.models.Model`
  Functional model class similar to the Keras Model class, simulating an optical neural network with multiple layers

* class `neuroptica.models.Sequential` (layers: List[neuroptica.layers.NetworkLayer])
  Feed-forward model class similar to the Keras Sequential() model class
  * `__init__` (layers) – Initialize the model
    :param layers: list of NetworkLayers contained in the optical neural network
  * `backward_pass` (d_loss, cache_fields=False, use_partial_vectors=False) → Dict[str, np.ndarray]
    Returns the gradients for each layer resulting from backpropagating the derivative of the loss function d_loss
    :param d_loss: derivative of the loss function of the outputs
    :param cache_fields: if true, fields will be cached internally
    :param use_partial_vectors: if true, use the partial vectors method to speed up transfer matrix computation
    :return: dictionary of {layer: gradients}
  * `forward_pass` (X, cache_fields=False, use_partial_vectors=False) → np.ndarray
    Propagate an input field through the entire network
    :param X: input electric fields
    :param cache_fields: if true, fields will be cached internally
    :param use_partial_vectors: if true, use the partial vectors method to speed up transfer matrix computation
    :return: output electric fields (to be fed into a loss function)
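Putting the layers together into a model, as a sketch: it assumes the layer and model classes are importable from the submodules shown here, uses the AbsSquared nonlinearity documented in the next section, and the sizes and batch layout (one sample per column) are illustrative.

```python
import numpy as np
from neuroptica.layers import Activation, ClementsLayer, DropMask
from neuroptica.models import Sequential
from neuroptica.nonlinearities import AbsSquared

N = 4          # waveguides in the mesh
N_classes = 2  # output ports kept after the final mask

model = Sequential([
    ClementsLayer(N),                          # programmable N x N unitary
    Activation(AbsSquared(N)),                 # photodetector-style |z|^2 nonlinearity
    DropMask(N, keep_ports=range(N_classes)),  # keep only the first two output ports
])

# Forward-propagate a small batch of input fields, one sample per column
X = np.random.randn(N, 2) + 0j
outputs = model.forward_pass(X)
```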
# nonlinearities

The `nonlinearities` submodule contains a collection of physical and aphysical activation functions. Nonlinearities can be incorporated into an optical neural network by using the Activation(nonlinearity) NetworkLayer.

`Abs` (N, mode='polar')
* Represents the transformation z -> |z|. This can be called in any of "full", "condensed", and "polar" modes.
* `__init__` (N, mode='polar')
* `dIm_dIm` (a, b), `dIm_dRe` (a, b), `dRe_dIm` (a, b), `dRe_dRe` (a, b)
* `df_dIm` (a, b), `df_dRe` (a, b)
* `df_dphi` (r, phi), `df_dr` (r, phi)

`AbsSquared` (N)
* Maps z -> |z|^2, corresponding to power measurement by a photodetector.
* `df_dphi` (r, phi), `df_dr` (r, phi)

`ComplexNonlinearity` (N, holomorphic=False, mode='condensed')
* Base class for a complex-valued nonlinearity
* `__init__` (N, holomorphic=False, mode='condensed')
* `backward_pass` (gamma, Z) → np.ndarray
* `dIm_dIm` (a, b), `dIm_dRe` (a, b), `dRe_dIm` (a, b), `dRe_dRe` (a, b) → np.ndarray
* `df_dIm` (a, b), `df_dRe` (a, b) → np.ndarray
* `df_dZ` (Z) → np.ndarray
* `df_dphi` (r, phi), `df_dr` (r, phi) → np.ndarray

`ElectroOpticActivation`
* Electro-optic activation function with intensity modulation (remod). This activation can be configured either in terms of its physical parameters, detailed below, or directly in terms of the feedforward phase gain g and the biasing phase phi_b. If the electro-optic parameters below are specified, g and phi_b are computed for the user.
  * alpha: amount of power tapped off to PD [unitless]
  * responsivity: PD responsivity [Watts/amp]
  * area: modal area [micron^2]
  * V_pi: modulator V_pi (voltage required for a pi phase shift) [Volts]
  * V_bias: modulator static bias [Volts]
  * R: transimpedance gain [Ohms]
  * impedance: characteristic impedance for computing optical power [Ohms]
* `df_dIm` (a, b), `df_dRe` (a, b) → np.ndarray

`LinearMask` (N: int, mask=None)
* Technically not a nonlinearity: applies a linear gain/loss to each element
* `__init__` (N, mask=None)
* `df_dZ` (Z)

`SPMActivation` (N, gain)
* Lossless SPM activation function
  * phase_gain [rad/(V^2/m^2)]: the amount of phase shift per unit input "power"
* `__init__` (N, gain)
* `df_dIm` (a, b), `df_dRe` (a, b) → np.ndarray

`Sigmoid` (N)
* Sigmoid activation; maps z -> 1 / (1 + np.exp(-z))
* `backward_pass` (gamma, Z)

`SoftMax` (N)
* Applies softmax to the inputs.
  Do not use this with categorical cross entropy, which implicitly includes the softmax.
* `backward_pass` (gamma, Z)

`bpReLU` (N, cutoff=1, alpha=0)
* Discontinuous (but holomorphic and backpropable) ReLU of the form f(x_i) = alpha * x_i if |x_i| < cutoff, f(x_i) = x_i if |x_i| >= cutoff
  * cutoff: value of input |x_i| above which to fully transmit, below which to attenuate
  * alpha: attenuation factor
* `__init__` (N, cutoff=1, alpha=0)
* `df_dZ` (Z)

`cReLU` (N)
* Continuous, but non-holomorphic and non-simply-backpropable ReLU of the form f(z) = ReLU(Re{z}) + 1j * ReLU(Im{z}); see https://arxiv.org/pdf/1705.09792.pdf
* `df_dIm` (a, b), `df_dRe` (a, b) → np.ndarray

`modReLU` (N, cutoff=1)
* Continuous, but non-holomorphic and non-simply-backpropable ReLU of the form f(z) = (|z| - cutoff) * z / |z| if |z| >= cutoff (else 0); see https://arxiv.org/pdf/1705.09792.pdf (note: the cutoff is subtracted in this definition)
  * cutoff: value of input |x_i| above which to transmit
* `__init__` (N, cutoff=1)
* `df_dphi` (r, phi), `df_dr` (r, phi)

`zReLU` (N)
* Continuous, but non-holomorphic and non-simply-backpropable ReLU of the form f(z) = z if Re{z} > 0 and Im{z} > 0, else 0; see https://arxiv.org/pdf/1705.09792.pdf
* `df_dIm` (a, b), `df_dRe` (a, b) → np.ndarray
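Since nonlinearities are used through the Activation layer, here is a brief sketch of wrapping a few of the classes above; the constructor arguments follow the signatures documented in this section, and the chosen cutoff is arbitrary.

```python
from neuroptica.layers import Activation
from neuroptica.nonlinearities import AbsSquared, cReLU, modReLU

N = 4
photodetector = Activation(AbsSquared(N))      # |z|^2 power measurement
complex_relu = Activation(cReLU(N))            # ReLU applied to Re and Im separately
mod_relu = Activation(modReLU(N, cutoff=0.5))  # |z|-thresholded ReLU
```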
# optimizers

The `optimizers` submodule contains a collection of optimizers for training neuroptica models to fit labeled data. All optimizers starting with "InSitu" use the on-chip interferometric gradient calculation routine described in Hughes, et al. (2018), "Training of photonic neural networks through in situ backpropagation and gradient measurement".

`InSituAdam`
* On-chip training with in-situ backpropagation using the adjoint field method and the Adam optimizer
* `fit` (data, labels, epochs=1000, batch_size=32, show_progress=True, cache_fields=False, use_partial_vectors=False)
  Fit the model to the labeled data
  :param data: features vector, shape: (n_features, n_samples)
  :param labels: labels vector, shape: (n_label_dim, n_samples)
  :param epochs:
  :param batch_size:
  :param show_progress:
  :param cache_fields: if set to True, will cache fields at the phase shifters on the forward and backward pass
  :param use_partial_vectors: if set to True, the MZI partial matrices will be stored as Nx2 vectors

`InSituGradientDescent`
* On-chip training with in-situ backpropagation using the adjoint field method and standard gradient descent
* `fit` (data, labels, epochs=1000, batch_size=32, show_progress=True)
  Fit the model to the labeled data
  :param data: features vector, shape: (n_features, n_samples)
  :param labels: labels vector, shape: (n_label_dim, n_samples)
  :param epochs:
  :param learning_rate:
  :param batch_size:
  :param show_progress:

`Optimizer` (model: neuroptica.models.Sequential, loss: Type[neuroptica.losses.Loss])
* Bases: `object`
  Base class for an optimizer
* `__init__` (model, loss)
* static `make_batches` (data, labels, batch_size: int, shuffle=True) → Tuple[np.ndarray, np.ndarray]
  Prepare batches of a given size from data and labels
  :param data: features vector, shape: (n_features, n_samples)
  :param labels: labels vector, shape: (n_label_dim, n_samples)
  :param batch_size: size of the batch
  :param shuffle: if true, batches will be randomized
  :return: yields a tuple (data_batch, label_batch)

# utils

The `utils` submodule contains a collection of miscellaneous utility functions:

* `generate_diagonal_planar_dataset`
* `generate_ring_planar_dataset`
* `generate_separable_planar_dataset` (N=100, noise_ratio=0.0, seed=None)
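An end-to-end training sketch tying the pieces above together. Treat it as an illustration rather than the library's canonical recipe: it assumes `InSituAdam` shares the `Optimizer(model, loss)` constructor of the base class, that `generate_separable_planar_dataset` returns a point array of shape (n_samples, 2) together with one-hot labels, and that inputs are rearranged into the (n_features, n_samples) layout expected by `fit`.

```python
import numpy as np
from neuroptica.layers import Activation, ClementsLayer, DropMask
from neuroptica.losses import CategoricalCrossEntropy
from neuroptica.models import Sequential
from neuroptica.nonlinearities import AbsSquared
from neuroptica.optimizers import InSituAdam
from neuroptica.utils import generate_separable_planar_dataset

# Toy 2D dataset with two linearly separable classes (return format assumed)
X, Y = generate_separable_planar_dataset(N=200, noise_ratio=0.1, seed=0)

N = 4  # pad the 2D inputs up to a 4-waveguide mesh
data = np.pad(X, ((0, 0), (0, N - X.shape[1]))).T   # -> (n_features, n_samples)
labels = Y.T                                        # -> (n_label_dim, n_samples)

model = Sequential([
    ClementsLayer(N),
    Activation(AbsSquared(N)),
    DropMask(N, keep_ports=range(labels.shape[0])),
])

optimizer = InSituAdam(model, CategoricalCrossEntropy)
optimizer.fit(data, labels, epochs=200, batch_size=32, show_progress=True)
```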
meshx_node
hex
Erlang
MeshxNode
===
Service mesh distribution module.

[`MeshxNode`](MeshxNode.html#content) implements a custom carrier protocol for the Erlang [distribution as module](https://erlang.org/doc/apps/erts/alt_dist.html#distribution-module), designed to work with a service mesh adapter implementing the `Meshx.ServiceMesh` behavior.

Standard Erlang distribution depends on two components: the Erlang Port Mapper Daemon ([EPMD](https://erlang.org/doc/man/epmd.html)) and a distribution module, which by default is [:inet_tcp_dist](https://github.com/erlang/otp/blob/master/lib/kernel/src/inet_tcp_dist.erl). Traffic generated by both components is unencrypted. The distribution module :inet_tcp_dist can be replaced with [`:inet_tls_dist`](http://erlang.org/doc/apps/ssl/ssl_distribution.html), which offers TLS. There is no out-of-the-box solution to secure EPMD. [`MeshxNode`](MeshxNode.html#content) can be considered an alternative to EPMD and :inet_tcp_dist.

By using a service mesh, [`MeshxNode`](MeshxNode.html#content) provides the following benefits:
* automatic traffic encryption with service mesh data plane mTLS,
* "node services" registration and management with the external service mesh application control plane.

The [`MeshxNode`](MeshxNode.html#content) distribution module cannot be started during the Beam VM boot process; the `iex` command line arguments `--name` and `--sname` are therefore not supported.

Requirements
---
[`MeshxNode`](MeshxNode.html#content) depends on a service mesh adapter. `Meshx` is released with the `MeshxConsul` service mesh adapter; the required adapter configuration steps are described in the package [documentation](https://github.com/andrzej-mag/meshx_consul).

Installation
---
Add `:meshx_consul` and `:meshx_node` to the application dependencies:
```
# mix.exs
def deps do
  [
    {:meshx_consul, "~> 0.1.0"},
    {:meshx_node, "~> 0.1.0"}
  ]
end
```
Usage
---
Start a non-distributed node using [`MeshxNode`](MeshxNode.html#content) as the distribution module:
```
iex -pa _build/dev/lib/meshx_node/ebin --erl "-proto_dist Elixir.MeshxNode" -S mix
```
Used command arguments:
* `-pa` - adds the specified directory to the code path (the `dev` environment path is used here),
* `-proto_dist` - specifies the custom distribution module (the physical file name is: `Elixir.MeshxNode_dist.beam`).

Turn the node into a distributed node using [`Node.start/3`](https://hexdocs.pm/elixir/Node.html#start/3):
```
iex(1)> Node.start(:node1@myhost)
{:ok, #PID<0.248.0>}
iex(node1@myhost)2>
# [node1@myhost][stdout]: ==> Consul Connect proxy starting...
# [node1@myhost][stdout]: Configuration mode: Agent API
# [node1@myhost][stdout]: Sidecar for ID: node1@myhost
# [node1@myhost][stdout]: Proxy ID: node1@myhost-sidecar-proxy
# [node1@myhost][stdout]: ==> Log data will now stream in as it occurs:
iex(node1@myhost)3> Node.self()
:node1@myhost
```
The [`Node.start/3`](https://hexdocs.pm/elixir/Node.html#start/3) call starts a "node service" using `MeshxConsul.start/4`. Lines marked with `#` are the stdout output of the command running the sidecar proxy binary, by default the Consul Connect proxy.

Using a second terminal, start another node:
```
iex(1)> Node.start(:node2@myhost)
{:ok, #PID<0.258.0>}
```
Consul UI screenshot showing both "node services" registered:
![image](assets/services.png)

Use the Consul UI to allow/deny connections between nodes with [Consul intentions](https://www.consul.io/docs/connect/intentions):
![image](assets/intentions.png)

[`MeshxNode`](MeshxNode.html#content) is fully compatible with the standard Erlang distribution. After connecting both nodes with [`Node.connect/1`](https://hexdocs.pm/elixir/Node.html#connect/1), one can, for example, spawn a new process on the remote node using [`Node.spawn/4`](https://hexdocs.pm/elixir/Node.html#spawn/4), as sketched below. When connecting to another node, the `MeshxConsul.connect/3` function is executed.
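A short illustration of that flow from the second node's shell; the prompt numbers, PID and output shown here are illustrative:
```
iex(node2@myhost)2> Node.connect(:node1@myhost)
true
iex(node2@myhost)3> Node.list()
[:node1@myhost]
iex(node2@myhost)4> Node.spawn(:node1@myhost, IO, :puts, ["hello from node1"])
hello from node1
#PID<12345.123.0>
```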
Other `erl` command line options that might be helpful:
* `-start_epmd false` - do not start EPMD,
* `-kernel inet_dist_use_interface {127,0,0,1}` - limit the Erlang listen interface to loopback,
* `-connect_all false` - do not maintain a fully connected network of distributed Erlang nodes,
* `-no_epmd` - specifies that the distributed node does not need EPMD at all (OTP 24+).

Configuration options
---
* `:mesh_adapter` - Required. Specifies the service mesh adapter module. Example: `mesh_adapter: MeshxConsul`.
* `:service_params` - 2-arity function executed when the distribution is started. The first function argument is the node "short name", the second argument is the host name. For example: `:mynode@myhost` translates to `(:mynode, 'myhost')`. The function should return the first argument `params` passed to `c:Meshx.ServiceMesh.start/4`, for example: `"mynode@myhost"`. Default: `&MeshxNode.Default.service_params/2`.
* `:service_reg` - service registration template passed as the second argument to `c:Meshx.ServiceMesh.start/4`. Default: `[]`.
* `:upstream_params` - 1-arity function executed when a connection between nodes is set up. The function argument is the remote node name: running `Node.connect(:node1@myhost)` will invoke the function with `(:node1@myhost)`. The function should return the first argument `params` passed to `c:Meshx.ServiceMesh.connect/3`, for example: `"node1@myhost"`. Default: `&MeshxNode.Default.upstream_params/1`.
* `:upstream_reg` - upstream registration template passed as the second argument to `c:Meshx.ServiceMesh.connect/3`. Default: `nil`.
* `:upstream_proxy` - 2-arity function executed when a connection between nodes is set up. The function arguments are `(:remote_node_name, :local_node_name)`. The function should return the third argument `proxy` passed to `c:Meshx.ServiceMesh.connect/3`, for example: `{"node1@myhost", "node1@myhost"}`. Default: `&MeshxNode.Default.upstream_proxy/2`.
* `:force_registration?` - boolean passed as the third argument to `c:Meshx.ServiceMesh.start/4`. Default: `false`.
* `:timeout` - timeout value passed as the fourth argument to `c:Meshx.ServiceMesh.start/4`. Default: `5000`.
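A minimal configuration sketch. It assumes these options live under the `:meshx_node` application environment; only `:mesh_adapter` is required, the remaining keys fall back to the defaults listed above.
```
# config/config.exs
import Config

config :meshx_node,
  mesh_adapter: MeshxConsul,
  timeout: 5000
```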
Credits
---
The [`MeshxNode`](MeshxNode.html#content) distribution module is based on [example code](https://github.com/erlang/otp/blob/master/lib/kernel/examples/erl_uds_dist/src/erl_uds_dist.erl) by <NAME>.

Summary
===
Functions
---
[spawn_start(name, cookie, type \\ :longnames, tick_time \\ 15000)](#spawn_start/4)
Asynchronous version of [`start/4`](#start/4).

[start(node, cookie, type \\ :longnames, tick_time \\ 15000)](#start/4)
Turns the node into a distributed node and sets the node magic cookie.

Functions
===

MeshxNode.Default
===
Defaults for "node service" and upstream node connection registration parameters with the service mesh adapter.

Summary
===
Functions
---
[service_params(name, host)](#service_params/2)
Returns the service `params` required by `c:Meshx.ServiceMesh.start/4` as its first argument.

[upstream_params(node)](#upstream_params/1)
Returns the upstream `params` required by `c:Meshx.ServiceMesh.connect/3` as its first argument.

[upstream_proxy(node, my_node)](#upstream_proxy/2)
Returns the sidecar proxy service name used to register upstream connections to other nodes ("node services").

Functions
===

API Reference
===
Modules
---
[MeshxNode](MeshxNode.html)
Service mesh distribution module.

[MeshxNode.Default](MeshxNode.Default.html)
Defaults for "node service" and upstream node connection registration parameters with the service mesh adapter.
RMOA
cran
R
Package ‘RMOA’ October 12, 2022 Version 1.1.0 Title Connect R with MOA for Massive Online Analysis Description Connect R with MOA (Massive Online Analysis - <https://moa.cms.waikato.ac.nz/>) to build classification models and regression models on streaming data or out-of-RAM data. Also streaming recommendation models are made available. Depends RMOAjars (>= 1.0), rJava (>= 0.6-3), methods Suggests ff, recommenderlab SystemRequirements Java (>= 5.0) License GPL-3 Copyright Code is Copyright (C) <NAME> and BNOSAC Maintainer <NAME> <<EMAIL>> URL http://www.bnosac.be, https://github.com/jwijffels/RMOA, https://moa.cms.waikato.ac.nz/ RoxygenNote 7.1.1 NeedsCompilation no Author <NAME> [aut, cre], BNOSAC [cph] Repository CRAN Date/Publication 2022-07-17 21:00:02 UTC
R topics documented:
datastream
datastream_dataframe
datastream_ffdf
datastream_file
datastream_matrix
factorise
MOAattributes
MOAoptions
MOA_classification_activelearning
MOA_classification_bayes
MOA_classification_ensemblelearning
MOA_classification_trees
MOA_classifier
MOA_recommendation_engines
MOA_recommender
MOA_regressor
MOA_regressors
predict.MOA_trainedmodel
summary.MOA_classifier
summary.MOA_recommender
summary.MOA_regressor
trainMOA
trainMOA.MOA_classifier
trainMOA.MOA_recommender
trainMOA.MOA_regressor
datastream Datastream objects and methods Description Reference object of class datastream. This is a generic class which holds general information about the data stream. Currently streams are implemented for data in table format (streams of read.table, read.csv, read.csv2, read.delim, read.delim2), data in RAM (data.frame, matrix), data in ff (on disk). See the documentation of datastream_file, datastream_dataframe, datastream_matrix, and datastream_ffdf Arguments description The name how the stream is labelled args a list with arguments used to set up the stream and used in the datastream methods Value A class of type datastream which contains description: character with the name how the stream is labelled. state: integer with the current state at which the stream will read new instances of data processed: integer with the number of instances already processed finished: logical indicating if the stream has finished processing all the instances args: list with arguments passed on to the stream when it is created (e.g. arguments of read.table) See Also datastream_file Examples ## Basic example, showing the general methods available for a datastream object x <- datastream(description = "My own datastream", args = list(a = "TEST")) x str(x) try(x$get_points(x)) datastream_dataframe data streams on a data.frame Description Reference object of class datastream_dataframe. This is a class which inherits from class datastream and which can be used to read in a stream from a data.frame. Arguments data a data.frame to extract data from in a streaming way Value A class of type datastream_dataframe which contains data: The data.frame to extract instances from all fields of the datastream superclass: See datastream Methods • get_points(n) Get data from a datastream object. n integer, indicating the number of instances to retrieve from the datastream See Also datastream Examples x <- datastream_dataframe(data=iris) x$get_points(10) x x$get_points(10) x datastream_ffdf data streams on an ffdf Description Reference object of class datastream_ffdf. 
This is a class which inherits from class datastream and which can be used to read in a stream from an ffdf from the ff package. Arguments data a data.frame to extract data from in a streaming way Value A class of type datastream_ffdf which contains data: The ffdf to extract instances from all fields of the datastream superclass: See datastream Methods • get_points(n) Get data from a datastream object. n integer, indicating the number of instances to retrieve from the datastream See Also datastream Examples ## You need to load package ff before you can use datastream_ffdf require(ff) irisff <- as.ffdf(factorise(iris)) x <- datastream_ffdf(data=irisff) x$get_points(10) x x$get_points(10) x datastream_file File data stream Description Reference object of class datastream_file. This is a class which inherits from class datastream and which can be used to read in a stream from a file. A number of file readers have been implemented, namely datastream_table, datastream_csv, datastream_csv2, datastream_delim, datastream_delim2. See the examples. Arguments description The name how the stream is labelled FUN The function to use to read in the file. Defaults to read.table for datastream_table, read.csv for datastream_csv, read.csv2 for datastream_csv2, read.delim for datastream_delim, read.delim2 for datastream_delim2 columnnames optional character vector of columns to overwrite the column names of the data read in with in get_points file The file to read in. See e.g. read.table ... parameters passed on to FUN. See e.g. read.table Value A class of type datastream_file which contains FUN: The function to use to read in the file connection: A connection to the file columnnames: A character vector of column names to overwrite the column names with in get_points all fields of the datastream superclass: See datastream Methods • get_points(n) Get data from a datastream object. n integer, indicating the number of instances to retrieve from the datastream See Also read.table, read.csv, read.csv2, read.delim, read.delim2 Examples mydata <- iris mydata$Species[2:3] <- NA ## Example of a CSV file stream myfile <- tempfile() write.csv(iris, file = myfile, row.names=FALSE, na = "") x <- datastream_csv(file = myfile, na.strings = "") x x$get_points(n=10) x x$get_points(n=10) x x$stop() ## Create your own specific file stream write.table(iris, file = myfile, row.names=FALSE, na = "") x <- datastream_file(description="My file definition stream", FUN=read.table, file = myfile, header=TRUE, na.strings="") x$get_points(n=10) x x$stop() ## Clean up for CRAN file.remove(myfile) datastream_matrix data streams on a matrix Description Reference object of class datastream_matrix. This is a class which inherits from class datastream and which can be used to read in a stream from a matrix. Arguments data a matrix to extract data from in a streaming way Value A class of type datastream_matrix which contains data: The matrix to extract instances from all fields of the datastream superclass: See datastream Methods • get_points(n) Get data from a datastream object. n integer, indicating the number of instances to retrieve from the datastream See Also datastream Examples data <- matrix(rnorm(1000*10), nrow = 1000, ncol = 10) x <- datastream_matrix(data=data) x$get_points(10) x x$get_points(10) x factorise Convert character strings to factors in a dataset Description Convert character strings to factors in a dataset Usage factorise(x, ...) Arguments x object of class data.frame ... 
other parameters currently not used yet Value a data.frame with the information in x where character columns are converted to factors Examples data(iris) str(iris) mydata <- factorise(iris) str(mydata) MOAattributes Define the attributes of a dataset (factor levels, numeric or string data) in a MOA setting Description Define the attributes of a dataset (factor levels, numeric or string data) in a MOA setting Usage MOAattributes(data, ...) Arguments data object of class data.frame ... other parameters currently not used yet Value An object of class MOAmodelAttributes Examples data(iris) mydata <- factorise(iris) atts <- MOAattributes(data=mydata) atts MOAoptions Get and set options for models built with MOA. Description Get and set options for models built with MOA. Usage MOAoptions(model, ...) Arguments model character string with a model or an object of class MOA_model. E.g. HoeffdingTree, DecisionStump, NaiveBayes, HoeffdingOptionTree, ... The list of known models can be obtained by typing RMOA:::.moaknownmodels. See the examples. ... other parameters specifying the MOA modelling options of each model. See the examples. Value An object of class MOAmodelOptions. This is a list with elements: 1. model: The name of the model 2. moamodelname: The purpose of the model known by MOA (getPurposeString) 3. javaObj: a java reference of MOA options 4. options: a list with options of the MOA model. Each list element contains the Name of the option, the Purpose of the option and the current Value See the examples. Examples control <- MOAoptions(model = "HoeffdingTree") control MOAoptions(model = "HoeffdingTree", leafprediction = "MC", removePoorAtts = TRUE, binarySplits = TRUE, tieThreshold = 0.20) ## Other models known by RMOA RMOA:::.moaknownmodels ## Classification Trees MOAoptions(model = "AdaHoeffdingOptionTree") MOAoptions(model = "ASHoeffdingTree") MOAoptions(model = "DecisionStump") MOAoptions(model = "HoeffdingAdaptiveTree") MOAoptions(model = "HoeffdingOptionTree") MOAoptions(model = "HoeffdingTree") MOAoptions(model = "LimAttHoeffdingTree") MOAoptions(model = "RandomHoeffdingTree") ## Classification using Bayes rule MOAoptions(model = "NaiveBayes") MOAoptions(model = "NaiveBayesMultinomial") ## Classification using Active learning MOAoptions(model = "ActiveClassifier") ## Classification using Ensemble learning MOAoptions(model = "AccuracyUpdatedEnsemble") MOAoptions(model = "AccuracyWeightedEnsemble") MOAoptions(model = "ADACC") MOAoptions(model = "DACC") MOAoptions(model = "LeveragingBag") MOAoptions(model = "OCBoost") MOAoptions(model = "OnlineAccuracyUpdatedEnsemble") MOAoptions(model = "OzaBag") MOAoptions(model = "OzaBagAdwin") MOAoptions(model = "OzaBagASHT") MOAoptions(model = "OzaBoost") MOAoptions(model = "OzaBoostAdwin") MOAoptions(model = "TemporallyAugmentedClassifier") MOAoptions(model = "WeightedMajorityAlgorithm") ## Regressions MOAoptions(model = "AMRulesRegressor") MOAoptions(model = "FadingTargetMean") MOAoptions(model = "FIMTDD") MOAoptions(model = "ORTO") MOAoptions(model = "Perceptron") MOAoptions(model = "SGD") MOAoptions(model = "TargetMean") ## Recommendation engines MOAoptions(model = "BRISMFPredictor") MOAoptions(model = "BaselinePredictor") MOA_classification_activelearning MOA active learning classification Description MOA active learning classification Usage ActiveClassifier(control = NULL, ...) Arguments control an object of class MOAmodelOptions as obtained by calling MOAoptions ... 
options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_classifier which sets up an untrained MOA model, which can be trained using trainMOA See Also MOAoptions, trainMOA Examples ctrl <- MOAoptions(model = "ActiveClassifier") mymodel <- ActiveClassifier(control=ctrl) mymodel MOA_classification_bayes MOA bayesian classification Description MOA bayesian classification Usage NaiveBayes(control = NULL, ...) NaiveBayesMultinomial(control = NULL, ...) Arguments control an object of class MOAmodelOptions as obtained by calling MOAoptions ... options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_classifier which sets up an untrained MOA model, which can be trained using trainMOA See Also MOAoptions, trainMOA Examples ctrl <- MOAoptions(model = "NaiveBayes") mymodel <- NaiveBayes(control=ctrl) mymodel MOA_classification_ensemblelearning MOA classification using ensembles Description MOA classification using ensembles (bagging/boosting/stacking/other) Usage AccuracyUpdatedEnsemble(control = NULL, ...) AccuracyWeightedEnsemble(control = NULL, ...) ADACC(control = NULL, ...) DACC(control = NULL, ...) LeveragingBag(control = NULL, ...) LimAttClassifier(control = NULL, ...) OCBoost(control = NULL, ...) OnlineAccuracyUpdatedEnsemble(control = NULL, ...) OzaBag(control = NULL, ...) OzaBagAdwin(control = NULL, ...) OzaBagASHT(control = NULL, ...) OzaBoost(control = NULL, ...) OzaBoostAdwin(control = NULL, ...) TemporallyAugmentedClassifier(control = NULL, ...) WeightedMajorityAlgorithm(control = NULL, ...) Arguments control an object of class MOAmodelOptions as obtained by calling MOAoptions ... options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_classifier which sets up an untrained MOA model, which can be trained using trainMOA See Also MOAoptions, trainMOA Examples ctrl <- MOAoptions(model = "OzaBoostAdwin") mymodel <- OzaBoostAdwin(control=ctrl) mymodel MOA_classification_trees MOA classification trees Description MOA classification trees Usage AdaHoeffdingOptionTree(control = NULL, ...) ASHoeffdingTree(control = NULL, ...) DecisionStump(control = NULL, ...) HoeffdingAdaptiveTree(control = NULL, ...) HoeffdingOptionTree(control = NULL, ...) HoeffdingTree(control = NULL, ...) LimAttHoeffdingTree(control = NULL, ...) RandomHoeffdingTree(control = NULL, ...) Arguments control an object of class MOAmodelOptions as obtained by calling MOAoptions ... options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_classifier which sets up an untrained MOA model, which can be trained using trainMOA See Also MOAoptions, trainMOA Examples ctrl <- MOAoptions(model = "HoeffdingTree", leafprediction = "MC", removePoorAtts = TRUE, binarySplits = TRUE, tieThreshold = 0.20) hdt <- HoeffdingTree(control=ctrl) hdt hdt <- HoeffdingTree(numericEstimator = "GaussianNumericAttributeClassObserver") hdt MOA_classifier Create a MOA classifier Description Create a MOA classifier Usage MOA_classifier(model, control = NULL, ...) Arguments model character string with a model. E.g. HoeffdingTree, DecisionStump, NaiveBayes, HoeffdingOptionTree, ... The list of known models can be obtained by typing RMOA:::.moaknownmodels. See the examples and MOAoptions. 
control an object of class MOAmodelOptions as obtained by calling MOAoptions ... options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_classifier See Also MOAoptions Examples RMOA:::.moaknownmodels ctrl <- MOAoptions(model = "HoeffdingTree", leafprediction = "MC", removePoorAtts = TRUE, binarySplits = TRUE, tieThreshold = 0.20) hdt <- MOA_classifier(model = "HoeffdingTree", control=ctrl) hdt hdt <- MOA_classifier( model = "HoeffdingTree", numericEstimator = "GaussianNumericAttributeClassObserver") hdt MOA_recommendation_engines MOA recommendation engines Description MOA recommendation engines Usage BRISMFPredictor(control = NULL, ...) BaselinePredictor(control = NULL, ...) Arguments control an object of class MOAmodelOptions as obtained by calling MOAoptions ... options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_recommender which sets up an untrained MOA model, which can be trained using trainMOA See Also MOAoptions, trainMOA Examples ctrl <- MOAoptions(model = "BRISMFPredictor", features = 10) brism <- BRISMFPredictor(control=ctrl) brism baseline <- BaselinePredictor() baseline MOA_recommender Create a MOA recommendation engine Description Create a MOA recommendation engine Usage MOA_recommender(model, control = NULL, ...) Arguments model character string with a model. E.g. BRISMFPredictor, BaselinePredictor The list of known models can be obtained by typing RMOA:::.moaknownmodels. See the examples and MOAoptions. control an object of class MOAmodelOptions as obtained by calling MOAoptions ... options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_recommender See Also MOAoptions Examples RMOA:::.moaknownmodels ctrl <- MOAoptions(model = "BRISMFPredictor", features = 10, lRate=0.002) brism <- MOA_recommender(model = "BRISMFPredictor", control=ctrl) brism MOAoptions(model = "BaselinePredictor") baseline <- MOA_recommender(model = "BaselinePredictor") baseline MOA_regressor Create a MOA regressor Description Create a MOA regressor Usage MOA_regressor(model, control = NULL, ...) Arguments model character string with a model. E.g. AMRulesRegressor, FadingTargetMean, FIMTDD, ORTO, Perceptron, RandomRules, SGD, TargetMean, ... The list of known models can be obtained by typing RMOA:::.moaknownmodels. See the examples and MOAoptions. control an object of class MOAmodelOptions as obtained by calling MOAoptions ... options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_regressor See Also MOAoptions Examples mymodel <- MOA_regressor(model = "FIMTDD") mymodel data(iris) iris <- factorise(iris) irisdatastream <- datastream_dataframe(data=iris) ## Train the model mytrainedmodel <- trainMOA(model = mymodel, Sepal.Length ~ Petal.Length + Species, data = irisdatastream) mytrainedmodel$model summary(lm(Sepal.Length ~ Petal.Length + Species, data = iris)) predict(mytrainedmodel, newdata=iris) MOA_regressors MOA regressors Description MOA regressors Usage TargetMean(control = NULL, ...) FadingTargetMean(control = NULL, ...) Perceptron(control = NULL, ...) AMRulesRegressor(control = NULL, ...) FIMTDD(control = NULL, ...) ORTO(control = NULL, ...) Arguments control an object of class MOAmodelOptions as obtained by calling MOAoptions ... 
options of parameters passed on to MOAoptions, in case control is left to NULL. Ignored if control is supplied Value An object of class MOA_classifier which sets up an untrained MOA model, which can be trained using trainMOA See Also MOAoptions, trainMOA Examples ctrl <- MOAoptions(model = "FIMTDD", DoNotDetectChanges = TRUE, noAnomalyDetection=FALSE, univariateAnomalyprobabilityThreshold = 0.5, verbosity = 5) mymodel <- FIMTDD(control=ctrl) mymodel mymodel <- FIMTDD(DoNotDetectChanges = FALSE) mymodel predict.MOA_trainedmodel Predict using a MOA classifier, MOA regressor or MOA recommender on a new dataset Description Predict using a MOA classifier, MOA regressor or MOA recommender on a new dataset. Make sure the new dataset has the same structure and the same levels as get_points returns on the datastream which was used in trainMOA Usage ## S3 method for class 'MOA_trainedmodel' predict(object, newdata, type = "response", transFUN = object$transFUN, na.action = na.fail, ...) Arguments object an object of class MOA_trainedmodel, as returned by trainMOA newdata a data.frame with the same structure and the same levels as used in trainMOA for MOA classifier, MOA regressor, a data.frame with at least the user/item columns which were used in trainMOA when training the MOA recommendation engine type a character string, either 'response' or 'votes' transFUN a function which is used on newdata before applying model.frame. Useful if you want to change the results get_points on the datastream (e.g. for making sure the factor levels are the same in each chunk of processing, some data cleaning, ...). Defaults to transFUN available in object. na.action passed on to model.frame when constructing the model.matrix from newdata. Defaults to na.fail. ... other arguments, currently not used yet Value A matrix of votes or a vector with the predicted class for MOA classifier or MOA regressor. 
A data.frame with the predictions in case of a MOA recommender. See Also trainMOA Examples ## Hoeffdingtree hdt <- HoeffdingTree(numericEstimator = "GaussianNumericAttributeClassObserver") data(iris) ## Make a training set iris <- factorise(iris) traintest <- list() traintest$trainidx <- sample(nrow(iris), size=nrow(iris)/2) traintest$trainingset <- iris[traintest$trainidx, ] traintest$testset <- iris[-traintest$trainidx, ] irisdatastream <- datastream_dataframe(data=traintest$trainingset) ## Train the model hdtreetrained <- trainMOA(model = hdt, Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data = irisdatastream) ## Score the model on the holdout set scores <- predict(hdtreetrained, newdata=traintest$testset[, c("Sepal.Length","Sepal.Width","Petal.Length","Petal.Width")], type="response") str(scores) table(scores, traintest$testset$Species) scores <- predict(hdtreetrained, newdata=traintest$testset, type="votes") head(scores) ## Prediction based on recommendation engine require(recommenderlab) data(MovieLense) x <- getData.frame(MovieLense) x$itemid <- as.integer(as.factor(x$item)) x$userid <- as.integer(as.factor(x$user)) x$rating <- as.numeric(x$rating) x <- head(x, 2000) movielensestream <- datastream_dataframe(data=x) movielensestream$get_points(3) ctrl <- MOAoptions(model = "BRISMFPredictor", features = 10) brism <- BRISMFPredictor(control=ctrl) mymodel <- trainMOA(model = brism, rating ~ userid + itemid, data = movielensestream, chunksize = 1000, trace=TRUE) overview <- summary(mymodel$model) str(overview) predict(mymodel, head(x, 10), type = "response") x <- expand.grid(userid=overview$users[1:10], itemid=overview$items) predict(mymodel, x, type = "response") summary.MOA_classifier Summary statistics of a MOA classifier Description Summary statistics of a MOA classifier Usage ## S3 method for class 'MOA_classifier' summary(object, ...) Arguments object an object of class MOA_classifier ... other arguments, currently not used yet Value the form of the return value depends on the type of MOA model Examples hdt <- HoeffdingTree(numericEstimator = "GaussianNumericAttributeClassObserver") hdt data(iris) iris <- factorise(iris) irisdatastream <- datastream_dataframe(data=iris) ## Train the model hdtreetrained <- trainMOA(model = hdt, Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data = irisdatastream) summary(hdtreetrained$model) summary.MOA_recommender Summary statistics of a MOA recommender Description Summary statistics of a MOA recommender Usage ## S3 method for class 'MOA_recommender' summary(object, ...) Arguments object an object of class MOA_recommender ... other arguments, currently not used yet Value the form of the return value depends on the type of MOA model Examples require(recommenderlab) data(MovieLense) x <- getData.frame(MovieLense) x$itemid <- as.integer(as.factor(x$item)) x$userid <- as.integer(as.factor(x$user)) x$rating <- as.numeric(x$rating) x <- head(x, 2000) movielensestream <- datastream_dataframe(data=x) movielensestream$get_points(3) ctrl <- MOAoptions(model = "BRISMFPredictor", features = 10) brism <- BRISMFPredictor(control=ctrl) mymodel <- trainMOA(model = brism, rating ~ userid + itemid, data = movielensestream, chunksize = 1000, trace=TRUE) overview <- summary(mymodel$model) str(overview) predict(mymodel, head(x, 10), type = "response") summary.MOA_regressor Summary statistics of a MOA regressor Description Summary statistics of a MOA regressor Usage ## S3 method for class 'MOA_regressor' summary(object, ...) 
Arguments object an object of class MOA_regressor ... other arguments, currently not used yet Value the form of the return value depends on the type of MOA model Examples ## TODO trainMOA Train a MOA classifier/regressor/recommendation engine on a datastream Description Train a MOA classifier/regressor/recommendation engine on a datastream Usage trainMOA(model, ...) Arguments model an object of class MOA_model, as returned by MOA_classifier, MOA_regressor, MOA_recommender ... other parameters passed on to the methods Value An object of class MOA_trainedmodel which is returned by the methods for the specific model. See trainMOA.MOA_classifier, trainMOA.MOA_regressor, trainMOA.MOA_recommender See Also trainMOA.MOA_classifier, trainMOA.MOA_regressor, trainMOA.MOA_recommender trainMOA.MOA_classifier Train a MOA classifier (e.g. a HoeffdingTree) on a datastream Description Train a MOA classifier (e.g. a HoeffdingTree) on a datastream Usage ## S3 method for class 'MOA_classifier' trainMOA(model, formula, data, subset, na.action = na.exclude, transFUN = identity, chunksize = 1000, reset = TRUE, trace = FALSE, options = list(maxruntime = +Inf), ...) Arguments model an object of class MOA_model, as returned by MOA_classifier, e.g. a HoeffdingTree formula a symbolic description of the model to be fit. data an object of class datastream set up e.g. with datastream_file, datastream_dataframe, datastream_matrix, datastream_ffdf or your own datastream. subset an optional vector specifying a subset of observations to be used in the fitting process. na.action a function which indicates what should happen when the data contain NAs. See model.frame for details. Defaults to na.exclude. transFUN a function which is used after obtaining chunksize number of rows from the data datastream before applying model.frame. Useful if you want to change the results get_points on the datastream (e.g. for making sure the factor levels are the same in each chunk of processing, some data cleaning, ...). Defaults to identity. chunksize the number of rows to obtain from the data datastream in one chunk of model processing. Defaults to 1000. Can be used to speed up things according to the backbone architecture of the datastream. reset logical indicating to reset the MOA_classifier so that it forgets what it already has learned. Defaults to TRUE. trace logical, indicating to show information on how many datastream chunks are already processed as a message. options a named list of further options. Currently not used. ... 
other arguments, currently not used yet Value An object of class MOA_trainedmodel which is a list with elements • model: the updated supplied model object of class MOA_classifier • call: the matched call • na.action: the value of na.action • terms: the terms in the model • transFUN: the transFUN argument See Also MOA_classifier, datastream_file, datastream_dataframe, datastream_matrix, datastream_ffdf, datastream, predict.MOA_trainedmodel Examples hdt <- HoeffdingTree(numericEstimator = "GaussianNumericAttributeClassObserver") hdt data(iris) iris <- factorise(iris) irisdatastream <- datastream_dataframe(data=iris) irisdatastream$get_points(3) mymodel <- trainMOA(model = hdt, Species ~ Sepal.Length + Sepal.Width + Petal.Length, data = irisdatastream, chunksize = 10) mymodel$model irisdatastream$reset() mymodel <- trainMOA(model = hdt, Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Length^2, data = irisdatastream, chunksize = 10, reset=TRUE, trace=TRUE) mymodel$model trainMOA.MOA_recommender Train a MOA recommender (e.g. a BRISMFPredictor) on a datastream Description Train a MOA recommender (e.g. a BRISMFPredictor) on a datastream Usage ## S3 method for class 'MOA_recommender' trainMOA(model, formula, data, subset, na.action = na.exclude, transFUN = identity, chunksize = 1000, trace = FALSE, options = list(maxruntime = +Inf), ...) Arguments model an object of class MOA_model, as returned by MOA_recommender, e.g. a BRISMFPredictor formula a symbolic description of the model to be fit. This should be of the form rating ~ userid + itemid, in that sequence. These should be columns in the data, where userid and itemid are integers and rating is numeric. data an object of class datastream set up e.g. with datastream_file, datastream_dataframe, datastream_matrix, datastream_ffdf or your own datastream. subset an optional vector specifying a subset of observations to be used in the fitting process. na.action a function which indicates what should happen when the data contain NAs. See model.frame for details. Defaults to na.exclude. transFUN a function which is used after obtaining chunksize number of rows from the data datastream before applying model.frame. Useful if you want to change the results get_points on the datastream (e.g. for making sure the factor levels are the same in each chunk of processing, some data cleaning, ...). Defaults to identity. chunksize the number of rows to obtain from the data datastream in one chunk of model processing. Defaults to 1000. Can be used to speed up things according to the backbone architecture of the datastream. trace logical, indicating to show information on how many datastream chunks are already processed as a message. options a named list of further options. Currently not used. ... 
other arguments, currently not used yet Value An object of class MOA_trainedmodel which is a list with elements • model: the updated supplied model object of class MOA_recommender • call: the matched call • na.action: the value of na.action • terms: the terms in the model • transFUN: the transFUN argument See Also MOA_recommender, datastream_file, datastream_dataframe, datastream_matrix, datastream_ffdf, datastream, predict.MOA_trainedmodel Examples require(recommenderlab) data(MovieLense) x <- getData.frame(MovieLense) x$itemid <- as.integer(as.factor(x$item)) x$userid <- as.integer(as.factor(x$user)) x$rating <- as.numeric(x$rating) x <- head(x, 5000) movielensestream <- datastream_dataframe(data=x) movielensestream$get_points(3) ctrl <- MOAoptions(model = "BRISMFPredictor", features = 10) brism <- BRISMFPredictor(control=ctrl) mymodel <- trainMOA(model = brism, rating ~ userid + itemid, data = movielensestream, chunksize = 1000, trace=TRUE) summary(mymodel$model) trainMOA.MOA_regressor Train a MOA regressor (e.g. a FIMTDD) on a datastream Description Train a MOA regressor (e.g. a FIMTDD) on a datastream Usage ## S3 method for class 'MOA_regressor' trainMOA(model, formula, data, subset, na.action = na.exclude, transFUN = identity, chunksize = 1000, reset = TRUE, trace = FALSE, options = list(maxruntime = +Inf), ...) Arguments model an object of class MOA_model, as returned by MOA_regressor, e.g. a FIMTDD formula a symbolic description of the model to be fit. data an object of class datastream set up e.g. with datastream_file, datastream_dataframe, datastream_matrix, datastream_ffdf or your own datastream. subset an optional vector specifying a subset of observations to be used in the fitting process. na.action a function which indicates what should happen when the data contain NAs. See model.frame for details. Defaults to na.exclude. transFUN a function which is used after obtaining chunksize number of rows from the data datastream before applying model.frame. Useful if you want to change the results get_points on the datastream (e.g. for making sure the factor levels are the same in each chunk of processing, some data cleaning, ...). Defaults to identity. chunksize the number of rows to obtain from the data datastream in one chunk of model processing. Defaults to 1000. Can be used to speed up things according to the backbone architecture of the datastream. reset logical indicating to reset the MOA_regressor so that it forgets what it already has learned. Defaults to TRUE. trace logical, indicating to show information on how many datastream chunks are already processed as a message. options a names list of further options. Currently not used. ... 
other arguments, currently not used yet Value An object of class MOA_trainedmodel which is a list with elements • model: the updated supplied model object of class MOA_regressor • call: the matched call • na.action: the value of na.action • terms: the terms in the model • transFUN: the transFUN argument See Also MOA_regressor, datastream_file, datastream_dataframe, datastream_matrix, datastream_ffdf, datastream, predict.MOA_trainedmodel Examples mymodel <- MOA_regressor(model = "FIMTDD") mymodel data(iris) iris <- factorise(iris) irisdatastream <- datastream_dataframe(data=iris) irisdatastream$get_points(3) ## Train the model mytrainedmodel <- trainMOA(model = mymodel, Sepal.Length ~ Petal.Length + Species, data = irisdatastream) mytrainedmodel$model irisdatastream$reset() mytrainedmodel <- trainMOA(model = mytrainedmodel$model, Sepal.Length ~ Petal.Length + Species, data = irisdatastream, chunksize = 10, reset=FALSE, trace=TRUE) mytrainedmodel$model
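The trained model can then be used to score new observations with predict.MOA_trainedmodel (see the See Also entries above). A minimal sketch, continuing the iris example and assuming the usual newdata/type arguments of that method:
## Score the streamed regressor on a data.frame of new observations
scores <- predict(mytrainedmodel, newdata = iris, type = "response")
str(scores)
## Rough check of fit: residuals of the streamed regression
summary(iris$Sepal.Length - scores)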
github.com/DataDog/datadog-go/statsd
go
Go
README [¶](#section-readme) --- ### Overview Package `statsd` provides a Go [dogstatsd](http://docs.datadoghq.com/guides/dogstatsd/) client. Dogstatsd extends Statsd, adding tags and histograms. Documentation [¶](#section-documentation) --- [Rendered for](https://go.dev/about#build-context) linux/amd64 windows/amd64 darwin/amd64 js/wasm ### Overview [¶](#pkg-overview) Package statsd provides a Go dogstatsd client. Dogstatsd extends the popular statsd, adding tags and histograms and pushing upstream to Datadog. Refer to <http://docs.datadoghq.com/guides/dogstatsd/> for information about DogStatsD. statsd is based on go-statsd-client. ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [Variables](#pkg-variables) * [type Client](#Client) * + [func CloneWithExtraOptions(c *Client, options ...Option) (*Client, error)](#CloneWithExtraOptions) + [func New(addr string, options ...Option) (*Client, error)](#New) + [func NewBuffered(addr string, buflen int) (*Client, error)](#NewBuffered) + [func NewWithWriter(w statsdWriter, options ...Option) (*Client, error)](#NewWithWriter) * + [func (c *Client) Close() error](#Client.Close) + [func (c *Client) Count(name string, value int64, tags []string, rate float64) error](#Client.Count) + [func (c *Client) Decr(name string, tags []string, rate float64) error](#Client.Decr) + [func (c *Client) Distribution(name string, value float64, tags []string, rate float64) error](#Client.Distribution) + [func (c *Client) Event(e *Event) error](#Client.Event) + [func (c *Client) Flush() error](#Client.Flush) + [func (c *Client) FlushTelemetryMetrics() ClientMetrics](#Client.FlushTelemetryMetrics) + [func (c *Client) Gauge(name string, value float64, tags []string, rate float64) error](#Client.Gauge) + [func (c *Client) Histogram(name string, value float64, tags []string, rate float64) error](#Client.Histogram) + [func (c *Client) Incr(name string, tags []string, rate float64) error](#Client.Incr) + [func (c *Client) ServiceCheck(sc *ServiceCheck) error](#Client.ServiceCheck) + [func (c *Client) Set(name string, value string, tags []string, rate float64) error](#Client.Set) + [func (c *Client) SetWriteTimeout(d time.Duration) error](#Client.SetWriteTimeout) + [func (c *Client) SimpleEvent(title, text string) error](#Client.SimpleEvent) + [func (c *Client) SimpleServiceCheck(name string, status ServiceCheckStatus) error](#Client.SimpleServiceCheck) + [func (c *Client) TimeInMilliseconds(name string, value float64, tags []string, rate float64) error](#Client.TimeInMilliseconds) + [func (c *Client) Timing(name string, value time.Duration, tags []string, rate float64) error](#Client.Timing) * [type ClientInterface](#ClientInterface) * [type ClientMetrics](#ClientMetrics) * [type Event](#Event) * + [func NewEvent(title, text string) *Event](#NewEvent) * + [func (e Event) Check() error](#Event.Check) + [func (e Event) Encode(tags ...string) (string, error)](#Event.Encode) * [type EventAlertType](#EventAlertType) * [type EventPriority](#EventPriority) * [type NoOpClient](#NoOpClient) * + [func (n *NoOpClient) Close() error](#NoOpClient.Close) + [func (n *NoOpClient) Count(name string, value int64, tags []string, rate float64) error](#NoOpClient.Count) + [func (n *NoOpClient) Decr(name string, tags []string, rate float64) error](#NoOpClient.Decr) + [func (n *NoOpClient) Distribution(name string, value float64, tags []string, rate float64) error](#NoOpClient.Distribution) + [func (n *NoOpClient) Event(e *Event) error](#NoOpClient.Event) + [func (n *NoOpClient) Flush() 
error](#NoOpClient.Flush) + [func (n *NoOpClient) Gauge(name string, value float64, tags []string, rate float64) error](#NoOpClient.Gauge) + [func (n *NoOpClient) Histogram(name string, value float64, tags []string, rate float64) error](#NoOpClient.Histogram) + [func (n *NoOpClient) Incr(name string, tags []string, rate float64) error](#NoOpClient.Incr) + [func (n *NoOpClient) ServiceCheck(sc *ServiceCheck) error](#NoOpClient.ServiceCheck) + [func (n *NoOpClient) Set(name string, value string, tags []string, rate float64) error](#NoOpClient.Set) + [func (n *NoOpClient) SetWriteTimeout(d time.Duration) error](#NoOpClient.SetWriteTimeout) + [func (n *NoOpClient) SimpleEvent(title, text string) error](#NoOpClient.SimpleEvent) + [func (n *NoOpClient) SimpleServiceCheck(name string, status ServiceCheckStatus) error](#NoOpClient.SimpleServiceCheck) + [func (n *NoOpClient) TimeInMilliseconds(name string, value float64, tags []string, rate float64) error](#NoOpClient.TimeInMilliseconds) + [func (n *NoOpClient) Timing(name string, value time.Duration, tags []string, rate float64) error](#NoOpClient.Timing) * [type Option](#Option) * + [func WithAggregationInterval(interval time.Duration) Option](#WithAggregationInterval) + [func WithBufferFlushInterval(bufferFlushInterval time.Duration) Option](#WithBufferFlushInterval) + [func WithBufferPoolSize(bufferPoolSize int) Option](#WithBufferPoolSize) + [func WithBufferShardCount(bufferShardCount int) Option](#WithBufferShardCount) + [func WithChannelMode() Option](#WithChannelMode) + [func WithChannelModeBufferSize(bufferSize int) Option](#WithChannelModeBufferSize) + [func WithClientSideAggregation() Option](#WithClientSideAggregation) + [func WithDevMode() Option](#WithDevMode) + [func WithExtendedClientSideAggregation() Option](#WithExtendedClientSideAggregation) + [func WithMaxBytesPerPayload(MaxBytesPerPayload int) Option](#WithMaxBytesPerPayload) + [func WithMaxMessagesPerPayload(maxMessagesPerPayload int) Option](#WithMaxMessagesPerPayload) + [func WithMutexMode() Option](#WithMutexMode) + [func WithNamespace(namespace string) Option](#WithNamespace) + [func WithSenderQueueSize(senderQueueSize int) Option](#WithSenderQueueSize) + [func WithTags(tags []string) Option](#WithTags) + [func WithTelemetryAddr(addr string) Option](#WithTelemetryAddr) + [func WithWriteTimeoutUDS(writeTimeoutUDS time.Duration) Option](#WithWriteTimeoutUDS) + [func WithoutClientSideAggregation() Option](#WithoutClientSideAggregation) + [func WithoutDevMode() Option](#WithoutDevMode) + [func WithoutTelemetry() Option](#WithoutTelemetry) * [type Options](#Options) * [type ReceivingMode](#ReceivingMode) * [type SenderMetrics](#SenderMetrics) * [type ServiceCheck](#ServiceCheck) * + [func NewServiceCheck(name string, status ServiceCheckStatus) *ServiceCheck](#NewServiceCheck) * + [func (sc ServiceCheck) Check() error](#ServiceCheck.Check) + [func (sc ServiceCheck) Encode(tags ...string) (string, error)](#ServiceCheck.Encode) * [type ServiceCheckStatus](#ServiceCheckStatus) ### Constants [¶](#pkg-constants) ``` const ( WriterNameUDP [string](/builtin#string) = "udp" WriterNameUDS [string](/builtin#string) = "uds" WriterWindowsPipe [string](/builtin#string) = "pipe" ) ``` ``` const DefaultMaxAgentPayloadSize = 8192 ``` DefaultMaxAgentPayloadSize is the default maximum payload size the agent can receive. This can be adjusted by changing dogstatsd_buffer_size in the agent configuration file datadog.yaml. This is also used as the optimal payload size for UDS datagrams. 
``` const DefaultUDPBufferPoolSize = 2048 ``` DefaultUDPBufferPoolSize is the default size of the buffer pool for UDP clients. ``` const DefaultUDSBufferPoolSize = 512 ``` DefaultUDSBufferPoolSize is the default size of the buffer pool for UDS clients. ``` const ErrNoClient = noClientErr("statsd client is nil") ``` ErrNoClient is returned if statsd reporting methods are invoked on a nil client. ``` const MaxUDPPayloadSize = 65467 ``` MaxUDPPayloadSize defines the maximum payload size for a UDP datagram. Its value comes from the calculation: 65535 bytes Max UDP datagram size - 8byte UDP header - 60byte max IP headers any number greater than that will see frames being cut out. ``` const OptimalUDPPayloadSize = 1432 ``` OptimalUDPPayloadSize defines the optimal payload size for a UDP datagram, 1432 bytes is optimal for regular networks with an MTU of 1500 so datagrams don't get fragmented. It's generally recommended not to fragment UDP datagrams as losing a single fragment will cause the entire datagram to be lost. ``` const TelemetryInterval = 10 * [time](/time).[Second](/time#Second) ``` TelemetryInterval is the interval at which telemetry will be sent by the client. ``` const UnixAddressPrefix = "unix://" ``` UnixAddressPrefix holds the prefix to use to enable Unix Domain Socket traffic instead of UDP. ``` const WindowsPipeAddressPrefix = `\\.\pipe\` ``` WindowsPipeAddressPrefix holds the prefix to use to enable Windows Named Pipes traffic instead of UDP. ### Variables [¶](#pkg-variables) ``` var ( // DefaultNamespace is the default value for the Namespace option DefaultNamespace = "" // DefaultTags is the default value for the Tags option DefaultTags = [][string](/builtin#string){} // DefaultMaxBytesPerPayload is the default value for the MaxBytesPerPayload option DefaultMaxBytesPerPayload = 0 // DefaultMaxMessagesPerPayload is the default value for the MaxMessagesPerPayload option DefaultMaxMessagesPerPayload = [math](/math).[MaxInt32](/math#MaxInt32) // DefaultBufferPoolSize is the default value for the DefaultBufferPoolSize option DefaultBufferPoolSize = 0 // DefaultBufferFlushInterval is the default value for the BufferFlushInterval option DefaultBufferFlushInterval = 100 * [time](/time).[Millisecond](/time#Millisecond) // DefaultBufferShardCount is the default value for the BufferShardCount option DefaultBufferShardCount = 32 // DefaultSenderQueueSize is the default value for the DefaultSenderQueueSize option DefaultSenderQueueSize = 0 // DefaultWriteTimeoutUDS is the default value for the WriteTimeoutUDS option DefaultWriteTimeoutUDS = 100 * [time](/time).[Millisecond](/time#Millisecond) // DefaultTelemetry is the default value for the Telemetry option DefaultTelemetry = [true](/builtin#true) // DefaultReceivingMode is the default behavior when sending metrics DefaultReceivingMode = [MutexMode](#MutexMode) // DefaultChannelModeBufferSize is the default size of the channel holding incoming metrics DefaultChannelModeBufferSize = 4096 // DefaultAggregationFlushInterval is the default interval for the aggregator to flush metrics. // This should divide the Agent reporting period (default=10s) evenly to reduce "aliasing" that // can cause values to appear irregular. 
DefaultAggregationFlushInterval = 2 * [time](/time).[Second](/time#Second) // DefaultAggregation DefaultAggregation = [false](/builtin#false) // DefaultExtendedAggregation DefaultExtendedAggregation = [false](/builtin#false) // DefaultDevMode DefaultDevMode = [false](/builtin#false) ) ``` ### Functions [¶](#pkg-functions) This section is empty. ### Types [¶](#pkg-types) #### type [Client](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L193) [¶](#Client) ``` type Client struct { // Namespace to prepend to all statsd calls Namespace [string](/builtin#string) // Tags are global tags to be added to every statsd call Tags [][string](/builtin#string) // skipErrors turns off error passing and allows UDS to emulate UDP behaviour SkipErrors [bool](/builtin#bool) // contains filtered or unexported fields } ``` A Client is a handle for sending messages to dogstatsd. It is safe to use one Client from multiple goroutines simultaneously. #### func [CloneWithExtraOptions](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L309) [¶](#CloneWithExtraOptions) ``` func CloneWithExtraOptions(c *[Client](#Client), options ...[Option](#Option)) (*[Client](#Client), [error](/builtin#error)) ``` CloneWithExtraOptions create a new Client with extra options #### func [New](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L279) [¶](#New) ``` func New(addr [string](/builtin#string), options ...[Option](#Option)) (*[Client](#Client), [error](/builtin#error)) ``` New returns a pointer to a new Client given an addr in the format "hostname:port" for UDP, "unix:///path/to/socket" for UDS or "\\.\pipe\path\to\pipe" for Windows Named Pipes. #### func [NewBuffered](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L424) [¶](#NewBuffered) ``` func NewBuffered(addr [string](/builtin#string), buflen [int](/builtin#int)) (*[Client](#Client), [error](/builtin#error)) ``` NewBuffered returns a Client that buffers its output and sends it in chunks. Buflen is the length of the buffer in number of commands. When addr is empty, the client will default to a UDP client and use the DD_AGENT_HOST and (optionally) the DD_DOGSTATSD_PORT environment variables to build the target address. #### func [NewWithWriter](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L300) [¶](#NewWithWriter) ``` func NewWithWriter(w statsdWriter, options ...[Option](#Option)) (*[Client](#Client), [error](/builtin#error)) ``` NewWithWriter creates a new Client with given writer. Writer is a io.WriteCloser + SetWriteTimeout(time.Duration) error #### func (*Client) [Close](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L651) [¶](#Client.Close) ``` func (c *[Client](#Client)) Close() [error](/builtin#error) ``` Close the client connection. #### func (*Client) [Count](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L545) [¶](#Client.Count) ``` func (c *[Client](#Client)) Count(name [string](/builtin#string), value [int64](/builtin#int64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Count tracks how many times something happened per second. 
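As a quick illustration of the constructor and the basic metric calls, the sketch below creates a client and sends a few metrics. The agent address, metric names and tags are assumptions for illustration, not defaults of the package.

```go
package main

import (
	"log"

	"github.com/DataDog/datadog-go/statsd"
)

func main() {
	// Assumed dogstatsd agent address; adjust for your setup.
	client, err := statsd.New("127.0.0.1:8125")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Gauge records the value of something at a point in time.
	_ = client.Gauge("example.queue_depth", 12, []string{"env:dev"}, 1)
	// Count/Incr track how often something happened; a rate of 1 means no sampling.
	_ = client.Count("example.bytes_read", 4096, []string{"env:dev"}, 1)
	_ = client.Incr("example.requests", []string{"env:dev", "path:/home"}, 1)
}
```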
#### func (*Client) [Decr](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L581) [¶](#Client.Decr) ``` func (c *[Client](#Client)) Decr(name [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Decr is just Count of -1 #### func (*Client) [Distribution](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L569) [¶](#Client.Distribution) ``` func (c *[Client](#Client)) Distribution(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Distribution tracks the statistical distribution of a set of values across your infrastructure. #### func (*Client) [Event](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L621) [¶](#Client.Event) ``` func (c *[Client](#Client)) Event(e *[Event](#Event)) [error](/builtin#error) ``` Event sends the provided Event. #### func (*Client) [Flush](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L457) [¶](#Client.Flush) ``` func (c *[Client](#Client)) Flush() [error](/builtin#error) ``` Flush forces a flush of all the queued dogstatsd payloads This method is blocking and will not return until everything is sent through the network. In MutexMode, this will also block sampling new data to the client while the workers and sender are flushed. #### func (*Client) [FlushTelemetryMetrics](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L475) [¶](#Client.FlushTelemetryMetrics) ``` func (c *[Client](#Client)) FlushTelemetryMetrics() [ClientMetrics](#ClientMetrics) ``` #### func (*Client) [Gauge](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L533) [¶](#Client.Gauge) ``` func (c *[Client](#Client)) Gauge(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Gauge measures the value of a metric at a particular time. #### func (*Client) [Histogram](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L557) [¶](#Client.Histogram) ``` func (c *[Client](#Client)) Histogram(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Histogram tracks the statistical distribution of a set of values on each host. #### func (*Client) [Incr](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L586) [¶](#Client.Incr) ``` func (c *[Client](#Client)) Incr(name [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Incr is just Count of 1 #### func (*Client) [ServiceCheck](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L636) [¶](#Client.ServiceCheck) ``` func (c *[Client](#Client)) ServiceCheck(sc *[ServiceCheck](#ServiceCheck)) [error](/builtin#error) ``` ServiceCheck sends the provided ServiceCheck. #### func (*Client) [Set](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L591) [¶](#Client.Set) ``` func (c *[Client](#Client)) Set(name [string](/builtin#string), value [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Set counts the number of unique elements in a group. 
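To make the difference between these metric types concrete, the following sketch (with invented metric names and tags) records the same request as a per-host Histogram, an infrastructure-wide Distribution, and counts unique users with Set.

```go
package metrics

import "github.com/DataDog/datadog-go/statsd"

// recordRequest illustrates Histogram (aggregated per host), Distribution
// (aggregated across the whole infrastructure) and Set (counts unique values).
// Metric names and tags are illustrative only.
func recordRequest(client statsd.ClientInterface, bytes, latencyMS float64, userID string) {
	tags := []string{"env:dev"}
	_ = client.Histogram("example.request.size", bytes, tags, 1)
	_ = client.Distribution("example.request.latency", latencyMS, tags, 1)
	_ = client.Set("example.users.unique", userID, tags, 1)
}
```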
#### func (*Client) [SetWriteTimeout](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L430) [¶](#Client.SetWriteTimeout) ``` func (c *[Client](#Client)) SetWriteTimeout(d [time](/time).[Duration](/time#Duration)) [error](/builtin#error) ``` SetWriteTimeout allows the user to set a custom UDS write timeout. Not supported for UDP or Windows Pipes. #### func (*Client) [SimpleEvent](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L630) [¶](#Client.SimpleEvent) ``` func (c *[Client](#Client)) SimpleEvent(title, text [string](/builtin#string)) [error](/builtin#error) ``` SimpleEvent sends an event with the provided title and text. #### func (*Client) [SimpleServiceCheck](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L645) [¶](#Client.SimpleServiceCheck) ``` func (c *[Client](#Client)) SimpleServiceCheck(name [string](/builtin#string), status [ServiceCheckStatus](#ServiceCheckStatus)) [error](/builtin#error) ``` SimpleServiceCheck sends an serviceCheck with the provided name and status. #### func (*Client) [TimeInMilliseconds](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L609) [¶](#Client.TimeInMilliseconds) ``` func (c *[Client](#Client)) TimeInMilliseconds(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` TimeInMilliseconds sends timing information in milliseconds. It is flushed by statsd with percentiles, mean and other info (<https://github.com/etsy/statsd/blob/master/docs/metric_types.md#timing>) #### func (*Client) [Timing](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L603) [¶](#Client.Timing) ``` func (c *[Client](#Client)) Timing(name [string](/builtin#string), value [time](/time).[Duration](/time#Duration), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Timing sends timing information, it is an alias for TimeInMilliseconds #### type [ClientInterface](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L140) [¶](#ClientInterface) ``` type ClientInterface interface { // Gauge measures the value of a metric at a particular time. Gauge(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // Count tracks how many times something happened per second. Count(name [string](/builtin#string), value [int64](/builtin#int64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // Histogram tracks the statistical distribution of a set of values on each host. Histogram(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // Distribution tracks the statistical distribution of a set of values across your infrastructure. Distribution(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // Decr is just Count of -1 Decr(name [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // Incr is just Count of 1 Incr(name [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // Set counts the number of unique elements in a group. 
Set(name [string](/builtin#string), value [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // Timing sends timing information, it is an alias for TimeInMilliseconds Timing(name [string](/builtin#string), value [time](/time).[Duration](/time#Duration), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // TimeInMilliseconds sends timing information in milliseconds. // It is flushed by statsd with percentiles, mean and other info (<https://github.com/etsy/statsd/blob/master/docs/metric_types.md#timing>) TimeInMilliseconds(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) // Event sends the provided Event. Event(e *[Event](#Event)) [error](/builtin#error) // SimpleEvent sends an event with the provided title and text. SimpleEvent(title, text [string](/builtin#string)) [error](/builtin#error) // ServiceCheck sends the provided ServiceCheck. ServiceCheck(sc *[ServiceCheck](#ServiceCheck)) [error](/builtin#error) // SimpleServiceCheck sends an serviceCheck with the provided name and status. SimpleServiceCheck(name [string](/builtin#string), status [ServiceCheckStatus](#ServiceCheckStatus)) [error](/builtin#error) // Close the client connection. Close() [error](/builtin#error) // Flush forces a flush of all the queued dogstatsd payloads. Flush() [error](/builtin#error) // SetWriteTimeout allows the user to set a custom write timeout. SetWriteTimeout(d [time](/time).[Duration](/time#Duration)) [error](/builtin#error) } ``` ClientInterface is an interface that exposes the common client functions for the purpose of being able to provide a no-op client or even mocking. This can aid downstream users' with their testing. #### type [ClientMetrics](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L218) [¶](#ClientMetrics) ``` type ClientMetrics struct { TotalMetrics [uint64](/builtin#uint64) TotalMetricsGauge [uint64](/builtin#uint64) TotalMetricsCount [uint64](/builtin#uint64) TotalMetricsHistogram [uint64](/builtin#uint64) TotalMetricsDistribution [uint64](/builtin#uint64) TotalMetricsSet [uint64](/builtin#uint64) TotalMetricsTiming [uint64](/builtin#uint64) TotalEvents [uint64](/builtin#uint64) TotalServiceChecks [uint64](/builtin#uint64) TotalDroppedOnReceive [uint64](/builtin#uint64) } ``` ClientMetrics contains metrics about the client #### type [Event](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/event.go#L37) [¶](#Event) ``` type Event struct { // Title of the event. Required. Title [string](/builtin#string) // Text is the description of the event. Required. Text [string](/builtin#string) // Timestamp is a timestamp for the event. If not provided, the dogstatsd // server will set this to the current time. Timestamp [time](/time).[Time](/time#Time) // Hostname for the event. Hostname [string](/builtin#string) // AggregationKey groups this event with others of the same key. AggregationKey [string](/builtin#string) // Priority of the event. Can be statsd.Low or statsd.Normal. Priority [EventPriority](#EventPriority) // SourceTypeName is a source type for the event. SourceTypeName [string](/builtin#string) // AlertType can be statsd.Info, statsd.Error, statsd.Warning, or statsd.Success. // If absent, the default value applied by the dogstatsd server is Info. AlertType [EventAlertType](#EventAlertType) // Tags for the event. 
Tags [][string](/builtin#string) } ``` An Event is an object that can be posted to your DataDog event stream. #### func [NewEvent](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/event.go#L62) [¶](#NewEvent) ``` func NewEvent(title, text [string](/builtin#string)) *[Event](#Event) ``` NewEvent creates a new event with the given title and text. Error checking against these values is done at send-time, or upon running e.Check. #### func (Event) [Check](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/event.go#L70) [¶](#Event.Check) ``` func (e [Event](#Event)) Check() [error](/builtin#error) ``` Check verifies that an event is valid. #### func (Event) [Encode](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/event.go#L80) [¶](#Event.Encode) ``` func (e [Event](#Event)) Encode(tags ...[string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Encode returns the dogstatsd wire protocol representation for an event. Tags may be passed which will be added to the encoded output but not to the Event's list of tags, eg. for default tags. #### type [EventAlertType](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/event.go#L13) [¶](#EventAlertType) ``` type EventAlertType [string](/builtin#string) ``` EventAlertType is the alert type for events ``` const ( // Info is the "info" AlertType for events Info [EventAlertType](#EventAlertType) = "info" // Error is the "error" AlertType for events Error [EventAlertType](#EventAlertType) = "error" // Warning is the "warning" AlertType for events Warning [EventAlertType](#EventAlertType) = "warning" // Success is the "success" AlertType for events Success [EventAlertType](#EventAlertType) = "success" ) ``` #### type [EventPriority](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/event.go#L27) [¶](#EventPriority) ``` type EventPriority [string](/builtin#string) ``` EventPriority is the event priority for events ``` const ( // Normal is the "normal" Priority for events Normal [EventPriority](#EventPriority) = "normal" // Low is the "low" Priority for events Low [EventPriority](#EventPriority) = "low" ) ``` #### type [NoOpClient](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L7) [¶](#NoOpClient) ``` type NoOpClient struct{} ``` NoOpClient is a statsd client that does nothing. Can be useful in testing situations for library users. 
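A common pattern is to depend on ClientInterface in application code so that tests can inject NoOpClient instead of a real client. The type and field names below are illustrative, not part of the package.

```go
package server

import "github.com/DataDog/datadog-go/statsd"

// Server depends on the interface rather than on *statsd.Client.
type Server struct {
	metrics statsd.ClientInterface
}

func (s *Server) handle() {
	_ = s.metrics.Incr("server.requests", nil, 1)
}

// In tests, inject the no-op client so nothing is sent over the network.
func newTestServer() *Server {
	return &Server{metrics: &statsd.NoOpClient{}}
}
```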
#### func (*NoOpClient) [Close](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L75) [¶](#NoOpClient.Close) ``` func (n *[NoOpClient](#NoOpClient)) Close() [error](/builtin#error) ``` Close does nothing and returns nil #### func (*NoOpClient) [Count](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L15) [¶](#NoOpClient.Count) ``` func (n *[NoOpClient](#NoOpClient)) Count(name [string](/builtin#string), value [int64](/builtin#int64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Count does nothing and returns nil #### func (*NoOpClient) [Decr](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L30) [¶](#NoOpClient.Decr) ``` func (n *[NoOpClient](#NoOpClient)) Decr(name [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Decr does nothing and returns nil #### func (*NoOpClient) [Distribution](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L25) [¶](#NoOpClient.Distribution) ``` func (n *[NoOpClient](#NoOpClient)) Distribution(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Distribution does nothing and returns nil #### func (*NoOpClient) [Event](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L55) [¶](#NoOpClient.Event) ``` func (n *[NoOpClient](#NoOpClient)) Event(e *[Event](#Event)) [error](/builtin#error) ``` Event does nothing and returns nil #### func (*NoOpClient) [Flush](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L80) [¶](#NoOpClient.Flush) ``` func (n *[NoOpClient](#NoOpClient)) Flush() [error](/builtin#error) ``` Flush does nothing and returns nil #### func (*NoOpClient) [Gauge](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L10) [¶](#NoOpClient.Gauge) ``` func (n *[NoOpClient](#NoOpClient)) Gauge(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Gauge does nothing and returns nil #### func (*NoOpClient) [Histogram](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L20) [¶](#NoOpClient.Histogram) ``` func (n *[NoOpClient](#NoOpClient)) Histogram(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Histogram does nothing and returns nil #### func (*NoOpClient) [Incr](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L35) [¶](#NoOpClient.Incr) ``` func (n *[NoOpClient](#NoOpClient)) Incr(name [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Incr does nothing and returns nil #### func (*NoOpClient) [ServiceCheck](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L65) [¶](#NoOpClient.ServiceCheck) ``` func (n *[NoOpClient](#NoOpClient)) ServiceCheck(sc *[ServiceCheck](#ServiceCheck)) [error](/builtin#error) ``` ServiceCheck does nothing and returns nil #### func (*NoOpClient) [Set](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L40) [¶](#NoOpClient.Set) ``` func (n *[NoOpClient](#NoOpClient)) Set(name [string](/builtin#string), value [string](/builtin#string), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Set does nothing and returns 
nil #### func (*NoOpClient) [SetWriteTimeout](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L85) [¶](#NoOpClient.SetWriteTimeout) ``` func (n *[NoOpClient](#NoOpClient)) SetWriteTimeout(d [time](/time).[Duration](/time#Duration)) [error](/builtin#error) ``` SetWriteTimeout does nothing and returns nil #### func (*NoOpClient) [SimpleEvent](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L60) [¶](#NoOpClient.SimpleEvent) ``` func (n *[NoOpClient](#NoOpClient)) SimpleEvent(title, text [string](/builtin#string)) [error](/builtin#error) ``` SimpleEvent does nothing and returns nil #### func (*NoOpClient) [SimpleServiceCheck](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L70) [¶](#NoOpClient.SimpleServiceCheck) ``` func (n *[NoOpClient](#NoOpClient)) SimpleServiceCheck(name [string](/builtin#string), status [ServiceCheckStatus](#ServiceCheckStatus)) [error](/builtin#error) ``` SimpleServiceCheck does nothing and returns nil #### func (*NoOpClient) [TimeInMilliseconds](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L50) [¶](#NoOpClient.TimeInMilliseconds) ``` func (n *[NoOpClient](#NoOpClient)) TimeInMilliseconds(name [string](/builtin#string), value [float64](/builtin#float64), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` TimeInMilliseconds does nothing and returns nil #### func (*NoOpClient) [Timing](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/noop.go#L45) [¶](#NoOpClient.Timing) ``` func (n *[NoOpClient](#NoOpClient)) Timing(name [string](/builtin#string), value [time](/time).[Duration](/time#Duration), tags [][string](/builtin#string), rate [float64](/builtin#float64)) [error](/builtin#error) ``` Timing does nothing and returns nil #### type [Option](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L148) [¶](#Option) ``` type Option func(*[Options](#Options)) [error](/builtin#error) ``` Option is a client option. Can return an error if validation fails. #### func [WithAggregationInterval](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L262) [¶](#WithAggregationInterval) ``` func WithAggregationInterval(interval [time](/time).[Duration](/time#Duration)) [Option](#Option) ``` WithAggregationInterval set the aggregation interval #### func [WithBufferFlushInterval](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L195) [¶](#WithBufferFlushInterval) ``` func WithBufferFlushInterval(bufferFlushInterval [time](/time).[Duration](/time#Duration)) [Option](#Option) ``` WithBufferFlushInterval sets the BufferFlushInterval option. #### func [WithBufferPoolSize](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L187) [¶](#WithBufferPoolSize) ``` func WithBufferPoolSize(bufferPoolSize [int](/builtin#int)) [Option](#Option) ``` WithBufferPoolSize sets the BufferPoolSize option. #### func [WithBufferShardCount](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L203) [¶](#WithBufferShardCount) ``` func WithBufferShardCount(bufferShardCount [int](/builtin#int)) [Option](#Option) ``` WithBufferShardCount sets the BufferShardCount option. 
#### func [WithChannelMode](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L238) [¶](#WithChannelMode) ``` func WithChannelMode() [Option](#Option) ``` WithChannelMode will use channel to receive metrics #### func [WithChannelModeBufferSize](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L254) [¶](#WithChannelModeBufferSize) ``` func WithChannelModeBufferSize(bufferSize [int](/builtin#int)) [Option](#Option) ``` WithChannelModeBufferSize the channel buffer size when using "drop mode" #### func [WithClientSideAggregation](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L271) [¶](#WithClientSideAggregation) ``` func WithClientSideAggregation() [Option](#Option) ``` WithClientSideAggregation enables client side aggregation for Gauges, Counts and Sets. Client side aggregation is a beta feature. #### func [WithDevMode](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L309) [¶](#WithDevMode) ``` func WithDevMode() [Option](#Option) ``` WithDevMode enables client "dev" mode, sending more Telemetry metrics to help troubleshoot client behavior. #### func [WithExtendedClientSideAggregation](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L291) [¶](#WithExtendedClientSideAggregation) ``` func WithExtendedClientSideAggregation() [Option](#Option) ``` WithExtendedClientSideAggregation enables client side aggregation for all types. This feature is only compatible with Agent's version >=6.25.0 && <7.0.0 or Agent's versions >=7.25.0. Client side aggregation is a beta feature. #### func [WithMaxBytesPerPayload](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L179) [¶](#WithMaxBytesPerPayload) ``` func WithMaxBytesPerPayload(MaxBytesPerPayload [int](/builtin#int)) [Option](#Option) ``` WithMaxBytesPerPayload sets the MaxBytesPerPayload option. #### func [WithMaxMessagesPerPayload](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L171) [¶](#WithMaxMessagesPerPayload) ``` func WithMaxMessagesPerPayload(maxMessagesPerPayload [int](/builtin#int)) [Option](#Option) ``` WithMaxMessagesPerPayload sets the MaxMessagesPerPayload option. #### func [WithMutexMode](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L246) [¶](#WithMutexMode) ``` func WithMutexMode() [Option](#Option) ``` WithMutexMode will use mutex to receive metrics #### func [WithNamespace](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L151) [¶](#WithNamespace) ``` func WithNamespace(namespace [string](/builtin#string)) [Option](#Option) ``` WithNamespace sets the Namespace option. #### func [WithSenderQueueSize](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L214) [¶](#WithSenderQueueSize) ``` func WithSenderQueueSize(senderQueueSize [int](/builtin#int)) [Option](#Option) ``` WithSenderQueueSize sets the SenderQueueSize option. #### func [WithTags](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L163) [¶](#WithTags) ``` func WithTags(tags [][string](/builtin#string)) [Option](#Option) ``` WithTags sets the Tags option. #### func [WithTelemetryAddr](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L300) [¶](#WithTelemetryAddr) ``` func WithTelemetryAddr(addr [string](/builtin#string)) [Option](#Option) ``` WithTelemetryAddr specify a different address for telemetry metrics. 
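Options are passed as variadic arguments to New (or CloneWithExtraOptions). The sketch below combines several of the options documented in this section; the address and the chosen values are assumptions for illustration.

```go
package main

import (
	"log"

	"github.com/DataDog/datadog-go/statsd"
)

func main() {
	client, err := statsd.New("127.0.0.1:8125", // assumed agent address
		statsd.WithNamespace("myservice."),                   // prefix for every metric name
		statsd.WithTags([]string{"env:dev", "region:local"}), // global tags on every metric
		statsd.WithMaxMessagesPerPayload(64),                 // cap metrics per payload
		statsd.WithoutTelemetry(),                            // disable the client's own telemetry
	)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	_ = client.Incr("startup", nil, 1)
}
```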
#### func [WithWriteTimeoutUDS](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L222) [¶](#WithWriteTimeoutUDS) ``` func WithWriteTimeoutUDS(writeTimeoutUDS [time](/time).[Duration](/time#Duration)) [Option](#Option) ``` WithWriteTimeoutUDS sets the WriteTimeoutUDS option. #### func [WithoutClientSideAggregation](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L279) [¶](#WithoutClientSideAggregation) ``` func WithoutClientSideAggregation() [Option](#Option) ``` WithoutClientSideAggregation disables client side aggregation. #### func [WithoutDevMode](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L318) [¶](#WithoutDevMode) ``` func WithoutDevMode() [Option](#Option) ``` WithoutDevMode disables client "dev" mode, sending more Telemetry metrics to help troubleshoot client behavior. #### func [WithoutTelemetry](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L230) [¶](#WithoutTelemetry) ``` func WithoutTelemetry() [Option](#Option) ``` WithoutTelemetry disables the telemetry #### type [Options](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/options.go#L48) [¶](#Options) ``` type Options struct { // Namespace to prepend to all metrics, events and service checks name. Namespace [string](/builtin#string) // Tags are global tags to be applied to every metrics, events and service checks. Tags [][string](/builtin#string) // MaxBytesPerPayload is the maximum number of bytes a single payload will contain. // The magic value 0 will set the option to the optimal size for the transport // protocol used when creating the client: 1432 for UDP and 8192 for UDS. MaxBytesPerPayload [int](/builtin#int) // MaxMessagesPerPayload is the maximum number of metrics, events and/or service checks a single payload will contain. // This option can be set to `1` to create an unbuffered client. MaxMessagesPerPayload [int](/builtin#int) // BufferPoolSize is the size of the pool of buffers in number of buffers. // The magic value 0 will set the option to the optimal size for the transport // protocol used when creating the client: 2048 for UDP and 512 for UDS. BufferPoolSize [int](/builtin#int) // BufferFlushInterval is the interval after which the current buffer will get flushed. BufferFlushInterval [time](/time).[Duration](/time#Duration) // BufferShardCount is the number of buffer "shards" that will be used. // Those shards allows the use of multiple buffers at the same time to reduce // lock contention. BufferShardCount [int](/builtin#int) // SenderQueueSize is the size of the sender queue in number of buffers. // The magic value 0 will set the option to the optimal size for the transport // protocol used when creating the client: 2048 for UDP and 512 for UDS. SenderQueueSize [int](/builtin#int) // WriteTimeoutUDS is the timeout after which a UDS packet is dropped. WriteTimeoutUDS [time](/time).[Duration](/time#Duration) // Telemetry is a set of metrics automatically injected by the client in the // dogstatsd stream to be able to monitor the client itself. Telemetry [bool](/builtin#bool) // ReceiveMode determins the behavior of the client when receiving to many // metrics. The client will either drop the metrics if its buffers are // full (ChannelMode mode) or block the caller until the metric can be // handled (MutexMode mode). By default the client will MutexMode. This // option should be set to ChannelMode only when use under very high // load. 
// // MutexMode uses a mutex internally which is much faster than // channel but causes some lock contention when used with a high number // of threads. Mutex are sharded based on the metrics name which // limit mutex contention when goroutines send different metrics. // // ChannelMode: uses channel (of ChannelModeBufferSize size) to send // metrics and drop metrics if the channel is full. Sending metrics in // this mode is slower that MutexMode (because of the channel), but // will not block the application. This mode is made for application // using many goroutines, sending the same metrics at a very high // volume. The goal is to not slow down the application at the cost of // dropping metrics and having a lower max throughput. ReceiveMode [ReceivingMode](#ReceivingMode) // ChannelModeBufferSize is the size of the channel holding incoming metrics ChannelModeBufferSize [int](/builtin#int) // AggregationFlushInterval is the interval for the aggregator to flush metrics AggregationFlushInterval [time](/time).[Duration](/time#Duration) // [beta] Aggregation enables/disables client side aggregation for // Gauges, Counts and Sets (compatible with every Agent's version). Aggregation [bool](/builtin#bool) // [beta] Extended aggregation enables/disables client side aggregation // for all types. This feature is only compatible with Agent's versions // >=7.25.0 or Agent's version >=6.25.0 && < 7.0.0. ExtendedAggregation [bool](/builtin#bool) // TelemetryAddr specify a different endpoint for telemetry metrics. TelemetryAddr [string](/builtin#string) // DevMode enables the "dev" mode where the client sends much more // telemetry metrics to help troubleshooting the client behavior. DevMode [bool](/builtin#bool) } ``` Options contains the configuration options for a client. #### type [ReceivingMode](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/statsd.go#L98) [¶](#ReceivingMode) ``` type ReceivingMode [int](/builtin#int) ``` ``` const ( MutexMode [ReceivingMode](#ReceivingMode) = [iota](/builtin#iota) ChannelMode ) ``` #### type [SenderMetrics](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/sender.go#L19) [¶](#SenderMetrics) ``` type SenderMetrics struct { TotalSentBytes [uint64](/builtin#uint64) TotalSentPayloads [uint64](/builtin#uint64) TotalDroppedPayloads [uint64](/builtin#uint64) TotalDroppedBytes [uint64](/builtin#uint64) TotalDroppedPayloadsQueueFull [uint64](/builtin#uint64) TotalDroppedBytesQueueFull [uint64](/builtin#uint64) TotalDroppedPayloadsWriter [uint64](/builtin#uint64) TotalDroppedBytesWriter [uint64](/builtin#uint64) } ``` SenderMetrics contains metrics about the health of the sender #### type [ServiceCheck](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/service_check.go#L23) [¶](#ServiceCheck) ``` type ServiceCheck struct { // Name of the service check. Required. Name [string](/builtin#string) // Status of service check. Required. Status [ServiceCheckStatus](#ServiceCheckStatus) // Timestamp is a timestamp for the serviceCheck. If not provided, the dogstatsd // server will set this to the current time. Timestamp [time](/time).[Time](/time#Time) // Hostname for the serviceCheck. Hostname [string](/builtin#string) // A message describing the current state of the serviceCheck. Message [string](/builtin#string) // Tags for the serviceCheck. Tags [][string](/builtin#string) } ``` A ServiceCheck is an object that contains status of DataDog service check. 
#### func [NewServiceCheck](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/service_check.go#L41) [¶](#NewServiceCheck) ``` func NewServiceCheck(name [string](/builtin#string), status [ServiceCheckStatus](#ServiceCheckStatus)) *[ServiceCheck](#ServiceCheck) ``` NewServiceCheck creates a new serviceCheck with the given name and status. Error checking against these values is done at send-time, or upon running sc.Check. #### func (ServiceCheck) [Check](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/service_check.go#L49) [¶](#ServiceCheck.Check) ``` func (sc [ServiceCheck](#ServiceCheck)) Check() [error](/builtin#error) ``` Check verifies that a service check is valid. #### func (ServiceCheck) [Encode](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/service_check.go#L62) [¶](#ServiceCheck.Encode) ``` func (sc [ServiceCheck](#ServiceCheck)) Encode(tags ...[string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Encode returns the dogstatsd wire protocol representation for a service check. Tags may be passed which will be added to the encoded output but not to the Service Check's list of tags, eg. for default tags. #### type [ServiceCheckStatus](https://github.com/DataDog/datadog-go/blob/v4.8.3/statsd/service_check.go#L9) [¶](#ServiceCheckStatus) ``` type ServiceCheckStatus [byte](/builtin#byte) ``` ServiceCheckStatus support ``` const ( // Ok is the "ok" ServiceCheck status Ok [ServiceCheckStatus](#ServiceCheckStatus) = 0 // Warn is the "warning" ServiceCheck status Warn [ServiceCheckStatus](#ServiceCheckStatus) = 1 // Critical is the "critical" ServiceCheck status Critical [ServiceCheckStatus](#ServiceCheckStatus) = 2 // Unknown is the "unknown" ServiceCheck status Unknown [ServiceCheckStatus](#ServiceCheckStatus) = 3 ) ```
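Events and service checks follow the same pattern: build the value (or use the New* helper), set any optional fields, and hand it to a client. The sketch below assumes a client created as in the earlier examples; names and messages are invented.

```go
package main

import (
	"log"

	"github.com/DataDog/datadog-go/statsd"
)

func main() {
	client, err := statsd.New("127.0.0.1:8125") // assumed agent address
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Send an event with an alert type and tags.
	ev := statsd.NewEvent("Deploy finished", "myservice v1.2.3 rolled out")
	ev.AlertType = statsd.Success
	ev.Tags = []string{"env:dev"}
	if err := client.Event(ev); err != nil {
		log.Printf("event failed: %v", err)
	}

	// Report a service check status with an optional message.
	sc := statsd.NewServiceCheck("myservice.db.can_connect", statsd.Ok)
	sc.Message = "connection established"
	if err := client.ServiceCheck(sc); err != nil {
		log.Printf("service check failed: %v", err)
	}
}
```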
simmr
cran
R
Package ‘simmr’ October 2, 2023 Type Package Title A Stable Isotope Mixing Model Version 0.5.1.214 Date 2023-10-02 URL https://github.com/andrewcparnell/simmr, https://andrewcparnell.github.io/simmr/ BugReports https://github.com/andrewcparnell/simmr/issues Language en-US Description Fits Stable Isotope Mixing Models (SIMMs) and is meant as a longer-term replacement for the previous widely-used package SIAR. SIMMs are used to infer dietary proportions of organisms consuming various food sources from observations on the stable isotope values taken from the organisms' tissue samples. However, SIMMs can also be used in other scenarios, such as sediment mixing or the composition of fatty acids. The main functions are simmr_load() and simmr_mcmc(). The two vignettes contain a quick start and a full listing of all the features. The methods used are detailed in the papers Parnell et al 2010 <doi:10.1371/journal.pone.0009672>, and Parnell et al 2013 <doi:10.1002/env.2221>. Depends R (>= 3.5.0), R2jags, ggplot2 Imports compositions, boot, reshape2, graphics, stats, viridis, bayesplot, checkmate, Rcpp, GGally Encoding UTF-8 License GPL (>= 2) LazyData TRUE Suggests knitr, rmarkdown, readxl, testthat, covr, vdiffr, tibble, ggnewscale VignetteBuilder knitr NeedsCompilation yes RoxygenNote 7.2.3 Repository CRAN Date/Publication 2023-10-02 16:30:02 UTC LinkingTo Rcpp, RcppArmadillo, RcppDist Author <NAME> [cre, aut], <NAME> [aut] Maintainer <NAME> <<EMAIL>> R topics documented: combine_source... 2 compare_group... 4 compare_source... 6 geese_dat... 8 geese_data_day... 9 plot.simmr_inpu... 10 plot.simmr_outpu... 12 posterior_predictiv... 14 print.simmr_inpu... 16 print.simmr_outpu... 16 prior_vi... 17 simm... 18 simmr_data_... 19 simmr_data_... 20 simmr_elici... 21 simmr_ffv... 23 simmr_loa... 27 simmr_mcm... 30 square_dat... 35 summary.simmr_outpu... 35 combine_sources Combine the dietary proportions from two food sources after running simmr Description This function takes in an object of class simmr_output and combines two of the food sources. It works for single and multiple group data. Usage combine_sources( simmr_out, to_combine = NULL, new_source_name = "combined_source" ) Arguments simmr_out An object of class simmr_output created from simmr_mcmc or simmr_ffvb to_combine The names of exactly two sources. These should match the names given to simmr_load. new_source_name A name to give to the new combined source. Details Often two sources either (1) lie in a similar location on the iso-space plot, or (2) are very similar in phylogenetic terms. In case (1) it is common to experience high (negative) posterior correlations between the sources. Combining them can reduce this correlation and improve precision of the estimates. In case (2) we might wish to determine the joint amount eaten of the two sources when combined. This function thus combines two sources after a run of simmr_mcmc or simmr_ffvb (known as a posteriori combination). The new object can then be passed to plot.simmr_input or plot.simmr_output to produce iso-space plots or summaries of the output after combination. Value A new simmr_output object Author(s) <NAME> <<EMAIL>>, <NAME> See Also See simmr_mcmc and simmr_ffvb and the associated vignette for examples. 
Examples # The data data(geese_data) # Load into simmr simmr_1 <- with( geese_data_day1, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot plot(simmr_1) # Print simmr_1 # MCMC run simmr_1_out <- simmr_mcmc(simmr_1) # Print it print(simmr_1_out) # Summary summary(simmr_1_out) summary(simmr_1_out, type = "diagnostics") summary(simmr_1_out, type = "correlations") summary(simmr_1_out, type = "statistics") ans <- summary(simmr_1_out, type = c("quantiles", "statistics")) # Plot plot(simmr_1_out) plot(simmr_1_out, type = "boxplot") plot(simmr_1_out, type = "histogram") plot(simmr_1_out, type = "density") plot(simmr_1_out, type = "matrix") simmr_out_combine <- combine_sources(simmr_1_out, to_combine = c("U.lactuca", "Enteromorpha"), new_source_name = "U.lac+Ent" ) plot(simmr_out_combine$input) plot(simmr_out_combine, type = "boxplot", title = "simmr output: combined sources") compare_groups Compare dietary proportions for a single source across different groups Description This function takes in an object of class simmr_output and creates probabilistic comparisons for a given source and a set of at least two groups. Usage compare_groups( simmr_out, source_name = simmr_out$input$source_names[1], groups = 1:2, plot = TRUE ) Arguments simmr_out An object of class simmr_output created from simmr_mcmc or simmr_ffvb. source_name The name of a source. This should match the names exactly given to simmr_load. groups The integer values of the group numbers to be compared. At least two groups must be specified. plot A logical value specifying whether plots should be produced or not. Details When two groups are specified, the function produces a direct calculation of the probability that one group is bigger than the other. When more than two groups are given, the function produces a set of most likely probabilistic orderings for each combination of groups. The function produces boxplots by default and also allows for the storage of the output for further analysis if required. Value If there are two groups, a vector containing the differences between the two groups proportions for that source. If there are multiple groups, a list containing the following fields: Ordering The different possible orderings of the dietary proportions across groups out_all The dietary proportions for this source across the groups specified as columns in a matrix Author(s) <NAME> <<EMAIL>> See Also See simmr_mcmc for complete examples. 
Examples ## Not run: data(geese_data) simmr_in <- with( geese_data, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means, group = groups ) ) # Print simmr_in # Plot plot(simmr_in, group = 1:8, xlab = expression(paste(delta^13, "C (\\u2030)", sep = "")), ylab = expression(paste(delta^15, "N (\\u2030)", sep = "")), title = "Isospace plot of Inger et al Geese data" ) # Fit the model with FFVB for each group simmr_out <- simmr_ffvb(simmr_in) # Print output simmr_out # Summarise output summary(simmr_out, type = "quantiles", group = 1) summary(simmr_out, type = "quantiles", group = c(1, 3)) summary(simmr_out, type = c("quantiles", "statistics"), group = c(1, 3)) # Plot - only a single group allowed plot(simmr_out, type = "boxplot", group = 2, title = "simmr output group 2") plot(simmr_out, type = c("density", "matrix"), group = 6, title = "simmr output group 6") # Compare groups compare_groups(simmr_out, source = "Zostera", groups = 1:2) compare_groups(simmr_out, source = "Zostera", groups = 1:3) compare_groups(simmr_out, source = "U.lactuca", groups = c(4:5, 7, 2)) ## End(Not run) compare_sources Compare dietary proportions between multiple sources Description This function takes in an object of class simmr_output and creates probabilistic comparisons between the supplied sources. The group number can also be specified. Usage compare_sources( simmr_out, source_names = simmr_out$input$source_names, group = 1, plot = TRUE ) Arguments simmr_out An object of class simmr_output created from simmr_mcmc or simmr_ffvb. source_names The names of at least two sources. These should exactly match the names given to simmr_load. group The integer values of the group numbers to be compared. If not specified, assumes the first or only group. plot A logical value specifying whether plots should be produced or not. Details When two sources are specified, the function produces a direct calculation of the probability that the dietary proportion for one source is bigger than the other. When more than two sources are given, the function produces a set of most likely probabilistic orderings for each combination of sources. The function produces boxplots by default and also allows for the storage of the output for further analysis if required. Value If there are two sources, a vector containing the differences between the dietary proportions for these two sources. If there are multiple sources, a list containing the following fields: Ordering The different possible orderings of the dietary proportions across sources out_all The dietary proportions for these sources specified as columns in a matrix Author(s) <NAME> <<EMAIL>> See Also See simmr_mcmc for complete examples. 
Examples

data(geese_data_day1)
simmr_1 <- with(
  geese_data_day1,
  simmr_load(
    mixtures = mixtures,
    source_names = source_names,
    source_means = source_means,
    source_sds = source_sds,
    correction_means = correction_means,
    correction_sds = correction_sds,
    concentration_means = concentration_means
  )
)
# Plot
plot(simmr_1)
# Print
simmr_1
# MCMC run
simmr_1_out <- simmr_mcmc(simmr_1)
# Print it
print(simmr_1_out)
# Summary
summary(simmr_1_out)
summary(simmr_1_out, type = "diagnostics")
summary(simmr_1_out, type = "correlations")
summary(simmr_1_out, type = "statistics")
ans <- summary(simmr_1_out, type = c("quantiles", "statistics"))
# Plot
plot(simmr_1_out, type = "boxplot")
plot(simmr_1_out, type = "histogram")
plot(simmr_1_out, type = "density")
plot(simmr_1_out, type = "matrix")
# Compare two sources
compare_sources(simmr_1_out, source_names = c("Zostera", "Grass"))
# Compare multiple sources
compare_sources(simmr_1_out)

geese_data              Geese stable isotope mixing data set

Description
A real Geese data set with 251 observations on 2 isotopes, with 4 sources, and with corrections/trophic
enrichment factors (TEFs or TDFs), and concentration dependence means. Taken from Inger et al.
(2006); see the link under Source for the paper.

Usage
geese_data

Format
A list with the following elements
mixtures A two column matrix containing delta 13C and delta 15N values respectively
source_names A character vector of the food source names
tracer_names A character vector of the tracer names (d13C, d15N, d34S)
source_means A matrix of source mean values for the tracers in the same order as mixtures above
source_sds A matrix of source sd values for the tracers in the same order as mixtures above
correction_means A matrix of TEFs mean values for the tracers in the same order as mixtures above
correction_sds A matrix of TEFs sd values for the tracers in the same order as mixtures above
concentration_means A matrix of concentration dependence mean values for the tracers in the same order as mixtures above

See simmr_mcmc for an example where it is used.

Source
<doi:10.1111/j.1365-2656.2006.01142.x>

geese_data_day1         A smaller version of the Geese stable isotope mixing data set

Description
A real Geese data set with 9 observations on 2 isotopes, with 4 sources, and with corrections/trophic
enrichment factors (TEFs or TDFs), and concentration dependence means. Taken from Inger et al.
(2006); see the link under Source for the paper.

Usage
geese_data_day1

Format
A list with the following elements
mixtures A two column matrix containing delta 13C and delta 15N values respectively
source_names A character vector of the food source names
tracer_names A character vector of the tracer names (d13C, d15N, d34S)
source_means A matrix of source mean values for the tracers in the same order as mixtures above
source_sds A matrix of source sd values for the tracers in the same order as mixtures above
correction_means A matrix of TEFs mean values for the tracers in the same order as mixtures above
correction_sds A matrix of TEFs sd values for the tracers in the same order as mixtures above
concentration_means A matrix of concentration dependence mean values for the tracers in the same order as mixtures above

See simmr_mcmc for an example where it is used.

Source
<doi:10.1111/j.1365-2656.2006.01142.x>

plot.simmr_input        Plot the simmr_input data created from simmr_load

Description
This function creates iso-space (AKA tracer-space or delta-space) plots. They are vital in determining
whether the data are suitable for running in a SIMM.
Usage ## S3 method for class 'simmr_input' plot( x, tracers = c(1, 2), title = "Tracers plot", xlab = colnames(x$mixtures)[tracers[1]], ylab = colnames(x$mixtures)[tracers[2]], sigmas = 1, group = 1:x$n_groups, mix_name = "Mixtures", ggargs = NULL, colour = TRUE, ... ) Arguments x An object of class simmr_input created via the function simmr_load tracers The choice of tracers to plot. If there are more than two tracers, it is recom- mended to plot every pair of tracers to determine whether the mixtures lie in the mixing polygon defined by the sources title A title for the graph xlab The x-axis label. By default this is assumed to be delta-13C but can be made richer if required. See examples below. ylab The y-axis label. By default this is assumed to be delta-15N in per mil but can be changed as with the x-axis label sigmas The number of standard deviations to plot on the source values. Defaults to 1. group Which groups to plot. Can be a single group or multiple groups mix_name A optional string containing the name of the mixture objects, e.g. Geese. ggargs Extra arguments to be included in the ggplot (e.g. axis limits) colour If TRUE (default) creates a plot. If not, puts the plot in black and white ... Not used Details It is desirable to have the vast majority of the mixture observations to be inside the convex hull defined by the food sources. When there are more than two tracers (as in one of the examples below) it is recommended to plot all the different pairs of the food sources. See the vignette for further details of richer plots. Value isospace plot Author(s) <NAME> <<EMAIL>> See Also See plot.simmr_output for plotting the output of a simmr run. See simmr_mcmc for running a simmr object once the iso-space is deemed acceptable. Examples # A simple example with 10 observations, 4 food sources and 2 tracers data(geese_data_day1) simmr_1 <- with( geese_data_day1, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot plot(simmr_1) ### A more complicated example with 30 obs, 3 tracers and 4 sources data(simmr_data_2) simmr_3 <- with( simmr_data_2, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot 3 times - first default d13C vs d15N plot(simmr_3) # Now plot d15N vs d34S plot(simmr_3, tracers = c(2, 3)) # and finally d13C vs d34S plot(simmr_3, tracers = c(1, 3)) # See vignette('simmr') for fancier x-axis labels # An example with multiple groups - the Geese data from Inger et al 2006 data(geese_data) simmr_4 <- with( geese_data, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means, group = groups ) ) # Print simmr_4 # Plot plot(simmr_4, xlab = expression(paste(delta^13, "C (%)", sep = "")), ylab = expression(paste(delta^15, "N (%)", sep = "")), title = "Isospace plot of Inger et al Geese data" ) #' plot.simmr_output Plot different features of an object created from simmr_mcmc or simmr_ffvb. Description This function allows for 4 different types of plots of the simmr output created from simmr_mcmc or simmr_ffvb. 
The types are: histogram, kernel density plot, matrix plot (most useful) and boxplot. There are some minor customisation options. Usage ## S3 method for class 'simmr_output' plot( x, type = c("isospace", "histogram", "density", "matrix", "boxplot"), group = 1, binwidth = 0.05, alpha = 0.5, title = if (length(group) == 1) { "simmr output plot" } else { paste("simmr output plot: group", group) }, ggargs = NULL, ... ) Arguments x An object of class simmr_output created via simmr_mcmc or simmr_ffvb. type The type of plot required. Can be one or more of ’histogram’, ’density’, ’matrix’, or ’boxplot’ group Which group(s) to plot. binwidth The width of the bins for the histogram. Defaults to 0.05 alpha The degree of transparency of the plots. Not relevant for matrix plots title The title of the plot. ggargs Extra arguments to be included in the ggplot (e.g. axis limits) ... Currently not used Details The matrix plot should form a necessary part of any SIMM analysis since it allows the user to judge which sources are identifiable by the model. Further detail about these plots is provided in the vignette. Some code from https://stackoverflow.com/questions/14711550/is-there-a-way-to- change-the-color-palette-for-ggallyggpairs-using-ggplot accessed March 2023 Value one or more of ’histogram’, ’density’, ’matrix’, or ’boxplot’ Author(s) <NAME> <<EMAIL>>, <NAME> See Also See simmr_mcmc and simmr_ffvb for creating objects suitable for this function, and many more ex- amples. See also simmr_load for creating simmr objects, plot.simmr_input for creating isospace plots, summary.simmr_output for summarising output. Examples # A simple example with 10 observations, 2 tracers and 4 sources # The data data(geese_data) # Load into simmr simmr_1 <- with( geese_data_day1, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot plot(simmr_1) # MCMC run simmr_1_out <- simmr_mcmc(simmr_1) # Plot plot(simmr_1_out) # Creates all 4 plots plot(simmr_1_out, type = "boxplot") plot(simmr_1_out, type = "histogram") plot(simmr_1_out, type = "density") plot(simmr_1_out, type = "matrix") posterior_predictive Plot the posterior predictive distribution for a simmr run Description This function takes the output from simmr_mcmc or simmr_ffvb and plots the posterior predic- tive distribution to enable visualisation of model fit. The simulated posterior predicted values are returned as part of the object and can be saved for external use Usage posterior_predictive(simmr_out, group = 1, prob = 0.5, plot_ppc = TRUE) Arguments simmr_out A run of the simmr model from simmr_mcmc or simmr_ffvb. group Which group to run it for (currently only numeric rather than group names) prob The probability interval for the posterior predictives. The default is 0.5 (i.e. 50pc intervals) plot_ppc Whether to create a bayesplot of the posterior predictive or not. 
Value
A plot of the posterior predictive distribution and the simulated values.

Examples

data(geese_data_day1)
simmr_1 <- with(
  geese_data_day1,
  simmr_load(
    mixtures = mixtures,
    source_names = source_names,
    source_means = source_means,
    source_sds = source_sds,
    correction_means = correction_means,
    correction_sds = correction_sds,
    concentration_means = concentration_means
  )
)
# Plot
plot(simmr_1)
# Print
simmr_1
# MCMC run
simmr_1_out <- simmr_mcmc(simmr_1)
# Posterior predictive check
post_pred <- posterior_predictive(simmr_1_out)

print.simmr_input       Print simmr input object

Description
Print simmr input object

Usage
## S3 method for class 'simmr_input'
print(x, ...)

Arguments
x An object of class simmr_input
... Other arguments (not supported)

Value
A neat presentation of your simmr object.

print.simmr_output      Print simmr output object

Description
Print simmr output object

Usage
## S3 method for class 'simmr_output'
print(x, ...)

Arguments
x An object of class simmr_output
... Other arguments (not supported)

Value
Returns a neat summary of the object

See Also
simmr_mcmc and simmr_ffvb for creating simmr_output objects

prior_viz               Plot the prior distribution for a simmr run

Description
This function takes the output from simmr_mcmc or simmr_ffvb and plots the prior distribution to
enable visual inspection. This can be used by itself or together with posterior_predictive to visually
evaluate the influence of the prior on the posterior distribution.

Usage
prior_viz(
  simmr_out,
  group = 1,
  plot = TRUE,
  include_posterior = TRUE,
  n_sims = 10000,
  scales = "free"
)

Arguments
simmr_out A run of the simmr model from simmr_mcmc or simmr_ffvb
group Which group to run it for (currently only numeric rather than group names)
plot Whether to create a density plot of the prior or not. The simulated prior values are returned as part of the object
include_posterior Whether to include the posterior distribution on top of the priors. Defaults to TRUE
n_sims The number of simulations from the prior distribution
scales The type of scale from facet_wrap, allowing for fixed, free, free_x, free_y

Value
A list containing plot: the ggplot object (useful if customisation is required), and sim: the simulated
prior values, which can be compared with the posterior densities

Examples

data(geese_data_day1)
simmr_1 <- with(
  geese_data_day1,
  simmr_load(
    mixtures = mixtures,
    source_names = source_names,
    source_means = source_means,
    source_sds = source_sds,
    correction_means = correction_means,
    correction_sds = correction_sds,
    concentration_means = concentration_means
  )
)
# Plot
plot(simmr_1)
# Print
simmr_1
# MCMC run
simmr_1_out <- simmr_mcmc(simmr_1)
# Visualise the prior (and posterior) distributions
prior <- prior_viz(simmr_1_out)
head(prior$p_prior_sim)
summary(prior$p_prior_sim)

simmr                   simmr: A package for fitting stable isotope mixing models via JAGS and FFVB in R

Description
This package runs a simple Stable Isotope Mixing Model (SIMM) and is meant as a longer-term
replacement for the previous SIAR function. SIMMs are used to infer the dietary proportions of
organisms consuming various food sources from observations on the stable isotope values taken from
the organisms' tissue samples. However, SIMMs can also be used in other scenarios, such as in
sediment mixing or the composition of fatty acids. The main functions are simmr_load, simmr_mcmc,
and simmr_ffvb. The help files contain examples of the use of this package. See also the vignette for
a longer walkthrough.
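To give a sense of the quantity being modelled, the sketch below computes a single expected mixture
value from the weighted-average relationship that SIMMs assume; the numbers are entirely made up,
and the full likelihood used by simmr_mcmc and simmr_ffvb (including the treatment of source,
correction and residual variability) follows Parnell et al. (2013):

# Sketch with hypothetical values: one tracer, three sources
p <- c(0.5, 0.3, 0.2)    # dietary proportions (sum to 1)
mu <- c(-12, -18, -25)   # source means for the tracer
tef <- c(1.6, 1.6, 1.6)  # correction (TEF) means
q <- c(0.5, 0.4, 0.3)    # concentration dependence means
# Expected mixture value: concentration-weighted average of the corrected sources
sum(p * q * (mu + tef)) / sum(p * q)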
Details An even longer term replacement for properly running SIMMs is MixSIAR, which allows for more detailed random effects and the inclusion of covariates. Author(s) <NAME> <<EMAIL>> References <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Bayesian stable isotope mixing models. Environmetrics, 24(6):387–399, 2013. <NAME>, <NAME>, <NAME>, and <NAME>. Source partitioning using stable isotopes: coping with too much variation. PLoS ONE, 5(3):5, 2010. Examples # A first example with 2 tracers (isotopes), 10 observations, and 4 food sources data(geese_data_day1) simmr_in <- with( geese_data_day1, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot plot(simmr_in) # MCMC run simmr_out <- simmr_mcmc(simmr_in) # Check convergence - values should all be close to 1 summary(simmr_out, type = "diagnostics") # Look at output summary(simmr_out, type = "statistics") # Look at influence of priors prior_viz(simmr_out) # Plot output plot(simmr_out, type = "histogram") simmr_data_1 A simple fake stable isotope mixing data set Description A simple fake data set with 10 observations on 2 isotopes, with 4 sources, and with corrections/trophic enrichment factors (TEFs or TDFs), and concentration dependence means Usage simmr_data_1 Format A list with the following elements mixtures A two column matrix containing delta 13C and delta 15N values respectively source_names A character vector of the food source names tracer_names A character vector of the tracer names (d13C and d15N) source_means A matrix of source mean values for the tracers in the same order as mixtures above source_sds A matrix of source sd values for the tracers in the same order as mixtures above correction_means A matrix of TEFs mean values for the tracers in the same order as mixtures above correction_sds A matrix of TEFs sd values for the tracers in the same order as mixtures above concentration_means A matrix of concentration dependence mean values for the tracers in the same order as mixtures above ... @seealso simmr_mcmc for an example where it is used simmr_data_2 A 3-isotope fake stable isotope mixing data set Description A fake data set with 30 observations on 3 isotopes, with 4 sources, and with corrections/trophic enrichment factors (TEFs or TDFs), and concentration dependence means Usage simmr_data_2 Format A list with the following elements mixtures A three column matrix containing delta 13C, delta 15N, and delta 34S values respectively source_names A character vector of the food source names tracer_names A character vector of the tracer names (d13C, d15N, d34S) source_means A matrix of source mean values for the tracers in the same order as mixtures above source_sds A matrix of source sd values for the tracers in the same order as mixtures above correction_means A matrix of TEFs mean values for the tracers in the same order as mixtures above correction_sds A matrix of TEFs sd values for the tracers in the same order as mixtures above concentration_means A matrix of concentration dependence mean values for the tracers in the same order as mixtures above ... @seealso simmr_mcmc for an example where it is used simmr_elicit Function to allow informative prior distribution to be included in simmr Description The main simmr_mcmc function allows for a prior distribution to be set for the dietary proportions. 
The prior distribution is specified by transforming the dietary proportions using the centralised log ratio (CLR). The simmr_elicit and simmr_elicit functions allows the user to specify prior means and standard deviations for each of the dietary proportions, and then finds CLR-transformed values suitable for input into simmr_mcmc. Usage simmr_elicit( n_sources, proportion_means = rep(1/n_sources, n_sources), proportion_sds = rep(0.1, n_sources), n_sims = 1000 ) Arguments n_sources The number of sources required proportion_means The desired prior proportion means. These should sum to 1. Should be a vector of length n_sources proportion_sds The desired prior proportions standard deviations. These have no restricted sum but should be reasonable estimates for a proportion. n_sims The number of simulations for which to run the optimisation routine. Details The function takes the desired proportion means and standard deviations, and fits an optimised least squares to the means and standard deviations in turn to produced CLR-transformed estimates for use in simmr_mcmc. Using prior information in SIMMs is highly desirable given the restricted nature of the inference. The prior information might come from previous studies, other experiments, or other observations of e.g. animal behaviour. Due to the nature of the restricted space over which the dietary proportions can span, and the fact that this function uses numerical optimisation, the procedure will not match the target dietary proportion means and standard deviations exactly. If this problem is severe, try increasing the n_sims value. Value A list object with two components mean The best estimates of the mean to use in control.prior in simmr_mcmc sd The best estimates of the standard deviations to use in control.prior in simmr_mcmc Author(s) <NAME> <<EMAIL>> Examples # Data set: 10 observations, 2 tracers, 4 sources data(geese_data_day1) simmr_1 <- with( geese_data_day1, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # MCMC run simmr_1_out <- simmr_mcmc(simmr_1) # Look at the prior influence prior_viz(simmr_1_out) # Summary summary(simmr_1_out, "quantiles") # A bit vague: # 2.5% 25% 50% 75% 97.5% # Source A 0.029 0.115 0.203 0.312 0.498 # Source B 0.146 0.232 0.284 0.338 0.453 # Source C 0.216 0.255 0.275 0.296 0.342 # Source D 0.032 0.123 0.205 0.299 0.465 # Now suppose I had prior information that: # proportion means = 0.5,0.2,0.2,0.1 # proportion sds = 0.08,0.02,0.01,0.02 prior <- simmr_elicit(4, c(0.5, 0.2, 0.2, 0.1), c(0.08, 0.02, 0.01, 0.02)) simmr_1a_out <- simmr_mcmc(simmr_1, prior_control = list(means = prior$mean, sd = prior$sd, sigma_shape = c(3,3), sigma_rate = c(3/50, 3/50))) #' # Look at the prior influence now prior_viz(simmr_1a_out) summary(simmr_1a_out, "quantiles") # Much more precise: # 2.5% 25% 50% 75% 97.5% # Source A 0.441 0.494 0.523 0.553 0.610 # Source B 0.144 0.173 0.188 0.204 0.236 # Source C 0.160 0.183 0.196 0.207 0.228 # Source D 0.060 0.079 0.091 0.105 0.135 simmr_ffvb Run a simmr_input object through the Fixed Form Variational Bayes(FFVB) function Description This is the main function of simmr. 
It takes a simmr_input object created via simmr_load, runs it in fixed form Variational Bayes to determine the dietary proportions, and then outputs a simmr_output object for further analysis and plotting via summary.simmr_output and plot.simmr_output. Usage simmr_ffvb( simmr_in, prior_control = list(mu_0 = rep(0, simmr_in$n_sources), sigma_0 = 1), ffvb_control = list(n_output = 3600, S = 100, P = 9, beta_1 = 0.9, beta_2 = 0.9, tau = 100, eps_0 = 0.1, t_W = 50) ) Arguments simmr_in An object created via the function simmr_load prior_control A list of values including arguments named mu_0 (prior for mu), and sigma_0 (prior for sigma). ffvb_control A list of values including arguments named n_output (number of rows in theta output), S (number of samples taken at each iteration of the algorithm), P (pa- tience parameter), beta_1 and beta_2 (adaptive learning weights), tau (thresh- old for exploring learning space), eps_0 (fixed learning rate), t_W (rolling win- dow size) Value An object of class simmr_output with two named top-level components: input The simmr_input object given to the simmr_ffvb function output A set of outputs produced by the FFVB function. These can be analysed using the summary.simmr_output and plot.simmr_output functions. Author(s) <NAME> <<EMAIL>>, <NAME> References <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Bayesian stable isotope mixing models. Environmetrics, 24(6):387–399, 2013. <NAME>, <NAME>, <NAME>, and <NAME>. Source partitioning using stable isotopes: coping with too much variation. PLoS ONE, 5(3):5, 2010. See Also simmr_load for creating objects suitable for this function, plot.simmr_input for creating isospace plots, summary.simmr_output for summarising output, and plot.simmr_output for plotting out- put. 
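The ffvb_control defaults shown under Usage can be overridden when the variational fit needs tuning.
A minimal sketch only (not run), assuming simmr_in has been created with simmr_load as in the
examples below; here only S is increased, with every other setting kept at its documented default:

# Sketch: more samples per iteration of the FFVB algorithm
simmr_out_more_S <- simmr_ffvb(
  simmr_in,
  ffvb_control = list(
    n_output = 3600, S = 250, P = 9,
    beta_1 = 0.9, beta_2 = 0.9,
    tau = 100, eps_0 = 0.1, t_W = 50
  )
)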
Examples ## Not run: ## See the package vignette for a detailed run through of these 4 examples # Data set 1: 10 obs on 2 isos, 4 sources, with tefs and concdep data(geese_data_day1) simmr_1 <- with( geese_data_day1, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot plot(simmr_1) # Print simmr_1 # FFVB run simmr_1_out <- simmr_ffvb(simmr_1) # Print it print(simmr_1_out) # Summary summary(simmr_1_out, type = "correlations") summary(simmr_1_out, type = "statistics") ans <- summary(simmr_1_out, type = c("quantiles", "statistics")) # Plot plot(simmr_1_out, type = "boxplot") plot(simmr_1_out, type = "histogram") plot(simmr_1_out, type = "density") plot(simmr_1_out, type = "matrix") # Compare two sources compare_sources(simmr_1_out, source_names = c("Zostera", "Enteromorpha")) # Compare multiple sources compare_sources(simmr_1_out) ##################################################################################### # A version with just one observation data(geese_data_day1) simmr_2 <- with( geese_data_day1, simmr_load( mixtures = mixtures[1, , drop = FALSE], source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot plot(simmr_2) # FFVB run - automatically detects the single observation simmr_2_out <- simmr_ffvb(simmr_2) # Print it print(simmr_2_out) # Summary summary(simmr_2_out) ans <- summary(simmr_2_out, type = c("quantiles")) # Plot plot(simmr_2_out) plot(simmr_2_out, type = "boxplot") plot(simmr_2_out, type = "histogram") plot(simmr_2_out, type = "density") plot(simmr_2_out, type = "matrix") ##################################################################################### # Data set 2: 3 isotopes (d13C, d15N and d34S), 30 observations, 4 sources data(simmr_data_2) simmr_3 <- with( simmr_data_2, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Get summary print(simmr_3) # Plot 3 times plot(simmr_3) plot(simmr_3, tracers = c(2, 3)) plot(simmr_3, tracers = c(1, 3)) # See vignette('simmr') for fancier axis labels # FFVB run simmr_3_out <- simmr_ffvb(simmr_3) # Print it print(simmr_3_out) # Summary summary(simmr_3_out) summary(simmr_3_out, type = "quantiles") summary(simmr_3_out, type = "correlations") # Plot plot(simmr_3_out) plot(simmr_3_out, type = "boxplot") plot(simmr_3_out, type = "histogram") plot(simmr_3_out, type = "density") plot(simmr_3_out, type = "matrix") ################################################################ # Data set 5 - Multiple groups Geese data from Inger et al 2006 # Do this in raw data format - Note that there's quite a few mixtures! 
data(geese_data) simmr_5 <- with( geese_data, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means, group = groups ) ) # Plot plot(simmr_5, xlab = expression(paste(delta^13, "C (\\u2030)", sep = "")), ylab = expression(paste(delta^15, "N (\\u2030)", sep = "")), title = "Isospace plot of Inger et al Geese data" ) # Run MCMC for each group simmr_5_out <- simmr_ffvb(simmr_5) # Summarise output summary(simmr_5_out, type = "quantiles", group = 1) summary(simmr_5_out, type = "quantiles", group = c(1, 3)) summary(simmr_5_out, type = c("quantiles", "statistics"), group = c(1, 3)) # Plot - only a single group allowed plot(simmr_5_out, type = "boxplot", group = 2, title = "simmr output group 2") plot(simmr_5_out, type = c("density", "matrix"), grp = 6, title = "simmr output group 6") # Compare sources within a group compare_sources(simmr_5_out, source_names = c("Zostera", "U.lactuca"), group = 2) compare_sources(simmr_5_out, group = 2) # Compare between groups compare_groups(simmr_5_out, source = "Zostera", groups = 1:2) compare_groups(simmr_5_out, source = "Zostera", groups = 1:3) compare_groups(simmr_5_out, source = "U.lactuca", groups = c(4:5, 7, 2)) ## End(Not run) simmr_load Function to load in simmr data and check for errors Description This function takes in the mixture data, food source means and standard deviations, and (optionally) correction factor means and standard deviations, and concentration proportions. It performs some (non-exhaustive) checking of the data to make sure it will run through simmr. It outputs an object of class simmr_input. Usage simmr_load( mixtures, source_names, source_means, source_sds, correction_means = NULL, correction_sds = NULL, concentration_means = NULL, group = NULL ) Arguments mixtures The mixture data given as a matrix where the number of rows is the number of observations and the number of columns is the number of tracers (usually isotopes) source_names The names of the sources given as a character string source_means The means of the source values, given as a matrix where the number of rows is the number of sources and the number of columns is the number of tracers source_sds The standard deviations of the source values, given as a matrix where the number of rows is the number of sources and the number of columns is the number of tracers correction_means The means of the correction values, given as a matrix where the number of rows is the number of sources and the number of columns is the number of tracers. If not provided these are set to 0. correction_sds The standard deviations of the correction values, given as a matrix where the number of rows is the number of sources and the number of columns is the number of tracers. If not provided these are set to 0. concentration_means The means of the concentration values, given as a matrix where the number of rows is the number of sources and the number of columns is the number of tracers. These should be between 0 and 1. If not provided these are all set to 1. group A grouping variable. These can be a character or factor variable Details For standard stable isotope mixture modelling, the mixture matrix will contain a row for each indi- vidual and a column for each isotopic value. simmr will allow for any number of isotopes and any number of observations, within computational limits. 
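As a minimal sketch of how these dimensions line up (entirely hypothetical values: 3 observations,
2 tracers, 2 sources, with the optional correction and concentration arguments left at their defaults):

# Sketch only - made-up numbers, not real data
mix <- matrix(c(
  -11.4, 10.2,
  -11.9, 9.8,
  -12.2, 10.5
), ncol = 2, byrow = TRUE)                                    # 3 observations x 2 tracers
s_names <- c("Source A", "Source B")
s_means <- matrix(c(-14, 8, -20, 6), ncol = 2, byrow = TRUE)  # 2 sources x 2 tracers
s_sds <- matrix(c(1, 1, 1, 1), ncol = 2)                      # 2 sources x 2 tracers
simmr_tiny <- simmr_load(
  mixtures = mix,
  source_names = s_names,
  source_means = s_means,
  source_sds = s_sds
)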
The source means/sds should be provided for each food source on each isotope. The correction means (usually trophic enrichment factors) can be set as zero if required, and should be of the same shape as the source values. The concentration dependence means should be estimated values of the proportion of each element in the food source in question and should be given in proportion format between 0 and 1. At present there is no means to include concentration standard deviations. Value An object of class simmr_input with the following elements: mixtures The mixture data source_neams Source means sources_sds Source standard deviations correction_means Correction means correction_sds Correction standard deviations concentration_means Concentration dependence means n_obs The number of observations n_tracers The number of tracers/isotopes n_sources The number of sources n_groups The number of groups Author(s) <NAME> <<EMAIL>> See Also See simmr_mcmc for complete examples. Examples # A simple example with 10 observations, 2 tracers and 4 sources data(geese_data_day1) simmr_1 <- with( geese_data_day1, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) print(simmr_1) simmr_mcmc Run a simmr_input object through the main simmr Markov chain Monte Carlo (MCMC) function Description This is the main function of simmr. It takes a simmr_input object created via simmr_load, runs an MCMC to determine the dietary proportions, and then outputs a simmr_output object for further analysis and plotting via summary.simmr_output and plot.simmr_output. Usage simmr_mcmc( simmr_in, prior_control = list(means = rep(0, simmr_in$n_sources), sd = rep(1, simmr_in$n_sources), sigma_shape = rep(3, simmr_in$n_tracers), sigma_rate = rep(3/50, simmr_in$n_tracers)), mcmc_control = list(iter = 10000, burn = 1000, thin = 10, n.chain = 4) ) Arguments simmr_in An object created via the function simmr_load prior_control A list of values including arguments named: means and sd which represent the prior means and standard deviations of the dietary proportions in centralised log-ratio space; shape and rate which represent the prior distribution on the residual standard deviation. These can usually be left at their default values un- less you wish to include to include prior information, in which case you should use the function simmr_elicit. mcmc_control A list of values including arguments named iter (number of iterations), burn (size of burn-in), thin (amount of thinning), and n.chain (number of MCMC chains). Details If, after running simmr_mcmc the convergence diagnostics in summary.simmr_output are not sat- isfactory, the values of iter, burn and thin in mcmc_control should be increased by a factor of 10. Value An object of class simmr_output with two named top-level components: input The simmr_input object given to the simmr_mcmc function output A set of MCMC chains of class mcmc.list from the coda package. These can be analysed using the summary.simmr_output and plot.simmr_output func- tions. Author(s) <NAME> <<EMAIL>> References <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Bayesian stable isotope mixing models. Environmetrics, 24(6):387–399, 2013. <NAME>, <NAME>, <NAME>, and <NAME>. Source partitioning using stable isotopes: coping with too much variation. PLoS ONE, 5(3):5, 2010. 
See Also simmr_load for creating objects suitable for this function, plot.simmr_input for creating isospace plots, summary.simmr_output for summarising output, and plot.simmr_output for plotting out- put. Examples ## Not run: ## See the package vignette for a detailed run through of these 4 examples # Data set 1: 10 obs on 2 isos, 4 sources, with tefs and concdep data(geese_data_day1) simmr_1 <- with( geese_data_day1, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot plot(simmr_1) # Print simmr_1 # MCMC run simmr_1_out <- simmr_mcmc(simmr_1) # Print it print(simmr_1_out) # Summary summary(simmr_1_out, type = "diagnostics") summary(simmr_1_out, type = "correlations") summary(simmr_1_out, type = "statistics") ans <- summary(simmr_1_out, type = c("quantiles", "statistics")) # Plot plot(simmr_1_out, type = "boxplot") plot(simmr_1_out, type = "histogram") plot(simmr_1_out, type = "density") plot(simmr_1_out, type = "matrix") # Compare two sources compare_sources(simmr_1_out, source_names = c("Zostera", "Enteromorpha")) # Compare multiple sources compare_sources(simmr_1_out) ##################################################################################### # A version with just one observation data(geese_data_day1) simmr_2 <- with( geese_data_day1, simmr_load( mixtures = mixtures[1, , drop = FALSE], source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Plot plot(simmr_2) # MCMC run - automatically detects the single observation simmr_2_out <- simmr_mcmc(simmr_2) # Print it print(simmr_2_out) # Summary summary(simmr_2_out) summary(simmr_2_out, type = "diagnostics") ans <- summary(simmr_2_out, type = c("quantiles")) # Plot plot(simmr_2_out) plot(simmr_2_out, type = "boxplot") plot(simmr_2_out, type = "histogram") plot(simmr_2_out, type = "density") plot(simmr_2_out, type = "matrix") ##################################################################################### # Data set 2: 3 isotopes (d13C, d15N and d34S), 30 observations, 4 sources data(simmr_data_2) simmr_3 <- with( simmr_data_2, simmr_load( mixtures = mixtures, source_names = source_names, source_means = source_means, source_sds = source_sds, correction_means = correction_means, correction_sds = correction_sds, concentration_means = concentration_means ) ) # Get summary print(simmr_3) # Plot 3 times plot(simmr_3) plot(simmr_3, tracers = c(2, 3)) plot(simmr_3, tracers = c(1, 3)) # See vignette('simmr') for fancier axis labels # MCMC run simmr_3_out <- simmr_mcmc(simmr_3) # Print it print(simmr_3_out) # Summary summary(simmr_3_out) summary(simmr_3_out, type = "diagnostics") summary(simmr_3_out, type = "quantiles") summary(simmr_3_out, type = "correlations") # Plot plot(simmr_3_out) plot(simmr_3_out, type = "boxplot") plot(simmr_3_out, type = "histogram") plot(simmr_3_out, type = "density") plot(simmr_3_out, type = "matrix") ##################################################################################### # Data set 5 - Multiple groups Geese data from Inger et al 2006 # Do this in raw data format - Note that there's quite a few mixtures! 
data(geese_data)
simmr_5 <- with(
  geese_data,
  simmr_load(
    mixtures = mixtures,
    source_names = source_names,
    source_means = source_means,
    source_sds = source_sds,
    correction_means = correction_means,
    correction_sds = correction_sds,
    concentration_means = concentration_means,
    group = groups
  )
)
# Plot
plot(simmr_5,
  xlab = expression(paste(delta^13, "C (\\u2030)", sep = "")),
  ylab = expression(paste(delta^15, "N (\\u2030)", sep = "")),
  title = "Isospace plot of Inger et al Geese data"
)
# Run MCMC for each group
simmr_5_out <- simmr_mcmc(simmr_5)
# Summarise output
summary(simmr_5_out, type = "quantiles", group = 1)
summary(simmr_5_out, type = "quantiles", group = c(1, 3))
summary(simmr_5_out, type = c("quantiles", "statistics"), group = c(1, 3))
# Plot - only a single group allowed
plot(simmr_5_out, type = "boxplot", group = 2, title = "simmr output group 2")
plot(simmr_5_out, type = c("density", "matrix"), group = 6, title = "simmr output group 6")
# Compare sources within a group
compare_sources(simmr_5_out, source_names = c("Zostera", "U.lactuca"), group = 2)
compare_sources(simmr_5_out, group = 2)
# Compare between groups
compare_groups(simmr_5_out, source_name = "Zostera", groups = 1:2)
compare_groups(simmr_5_out, source_name = "Zostera", groups = 1:3)
compare_groups(simmr_5_out, source_name = "U.lactuca", groups = c(4:5, 7, 2))

## End(Not run)

square_data             An artificial data set used to indicate the effect of priors

Description
A fake box-shaped data set identified by Fry (2014) as a failing of SIMMs. See the link under Source
for more interpretation of these data and the output.

Usage
square_data

Format
A list with the following elements
mixtures A two column matrix containing delta 13C and delta 15N values respectively
source_names A character vector of the food source names
tracer_names A character vector of the tracer names (d13C, d15N)
source_means A matrix of source mean values for the tracers in the same order as mixtures above
source_sds A matrix of source sd values for the tracers in the same order as mixtures above
correction_means A matrix of TEFs mean values for the tracers in the same order as mixtures above
correction_sds A matrix of TEFs sd values for the tracers in the same order as mixtures above
concentration_means A matrix of concentration dependence mean values for the tracers in the same order as mixtures above

See simmr_mcmc for an example where it is used.

Source
<doi:10.3354/meps10535>

summary.simmr_output    Summarises the output created with simmr_mcmc or simmr_ffvb

Description
Produces textual summaries and convergence diagnostics for an object created with simmr_mcmc or
simmr_ffvb. The different options are: 'diagnostics', which produces Brooks-Gelman-Rubin diagnostics
to assess MCMC convergence; 'quantiles', which produces credible intervals for the parameters;
'statistics', which produces means and standard deviations; and 'correlations', which produces
correlations between the parameters.

Usage
## S3 method for class 'simmr_output'
summary(
  object,
  type = c("diagnostics", "quantiles", "statistics", "correlations"),
  group = 1,
  ...
)

Arguments
object An object of class simmr_output produced by the function simmr_mcmc or simmr_ffvb
type The type of output required. At least one of 'diagnostics', 'quantiles', 'statistics', or 'correlations'.
group Which group or groups the output is required for.
... Not used

Details
The quantile output allows easy calculation of 95 per cent credible intervals of the posterior dietary
proportions.
The correlations, along with the matrix plot in plot.simmr_output, allow the user to judge which
sources are non-identifiable. The Gelman diagnostic values should be close to 1 to ensure satisfactory
convergence.
When multiple groups are included, the output automatically includes the results for all groups.

Value
A list containing the following components:
gelman The convergence diagnostics
quantiles The quantiles of each parameter from the posterior distribution
statistics The means and standard deviations of each parameter
correlations The posterior correlations between the parameters
Note that this object is returned invisibly, so it will be discarded unless the function call is assigned
to an object as in the example below.

Author(s)
<NAME> <<EMAIL>>, <NAME>

See Also
See simmr_mcmc and simmr_ffvb for creating objects suitable for this function, and many more
examples. See also simmr_load for creating simmr objects, plot.simmr_input for creating isospace
plots, plot.simmr_output for plotting output.

Examples

# A simple example with 10 observations, 2 tracers and 4 sources
# The data
data(geese_data_day1)
simmr_1 <- with(
  geese_data_day1,
  simmr_load(
    mixtures = mixtures,
    source_names = source_names,
    source_means = source_means,
    source_sds = source_sds,
    correction_means = correction_means,
    correction_sds = correction_sds,
    concentration_means = concentration_means
  )
)
# Plot
plot(simmr_1)
# MCMC run
simmr_1_out <- simmr_mcmc(simmr_1)
# Summarise
summary(simmr_1_out) # This outputs all the summaries
summary(simmr_1_out, type = "diagnostics") # Just the diagnostics
# Store the output in an object
ans <- summary(simmr_1_out, type = c("quantiles", "statistics"))
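If the Gelman diagnostics reported above are not close to 1, the simmr_mcmc help page recommends
increasing iter, burn and thin by a factor of 10. A minimal sketch (assuming simmr_1 from the
example above) of such a re-run via the documented mcmc_control argument:

# Sketch: a longer run for a model whose diagnostics were unsatisfactory
simmr_1_long <- simmr_mcmc(
  simmr_1,
  mcmc_control = list(iter = 100000, burn = 10000, thin = 100, n.chain = 4)
)
summary(simmr_1_long, type = "diagnostics")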
Module malachite::bools === Functions for working with `bool`s. Modules --- * constantsConstants associated with `bool`s. * exhaustiveAn iterator that generates `bool`s without repetition. * not_assignThe implementation of `NotAssign` for `bool`. * randomIterators that generate `bool`s randomly. Module malachite::chars === Functions for working with `char`s. Modules --- * constantsConstants associated with `char`s. * crementFunctions for incrementing and decrementing `char`s. * exhaustiveIterators that generate `char`s without repetition. * randomIterators that generate `char`s randomly. Functions --- * char_is_graphicDetermines whether a `char` is graphic. Module malachite::comparison === Macros and traits related to comparing values. Modules --- * macrosMacros related to comparing values. * traitsTraits related to comparing values. Module malachite::integer === `Integer`, a type representing integers with arbitrarily large absolute values. Modules --- * arithmeticTraits for arithmetic. * comparisonTraits for comparing `Integer`s for equality or order. * conversionTraits for converting to and from `Integer`s, converting to and from strings, and extracting digits. * exhaustiveIterators that generate `Integer`s without repetition. * logicTraits for logic and bit manipulation. * randomIterators that generate `Integer`s randomly. Structs --- * IntegerAn integer. Struct malachite::integer::Integer === ``` pub struct Integer { /* private fields */ } ``` An integer. Any `Integer` whose absolute value is small enough to fit into a `Limb` is represented inline. Only integers outside this range incur the costs of heap-allocation. Implementations --- ### impl Integer #### pub const fn unsigned_abs_ref(&self) -> &Natural Finds the absolute value of an `Integer`, taking the `Integer` by reference and returning a reference to the internal `Natural` absolute value. $$ f(x) = |x|. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(*Integer::ZERO.unsigned_abs_ref(), 0); assert_eq!(*Integer::from(123).unsigned_abs_ref(), 123); assert_eq!(*Integer::from(-123).unsigned_abs_ref(), 123); ``` #### pub fn mutate_unsigned_abs<F, T>(&mut self, f: F) -> Twhere F: FnOnce(&mut Natural) -> T, Mutates the absolute value of an `Integer` using a provided closure, and then returns whatever the closure returns. This function is similar to the `unsigned_abs_ref` function, which returns a reference to the absolute value. A function that returns a *mutable* reference would be too dangerous, as it could leave the `Integer` in an invalid state (specifically, with a negative sign but a zero absolute value). So rather than returning a mutable reference, this function allows mutation of the absolute value using a closure. After the closure executes, this function ensures that the `Integer` remains valid. There is only constant time and memory overhead on top of the time and memory used by the closure. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignMod; use malachite_base::num::basic::traits::Two; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; let mut n = Integer::from(-123); let remainder = n.mutate_unsigned_abs(|x| x.div_assign_mod(Natural::TWO)); assert_eq!(n, -61); assert_eq!(remainder, 1); let mut n = Integer::from(-123); n.mutate_unsigned_abs(|x| *x >>= 10); assert_eq!(n, 0); ``` ### impl Integer #### pub fn from_sign_and_abs(sign: bool, abs: Natural) -> Integer Converts a sign and a `Natural` to an `Integer`, taking the `Natural` by value. The `Natural` becomes the `Integer`’s absolute value, and the sign indicates whether the `Integer` should be non-negative. If the `Natural` is zero, then the `Integer` will be non-negative regardless of the sign. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from_sign_and_abs(true, Natural::from(123u32)), 123); assert_eq!(Integer::from_sign_and_abs(false, Natural::from(123u32)), -123); ``` #### pub fn from_sign_and_abs_ref(sign: bool, abs: &Natural) -> Integer Converts a sign and an `Natural` to an `Integer`, taking the `Natural` by reference. The `Natural` becomes the `Integer`’s absolute value, and the sign indicates whether the `Integer` should be non-negative. If the `Natural` is zero, then the `Integer` will be non-negative regardless of the sign. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, $n$ is `abs.significant_bits()`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from_sign_and_abs_ref(true, &Natural::from(123u32)), 123); assert_eq!(Integer::from_sign_and_abs_ref(false, &Natural::from(123u32)), -123); ``` ### impl Integer #### pub const fn const_from_unsigned(x: u64) -> Integer Converts a `Limb` to an `Integer`. This function is const, so it may be used to define constants. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; const TEN: Integer = Integer::const_from_unsigned(10); assert_eq!(TEN, 10); ``` #### pub const fn const_from_signed(x: i64) -> Integer Converts a `SignedLimb` to an `Integer`. This function is const, so it may be used to define constants. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; const TEN: Integer = Integer::const_from_signed(10); assert_eq!(TEN, 10); const NEGATIVE_TEN: Integer = Integer::const_from_signed(-10); assert_eq!(NEGATIVE_TEN, -10); ``` ### impl Integer #### pub fn from_twos_complement_limbs_asc(xs: &[u64]) -> Integer Converts a slice of limbs to an `Integer`, in ascending order, so that less significant limbs have lower indices in the input slice. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is non-negative, and if the bit is one it is negative. If the slice is empty, zero is returned. This function borrows a slice. If taking ownership of a `Vec` is possible instead, `from_owned_twos_complement_limbs_asc` is more efficient. This function is more efficient than `from_twos_complement_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. 
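As a worked instance of the two's complement convention used in the examples that follow (a sketch assuming 32-bit limbs, as in the `Limb::WIDTH == u32::WIDTH` branches): a single limb $\ell$ whose most significant bit is set encodes the negative value $\ell - 2^{32}$, and a two-limb slice $[\ell_0, \ell_1]$ whose most significant limb has its top bit set encodes $\ell_0 + 2^{32}\ell_1 - 2^{64}$. For example,
$$ 4294967173 - 2^{32} = -123, \qquad 727379968 + 2^{32} \cdot 4294967063 - 2^{64} = -10^{12}. $$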
##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Integer::from_twos_complement_limbs_asc(&[]), 0); assert_eq!(Integer::from_twos_complement_limbs_asc(&[123]), 123); assert_eq!(Integer::from_twos_complement_limbs_asc(&[4294967173]), -123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from_twos_complement_limbs_asc(&[3567587328, 232]), 1000000000000u64 ); assert_eq!( Integer::from_twos_complement_limbs_asc(&[727379968, 4294967063]), -1000000000000i64 ); } ``` #### pub fn from_twos_complement_limbs_desc(xs: &[u64]) -> Integer Converts a slice of limbs to an `Integer`, in descending order, so that less significant limbs have higher indices in the input slice. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is non-negative, and if the bit is one it is negative. If the slice is empty, zero is returned. This function borrows a slice. If taking ownership of a `Vec` is possible instead, `from_owned_twos_complement_limbs_desc` is more efficient. This function is less efficient than `from_twos_complement_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Integer::from_twos_complement_limbs_desc(&[]), 0); assert_eq!(Integer::from_twos_complement_limbs_desc(&[123]), 123); assert_eq!(Integer::from_twos_complement_limbs_desc(&[4294967173]), -123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from_twos_complement_limbs_desc(&[232, 3567587328]), 1000000000000u64 ); assert_eq!( Integer::from_twos_complement_limbs_desc(&[4294967063, 727379968]), -1000000000000i64 ); } ``` #### pub fn from_owned_twos_complement_limbs_asc(xs: Vec<u64, Global>) -> Integer Converts a slice of limbs to an `Integer`, in ascending order, so that less significant limbs have lower indices in the input slice. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is non-negative, and if the bit is one it is negative. If the slice is empty, zero is returned. This function takes ownership of a `Vec`. If it’s necessary to borrow a slice instead, use `from_twos_complement_limbs_asc` This function is more efficient than `from_owned_twos_complement_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. 
##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Integer::from_owned_twos_complement_limbs_asc(vec![]), 0); assert_eq!(Integer::from_owned_twos_complement_limbs_asc(vec![123]), 123); assert_eq!(Integer::from_owned_twos_complement_limbs_asc(vec![4294967173]), -123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from_owned_twos_complement_limbs_asc(vec![3567587328, 232]), 1000000000000i64 ); assert_eq!( Integer::from_owned_twos_complement_limbs_asc(vec![727379968, 4294967063]), -1000000000000i64 ); } ``` #### pub fn from_owned_twos_complement_limbs_desc(xs: Vec<u64, Global>) -> Integer Converts a slice of limbs to an `Integer`, in descending order, so that less significant limbs have higher indices in the input slice. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is non-negative, and if the bit is one it is negative. If the slice is empty, zero is returned. This function takes ownership of a `Vec`. If it’s necessary to borrow a slice instead, use `from_twos_complement_limbs_desc`. This function is less efficient than `from_owned_twos_complement_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Integer::from_owned_twos_complement_limbs_desc(vec![]), 0); assert_eq!(Integer::from_owned_twos_complement_limbs_desc(vec![123]), 123); assert_eq!(Integer::from_owned_twos_complement_limbs_desc(vec![4294967173]), -123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from_owned_twos_complement_limbs_desc(vec![232, 3567587328]), 1000000000000i64 ); assert_eq!( Integer::from_owned_twos_complement_limbs_desc(vec![4294967063, 727379968]), -1000000000000i64 ); } ``` ### impl Integer #### pub fn to_twos_complement_limbs_asc(&self) -> Vec<u64, GlobalReturns the limbs of an `Integer`, in ascending order, so that less significant limbs have lower indices in the output vector. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is positive, and if the bit is one it is negative. There are no trailing zero limbs if the `Integer` is positive or trailing `Limb::MAX` limbs if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no limbs. This function borrows `self`. If taking ownership of `self` is possible, `into_twos_complement_limbs_asc` is more efficient. This function is more efficient than `to_twos_complement_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.to_twos_complement_limbs_asc().is_empty()); assert_eq!(Integer::from(123).to_twos_complement_limbs_asc(), &[123]); assert_eq!(Integer::from(-123).to_twos_complement_limbs_asc(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).to_twos_complement_limbs_asc(), &[3567587328, 232] ); assert_eq!( (-Integer::from(10u32).pow(12)).to_twos_complement_limbs_asc(), &[727379968, 4294967063] ); } ``` #### pub fn to_twos_complement_limbs_desc(&self) -> Vec<u64, GlobalReturns the limbs of an `Integer`, in descending order, so that less significant limbs have higher indices in the output vector. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is positive, and if the bit is one it is negative. There are no leading zero limbs if the `Integer` is non-negative or leading `Limb::MAX` limbs if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no limbs. This is similar to how `BigInteger`s in Java are represented. This function borrows `self`. If taking ownership of `self` is possible, `into_twos_complement_limbs_desc` is more efficient. This function is less efficient than `to_twos_complement_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.to_twos_complement_limbs_desc().is_empty()); assert_eq!(Integer::from(123).to_twos_complement_limbs_desc(), &[123]); assert_eq!(Integer::from(-123).to_twos_complement_limbs_desc(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).to_twos_complement_limbs_desc(), &[232, 3567587328] ); assert_eq!( (-Integer::from(10u32).pow(12)).to_twos_complement_limbs_desc(), &[4294967063, 727379968] ); } ``` #### pub fn into_twos_complement_limbs_asc(self) -> Vec<u64, GlobalReturns the limbs of an `Integer`, in ascending order, so that less significant limbs have lower indices in the output vector. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is positive, and if the bit is one it is negative. There are no trailing zero limbs if the `Integer` is positive or trailing `Limb::MAX` limbs if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no limbs. This function takes ownership of `self`. If it’s necessary to borrow `self` instead, use `to_twos_complement_limbs_asc`. This function is more efficient than `into_twos_complement_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.into_twos_complement_limbs_asc().is_empty()); assert_eq!(Integer::from(123).into_twos_complement_limbs_asc(), &[123]); assert_eq!(Integer::from(-123).into_twos_complement_limbs_asc(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).into_twos_complement_limbs_asc(), &[3567587328, 232] ); assert_eq!( (-Integer::from(10u32).pow(12)).into_twos_complement_limbs_asc(), &[727379968, 4294967063] ); } ``` #### pub fn into_twos_complement_limbs_desc(self) -> Vec<u64, GlobalReturns the limbs of an `Integer`, in descending order, so that less significant limbs have higher indices in the output vector. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is positive, and if the bit is one it is negative. There are no leading zero limbs if the `Integer` is non-negative or leading `Limb::MAX` limbs if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no limbs. This is similar to how `BigInteger`s in Java are represented. This function takes ownership of `self`. If it’s necessary to borrow `self` instead, use `to_twos_complement_limbs_desc`. This function is less efficient than `into_twos_complement_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.into_twos_complement_limbs_desc().is_empty()); assert_eq!(Integer::from(123).into_twos_complement_limbs_desc(), &[123]); assert_eq!(Integer::from(-123).into_twos_complement_limbs_desc(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).into_twos_complement_limbs_desc(), &[232, 3567587328] ); assert_eq!( (-Integer::from(10u32).pow(12)).into_twos_complement_limbs_desc(), &[4294967063, 727379968] ); } ``` #### pub fn twos_complement_limbs(&self) -> TwosComplementLimbIterator<'_Returns a double-ended iterator over the twos-complement limbs of an `Integer`. The forward order is ascending, so that less significant limbs appear first. There may be a most-significant sign-extension limb. If it’s necessary to get a `Vec` of all the twos_complement limbs, consider using `to_twos_complement_limbs_asc`, `to_twos_complement_limbs_desc`, `into_twos_complement_limbs_asc`, or `into_twos_complement_limbs_desc` instead. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use itertools::Itertools; use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.twos_complement_limbs().next().is_none()); assert_eq!(Integer::from(123).twos_complement_limbs().collect_vec(), &[123]); assert_eq!(Integer::from(-123).twos_complement_limbs().collect_vec(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).twos_complement_limbs().collect_vec(), &[3567587328, 232] ); // Sign-extension for a non-negative `Integer` assert_eq!( Integer::from(4294967295i64).twos_complement_limbs().collect_vec(), &[4294967295, 0] ); assert_eq!( (-Integer::from(10u32).pow(12)).twos_complement_limbs().collect_vec(), &[727379968, 4294967063] ); // Sign-extension for a negative `Integer` assert_eq!( (-Integer::from(4294967295i64)).twos_complement_limbs().collect_vec(), &[1, 4294967295] ); assert!(Integer::ZERO.twos_complement_limbs().rev().next().is_none()); assert_eq!(Integer::from(123).twos_complement_limbs().rev().collect_vec(), &[123]); assert_eq!( Integer::from(-123).twos_complement_limbs().rev().collect_vec(), &[4294967173] ); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).twos_complement_limbs().rev().collect_vec(), &[232, 3567587328] ); // Sign-extension for a non-negative `Integer` assert_eq!( Integer::from(4294967295i64).twos_complement_limbs().rev().collect_vec(), &[0, 4294967295] ); assert_eq!( (-Integer::from(10u32).pow(12)).twos_complement_limbs().rev().collect_vec(), &[4294967063, 727379968] ); // Sign-extension for a negative `Integer` assert_eq!( (-Integer::from(4294967295i64)).twos_complement_limbs().rev().collect_vec(), &[4294967295, 1] ); } ``` ### impl Integer #### pub fn checked_count_ones(&self) -> Option<u64Counts the number of ones in the binary expansion of an `Integer`. If the `Integer` is negative, then the number of ones is infinite, so `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.checked_count_ones(), Some(0)); // 105 = 1101001b assert_eq!(Integer::from(105).checked_count_ones(), Some(4)); assert_eq!(Integer::from(-105).checked_count_ones(), None); // 10^12 = 1110100011010100101001010001000000000000b assert_eq!(Integer::from(10u32).pow(12).checked_count_ones(), Some(13)); ``` ### impl Integer #### pub fn checked_count_zeros(&self) -> Option<u64Counts the number of zeros in the binary expansion of an `Integer`. If the `Integer` is non-negative, then the number of zeros is infinite, so `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.checked_count_zeros(), None); // -105 = 10010111 in two's complement assert_eq!(Integer::from(-105).checked_count_zeros(), Some(3)); assert_eq!(Integer::from(105).checked_count_zeros(), None); // -10^12 = 10001011100101011010110101111000000000000 in two's complement assert_eq!((-Integer::from(10u32).pow(12)).checked_count_zeros(), Some(24)); ``` ### impl Integer #### pub fn trailing_zeros(&self) -> Option<u64Returns the number of trailing zeros in the binary expansion of an `Integer` (equivalently, the multiplicity of 2 in its prime factorization), or `None` is the `Integer` is 0. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.trailing_zeros(), None); assert_eq!(Integer::from(3).trailing_zeros(), Some(0)); assert_eq!(Integer::from(-72).trailing_zeros(), Some(3)); assert_eq!(Integer::from(100).trailing_zeros(), Some(2)); assert_eq!((-Integer::from(10u32).pow(12)).trailing_zeros(), Some(12)); ``` Trait Implementations --- ### impl<'a> Abs for &'a Integer #### fn abs(self) -> Integer Takes the absolute value of an `Integer`, taking the `Integer` by reference. $$ f(x) = |x|. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Abs; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!((&Integer::ZERO).abs(), 0); assert_eq!((&Integer::from(123)).abs(), 123); assert_eq!((&Integer::from(-123)).abs(), 123); ``` #### type Output = Integer ### impl Abs for Integer #### fn abs(self) -> Integer Takes the absolute value of an `Integer`, taking the `Integer` by value. $$ f(x) = |x|. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Abs; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.abs(), 0); assert_eq!(Integer::from(123).abs(), 123); assert_eq!(Integer::from(-123).abs(), 123); ``` #### type Output = Integer ### impl AbsAssign for Integer #### fn abs_assign(&mut self) Replaces an `Integer` with its absolute value. $$ x \gets |x|. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::AbsAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.abs_assign(); assert_eq!(x, 0); let mut x = Integer::from(123); x.abs_assign(); assert_eq!(x, 123); let mut x = Integer::from(-123); x.abs_assign(); assert_eq!(x, 123); ``` ### impl<'a, 'b> Add<&'a Integer> for &'b Integer #### fn add(self, other: &'a Integer) -> Integer Adds two `Integer`s, taking both by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
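All four ownership combinations of `+` return the same value; the by-value forms exist so the library can reuse an operand's allocation instead of copying. A small sketch of that interchangeability, assuming `Integer: Clone`:

```
use malachite_base::num::arithmetic::traits::Pow;
use malachite_nz::integer::Integer;

let x = Integer::from(10u32).pow(12);
let y = -Integer::from(10u32).pow(12);
// Borrowing both operands leaves x and y usable afterwards...
let by_ref = &x + &y;
// ...while the by-value form consumes (clones of) its operands.
let by_val = x.clone() + y.clone();
assert_eq!(by_ref, by_val);
assert_eq!(by_ref, 0);
```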
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(&Integer::ZERO + &Integer::from(123), 123); assert_eq!(&Integer::from(-123) + &Integer::ZERO, -123); assert_eq!(&Integer::from(-123) + &Integer::from(456), 333); assert_eq!( &-Integer::from(10u32).pow(12) + &(Integer::from(10u32).pow(12) << 1), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `+` operator.### impl<'a> Add<&'a Integer> for Integer #### fn add(self, other: &'a Integer) -> Integer Adds two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO + &Integer::from(123), 123); assert_eq!(Integer::from(-123) + &Integer::ZERO, -123); assert_eq!(Integer::from(-123) + &Integer::from(456), 333); assert_eq!( -Integer::from(10u32).pow(12) + &(Integer::from(10u32).pow(12) << 1), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `+` operator.### impl<'a> Add<Integer> for &'a Integer #### fn add(self, other: Integer) -> Integer Adds two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(&Integer::ZERO + Integer::from(123), 123); assert_eq!(&Integer::from(-123) + Integer::ZERO, -123); assert_eq!(&Integer::from(-123) + Integer::from(456), 333); assert_eq!( &-Integer::from(10u32).pow(12) + (Integer::from(10u32).pow(12) << 1), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `+` operator.### impl Add<Integer> for Integer #### fn add(self, other: Integer) -> Integer Adds two `Integer`s, taking both by value. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO + Integer::from(123), 123); assert_eq!(Integer::from(-123) + Integer::ZERO, -123); assert_eq!(Integer::from(-123) + Integer::from(456), 333); assert_eq!( -Integer::from(10u32).pow(12) + (Integer::from(10u32).pow(12) << 1), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `+` operator.### impl<'a> AddAssign<&'a Integer> for Integer #### fn add_assign(&mut self, other: &'a Integer) Adds an `Integer` to an `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
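`+=` is the in-place counterpart of `+`: the result is the same, but the left-hand side's storage is reused. A brief sketch with arbitrarily chosen values:

```
use malachite_nz::integer::Integer;

let mut x = Integer::from(123);
let y = Integer::from(-456);
// Compute the sum out of place first, then in place, and compare.
let sum = &x + &y;
x += &y;
assert_eq!(x, sum);
assert_eq!(x, -333);
```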
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x += &(-Integer::from(10u32).pow(12)); x += &(Integer::from(10u32).pow(12) * Integer::from(2u32)); x += &(-Integer::from(10u32).pow(12) * Integer::from(3u32)); x += &(Integer::from(10u32).pow(12) * Integer::from(4u32)); assert_eq!(x, 2000000000000u64); ``` ### impl AddAssign<Integer> for Integer #### fn add_assign(&mut self, other: Integer) Adds an `Integer` to an `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x += -Integer::from(10u32).pow(12); x += Integer::from(10u32).pow(12) * Integer::from(2u32); x += -Integer::from(10u32).pow(12) * Integer::from(3u32); x += Integer::from(10u32).pow(12) * Integer::from(4u32); assert_eq!(x, 2000000000000u64); ``` ### impl<'a> AddMul<&'a Integer, Integer> for Integer #### fn add_mul(self, y: &'a Integer, z: Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking the first and third by value and the second by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).add_mul(&Integer::from(3u32), Integer::from(4u32)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .add_mul(&Integer::from(0x10000), -Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a, 'b, 'c> AddMul<&'a Integer, &'b Integer> for &'c Integer #### fn add_mul(self, y: &'a Integer, z: &'b Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking all three by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(10u32)).add_mul(&Integer::from(3u32), &Integer::from(4u32)), 22 ); assert_eq!( (&-Integer::from(10u32).pow(12)) .add_mul(&Integer::from(0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a, 'b> AddMul<&'a Integer, &'b Integer> for Integer #### fn add_mul(self, y: &'a Integer, z: &'b Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking the first by value and the second and third by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
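`add_mul` fuses an addition with a multiplication, computing $x + yz$ without a separately materialized product. A sketch checking it against the two-step form, assuming `Integer: Clone`:

```
use malachite_base::num::arithmetic::traits::AddMul;
use malachite_nz::integer::Integer;

let x = Integer::from(10);
let y = Integer::from(-3);
let z = Integer::from(7);
// x + y * z computed in one fused step...
let fused = x.clone().add_mul(y.clone(), z.clone());
// ...equals the same expression written with separate * and +.
assert_eq!(fused, &x + y * z);
assert_eq!(fused, -11);
```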
##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).add_mul(&Integer::from(3u32), &Integer::from(4u32)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .add_mul(&Integer::from(0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a> AddMul<Integer, &'a Integer> for Integer #### fn add_mul(self, y: Integer, z: &'a Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking the first two by value and the third by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).add_mul(Integer::from(3u32), &Integer::from(4u32)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .add_mul(Integer::from(0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl AddMul<Integer, Integer> for Integer #### fn add_mul(self, y: Integer, z: Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking all three by value. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).add_mul(Integer::from(3u32), Integer::from(4u32)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .add_mul(Integer::from(0x10000), -Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a> AddMulAssign<&'a Integer, Integer> for Integer #### fn add_mul_assign(&mut self, y: &'a Integer, z: Integer) Adds the product of two other `Integer`s to an `Integer` in place, taking the first `Integer` on the right-hand side by reference and the second by value. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.add_mul_assign(&Integer::from(3u32), Integer::from(4u32)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.add_mul_assign(&Integer::from(0x10000), -Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a, 'b> AddMulAssign<&'a Integer, &'b Integer> for Integer #### fn add_mul_assign(&mut self, y: &'a Integer, z: &'b Integer) Adds the product of two other `Integer`s to an `Integer` in place, taking both `Integer`s on the right-hand side by reference. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
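`add_mul_assign` is the in-place form of the same fused operation, equivalent in result to `x += y * z`. A minimal sketch:

```
use malachite_base::num::arithmetic::traits::AddMulAssign;
use malachite_nz::integer::Integer;

let mut a = Integer::from(100);
let mut b = Integer::from(100);
// Fused in-place update...
a.add_mul_assign(Integer::from(-4), Integer::from(25));
// ...matches the unfused multiply-then-add-assign.
b += Integer::from(-4) * Integer::from(25);
assert_eq!(a, b);
assert_eq!(a, 0);
```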
##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.add_mul_assign(&Integer::from(3u32), &Integer::from(4u32)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.add_mul_assign(&Integer::from(0x10000), &-Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a> AddMulAssign<Integer, &'a Integer> for Integer #### fn add_mul_assign(&mut self, y: Integer, z: &'a Integer) Adds the product of two other `Integer`s to an `Integer` in place, taking the first `Integer` on the right-hand side by value and the second by reference. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.add_mul_assign(Integer::from(3u32), &Integer::from(4u32)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.add_mul_assign(Integer::from(0x10000), &-Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl AddMulAssign<Integer, Integer> for Integer #### fn add_mul_assign(&mut self, y: Integer, z: Integer) Adds the product of two other `Integer`s to an `Integer` in place, taking both `Integer`s on the right-hand side by value. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.add_mul_assign(Integer::from(3u32), Integer::from(4u32)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.add_mul_assign(Integer::from(0x10000), -Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl Binary for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts an `Integer` to a binary `String`. Using the `#` format flag prepends `"0b"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
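Because this is the standard `Binary` implementation, the usual `format!` flags apply, and `to_binary_string` (from `malachite_base::strings::ToBinaryString`, as in the examples below) should produce the same digits as `{:b}`. A short sketch:

```
use malachite_base::strings::ToBinaryString;
use malachite_nz::integer::Integer;

let n = Integer::from(-10);
// Sign-magnitude binary digits; `#` adds the 0b prefix after the sign.
assert_eq!(format!("{:b}", n), "-1010");
assert_eq!(format!("{:#b}", n), "-0b1010");
assert_eq!(n.to_binary_string(), "-1010");
```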
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToBinaryString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_binary_string(), "0"); assert_eq!(Integer::from(123).to_binary_string(), "1111011"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_binary_string(), "1110100011010100101001010001000000000000" ); assert_eq!(format!("{:011b}", Integer::from(123)), "00001111011"); assert_eq!(Integer::from(-123).to_binary_string(), "-1111011"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_binary_string(), "-1110100011010100101001010001000000000000" ); assert_eq!(format!("{:011b}", Integer::from(-123)), "-0001111011"); assert_eq!(format!("{:#b}", Integer::ZERO), "0b0"); assert_eq!(format!("{:#b}", Integer::from(123)), "0b1111011"); assert_eq!( format!("{:#b}", Integer::from_str("1000000000000").unwrap()), "0b1110100011010100101001010001000000000000" ); assert_eq!(format!("{:#011b}", Integer::from(123)), "0b001111011"); assert_eq!(format!("{:#b}", Integer::from(-123)), "-0b1111011"); assert_eq!( format!("{:#b}", Integer::from_str("-1000000000000").unwrap()), "-0b1110100011010100101001010001000000000000" ); assert_eq!(format!("{:#011b}", Integer::from(-123)), "-0b01111011"); ``` ### impl<'a> BinomialCoefficient<&'a Integer> for Integer #### fn binomial_coefficient(n: &'a Integer, k: &'a Integer) -> Integer Computes the binomial coefficient of two `Integer`s, taking both by reference. The second argument must be non-negative, but the first may be negative. If it is, the identity $\binom{-n}{k} = (-1)^k \binom{n+k-1}{k}$ is used. $$ f(n, k) = \begin{cases} \binom{n}{k} & \text{if} \quad n \geq 0, \\ (-1)^k \binom{-n+k-1}{k} & \text{if} \quad n < 0. \end{cases} $$ ##### Worst-case complexity TODO ##### Panics Panics if $k$ is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::BinomialCoefficient; use malachite_nz::integer::Integer; assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(0)), 1); assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(1)), 4); assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(2)), 6); assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(3)), 4); assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(4)), 1); assert_eq!(Integer::binomial_coefficient(&Integer::from(10), &Integer::from(5)), 252); assert_eq!( Integer::binomial_coefficient(&Integer::from(100), &Integer::from(50)).to_string(), "100891344545564193334812497256" ); assert_eq!(Integer::binomial_coefficient(&Integer::from(-3), &Integer::from(0)), 1); assert_eq!(Integer::binomial_coefficient(&Integer::from(-3), &Integer::from(1)), -3); assert_eq!(Integer::binomial_coefficient(&Integer::from(-3), &Integer::from(2)), 6); assert_eq!(Integer::binomial_coefficient(&Integer::from(-3), &Integer::from(3)), -10); ``` ### impl BinomialCoefficient<Integer> for Integer #### fn binomial_coefficient(n: Integer, k: Integer) -> Integer Computes the binomial coefficient of two `Integer`s, taking both by value. The second argument must be non-negative, but the first may be negative. If it is, the identity $\binom{-n}{k} = (-1)^k \binom{n+k-1}{k}$ is used. $$ f(n, k) = \begin{cases} \binom{n}{k} & \text{if} \quad n \geq 0, \\ (-1)^k \binom{-n+k-1}{k} & \text{if} \quad n < 0. \end{cases} $$ ##### Worst-case complexity TODO ##### Panics Panics if $k$ is negative. 
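A worked instance of the negative-upper-argument identity above, using only values that also appear in the examples: with $n = 3$ and $k = 2$, $\binom{-3}{2} = (-1)^2\binom{4}{2} = 6$.

```
use malachite_base::num::arithmetic::traits::BinomialCoefficient;
use malachite_nz::integer::Integer;

// Both sides of the identity evaluate to 6.
assert_eq!(
    Integer::binomial_coefficient(Integer::from(-3), Integer::from(2)),
    Integer::binomial_coefficient(Integer::from(4), Integer::from(2))
);
```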
##### Examples ``` use malachite_base::num::arithmetic::traits::BinomialCoefficient; use malachite_nz::integer::Integer; assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(0)), 1); assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(1)), 4); assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(2)), 6); assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(3)), 4); assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(4)), 1); assert_eq!(Integer::binomial_coefficient(Integer::from(10), Integer::from(5)), 252); assert_eq!( Integer::binomial_coefficient(Integer::from(100), Integer::from(50)).to_string(), "100891344545564193334812497256" ); assert_eq!(Integer::binomial_coefficient(Integer::from(-3), Integer::from(0)), 1); assert_eq!(Integer::binomial_coefficient(Integer::from(-3), Integer::from(1)), -3); assert_eq!(Integer::binomial_coefficient(Integer::from(-3), Integer::from(2)), 6); assert_eq!(Integer::binomial_coefficient(Integer::from(-3), Integer::from(3)), -10); ``` ### impl BitAccess for Integer Provides functions for accessing and modifying the $i$th bit of a `Integer`, or the coefficient of $2^i$ in its two’s complement binary expansion. #### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_base::num::basic::traits::{NegativeOne, Zero}; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.assign_bit(2, true); x.assign_bit(5, true); x.assign_bit(6, true); assert_eq!(x, 100); x.assign_bit(2, false); x.assign_bit(5, false); x.assign_bit(6, false); assert_eq!(x, 0); let mut x = Integer::from(-0x100); x.assign_bit(2, true); x.assign_bit(5, true); x.assign_bit(6, true); assert_eq!(x, -156); x.assign_bit(2, false); x.assign_bit(5, false); x.assign_bit(6, false); assert_eq!(x, -256); let mut x = Integer::ZERO; x.flip_bit(10); assert_eq!(x, 1024); x.flip_bit(10); assert_eq!(x, 0); let mut x = Integer::NEGATIVE_ONE; x.flip_bit(10); assert_eq!(x, -1025); x.flip_bit(10); assert_eq!(x, -1); ``` #### fn get_bit(&self, index: u64) -> bool Determines whether the $i$th bit of an `Integer`, or the coefficient of $2^i$ in its two’s complement binary expansion, is 0 or 1. `false` means 0 and `true` means 1. Getting bits beyond the `Integer`’s width is allowed; those bits are `false` if the `Integer` is non-negative and `true` if it is negative. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. $f(n, i) = (b_i = 1)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
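One consequence of the two's complement convention: bit 0 is the parity bit for negative values as well, since negation preserves parity. A quick sketch:

```
use malachite_base::num::logic::traits::BitAccess;
use malachite_nz::integer::Integer;

// get_bit(0) is true exactly for odd integers, regardless of sign.
assert_eq!(Integer::from(10).get_bit(0), false);
assert_eq!(Integer::from(11).get_bit(0), true);
assert_eq!(Integer::from(-10).get_bit(0), false);
assert_eq!(Integer::from(-11).get_bit(0), true);
```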
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::logic::traits::BitAccess; use malachite_nz::integer::Integer; assert_eq!(Integer::from(123).get_bit(2), false); assert_eq!(Integer::from(123).get_bit(3), true); assert_eq!(Integer::from(123).get_bit(100), false); assert_eq!(Integer::from(-123).get_bit(0), true); assert_eq!(Integer::from(-123).get_bit(1), false); assert_eq!(Integer::from(-123).get_bit(100), true); assert_eq!(Integer::from(10u32).pow(12).get_bit(12), true); assert_eq!(Integer::from(10u32).pow(12).get_bit(100), false); assert_eq!((-Integer::from(10u32).pow(12)).get_bit(12), true); assert_eq!((-Integer::from(10u32).pow(12)).get_bit(100), true); ``` #### fn set_bit(&mut self, index: u64) Sets the $i$th bit of an `Integer`, or the coefficient of $2^i$ in its two’s complement binary expansion, to 1. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. $$ n \gets \begin{cases} n + 2^j & \text{if} \quad b_j = 0, \\ n & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.set_bit(2); x.set_bit(5); x.set_bit(6); assert_eq!(x, 100); let mut x = Integer::from(-0x100); x.set_bit(2); x.set_bit(5); x.set_bit(6); assert_eq!(x, -156); ``` #### fn clear_bit(&mut self, index: u64) Sets the $i$th bit of an `Integer`, or the coefficient of $2^i$ in its binary expansion, to 0. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. $$ n \gets \begin{cases} n - 2^j & \text{if} \quad b_j = 1, \\ n & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_nz::integer::Integer; let mut x = Integer::from(0x7f); x.clear_bit(0); x.clear_bit(1); x.clear_bit(3); x.clear_bit(4); assert_eq!(x, 100); let mut x = Integer::from(-156); x.clear_bit(2); x.clear_bit(5); x.clear_bit(6); assert_eq!(x, -256); ``` #### fn assign_bit(&mut self, index: u64, bit: bool) Sets the bit at `index` to whichever value `bit` is. Sets the bit at `index` to the opposite of its original value. #### fn bitand(self, other: &'a Integer) -> Integer Takes the bitwise and of two `Integer`s, taking both by reference. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::from(-123) & &Integer::from(-456), -512); assert_eq!( &-Integer::from(10u32).pow(12) & &-(Integer::from(10u32).pow(12) + Integer::ONE), -1000000004096i64 ); ``` #### type Output = Integer The resulting type after applying the `&` operator.### impl<'a> BitAnd<&'a Integer> for Integer #### fn bitand(self, other: &'a Integer) -> Integer Takes the bitwise and of two `Integer`s, taking the first by value and the second by reference. 
$$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) & &Integer::from(-456), -512); assert_eq!( -Integer::from(10u32).pow(12) & &-(Integer::from(10u32).pow(12) + Integer::ONE), -1000000004096i64 ); ``` #### type Output = Integer The resulting type after applying the `&` operator.### impl<'a> BitAnd<Integer> for &'a Integer #### fn bitand(self, other: Integer) -> Integer Takes the bitwise and of two `Integer`s, taking the first by reference and the seocnd by value. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::from(-123) & Integer::from(-456), -512); assert_eq!( &-Integer::from(10u32).pow(12) & -(Integer::from(10u32).pow(12) + Integer::ONE), -1000000004096i64 ); ``` #### type Output = Integer The resulting type after applying the `&` operator.### impl BitAnd<Integer> for Integer #### fn bitand(self, other: Integer) -> Integer Takes the bitwise and of two `Integer`s, taking both by value. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) & Integer::from(-456), -512); assert_eq!( -Integer::from(10u32).pow(12) & -(Integer::from(10u32).pow(12) + Integer::ONE), -1000000004096i64 ); ``` #### type Output = Integer The resulting type after applying the `&` operator.### impl<'a> BitAndAssign<&'a Integer> for Integer #### fn bitand_assign(&mut self, other: &'a Integer) Bitwise-ands an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; let mut x = Integer::NEGATIVE_ONE; x &= &Integer::from(0x70ffffff); x &= &Integer::from(0x7ff0_ffff); x &= &Integer::from(0x7ffff0ff); x &= &Integer::from(0x7ffffff0); assert_eq!(x, 0x70f0f0f0); ``` ### impl BitAndAssign<Integer> for Integer #### fn bitand_assign(&mut self, other: Integer) Bitwise-ands an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
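Because both operands are treated as infinite two's complement bit strings, and-ing with a low-bit mask reduces a value modulo a power of two, for negative inputs as well. A small sketch with an arbitrarily chosen mask:

```
use malachite_nz::integer::Integer;

// -123 mod 256 = 133, i.e. the low byte of -123 in two's complement.
assert_eq!(Integer::from(-123) & Integer::from(0xff), 133);
assert_eq!(Integer::from(123) & Integer::from(0xff), 123);
```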
##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; let mut x = Integer::NEGATIVE_ONE; x &= Integer::from(0x70ffffff); x &= Integer::from(0x7ff0_ffff); x &= Integer::from(0x7ffff0ff); x &= Integer::from(0x7ffffff0); assert_eq!(x, 0x70f0f0f0); ``` ### impl BitBlockAccess for Integer #### fn get_bits(&self, start: u64, end: u64) -> Natural Extracts a block of adjacent two’s complement bits from an `Integer`, taking the `Integer` by reference. The first index is `start` and last index is `end - 1`. Let $n$ be `self`, and let $p$ and $q$ be `start` and `end`, respectively. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. Then $$ f(n, p, q) = \sum_{i=p}^{q-1} 2^{b_{i-p}}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), end)`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (-Natural::from(0xabcdef0112345678u64)).get_bits(16, 48), Natural::from(0x10feedcbu32) ); assert_eq!( Integer::from(0xabcdef0112345678u64).get_bits(4, 16), Natural::from(0x567u32) ); assert_eq!( (-Natural::from(0xabcdef0112345678u64)).get_bits(0, 100), Natural::from_str("1267650600215849587758112418184").unwrap() ); assert_eq!(Integer::from(0xabcdef0112345678u64).get_bits(10, 10), Natural::ZERO); ``` #### fn get_bits_owned(self, start: u64, end: u64) -> Natural Extracts a block of adjacent two’s complement bits from an `Integer`, taking the `Integer` by value. The first index is `start` and last index is `end - 1`. Let $n$ be `self`, and let $p$ and $q$ be `start` and `end`, respectively. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. Then $$ f(n, p, q) = \sum_{i=p}^{q-1} 2^{b_{i-p}}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), end)`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (-Natural::from(0xabcdef0112345678u64)).get_bits_owned(16, 48), Natural::from(0x10feedcbu32) ); assert_eq!( Integer::from(0xabcdef0112345678u64).get_bits_owned(4, 16), Natural::from(0x567u32) ); assert_eq!( (-Natural::from(0xabcdef0112345678u64)).get_bits_owned(0, 100), Natural::from_str("1267650600215849587758112418184").unwrap() ); assert_eq!(Integer::from(0xabcdef0112345678u64).get_bits_owned(10, 10), Natural::ZERO); ``` #### fn assign_bits(&mut self, start: u64, end: u64, bits: &Natural) Replaces a block of adjacent two’s complement bits in an `Integer` with other bits. The least-significant `end - start` bits of `bits` are assigned to bits `start` through `end - 1`, inclusive, of `self`. Let $n$ be `self` and let $m$ be `bits`, and let $p$ and $q$ be `start` and `end`, respectively. Let $$ m = \sum_{i=0}^k 2^{d_i}, $$ where for all $i$, $d_i\in \{0, 1\}$. 
If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. Then $$ n \gets \sum_{i=0}^\infty 2^{c_i}, $$ where $$ \{c_0, c_1, c_2, \ldots \} = \{b_0, b_1, b_2, \ldots, b_{p-1}, d_0, d_1, \ldots, d_{p-q-1}, b_q, b_{q+1}, \ldots \}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), end)`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; let mut n = Integer::from(123); n.assign_bits(5, 7, &Natural::from(456u32)); assert_eq!(n.to_string(), "27"); let mut n = Integer::from(-123); n.assign_bits(64, 128, &Natural::from(456u32)); assert_eq!(n.to_string(), "-340282366920938455033212565746503123067"); let mut n = Integer::from(-123); n.assign_bits(80, 100, &Natural::from(456u32)); assert_eq!(n.to_string(), "-1267098121128665515963862483067"); ``` #### type Bits = Natural ### impl BitConvertible for Integer #### fn to_bits_asc(&self) -> Vec<bool, GlobalReturns a `Vec` containing the twos-complement bits of an `Integer` in ascending order: least- to most-significant. The most significant bit indicates the sign; if the bit is `false`, the `Integer` is positive, and if the bit is `true` it is negative. There are no trailing `false` bits if the `Integer` is positive or trailing `true` bits if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no bits. This function is more efficient than `to_bits_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert!(Integer::ZERO.to_bits_asc().is_empty()); // 105 = 01101001b, with a leading false bit to indicate sign assert_eq!( Integer::from(105).to_bits_asc(), &[true, false, false, true, false, true, true, false] ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from(-105).to_bits_asc(), &[true, true, true, false, true, false, false, true] ); ``` #### fn to_bits_desc(&self) -> Vec<bool, GlobalReturns a `Vec` containing the twos-complement bits of an `Integer` in descending order: most- to least-significant. The most significant bit indicates the sign; if the bit is `false`, the `Integer` is positive, and if the bit is `true` it is negative. There are no leading `false` bits if the `Integer` is positive or leading `true` bits if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no bits. This is similar to how `BigInteger`s in Java are represented. This function is less efficient than `to_bits_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
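As with the limb conversions, the descending bit vector is simply the ascending one reversed. A minimal check using the $-105$ value from the examples:

```
use malachite_base::num::logic::traits::BitConvertible;
use malachite_nz::integer::Integer;

let n = Integer::from(-105);
// Reverse the ascending bits and compare with the descending form.
let mut bits = n.to_bits_asc();
bits.reverse();
assert_eq!(bits, n.to_bits_desc());
```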
##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert!(Integer::ZERO.to_bits_desc().is_empty()); // 105 = 01101001b, with a leading false bit to indicate sign assert_eq!( Integer::from(105).to_bits_desc(), &[false, true, true, false, true, false, false, true] ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from(-105).to_bits_desc(), &[true, false, false, true, false, true, true, true] ); ``` #### fn from_bits_asc<I>(xs: I) -> Integerwhere I: Iterator<Item = bool>, Converts an iterator of twos-complement bits into an `Integer`. The bits should be in ascending order (least- to most-significant). Let $k$ be `bits.count()`. If $k = 0$ or $b_{k-1}$ is `false`, then $$ f((b_i)_ {i=0}^{k-1}) = \sum_{i=0}^{k-1}2^i [b_i], $$ where braces denote the Iverson bracket, which converts a bit to 0 or 1. If $b_{k-1}$ is `true`, then $$ f((b_i)_ {i=0}^{k-1}) = \left ( \sum_{i=0}^{k-1}2^i [b_i] \right ) - 2^k. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.count()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_nz::integer::Integer; use std::iter::empty; assert_eq!(Integer::from_bits_asc(empty()), 0); // 105 = 1101001b assert_eq!( Integer::from_bits_asc( [true, false, false, true, false, true, true, false].iter().cloned() ), 105 ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from_bits_asc( [true, true, true, false, true, false, false, true].iter().cloned() ), -105 ); ``` #### fn from_bits_desc<I>(xs: I) -> Integerwhere I: Iterator<Item = bool>, Converts an iterator of twos-complement bits into an `Integer`. The bits should be in descending order (most- to least-significant). If `bits` is empty or $b_0$ is `false`, then $$ f((b_i)_ {i=0}^{k-1}) = \sum_{i=0}^{k-1}2^{k-i-1} [b_i], $$ where braces denote the Iverson bracket, which converts a bit to 0 or 1. If $b_0$ is `true`, then $$ f((b_i)_ {i=0}^{k-1}) = \left ( \sum_{i=0}^{k-1}2^{k-i-1} [b_i] \right ) - 2^k. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.count()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_nz::integer::Integer; use std::iter::empty; assert_eq!(Integer::from_bits_desc(empty()), 0); // 105 = 1101001b assert_eq!( Integer::from_bits_desc( [false, true, true, false, true, false, false, true].iter().cloned() ), 105 ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from_bits_desc( [true, false, false, true, false, true, true, true].iter().cloned() ), -105 ); ``` ### impl<'a> BitIterable for &'a Integer #### fn bits(self) -> IntegerBitIterator<'aReturns a double-ended iterator over the bits of an `Integer`. The forward order is ascending, so that less significant bits appear first. There are no trailing false bits going forward, or leading falses going backward, except for possibly a most-significant sign-extension bit. If it’s necessary to get a `Vec` of all the bits, consider using `to_bits_asc` or `to_bits_desc` instead. ##### Worst-case complexity Constant time and additional memory. 
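`to_bits_asc` and `from_bits_asc` are inverse to each other, including the sign handling. A round-trip sketch over a few small values:

```
use malachite_base::num::logic::traits::BitConvertible;
use malachite_nz::integer::Integer;

for n in [Integer::from(0), Integer::from(105), Integer::from(-105)] {
    // Convert to ascending two's complement bits and back again.
    let bits = n.to_bits_asc();
    assert_eq!(Integer::from_bits_asc(bits.into_iter()), n);
}
```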
##### Examples ``` use itertools::Itertools; use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitIterable; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.bits().next(), None); // 105 = 01101001b, with a leading false bit to indicate sign assert_eq!( Integer::from(105).bits().collect_vec(), &[true, false, false, true, false, true, true, false] ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from(-105).bits().collect_vec(), &[true, true, true, false, true, false, false, true] ); assert_eq!(Integer::ZERO.bits().next_back(), None); // 105 = 01101001b, with a leading false bit to indicate sign assert_eq!( Integer::from(105).bits().rev().collect_vec(), &[false, true, true, false, true, false, false, true] ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from(-105).bits().rev().collect_vec(), &[true, false, false, true, false, true, true, true] ); ``` #### type BitIterator = IntegerBitIterator<'a### impl<'a, 'b> BitOr<&'a Integer> for &'b Integer #### fn bitor(self, other: &'a Integer) -> Integer Takes the bitwise or of two `Integer`s, taking both by reference. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(&Integer::from(-123) | &Integer::from(-456), -67); assert_eq!( &-Integer::from(10u32).pow(12) | &-(Integer::from(10u32).pow(12) + Integer::ONE), -999999995905i64 ); ``` #### type Output = Integer The resulting type after applying the `|` operator.### impl<'a> BitOr<&'a Integer> for Integer #### fn bitor(self, other: &'a Integer) -> Integer Takes the bitwise or of two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) | &Integer::from(-456), -67); assert_eq!( -Integer::from(10u32).pow(12) | &-(Integer::from(10u32).pow(12) + Integer::ONE), -999999995905i64 ); ``` #### type Output = Integer The resulting type after applying the `|` operator.### impl<'a> BitOr<Integer> for &'a Integer #### fn bitor(self, other: Integer) -> Integer Takes the bitwise or of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `self.significant_bits()`. 
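Or-ing in a mask of low bits forces those bits on and cannot change the sign, since the infinitely many sign bits of the two's complement form are untouched. A tiny sketch:

```
use malachite_nz::integer::Integer;

assert_eq!(Integer::from(8) | Integer::from(1), 9);
// The result stays negative: ...11000 | ...00001 = ...11001 = -7.
assert_eq!(Integer::from(-8) | Integer::from(1), -7);
```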
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(&Integer::from(-123) | Integer::from(-456), -67); assert_eq!( &-Integer::from(10u32).pow(12) | -(Integer::from(10u32).pow(12) + Integer::ONE), -999999995905i64 ); ``` #### type Output = Integer The resulting type after applying the `|` operator.### impl BitOr<Integer> for Integer #### fn bitor(self, other: Integer) -> Integer Takes the bitwise or of two `Integer`s, taking both by value. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) | Integer::from(-456), -67); assert_eq!( -Integer::from(10u32).pow(12) | -(Integer::from(10u32).pow(12) + Integer::ONE), -999999995905i64 ); ``` #### type Output = Integer The resulting type after applying the `|` operator.### impl<'a> BitOrAssign<&'a Integer> for Integer #### fn bitor_assign(&mut self, other: &'a Integer) Bitwise-ors an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x |= &Integer::from(0x0000000f); x |= &Integer::from(0x00000f00); x |= &Integer::from(0x000f_0000); x |= &Integer::from(0x0f000000); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl BitOrAssign<Integer> for Integer #### fn bitor_assign(&mut self, other: Integer) Bitwise-ors an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by value. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x |= Integer::from(0x0000000f); x |= Integer::from(0x00000f00); x |= Integer::from(0x000f_0000); x |= Integer::from(0x0f000000); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl<'a> BitScan for &'a Integer #### fn index_of_next_false_bit(self, starting_index: u64) -> Option<u64Given an `Integer` and a starting index, searches the `Integer` for the smallest index of a `false` bit that is greater than or equal to the starting index. If the [`Integer]` is negative, and the starting index is too large and there are no more `false` bits above it, `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
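For a nonzero value, scanning for the first `true` bit from index 0 lands on the same position that `trailing_zeros` reports (the multiplicity of 2). A sketch with $96 = 2^5 \cdot 3$, using `index_of_next_true_bit`, which is documented just below:

```
use malachite_base::num::logic::traits::BitScan;
use malachite_nz::integer::Integer;

let n = Integer::from(96); // 1100000 in binary
assert_eq!(n.trailing_zeros(), Some(5));
assert_eq!(n.index_of_next_true_bit(0), Some(5));
```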
##### Examples ``` use malachite_base::num::logic::traits::BitScan; use malachite_nz::integer::Integer; assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(0), Some(0)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(20), Some(20)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(31), Some(31)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(32), Some(34)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(33), Some(34)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(34), Some(34)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(35), None); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(100), None); ``` #### fn index_of_next_true_bit(self, starting_index: u64) -> Option<u64> Given an `Integer` and a starting index, searches the `Integer` for the smallest index of a `true` bit that is greater than or equal to the starting index. If the `Integer` is non-negative, and the starting index is too large and there are no more `true` bits above it, `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitScan; use malachite_nz::integer::Integer; assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(0), Some(32)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(20), Some(32)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(31), Some(32)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(32), Some(32)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(33), Some(33)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(34), Some(35)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(35), Some(35)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(36), Some(36)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(100), Some(100)); ``` ### impl<'a, 'b> BitXor<&'a Integer> for &'b Integer #### fn bitxor(self, other: &'a Integer) -> Integer Takes the bitwise xor of two `Integer`s, taking both by reference. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::from(-123) ^ &Integer::from(-456), 445); assert_eq!( &-Integer::from(10u32).pow(12) ^ &-(Integer::from(10u32).pow(12) + Integer::ONE), 8191 ); ``` #### type Output = Integer The resulting type after applying the `^` operator.### impl<'a> BitXor<&'a Integer> for Integer #### fn bitxor(self, other: &'a Integer) -> Integer Takes the bitwise xor of two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) ^ &Integer::from(-456), 445); assert_eq!( -Integer::from(10u32).pow(12) ^ &-(Integer::from(10u32).pow(12) + Integer::ONE), 8191 ); ``` #### type Output = Integer The resulting type after applying the `^` operator.### impl<'a> BitXor<Integer> for &'a Integer #### fn bitxor(self, other: Integer) -> Integer Takes the bitwise xor of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::from(-123) ^ Integer::from(-456), 445); assert_eq!( &-Integer::from(10u32).pow(12) ^ -(Integer::from(10u32).pow(12) + Integer::ONE), 8191 ); ``` #### type Output = Integer The resulting type after applying the `^` operator.### impl BitXor<Integer> for Integer #### fn bitxor(self, other: Integer) -> Integer Takes the bitwise xor of two `Integer`s, taking both by value. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) ^ Integer::from(-456), 445); assert_eq!( -Integer::from(10u32).pow(12) ^ -(Integer::from(10u32).pow(12) + Integer::ONE), 8191 ); ``` #### type Output = Integer The resulting type after applying the `^` operator.### impl<'a> BitXorAssign<&'a Integer> for Integer #### fn bitxor_assign(&mut self, other: &'a Integer) Bitwise-xors an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; let mut x = Integer::from(u32::MAX); x ^= &Integer::from(0x0000000f); x ^= &Integer::from(0x00000f00); x ^= &Integer::from(0x000f_0000); x ^= &Integer::from(0x0f000000); assert_eq!(x, 0xf0f0_f0f0u32); ``` ### impl BitXorAssign<Integer> for Integer #### fn bitxor_assign(&mut self, other: Integer) Bitwise-xors an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
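Xor is its own inverse, which is what makes it useful for toggling sets of bits: applying the same mask twice restores the original value. A short sketch:

```
use malachite_nz::integer::Integer;

let x = Integer::from(-123);
let mask = Integer::from(456);
// (x ^ mask) ^ mask == x for any mask.
assert_eq!((&x ^ &mask) ^ &mask, x);
```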
##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; let mut x = Integer::from(u32::MAX); x ^= Integer::from(0x0000000f); x ^= Integer::from(0x00000f00); x ^= Integer::from(0x000f_0000); x ^= Integer::from(0x0f000000); assert_eq!(x, 0xf0f0_f0f0u32); ``` ### impl<'a> CeilingDivAssignMod<&'a Integer> for Integer #### fn ceiling_div_assign_mod(&mut self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference and returning the remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil\frac{x}{y} \right \rceil, $$ $$ x \gets \left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivAssignMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 let mut x = Integer::from(23); assert_eq!(x.ceiling_div_assign_mod(&Integer::from(10)), -7); assert_eq!(x, 3); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.ceiling_div_assign_mod(&Integer::from(-10)), 3); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.ceiling_div_assign_mod(&Integer::from(10)), -3); assert_eq!(x, -2); // 3 * -10 + 7 = -23 let mut x = Integer::from(-23); assert_eq!(x.ceiling_div_assign_mod(&Integer::from(-10)), 7); assert_eq!(x, 3); ``` #### type ModOutput = Integer ### impl CeilingDivAssignMod<Integer> for Integer #### fn ceiling_div_assign_mod(&mut self, other: Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value and returning the remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil\frac{x}{y} \right \rceil, $$ $$ x \gets \left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivAssignMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 let mut x = Integer::from(23); assert_eq!(x.ceiling_div_assign_mod(Integer::from(10)), -7); assert_eq!(x, 3); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.ceiling_div_assign_mod(Integer::from(-10)), 3); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.ceiling_div_assign_mod(Integer::from(10)), -3); assert_eq!(x, -2); // 3 * -10 + 7 = -23 let mut x = Integer::from(-23); assert_eq!(x.ceiling_div_assign_mod(Integer::from(-10)), 7); assert_eq!(x, 3); ``` #### type ModOutput = Integer ### impl<'a, 'b> CeilingDivMod<&'b Integer> for &'a Integer #### fn ceiling_div_mod(self, other: &'b Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by reference and returning the quotient and remainder. 
The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space x - y\left \lceil \frac{x}{y} \right \rceil \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!( (&Integer::from(23)).ceiling_div_mod(&Integer::from(10)).to_debug_string(), "(3, -7)" ); // -2 * -10 + 3 = 23 assert_eq!( (&Integer::from(23)).ceiling_div_mod(&Integer::from(-10)).to_debug_string(), "(-2, 3)" ); // -2 * 10 + -3 = -23 assert_eq!( (&Integer::from(-23)).ceiling_div_mod(&Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 3 * -10 + 7 = -23 assert_eq!( (&Integer::from(-23)).ceiling_div_mod(&Integer::from(-10)).to_debug_string(), "(3, 7)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a> CeilingDivMod<&'a Integer> for Integer #### fn ceiling_div_mod(self, other: &'a Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space x - y\left \lceil \frac{x}{y} \right \rceil \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!( Integer::from(23).ceiling_div_mod(&Integer::from(10)).to_debug_string(), "(3, -7)" ); // -2 * -10 + 3 = 23 assert_eq!( Integer::from(23).ceiling_div_mod(&Integer::from(-10)).to_debug_string(), "(-2, 3)" ); // -2 * 10 + -3 = -23 assert_eq!( Integer::from(-23).ceiling_div_mod(&Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 3 * -10 + 7 = -23 assert_eq!( Integer::from(-23).ceiling_div_mod(&Integer::from(-10)).to_debug_string(), "(3, 7)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a> CeilingDivMod<Integer> for &'a Integer #### fn ceiling_div_mod(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space x - y\left \lceil \frac{x}{y} \right \rceil \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
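Since all four `CeilingDivMod` variants share the same convention, it can help to see the defining identity checked explicitly. The following sketch is an illustrative addition (not from the malachite documentation): it verifies $x = qy + r$ and the sign rule for the remainder across the sign combinations used in the examples below.
```
use malachite_base::num::arithmetic::traits::CeilingDivMod;
use malachite_base::num::basic::traits::Zero;
use malachite_nz::integer::Integer;

fn main() {
    for (a, b) in [(23i64, 10i64), (23, -10), (-23, 10), (-23, -10)] {
        let x = Integer::from(a);
        let y = Integer::from(b);
        let (q, r) = (&x).ceiling_div_mod(&y);
        // The defining identity x = q * y + r holds exactly.
        assert_eq!(&q * &y + &r, x);
        // The remainder is zero or has the opposite sign of the divisor.
        assert!(r == Integer::ZERO || (r < Integer::ZERO) == (y > Integer::ZERO));
    }
}
```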
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!( (&Integer::from(23)).ceiling_div_mod(Integer::from(10)).to_debug_string(), "(3, -7)" ); // -2 * -10 + 3 = 23 assert_eq!( (&Integer::from(23)).ceiling_div_mod(Integer::from(-10)).to_debug_string(), "(-2, 3)" ); // -2 * 10 + -3 = -23 assert_eq!( (&Integer::from(-23)).ceiling_div_mod(Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 3 * -10 + 7 = -23 assert_eq!( (&Integer::from(-23)).ceiling_div_mod(Integer::from(-10)).to_debug_string(), "(3, 7)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl CeilingDivMod<Integer> for Integer #### fn ceiling_div_mod(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by value and returning the quotient and remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space x - y\left \lceil \frac{x}{y} \right \rceil \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!( Integer::from(23).ceiling_div_mod(Integer::from(10)).to_debug_string(), "(3, -7)" ); // -2 * -10 + 3 = 23 assert_eq!( Integer::from(23).ceiling_div_mod(Integer::from(-10)).to_debug_string(), "(-2, 3)" ); // -2 * 10 + -3 = -23 assert_eq!( Integer::from(-23).ceiling_div_mod(Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 3 * -10 + 7 = -23 assert_eq!( Integer::from(-23).ceiling_div_mod(Integer::from(-10)).to_debug_string(), "(3, 7)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a, 'b> CeilingMod<&'b Integer> for &'a Integer #### fn ceiling_mod(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference and returning just the remainder. The remainder has the opposite sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!((&Integer::from(23)).ceiling_mod(&Integer::from(10)), -7); // -2 * -10 + 3 = 23 assert_eq!((&Integer::from(23)).ceiling_mod(&Integer::from(-10)), 3); // -2 * 10 + -3 = -23 assert_eq!((&Integer::from(-23)).ceiling_mod(&Integer::from(10)), -3); // 3 * -10 + 7 = -23 assert_eq!((&Integer::from(-23)).ceiling_mod(&Integer::from(-10)), 7); ``` #### type Output = Integer ### impl<'a> CeilingMod<&'a Integer> for Integer #### fn ceiling_mod(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning just the remainder.
The remainder has the opposite sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!(Integer::from(23).ceiling_mod(&Integer::from(10)), -7); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23).ceiling_mod(&Integer::from(-10)), 3); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23).ceiling_mod(&Integer::from(10)), -3); // 3 * -10 + 7 = -23 assert_eq!(Integer::from(-23).ceiling_mod(&Integer::from(-10)), 7); ``` #### type Output = Integer ### impl<'a> CeilingMod<Integer> for &'a Integer #### fn ceiling_mod(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning just the remainder. The remainder has the opposite sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!((&Integer::from(23)).ceiling_mod(Integer::from(10)), -7); // -2 * -10 + 3 = 23 assert_eq!((&Integer::from(23)).ceiling_mod(Integer::from(-10)), 3); // -2 * 10 + -3 = -23 assert_eq!((&Integer::from(-23)).ceiling_mod(Integer::from(10)), -3); // 3 * -10 + 7 = -23 assert_eq!((&Integer::from(-23)).ceiling_mod(Integer::from(-10)), 7); ``` #### type Output = Integer ### impl CeilingMod<Integer> for Integer #### fn ceiling_mod(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value and returning just the remainder. The remainder has the opposite sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!(Integer::from(23).ceiling_mod(Integer::from(10)), -7); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23).ceiling_mod(Integer::from(-10)), 3); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23).ceiling_mod(Integer::from(10)), -3); // 3 * -10 + 7 = -23 assert_eq!(Integer::from(-23).ceiling_mod(Integer::from(-10)), 7); ``` #### type Output = Integer ### impl<'a> CeilingModAssign<&'a Integer> for Integer #### fn ceiling_mod_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer`, taking the `Integer` on the right-hand side by reference and replacing the first number by the remainder. The remainder has the opposite sign as the second number.
If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y\left \lceil\frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModAssign; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 let mut x = Integer::from(23); x.ceiling_mod_assign(&Integer::from(10)); assert_eq!(x, -7); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x.ceiling_mod_assign(&Integer::from(-10)); assert_eq!(x, 3); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x.ceiling_mod_assign(&Integer::from(10)); assert_eq!(x, -3); // 3 * -10 + 7 = -23 let mut x = Integer::from(-23); x.ceiling_mod_assign(&Integer::from(-10)); assert_eq!(x, 7); ``` ### impl CeilingModAssign<Integer> for Integer #### fn ceiling_mod_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer`, taking the `Integer` on the right-hand side by value and replacing the first number by the remainder. The remainder has the opposite sign as the second number. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y\left \lceil\frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModAssign; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 let mut x = Integer::from(23); x.ceiling_mod_assign(Integer::from(10)); assert_eq!(x, -7); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x.ceiling_mod_assign(Integer::from(-10)); assert_eq!(x, 3); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x.ceiling_mod_assign(Integer::from(10)); assert_eq!(x, -3); // 3 * -10 + 7 = -23 let mut x = Integer::from(-23); x.ceiling_mod_assign(Integer::from(-10)); assert_eq!(x, 7); ``` ### impl<'a> CeilingModPowerOf2 for &'a Integer #### fn ceiling_mod_power_of_2(self, pow: u64) -> Integer Divides an `Integer` by $2^k$, taking it by reference and returning just the remainder. The remainder is non-positive. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq -r < 2^k$. $$ f(x, y) = x - 2^k\left \lceil \frac{x}{2^k} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModPowerOf2; use malachite_nz::integer::Integer; // 2 * 2^8 + -252 = 260 assert_eq!((&Integer::from(260)).ceiling_mod_power_of_2(8), -252); // -100 * 2^4 + -11 = -1611 assert_eq!((&Integer::from(-1611)).ceiling_mod_power_of_2(4), -11); ``` #### type Output = Integer ### impl CeilingModPowerOf2 for Integer #### fn ceiling_mod_power_of_2(self, pow: u64) -> Integer Divides an `Integer` by $2^k$, taking it by value and returning just the remainder. The remainder is non-positive. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq -r < 2^k$. $$ f(x, y) = x - 2^k\left \lceil \frac{x}{2^k} \right \rceil.
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModPowerOf2; use malachite_nz::integer::Integer; // 2 * 2^8 + -252 = 260 assert_eq!(Integer::from(260).ceiling_mod_power_of_2(8), -252); // -100 * 2^4 + -11 = -1611 assert_eq!(Integer::from(-1611).ceiling_mod_power_of_2(4), -11); ``` #### type Output = Integer ### impl CeilingModPowerOf2Assign for Integer #### fn ceiling_mod_power_of_2_assign(&mut self, pow: u64) Divides an `Integer` by $2^k$, replacing the `Integer` by the remainder. The remainder is non-positive. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq -r < 2^k$. $$ x \gets x - 2^k\left \lceil\frac{x}{2^k} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModPowerOf2Assign; use malachite_nz::integer::Integer; // 2 * 2^8 + -252 = 260 let mut x = Integer::from(260); x.ceiling_mod_power_of_2_assign(8); assert_eq!(x, -252); // -100 * 2^4 + -11 = -1611 let mut x = Integer::from(-1611); x.ceiling_mod_power_of_2_assign(4); assert_eq!(x, -11); ``` ### impl<'a> CeilingRoot<u64> for &'a Integer #### fn ceiling_root(self, exp: u64) -> Integer Returns the ceiling of the $n$th root of an `Integer`, taking the `Integer` by reference. $f(x, n) = \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRoot; use malachite_nz::integer::Integer; assert_eq!(Integer::from(999).ceiling_root(3), 10); assert_eq!(Integer::from(1000).ceiling_root(3), 10); assert_eq!(Integer::from(1001).ceiling_root(3), 11); assert_eq!(Integer::from(100000000000i64).ceiling_root(5), 159); assert_eq!(Integer::from(-100000000000i64).ceiling_root(5), -158); ``` #### type Output = Integer ### impl CeilingRoot<u64> for Integer #### fn ceiling_root(self, exp: u64) -> Integer Returns the ceiling of the $n$th root of an `Integer`, taking the `Integer` by value. $f(x, n) = \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRoot; use malachite_nz::integer::Integer; assert_eq!(Integer::from(999).ceiling_root(3), 10); assert_eq!(Integer::from(1000).ceiling_root(3), 10); assert_eq!(Integer::from(1001).ceiling_root(3), 11); assert_eq!(Integer::from(100000000000i64).ceiling_root(5), 159); assert_eq!(Integer::from(-100000000000i64).ceiling_root(5), -158); ``` #### type Output = Integer ### impl CeilingRootAssign<u64> for Integer #### fn ceiling_root_assign(&mut self, exp: u64) Replaces an `Integer` with the ceiling of its $n$th root. $x \gets \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. 
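For positive arguments, `ceiling_root` returns the smallest integer whose $n$th power reaches the input, which gives a simple bracketing check. The sketch below is an illustrative addition (not from the malachite documentation); it clones the root before re-raising it so that only the by-value `Pow` shown elsewhere in these docs is needed:
```
use malachite_base::num::arithmetic::traits::{CeilingRoot, Pow};
use malachite_base::num::basic::traits::One;
use malachite_nz::integer::Integer;

fn main() {
    for a in [2i64, 999, 1000, 1001, 100_000_000_000] {
        let x = Integer::from(a);
        let r = (&x).ceiling_root(3);
        // r^3 must reach x ...
        assert!(r.clone().pow(3) >= x);
        // ... while (r - 1)^3 must fall short of it.
        assert!((r - Integer::ONE).pow(3) < x);
    }
}
```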
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRootAssign; use malachite_nz::integer::Integer; let mut x = Integer::from(999); x.ceiling_root_assign(3); assert_eq!(x, 10); let mut x = Integer::from(1000); x.ceiling_root_assign(3); assert_eq!(x, 10); let mut x = Integer::from(1001); x.ceiling_root_assign(3); assert_eq!(x, 11); let mut x = Integer::from(100000000000i64); x.ceiling_root_assign(5); assert_eq!(x, 159); let mut x = Integer::from(-100000000000i64); x.ceiling_root_assign(5); assert_eq!(x, -158); ``` ### impl<'a> CeilingSqrt for &'a Integer #### fn ceiling_sqrt(self) -> Integer Returns the ceiling of the square root of an `Integer`, taking it by reference. $f(x) = \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrt; use malachite_nz::integer::Integer; assert_eq!(Integer::from(99).ceiling_sqrt(), 10); assert_eq!(Integer::from(100).ceiling_sqrt(), 10); assert_eq!(Integer::from(101).ceiling_sqrt(), 11); assert_eq!(Integer::from(1000000000).ceiling_sqrt(), 31623); assert_eq!(Integer::from(10000000000u64).ceiling_sqrt(), 100000); ``` #### type Output = Integer ### impl CeilingSqrt for Integer #### fn ceiling_sqrt(self) -> Integer Returns the ceiling of the square root of an `Integer`, taking it by value. $f(x) = \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrt; use malachite_nz::integer::Integer; assert_eq!(Integer::from(99).ceiling_sqrt(), 10); assert_eq!(Integer::from(100).ceiling_sqrt(), 10); assert_eq!(Integer::from(101).ceiling_sqrt(), 11); assert_eq!(Integer::from(1000000000).ceiling_sqrt(), 31623); assert_eq!(Integer::from(10000000000u64).ceiling_sqrt(), 100000); ``` #### type Output = Integer ### impl CeilingSqrtAssign for Integer #### fn ceiling_sqrt_assign(&mut self) Replaces an `Integer` with the ceiling of its square root. $x \gets \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrtAssign; use malachite_nz::integer::Integer; let mut x = Integer::from(99u8); x.ceiling_sqrt_assign(); assert_eq!(x, 10); let mut x = Integer::from(100); x.ceiling_sqrt_assign(); assert_eq!(x, 10); let mut x = Integer::from(101); x.ceiling_sqrt_assign(); assert_eq!(x, 11); let mut x = Integer::from(1000000000); x.ceiling_sqrt_assign(); assert_eq!(x, 31623); let mut x = Integer::from(10000000000u64); x.ceiling_sqrt_assign(); assert_eq!(x, 100000); ``` ### impl<'a, 'b> CheckedHammingDistance<&'a Integer> for &'b Integer #### fn checked_hamming_distance(self, other: &Integer) -> Option<u64Determines the Hamming distance between two `Integer`s. The two `Integer`s have infinitely many leading zeros or infinitely many leading ones, depending on their signs. If they are both non-negative or both negative, the Hamming distance is finite. 
If one is non-negative and the other is negative, the Hamming distance is infinite, so `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::logic::traits::CheckedHammingDistance; use malachite_nz::integer::Integer; assert_eq!(Integer::from(123).checked_hamming_distance(&Integer::from(123)), Some(0)); // 105 = 1101001b, 123 = 1111011 assert_eq!(Integer::from(-105).checked_hamming_distance(&Integer::from(-123)), Some(2)); assert_eq!(Integer::from(-105).checked_hamming_distance(&Integer::from(123)), None); ``` ### impl<'a> CheckedRoot<u64> for &'a Integer #### fn checked_root(self, exp: u64) -> Option<IntegerReturns the the $n$th root of an `Integer`, or `None` if the `Integer` is not a perfect $n$th power. The `Integer` is taken by reference. $$ f(x, n) = \begin{cases} \operatorname{Some}(sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(999)).checked_root(3).to_debug_string(), "None"); assert_eq!((&Integer::from(1000)).checked_root(3).to_debug_string(), "Some(10)"); assert_eq!((&Integer::from(1001)).checked_root(3).to_debug_string(), "None"); assert_eq!((&Integer::from(100000000000i64)).checked_root(5).to_debug_string(), "None"); assert_eq!((&Integer::from(-100000000000i64)).checked_root(5).to_debug_string(), "None"); assert_eq!((&Integer::from(10000000000i64)).checked_root(5).to_debug_string(), "Some(100)"); assert_eq!( (&Integer::from(-10000000000i64)).checked_root(5).to_debug_string(), "Some(-100)" ); ``` #### type Output = Integer ### impl CheckedRoot<u64> for Integer #### fn checked_root(self, exp: u64) -> Option<IntegerReturns the the $n$th root of an `Integer`, or `None` if the `Integer` is not a perfect $n$th power. The `Integer` is taken by value. $$ f(x, n) = \begin{cases} \operatorname{Some}(sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. 
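A `Some` result from `checked_root` identifies a perfect $n$th power, so the returned root can be raised back to recover the input exactly. This round-trip sketch is an illustrative addition (not from the malachite documentation):
```
use malachite_base::num::arithmetic::traits::{CheckedRoot, Pow};
use malachite_nz::integer::Integer;

fn main() {
    let x = Integer::from(-10_000_000_000i64); // (-100)^5
    if let Some(r) = (&x).checked_root(5) {
        // Raising the root back to the fifth power reproduces x exactly.
        assert_eq!(r.pow(5), x);
    } else {
        unreachable!("-10^10 is (-100)^5, so a root must be found");
    }
    // Non-perfect powers yield None rather than panicking.
    assert!(Integer::from(999).checked_root(3).is_none());
}
```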
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!(Integer::from(999).checked_root(3).to_debug_string(), "None"); assert_eq!(Integer::from(1000).checked_root(3).to_debug_string(), "Some(10)"); assert_eq!(Integer::from(1001).checked_root(3).to_debug_string(), "None"); assert_eq!(Integer::from(100000000000i64).checked_root(5).to_debug_string(), "None"); assert_eq!(Integer::from(-100000000000i64).checked_root(5).to_debug_string(), "None"); assert_eq!(Integer::from(10000000000i64).checked_root(5).to_debug_string(), "Some(100)"); assert_eq!(Integer::from(-10000000000i64).checked_root(5).to_debug_string(), "Some(-100)"); ``` #### type Output = Integer ### impl<'a> CheckedSqrt for &'a Integer #### fn checked_sqrt(self) -> Option<Integer> Returns the square root of an `Integer`, or `None` if it is not a perfect square. The `Integer` is taken by reference. $$ f(x) = \begin{cases} \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedSqrt; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(99u8)).checked_sqrt().to_debug_string(), "None"); assert_eq!((&Integer::from(100u8)).checked_sqrt().to_debug_string(), "Some(10)"); assert_eq!((&Integer::from(101u8)).checked_sqrt().to_debug_string(), "None"); assert_eq!((&Integer::from(1000000000u32)).checked_sqrt().to_debug_string(), "None"); assert_eq!( (&Integer::from(10000000000u64)).checked_sqrt().to_debug_string(), "Some(100000)" ); ``` #### type Output = Integer ### impl CheckedSqrt for Integer #### fn checked_sqrt(self) -> Option<Integer> Returns the square root of an `Integer`, or `None` if it is not a perfect square. The `Integer` is taken by value. $$ f(x) = \begin{cases} \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedSqrt; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!(Integer::from(99u8).checked_sqrt().to_debug_string(), "None"); assert_eq!(Integer::from(100u8).checked_sqrt().to_debug_string(), "Some(10)"); assert_eq!(Integer::from(101u8).checked_sqrt().to_debug_string(), "None"); assert_eq!(Integer::from(1000000000u32).checked_sqrt().to_debug_string(), "None"); assert_eq!(Integer::from(10000000000u64).checked_sqrt().to_debug_string(), "Some(100000)"); ``` #### type Output = Integer ### impl Clone for Integer #### fn clone(&self) -> Integer Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl<'a> ConvertibleFrom<&'a Integer> for Natural #### fn convertible_from(value: &'a Integer) -> bool Determines whether an `Integer` can be converted to a `Natural` (when the `Integer` is non-negative). Takes the `Integer` by reference. ##### Worst-case complexity Constant time and additional memory.
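Because the check is by reference and constant-time, `convertible_from` works well as a cheap non-negativity filter before any actual conversion. The helper below is a hypothetical, illustrative sketch (not part of the malachite API):
```
use malachite_base::num::conversion::traits::ConvertibleFrom;
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;

// Hypothetical helper: count the entries that could be converted to a
// Natural (i.e. the non-negative ones) without consuming or converting them.
fn countable_as_natural(xs: &[Integer]) -> usize {
    xs.iter().filter(|x| Natural::convertible_from(*x)).count()
}

fn main() {
    let xs = [Integer::from(5), Integer::from(-3), Integer::from(0)];
    assert_eq!(countable_as_natural(&xs), 2);
}
```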
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::convertible_from(&Integer::from(123)), true); assert_eq!(Natural::convertible_from(&Integer::from(-123)), false); assert_eq!(Natural::convertible_from(&Integer::from(10u32).pow(12)), true); assert_eq!(Natural::convertible_from(&-Integer::from(10u32).pow(12)), false); ``` ### impl<'a> ConvertibleFrom<&'a Integer> for f32 #### fn convertible_from(value: &'a Integer) -> bool Determines whether an `Integer` can be exactly converted to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for f64 #### fn convertible_from(value: &'a Integer) -> bool Determines whether an `Integer` can be exactly converted to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i128 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i16 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i32 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i64 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i8 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for isize #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u128 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u16 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u32 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. 
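The primitive-integer implementations are all constant-time range checks, so they can be used to probe which fixed-width type an `Integer` would fit into. A small illustrative sketch (not from the malachite documentation):
```
use malachite_base::num::arithmetic::traits::Pow;
use malachite_base::num::conversion::traits::ConvertibleFrom;
use malachite_nz::integer::Integer;

fn main() {
    // Small positive values fit in both signed and unsigned 32-bit types.
    assert!(i32::convertible_from(&Integer::from(123)));
    assert!(u32::convertible_from(&Integer::from(123)));
    // Negative values never fit in an unsigned target.
    assert!(!u32::convertible_from(&Integer::from(-123)));
    // 10^12 is outside the 32-bit range but inside the 64-bit range.
    assert!(!i32::convertible_from(&Integer::from(10u32).pow(12)));
    assert!(i64::convertible_from(&Integer::from(10u32).pow(12)));
}
```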
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u64 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u8 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for usize #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for Integer #### fn convertible_from(x: &Rational) -> bool Determines whether a `Rational` can be converted to an `Integer`, taking the `Rational` by reference. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Integer::convertible_from(&Rational::from(123)), true); assert_eq!(Integer::convertible_from(&Rational::from(-123)), true); assert_eq!(Integer::convertible_from(&Rational::from_signeds(22, 7)), false); ``` ### impl ConvertibleFrom<Integer> for Natural #### fn convertible_from(value: Integer) -> bool Determines whether an `Integer` can be converted to a `Natural` (when the `Integer` is non-negative). Takes the `Integer` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::convertible_from(Integer::from(123)), true); assert_eq!(Natural::convertible_from(Integer::from(-123)), false); assert_eq!(Natural::convertible_from(Integer::from(10u32).pow(12)), true); assert_eq!(Natural::convertible_from(-Integer::from(10u32).pow(12)), false); ``` ### impl ConvertibleFrom<f32> for Integer #### fn convertible_from(value: f32) -> bool Determines whether a primitive float can be exactly converted to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<f64> for Integer #### fn convertible_from(value: f64) -> bool Determines whether a primitive float can be exactly converted to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl Debug for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts an `Integer` to a `String`. This is the same as the `Display::fmt` implementation. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
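`Display` (and `Debug`, which matches it) produces the canonical decimal form, which `FromStr` parses back, so string conversion round-trips losslessly. A brief illustrative sketch (not from the malachite documentation):
```
use malachite_nz::integer::Integer;
use std::str::FromStr;

fn main() {
    for s in ["0", "123", "-123", "-1000000000000"] {
        let x = Integer::from_str(s).unwrap();
        // The canonical decimal rendering parses back to an equal Integer.
        assert_eq!(x.to_string(), s);
        assert_eq!(Integer::from_str(&x.to_string()).unwrap(), x);
    }
}
```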
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_debug_string(), "0"); assert_eq!(Integer::from(123).to_debug_string(), "123"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_debug_string(), "1000000000000" ); assert_eq!(format!("{:05?}", Integer::from(123)), "00123"); assert_eq!(Integer::from(-123).to_debug_string(), "-123"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_debug_string(), "-1000000000000" ); assert_eq!(format!("{:05?}", Integer::from(-123)), "-0123"); ``` ### impl Default for Integer #### fn default() -> Integer The default value of an `Integer`, 0. ### impl Display for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts an `Integer` to a `String`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_string(), "0"); assert_eq!(Integer::from(123).to_string(), "123"); assert_eq!( Integer::from_str("1000000000000").unwrap().to_string(), "1000000000000" ); assert_eq!(format!("{:05}", Integer::from(123)), "00123"); assert_eq!(Integer::from(-123).to_string(), "-123"); assert_eq!( Integer::from_str("-1000000000000").unwrap().to_string(), "-1000000000000" ); assert_eq!(format!("{:05}", Integer::from(-123)), "-0123"); ``` ### impl<'a, 'b> Div<&'b Integer> for &'a Integer #### fn div(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(&Integer::from(23) / &Integer::from(10), 2); // -2 * -10 + 3 = 23 assert_eq!(&Integer::from(23) / &Integer::from(-10), -2); // -2 * 10 + -3 = -23 assert_eq!(&Integer::from(-23) / &Integer::from(10), -2); // 2 * -10 + -3 = -23 assert_eq!(&Integer::from(-23) / &Integer::from(-10), 2); ``` #### type Output = Integer The resulting type after applying the `/` operator.### impl<'a> Div<&'a Integer> for Integer #### fn div(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
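Since the `/` operator rounds towards zero, it always agrees with the quotient component of `div_rem`, which is documented further down in this section; the floor-style `div_mod` differs whenever the operands' signs differ. An illustrative sketch (not from the malachite documentation):
```
use malachite_base::num::arithmetic::traits::DivRem;
use malachite_nz::integer::Integer;

fn main() {
    for (a, b) in [(23i64, 10i64), (23, -10), (-23, 10), (-23, -10)] {
        let x = Integer::from(a);
        let y = Integer::from(b);
        // `/` truncates towards zero, matching the quotient from div_rem.
        let (q, _r) = (&x).div_rem(&y);
        assert_eq!(&x / &y, q);
    }
}
```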
##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23) / &Integer::from(10), 2); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23) / &Integer::from(-10), -2); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23) / &Integer::from(10), -2); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23) / &Integer::from(-10), 2); ``` #### type Output = Integer The resulting type after applying the `/` operator.### impl<'a> Div<Integer> for &'a Integer #### fn div(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(&Integer::from(23) / Integer::from(10), 2); // -2 * -10 + 3 = 23 assert_eq!(&Integer::from(23) / Integer::from(-10), -2); // -2 * 10 + -3 = -23 assert_eq!(&Integer::from(-23) / Integer::from(10), -2); // 2 * -10 + -3 = -23 assert_eq!(&Integer::from(-23) / Integer::from(-10), 2); ``` #### type Output = Integer The resulting type after applying the `/` operator.### impl Div<Integer> for Integer #### fn div(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23) / Integer::from(10), 2); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23) / Integer::from(-10), -2); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23) / Integer::from(10), -2); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23) / Integer::from(-10), 2); ``` #### type Output = Integer The resulting type after applying the `/` operator.### impl<'a> DivAssign<&'a Integer> for Integer #### fn div_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x /= &Integer::from(10); assert_eq!(x, 2); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x /= &Integer::from(-10); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x /= &Integer::from(10); assert_eq!(x, -2); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x /= &Integer::from(-10); assert_eq!(x, 2); ``` ### impl DivAssign<Integer> for Integer #### fn div_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x /= Integer::from(10); assert_eq!(x, 2); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x /= Integer::from(-10); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x /= Integer::from(10); assert_eq!(x, -2); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x /= Integer::from(-10); assert_eq!(x, 2); ``` ### impl<'a> DivAssignMod<&'a Integer> for Integer #### fn div_assign_mod(&mut self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference and returning the remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignMod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_mod(&Integer::from(10)), 3); assert_eq!(x, 2); // -3 * -10 + -7 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_mod(&Integer::from(-10)), -7); assert_eq!(x, -3); // -3 * 10 + 7 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_mod(&Integer::from(10)), 7); assert_eq!(x, -3); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_mod(&Integer::from(-10)), -3); assert_eq!(x, 2); ``` #### type ModOutput = Integer ### impl DivAssignMod<Integer> for Integer #### fn div_assign_mod(&mut self, other: Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value and returning the remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignMod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_mod(Integer::from(10)), 3); assert_eq!(x, 2); // -3 * -10 + -7 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_mod(Integer::from(-10)), -7); assert_eq!(x, -3); // -3 * 10 + 7 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_mod(Integer::from(10)), 7); assert_eq!(x, -3); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_mod(Integer::from(-10)), -3); assert_eq!(x, 2); ``` #### type ModOutput = Integer ### impl<'a> DivAssignRem<&'a Integer> for Integer #### fn div_assign_rem(&mut self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference and returning the remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, $$ $$ x \gets \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignRem; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_rem(&Integer::from(10)), 3); assert_eq!(x, 2); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_rem(&Integer::from(-10)), 3); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_rem(&Integer::from(10)), -3); assert_eq!(x, -2); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_rem(&Integer::from(-10)), -3); assert_eq!(x, 2); ``` #### type RemOutput = Integer ### impl DivAssignRem<Integer> for Integer #### fn div_assign_rem(&mut self, other: Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value and returning the remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, $$ $$ x \gets \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
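Returning the remainder while dividing in place makes `div_assign_rem` convenient in loops that peel a value apart piece by piece. The digit-extraction sketch below is an illustrative addition (not from the malachite documentation) and sticks to a non-negative input, where `rem` and `mod` coincide:
```
use malachite_base::num::arithmetic::traits::DivAssignRem;
use malachite_nz::integer::Integer;

fn main() {
    // Collect the base-10 digits of 9305, least significant first.
    let mut x = Integer::from(9305);
    let mut digits = Vec::new();
    while x != 0 {
        // Divide in place by 10 and keep the digit that falls out.
        digits.push(x.div_assign_rem(Integer::from(10)));
    }
    let expected = vec![
        Integer::from(5),
        Integer::from(0),
        Integer::from(3),
        Integer::from(9),
    ];
    assert_eq!(digits, expected);
}
```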
##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_rem(Integer::from(10)), 3); assert_eq!(x, 2); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_rem(Integer::from(-10)), 3); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_rem(Integer::from(10)), -3); assert_eq!(x, -2); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_rem(Integer::from(-10)), -3); assert_eq!(x, 2); ``` #### type RemOutput = Integer ### impl<'a, 'b> DivExact<&'b Integer> for &'a Integer #### fn div_exact(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `&self / &other` instead. If you’re unsure and you want to know, use `(&self).div_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `(&self).div_round(&other, RoundingMode::Exact)`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 assert_eq!((&Integer::from(-56088)).div_exact(&Integer::from(456)), -123); // -123456789000 * -987654321000 = 121932631112635269000000 assert_eq!( (&Integer::from_str("121932631112635269000000").unwrap()) .div_exact(&Integer::from_str("-987654321000").unwrap()), -123456789000i64 ); ``` #### type Output = Integer ### impl<'a> DivExact<&'a Integer> for Integer #### fn div_exact(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self / &other` instead. If you’re unsure and you want to know, use `self.div_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 assert_eq!(Integer::from(-56088).div_exact(&Integer::from(456)), -123); // -123456789000 * -987654321000 = 121932631112635269000000 assert_eq!( Integer::from_str("121932631112635269000000").unwrap() .div_exact(&Integer::from_str("-987654321000").unwrap()), -123456789000i64 ); ``` #### type Output = Integer ### impl<'a> DivExact<Integer> for &'a Integer #### fn div_exact(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value. The first `Integer` must be exactly divisible by the second. 
If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `&self / other` instead. If you’re unsure and you want to know, use `self.div_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `(&self).div_round(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 assert_eq!((&Integer::from(-56088)).div_exact(Integer::from(456)), -123); // -123456789000 * -987654321000 = 121932631112635269000000 assert_eq!( (&Integer::from_str("121932631112635269000000").unwrap()) .div_exact(Integer::from_str("-987654321000").unwrap()), -123456789000i64 ); ``` #### type Output = Integer ### impl DivExact<Integer> for Integer #### fn div_exact(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self / other` instead. If you’re unsure and you want to know, use `self.div_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 assert_eq!(Integer::from(-56088).div_exact(Integer::from(456)), -123); // -123456789000 * -987654321000 = 121932631112635269000000 assert_eq!( Integer::from_str("121932631112635269000000").unwrap() .div_exact(Integer::from_str("-987654321000").unwrap()), -123456789000i64 ); ``` #### type Output = Integer ### impl<'a> DivExactAssign<&'a Integer> for Integer #### fn div_exact_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ x \gets \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self /= &other` instead. If you’re unsure and you want to know, use `self.div_assign_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round_assign(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. 
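The advice above, to pair `div_exact` with a prior `div_mod` when divisibility is uncertain, looks like this in practice. The helper name is hypothetical and the sketch is an illustrative addition (not from the malachite documentation); when the remainder is zero the quotient from `div_mod` is already the exact answer, so `div_exact` is called here purely for illustration:
```
use malachite_base::num::arithmetic::traits::{DivExact, DivMod};
use malachite_base::num::basic::traits::Zero;
use malachite_nz::integer::Integer;

// Hypothetical helper: only call div_exact once div_mod has confirmed
// that the remainder is zero.
fn exact_quotient(x: &Integer, y: &Integer) -> Option<Integer> {
    let (_q, r) = x.div_mod(y);
    if r == Integer::ZERO {
        Some(x.div_exact(y))
    } else {
        None
    }
}

fn main() {
    let x = Integer::from(-56088);
    assert_eq!(exact_quotient(&x, &Integer::from(456)), Some(Integer::from(-123)));
    assert_eq!(exact_quotient(&x, &Integer::from(10)), None);
}
```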
##### Examples ``` use malachite_base::num::arithmetic::traits::DivExactAssign; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 let mut x = Integer::from(-56088); x.div_exact_assign(&Integer::from(456)); assert_eq!(x, -123); // -123456789000 * -987654321000 = 121932631112635269000000 let mut x = Integer::from_str("121932631112635269000000").unwrap(); x.div_exact_assign(&Integer::from_str("-987654321000").unwrap()); assert_eq!(x, -123456789000i64); ``` ### impl DivExactAssign<Integer> for Integer #### fn div_exact_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ x \gets \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self /= other` instead. If you’re unsure and you want to know, use `self.div_assign_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round_assign(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExactAssign; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 let mut x = Integer::from(-56088); x.div_exact_assign(Integer::from(456)); assert_eq!(x, -123); // -123456789000 * -987654321000 = 121932631112635269000000 let mut x = Integer::from_str("121932631112635269000000").unwrap(); x.div_exact_assign(Integer::from_str("-987654321000").unwrap()); assert_eq!(x, -123456789000i64); ``` ### impl<'a, 'b> DivMod<&'b Integer> for &'a Integer #### fn div_mod(self, other: &'b Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by reference and returning the quotient and remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).div_mod(&Integer::from(10)).to_debug_string(), "(2, 3)"); // -3 * -10 + -7 = 23 assert_eq!( (&Integer::from(23)).div_mod(&Integer::from(-10)).to_debug_string(), "(-3, -7)" ); // -3 * 10 + 7 = -23 assert_eq!((&Integer::from(-23)).div_mod(&Integer::from(10)).to_debug_string(), "(-3, 7)"); // 2 * -10 + -3 = -23 assert_eq!( (&Integer::from(-23)).div_mod(&Integer::from(-10)).to_debug_string(), "(2, -3)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a> DivMod<&'a Integer> for Integer #### fn div_mod(self, other: &'a Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).div_mod(&Integer::from(10)).to_debug_string(), "(2, 3)"); // -3 * -10 + -7 = 23 assert_eq!(Integer::from(23).div_mod(&Integer::from(-10)).to_debug_string(), "(-3, -7)"); // -3 * 10 + 7 = -23 assert_eq!(Integer::from(-23).div_mod(&Integer::from(10)).to_debug_string(), "(-3, 7)"); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).div_mod(&Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a> DivMod<Integer> for &'a Integer #### fn div_mod(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).div_mod(Integer::from(10)).to_debug_string(), "(2, 3)"); // -3 * -10 + -7 = 23 assert_eq!((&Integer::from(23)).div_mod(Integer::from(-10)).to_debug_string(), "(-3, -7)"); // -3 * 10 + 7 = -23 assert_eq!((&Integer::from(-23)).div_mod(Integer::from(10)).to_debug_string(), "(-3, 7)"); // 2 * -10 + -3 = -23 assert_eq!((&Integer::from(-23)).div_mod(Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl DivMod<Integer> for Integer #### fn div_mod(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by value and returning the quotient and remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).div_mod(Integer::from(10)).to_debug_string(), "(2, 3)"); // -3 * -10 + -7 = 23 assert_eq!(Integer::from(23).div_mod(Integer::from(-10)).to_debug_string(), "(-3, -7)"); // -3 * 10 + 7 = -23 assert_eq!(Integer::from(-23).div_mod(Integer::from(10)).to_debug_string(), "(-3, 7)"); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).div_mod(Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a, 'b> DivRem<&'b Integer> for &'a Integer #### fn div_rem(self, other: &'b Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by reference and returning the quotient and remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, \space x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).div_rem(&Integer::from(10)).to_debug_string(), "(2, 3)"); // -2 * -10 + 3 = 23 assert_eq!((&Integer::from(23)).div_rem(&Integer::from(-10)).to_debug_string(), "(-2, 3)"); // -2 * 10 + -3 = -23 assert_eq!( (&Integer::from(-23)).div_rem(&Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 2 * -10 + -3 = -23 assert_eq!( (&Integer::from(-23)).div_rem(&Integer::from(-10)).to_debug_string(), "(2, -3)" ); ``` #### type DivOutput = Integer #### type RemOutput = Integer ### impl<'a> DivRem<&'a Integer> for Integer #### fn div_rem(self, other: &'a Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, \space x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).div_rem(&Integer::from(10)).to_debug_string(), "(2, 3)"); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23).div_rem(&Integer::from(-10)).to_debug_string(), "(-2, 3)"); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23).div_rem(&Integer::from(10)).to_debug_string(), "(-2, -3)"); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).div_rem(&Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type RemOutput = Integer ### impl<'a> DivRem<Integer> for &'a Integer #### fn div_rem(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, \space x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
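Seeing `div_mod` and `div_rem` side by side on the same operands makes the two sign conventions easier to compare: floor division gives the remainder the sign of the divisor, truncating division gives it the sign of the dividend. A brief editorial sketch, assuming the by-reference `*` and `+` operators that malachite provides for `Integer`:

```
use malachite_base::num::arithmetic::traits::{DivMod, DivRem};
use malachite_nz::integer::Integer;

fn main() {
    let x = Integer::from(-23);
    let y = Integer::from(10);

    // Floor division: quotient rounds toward negative infinity,
    // remainder has the sign of the divisor.
    assert_eq!((&x).div_mod(&y), (Integer::from(-3), Integer::from(7)));

    // Truncating division: quotient rounds toward zero,
    // remainder has the sign of the dividend.
    assert_eq!((&x).div_rem(&y), (Integer::from(-2), Integer::from(-3)));

    // Both conventions satisfy x == q * y + r.
    let (q, r) = (&x).div_rem(&y);
    assert_eq!(&q * &y + &r, x);
}
```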
##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).div_rem(Integer::from(10)).to_debug_string(), "(2, 3)"); // -2 * -10 + 3 = 23 assert_eq!((&Integer::from(23)).div_rem(Integer::from(-10)).to_debug_string(), "(-2, 3)"); // -2 * 10 + -3 = -23 assert_eq!((&Integer::from(-23)).div_rem(Integer::from(10)).to_debug_string(), "(-2, -3)"); // 2 * -10 + -3 = -23 assert_eq!((&Integer::from(-23)).div_rem(Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type RemOutput = Integer ### impl DivRem<Integer> for Integer #### fn div_rem(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by value and returning the quotient and remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, \space x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).div_rem(Integer::from(10)).to_debug_string(), "(2, 3)"); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23).div_rem(Integer::from(-10)).to_debug_string(), "(-2, 3)"); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23).div_rem(Integer::from(10)).to_debug_string(), "(-2, -3)"); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).div_rem(Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type RemOutput = Integer ### impl<'a, 'b> DivRound<&'b Integer> for &'a Integer #### fn div_round(self, other: &'b Integer, rm: RoundingMode) -> (Integer, Ordering) Divides an `Integer` by another `Integer`, taking both by reference and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor. $$ $$ g(x, y, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil. $$ $$ g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. 
##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(4), RoundingMode::Down), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(&Integer::from(3), RoundingMode::Floor), (Integer::from(-333333333334i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(4), RoundingMode::Up), (Integer::from(-3), Ordering::Less) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(&Integer::from(3), RoundingMode::Ceiling), (Integer::from(-333333333333i64), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(5), RoundingMode::Exact), (Integer::from(-2), Ordering::Equal) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(3), RoundingMode::Nearest), (Integer::from(-3), Ordering::Greater) ); assert_eq!( (&Integer::from(-20)).div_round(&Integer::from(3), RoundingMode::Nearest), (Integer::from(-7), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(4), RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (&Integer::from(-14)).div_round(&Integer::from(4), RoundingMode::Nearest), (Integer::from(-4), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-4), RoundingMode::Down), (Integer::from(2), Ordering::Less) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(&Integer::from(-3), RoundingMode::Floor), (Integer::from(333333333333i64), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-4), RoundingMode::Up), (Integer::from(3), Ordering::Greater) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(&Integer::from(-3), RoundingMode::Ceiling), (Integer::from(333333333334i64), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-5), RoundingMode::Exact), (Integer::from(2), Ordering::Equal) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-3), RoundingMode::Nearest), (Integer::from(3), Ordering::Less) ); assert_eq!( (&Integer::from(-20)).div_round(&Integer::from(-3), RoundingMode::Nearest), (Integer::from(7), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-4), RoundingMode::Nearest), (Integer::from(2), Ordering::Less) ); assert_eq!( (&Integer::from(-14)).div_round(&Integer::from(-4), RoundingMode::Nearest), (Integer::from(4), Ordering::Greater) ); ``` #### type Output = Integer ### impl<'a> DivRound<&'a Integer> for Integer #### fn div_round(self, other: &'a Integer, rm: RoundingMode) -> (Integer, Ordering) Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor. $$ $$ g(x, y, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil. 
$$ $$ g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!( Integer::from(-10).div_round(&Integer::from(4), RoundingMode::Down), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(&Integer::from(3), RoundingMode::Floor), (Integer::from(-333333333334i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(4), RoundingMode::Up), (Integer::from(-3), Ordering::Less) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(&Integer::from(3), RoundingMode::Ceiling), (Integer::from(-333333333333i64), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(5), RoundingMode::Exact), (Integer::from(-2), Ordering::Equal) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(3), RoundingMode::Nearest), (Integer::from(-3), Ordering::Greater) ); assert_eq!( Integer::from(-20).div_round(&Integer::from(3), RoundingMode::Nearest), (Integer::from(-7), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(4), RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater) ); assert_eq!( Integer::from(-14).div_round(&Integer::from(4), RoundingMode::Nearest), (Integer::from(-4), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-4), RoundingMode::Down), (Integer::from(2), Ordering::Less) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(&Integer::from(-3), RoundingMode::Floor), (Integer::from(333333333333i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-4), RoundingMode::Up), (Integer::from(3), Ordering::Greater) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(&Integer::from(-3), RoundingMode::Ceiling), (Integer::from(333333333334i64), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-5), RoundingMode::Exact), (Integer::from(2), Ordering::Equal) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-3), RoundingMode::Nearest), (Integer::from(3), Ordering::Less) ); assert_eq!( Integer::from(-20).div_round(&Integer::from(-3), RoundingMode::Nearest), (Integer::from(7), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-4), RoundingMode::Nearest), (Integer::from(2), Ordering::Less) ); assert_eq!( Integer::from(-14).div_round(&Integer::from(-4), RoundingMode::Nearest), (Integer::from(4), Ordering::Greater) ); ``` #### type Output = Integer ### impl<'a> DivRound<Integer> for &'a Integer #### fn 
div_round(self, other: Integer, rm: RoundingMode) -> (Integer, Ordering) Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor. $$ $$ g(x, y, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil. $$ $$ g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!( (&Integer::from(-10)).div_round(Integer::from(4), RoundingMode::Down), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(Integer::from(3), RoundingMode::Floor), (Integer::from(-333333333334i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(4), RoundingMode::Up), (Integer::from(-3), Ordering::Less) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(Integer::from(3), RoundingMode::Ceiling), (Integer::from(-333333333333i64), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(5), RoundingMode::Exact), (Integer::from(-2), Ordering::Equal) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(3), RoundingMode::Nearest), (Integer::from(-3), Ordering::Greater) ); assert_eq!( (&Integer::from(-20)).div_round(Integer::from(3), RoundingMode::Nearest), (Integer::from(-7), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(4), RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (&Integer::from(-14)).div_round(Integer::from(4), RoundingMode::Nearest), (Integer::from(-4), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-4), RoundingMode::Down), (Integer::from(2), Ordering::Less) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(Integer::from(-3), RoundingMode::Floor), (Integer::from(333333333333i64), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-4), RoundingMode::Up), (Integer::from(3), Ordering::Greater) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(Integer::from(-3), RoundingMode::Ceiling), (Integer::from(333333333334i64), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-5), RoundingMode::Exact), 
(Integer::from(2), Ordering::Equal) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-3), RoundingMode::Nearest), (Integer::from(3), Ordering::Less) ); assert_eq!( (&Integer::from(-20)).div_round(Integer::from(-3), RoundingMode::Nearest), (Integer::from(7), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-4), RoundingMode::Nearest), (Integer::from(2), Ordering::Less) ); assert_eq!( (&Integer::from(-14)).div_round(Integer::from(-4), RoundingMode::Nearest), (Integer::from(4), Ordering::Greater) ); ``` #### type Output = Integer ### impl DivRound<Integer> for Integer #### fn div_round(self, other: Integer, rm: RoundingMode) -> (Integer, Ordering) Divides an `Integer` by another `Integer`, taking both by value and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor. $$ $$ g(x, y, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil. $$ $$ g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
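The returned `Ordering` makes it possible to assert, rather than assume, in which direction rounding went. The ceiling-division helper below is an editorial sketch (its name and purpose are invented for illustration) built on the by-reference `DivRound` impl documented earlier:

```
use malachite_base::num::arithmetic::traits::DivRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

// Editorial helper: how many fixed-size chunks are needed to hold `total` items.
fn chunks_needed(total: &Integer, size: &Integer) -> Integer {
    let (q, o) = total.div_round(size, RoundingMode::Ceiling);
    // For nonnegative inputs, Ceiling either hits the exact quotient (Equal)
    // or rounds up (Greater); it never rounds down.
    debug_assert!(o == Ordering::Equal || o == Ordering::Greater);
    q
}

fn main() {
    assert_eq!(chunks_needed(&Integer::from(10), &Integer::from(4)), 3);
    assert_eq!(chunks_needed(&Integer::from(12), &Integer::from(4)), 3);
}
```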
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!( Integer::from(-10).div_round(Integer::from(4), RoundingMode::Down), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(Integer::from(3), RoundingMode::Floor), (Integer::from(-333333333334i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(4), RoundingMode::Up), (Integer::from(-3), Ordering::Less) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(Integer::from(3), RoundingMode::Ceiling), (Integer::from(-333333333333i64), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(Integer::from(5), RoundingMode::Exact), (Integer::from(-2), Ordering::Equal) ); assert_eq!( Integer::from(-10).div_round(Integer::from(3), RoundingMode::Nearest), (Integer::from(-3), Ordering::Greater) ); assert_eq!( Integer::from(-20).div_round(Integer::from(3), RoundingMode::Nearest), (Integer::from(-7), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(4), RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater) ); assert_eq!( Integer::from(-14).div_round(Integer::from(4), RoundingMode::Nearest), (Integer::from(-4), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-4), RoundingMode::Down), (Integer::from(2), Ordering::Less) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(Integer::from(-3), RoundingMode::Floor), (Integer::from(333333333333i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-4), RoundingMode::Up), (Integer::from(3), Ordering::Greater) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(Integer::from(-3), RoundingMode::Ceiling), (Integer::from(333333333334i64), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-5), RoundingMode::Exact), (Integer::from(2), Ordering::Equal) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-3), RoundingMode::Nearest), (Integer::from(3), Ordering::Less) ); assert_eq!( Integer::from(-20).div_round(Integer::from(-3), RoundingMode::Nearest), (Integer::from(7), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-4), RoundingMode::Nearest), (Integer::from(2), Ordering::Less) ); assert_eq!( Integer::from(-14).div_round(Integer::from(-4), RoundingMode::Nearest), (Integer::from(4), Ordering::Greater) ); ``` #### type Output = Integer ### impl<'a> DivRoundAssign<&'a Integer> for Integer #### fn div_round_assign(&mut self, other: &'a Integer, rm: RoundingMode) -> Ordering Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference and rounding according to a specified rounding mode. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `DivRound` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRoundAssign, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(4), RoundingMode::Down), Ordering::Greater); assert_eq!(n, -2); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(&Integer::from(3), RoundingMode::Floor), Ordering::Less); assert_eq!(n, -333333333334i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(4), RoundingMode::Up), Ordering::Less); assert_eq!(n, -3); let mut n = -Integer::from(10u32).pow(12); assert_eq!( n.div_round_assign(&Integer::from(3), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, -333333333333i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(5), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, -2); let mut n = Integer::from(-10); assert_eq!( n.div_round_assign(&Integer::from(3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, -3); let mut n = Integer::from(-20); assert_eq!(n.div_round_assign(&Integer::from(3), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, -7); let mut n = Integer::from(-10); assert_eq!( n.div_round_assign(&Integer::from(4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, -2); let mut n = Integer::from(-14); assert_eq!(n.div_round_assign(&Integer::from(4), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, -4); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-4), RoundingMode::Down), Ordering::Less); assert_eq!(n, 2); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(&Integer::from(-3), RoundingMode::Floor), Ordering::Less); assert_eq!(n, 333333333333i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-4), RoundingMode::Up), Ordering::Greater); assert_eq!(n, 3); let mut n = -Integer::from(10u32).pow(12); assert_eq!( n.div_round_assign(&Integer::from(-3), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 333333333334i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-5), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, 2); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-3), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 3); let mut n = Integer::from(-20); assert_eq!( n.div_round_assign(&Integer::from(-3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 7); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-4), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 2); let mut n = Integer::from(-14); assert_eq!( n.div_round_assign(&Integer::from(-4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 4); ``` ### impl DivRoundAssign<Integer> for Integer #### fn div_round_assign(&mut self, other: Integer, rm: RoundingMode) -> Ordering Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value and rounding according to a specified rounding mode. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `DivRound` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRoundAssign, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(4), RoundingMode::Down), Ordering::Greater); assert_eq!(n, -2); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(Integer::from(3), RoundingMode::Floor), Ordering::Less); assert_eq!(n, -333333333334i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(4), RoundingMode::Up), Ordering::Less); assert_eq!(n, -3); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(Integer::from(3), RoundingMode::Ceiling), Ordering::Greater); assert_eq!(n, -333333333333i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(5), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, -2); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(3), RoundingMode::Nearest), Ordering::Greater); assert_eq!(n, -3); let mut n = Integer::from(-20); assert_eq!(n.div_round_assign(Integer::from(3), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, -7); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(4), RoundingMode::Nearest), Ordering::Greater); assert_eq!(n, -2); let mut n = Integer::from(-14); assert_eq!(n.div_round_assign(Integer::from(4), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, -4); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-4), RoundingMode::Down), Ordering::Less); assert_eq!(n, 2); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(Integer::from(-3), RoundingMode::Floor), Ordering::Less); assert_eq!(n, 333333333333i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-4), RoundingMode::Up), Ordering::Greater); assert_eq!(n, 3); let mut n = -Integer::from(10u32).pow(12); assert_eq!( n.div_round_assign(Integer::from(-3), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 333333333334i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-5), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, 2); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-3), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 3); let mut n = Integer::from(-20); assert_eq!( n.div_round_assign(Integer::from(-3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 7); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-4), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 2); let mut n = Integer::from(-14); assert_eq!( n.div_round_assign(Integer::from(-4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 4); ``` ### impl<'a, 'b> DivisibleBy<&'b Integer> for &'a Integer #### fn divisible_by(self, other: &'b Integer) -> bool Returns whether an `Integer` is divisible by another `Integer`; in other words, whether the first is a multiple of the second. Both `Integer`s are taken by reference. This means that zero is divisible by any `Integer`, including zero; but a nonzero `Integer` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. 
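Because this impl borrows both operands, it is convenient for scanning a collection without consuming it. A small editorial sketch (the data and names are illustrative, not taken from the library):

```
use malachite_base::num::arithmetic::traits::DivisibleBy;
use malachite_nz::integer::Integer;

fn main() {
    // Count the multiples of 3; note that 0 counts, since zero is divisible by anything.
    let candidates = [
        Integer::from(-9),
        Integer::from(10),
        Integer::from(0),
        Integer::from(12),
    ];
    let three = Integer::from(3);
    let multiples = candidates.iter().filter(|n| (*n).divisible_by(&three)).count();
    assert_eq!(multiples, 3); // -9, 0, and 12
}
```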
##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!((&Integer::ZERO).divisible_by(&Integer::ZERO), true); assert_eq!((&Integer::from(-100)).divisible_by(&Integer::from(-3)), false); assert_eq!((&Integer::from(102)).divisible_by(&Integer::from(-3)), true); assert_eq!( (&Integer::from_str("-1000000000000000000000000").unwrap()) .divisible_by(&Integer::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleBy<&'a Integer> for Integer #### fn divisible_by(self, other: &'a Integer) -> bool Returns whether an `Integer` is divisible by another `Integer`; in other words, whether the first is a multiple of the second. The first `Integer` is taken by value and the second by reference. This means that zero is divisible by any `Integer`, including zero; but a nonzero `Integer` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.divisible_by(&Integer::ZERO), true); assert_eq!(Integer::from(-100).divisible_by(&Integer::from(-3)), false); assert_eq!(Integer::from(102).divisible_by(&Integer::from(-3)), true); assert_eq!( Integer::from_str("-1000000000000000000000000").unwrap() .divisible_by(&Integer::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleBy<Integer> for &'a Integer #### fn divisible_by(self, other: Integer) -> bool Returns whether an `Integer` is divisible by another `Integer`; in other words, whether the first is a multiple of the second. The first `Integer` is taken by reference and the second by value. This means that zero is divisible by any `Integer`, including zero; but a nonzero `Integer` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!((&Integer::ZERO).divisible_by(Integer::ZERO), true); assert_eq!((&Integer::from(-100)).divisible_by(Integer::from(-3)), false); assert_eq!((&Integer::from(102)).divisible_by(Integer::from(-3)), true); assert_eq!( (&Integer::from_str("-1000000000000000000000000").unwrap()) .divisible_by(Integer::from_str("1000000000000").unwrap()), true ); ``` ### impl DivisibleBy<Integer> for Integer #### fn divisible_by(self, other: Integer) -> bool Returns whether an `Integer` is divisible by another `Integer`; in other words, whether the first is a multiple of the second. Both `Integer`s are taken by value. This means that zero is divisible by any `Integer`, including zero; but a nonzero `Integer` is never divisible by zero. 
It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.divisible_by(Integer::ZERO), true); assert_eq!(Integer::from(-100).divisible_by(Integer::from(-3)), false); assert_eq!(Integer::from(102).divisible_by(Integer::from(-3)), true); assert_eq!( Integer::from_str("-1000000000000000000000000").unwrap() .divisible_by(Integer::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleByPowerOf2 for &'a Integer #### fn divisible_by_power_of_2(self, pow: u64) -> bool Returns whether an `Integer` is divisible by $2^k$. $f(x, k) = (2^k|x)$. $f(x, k) = (\exists n \in \N : \ x = n2^k)$. If `self` is 0, the result is always true; otherwise, it is equivalent to `self.trailing_zeros().unwrap() >= pow`, but more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(pow, self.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivisibleByPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.divisible_by_power_of_2(100), true); assert_eq!(Integer::from(-100).divisible_by_power_of_2(2), true); assert_eq!(Integer::from(100u32).divisible_by_power_of_2(3), false); assert_eq!((-Integer::from(10u32).pow(12)).divisible_by_power_of_2(12), true); assert_eq!((-Integer::from(10u32).pow(12)).divisible_by_power_of_2(13), false); ``` ### impl<'a, 'b, 'c> EqMod<&'b Integer, &'c Natural> for &'a Integer #### fn eq_mod(self, other: &'b Integer, m: &'c Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. All three numbers are taken by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(&Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'a Integer, &'b Natural> for Integer #### fn eq_mod(self, other: &'a Integer, m: &'b Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first number is taken by value and the second and third by reference.
Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(&Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'b Integer, Natural> for &'a Integer #### fn eq_mod(self, other: &'b Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first two numbers are taken by reference and the third by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(&Integer::from(223), Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<&'a Integer, Natural> for Integer #### fn eq_mod(self, other: &'a Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first and third numbers are taken by value and the second by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
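A short editorial sketch restating the definition numerically, using the all-by-reference impl documented above; congruence is symmetric in the two `Integer`s, and modulo zero it degenerates to equality:

```
use malachite_base::num::arithmetic::traits::EqMod;
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;

fn main() {
    let m = Natural::from(12u32);
    // -5 and 7 differ by 12, so they are congruent modulo 12, in either order.
    assert!((&Integer::from(-5)).eq_mod(&Integer::from(7), &m));
    assert!((&Integer::from(7)).eq_mod(&Integer::from(-5), &m));
    // -5 and 8 differ by 13, which is not a multiple of 12.
    assert!(!(&Integer::from(-5)).eq_mod(&Integer::from(8), &m));
    // Modulo zero, only equal values are equivalent.
    assert!((&Integer::from(3)).eq_mod(&Integer::from(3), &Natural::from(0u32)));
    assert!(!(&Integer::from(3)).eq_mod(&Integer::from(4), &Natural::from(0u32)));
}
```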
##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(&Integer::from(223), Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<Integer, &'b Natural> for &'a Integer #### fn eq_mod(self, other: Integer, m: &'b Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first and third numbers are taken by reference and the second by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Integer, &'a Natural> for Integer #### fn eq_mod(self, other: Integer, m: &'a Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first two numbers are taken by value and the third by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Integer, Natural> for &'a Integer #### fn eq_mod(self, other: Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`.
The first number is taken by reference and the second and third by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(Integer::from(223), Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl EqMod<Integer, Natural> for Integer #### fn eq_mod(self, other: Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. All three numbers are taken by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(Integer::from(223), Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqModPowerOf2<&'b Integer> for &'a Integer #### fn eq_mod_power_of_2(self, other: &'b Integer, pow: u64) -> bool Returns whether one `Integer` is equal to another modulo $2^k$; that is, whether their $k$ least-significant bits (in two’s complement) are equal. $f(x, y, k) = (x \equiv y \mod 2^k)$. $f(x, y, k) = (\exists n \in \Z : x - y = n2^k)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(pow, self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqModPowerOf2; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.eq_mod_power_of_2(&Integer::from(-256), 8), true); assert_eq!(Integer::from(-0b1101).eq_mod_power_of_2(&Integer::from(0b11011), 3), true); assert_eq!(Integer::from(-0b1101).eq_mod_power_of_2(&Integer::from(0b11011), 4), false); ``` ### impl<'a, 'b> ExtendedGcd<&'a Integer> for &'b Integer #### fn extended_gcd(self, other: &'a Integer) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Integer`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. 
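As a concrete check of the identity, the editorial sketch below recomputes $ax + by$ from the returned triple for the library's own $(240, 46)$ example; it assumes the by-reference `*` and `+` operators for `Integer` and uses the `From<&Natural> for Integer` conversion described later on this page:

```
use malachite_base::num::arithmetic::traits::ExtendedGcd;
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;

fn main() {
    let a = Integer::from(240);
    let b = Integer::from(46);
    let (g, x, y) = (&a).extended_gcd(&b);
    assert_eq!(g, Natural::from(2u32));
    assert_eq!(x, -9);
    assert_eq!(y, 47);
    // Recompute a*x + b*y and compare it with the gcd, lifted to an Integer.
    assert_eq!(&a * &x + &b * &y, Integer::from(&g));
}
```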
Both `Integer`s are taken by reference. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(a, ak) = (-a, -1, 0)$ if $a < 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(bk, b) = (-b, 0, -1)$ if $b < 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(|a|, |b|)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(3)).extended_gcd(&Integer::from(5)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( (&Integer::from(240)).extended_gcd(&Integer::from(46)).to_debug_string(), "(2, -9, 47)" ); assert_eq!( (&Integer::from(-111)).extended_gcd(&Integer::from(300)).to_debug_string(), "(3, 27, 10)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> ExtendedGcd<&'a Integer> for Integer #### fn extended_gcd(self, other: &'a Integer) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Integer`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. The first `Integer` is taken by value and the second by reference. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(a, ak) = (-a, -1, 0)$ if $a < 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(bk, b) = (-b, 0, -1)$ if $b < 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(|a|, |b|)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(3).extended_gcd(&Integer::from(5)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( Integer::from(240).extended_gcd(&Integer::from(46)).to_debug_string(), "(2, -9, 47)" ); assert_eq!( Integer::from(-111).extended_gcd(&Integer::from(300)).to_debug_string(), "(3, 27, 10)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> ExtendedGcd<Integer> for &'a Integer #### fn extended_gcd(self, other: Integer) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Integer`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. The first `Integer` is taken by reference and the second by value. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(a, ak) = (-a, -1, 0)$ if $a < 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(bk, b) = (-b, 0, -1)$ if $b < 0$.
* $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(|a|, |b|)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(3)).extended_gcd(Integer::from(5)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( (&Integer::from(240)).extended_gcd(Integer::from(46)).to_debug_string(), "(2, -9, 47)" ); assert_eq!( (&Integer::from(-111)).extended_gcd(Integer::from(300)).to_debug_string(), "(3, 27, 10)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl ExtendedGcd<Integer> for Integer #### fn extended_gcd(self, other: Integer) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Integer`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. Both `Integer`s are taken by value. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(a, ak) = (-a, -1, 0)$ if $a < 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(bk, b) = (-b, 0, -1)$ if $b < 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(|a|, |b|)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(3).extended_gcd(Integer::from(5)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( Integer::from(240).extended_gcd(Integer::from(46)).to_debug_string(), "(2, -9, 47)" ); assert_eq!( Integer::from(-111).extended_gcd(Integer::from(300)).to_debug_string(), "(3, 27, 10)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> FloorRoot<u64> for &'a Integer #### fn floor_root(self, exp: u64) -> Integer Returns the floor of the $n$th root of an `Integer`, taking the `Integer` by reference. $f(x, n) = \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRoot; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(999)).floor_root(3), 9); assert_eq!((&Integer::from(1000)).floor_root(3), 10); assert_eq!((&Integer::from(1001)).floor_root(3), 10); assert_eq!((&Integer::from(100000000000i64)).floor_root(5), 158); assert_eq!((&Integer::from(-100000000000i64)).floor_root(5), -159); ``` #### type Output = Integer ### impl FloorRoot<u64> for Integer #### fn floor_root(self, exp: u64) -> Integer Returns the floor of the $n$th root of an `Integer`, taking the `Integer` by value.
$f(x, n) = \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRoot; use malachite_nz::integer::Integer; assert_eq!(Integer::from(999).floor_root(3), 9); assert_eq!(Integer::from(1000).floor_root(3), 10); assert_eq!(Integer::from(1001).floor_root(3), 10); assert_eq!(Integer::from(100000000000i64).floor_root(5), 158); assert_eq!(Integer::from(-100000000000i64).floor_root(5), -159); ``` #### type Output = Integer ### impl FloorRootAssign<u64> for Integer #### fn floor_root_assign(&mut self, exp: u64) Replaces an `Integer` with the floor of its $n$th root. $x \gets \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRootAssign; use malachite_nz::integer::Integer; let mut x = Integer::from(999); x.floor_root_assign(3); assert_eq!(x, 9); let mut x = Integer::from(1000); x.floor_root_assign(3); assert_eq!(x, 10); let mut x = Integer::from(1001); x.floor_root_assign(3); assert_eq!(x, 10); let mut x = Integer::from(100000000000i64); x.floor_root_assign(5); assert_eq!(x, 158); let mut x = Integer::from(-100000000000i64); x.floor_root_assign(5); assert_eq!(x, -159); ``` ### impl<'a> FloorSqrt for &'a Integer #### fn floor_sqrt(self) -> Integer Returns the floor of the square root of an `Integer`, taking it by reference. $f(x) = \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrt; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(99)).floor_sqrt(), 9); assert_eq!((&Integer::from(100)).floor_sqrt(), 10); assert_eq!((&Integer::from(101)).floor_sqrt(), 10); assert_eq!((&Integer::from(1000000000)).floor_sqrt(), 31622); assert_eq!((&Integer::from(10000000000u64)).floor_sqrt(), 100000); ``` #### type Output = Integer ### impl FloorSqrt for Integer #### fn floor_sqrt(self) -> Integer Returns the floor of the square root of an `Integer`, taking it by value. $f(x) = \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrt; use malachite_nz::integer::Integer; assert_eq!(Integer::from(99).floor_sqrt(), 9); assert_eq!(Integer::from(100).floor_sqrt(), 10); assert_eq!(Integer::from(101).floor_sqrt(), 10); assert_eq!(Integer::from(1000000000).floor_sqrt(), 31622); assert_eq!(Integer::from(10000000000u64).floor_sqrt(), 100000); ``` #### type Output = Integer ### impl FloorSqrtAssign for Integer #### fn floor_sqrt_assign(&mut self) Replaces an `Integer` with the floor of its square root. $x \gets \lfloor\sqrt{x}\rfloor$. 
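The floor square root is characterized by $r^2 \leq x < (r + 1)^2$ for nonnegative $x$. The editorial sketch below verifies this for one of the documented values, assuming `Integer`'s `Clone` impl and by-value `+` operator:

```
use malachite_base::num::arithmetic::traits::{FloorSqrt, Pow};
use malachite_nz::integer::Integer;

fn main() {
    let x = Integer::from(1000000000);
    let r = (&x).floor_sqrt();
    assert_eq!(r, 31622);
    // Defining property of the floor square root: r^2 <= x < (r + 1)^2.
    assert!(r.clone().pow(2) <= x);
    assert!((r + Integer::from(1)).pow(2) > x);
}
```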
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrtAssign; use malachite_nz::integer::Integer; let mut x = Integer::from(99); x.floor_sqrt_assign(); assert_eq!(x, 9); let mut x = Integer::from(100); x.floor_sqrt_assign(); assert_eq!(x, 10); let mut x = Integer::from(101); x.floor_sqrt_assign(); assert_eq!(x, 10); let mut x = Integer::from(1000000000); x.floor_sqrt_assign(); assert_eq!(x, 31622); let mut x = Integer::from(10000000000u64); x.floor_sqrt_assign(); assert_eq!(x, 100000); ``` ### impl<'a> From<&'a Integer> for Rational #### fn from(value: &'a Integer) -> Rational Converts an `Integer` to a `Rational`, taking the `Integer` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Rational::from(&Integer::from(123)), 123); assert_eq!(Rational::from(&Integer::from(-123)), -123); ``` ### impl<'a> From<&'a Natural> for Integer #### fn from(value: &'a Natural) -> Integer Converts a `Natural` to an `Integer`, taking the `Natural` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from(&Natural::from(123u32)), 123); assert_eq!(Integer::from(&Natural::from(10u32).pow(12)), 1000000000000u64); ``` ### impl From<Integer> for Rational #### fn from(value: Integer) -> Rational Converts an `Integer` to a `Rational`, taking the `Integer` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Rational::from(Integer::from(123)), 123); assert_eq!(Rational::from(Integer::from(-123)), -123); ``` ### impl From<Natural> for Integer #### fn from(value: Natural) -> Integer Converts a `Natural` to an `Integer`, taking the `Natural` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from(Natural::from(123u32)), 123); assert_eq!(Integer::from(Natural::from(10u32).pow(12)), 1000000000000u64); ``` ### impl From<bool> for Integer #### fn from(b: bool) -> Integer Converts a `bool` to 0 or 1. This function is known as the Iverson bracket. $$ f(P) = [P] = \begin{cases} 1 & \text{if} \quad P, \\ 0 & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; assert_eq!(Integer::from(false), 0); assert_eq!(Integer::from(true), 1); ``` ### impl From<i128> for Integer #### fn from(i: i128) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i16> for Integer #### fn from(i: i16) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
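The primitive-integer and `bool` conversions in this group defer their examples to another page ("See here."). As a rough illustrative sketch (the values below are not taken from those pages, but follow from the conversions as described), the `From` impls behave like this:

```
use malachite_nz::integer::Integer;

// From<bool> is the Iverson bracket: false maps to 0 and true maps to 1.
assert_eq!(Integer::from(false), 0);
assert_eq!(Integer::from(true), 1);

// Signed primitives convert directly and keep their sign; the conversion never fails.
assert_eq!(Integer::from(-123i16), -123);
assert_eq!(Integer::from(i128::MAX).to_string(), "170141183460469231731687303715884105727");
```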
### impl From<i32> for Integer #### fn from(i: i32) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i64> for Integer #### fn from(i: i64) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i8> for Integer #### fn from(i: i8) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<isize> for Integer #### fn from(i: isize) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u128> for Integer #### fn from(u: u128) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u16> for Integer #### fn from(u: u16) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u32> for Integer #### fn from(u: u32) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u64> for Integer #### fn from(u: u64) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u8> for Integer #### fn from(u: u8) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<usize> for Integer #### fn from(u: usize) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl FromSciString for Integer #### fn from_sci_string_with_options( s: &str, options: FromSciStringOptions ) -> Option<Integer> Converts a string, possibly in scientific notation, to an `Integer`. Use `FromSciStringOptions` to specify the base (from 2 to 36, inclusive) and the rounding mode, in case rounding is necessary because the string represents a non-integer. If the base is greater than 10, the higher digits are represented by the letters `'a'` through `'z'` or `'A'` through `'Z'`; the case doesn’t matter and doesn’t need to be consistent. Exponents are allowed, and are indicated using the character `'e'` or `'E'`. If the base is 15 or greater, an ambiguity arises where it may not be clear whether `'e'` is a digit or an exponent indicator. To resolve this ambiguity, always use a `'+'` or `'-'` sign after the exponent indicator when the base is 15 or greater. The exponent itself is always parsed using base 10. Decimal (or other-base) points are allowed. These are most useful in conjunction with exponents, but they may be used on their own. If the string represents a non-integer, the rounding mode specified in `options` is used to round to an integer. If the string is unparseable, `None` is returned. `None` is also returned if the rounding mode in `options` is `Exact`, but rounding is necessary.
##### Worst-case complexity $T(n, m) = O(m^n n \log m (\log n + \log\log m))$ $M(n, m) = O(m^n n \log m)$ where $T$ is time, $M$ is additional memory, $n$ is `s.len()`, and $m$ is `options.base`. ##### Examples ``` use malachite_base::num::conversion::string::options::FromSciStringOptions; use malachite_base::num::conversion::traits::FromSciString; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; assert_eq!(Integer::from_sci_string("123").unwrap(), 123); assert_eq!(Integer::from_sci_string("123.5").unwrap(), 124); assert_eq!(Integer::from_sci_string("-123.5").unwrap(), -124); assert_eq!(Integer::from_sci_string("1.23e10").unwrap(), 12300000000i64); let mut options = FromSciStringOptions::default(); assert_eq!(Integer::from_sci_string_with_options("123.5", options).unwrap(), 124); options.set_rounding_mode(RoundingMode::Floor); assert_eq!(Integer::from_sci_string_with_options("123.5", options).unwrap(), 123); options = FromSciStringOptions::default(); options.set_base(16); assert_eq!(Integer::from_sci_string_with_options("ff", options).unwrap(), 255); ``` #### fn from_sci_string(s: &str) -> Option<Self> Converts a `&str`, possibly in scientific notation, to a number, using the default `FromSciStringOptions`.### impl FromStr for Integer #### fn from_str(s: &str) -> Result<Integer, ()> Converts a string to an `Integer`. If the string does not represent a valid `Integer`, an `Err` is returned. To be valid, the string must be nonempty and only contain the `char`s `'0'` through `'9'`, with an optional leading `'-'`. Leading zeros are allowed, as is the string `"-0"`. The string `"-"` is not. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Examples ``` use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::from_str("123456").unwrap(), 123456); assert_eq!(Integer::from_str("00123456").unwrap(), 123456); assert_eq!(Integer::from_str("0").unwrap(), 0); assert_eq!(Integer::from_str("-123456").unwrap(), -123456); assert_eq!(Integer::from_str("-00123456").unwrap(), -123456); assert_eq!(Integer::from_str("-0").unwrap(), 0); assert!(Integer::from_str("").is_err()); assert!(Integer::from_str("a").is_err()); ``` #### type Err = () The associated error which can be returned from parsing.### impl FromStringBase for Integer #### fn from_string_base(base: u8, s: &str) -> Option<Integer> Converts a string, in a specified base, to an `Integer`. If the string does not represent a valid `Integer`, `None` is returned. To be valid, the string must be nonempty and only contain the `char`s `'0'` through `'9'`, `'a'` through `'z'`, and `'A'` through `'Z'`, with an optional leading `'-'`; and only characters that represent digits smaller than the base are allowed. Leading zeros are allowed, as is the string `"-0"`. The string `"-"` is not. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Panics Panics if `base` is less than 2 or greater than 36.
##### Examples ``` use malachite_base::num::conversion::traits::{Digits, FromStringBase}; use malachite_nz::integer::Integer; assert_eq!(Integer::from_string_base(10, "123456").unwrap(), 123456); assert_eq!(Integer::from_string_base(10, "00123456").unwrap(), 123456); assert_eq!(Integer::from_string_base(16, "0").unwrap(), 0); assert_eq!( Integer::from_string_base(16, "deadbeef").unwrap(), 3735928559i64 ); assert_eq!( Integer::from_string_base(16, "deAdBeEf").unwrap(), 3735928559i64 ); assert_eq!(Integer::from_string_base(10, "-123456").unwrap(), -123456); assert_eq!(Integer::from_string_base(10, "-00123456").unwrap(), -123456); assert_eq!(Integer::from_string_base(16, "-0").unwrap(), 0); assert_eq!( Integer::from_string_base(16, "-deadbeef").unwrap(), -3735928559i64 ); assert_eq!( Integer::from_string_base(16, "-deAdBeEf").unwrap(), -3735928559i64 ); assert!(Integer::from_string_base(10, "").is_none()); assert!(Integer::from_string_base(10, "a").is_none()); assert!(Integer::from_string_base(2, "2").is_none()); assert!(Integer::from_string_base(2, "-2").is_none()); ``` ### impl Hash for Integer #### fn hash<__H>(&self, state: &mut __H)where __H: Hasher, Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. ### impl<'a> IsInteger for &'a Integer #### fn is_integer(self) -> bool Determines whether an `Integer` is an integer. It always returns `true`. $f(x) = \textrm{true}$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::{NegativeOne, One, Zero}; use malachite_base::num::conversion::traits::IsInteger; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.is_integer(), true); assert_eq!(Integer::ONE.is_integer(), true); assert_eq!(Integer::from(100).is_integer(), true); assert_eq!(Integer::NEGATIVE_ONE.is_integer(), true); assert_eq!(Integer::from(-100).is_integer(), true); ``` ### impl<'a, 'b> JacobiSymbol<&'a Integer> for &'b Integer #### fn jacobi_symbol(self, other: &'a Integer) -> i8 Computes the Jacobi symbol of two `Integer`s, taking both by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).jacobi_symbol(&Integer::from(5)), 0); assert_eq!((&Integer::from(7)).jacobi_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(11)).jacobi_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(11)).jacobi_symbol(&Integer::from(9)), 1); assert_eq!((&Integer::from(-7)).jacobi_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).jacobi_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(-11)).jacobi_symbol(&Integer::from(9)), 1); ``` ### impl<'a> JacobiSymbol<&'a Integer> for Integer #### fn jacobi_symbol(self, other: &'a Integer) -> i8 Computes the Jacobi symbol of two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
##### Panics Panics if `other` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).jacobi_symbol(&Integer::from(5)), 0); assert_eq!(Integer::from(7).jacobi_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(11).jacobi_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(11).jacobi_symbol(&Integer::from(9)), 1); assert_eq!(Integer::from(-7).jacobi_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(-11).jacobi_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(-11).jacobi_symbol(&Integer::from(9)), 1); ``` ### impl<'a> JacobiSymbol<Integer> for &'a Integer #### fn jacobi_symbol(self, other: Integer) -> i8 Computes the Jacobi symbol of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).jacobi_symbol(Integer::from(5)), 0); assert_eq!((&Integer::from(7)).jacobi_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(11)).jacobi_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(11)).jacobi_symbol(Integer::from(9)), 1); assert_eq!((&Integer::from(-7)).jacobi_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).jacobi_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(-11)).jacobi_symbol(Integer::from(9)), 1); ``` ### impl JacobiSymbol<Integer> for Integer #### fn jacobi_symbol(self, other: Integer) -> i8 Computes the Jacobi symbol of two `Integer`s, taking both by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).jacobi_symbol(Integer::from(5)), 0); assert_eq!(Integer::from(7).jacobi_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(11).jacobi_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(11).jacobi_symbol(Integer::from(9)), 1); assert_eq!(Integer::from(-7).jacobi_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(-11).jacobi_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(-11).jacobi_symbol(Integer::from(9)), 1); ``` ### impl<'a, 'b> KroneckerSymbol<&'a Integer> for &'b Integer #### fn kronecker_symbol(self, other: &'a Integer) -> i8 Computes the Kronecker symbol of two `Integer`s, taking both by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).kronecker_symbol(&Integer::from(5)), 0); assert_eq!((&Integer::from(7)).kronecker_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(11)).kronecker_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(11)).kronecker_symbol(&Integer::from(9)), 1); assert_eq!((&Integer::from(11)).kronecker_symbol(&Integer::from(8)), -1); assert_eq!((&Integer::from(-7)).kronecker_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).kronecker_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(-11)).kronecker_symbol(&Integer::from(9)), 1); assert_eq!((&Integer::from(-11)).kronecker_symbol(&Integer::from(8)), -1); assert_eq!((&Integer::from(-11)).kronecker_symbol(&Integer::from(-8)), 1); ``` ### impl<'a> KroneckerSymbol<&'a Integer> for Integer #### fn kronecker_symbol(self, other: &'a Integer) -> i8 Computes the Kronecker symbol of two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).kronecker_symbol(&Integer::from(5)), 0); assert_eq!(Integer::from(7).kronecker_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(11).kronecker_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(11).kronecker_symbol(&Integer::from(9)), 1); assert_eq!(Integer::from(11).kronecker_symbol(&Integer::from(8)), -1); assert_eq!(Integer::from(-7).kronecker_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(-11).kronecker_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(-11).kronecker_symbol(&Integer::from(9)), 1); assert_eq!(Integer::from(-11).kronecker_symbol(&Integer::from(8)), -1); assert_eq!(Integer::from(-11).kronecker_symbol(&Integer::from(-8)), 1); ``` ### impl<'a> KroneckerSymbol<Integer> for &'a Integer #### fn kronecker_symbol(self, other: Integer) -> i8 Computes the Kronecker symbol of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).kronecker_symbol(Integer::from(5)), 0); assert_eq!((&Integer::from(7)).kronecker_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(11)).kronecker_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(11)).kronecker_symbol(Integer::from(9)), 1); assert_eq!((&Integer::from(11)).kronecker_symbol(Integer::from(8)), -1); assert_eq!((&Integer::from(-7)).kronecker_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).kronecker_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(-11)).kronecker_symbol(Integer::from(9)), 1); assert_eq!((&Integer::from(-11)).kronecker_symbol(Integer::from(8)), -1); assert_eq!((&Integer::from(-11)).kronecker_symbol(Integer::from(-8)), 1); ``` ### impl KroneckerSymbol<Integer> for Integer #### fn kronecker_symbol(self, other: Integer) -> i8 Computes the Kronecker symbol of two `Integer`s, taking both by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).kronecker_symbol(Integer::from(5)), 0); assert_eq!(Integer::from(7).kronecker_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(11).kronecker_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(11).kronecker_symbol(Integer::from(9)), 1); assert_eq!(Integer::from(11).kronecker_symbol(Integer::from(8)), -1); assert_eq!(Integer::from(-7).kronecker_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(-11).kronecker_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(-11).kronecker_symbol(Integer::from(9)), 1); assert_eq!(Integer::from(-11).kronecker_symbol(Integer::from(8)), -1); assert_eq!(Integer::from(-11).kronecker_symbol(Integer::from(-8)), 1); ``` ### impl<'a, 'b> LegendreSymbol<&'a Integer> for &'b Integer #### fn legendre_symbol(self, other: &'a Integer) -> i8 Computes the Legendre symbol of two `Integer`s, taking both by reference. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).legendre_symbol(&Integer::from(5)), 0); assert_eq!((&Integer::from(7)).legendre_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(11)).legendre_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(-7)).legendre_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).legendre_symbol(&Integer::from(5)), 1); ``` ### impl<'a> LegendreSymbol<&'a Integer> for Integer #### fn legendre_symbol(self, other: &'a Integer) -> i8 Computes the Legendre symbol of two `Integer`s, taking the first by value and the second by reference.
This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).legendre_symbol(&Integer::from(5)), 0); assert_eq!(Integer::from(7).legendre_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(11).legendre_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(-7).legendre_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(-11).legendre_symbol(&Integer::from(5)), 1); ``` ### impl<'a> LegendreSymbol<Integer> for &'a Integer #### fn legendre_symbol(self, other: Integer) -> i8 Computes the Legendre symbol of two `Integer`s, taking the first by reference and the second by value. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).legendre_symbol(Integer::from(5)), 0); assert_eq!((&Integer::from(7)).legendre_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(11)).legendre_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(-7)).legendre_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).legendre_symbol(Integer::from(5)), 1); ``` ### impl LegendreSymbol<Integer> for Integer #### fn legendre_symbol(self, other: Integer) -> i8 Computes the Legendre symbol of two `Integer`s, taking both by value. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).legendre_symbol(Integer::from(5)), 0); assert_eq!(Integer::from(7).legendre_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(11).legendre_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(-7).legendre_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(-11).legendre_symbol(Integer::from(5)), 1); ``` ### impl LowMask for Integer #### fn low_mask(bits: u64) -> Integer Returns an `Integer` whose least significant $b$ bits are `true` and whose other bits are `false`. $f(b) = 2^b - 1$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `bits`.
##### Examples ``` use malachite_base::num::logic::traits::LowMask; use malachite_nz::integer::Integer; assert_eq!(Integer::low_mask(0), 0); assert_eq!(Integer::low_mask(3), 7); assert_eq!(Integer::low_mask(100).to_string(), "1267650600228229401496703205375"); ``` ### impl LowerHex for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts an `Integer` to a hexadecimal `String` using lowercase characters. Using the `#` format flag prepends `"0x"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToLowerHexString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_lower_hex_string(), "0"); assert_eq!(Integer::from(123).to_lower_hex_string(), "7b"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_lower_hex_string(), "e8d4a51000" ); assert_eq!(format!("{:07x}", Integer::from(123)), "000007b"); assert_eq!(Integer::from(-123).to_lower_hex_string(), "-7b"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_lower_hex_string(), "-e8d4a51000" ); assert_eq!(format!("{:07x}", Integer::from(-123)), "-00007b"); assert_eq!(format!("{:#x}", Integer::ZERO), "0x0"); assert_eq!(format!("{:#x}", Integer::from(123)), "0x7b"); assert_eq!( format!("{:#x}", Integer::from_str("1000000000000").unwrap()), "0xe8d4a51000" ); assert_eq!(format!("{:#07x}", Integer::from(123)), "0x0007b"); assert_eq!(format!("{:#x}", Integer::from(-123)), "-0x7b"); assert_eq!( format!("{:#x}", Integer::from_str("-1000000000000").unwrap()), "-0xe8d4a51000" ); assert_eq!(format!("{:#07x}", Integer::from(-123)), "-0x007b"); ``` ### impl<'a, 'b> Mod<&'b Integer> for &'a Integer #### fn mod_op(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference and returning just the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).mod_op(&Integer::from(10)), 3); // -3 * -10 + -7 = 23 assert_eq!((&Integer::from(23)).mod_op(&Integer::from(-10)), -7); // -3 * 10 + 7 = -23 assert_eq!((&Integer::from(-23)).mod_op(&Integer::from(10)), 7); // 2 * -10 + -3 = -23 assert_eq!((&Integer::from(-23)).mod_op(&Integer::from(-10)), -3); ``` #### type Output = Integer ### impl<'a> Mod<&'a Integer> for Integer #### fn mod_op(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning just the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword.
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).mod_op(&Integer::from(10)), 3); // -3 * -10 + -7 = 23 assert_eq!(Integer::from(23).mod_op(&Integer::from(-10)), -7); // -3 * 10 + 7 = -23 assert_eq!(Integer::from(-23).mod_op(&Integer::from(10)), 7); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).mod_op(&Integer::from(-10)), -3); ``` #### type Output = Integer ### impl<'a> Mod<Integer> for &'a Integer #### fn mod_op(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning just the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).mod_op(Integer::from(10)), 3); // -3 * -10 + -7 = 23 assert_eq!((&Integer::from(23)).mod_op(Integer::from(-10)), -7); // -3 * 10 + 7 = -23 assert_eq!((&Integer::from(-23)).mod_op(Integer::from(10)), 7); // 2 * -10 + -3 = -23 assert_eq!((&Integer::from(-23)).mod_op(Integer::from(-10)), -3); ``` #### type Output = Integer ### impl Mod<Integer> for Integer #### fn mod_op(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value and returning just the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).mod_op(Integer::from(10)), 3); // -3 * -10 + -7 = 23 assert_eq!(Integer::from(23).mod_op(Integer::from(-10)), -7); // -3 * 10 + 7 = -23 assert_eq!(Integer::from(-23).mod_op(Integer::from(10)), 7); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).mod_op(Integer::from(-10)), -3); ``` #### type Output = Integer ### impl<'a> ModAssign<&'a Integer> for Integer #### fn mod_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer`, taking the second `Integer` by reference and replacing the first by the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAssign; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x.mod_assign(&Integer::from(10)); assert_eq!(x, 3); // -3 * -10 + -7 = 23 let mut x = Integer::from(23); x.mod_assign(&Integer::from(-10)); assert_eq!(x, -7); // -3 * 10 + 7 = -23 let mut x = Integer::from(-23); x.mod_assign(&Integer::from(10)); assert_eq!(x, 7); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x.mod_assign(&Integer::from(-10)); assert_eq!(x, -3); ``` ### impl ModAssign<Integer> for Integer #### fn mod_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer`, taking the second `Integer` by value and replacing the first by the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAssign; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x.mod_assign(Integer::from(10)); assert_eq!(x, 3); // -3 * -10 + -7 = 23 let mut x = Integer::from(23); x.mod_assign(Integer::from(-10)); assert_eq!(x, -7); // -3 * 10 + 7 = -23 let mut x = Integer::from(-23); x.mod_assign(Integer::from(10)); assert_eq!(x, 7); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x.mod_assign(Integer::from(-10)); assert_eq!(x, -3); ``` ### impl<'a> ModPowerOf2 for &'a Integer #### fn mod_power_of_2(self, pow: u64) -> Natural Divides an `Integer` by $2^k$, taking it by reference and returning just the remainder. The remainder is non-negative. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ Unlike `rem_power_of_2`, this function always returns a non-negative number. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 assert_eq!((&Integer::from(260)).mod_power_of_2(8), 4); // -101 * 2^4 + 5 = -1611 assert_eq!((&Integer::from(-1611)).mod_power_of_2(4), 5); ``` #### type Output = Natural ### impl ModPowerOf2 for Integer #### fn mod_power_of_2(self, pow: u64) -> Natural Divides an `Integer` by $2^k$, taking it by value and returning just the remainder. The remainder is non-negative. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ Unlike `rem_power_of_2`, this function always returns a non-negative number. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 assert_eq!(Integer::from(260).mod_power_of_2(8), 4); // -101 * 2^4 + 5 = -1611 assert_eq!(Integer::from(-1611).mod_power_of_2(4), 5); ``` #### type Output = Natural ### impl ModPowerOf2Assign for Integer #### fn mod_power_of_2_assign(&mut self, pow: u64) Divides an `Integer` by $2^k$, replacing the `Integer` by the remainder. The remainder is non-negative. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ x \gets x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ Unlike `rem_power_of_2_assign`, this function always assigns a non-negative number. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Assign; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 let mut x = Integer::from(260); x.mod_power_of_2_assign(8); assert_eq!(x, 4); // -101 * 2^4 + 5 = -1611 let mut x = Integer::from(-1611); x.mod_power_of_2_assign(4); assert_eq!(x, 5); ``` ### impl<'a, 'b> Mul<&'a Integer> for &'b Integer #### fn mul(self, other: &'a Integer) -> Integer Multiplies two `Integer`s, taking both by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::ONE * &Integer::from(123), 123); assert_eq!(&Integer::from(123) * &Integer::ZERO, 0); assert_eq!(&Integer::from(123) * &Integer::from(-456), -56088); assert_eq!( (&Integer::from(-123456789000i64) * &Integer::from(-987654321000i64)).to_string(), "121932631112635269000000" ); ``` #### type Output = Integer The resulting type after applying the `*` operator.### impl<'a> Mul<&'a Integer> for Integer #### fn mul(self, other: &'a Integer) -> Integer Multiplies two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ONE * &Integer::from(123), 123); assert_eq!(Integer::from(123) * &Integer::ZERO, 0); assert_eq!(Integer::from(123) * &Integer::from(-456), -56088); assert_eq!( (Integer::from(-123456789000i64) * &Integer::from(-987654321000i64)).to_string(), "121932631112635269000000" ); ``` #### type Output = Integer The resulting type after applying the `*` operator.### impl<'a> Mul<Integer> for &'a Integer #### fn mul(self, other: Integer) -> Integer Multiplies two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::ONE * Integer::from(123), 123); assert_eq!(&Integer::from(123) * Integer::ZERO, 0); assert_eq!(&Integer::from(123) * Integer::from(-456), -56088); assert_eq!( (&Integer::from(-123456789000i64) * Integer::from(-987654321000i64)).to_string(), "121932631112635269000000" ); ``` #### type Output = Integer The resulting type after applying the `*` operator.### impl Mul<Integer> for Integer #### fn mul(self, other: Integer) -> Integer Multiplies two `Integer`s, taking both by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ONE * Integer::from(123), 123); assert_eq!(Integer::from(123) * Integer::ZERO, 0); assert_eq!(Integer::from(123) * Integer::from(-456), -56088); assert_eq!( (Integer::from(-123456789000i64) * Integer::from(-987654321000i64)).to_string(), "121932631112635269000000" ); ``` #### type Output = Integer The resulting type after applying the `*` operator.### impl<'a> MulAssign<&'a Integer> for Integer #### fn mul_assign(&mut self, other: &'a Integer) Multiplies an `Integer` by an `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; use std::str::FromStr; let mut x = Integer::NEGATIVE_ONE; x *= &Integer::from(1000); x *= &Integer::from(2000); x *= &Integer::from(3000); x *= &Integer::from(4000); assert_eq!(x, -24000000000000i64); ``` ### impl MulAssign<Integer> for Integer #### fn mul_assign(&mut self, other: Integer) Multiplies an `Integer` by an `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; use std::str::FromStr; let mut x = Integer::NEGATIVE_ONE; x *= Integer::from(1000); x *= Integer::from(2000); x *= Integer::from(3000); x *= Integer::from(4000); assert_eq!(x, -24000000000000i64); ``` ### impl Named for Integer #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl<'a> Neg for &'a Integer #### fn neg(self) -> Integer Negates an `Integer`, taking it by reference. $$ f(x) = -x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(-&Integer::ZERO, 0); assert_eq!(-&Integer::from(123), -123); assert_eq!(-&Integer::from(-123), 123); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl Neg for Integer #### fn neg(self) -> Integer Negates an `Integer`, taking it by value. $$ f(x) = -x. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(-Integer::ZERO, 0); assert_eq!(-Integer::from(123), -123); assert_eq!(-Integer::from(-123), 123); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl NegAssign for Integer #### fn neg_assign(&mut self) Negates an `Integer` in place. $$ x \gets -x. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.neg_assign(); assert_eq!(x, 0); let mut x = Integer::from(123); x.neg_assign(); assert_eq!(x, -123); let mut x = Integer::from(-123); x.neg_assign(); assert_eq!(x, 123); ``` ### impl NegativeOne for Integer The constant -1. #### const NEGATIVE_ONE: Integer = _ ### impl<'a> Not for &'a Integer #### fn not(self) -> Integer Returns the bitwise negation of an `Integer`, taking it by reference. $$ f(n) = -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(!&Integer::ZERO, -1); assert_eq!(!&Integer::from(123), -124); assert_eq!(!&Integer::from(-123), 122); ``` #### type Output = Integer The resulting type after applying the `!` operator.### impl Not for Integer #### fn not(self) -> Integer Returns the bitwise negation of an `Integer`, taking it by value. $$ f(n) = -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(!Integer::ZERO, -1); assert_eq!(!Integer::from(123), -124); assert_eq!(!Integer::from(-123), 122); ``` #### type Output = Integer The resulting type after applying the `!` operator.### impl NotAssign for Integer #### fn not_assign(&mut self) Replaces an `Integer` with its bitwise negation. $$ n \gets -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::NotAssign; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.not_assign(); assert_eq!(x, -1); let mut x = Integer::from(123); x.not_assign(); assert_eq!(x, -124); let mut x = Integer::from(-123); x.not_assign(); assert_eq!(x, 122); ``` ### impl Octal for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts an `Integer` to an octal `String`. Using the `#` format flag prepends `"0o"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToOctalString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_octal_string(), "0"); assert_eq!(Integer::from(123).to_octal_string(), "173"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_octal_string(), "16432451210000" ); assert_eq!(format!("{:07o}", Integer::from(123)), "0000173"); assert_eq!(Integer::from(-123).to_octal_string(), "-173"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_octal_string(), "-16432451210000" ); assert_eq!(format!("{:07o}", Integer::from(-123)), "-000173"); assert_eq!(format!("{:#o}", Integer::ZERO), "0o0"); assert_eq!(format!("{:#o}", Integer::from(123)), "0o173"); assert_eq!( format!("{:#o}", Integer::from_str("1000000000000").unwrap()), "0o16432451210000" ); assert_eq!(format!("{:#07o}", Integer::from(123)), "0o00173"); assert_eq!(format!("{:#o}", Integer::from(-123)), "-0o173"); assert_eq!( format!("{:#o}", Integer::from_str("-1000000000000").unwrap()), "-0o16432451210000" ); assert_eq!(format!("{:#07o}", Integer::from(-123)), "-0o0173"); ``` ### impl One for Integer The constant 1. #### const ONE: Integer = _ ### impl Ord for Integer #### fn cmp(&self, other: &Integer) -> Ordering Compares two `Integer`s. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; assert!(Integer::from(-123) < Integer::from(-122)); assert!(Integer::from(-123) <= Integer::from(-122)); assert!(Integer::from(-123) > Integer::from(-124)); assert!(Integer::from(-123) >= Integer::from(-124)); ``` 1.21.0 · source#### fn max(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere Self: Sized + PartialOrd<Self>, Restrict a value to a certain interval. #### fn cmp_abs(&self, other: &Integer) -> Ordering Compares the absolute values of two `Integer`s. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; assert!(Integer::from(-123).lt_abs(&Integer::from(-124))); assert!(Integer::from(-123).le_abs(&Integer::from(-124))); assert!(Integer::from(-124).gt_abs(&Integer::from(-123))); assert!(Integer::from(-124).ge_abs(&Integer::from(-123))); ``` ### impl<'a> OverflowingFrom<&'a Integer> for i128 #### fn overflowing_from(value: &Integer) -> (i128, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for i16 #### fn overflowing_from(value: &Integer) -> (i16, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. 
##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for i32 #### fn overflowing_from(value: &Integer) -> (i32, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for i64 #### fn overflowing_from(value: &Integer) -> (i64, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for i8 #### fn overflowing_from(value: &Integer) -> (i8, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for isize #### fn overflowing_from(value: &Integer) -> (isize, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u128 #### fn overflowing_from(value: &Integer) -> (u128, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u16 #### fn overflowing_from(value: &Integer) -> (u16, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u32 #### fn overflowing_from(value: &Integer) -> (u32, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u64 #### fn overflowing_from(value: &Integer) -> (u64, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u8 #### fn overflowing_from(value: &Integer) -> (u8, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
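The `OverflowingFrom` impls in this group point elsewhere for their examples ("See here."). As an illustrative sketch of the wrapping rule they describe (the concrete values below are mine, derived by reducing modulo $2^W$, not taken from the linked pages), each conversion returns the wrapped value together with a flag indicating whether wrapping occurred:

```
use malachite_base::num::conversion::traits::OverflowingFrom;
use malachite_nz::integer::Integer;

// A value that fits is returned unchanged, and the flag is false.
assert_eq!(u8::overflowing_from(&Integer::from(123)), (123, false));

// Out-of-range values wrap modulo 2^W; here W = 8, so 260 wraps to 4 and -200 to 56.
assert_eq!(u8::overflowing_from(&Integer::from(260)), (4, true));
assert_eq!(i8::overflowing_from(&Integer::from(-200)), (56, true));
```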
### impl<'a> OverflowingFrom<&'a Integer> for usize #### fn overflowing_from(value: &Integer) -> (usize, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> Parity for &'a Integer #### fn even(self) -> bool Tests whether an `Integer` is even. $f(x) = (2|x)$. $f(x) = (\exists k \in \N : x = 2k)$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Parity, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.even(), true); assert_eq!(Integer::from(123).even(), false); assert_eq!(Integer::from(-0x80).even(), true); assert_eq!(Integer::from(10u32).pow(12).even(), true); assert_eq!((-Integer::from(10u32).pow(12) - Integer::ONE).even(), false); ``` #### fn odd(self) -> bool Tests whether an `Integer` is odd. $f(x) = (2\nmid x)$. $f(x) = (\exists k \in \N : x = 2k+1)$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Parity, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.odd(), false); assert_eq!(Integer::from(123).odd(), true); assert_eq!(Integer::from(-0x80).odd(), false); assert_eq!(Integer::from(10u32).pow(12).odd(), false); assert_eq!((-Integer::from(10u32).pow(12) - Integer::ONE).odd(), true); ``` ### impl PartialEq<Integer> for Natural #### fn eq(&self, other: &Integer) -> bool Determines whether a `Natural` is equal to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())` ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32) == Integer::from(123)); assert!(Natural::from(123u32) != Integer::from(5)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Integer> for Rational #### fn eq(&self, other: &Integer) -> bool Determines whether a `Rational` is equal to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Rational::from(-123) == Integer::from(-123)); assert!(Rational::from_signeds(22, 7) != Integer::from(5)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Natural> for Integer #### fn eq(&self, other: &Natural) -> bool Determines whether an `Integer` is equal to a `Natural`. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())` ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123) == Natural::from(123u32)); assert!(Integer::from(123) != Natural::from(5u32)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Rational> for Integer #### fn eq(&self, other: &Rational) -> bool Determines whether an `Integer` is equal to a `Rational`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Integer::from(-123) == Rational::from(-123)); assert!(Integer::from(5) != Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f32> for Integer #### fn eq(&self, other: &f32) -> bool Determines whether an `Integer` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f64> for Integer #### fn eq(&self, other: &f64) -> bool Determines whether an `Integer` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i128> for Integer #### fn eq(&self, other: &i128) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i16> for Integer #### fn eq(&self, other: &i16) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i32> for Integer #### fn eq(&self, other: &i32) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i64> for Integer #### fn eq(&self, other: &i64) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i8> for Integer #### fn eq(&self, other: &i8) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<isize> for Integer #### fn eq(&self, other: &isize) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u128> for Integer #### fn eq(&self, other: &u128) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u16> for Integer #### fn eq(&self, other: &u16) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u32> for Integer #### fn eq(&self, other: &u32) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u64> for Integer #### fn eq(&self, other: &u64) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u8> for Integer #### fn eq(&self, other: &u8) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<usize> for Integer #### fn eq(&self, other: &usize) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Integer> for Integer #### fn eq(&self, other: &Integer) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<Integer> for Natural #### fn partial_cmp(&self, other: &Integer) -> Option<OrderingCompares a `Natural` to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32) > Integer::from(122)); assert!(Natural::from(123u32) >= Integer::from(122)); assert!(Natural::from(123u32) < Integer::from(124)); assert!(Natural::from(123u32) <= Integer::from(124)); assert!(Natural::from(123u32) > Integer::from(-123)); assert!(Natural::from(123u32) >= Integer::from(-123)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Integer) -> Option<OrderingCompares a `Rational` to an `Integer`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Rational::from_signeds(22, 7) > Integer::from(3)); assert!(Rational::from_signeds(22, 7) < Integer::from(4)); assert!(Rational::from_signeds(-22, 7) < Integer::from(-3)); assert!(Rational::from_signeds(-22, 7) > Integer::from(-4)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Natural) -> Option<OrderingCompares an `Integer` to a `Natural`. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123) > Natural::from(122u32)); assert!(Integer::from(123) >= Natural::from(122u32)); assert!(Integer::from(123) < Natural::from(124u32)); assert!(Integer::from(123) <= Natural::from(124u32)); assert!(Integer::from(-123) < Natural::from(123u32)); assert!(Integer::from(-123) <= Natural::from(123u32)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Rational) -> Option<OrderingCompares an `Integer` to a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Integer::from(3) < Rational::from_signeds(22, 7)); assert!(Integer::from(4) > Rational::from_signeds(22, 7)); assert!(Integer::from(-3) > Rational::from_signeds(-22, 7)); assert!(Integer::from(-4) < Rational::from_signeds(-22, 7)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f32) -> Option<OrderingCompares an `Integer` to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f64) -> Option<OrderingCompares an `Integer` to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `self.significant_bits()`. ##### Examples See here. 
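The float comparisons above lost their linked examples in extraction. A minimal sketch of the behavior, with values chosen here for illustration; since `PartialOrd<f32>` and `PartialOrd<f64>` are implemented for `Integer`, the ordinary comparison operators can be used directly:

```
use malachite_nz::integer::Integer;

// Comparisons against primitive floats go through PartialOrd<f64> for Integer.
assert!(Integer::from(123) > 122.5f64);
assert!(Integer::from(123) < 123.5f64);
assert!(Integer::from(-1) < 0.0f64);
```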
1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i128) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i16) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i32) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i64) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. 
Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i8) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &isize) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u128) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u16) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. 
Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u32) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u64) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u8) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &usize) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. 
#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. ### impl PartialOrd<Integer> for Integer #### fn partial_cmp(&self, other: &Integer) -> Option<Ordering> Compares two `Integer`s. See the documentation for the `Ord` implementation. #### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. #### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. #### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. #### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. ### impl PartialOrdAbs<Integer> for Natural #### fn partial_cmp_abs(&self, other: &Integer) -> Option<Ordering> Compares the absolute values of a `Natural` and an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32).gt_abs(&Integer::from(122))); assert!(Natural::from(123u32).ge_abs(&Integer::from(122))); assert!(Natural::from(123u32).lt_abs(&Integer::from(124))); assert!(Natural::from(123u32).le_abs(&Integer::from(124))); assert!(Natural::from(123u32).lt_abs(&Integer::from(-124))); assert!(Natural::from(123u32).le_abs(&Integer::from(-124))); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. #### fn le_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than or equal to the absolute value of another. #### fn gt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is greater than the absolute value of another. #### fn ge_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is greater than or equal to the absolute value of another. ### impl PartialOrdAbs<Integer> for Rational #### fn partial_cmp_abs(&self, other: &Integer) -> Option<Ordering> Compares the absolute values of a `Rational` and an `Integer`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Rational::from_signeds(22, 7).partial_cmp_abs(&Integer::from(3)), Some(Ordering::Greater) ); assert_eq!( Rational::from_signeds(-22, 7).partial_cmp_abs(&Integer::from(-3)), Some(Ordering::Greater) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. #### fn le_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than or equal to the absolute value of another. #### fn gt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is greater than the absolute value of another. #### fn ge_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<Ordering> Compares the absolute values of a primitive float and an `Integer`.
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a primitive float and an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. 
##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. 
#### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute values of an `Integer` and a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123).gt_abs(&Natural::from(122u32))); assert!(Integer::from(123).ge_abs(&Natural::from(122u32))); assert!(Integer::from(123).lt_abs(&Natural::from(124u32))); assert!(Integer::from(123).le_abs(&Natural::from(124u32))); assert!(Integer::from(-124).gt_abs(&Natural::from(123u32))); assert!(Integer::from(-124).ge_abs(&Natural::from(123u32))); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an `Integer` and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Integer::from(3).partial_cmp_abs(&Rational::from_signeds(22, 7)), Some(Ordering::Less) ); assert_eq!( Integer::from(-3).partial_cmp_abs(&Rational::from_signeds(-22, 7)), Some(Ordering::Less) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f32) -> Option<OrderingCompares the absolute values of an `Integer` and a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f64) -> Option<OrderingCompares the absolute values of an `Integer` and a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i128) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. 
Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i16) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i32) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i64) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i8) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &isize) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. 
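The signed-primitive comparisons above refer to examples that were lost in extraction. A minimal sketch, with values chosen here for illustration; only magnitudes are compared, so signs on either side are ignored:

```
use malachite_base::num::comparison::traits::PartialOrdAbs;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

// |-123| > |122|, |-123| < |-124|, and |-123| == |123|.
assert!(Integer::from(-123).gt_abs(&122i32));
assert!(Integer::from(-123).lt_abs(&-124i64));
assert_eq!(Integer::from(-123).partial_cmp_abs(&123i8), Some(Ordering::Equal));
```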
#### fn partial_cmp_abs(&self, other: &u128) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u16) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u32) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u64) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u8) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. 
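Similarly, a brief illustrative sketch for the unsigned-primitive impls above (values chosen here, not from the linked pages):

```
use malachite_base::num::comparison::traits::PartialOrdAbs;
use malachite_nz::integer::Integer;

// The Integer's sign is ignored; only magnitudes are compared.
assert!(Integer::from(-200).gt_abs(&100u8));
assert!(Integer::from(-200).lt_abs(&201u64));
assert!(Integer::from(200).ge_abs(&200u32));
```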
#### fn partial_cmp_abs(&self, other: &usize) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of two `Integer`s. See the documentation for the `OrdAbs` implementation. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn pow(self, exp: u64) -> Integer Raises an `Integer` to a power, taking the `Integer` by reference. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!( (&Integer::from(-3)).pow(100).to_string(), "515377520732011331036461129765621272702107522001" ); assert_eq!( (&Integer::from_str("-12345678987654321").unwrap()).pow(3).to_string(), "-1881676411868862234942354805142998028003108518161" ); ``` #### type Output = Integer ### impl Pow<u64> for Integer #### fn pow(self, exp: u64) -> Integer Raises an `Integer` to a power, taking the `Integer` by value. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!( Integer::from(-3).pow(100).to_string(), "515377520732011331036461129765621272702107522001" ); assert_eq!( Integer::from_str("-12345678987654321").unwrap().pow(3).to_string(), "-1881676411868862234942354805142998028003108518161" ); ``` #### type Output = Integer ### impl PowAssign<u64> for Integer #### fn pow_assign(&mut self, exp: u64) Raises an `Integer` to a power in place. $x \gets x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::PowAssign; use malachite_nz::integer::Integer; use std::str::FromStr; let mut x = Integer::from(-3); x.pow_assign(100); assert_eq!(x.to_string(), "515377520732011331036461129765621272702107522001"); let mut x = Integer::from_str("-12345678987654321").unwrap(); x.pow_assign(3); assert_eq!(x.to_string(), "-1881676411868862234942354805142998028003108518161"); ``` ### impl PowerOf2<u64> for Integer #### fn power_of_2(pow: u64) -> Integer Raises 2 to an integer power. $f(k) = 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowerOf2; use malachite_nz::integer::Integer; assert_eq!(Integer::power_of_2(0), 1); assert_eq!(Integer::power_of_2(3), 8); assert_eq!(Integer::power_of_2(100).to_string(), "1267650600228229401496703205376"); ``` ### impl<'a> Product<&'a Integer> for Integer #### fn product<I>(xs: I) -> Integerwhere I: Iterator<Item = &'a Integer>, Multiplies together all the `Integer`s in an iterator of `Integer` references. $$ f((x_i)_ {i=0}^{n-1}) = \prod_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Integer::sum(xs.map(Integer::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use std::iter::Product; assert_eq!( Integer::product(vec_from_str::<Integer>("[2, -3, 5, 7]").unwrap().iter()), -210 ); ``` ### impl Product<Integer> for Integer #### fn product<I>(xs: I) -> Integerwhere I: Iterator<Item = Integer>, Multiplies together all the `Integer`s in an iterator. $$ f((x_i)_ {i=0}^{n-1}) = \prod_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Integer::sum(xs.map(Integer::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use std::iter::Product; assert_eq!( Integer::product(vec_from_str::<Integer>("[2, -3, 5, 7]").unwrap().into_iter()), -210 ); ``` ### impl<'a, 'b> Rem<&'b Integer> for &'a Integer #### fn rem(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference and returning just the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(&Integer::from(23) % &Integer::from(10), 3); // -2 * -10 + 3 = 23 assert_eq!(&Integer::from(23) % &Integer::from(-10), 3); // -2 * 10 + -3 = -23 assert_eq!(&Integer::from(-23) % &Integer::from(10), -3); // 2 * -10 + -3 = -23 assert_eq!(&Integer::from(-23) % &Integer::from(-10), -3); ``` #### type Output = Integer The resulting type after applying the `%` operator.### impl<'a> Rem<&'a Integer> for Integer #### fn rem(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning just the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23) % &Integer::from(10), 3); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23) % &Integer::from(-10), 3); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23) % &Integer::from(10), -3); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23) % &Integer::from(-10), -3); ``` #### type Output = Integer The resulting type after applying the `%` operator.### impl<'a> Rem<Integer> for &'a Integer #### fn rem(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning just the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(&Integer::from(23) % Integer::from(10), 3); // -2 * -10 + 3 = 23 assert_eq!(&Integer::from(23) % Integer::from(-10), 3); // -2 * 10 + -3 = -23 assert_eq!(&Integer::from(-23) % Integer::from(10), -3); // 2 * -10 + -3 = -23 assert_eq!(&Integer::from(-23) % Integer::from(-10), -3); ``` #### type Output = Integer The resulting type after applying the `%` operator.### impl Rem<Integer> for Integer #### fn rem(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value and returning just the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero.
##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23) % Integer::from(10), 3); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23) % Integer::from(-10), 3); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23) % Integer::from(10), -3); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23) % Integer::from(-10), -3); ``` #### type Output = Integer The resulting type after applying the `%` operator.### impl<'a> RemAssign<&'a Integer> for Integer #### fn rem_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer`, taking the second `Integer` by reference and replacing the first by the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x %= &Integer::from(10); assert_eq!(x, 3); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x %= &Integer::from(-10); assert_eq!(x, 3); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x %= &Integer::from(10); assert_eq!(x, -3); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x %= &Integer::from(-10); assert_eq!(x, -3); ``` ### impl RemAssign<Integer> for Integer #### fn rem_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer`, taking the second `Integer` by value and replacing the first by the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x %= Integer::from(10); assert_eq!(x, 3); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x %= Integer::from(-10); assert_eq!(x, 3); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x %= Integer::from(10); assert_eq!(x, -3); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x %= Integer::from(-10); assert_eq!(x, -3); ``` ### impl<'a> RemPowerOf2 for &'a Integer #### fn rem_power_of_2(self, pow: u64) -> Integer Divides an `Integer` by $2^k$, taking it by reference and returning just the remainder. The remainder has the same sign as the first number. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq |r| < 2^k$. $$ f(x, k) = x - 2^k\operatorname{sgn}(x)\left \lfloor \frac{|x|}{2^k} \right \rfloor. $$ Unlike `mod_power_of_2`, this function always returns zero or a number with the same sign as `self`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. 
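Before the examples, a brief editorial sketch of the `mod_power_of_2` contrast mentioned above. It assumes the `ModPowerOf2` trait from `malachite_base` is implemented for `Integer` and that its result is always non-negative; the exact output type is an assumption, not documented here.

```
// Hedged sketch: rem_power_of_2 keeps the sign of self, while the assumed
// mod_power_of_2 (ModPowerOf2 trait) is never negative.
use malachite_base::num::arithmetic::traits::{ModPowerOf2, RemPowerOf2};
use malachite_nz::integer::Integer;

// -1611 = -100 * 2^4 + (-11): rem_power_of_2 returns -11...
assert_eq!(Integer::from(-1611).rem_power_of_2(4), -11);
// ...while -1611 = -101 * 2^4 + 5: mod_power_of_2 returns 5.
assert_eq!(Integer::from(-1611).mod_power_of_2(4), 5u32);
```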
##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 assert_eq!((&Integer::from(260)).rem_power_of_2(8), 4); // -100 * 2^4 + -11 = -1611 assert_eq!((&Integer::from(-1611)).rem_power_of_2(4), -11); ``` #### type Output = Integer ### impl RemPowerOf2 for Integer #### fn rem_power_of_2(self, pow: u64) -> Integer Divides an `Integer` by $2^k$, taking it by value and returning just the remainder. The remainder has the same sign as the first number. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq |r| < 2^k$. $$ f(x, k) = x - 2^k\operatorname{sgn}(x)\left \lfloor \frac{|x|}{2^k} \right \rfloor. $$ Unlike `mod_power_of_2`, this function always returns zero or a number with the same sign as `self`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 assert_eq!(Integer::from(260).rem_power_of_2(8), 4); // -100 * 2^4 + -11 = -1611 assert_eq!(Integer::from(-1611).rem_power_of_2(4), -11); ``` #### type Output = Integer ### impl RemPowerOf2Assign for Integer #### fn rem_power_of_2_assign(&mut self, pow: u64) Divides an `Integer` by $2^k$, replacing the `Integer` by the remainder. The remainder has the same sign as the `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq |r| < 2^k$. $$ x \gets x - 2^k\operatorname{sgn}(x)\left \lfloor \frac{|x|}{2^k} \right \rfloor. $$ Unlike `mod_power_of_2_assign`, this function never changes the sign of `self`, except possibly to set `self` to 0. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2Assign; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 let mut x = Integer::from(260); x.rem_power_of_2_assign(8); assert_eq!(x, 4); // -100 * 2^4 + -11 = -1611 let mut x = Integer::from(-1611); x.rem_power_of_2_assign(4); assert_eq!(x, -11); ``` ### impl<'a, 'b> RoundToMultiple<&'b Integer> for &'a Integer #### fn round_to_multiple( self, other: &'b Integer, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of another `Integer`, according to a specified rounding mode. Both `Integer`s are taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{|y|}$: $f(x, y, \mathrm{Down}) = \operatorname{sgn}(q) |y| \lfloor |q| \rfloor.$ $f(x, y, \mathrm{Up}) = \operatorname{sgn}(q) |y| \lceil |q| \rceil.$ $f(x, y, \mathrm{Floor}) = |y| \lfloor q \rfloor.$ $f(x, y, \mathrm{Ceiling}) = |y| \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. 
The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(-5)).round_to_multiple(&Integer::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( (&Integer::from(-20)).round_to_multiple(&Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-14)).round_to_multiple(&Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( (&Integer::from(-20)).round_to_multiple(&Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-14)).round_to_multiple(&Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); ``` #### type Output = Integer ### impl<'a> RoundToMultiple<&'a Integer> for Integer #### fn round_to_multiple( self, other: &'a Integer, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of another `Integer`, according to a specified rounding mode. The first `Integer` is taken by value and the second by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. 
Let $q = \frac{x}{|y|}$: $f(x, y, \mathrm{Down}) = \operatorname{sgn}(q) |y| \lfloor |q| \rfloor.$ $f(x, y, \mathrm{Up}) = \operatorname{sgn}(q) |y| \lceil |q| \rceil.$ $f(x, y, \mathrm{Floor}) = |y| \lfloor q \rfloor.$ $f(x, y, \mathrm{Ceiling}) = |y| \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(-5).round_to_multiple(&Integer::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( Integer::from(-20).round_to_multiple(&Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-14).round_to_multiple(&Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( Integer::from(-20).round_to_multiple(&Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-14).round_to_multiple(&Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); ``` #### type Output = Integer ### impl<'a> 
RoundToMultiple<Integer> for &'a Integer #### fn round_to_multiple( self, other: Integer, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of another `Integer`, according to a specified rounding mode. The first `Integer` is taken by reference and the second by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{|y|}$: $f(x, y, \mathrm{Down}) = \operatorname{sgn}(q) |y| \lfloor |q| \rfloor.$ $f(x, y, \mathrm{Up}) = \operatorname{sgn}(q) |y| \lceil |q| \rceil.$ $f(x, y, \mathrm{Floor}) = |y| \lfloor q \rfloor.$ $f(x, y, \mathrm{Ceiling}) = |y| \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(-5)).round_to_multiple(Integer::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( (&Integer::from(-20)).round_to_multiple(Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-14)).round_to_multiple(Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" 
); assert_eq!( (&Integer::from(-20)).round_to_multiple(Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-14)).round_to_multiple(Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); ``` #### type Output = Integer ### impl RoundToMultiple<Integer> for Integer #### fn round_to_multiple( self, other: Integer, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of another `Integer`, according to a specified rounding mode. Both `Integer`s are taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{|y|}$: $f(x, y, \mathrm{Down}) = \operatorname{sgn}(q) |y| \lfloor |q| \rfloor.$ $f(x, y, \mathrm{Up}) = \operatorname{sgn}(q) |y| \lceil |q| \rceil.$ $f(x, y, \mathrm{Floor}) = |y| \lfloor q \rfloor.$ $f(x, y, \mathrm{Ceiling}) = |y| \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
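Before the examples, a small editorial sketch of the equivalence noted above: checking divisibility first is the safe way to use `RoundingMode::Exact`. The sketch assumes the `DivisibleBy` trait from `malachite_base` is implemented for `Integer` references.

```
// Hedged sketch: `Exact` panics unless the value is already a multiple of
// `other`, so guard the call with the assumed `divisible_by`.
use malachite_base::num::arithmetic::traits::{DivisibleBy, RoundToMultiple};
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

let x = Integer::from(-12);
let m = Integer::from(4);
if (&x).divisible_by(&m) {
    // Exact rounding leaves a multiple unchanged and reports Equal.
    let (rounded, o) = x.round_to_multiple(m, RoundingMode::Exact);
    assert_eq!(rounded, -12);
    assert_eq!(o, Ordering::Equal);
}
```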
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(-5).round_to_multiple(Integer::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( Integer::from(-20).round_to_multiple(Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-14).round_to_multiple(Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( Integer::from(-20).round_to_multiple(Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-14).round_to_multiple(Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); ``` #### type Output = Integer ### impl<'a> RoundToMultipleAssign<&'a Integer> for Integer #### fn round_to_multiple_assign( &mut self, other: &'a Integer, rm: RoundingMode ) -> Ordering Rounds an `Integer` to a multiple of another `Integer` in place, according to a specified rounding mode. The `Integer` on the right-hand side is taken by reference. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_assign(other, RoundingMode::Exact);` * `assert!(x.divisible_by(other));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut x = Integer::from(-5); assert_eq!( x.round_to_multiple_assign(&Integer::ZERO, RoundingMode::Down), Ordering::Greater ); assert_eq!(x, 0); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(4), RoundingMode::Down), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(4), RoundingMode::Up), Ordering::Less ); assert_eq!(x, -12); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(5), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, -10); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -9); let mut x = Integer::from(-20); assert_eq!( x.round_to_multiple_assign(&Integer::from(3), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -21); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-14); assert_eq!( x.round_to_multiple_assign(&Integer::from(4), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -16); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-4), RoundingMode::Down), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-4), RoundingMode::Up), Ordering::Less ); assert_eq!(x, -12); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-5), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, -10); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -9); let mut x = Integer::from(-20); assert_eq!( x.round_to_multiple_assign(&Integer::from(-3), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -21); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-14); assert_eq!( x.round_to_multiple_assign(&Integer::from(-4), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -16); ``` ### impl RoundToMultipleAssign<Integer> for Integer #### fn round_to_multiple_assign( &mut self, other: Integer, rm: RoundingMode ) -> Ordering Rounds an `Integer` to a multiple of another `Integer` in place, according to a specified rounding mode. The `Integer` on the right-hand side is taken by value. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_assign(other, RoundingMode::Exact);` * `assert!(x.divisible_by(other));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut x = Integer::from(-5); assert_eq!( x.round_to_multiple_assign(Integer::ZERO, RoundingMode::Down), Ordering::Greater ); assert_eq!(x, 0); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(4), RoundingMode::Down), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-10); assert_eq!(x.round_to_multiple_assign(Integer::from(4), RoundingMode::Up), Ordering::Less); assert_eq!(x, -12); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(5), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, -10); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -9); let mut x = Integer::from(-20); assert_eq!( x.round_to_multiple_assign(Integer::from(3), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -21); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-14); assert_eq!( x.round_to_multiple_assign(Integer::from(4), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -16); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-4), RoundingMode::Down), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-4), RoundingMode::Up), Ordering::Less ); assert_eq!(x, -12); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-5), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, -10); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -9); let mut x = Integer::from(-20); assert_eq!( x.round_to_multiple_assign(Integer::from(-3), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -21); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-14); assert_eq!( x.round_to_multiple_assign(Integer::from(-4), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -16); ``` ### impl<'a> RoundToMultipleOfPowerOf2<u64> for &'a Integer #### fn round_to_multiple_of_power_of_2( self, pow: u64, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of $2^k$ according to a specified rounding mode. The `Integer` is taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. 
Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = 2^k \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = 2^k \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2(pow, RoundingMode::Exact)` * `{ assert!(x.divisible_by_power_of_2(pow)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(10)).round_to_multiple_of_power_of_2(2, RoundingMode::Floor) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple_of_power_of_2(2, RoundingMode::Ceiling) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(10)).round_to_multiple_of_power_of_2(2, RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple_of_power_of_2(2, RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(10)).round_to_multiple_of_power_of_2(2, RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Integer::from(-12)).round_to_multiple_of_power_of_2(2, RoundingMode::Exact) .to_debug_string(), "(-12, Equal)" ); ``` #### type Output = Integer ### impl RoundToMultipleOfPowerOf2<u64> for Integer #### fn round_to_multiple_of_power_of_2( self, pow: u64, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of $2^k$ according to a specified rounding mode. The `Integer` is taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = 2^k \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = 2^k \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. 
The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2(pow, RoundingMode::Exact)` * `{ assert!(x.divisible_by_power_of_2(pow)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(10).round_to_multiple_of_power_of_2(2, RoundingMode::Floor) .to_debug_string(), "(8, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple_of_power_of_2(2, RoundingMode::Ceiling) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(10).round_to_multiple_of_power_of_2(2, RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple_of_power_of_2(2, RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(10).round_to_multiple_of_power_of_2(2, RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( Integer::from(-12).round_to_multiple_of_power_of_2(2, RoundingMode::Exact) .to_debug_string(), "(-12, Equal)" ); ``` #### type Output = Integer ### impl RoundToMultipleOfPowerOf2Assign<u64> for Integer #### fn round_to_multiple_of_power_of_2_assign( &mut self, pow: u64, rm: RoundingMode ) -> Ordering Rounds an `Integer` to a multiple of $2^k$ in place, according to a specified rounding mode. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultipleOfPowerOf2` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2_assign(pow, RoundingMode::Exact);` * `assert!(x.divisible_by_power_of_2(pow));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2Assign; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut n = Integer::from(10); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Floor), Ordering::Less ); assert_eq!(n, 8); let mut n = Integer::from(-10); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, -8); let mut n = Integer::from(10); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Down), Ordering::Less ); assert_eq!(n, 8); let mut n = Integer::from(-10); assert_eq!(n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Up), Ordering::Less); assert_eq!(n, -12); let mut n = Integer::from(10); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Nearest), Ordering::Less ); assert_eq!(n, 8); let mut n = Integer::from(-12); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Exact), Ordering::Equal ); assert_eq!(n, -12); ``` ### impl<'a> RoundingFrom<&'a Integer> for f32 #### fn rounding_from(value: &'a Integer, rm: RoundingMode) -> (f32, Ordering) Converts an `Integer` to a primitive float according to a specified `RoundingMode`. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. * If the rounding mode is `Floor` the largest float less than or equal to the `Integer` is returned. If the `Integer` is greater than the maximum finite float, then the maximum finite float is returned. If it is smaller than the minimum finite float, then negative infinity is returned. * If the rounding mode is `Ceiling`, the smallest float greater than or equal to the `Integer` is returned. If the `Integer` is greater than the maximum finite float, then positive infinity is returned. If it is smaller than the minimum finite float, then the minimum finite float is returned. * If the rounding mode is `Down`, then the rounding proceeds as with `Floor` if the `Integer` is non-negative and as with `Ceiling` if the `Integer` is negative. * If the rounding mode is `Up`, then the rounding proceeds as with `Ceiling` if the `Integer` is non-negative and as with `Floor` if the `Integer` is negative. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Integer` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Integer` is greater than the maximum finite float, then the maximum finite float is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Integer> for f64 #### fn rounding_from(value: &'a Integer, rm: RoundingMode) -> (f64, Ordering) Converts an `Integer` to a primitive float according to a specified `RoundingMode`. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. * If the rounding mode is `Floor` the largest float less than or equal to the `Integer` is returned. If the `Integer` is greater than the maximum finite float, then the maximum finite float is returned. If it is smaller than the minimum finite float, then negative infinity is returned. 
* If the rounding mode is `Ceiling`, the smallest float greater than or equal to the `Integer` is returned. If the `Integer` is greater than the maximum finite float, then positive infinity is returned. If it is smaller than the minimum finite float, then the minimum finite float is returned. * If the rounding mode is `Down`, then the rounding proceeds as with `Floor` if the `Integer` is non-negative and as with `Ceiling` if the `Integer` is negative. * If the rounding mode is `Up`, then the rounding proceeds as with `Ceiling` if the `Integer` is non-negative and as with `Floor` if the `Integer` is negative. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Integer` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Integer` is greater than the maximum finite float, then the maximum finite float is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Rational> for Integer #### fn rounding_from(x: &Rational, rm: RoundingMode) -> (Integer, Ordering) Converts a `Rational` to an `Integer`, using a specified `RoundingMode` and taking the `Rational` by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`. 
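Editorial note on the `Integer`-to-float conversions above, whose examples are only linked externally: the sketch below (illustrative values, not from the library docs) uses the documented `RoundingFrom<&Integer> for f32` impl on a value that `f32` cannot represent exactly.

```
// Hedged sketch: 2^24 + 1 is not exactly representable as an f32, so the
// rounding mode decides which neighboring float is returned.
use malachite_base::num::conversion::traits::RoundingFrom;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

let n = Integer::from(16777217); // 2^24 + 1
let (down, o_down) = f32::rounding_from(&n, RoundingMode::Down);
assert_eq!(down, 16777216.0);
assert_eq!(o_down, Ordering::Less);
let (up, o_up) = f32::rounding_from(&n, RoundingMode::Up);
assert_eq!(up, 16777218.0);
assert_eq!(o_up, Ordering::Greater);
```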
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!( Integer::rounding_from(&Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Integer::rounding_from(&Rational::from(-123), RoundingMode::Exact).to_debug_string(), "(-123, Equal)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(-22, 7), RoundingMode::Floor) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(-22, 7), RoundingMode::Down) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(-22, 7), RoundingMode::Ceiling) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(-22, 7), RoundingMode::Up) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(-22, 7), RoundingMode::Nearest) .to_debug_string(), "(-3, Greater)" ); ``` ### impl RoundingFrom<Rational> for Integer #### fn rounding_from(x: Rational, rm: RoundingMode) -> (Integer, Ordering) Converts a `Rational` to an `Integer`, using a specified `RoundingMode` and taking the `Rational` by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`. 
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!( Integer::rounding_from(Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Integer::rounding_from(Rational::from(-123), RoundingMode::Exact).to_debug_string(), "(-123, Equal)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Floor) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Down) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Ceiling) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Up) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Nearest) .to_debug_string(), "(-3, Greater)" ); ``` ### impl RoundingFrom<f32> for Integer #### fn rounding_from(value: f32, rm: RoundingMode) -> (Integer, Ordering) Converts a primitive float to an `Integer`, using the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. The floating-point value cannot be NaN or infinite. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Panics Panics if `value` is NaN or infinite or if the rounding mode is `Exact` and `value` is not an integer. ##### Examples See here. ### impl RoundingFrom<f64> for Integer #### fn rounding_from(value: f64, rm: RoundingMode) -> (Integer, Ordering) Converts a primitive float to an `Integer`, using the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. The floating-point value cannot be NaN or infinite. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Panics Panics if `value` is NaN or infinite or if the rounding mode is `Exact` and `value` is not an integer. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for Natural #### fn saturating_from(value: &'a Integer) -> Natural Converts an `Integer` to a `Natural`, taking the `Natural` by reference. If the `Integer` is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. 
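The primitive-integer saturating conversions listed below all follow the same clamping rule; a brief editorial sketch with illustrative values (using the documented by-reference `SaturatingFrom<&Integer>` impls):

```
// Hedged sketch: out-of-range Integers clamp to the target type's extremes.
use malachite_base::num::conversion::traits::SaturatingFrom;
use malachite_nz::integer::Integer;

assert_eq!(u8::saturating_from(&Integer::from(1000)), 255);
assert_eq!(u8::saturating_from(&Integer::from(-5)), 0);
assert_eq!(i8::saturating_from(&Integer::from(1000)), 127);
assert_eq!(i8::saturating_from(&Integer::from(-1000)), -128);
```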
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SaturatingFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::saturating_from(&Integer::from(123)), 123); assert_eq!(Natural::saturating_from(&Integer::from(-123)), 0); assert_eq!(Natural::saturating_from(&Integer::from(10u32).pow(12)), 1000000000000u64); assert_eq!(Natural::saturating_from(&-Integer::from(10u32).pow(12)), 0); ``` ### impl<'a> SaturatingFrom<&'a Integer> for i128 #### fn saturating_from(value: &Integer) -> i128 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for i16 #### fn saturating_from(value: &Integer) -> i16 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for i32 #### fn saturating_from(value: &Integer) -> i32 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for i64 #### fn saturating_from(value: &Integer) -> i64 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for i8 #### fn saturating_from(value: &Integer) -> i8 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for isize #### fn saturating_from(value: &Integer) -> isize Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u128 #### fn saturating_from(value: &Integer) -> u128 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u16 #### fn saturating_from(value: &Integer) -> u16 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. 
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u32 #### fn saturating_from(value: &Integer) -> u32 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u64 #### fn saturating_from(value: &Integer) -> u64 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u8 #### fn saturating_from(value: &Integer) -> u8 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for usize #### fn saturating_from(value: &Integer) -> usize Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<Integer> for Natural #### fn saturating_from(value: Integer) -> Natural Converts an `Integer` to a `Natural`, taking the `Natural` by value. If the `Integer` is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SaturatingFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::saturating_from(Integer::from(123)), 123); assert_eq!(Natural::saturating_from(Integer::from(-123)), 0); assert_eq!(Natural::saturating_from(Integer::from(10u32).pow(12)), 1000000000000u64); assert_eq!(Natural::saturating_from(-Integer::from(10u32).pow(12)), 0); ``` ### impl<'a> Shl<i128> for &'a Integer #### fn shl(self, bits: i128) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i128> for Integer #### fn shl(self, bits: i128) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. 
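Since the shift impls only point to external examples, here is a hedged editorial sketch (illustrative values) of the signed and unsigned left-shift behavior described above.

```
// Hedged sketch: a signed shift amount may be negative, in which case the
// value is divided by a power of 2 and floored; unsigned shifts only multiply.
use malachite_nz::integer::Integer;

assert_eq!(Integer::from(3) << 8i128, 768);    // 3 * 2^8
assert_eq!(Integer::from(-5) << -1i128, -3);   // floor(-5 / 2)
assert_eq!(Integer::from(492) << -2i128, 123); // floor(492 / 4)
assert_eq!(Integer::from(3) << 10u32, 3072);   // 3 * 2^10
```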
#### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<i16> for &'a Integer #### fn shl(self, bits: i16) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i16> for Integer #### fn shl(self, bits: i16) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<i32> for &'a Integer #### fn shl(self, bits: i32) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i32> for Integer #### fn shl(self, bits: i32) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<i64> for &'a Integer #### fn shl(self, bits: i64) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i64> for Integer #### fn shl(self, bits: i64) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<i8> for &'a Integer #### fn shl(self, bits: i8) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. 
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i8> for Integer #### fn shl(self, bits: i8) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<isize> for &'a Integer #### fn shl(self, bits: isize) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<isize> for Integer #### fn shl(self, bits: isize) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u128> for &'a Integer #### fn shl(self, bits: u128) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u128> for Integer #### fn shl(self, bits: u128) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u16> for &'a Integer #### fn shl(self, bits: u16) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u16> for Integer #### fn shl(self, bits: u16) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. 
##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u32> for &'a Integer #### fn shl(self, bits: u32) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u32> for Integer #### fn shl(self, bits: u32) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u64> for &'a Integer #### fn shl(self, bits: u64) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u64> for Integer #### fn shl(self, bits: u64) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u8> for &'a Integer #### fn shl(self, bits: u8) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u8> for Integer #### fn shl(self, bits: u8) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<usize> for &'a Integer #### fn shl(self, bits: usize) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<usize> for Integer #### fn shl(self, bits: usize) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. 
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl ShlAssign<i128> for Integer #### fn shl_assign(&mut self, bits: i128) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<i16> for Integer #### fn shl_assign(&mut self, bits: i16) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<i32> for Integer #### fn shl_assign(&mut self, bits: i32) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<i64> for Integer #### fn shl_assign(&mut self, bits: i64) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<i8> for Integer #### fn shl_assign(&mut self, bits: i8) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<isize> for Integer #### fn shl_assign(&mut self, bits: isize) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<u128> for Integer #### fn shl_assign(&mut self, bits: u128) Left-shifts an `Integer` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u16> for Integer #### fn shl_assign(&mut self, bits: u16) Left-shifts an `Integer` (multiplies it by a power of 2), in place. $$ x \gets x2^k. 
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u32> for Integer #### fn shl_assign(&mut self, bits: u32) Left-shifts an `Integer` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u64> for Integer #### fn shl_assign(&mut self, bits: u64) Left-shifts an `Integer` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u8> for Integer #### fn shl_assign(&mut self, bits: u8) Left-shifts an `Integer` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<usize> for Integer #### fn shl_assign(&mut self, bits: usize) Left-shifts an `Integer` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ShlRound<i128> for &'a Integer #### fn shl_round(self, bits: i128, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here.
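The examples linked above are not included in this extract, so here is a minimal sketch of the `<<` and `<<=` behaviour documented in the preceding sections. It assumes these docs describe the arbitrary-precision `Integer` from the malachite crates; the `malachite_nz` import path below is an assumption, not part of the original text.

```rust
// Sketch only: the `malachite_nz` crate path is assumed.
use malachite_nz::integer::Integer;

fn main() {
    // Non-negative shift amounts multiply by a power of 2.
    assert_eq!(Integer::from(3) << 4u32, Integer::from(48));

    // Negative (signed) shift amounts divide by a power of 2 and take the floor.
    assert_eq!(Integer::from(7) << -1i32, Integer::from(3));   // floor(7 / 2)
    assert_eq!(Integer::from(-7) << -1i32, Integer::from(-4)); // floor(-7 / 2)

    // The by-reference impls leave the original value usable afterwards.
    let x = Integer::from(10);
    assert_eq!(&x << 2i16, Integer::from(40));
    assert_eq!(x, Integer::from(10));

    // `<<=` performs the same shifts in place.
    let mut y = Integer::from(-9);
    y <<= -2i32;
    assert_eq!(y, Integer::from(-3)); // floor(-9 / 4)
}
```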
#### type Output = Integer ### impl ShlRound<i128> for Integer #### fn shl_round(self, bits: i128, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShlRound<i16> for &'a Integer #### fn shl_round(self, bits: i16, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. 
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<i16> for Integer #### fn shl_round(self, bits: i16, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShlRound<i32> for &'a Integer #### fn shl_round(self, bits: i32, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. 
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<i32> for Integer #### fn shl_round(self, bits: i32, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShlRound<i64> for &'a Integer #### fn shl_round(self, bits: i64, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. 
If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<i64> for Integer #### fn shl_round(self, bits: i64, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. 
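Since the linked examples are not reproduced here, the following sketch illustrates the `(Integer, Ordering)` return value of `shl_round` described above. It assumes the malachite crates; the `malachite_base` and `malachite_nz` import paths are assumptions, not part of the original text.

```rust
// Sketch only: trait and type paths from the malachite crates are assumed.
use std::cmp::Ordering;

use malachite_base::num::arithmetic::traits::ShlRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;

fn main() {
    // A non-negative shift is always exact, so the Ordering is Equal.
    let (exact, o) = Integer::from(3).shl_round(2i64, RoundingMode::Exact);
    assert_eq!((exact, o), (Integer::from(12), Ordering::Equal));

    // A negative shift divides by a power of 2; the exact quotient here is 7/4,
    // so Floor rounds down (Less) and Ceiling rounds up (Greater).
    let (down, o) = Integer::from(7).shl_round(-2i64, RoundingMode::Floor);
    assert_eq!((down, o), (Integer::from(1), Ordering::Less));

    let (up, o) = (&Integer::from(7)).shl_round(-2i64, RoundingMode::Ceiling);
    assert_eq!((up, o), (Integer::from(2), Ordering::Greater));
}
```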
#### type Output = Integer ### impl<'a> ShlRound<i8> for &'a Integer #### fn shl_round(self, bits: i8, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<i8> for Integer #### fn shl_round(self, bits: i8, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. 
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShlRound<isize> for &'a Integer #### fn shl_round(self, bits: isize, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<isize> for Integer #### fn shl_round(self, bits: isize, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. 
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRoundAssign<i128> for Integer #### fn shl_round_assign(&mut self, bits: i128, rm: RoundingMode) -> Ordering Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlRoundAssign<i16> for Integer #### fn shl_round_assign(&mut self, bits: i16, rm: RoundingMode) -> Ordering Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlRoundAssign<i32> for Integer #### fn shl_round_assign(&mut self, bits: i32, rm: RoundingMode) -> Ordering Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. 
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlRoundAssign<i64> for Integer #### fn shl_round_assign(&mut self, bits: i64, rm: RoundingMode) -> Ordering Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlRoundAssign<i8> for Integer #### fn shl_round_assign(&mut self, bits: i8, rm: RoundingMode) -> Ordering Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlRoundAssign<isize> for Integer #### fn shl_round_assign(&mut self, bits: isize, rm: RoundingMode) -> Ordering Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl<'a> Shr<i128> for &'a Integer #### fn shr(self, bits: i128) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i128> for Integer #### fn shr(self, bits: i128) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<i16> for &'a Integer #### fn shr(self, bits: i16) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i16> for Integer #### fn shr(self, bits: i16) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<i32> for &'a Integer #### fn shr(self, bits: i32) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i32> for Integer #### fn shr(self, bits: i32) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<i64> for &'a Integer #### fn shr(self, bits: i64) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i64> for Integer #### fn shr(self, bits: i64) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
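The linked examples are not part of this extract, so here is a minimal sketch of the signed `>>` behaviour described above: floor division for non-negative shift amounts, multiplication for negative ones. The `malachite_nz` import path is an assumption, not part of the original text.

```rust
// Sketch only: the `malachite_nz` crate path is assumed.
use malachite_nz::integer::Integer;

fn main() {
    // Non-negative shift: floor division by a power of 2.
    assert_eq!(Integer::from(100) >> 3i64, Integer::from(12));   // floor(100 / 8)
    assert_eq!(Integer::from(-100) >> 3i64, Integer::from(-13)); // floor(-100 / 8)

    // Negative shift: multiplication by a power of 2.
    assert_eq!(Integer::from(5) >> -4i64, Integer::from(80));

    // By reference, the original value remains available.
    let x = Integer::from(9);
    assert_eq!(&x >> 1i8, Integer::from(4));
    assert_eq!(x, Integer::from(9));
}
```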
#### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<i8> for &'a Integer #### fn shr(self, bits: i8) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i8> for Integer #### fn shr(self, bits: i8) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<isize> for &'a Integer #### fn shr(self, bits: isize) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<isize> for Integer #### fn shr(self, bits: isize) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u128> for &'a Integer #### fn shr(self, bits: u128) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u128> for Integer #### fn shr(self, bits: u128) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u16> for &'a Integer #### fn shr(self, bits: u16) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. 
##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u16> for Integer #### fn shr(self, bits: u16) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u32> for &'a Integer #### fn shr(self, bits: u32) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u32> for Integer #### fn shr(self, bits: u32) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u64> for &'a Integer #### fn shr(self, bits: u64) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u64> for Integer #### fn shr(self, bits: u64) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u8> for &'a Integer #### fn shr(self, bits: u8) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u8> for Integer #### fn shr(self, bits: u8) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
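As above, the linked examples are not included here; this sketch highlights the floor semantics of the unsigned `>>` impls for negative values. The `malachite_nz` import path is an assumption, not part of the original text.

```rust
// Sketch only: the `malachite_nz` crate path is assumed.
use malachite_nz::integer::Integer;

fn main() {
    // Unsigned right shift is floor division, so negative values round
    // toward negative infinity rather than toward zero.
    assert_eq!(Integer::from(7) >> 1u32, Integer::from(3));   // floor(3.5)
    assert_eq!(Integer::from(-7) >> 1u32, Integer::from(-4)); // floor(-3.5)

    // Shifting far enough leaves -1 for any negative value and 0 for any
    // non-negative value.
    assert_eq!(Integer::from(-1) >> 100u64, Integer::from(-1));
    assert_eq!(Integer::from(1) >> 100u64, Integer::from(0));
}
```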
#### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<usize> for &'a Integer #### fn shr(self, bits: usize) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<usize> for Integer #### fn shr(self, bits: usize) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl ShrAssign<i128> for Integer #### fn shr_assign(&mut self, bits: i128) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i16> for Integer #### fn shr_assign(&mut self, bits: i16) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i32> for Integer #### fn shr_assign(&mut self, bits: i32) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i64> for Integer #### fn shr_assign(&mut self, bits: i64) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i8> for Integer #### fn shr_assign(&mut self, bits: i8) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<isize> for Integer #### fn shr_assign(&mut self, bits: isize) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u128> for Integer #### fn shr_assign(&mut self, bits: u128) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u16> for Integer #### fn shr_assign(&mut self, bits: u16) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u32> for Integer #### fn shr_assign(&mut self, bits: u32) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u64> for Integer #### fn shr_assign(&mut self, bits: u64) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u8> for Integer #### fn shr_assign(&mut self, bits: u8) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<usize> for Integer #### fn shr_assign(&mut self, bits: usize) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl<'a> ShrRound<i128> for &'a Integer #### fn shr_round(self, bits: i128, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. 
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $g(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i128> for Integer #### fn shr_round(self, bits: i128, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $g(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<i16> for &'a Integer #### fn shr_round(self, bits: i16, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode.
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i16> for Integer #### fn shr_round(self, bits: i16, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. 
Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<i32> for &'a Integer #### fn shr_round(self, bits: i32, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i32> for Integer #### fn shr_round(self, bits: i32, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. 
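As an illustrative sketch of how the rounding modes described here differ on a negative quotient (again assuming the usual malachite import paths; this is not taken from the crate's own examples), consider $-5 / 2 = -2.5$:

```
use malachite_base::num::arithmetic::traits::ShrRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

// Down truncates toward zero, Floor rounds toward negative infinity,
// and Nearest breaks the tie at -2.5 by choosing the even neighbor, -2.
assert_eq!(Integer::from(-5).shr_round(1i32, RoundingMode::Down), (Integer::from(-2), Ordering::Greater));
assert_eq!(Integer::from(-5).shr_round(1i32, RoundingMode::Floor), (Integer::from(-3), Ordering::Less));
assert_eq!(Integer::from(-5).shr_round(1i32, RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater));
```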
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<i64> for &'a Integer #### fn shr_round(self, bits: i64, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i64> for Integer #### fn shr_round(self, bits: i64, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<i8> for &'a Integer #### fn shr_round(self, bits: i8, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. 
Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i8> for Integer #### fn shr_round(self, bits: i8, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<isize> for &'a Integer #### fn shr_round(self, bits: isize, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. 
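The exactness check mentioned above can be sketched as follows; the `DivisibleByPowerOf2` trait import and the `RoundingMode` path are assumptions of this sketch rather than something shown elsewhere in this extract:

```
use malachite_base::num::arithmetic::traits::{DivisibleByPowerOf2, ShrRound};
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

// 96 = 3 * 2^5, so an exact shift by 5 is allowed and reports Ordering::Equal.
assert!(Integer::from(96).divisible_by_power_of_2(5));
assert_eq!(
    (&Integer::from(96)).shr_round(5isize, RoundingMode::Exact),
    (Integer::from(3), Ordering::Equal)
);
// 96 is not divisible by 2^6, so Exact would panic; Ceiling rounds 1.5 up to 2 instead.
assert!(!Integer::from(96).divisible_by_power_of_2(6));
assert_eq!(
    (&Integer::from(96)).shr_round(6isize, RoundingMode::Ceiling),
    (Integer::from(2), Ordering::Greater)
);
```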
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<isize> for Integer #### fn shr_round(self, bits: isize, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u128> for &'a Integer #### fn shr_round(self, bits: u128, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u128> for Integer #### fn shr_round(self, bits: u128, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u16> for &'a Integer #### fn shr_round(self, bits: u16, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u16> for Integer #### fn shr_round(self, bits: u16, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u32> for &'a Integer #### fn shr_round(self, bits: u32, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u32> for Integer #### fn shr_round(self, bits: u32, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u64> for &'a Integer #### fn shr_round(self, bits: u64, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u64> for Integer #### fn shr_round(self, bits: u64, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u8> for &'a Integer #### fn shr_round(self, bits: u8, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u8> for Integer #### fn shr_round(self, bits: u8, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<usize> for &'a Integer #### fn shr_round(self, bits: usize, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<usize> for Integer #### fn shr_round(self, bits: usize, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRoundAssign<i128> for Integer #### fn shr_round_assign(&mut self, bits: i128, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. 
An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i16> for Integer #### fn shr_round_assign(&mut self, bits: i16, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i32> for Integer #### fn shr_round_assign(&mut self, bits: i32, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i64> for Integer #### fn shr_round_assign(&mut self, bits: i64, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. 
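A brief illustrative sketch of the in-place variant and its returned `Ordering` (assuming the usual malachite import paths; not one of the crate's own doc tests):

```
use malachite_base::num::arithmetic::traits::ShrRoundAssign;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

let mut x = Integer::from(1000);
// 1000 / 2^3 = 125 exactly, so the assignment is exact and Ordering::Equal is returned.
assert_eq!(x.shr_round_assign(3i64, RoundingMode::Floor), Ordering::Equal);
assert_eq!(x, 125);
// 125 / 2 = 62.5; Nearest resolves the tie to the even neighbor 62, which is below the exact value.
assert_eq!(x.shr_round_assign(1i64, RoundingMode::Nearest), Ordering::Less);
assert_eq!(x, 62);
```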
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i8> for Integer #### fn shr_round_assign(&mut self, bits: i8, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<isize> for Integer #### fn shr_round_assign(&mut self, bits: isize, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u128> for Integer #### fn shr_round_assign(&mut self, bits: u128, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u16> for Integer #### fn shr_round_assign(&mut self, bits: u16, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`.
To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u32> for Integer #### fn shr_round_assign(&mut self, bits: u32, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u64> for Integer #### fn shr_round_assign(&mut self, bits: u64, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u8> for Integer #### fn shr_round_assign(&mut self, bits: u8, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<usize> for Integer #### fn shr_round_assign(&mut self, bits: usize, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value.
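For the unsigned shift amounts, a minimal sketch under the same import-path assumptions as the earlier snippets:

```
use malachite_base::num::arithmetic::traits::ShrRoundAssign;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

let mut x = Integer::from(-7);
// -7 / 2 = -3.5; Ceiling rounds up to -3, which is greater than the exact quotient.
assert_eq!(x.shr_round_assign(1usize, RoundingMode::Ceiling), Ordering::Greater);
assert_eq!(x, -3);
```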
See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl Sign for Integer #### fn sign(&self) -> Ordering Compares an `Integer` to zero. Returns `Greater`, `Equal`, or `Less`, depending on whether the `Integer` is positive, zero, or negative, respectively. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Sign; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!(Integer::ZERO.sign(), Ordering::Equal); assert_eq!(Integer::from(123).sign(), Ordering::Greater); assert_eq!(Integer::from(-123).sign(), Ordering::Less); ``` ### impl<'a> SignificantBits for &'a Integer #### fn significant_bits(self) -> u64 Returns the number of significant bits of an `Integer`’s absolute value. $$ f(n) = \begin{cases} 0 & \text{if} \quad n = 0, \\ \lfloor \log_2 |n| \rfloor + 1 & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::logic::traits::SignificantBits; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.significant_bits(), 0); assert_eq!(Integer::from(100).significant_bits(), 7); assert_eq!(Integer::from(-100).significant_bits(), 7); ``` ### impl<'a> Square for &'a Integer #### fn square(self) -> Integer Squares an `Integer`, taking it by reference. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!((&Integer::ZERO).square(), 0); assert_eq!((&Integer::from(123)).square(), 15129); assert_eq!((&Integer::from(-123)).square(), 15129); ``` #### type Output = Integer ### impl Square for Integer #### fn square(self) -> Integer Squares an `Integer`, taking it by value. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.square(), 0); assert_eq!(Integer::from(123).square(), 15129); assert_eq!(Integer::from(-123).square(), 15129); ``` #### type Output = Integer ### impl SquareAssign for Integer #### fn square_assign(&mut self) Squares an `Integer` in place. $$ x \gets x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::SquareAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.square_assign(); assert_eq!(x, 0); let mut x = Integer::from(123); x.square_assign(); assert_eq!(x, 15129); let mut x = Integer::from(-123); x.square_assign(); assert_eq!(x, 15129); ``` ### impl<'a, 'b> Sub<&'a Integer> for &'b Integer #### fn sub(self, other: &'a Integer) -> Integer Subtracts an `Integer` by another `Integer`, taking both by reference. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(&Integer::ZERO - &Integer::from(123), -123); assert_eq!(&Integer::from(123) - &Integer::ZERO, 123); assert_eq!(&Integer::from(456) - &Integer::from(-123), 579); assert_eq!( &-Integer::from(10u32).pow(12) - &(-Integer::from(10u32).pow(12) * Integer::from(2u32)), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl<'a> Sub<&'a Integer> for Integer #### fn sub(self, other: &'a Integer) -> Integer Subtracts an `Integer` by another `Integer`, taking the first by value and the second by reference. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO - &Integer::from(123), -123); assert_eq!(Integer::from(123) - &Integer::ZERO, 123); assert_eq!(Integer::from(456) - &Integer::from(-123), 579); assert_eq!( -Integer::from(10u32).pow(12) - &(-Integer::from(10u32).pow(12) * Integer::from(2u32)), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl<'a> Sub<Integer> for &'a Integer #### fn sub(self, other: Integer) -> Integer Subtracts an `Integer` by another `Integer`, taking the first by reference and the second by value. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(&Integer::ZERO - Integer::from(123), -123); assert_eq!(&Integer::from(123) - Integer::ZERO, 123); assert_eq!(&Integer::from(456) - Integer::from(-123), 579); assert_eq!( &-Integer::from(10u32).pow(12) - -Integer::from(10u32).pow(12) * Integer::from(2u32), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl Sub<Integer> for Integer #### fn sub(self, other: Integer) -> Integer Subtracts an `Integer` by another `Integer`, taking both by value. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO - Integer::from(123), -123); assert_eq!(Integer::from(123) - Integer::ZERO, 123); assert_eq!(Integer::from(456) - Integer::from(-123), 579); assert_eq!( -Integer::from(10u32).pow(12) - -Integer::from(10u32).pow(12) * Integer::from(2u32), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl<'a> SubAssign<&'a Integer> for Integer #### fn sub_assign(&mut self, other: &'a Integer) Subtracts an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x -= &(-Integer::from(10u32).pow(12)); x -= &(Integer::from(10u32).pow(12) * Integer::from(2u32)); x -= &(-Integer::from(10u32).pow(12) * Integer::from(3u32)); x -= &(Integer::from(10u32).pow(12) * Integer::from(4u32)); assert_eq!(x, -2000000000000i64); ``` ### impl SubAssign<Integer> for Integer #### fn sub_assign(&mut self, other: Integer) Subtracts an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x -= -Integer::from(10u32).pow(12); x -= Integer::from(10u32).pow(12) * Integer::from(2u32); x -= -Integer::from(10u32).pow(12) * Integer::from(3u32); x -= Integer::from(10u32).pow(12) * Integer::from(4u32); assert_eq!(x, -2000000000000i64); ``` ### impl<'a> SubMul<&'a Integer, Integer> for Integer #### fn sub_mul(self, y: &'a Integer, z: Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking the first and third by value and the second by reference. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).sub_mul(&Integer::from(3u32), Integer::from(-4)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .sub_mul(&Integer::from(-0x10000), -Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a, 'b, 'c> SubMul<&'a Integer, &'b Integer> for &'c Integer #### fn sub_mul(self, y: &'a Integer, z: &'b Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking all three by reference. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10u32)).sub_mul(&Integer::from(3u32), &Integer::from(-4)), 22); assert_eq!( (&-Integer::from(10u32).pow(12)) .sub_mul(&Integer::from(-0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a, 'b> SubMul<&'a Integer, &'b Integer> for Integer #### fn sub_mul(self, y: &'a Integer, z: &'b Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking the first by value and the second and third by reference. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).sub_mul(&Integer::from(3u32), &Integer::from(-4)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .sub_mul(&Integer::from(-0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a> SubMul<Integer, &'a Integer> for Integer #### fn sub_mul(self, y: Integer, z: &'a Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking the first two by value and the third by reference. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).sub_mul(Integer::from(3u32), &Integer::from(-4)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .sub_mul(Integer::from(-0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl SubMul<Integer, Integer> for Integer #### fn sub_mul(self, y: Integer, z: Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking all three by value. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).sub_mul(Integer::from(3u32), Integer::from(-4)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .sub_mul(Integer::from(-0x10000), -Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a> SubMulAssign<&'a Integer, Integer> for Integer #### fn sub_mul_assign(&mut self, y: &'a Integer, z: Integer) Subtracts the product of two other `Integer`s from an `Integer` in place, taking the first `Integer` on the right-hand side by reference and the second by value. $x \gets x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.sub_mul_assign(&Integer::from(3u32), Integer::from(-4)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.sub_mul_assign(&Integer::from(-0x10000), -Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a, 'b> SubMulAssign<&'a Integer, &'b Integer> for Integer #### fn sub_mul_assign(&mut self, y: &'a Integer, z: &'b Integer) Subtracts the product of two other `Integer`s from an `Integer` in place, taking both `Integer`s on the right-hand side by reference. $x \gets x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.sub_mul_assign(&Integer::from(3u32), &Integer::from(-4)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.sub_mul_assign(&Integer::from(-0x10000), &(-Integer::from(10u32).pow(12))); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a> SubMulAssign<Integer, &'a Integer> for Integer #### fn sub_mul_assign(&mut self, y: Integer, z: &'a Integer) Subtracts the product of two other `Integer`s from an `Integer` in place, taking the first `Integer` on the right-hand side by value and the second by reference. $x \gets x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.sub_mul_assign(Integer::from(3u32), &Integer::from(-4)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.sub_mul_assign(Integer::from(-0x10000), &(-Integer::from(10u32).pow(12))); assert_eq!(x, -65537000000000000i64); ``` ### impl SubMulAssign<Integer, Integer> for Integer #### fn sub_mul_assign(&mut self, y: Integer, z: Integer) Subtracts the product of two other `Integer`s from an `Integer` in place, taking both `Integer`s on the right-hand side by value. $x \gets x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.sub_mul_assign(Integer::from(3u32), Integer::from(-4)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.sub_mul_assign(Integer::from(-0x10000), -Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a> Sum<&'a Integer> for Integer #### fn sum<I>(xs: I) -> Integerwhere I: Iterator<Item = &'a Integer>, Adds up all the `Integer`s in an iterator of `Integer` references. $$ f((x_i)_ {i=0}^{n-1}) = \sum_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Integer::sum(xs.map(Integer::significant_bits))`. 
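Since these are ordinary `Sum` implementations, the standard `Iterator::sum` adaptor can collect directly into an `Integer`; a minimal sketch of the by-value case:

```
use malachite_nz::integer::Integer;

// Sum the integers 1 through 100 into an arbitrary-precision total.
let total: Integer = (1i32..=100).map(Integer::from).sum();
assert_eq!(total, 5050);
```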
##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use std::iter::Sum; assert_eq!(Integer::sum(vec_from_str::<Integer>("[2, -3, 5, 7]").unwrap().iter()), 11); ``` ### impl Sum<Integer> for Integer #### fn sum<I>(xs: I) -> Integerwhere I: Iterator<Item = Integer>, Adds up all the `Integer`s in an iterator. $$ f((x_i)_ {i=0}^{n-1}) = \sum_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Integer::sum(xs.map(Integer::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use std::iter::Sum; assert_eq!( Integer::sum(vec_from_str::<Integer>("[2, -3, 5, 7]").unwrap().into_iter()), 11 ); ``` ### impl ToSci for Integer #### fn fmt_sci_valid(&self, options: ToSciOptions) -> bool Determines whether an `Integer` can be converted to a string using `to_sci` and a particular set of options. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; let mut options = ToSciOptions::default(); assert!(Integer::from(123).fmt_sci_valid(options)); assert!(Integer::from(u128::MAX).fmt_sci_valid(options)); // u128::MAX has more than 16 significant digits options.set_rounding_mode(RoundingMode::Exact); assert!(!Integer::from(u128::MAX).fmt_sci_valid(options)); options.set_precision(50); assert!(Integer::from(u128::MAX).fmt_sci_valid(options)); ``` #### fn fmt_sci( &self, f: &mut Formatter<'_>, options: ToSciOptions ) -> Result<(), ErrorConverts an `Integer` to a string using a specified base, possibly formatting the number using scientific notation. See `ToSciOptions` for details on the available options. Note that setting `neg_exp_threshold` has no effect, since there is never a need to use negative exponents when representing an `Integer`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `options.rounding_mode` is `Exact`, but the size options are such that the input must be rounded. 
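To avoid the panic described above, `fmt_sci_valid` can be consulted before formatting with an `Exact` rounding mode; a minimal sketch:

```
use malachite_base::num::conversion::string::options::ToSciOptions;
use malachite_base::num::conversion::traits::ToSci;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;

let n = Integer::from(123456);
let mut options = ToSciOptions::default();
options.set_precision(3);
options.set_rounding_mode(RoundingMode::Exact);
// 123456 cannot be written exactly with 3 significant digits, so check first
// and fall back instead of panicking when the result is formatted.
if n.fmt_sci_valid(options) {
    println!("{}", n.to_sci_with_options(options));
} else {
    println!("{} is not exactly representable with these options", n);
}
```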
##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; assert_eq!(Integer::from(u128::MAX).to_sci().to_string(), "3.402823669209385e38"); assert_eq!(Integer::from(i128::MIN).to_sci().to_string(), "-1.701411834604692e38"); let n = Integer::from(123456u32); let mut options = ToSciOptions::default(); assert_eq!(n.to_sci_with_options(options).to_string(), "123456"); options.set_precision(3); assert_eq!(n.to_sci_with_options(options).to_string(), "1.23e5"); options.set_rounding_mode(RoundingMode::Ceiling); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24e5"); options.set_e_uppercase(); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24E5"); options.set_force_exponent_plus_sign(true); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24E+5"); options = ToSciOptions::default(); options.set_base(36); assert_eq!(n.to_sci_with_options(options).to_string(), "2n9c"); options.set_uppercase(); assert_eq!(n.to_sci_with_options(options).to_string(), "2N9C"); options.set_base(2); options.set_precision(10); assert_eq!(n.to_sci_with_options(options).to_string(), "1.1110001e16"); options.set_include_trailing_zeros(true); assert_eq!(n.to_sci_with_options(options).to_string(), "1.111000100e16"); ``` #### fn to_sci_with_options(&self, options: ToSciOptions) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation.#### fn to_sci(&self) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation, using the default `ToSciOptions`.### impl ToStringBase for Integer #### fn to_string_base(&self, base: u8) -> String Converts an `Integer` to a `String` using a specified base. Digits from 0 to 9 become `char`s from `'0'` to `'9'`. Digits from 10 to 35 become the lowercase `char`s `'a'` to `'z'`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2 or greater than 36. ##### Examples ``` use malachite_base::num::conversion::traits::ToStringBase; use malachite_nz::integer::Integer; assert_eq!(Integer::from(1000).to_string_base(2), "1111101000"); assert_eq!(Integer::from(1000).to_string_base(10), "1000"); assert_eq!(Integer::from(1000).to_string_base(36), "rs"); assert_eq!(Integer::from(-1000).to_string_base(2), "-1111101000"); assert_eq!(Integer::from(-1000).to_string_base(10), "-1000"); assert_eq!(Integer::from(-1000).to_string_base(36), "-rs"); ``` #### fn to_string_base_upper(&self, base: u8) -> String Converts an `Integer` to a `String` using a specified base. Digits from 0 to 9 become `char`s from `'0'` to `'9'`. Digits from 10 to 35 become the uppercase `char`s `'A'` to `'Z'`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2 or greater than 36. 
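For base 16 the result agrees with the `UpperHex` formatting implementation shown later in this section, while bases without a standard formatter are also supported; a minimal sketch:

```
use malachite_base::num::conversion::traits::ToStringBase;
use malachite_nz::integer::Integer;

let n = Integer::from(-1000);
// Base 16 matches the UpperHex output (no "0x" prefix without the # flag).
assert_eq!(n.to_string_base_upper(16), format!("{:X}", n));
// Bases with no std formatter, such as 7, are also available (2..=36).
assert_eq!(n.to_string_base(7), "-2626");
```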
##### Examples ``` use malachite_base::num::conversion::traits::ToStringBase; use malachite_nz::integer::Integer; assert_eq!(Integer::from(1000).to_string_base_upper(2), "1111101000"); assert_eq!(Integer::from(1000).to_string_base_upper(10), "1000"); assert_eq!(Integer::from(1000).to_string_base_upper(36), "RS"); assert_eq!(Integer::from(-1000).to_string_base_upper(2), "-1111101000"); assert_eq!(Integer::from(-1000).to_string_base_upper(10), "-1000"); assert_eq!(Integer::from(-1000).to_string_base_upper(36), "-RS"); ``` ### impl<'a> TryFrom<&'a Integer> for Natural #### fn try_from( value: &'a Integer ) -> Result<Natural, <Natural as TryFrom<&'a Integer>>::ErrorConverts an `Integer` to a `Natural`, taking the `Natural` by reference. If the `Integer` is negative, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::try_from(&Integer::from(123)).to_debug_string(), "Ok(123)"); assert_eq!( Natural::try_from(&Integer::from(-123)).to_debug_string(), "Err(NaturalFromIntegerError)" ); assert_eq!( Natural::try_from(&Integer::from(10u32).pow(12)).to_debug_string(), "Ok(1000000000000)" ); assert_eq!( Natural::try_from(&(-Integer::from(10u32).pow(12))).to_debug_string(), "Err(NaturalFromIntegerError)" ); ``` #### type Error = NaturalFromIntegerError The type returned in the event of a conversion error.### impl<'a> TryFrom<&'a Rational> for Integer #### fn try_from( x: &Rational ) -> Result<Integer, <Integer as TryFrom<&'a Rational>>::ErrorConverts a `Rational` to an `Integer`, taking the `Rational` by reference. If the `Rational` is not an integer, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::conversion::integer_from_rational::IntegerFromRationalError; use malachite_q::Rational; assert_eq!(Integer::try_from(&Rational::from(123)).unwrap(), 123); assert_eq!(Integer::try_from(&Rational::from(-123)).unwrap(), -123); assert_eq!( Integer::try_from(&Rational::from_signeds(22, 7)), Err(IntegerFromRationalError) ); ``` #### type Error = IntegerFromRationalError The type returned in the event of a conversion error.### impl TryFrom<Integer> for Natural #### fn try_from( value: Integer ) -> Result<Natural, <Natural as TryFrom<Integer>>::ErrorConverts an `Integer` to a `Natural`, taking the `Natural` by value. If the `Integer` is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::try_from(Integer::from(123)).to_debug_string(), "Ok(123)"); assert_eq!( Natural::try_from(Integer::from(-123)).to_debug_string(), "Err(NaturalFromIntegerError)" ); assert_eq!( Natural::try_from(Integer::from(10u32).pow(12)).to_debug_string(), "Ok(1000000000000)" ); assert_eq!( Natural::try_from(-Integer::from(10u32).pow(12)).to_debug_string(), "Err(NaturalFromIntegerError)" ); ``` #### type Error = NaturalFromIntegerError The type returned in the event of a conversion error.### impl TryFrom<Rational> for Integer #### fn try_from( x: Rational ) -> Result<Integer, <Integer as TryFrom<Rational>>::ErrorConverts a `Rational` to an `Integer`, taking the `Rational` by value. If the `Rational` is not an integer, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::conversion::integer_from_rational::IntegerFromRationalError; use malachite_q::Rational; assert_eq!(Integer::try_from(Rational::from(123)).unwrap(), 123); assert_eq!(Integer::try_from(Rational::from(-123)).unwrap(), -123); assert_eq!( Integer::try_from(Rational::from_signeds(22, 7)), Err(IntegerFromRationalError) ); ``` #### type Error = IntegerFromRationalError The type returned in the event of a conversion error.### impl TryFrom<SerdeInteger> for Integer #### type Error = String The type returned in the event of a conversion error.#### fn try_from(s: SerdeInteger) -> Result<Integer, StringPerforms the conversion.### impl TryFrom<f32> for Integer #### fn try_from(value: f32) -> Result<Integer, <Integer as TryFrom<f32>>::ErrorConverts a primitive float to an `Integer`. If the input isn’t exactly equal to some `Integer`, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Examples See here. #### type Error = SignedFromFloatError The type returned in the event of a conversion error.### impl TryFrom<f64> for Integer #### fn try_from(value: f64) -> Result<Integer, <Integer as TryFrom<f64>>::ErrorConverts a primitive float to an `Integer`. If the input isn’t exactly equal to some `Integer`, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Examples See here. #### type Error = SignedFromFloatError The type returned in the event of a conversion error.### impl Two for Integer The constant 2. #### const TWO: Integer = _ ### impl<'a> UnsignedAbs for &'a Integer #### fn unsigned_abs(self) -> Natural Takes the absolute value of an `Integer`, taking the `Integer` by reference and converting the result to a `Natural`. $$ f(x) = |x|. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::UnsignedAbs; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!((&Integer::ZERO).unsigned_abs(), 0); assert_eq!((&Integer::from(123)).unsigned_abs(), 123); assert_eq!((&Integer::from(-123)).unsigned_abs(), 123); ``` #### type Output = Natural ### impl UnsignedAbs for Integer #### fn unsigned_abs(self) -> Natural Takes the absolute value of an `Integer`, taking the `Integer` by value and converting the result to a `Natural`. $$ f(x) = |x|. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::UnsignedAbs; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.unsigned_abs(), 0); assert_eq!(Integer::from(123).unsigned_abs(), 123); assert_eq!(Integer::from(-123).unsigned_abs(), 123); ``` #### type Output = Natural ### impl UpperHex for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts an `Integer` to a hexadecimal `String` using uppercase characters. Using the `#` format flag prepends `"0x"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToUpperHexString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_upper_hex_string(), "0"); assert_eq!(Integer::from(123).to_upper_hex_string(), "7B"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_upper_hex_string(), "E8D4A51000" ); assert_eq!(format!("{:07X}", Integer::from(123)), "000007B"); assert_eq!(Integer::from(-123).to_upper_hex_string(), "-7B"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_upper_hex_string(), "-E8D4A51000" ); assert_eq!(format!("{:07X}", Integer::from(-123)), "-00007B"); assert_eq!(format!("{:#X}", Integer::ZERO), "0x0"); assert_eq!(format!("{:#X}", Integer::from(123)), "0x7B"); assert_eq!( format!("{:#X}", Integer::from_str("1000000000000").unwrap()), "0xE8D4A51000" ); assert_eq!(format!("{:#07X}", Integer::from(123)), "0x0007B"); assert_eq!(format!("{:#X}", Integer::from(-123)), "-0x7B"); assert_eq!( format!("{:#X}", Integer::from_str("-1000000000000").unwrap()), "-0xE8D4A51000" ); assert_eq!(format!("{:#07X}", Integer::from(-123)), "-0x007B"); ``` ### impl<'a> WrappingFrom<&'a Integer> for i128 #### fn wrapping_from(value: &Integer) -> i128 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for i16 #### fn wrapping_from(value: &Integer) -> i16 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for i32 #### fn wrapping_from(value: &Integer) -> i32 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
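A minimal self-contained sketch of these wrapping conversions, which reduce the value modulo $2^W$:

```
use malachite_base::num::conversion::traits::WrappingFrom;
use malachite_nz::integer::Integer;

assert_eq!(u8::wrapping_from(&Integer::from(123)), 123);
// 260 mod 256 = 4
assert_eq!(u8::wrapping_from(&Integer::from(260)), 4);
// -1 wraps to 2^8 - 1
assert_eq!(u8::wrapping_from(&Integer::from(-1)), 255);
// 200 wraps past i8::MAX to 200 - 256 = -56
assert_eq!(i8::wrapping_from(&Integer::from(200)), -56);
```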
### impl<'a> WrappingFrom<&'a Integer> for i64 #### fn wrapping_from(value: &Integer) -> i64 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for i8 #### fn wrapping_from(value: &Integer) -> i8 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for isize #### fn wrapping_from(value: &Integer) -> isize Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u128 #### fn wrapping_from(value: &Integer) -> u128 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u16 #### fn wrapping_from(value: &Integer) -> u16 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u32 #### fn wrapping_from(value: &Integer) -> u32 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u64 #### fn wrapping_from(value: &Integer) -> u64 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u8 #### fn wrapping_from(value: &Integer) -> u8 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for usize #### fn wrapping_from(value: &Integer) -> usize Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl Zero for Integer The constant 0. #### const ZERO: Integer = _ ### impl Eq for Integer ### impl StructuralEq for Integer ### impl StructuralPartialEq for Integer Auto Trait Implementations --- ### impl RefUnwindSafe for Integer ### impl Send for Integer ### impl Sync for Integer ### impl Unpin for Integer ### impl UnwindSafe for Integer Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. 
U: TryFrom<T>, #### fn exact_from(value: T) -> U ### impl<T, U> ExactInto<U> for Twhere U: ExactFrom<T>, #### fn exact_into(self) -> U ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> OverflowingInto<U> for Twhere U: OverflowingFrom<T>, #### fn overflowing_into(self) -> (U, bool) ### impl<T, U> RoundingInto<U> for Twhere U: RoundingFrom<T>, #### fn rounding_into(self, rm: RoundingMode) -> (U, Ordering) ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> SaturatingInto<U> for Twhere U: SaturatingFrom<T>, #### fn saturating_into(self) -> U ### impl<T> ToBinaryString for Twhere T: Binary, #### fn to_binary_string(&self) -> String Returns the `String` produced by `T`s `Binary` implementation. ##### Examples ``` use malachite_base::strings::ToBinaryString; assert_eq!(5u64.to_binary_string(), "101"); assert_eq!((-100i16).to_binary_string(), "1111111110011100"); ``` ### impl<T> ToDebugString for Twhere T: Debug, #### fn to_debug_string(&self) -> String Returns the `String` produced by `T`s `Debug` implementation. ##### Examples ``` use malachite_base::strings::ToDebugString; assert_eq!([1, 2, 3].to_debug_string(), "[1, 2, 3]"); assert_eq!( [vec![2, 3], vec![], vec![4]].to_debug_string(), "[[2, 3], [], [4]]" ); assert_eq!(Some(5).to_debug_string(), "Some(5)"); ``` ### impl<T> ToLowerHexString for Twhere T: LowerHex, #### fn to_lower_hex_string(&self) -> String Returns the `String` produced by `T`s `LowerHex` implementation. ##### Examples ``` use malachite_base::strings::ToLowerHexString; assert_eq!(50u64.to_lower_hex_string(), "32"); assert_eq!((-100i16).to_lower_hex_string(), "ff9c"); ``` ### impl<T> ToOctalString for Twhere T: Octal, #### fn to_octal_string(&self) -> String Returns the `String` produced by `T`s `Octal` implementation. ##### Examples ``` use malachite_base::strings::ToOctalString; assert_eq!(50u64.to_octal_string(), "62"); assert_eq!((-100i16).to_octal_string(), "177634"); ``` ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. T: UpperHex, #### fn to_upper_hex_string(&self) -> String Returns the `String` produced by `T`s `UpperHex` implementation. 
##### Examples ``` use malachite_base::strings::ToUpperHexString; assert_eq!(50u64.to_upper_hex_string(), "32"); assert_eq!((-100i16).to_upper_hex_string(), "FF9C"); ``` ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T, U> WrappingInto<U> for Twhere U: WrappingFrom<T>, #### fn wrapping_into(self) -> U Module malachite::iterators === Functions and adaptors for iterators. Modules --- * bit_distributorContains `BitDistributor`, which helps generate tuples exhaustively. * comparisonFunctions that compare adjacent iterator elements. * iterator_cacheContains `IteratorCache`, which remembers values produced by an iterator. Structs --- * IterWindowsGenerates sliding windows of elements from an iterator. * NonzeroValuesGenerates all the nonzero values of a provided iterator. * WithSpecialValueAn iterator that randomly produces another iterator’s values, or produces a special value. 
* WithSpecialValuesAn iterator that randomly produces another iterator’s values, or samples from a `Vec` of special values. Functions --- * count_is_at_leastReturns whether an iterator returns at least some number of values. * count_is_at_mostReturns whether an iterator returns at most some number of values. * first_and_lastReturns the first and last elements of an iterator, or `None` if it is empty. * is_constantReturns whether all of the values generated by an iterator are equal. * is_uniqueReturns whether an iterator never returns the same value twice. * iter_windowsReturns windows of $n$ adjacent elements of an iterator, advancing the window by 1 in each iteration. The values are cloned each time a new window is generated. * matching_intervals_in_iteratorGroups elements of an iterator into intervals of adjacent elements that match a predicate. The endpoints of each interval are returned. * nonzero_valuesReturns an iterator that generates all the nonzero values of a provided iterator. * prefix_to_stringConverts a prefix of an iterator to a string. * with_special_valueAn iterator that randomly produces another iterator’s values, or produces a special value. * with_special_valuesAn iterator that randomly produces another iterator’s values, or produces a random special value from a `Vec`. Module malachite::named === The `Named` trait, for getting a type’s name. Traits --- * NamedDefines the name of a type. This is useful for constructing error messages in a generic function. Trait malachite::named::Named === ``` pub trait Named { const NAME: &'static str; } ``` Defines the name of a type. This is useful for constructing error messages in a generic function. Required Associated Constants --- #### const NAME: &'static str The name of `Self`. Implementations on Foreign Types --- ### impl Named for u128 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for isize #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for usize #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for i8 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for bool #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for i64 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for f64 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for u16 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for i32 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for u32 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. 
### impl Named for f32 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for i16 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for u64 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for String #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for u8 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for char #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Named for i128 #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. Implementors --- ### impl Named for RoundingMode #### const NAME: &'static str = _ ### impl Named for Integer #### const NAME: &'static str = _ ### impl Named for Natural #### const NAME: &'static str = _ ### impl Named for Rational #### const NAME: &'static str = _ Module malachite::natural === `Natural`, a type representing arbitrarily large non-negative integers. Modules --- * arithmeticTraits for arithmetic. * comparisonTraits for comparing `Natural`s for equality or order. * conversionTraits for converting to and from `Natural`s, converting to and from strings, and extracting digits. * exhaustiveIterators that generate `Natural`s without repetition. * factorizationTraits for generating primes, primality testing, and factorization (TODO!) * logicTraits for logic and bit manipulation. * randomIterators that generate `Natural`s randomly. Structs --- * NaturalA natural (non-negative) integer. Struct malachite::natural::Natural === ``` pub struct Natural(/* private fields */); ``` A natural (non-negative) integer. Any `Natural` small enough to fit into a `Limb` is represented inline. Only `Natural`s outside this range incur the costs of heap-allocation. Here’s a diagram of a slice of `Natural`s (using 32-bit limbs) containing the first 8 values of Sylvester’s sequence: *[Figure: Natural memory layout]* Implementations --- ### impl Natural #### pub fn approx_log(&self) -> f64 Calculates the approximate natural logarithm of a nonzero `Natural`. $f(x) = (1+\epsilon)(\log x)$, where $|\epsilon| < 2^{-52}.$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::float::NiceFloat; use malachite_nz::natural::Natural; assert_eq!(NiceFloat(Natural::from(10u32).approx_log()), NiceFloat(2.3025850929940455)); assert_eq!( NiceFloat(Natural::from(10u32).pow(10000).approx_log()), NiceFloat(23025.850929940454) ); ``` This is equivalent to `fmpz_dlog` from `fmpz/dlog.c`, FLINT 2.7.1. ### impl Natural #### pub fn cmp_normalized(&self, other: &Natural) -> Ordering Returns a result of a comparison between two `Natural`s as if each had been multiplied by some power of 2 to bring it into the interval $[1, 2)$. That is, the comparison is equivalent to a comparison between $f(x)$ and $f(y)$, where $$ f(n) = n2^{-\lfloor\log_2 n \rfloor}. $$ The multiplication is not actually performed. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if either argument is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::cmp::Ordering; // 1 == 1.0 * 2^0, 4 == 1.0 * 2^2 // 1.0 == 1.0 assert_eq!(Natural::from(1u32).cmp_normalized(&Natural::from(4u32)), Ordering::Equal); // 5 == 1.25 * 2^2, 6 == 1.5 * 2^2 // 1.25 < 1.5 assert_eq!(Natural::from(5u32).cmp_normalized(&Natural::from(6u32)), Ordering::Less); // 3 == 1.5 * 2^1, 17 == 1.0625 * 2^4 // 1.5 > 1.0625 assert_eq!(Natural::from(3u32).cmp_normalized(&Natural::from(17u32)), Ordering::Greater); // 9 == 1.125 * 2^3, 36 == 1.125 * 2^5 // 1.125 == 1.125 assert_eq!(Natural::from(9u32).cmp_normalized(&Natural::from(36u32)), Ordering::Equal); ``` ### impl Natural #### pub fn from_limbs_asc(xs: &[u64]) -> Natural Converts a slice of limbs to a `Natural`. The limbs are in ascending order, so that less-significant limbs have lower indices in the input slice. This function borrows the limbs. If taking ownership of limbs is possible, `from_owned_limbs_asc` is more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. This function is more efficient than `from_limbs_desc`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::from_limbs_asc(&[]), 0); assert_eq!(Natural::from_limbs_asc(&[123]), 123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from_limbs_asc(&[3567587328, 232]), 1000000000000u64); } ``` #### pub fn from_limbs_desc(xs: &[u64]) -> Natural Converts a slice of limbs to a `Natural`. The limbs are in descending order, so that less-significant limbs have higher indices in the input slice. This function borrows the limbs. If taking ownership of the limbs is possible, `from_owned_limbs_desc` is more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. This function is less efficient than `from_limbs_asc`. 
##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::from_limbs_desc(&[]), 0); assert_eq!(Natural::from_limbs_desc(&[123]), 123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from_limbs_desc(&[232, 3567587328]), 1000000000000u64); } ``` #### pub fn from_owned_limbs_asc(xs: Vec<u64, Global>) -> Natural Converts a `Vec` of limbs to a `Natural`. The limbs are in ascending order, so that less-significant limbs have lower indices in the input `Vec`. This function takes ownership of the limbs. If it’s necessary to borrow the limbs instead, use `from_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. This function is more efficient than `from_limbs_desc`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::from_owned_limbs_asc(vec![]), 0); assert_eq!(Natural::from_owned_limbs_asc(vec![123]), 123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from_owned_limbs_asc(vec![3567587328, 232]), 1000000000000u64); } ``` #### pub fn from_owned_limbs_desc(xs: Vec<u64, Global>) -> Natural Converts a `Vec` of limbs to a `Natural`. The limbs are in descending order, so that less-significant limbs have higher indices in the input `Vec`. This function takes ownership of the limbs. If it’s necessary to borrow the limbs instead, use `from_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. This function is less efficient than `from_limbs_asc`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::from_owned_limbs_desc(vec![]), 0); assert_eq!(Natural::from_owned_limbs_desc(vec![123]), 123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from_owned_limbs_desc(vec![232, 3567587328]), 1000000000000u64); } ``` ### impl Natural #### pub const fn const_from(x: u64) -> Natural Converts a `Limb` to a `Natural`. This function is const, so it may be used to define constants. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; const TEN: Natural = Natural::const_from(10); assert_eq!(TEN, 10); ``` ### impl Natural #### pub fn limb_count(&self) -> u64 Returns the number of limbs of a `Natural`. Zero has 0 limbs. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::ZERO.limb_count(), 0); assert_eq!(Natural::from(123u32).limb_count(), 1); assert_eq!(Natural::from(10u32).pow(12).limb_count(), 2); } ``` ### impl Natural #### pub fn sci_mantissa_and_exponent_round<T>( &self, rm: RoundingMode ) -> Option<(T, u64, Ordering)>where T: PrimitiveFloat, Returns a `Natural`’s scientific mantissa and exponent, rounding according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the mantissa and exponent represent a value that is less than, equal to, or greater than the original value. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the provided rounding mode. If the rounding mode is `Exact` but the conversion is not exact, `None` is returned. $$ f(x, r) \approx \left (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor\right ). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SciMantissaAndExponent; use malachite_base::num::float::NiceFloat; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let test = |n: Natural, rm: RoundingMode, out: Option<(f32, u64, Ordering)>| { assert_eq!( n.sci_mantissa_and_exponent_round(rm) .map(|(m, e, o)| (NiceFloat(m), e, o)), out.map(|(m, e, o)| (NiceFloat(m), e, o)) ); }; test(Natural::from(3u32), RoundingMode::Floor, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Down, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Ceiling, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Up, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Nearest, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Exact, Some((1.5, 1, Ordering::Equal))); test( Natural::from(123u32), RoundingMode::Floor, Some((1.921875, 6, Ordering::Equal)), ); test( Natural::from(123u32), RoundingMode::Down, Some((1.921875, 6, Ordering::Equal)), ); test( Natural::from(123u32), RoundingMode::Ceiling, Some((1.921875, 6, Ordering::Equal)), ); test(Natural::from(123u32), RoundingMode::Up, Some((1.921875, 6, Ordering::Equal))); test( Natural::from(123u32), RoundingMode::Nearest, Some((1.921875, 6, Ordering::Equal)), ); test( Natural::from(123u32), RoundingMode::Exact, Some((1.921875, 6, Ordering::Equal)), ); test( Natural::from(1000000000u32), RoundingMode::Nearest, Some((1.8626451, 29, Ordering::Equal)), ); test( Natural::from(10u32).pow(52), RoundingMode::Nearest, Some((1.670478, 172, Ordering::Greater)), ); test(Natural::from(10u32).pow(52), RoundingMode::Exact, None); ``` #### pub fn from_sci_mantissa_and_exponent_round<T>( sci_mantissa: T, sci_exponent: u64, rm: RoundingMode ) -> Option<(Natural, Ordering)>where T: PrimitiveFloat, Constructs a `Natural` from its scientific mantissa and exponent, rounding according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value represented by the mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. Some combinations of mantissas and exponents do not specify a `Natural`, in which case the resulting value is rounded to a `Natural` using the specified rounding mode. If the rounding mode is `Exact` but the input does not exactly specify a `Natural`, `None` is returned. 
$$ f(x, r) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. ##### Panics Panics if `sci_mantissa` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::SciMantissaAndExponent; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; use std::str::FromStr; let test = | mantissa: f32, exponent: u64, rm: RoundingMode, out: Option<(Natural, Ordering)> | { assert_eq!( Natural::from_sci_mantissa_and_exponent_round(mantissa, exponent, rm), out ); }; test(1.5, 1, RoundingMode::Floor, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Down, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Ceiling, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Up, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Nearest, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Exact, Some((Natural::from(3u32), Ordering::Equal))); test(1.51, 1, RoundingMode::Floor, Some((Natural::from(3u32), Ordering::Less))); test(1.51, 1, RoundingMode::Down, Some((Natural::from(3u32), Ordering::Less))); test(1.51, 1, RoundingMode::Ceiling, Some((Natural::from(4u32), Ordering::Greater))); test(1.51, 1, RoundingMode::Up, Some((Natural::from(4u32), Ordering::Greater))); test(1.51, 1, RoundingMode::Nearest, Some((Natural::from(3u32), Ordering::Less))); test(1.51, 1, RoundingMode::Exact, None); test( 1.670478, 172, RoundingMode::Nearest, Some( ( Natural::from_str("10000000254586612611935772707803116801852191350456320") .unwrap(), Ordering::Equal ) ), ); test(2.0, 1, RoundingMode::Floor, None); test(10.0, 1, RoundingMode::Floor, None); test(0.5, 1, RoundingMode::Floor, None); ``` ### impl Natural #### pub fn to_limbs_asc(&self) -> Vec<u64, GlobalReturns the limbs of a `Natural`, in ascending order, so that less-significant limbs have lower indices in the output vector. There are no trailing zero limbs. This function borrows the `Natural`. If taking ownership is possible instead, `into_limbs_asc` is more efficient. This function is more efficient than `to_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.to_limbs_asc().is_empty()); assert_eq!(Natural::from(123u32).to_limbs_asc(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).to_limbs_asc(), &[3567587328, 232]); } ``` #### pub fn to_limbs_desc(&self) -> Vec<u64, GlobalReturns the limbs of a `Natural` in descending order, so that less-significant limbs have higher indices in the output vector. There are no leading zero limbs. This function borrows the `Natural`. If taking ownership is possible instead, `into_limbs_desc` is more efficient. This function is less efficient than `to_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.to_limbs_desc().is_empty()); assert_eq!(Natural::from(123u32).to_limbs_desc(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).to_limbs_desc(), &[232, 3567587328]); } ``` #### pub fn into_limbs_asc(self) -> Vec<u64, GlobalReturns the limbs of a `Natural`, in ascending order, so that less-significant limbs have lower indices in the output vector. There are no trailing zero limbs. This function takes ownership of the `Natural`. If it’s necessary to borrow instead, use `to_limbs_asc`. This function is more efficient than `into_limbs_desc`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.into_limbs_asc().is_empty()); assert_eq!(Natural::from(123u32).into_limbs_asc(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).into_limbs_asc(), &[3567587328, 232]); } ``` #### pub fn into_limbs_desc(self) -> Vec<u64, GlobalReturns the limbs of a `Natural`, in descending order, so that less-significant limbs have higher indices in the output vector. There are no leading zero limbs. This function takes ownership of the `Natural`. If it’s necessary to borrow instead, use `to_limbs_desc`. This function is less efficient than `into_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.into_limbs_desc().is_empty()); assert_eq!(Natural::from(123u32).into_limbs_desc(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).into_limbs_desc(), &[232, 3567587328]); } ``` #### pub fn limbs(&self) -> LimbIterator<'_Returns a double-ended iterator over the limbs of a `Natural`. The forward order is ascending, so that less-significant limbs appear first. There are no trailing zero limbs going forward, or leading zeros going backward. If it’s necessary to get a `Vec` of all the limbs, consider using `to_limbs_asc`, `to_limbs_desc`, `into_limbs_asc`, or `into_limbs_desc` instead. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use itertools::Itertools; use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.limbs().next().is_none()); assert_eq!(Natural::from(123u32).limbs().collect_vec(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).limbs().collect_vec(), &[3567587328, 232]); assert!(Natural::ZERO.limbs().rev().next().is_none()); assert_eq!(Natural::from(123u32).limbs().rev().collect_vec(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Natural::from(10u32).pow(12).limbs().rev().collect_vec(), &[232, 3567587328] ); } ``` ### impl Natural #### pub fn trailing_zeros(&self) -> Option<u64> Returns the number of trailing zeros in the binary expansion of a `Natural` (equivalently, the multiplicity of 2 in its prime factorization), or `None` if the `Natural` is 0. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.trailing_zeros(), None); assert_eq!(Natural::from(3u32).trailing_zeros(), Some(0)); assert_eq!(Natural::from(72u32).trailing_zeros(), Some(3)); assert_eq!(Natural::from(100u32).trailing_zeros(), Some(2)); assert_eq!(Natural::from(10u32).pow(12).trailing_zeros(), Some(12)); ``` Trait Implementations --- ### impl<'a, 'b> Add<&'a Natural> for &'b Natural #### fn add(self, other: &'a Natural) -> Natural Adds two `Natural`s, taking both by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(&Natural::ZERO + &Natural::from(123u32), 123); assert_eq!(&Natural::from(123u32) + &Natural::ZERO, 123); assert_eq!(&Natural::from(123u32) + &Natural::from(456u32), 579); assert_eq!( &Natural::from(10u32).pow(12) + &(Natural::from(10u32).pow(12) << 1), 3000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `+` operator.### impl<'a> Add<&'a Natural> for Natural #### fn add(self, other: &'a Natural) -> Natural Adds two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO + &Natural::from(123u32), 123); assert_eq!(Natural::from(123u32) + &Natural::ZERO, 123); assert_eq!(Natural::from(123u32) + &Natural::from(456u32), 579); assert_eq!( Natural::from(10u32).pow(12) + &(Natural::from(10u32).pow(12) << 1), 3000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `+` operator.### impl<'a> Add<Natural> for &'a Natural #### fn add(self, other: Natural) -> Natural Adds two `Natural`s, taking the first by reference and the second by value. 
$$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(&Natural::ZERO + Natural::from(123u32), 123); assert_eq!(&Natural::from(123u32) + Natural::ZERO, 123); assert_eq!(&Natural::from(123u32) + Natural::from(456u32), 579); assert_eq!( &Natural::from(10u32).pow(12) + (Natural::from(10u32).pow(12) << 1), 3000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `+` operator.### impl Add<Natural> for Natural #### fn add(self, other: Natural) -> Natural Adds two `Natural`s, taking both by value. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO + Natural::from(123u32), 123); assert_eq!(Natural::from(123u32) + Natural::ZERO, 123); assert_eq!(Natural::from(123u32) + Natural::from(456u32), 579); assert_eq!( Natural::from(10u32).pow(12) + (Natural::from(10u32).pow(12) << 1), 3000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `+` operator.### impl<'a> AddAssign<&'a Natural> for Natural #### fn add_assign(&mut self, other: &'a Natural) Adds a `Natural` to a `Natural` in place, taking the `Natural` on the right-hand side by reference. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x += &Natural::from(10u32).pow(12); x += &(Natural::from(10u32).pow(12) * Natural::from(2u32)); x += &(Natural::from(10u32).pow(12) * Natural::from(3u32)); x += &(Natural::from(10u32).pow(12) * Natural::from(4u32)); assert_eq!(x, 10000000000000u64); ``` ### impl AddAssign<Natural> for Natural #### fn add_assign(&mut self, other: Natural) Adds a `Natural` to a `Natural` in place, taking the `Natural` on the right-hand side by value. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x += Natural::from(10u32).pow(12); x += Natural::from(10u32).pow(12) * Natural::from(2u32); x += Natural::from(10u32).pow(12) * Natural::from(3u32); x += Natural::from(10u32).pow(12) * Natural::from(4u32); assert_eq!(x, 10000000000000u64); ``` ### impl<'a> AddMul<&'a Natural, Natural> for Natural #### fn add_mul(self, y: &'a Natural, z: Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking the first and third by value and the second by reference. $f(x, y, z) = x + yz$. 
##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).add_mul(&Natural::from(3u32), Natural::from(4u32)), 22); assert_eq!( Natural::from(10u32).pow(12) .add_mul(&Natural::from(0x10000u32), Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl<'a, 'b, 'c> AddMul<&'a Natural, &'b Natural> for &'c Natural #### fn add_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking all three by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).add_mul(&Natural::from(3u32), &Natural::from(4u32)), 22); assert_eq!( (&Natural::from(10u32).pow(12)) .add_mul(&Natural::from(0x10000u32), &Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl<'a, 'b> AddMul<&'a Natural, &'b Natural> for Natural #### fn add_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking the first by value and the second and third by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).add_mul(&Natural::from(3u32), &Natural::from(4u32)), 22); assert_eq!( Natural::from(10u32).pow(12) .add_mul(&Natural::from(0x10000u32), &Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl<'a> AddMul<Natural, &'a Natural> for Natural #### fn add_mul(self, y: Natural, z: &'a Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking the first two by value and the third by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).add_mul(Natural::from(3u32), &Natural::from(4u32)), 22); assert_eq!( Natural::from(10u32).pow(12) .add_mul(Natural::from(0x10000u32), &Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl AddMul<Natural, Natural> for Natural #### fn add_mul(self, y: Natural, z: Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking all three by value. $f(x, y, z) = x + yz$. 
##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).add_mul(Natural::from(3u32), Natural::from(4u32)), 22); assert_eq!( Natural::from(10u32).pow(12) .add_mul(Natural::from(0x10000u32), Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl<'a> AddMulAssign<&'a Natural, Natural> for Natural #### fn add_mul_assign(&mut self, y: &'a Natural, z: Natural) Adds the product of two other `Natural`s to a `Natural` in place, taking the first `Natural` on the right-hand side by reference and the second by value. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.add_mul_assign(&Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 22); let mut x = Natural::from(10u32).pow(12); x.add_mul_assign(&Natural::from(0x10000u32), Natural::from(10u32).pow(12)); assert_eq!(x, 65537000000000000u64); ``` ### impl<'a, 'b> AddMulAssign<&'a Natural, &'b Natural> for Natural #### fn add_mul_assign(&mut self, y: &'a Natural, z: &'b Natural) Adds the product of two other `Natural`s to a `Natural` in place, taking both `Natural`s on the right-hand side by reference. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.add_mul_assign(&Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 22); let mut x = Natural::from(10u32).pow(12); x.add_mul_assign(&Natural::from(0x10000u32), &Natural::from(10u32).pow(12)); assert_eq!(x, 65537000000000000u64); ``` ### impl<'a> AddMulAssign<Natural, &'a Natural> for Natural #### fn add_mul_assign(&mut self, y: Natural, z: &'a Natural) Adds the product of two other `Natural`s to a `Natural` in place, taking the first `Natural` on the right-hand side by value and the second by reference. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.add_mul_assign(Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 22); let mut x = Natural::from(10u32).pow(12); x.add_mul_assign(Natural::from(0x10000u32), &Natural::from(10u32).pow(12)); assert_eq!(x, 65537000000000000u64); ``` ### impl AddMulAssign<Natural, Natural> for Natural #### fn add_mul_assign(&mut self, y: Natural, z: Natural) Adds the product of two other `Natural`s to a `Natural` in place, taking both `Natural`s on the right-hand side by value. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.add_mul_assign(Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 22); let mut x = Natural::from(10u32).pow(12); x.add_mul_assign(Natural::from(0x10000u32), Natural::from(10u32).pow(12)); assert_eq!(x, 65537000000000000u64); ``` ### impl Binary for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts a `Natural` to a binary `String`. Using the `#` format flag prepends `"0b"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToBinaryString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_binary_string(), "0"); assert_eq!(Natural::from(123u32).to_binary_string(), "1111011"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_binary_string(), "1110100011010100101001010001000000000000" ); assert_eq!(format!("{:011b}", Natural::from(123u32)), "00001111011"); assert_eq!(format!("{:#b}", Natural::ZERO), "0b0"); assert_eq!(format!("{:#b}", Natural::from(123u32)), "0b1111011"); assert_eq!( format!("{:#b}", Natural::from_str("1000000000000").unwrap()), "0b1110100011010100101001010001000000000000" ); assert_eq!(format!("{:#011b}", Natural::from(123u32)), "0b001111011"); ``` ### impl<'a> BinomialCoefficient<&'a Natural> for Natural #### fn binomial_coefficient(n: &'a Natural, k: &'a Natural) -> Natural Computes the binomial coefficient of two `Natural`s, taking both by reference. $$ f(n, k) = \binom{n}{k} = \frac{n!}{k!(n-k)!}. 
$$ ##### Worst-case complexity TODO ##### Examples ``` use malachite_base::num::arithmetic::traits::BinomialCoefficient; use malachite_nz::natural::Natural; assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(0u32)), 1); assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(1u32)), 4); assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(2u32)), 6); assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(3u32)), 4); assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(4u32)), 1); assert_eq!( Natural::binomial_coefficient(&Natural::from(10u32), &Natural::from(5u32)), 252 ); assert_eq!( Natural::binomial_coefficient(&Natural::from(100u32), &Natural::from(50u32)) .to_string(), "100891344545564193334812497256" ); ``` ### impl BinomialCoefficient<Natural> for Natural #### fn binomial_coefficient(n: Natural, k: Natural) -> Natural Computes the binomial coefficient of two `Natural`s, taking both by value. $$ f(n, k) = \binom{n}{k} = \frac{n!}{k!(n-k)!}. $$ ##### Worst-case complexity TODO ##### Examples ``` use malachite_base::num::arithmetic::traits::BinomialCoefficient; use malachite_nz::natural::Natural; assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(0u32)), 1); assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(1u32)), 4); assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(2u32)), 6); assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(3u32)), 4); assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(4u32)), 1); assert_eq!(Natural::binomial_coefficient(Natural::from(10u32), Natural::from(5u32)), 252); assert_eq!( Natural::binomial_coefficient(Natural::from(100u32), Natural::from(50u32)).to_string(), "100891344545564193334812497256" ); ``` ### impl BitAccess for Natural Provides functions for accessing and modifying the $i$th bit of a `Natural`, or the coefficient of $2^i$ in its binary expansion. #### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitAccess; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.assign_bit(2, true); x.assign_bit(5, true); x.assign_bit(6, true); assert_eq!(x, 100); x.assign_bit(2, false); x.assign_bit(5, false); x.assign_bit(6, false); assert_eq!(x, 0); let mut x = Natural::ZERO; x.flip_bit(10); assert_eq!(x, 1024); x.flip_bit(10); assert_eq!(x, 0); ``` #### fn get_bit(&self, index: u64) -> bool Determines whether the $i$th bit of a `Natural`, or the coefficient of $2^i$ in its binary expansion, is 0 or 1. `false` means 0 and `true` means 1. Getting bits beyond the `Natural`’s width is allowed; those bits are `false`. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $f(n, j) = (b_j = 1)$. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::logic::traits::BitAccess; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32).get_bit(2), false); assert_eq!(Natural::from(123u32).get_bit(3), true); assert_eq!(Natural::from(123u32).get_bit(100), false); assert_eq!(Natural::from(10u32).pow(12).get_bit(12), true); assert_eq!(Natural::from(10u32).pow(12).get_bit(100), false); ``` #### fn set_bit(&mut self, index: u64) Sets the $i$th bit of a `Natural`, or the coefficient of $2^i$ in its binary expansion, to 1. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $$ n \gets \begin{cases} n + 2^j & \text{if} \quad b_j = 0, \\ n & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.set_bit(2); x.set_bit(5); x.set_bit(6); assert_eq!(x, 100); ``` #### fn clear_bit(&mut self, index: u64) Sets the $i$th bit of a `Natural`, or the coefficient of $2^i$ in its binary expansion, to 0. Clearing bits beyond the `Natural`’s width is allowed; since those bits are already `false`, clearing them does nothing. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $$ n \gets \begin{cases} n - 2^j & \text{if} \quad b_j = 1, \\ n & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_nz::natural::Natural; let mut x = Natural::from(0x7fu32); x.clear_bit(0); x.clear_bit(1); x.clear_bit(3); x.clear_bit(4); assert_eq!(x, 100); ``` #### fn assign_bit(&mut self, index: u64, bit: bool) Sets the bit at `index` to whichever value `bit` is. #### fn flip_bit(&mut self, index: u64) Sets the bit at `index` to the opposite of its original value. ### impl<'a, 'b> BitAnd<&'a Natural> for &'b Natural #### fn bitand(self, other: &'a Natural) -> Natural Takes the bitwise and of two `Natural`s, taking both by reference. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) & &Natural::from(456u32), 72); assert_eq!( &Natural::from(10u32).pow(12) & &(Natural::from(10u32).pow(12) - Natural::ONE), 999999995904u64 ); ``` #### type Output = Natural The resulting type after applying the `&` operator.### impl<'a> BitAnd<&'a Natural> for Natural #### fn bitand(self, other: &'a Natural) -> Natural Takes the bitwise and of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) & &Natural::from(456u32), 72); assert_eq!( Natural::from(10u32).pow(12) & &(Natural::from(10u32).pow(12) - Natural::ONE), 999999995904u64 ); ``` #### type Output = Natural The resulting type after applying the `&` operator.### impl<'a> BitAnd<Natural> for &'a Natural #### fn bitand(self, other: Natural) -> Natural Takes the bitwise and of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) & Natural::from(456u32), 72); assert_eq!( &Natural::from(10u32).pow(12) & (Natural::from(10u32).pow(12) - Natural::ONE), 999999995904u64 ); ``` #### type Output = Natural The resulting type after applying the `&` operator.### impl BitAnd<Natural> for Natural #### fn bitand(self, other: Natural) -> Natural Takes the bitwise and of two `Natural`s, taking both by value. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) & Natural::from(456u32), 72); assert_eq!( Natural::from(10u32).pow(12) & (Natural::from(10u32).pow(12) - Natural::ONE), 999999995904u64 ); ``` #### type Output = Natural The resulting type after applying the `&` operator.### impl<'a> BitAndAssign<&'a Natural> for Natural #### fn bitand_assign(&mut self, other: &'a Natural) Bitwise-ands a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by reference. $$ x \gets x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_nz::natural::Natural; let mut x = Natural::from(u32::MAX); x &= &Natural::from(0xf0ffffffu32); x &= &Natural::from(0xfff0_ffffu32); x &= &Natural::from(0xfffff0ffu32); x &= &Natural::from(0xfffffff0u32); assert_eq!(x, 0xf0f0_f0f0u32); ``` ### impl BitAndAssign<Natural> for Natural #### fn bitand_assign(&mut self, other: Natural) Bitwise-ands a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by value. $$ x \gets x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; let mut x = Natural::from(u32::MAX); x &= Natural::from(0xf0ffffffu32); x &= Natural::from(0xfff0_ffffu32); x &= Natural::from(0xfffff0ffu32); x &= Natural::from(0xfffffff0u32); assert_eq!(x, 0xf0f0_f0f0u32); ``` ### impl BitBlockAccess for Natural #### fn get_bits(&self, start: u64, end: u64) -> Natural Extracts a block of adjacent bits from a `Natural`, taking the `Natural` by reference. The first index is `start` and last index is `end - 1`. Let $n$ be `self`, and let $p$ and $q$ be `start` and `end`, respectively. 
Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $$ f(n, p, q) = \sum_{i=p}^{q-1} 2^{b_{i-p}}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::natural::Natural; assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits(16, 48), 0xef011234u32); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits(4, 16), 0x567u32); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits(0, 100), 0xabcdef0112345678u64); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits(10, 10), 0); ``` #### fn get_bits_owned(self, start: u64, end: u64) -> Natural Extracts a block of adjacent bits from a `Natural`, taking the `Natural` by value. The first index is `start` and last index is `end - 1`. Let $n$ be `self`, and let $p$ and $q$ be `start` and `end`, respectively. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $$ f(n, p, q) = \sum_{i=p}^{q-1} 2^{b_{i-p}}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::natural::Natural; assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits_owned(16, 48), 0xef011234u32); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits_owned(4, 16), 0x567u32); assert_eq!( Natural::from(0xabcdef0112345678u64).get_bits_owned(0, 100), 0xabcdef0112345678u64 ); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits_owned(10, 10), 0); ``` #### fn assign_bits(&mut self, start: u64, end: u64, bits: &Natural) Replaces a block of adjacent bits in a `Natural` with other bits. The least-significant `end - start` bits of `bits` are assigned to bits `start` through `end - 1`, inclusive, of `self`. Let $n$ be `self` and let $m$ be `bits`, and let $p$ and $q$ be `start` and `end`, respectively. If `bits` has fewer bits than `end - start`, the high bits are interpreted as 0. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Let $$ m = \sum_{i=0}^k 2^{d_i}, $$ where for all $i$, $d_i\in \{0, 1\}$. Also, let $p, q \in \mathbb{N}$, and let $W$ be `max(self.significant_bits(), end + 1)`. Then $$ n \gets \sum_{i=0}^{W-1} 2^{c_i}, $$ where $$ \{c_0, c_1, c_2, \ldots, c_ {W-1}\} = \{b_0, b_1, b_2, \ldots, b_{p-1}, d_0, d_1, \ldots, d_{p-q-1}, b_q, \ldots, b_ {W-1}\}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `end`. ##### Panics Panics if `start > end`. 
##### Examples ``` use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::natural::Natural; let mut n = Natural::from(123u32); n.assign_bits(5, 7, &Natural::from(456u32)); assert_eq!(n, 27); let mut n = Natural::from(123u32); n.assign_bits(64, 128, &Natural::from(456u32)); assert_eq!(n.to_string(), "8411715297611555537019"); let mut n = Natural::from(123u32); n.assign_bits(80, 100, &Natural::from(456u32)); assert_eq!(n.to_string(), "551270173744270903666016379"); ``` #### type Bits = Natural ### impl BitConvertible for Natural #### fn to_bits_asc(&self) -> Vec<bool, GlobalReturns a `Vec` containing the bits of a `Natural` in ascending order: least- to most-significant. If the number is 0, the `Vec` is empty; otherwise, it ends with `true`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert!(Natural::ZERO.to_bits_asc().is_empty()); // 105 = 1101001b assert_eq!( Natural::from(105u32).to_bits_asc(), &[true, false, false, true, false, true, true] ); ``` #### fn to_bits_desc(&self) -> Vec<bool, GlobalReturns a `Vec` containing the bits of a `Natural` in descending order: most- to least-significant. If the number is 0, the `Vec` is empty; otherwise, it begins with `true`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert!(Natural::ZERO.to_bits_desc().is_empty()); // 105 = 1101001b assert_eq!( Natural::from(105u32).to_bits_desc(), &[true, true, false, true, false, false, true] ); ``` #### fn from_bits_asc<I>(xs: I) -> Naturalwhere I: Iterator<Item = bool>, Converts an iterator of bits into a `Natural`. The bits should be in ascending order (least- to most-significant). $$ f((b_i)_ {i=0}^{k-1}) = \sum_{i=0}^{k-1}2^i [b_i], $$ where braces denote the Iverson bracket, which converts a bit to 0 or 1. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.count()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_nz::natural::Natural; use std::iter::empty; assert_eq!(Natural::from_bits_asc(empty()), 0); // 105 = 1101001b assert_eq!( Natural::from_bits_asc([true, false, false, true, false, true, true].iter().cloned()), 105 ); ``` #### fn from_bits_desc<I>(xs: I) -> Naturalwhere I: Iterator<Item = bool>, Converts an iterator of bits into a `Natural`. The bits should be in descending order (most- to least-significant). $$ f((b_i)_ {i=0}^{k-1}) = \sum_{i=0}^{k-1}2^{k-i-1} [b_i], $$ where braces denote the Iverson bracket, which converts a bit to 0 or 1. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.count()`. 
##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_nz::natural::Natural; use std::iter::empty; assert_eq!(Natural::from_bits_desc(empty()), 0); // 105 = 1101001b assert_eq!( Natural::from_bits_desc([true, true, false, true, false, false, true].iter().cloned()), 105 ); ``` ### impl<'a> BitIterable for &'a Natural #### fn bits(self) -> NaturalBitIterator<'aReturns a double-ended iterator over the bits of a `Natural`. The forward order is ascending, so that less significant bits appear first. There are no trailing false bits going forward, or leading falses going backward. If it’s necessary to get a `Vec` of all the bits, consider using `to_bits_asc` or `to_bits_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitIterable; use malachite_nz::natural::Natural; assert!(Natural::ZERO.bits().next().is_none()); // 105 = 1101001b assert_eq!( Natural::from(105u32).bits().collect::<Vec<bool>>(), &[true, false, false, true, false, true, true] ); assert!(Natural::ZERO.bits().next_back().is_none()); // 105 = 1101001b assert_eq!( Natural::from(105u32).bits().rev().collect::<Vec<bool>>(), &[true, true, false, true, false, false, true] ); ``` #### type BitIterator = NaturalBitIterator<'a### impl<'a, 'b> BitOr<&'a Natural> for &'b Natural #### fn bitor(self, other: &'a Natural) -> Natural Takes the bitwise or of two `Natural`s, taking both by reference. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) | &Natural::from(456u32), 507); assert_eq!( &Natural::from(10u32).pow(12) | &(Natural::from(10u32).pow(12) - Natural::ONE), 1000000004095u64 ); ``` #### type Output = Natural The resulting type after applying the `|` operator.### impl<'a> BitOr<&'a Natural> for Natural #### fn bitor(self, other: &'a Natural) -> Natural Takes the bitwise or of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) | &Natural::from(456u32), 507); assert_eq!( Natural::from(10u32).pow(12) | &(Natural::from(10u32).pow(12) - Natural::ONE), 1000000004095u64 ); ``` #### type Output = Natural The resulting type after applying the `|` operator.### impl<'a> BitOr<Natural> for &'a Natural #### fn bitor(self, other: Natural) -> Natural Takes the bitwise or of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) | Natural::from(456u32), 507); assert_eq!( &Natural::from(10u32).pow(12) | (Natural::from(10u32).pow(12) - Natural::ONE), 1000000004095u64 ); ``` #### type Output = Natural The resulting type after applying the `|` operator.### impl BitOr<Natural> for Natural #### fn bitor(self, other: Natural) -> Natural Takes the bitwise or of two `Natural`s, taking both by value. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) | Natural::from(456u32), 507); assert_eq!( Natural::from(10u32).pow(12) | (Natural::from(10u32).pow(12) - Natural::ONE), 1000000004095u64 ); ``` #### type Output = Natural The resulting type after applying the `|` operator.### impl<'a> BitOrAssign<&'a Natural> for Natural #### fn bitor_assign(&mut self, other: &'a Natural) Bitwise-ors a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x |= &Natural::from(0x0000000fu32); x |= &Natural::from(0x00000f00u32); x |= &Natural::from(0x000f_0000u32); x |= &Natural::from(0x0f000000u32); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl BitOrAssign<Natural> for Natural #### fn bitor_assign(&mut self, other: Natural) Bitwise-ors a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x |= Natural::from(0x0000000fu32); x |= Natural::from(0x00000f00u32); x |= Natural::from(0x000f_0000u32); x |= Natural::from(0x0f000000u32); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl<'a> BitScan for &'a Natural #### fn index_of_next_false_bit(self, start: u64) -> Option<u64Given a `Natural` and a starting index, searches the `Natural` for the smallest index of a `false` bit that is greater than or equal to the starting index. Since every `Natural` has an implicit prefix of infinitely-many zeros, this function always returns a value. Starting beyond the `Natural`’s width is allowed; the result is the starting index. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::logic::traits::BitScan; use malachite_nz::natural::Natural; assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(0), Some(0)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(20), Some(20)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(31), Some(31)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(32), Some(34)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(33), Some(34)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(34), Some(34)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(35), Some(36)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(100), Some(100)); ``` #### fn index_of_next_true_bit(self, start: u64) -> Option<u64Given a `Natural` and a starting index, searches the `Natural` for the smallest index of a `true` bit that is greater than or equal to the starting index. If the starting index is greater than or equal to the `Natural`’s width, the result is `None` since there are no `true` bits past that point. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitScan; use malachite_nz::natural::Natural; assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(0), Some(32)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(20), Some(32)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(31), Some(32)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(32), Some(32)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(33), Some(33)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(34), Some(35)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(35), Some(35)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(36), None); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(100), None); ``` ### impl<'a, 'b> BitXor<&'a Natural> for &'b Natural #### fn bitxor(self, other: &'a Natural) -> Natural Takes the bitwise xor of two `Natural`s, taking both by reference. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) ^ &Natural::from(456u32), 435); assert_eq!( &Natural::from(10u32).pow(12) ^ &(Natural::from(10u32).pow(12) - Natural::ONE), 8191 ); ``` #### type Output = Natural The resulting type after applying the `^` operator.### impl<'a> BitXor<&'a Natural> for Natural #### fn bitxor(self, other: &'a Natural) -> Natural Takes the bitwise xor of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) ^ &Natural::from(456u32), 435); assert_eq!( Natural::from(10u32).pow(12) ^ &(Natural::from(10u32).pow(12) - Natural::ONE), 8191 ); ``` #### type Output = Natural The resulting type after applying the `^` operator.### impl<'a> BitXor<Natural> for &'a Natural #### fn bitxor(self, other: Natural) -> Natural Takes the bitwise xor of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) ^ Natural::from(456u32), 435); assert_eq!( &Natural::from(10u32).pow(12) ^ (Natural::from(10u32).pow(12) - Natural::ONE), 8191 ); ``` #### type Output = Natural The resulting type after applying the `^` operator.### impl BitXor<Natural> for Natural #### fn bitxor(self, other: Natural) -> Natural Takes the bitwise xor of two `Natural`s, taking both by value. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) ^ Natural::from(456u32), 435); assert_eq!( Natural::from(10u32).pow(12) ^ (Natural::from(10u32).pow(12) - Natural::ONE), 8191 ); ``` #### type Output = Natural The resulting type after applying the `^` operator.### impl<'a> BitXorAssign<&'a Natural> for Natural #### fn bitxor_assign(&mut self, other: &'a Natural) Bitwise-xors a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by reference. $$ x \gets x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x ^= &Natural::from(0x0000000fu32); x ^= &Natural::from(0x00000f00u32); x ^= &Natural::from(0x000f_0000u32); x ^= &Natural::from(0x0f000000u32); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl BitXorAssign<Natural> for Natural #### fn bitxor_assign(&mut self, other: Natural) Bitwise-xors a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by value. $$ x \gets x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x ^= Natural::from(0x0000000fu32); x ^= Natural::from(0x00000f00u32); x ^= Natural::from(0x000f_0000u32); x ^= Natural::from(0x0f000000u32); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl<'a> CeilingDivAssignNegMod<&'a Natural> for Natural #### fn ceiling_div_assign_neg_mod(&mut self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and returning the remainder of the negative of the first number divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x, $$ $$ x \gets \left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivAssignNegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 let mut x = Natural::from(23u32); assert_eq!(x.ceiling_div_assign_neg_mod(&Natural::from(10u32)), 7); assert_eq!(x, 3); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!( x.ceiling_div_assign_neg_mod(&Natural::from_str("1234567890987").unwrap()), 704498996588u64, ); assert_eq!(x, 810000006724u64); ``` #### type ModOutput = Natural ### impl CeilingDivAssignNegMod<Natural> for Natural #### fn ceiling_div_assign_neg_mod(&mut self, other: Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value and returning the remainder of the negative of the first number divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x, $$ $$ x \gets \left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivAssignNegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 let mut x = Natural::from(23u32); assert_eq!(x.ceiling_div_assign_neg_mod(Natural::from(10u32)), 7); assert_eq!(x, 3); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!( x.ceiling_div_assign_neg_mod(Natural::from_str("1234567890987").unwrap()), 704498996588u64, ); assert_eq!(x, 810000006724u64); ``` #### type ModOutput = Natural ### impl<'a, 'b> CeilingDivNegMod<&'b Natural> for &'a Natural #### fn ceiling_div_neg_mod(self, other: &'b Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by reference and returning the ceiling of the quotient and the remainder of the negative of the first `Natural` divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space y\left \lceil \frac{x}{y} \right \rceil - x \right ). 
$$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivNegMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!( (&Natural::from(23u32)).ceiling_div_neg_mod(&Natural::from(10u32)).to_debug_string(), "(3, 7)" ); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .ceiling_div_neg_mod(&Natural::from_str("1234567890987").unwrap()) .to_debug_string(), "(810000006724, 704498996588)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a> CeilingDivNegMod<&'a Natural> for Natural #### fn ceiling_div_neg_mod(self, other: &'a Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning the ceiling of the quotient and the remainder of the negative of the first `Natural` divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space y\left \lceil \frac{x}{y} \right \rceil - x \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivNegMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!( Natural::from(23u32).ceiling_div_neg_mod(&Natural::from(10u32)).to_debug_string(), "(3, 7)" ); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .ceiling_div_neg_mod(&Natural::from_str("1234567890987").unwrap()) .to_debug_string(), "(810000006724, 704498996588)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a> CeilingDivNegMod<Natural> for &'a Natural #### fn ceiling_div_neg_mod(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning the ceiling of the quotient and the remainder of the negative of the first `Natural` divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space y\left \lceil \frac{x}{y} \right \rceil - x \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivNegMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!( (&Natural::from(23u32)).ceiling_div_neg_mod(Natural::from(10u32)).to_debug_string(), "(3, 7)" ); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .ceiling_div_neg_mod(Natural::from_str("1234567890987").unwrap()) .to_debug_string(), "(810000006724, 704498996588)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl CeilingDivNegMod<Natural> for Natural #### fn ceiling_div_neg_mod(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by value and returning the ceiling of the quotient and the remainder of the negative of the first `Natural` divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space y\left \lceil \frac{x}{y} \right \rceil - x \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivNegMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!( Natural::from(23u32).ceiling_div_neg_mod(Natural::from(10u32)).to_debug_string(), "(3, 7)" ); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .ceiling_div_neg_mod(Natural::from_str("1234567890987").unwrap()) .to_debug_string(), "(810000006724, 704498996588)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a, 'b> CeilingLogBase<&'b Natural> for &'a Natural #### fn ceiling_log_base(self, base: &Natural) -> u64 Returns the ceiling of the base-$b$ logarithm of a positive `Natural`. $f(x, b) = \lceil\log_b x\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if `self` is 0 or `base` is less than 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingLogBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(80u32).ceiling_log_base(&Natural::from(3u32)), 4); assert_eq!(Natural::from(81u32).ceiling_log_base(&Natural::from(3u32)), 4); assert_eq!(Natural::from(82u32).ceiling_log_base(&Natural::from(3u32)), 5); assert_eq!(Natural::from(4294967296u64).ceiling_log_base(&Natural::from(10u32)), 10); ``` This is equivalent to `fmpz_clog` from `fmpz/clog.c`, FLINT 2.7.1. #### type Output = u64 ### impl<'a> CeilingLogBase2 for &'a Natural #### fn ceiling_log_base_2(self) -> u64 Returns the ceiling of the base-2 logarithm of a positive `Natural`. $f(x) = \lceil\log_2 x\rceil$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0. 
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingLogBase2; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).ceiling_log_base_2(), 2); assert_eq!(Natural::from(100u32).ceiling_log_base_2(), 7); ``` #### type Output = u64 ### impl<'a> CeilingLogBasePowerOf2<u64> for &'a Natural #### fn ceiling_log_base_power_of_2(self, pow: u64) -> u64 Returns the ceiling of the base-$2^k$ logarithm of a positive `Natural`. $f(x, k) = \lceil\log_{2^k} x\rceil$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0 or `pow` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingLogBasePowerOf2; use malachite_nz::natural::Natural; assert_eq!(Natural::from(100u32).ceiling_log_base_power_of_2(2), 4); assert_eq!(Natural::from(4294967296u64).ceiling_log_base_power_of_2(8), 4); ``` #### type Output = u64 ### impl<'a> CeilingRoot<u64> for &'a Natural #### fn ceiling_root(self, exp: u64) -> Natural Returns the ceiling of the $n$th root of a `Natural`, taking the `Natural` by reference. $f(x, n) = \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRoot; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).ceiling_root(3), 10); assert_eq!(Natural::from(1000u16).ceiling_root(3), 10); assert_eq!(Natural::from(1001u16).ceiling_root(3), 11); assert_eq!(Natural::from(100000000000u64).ceiling_root(5), 159); ``` #### type Output = Natural ### impl CeilingRoot<u64> for Natural #### fn ceiling_root(self, exp: u64) -> Natural Returns the ceiling of the $n$th root of a `Natural`, taking the `Natural` by value. $f(x, n) = \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRoot; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).ceiling_root(3), 10); assert_eq!(Natural::from(1000u16).ceiling_root(3), 10); assert_eq!(Natural::from(1001u16).ceiling_root(3), 11); assert_eq!(Natural::from(100000000000u64).ceiling_root(5), 159); ``` #### type Output = Natural ### impl CeilingRootAssign<u64> for Natural #### fn ceiling_root_assign(&mut self, exp: u64) Replaces a `Natural` with the ceiling of its $n$th root. $x \gets \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRootAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(999u16); x.ceiling_root_assign(3); assert_eq!(x, 10); let mut x = Natural::from(1000u16); x.ceiling_root_assign(3); assert_eq!(x, 10); let mut x = Natural::from(1001u16); x.ceiling_root_assign(3); assert_eq!(x, 11); let mut x = Natural::from(100000000000u64); x.ceiling_root_assign(5); assert_eq!(x, 159); ``` ### impl<'a> CeilingSqrt for &'a Natural #### fn ceiling_sqrt(self) -> Natural Returns the ceiling of the square root of a `Natural`, taking it by reference. $f(x) = \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrt; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).ceiling_sqrt(), 10); assert_eq!(Natural::from(100u8).ceiling_sqrt(), 10); assert_eq!(Natural::from(101u8).ceiling_sqrt(), 11); assert_eq!(Natural::from(1000000000u32).ceiling_sqrt(), 31623); assert_eq!(Natural::from(10000000000u64).ceiling_sqrt(), 100000); ``` #### type Output = Natural ### impl CeilingSqrt for Natural #### fn ceiling_sqrt(self) -> Natural Returns the ceiling of the square root of a `Natural`, taking it by value. $f(x) = \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrt; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).ceiling_sqrt(), 10); assert_eq!(Natural::from(100u8).ceiling_sqrt(), 10); assert_eq!(Natural::from(101u8).ceiling_sqrt(), 11); assert_eq!(Natural::from(1000000000u32).ceiling_sqrt(), 31623); assert_eq!(Natural::from(10000000000u64).ceiling_sqrt(), 100000); ``` #### type Output = Natural ### impl CeilingSqrtAssign for Natural #### fn ceiling_sqrt_assign(&mut self) Replaces a `Natural` with the ceiling of its square root. $x \gets \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrtAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(99u8); x.ceiling_sqrt_assign(); assert_eq!(x, 10); let mut x = Natural::from(100u8); x.ceiling_sqrt_assign(); assert_eq!(x, 10); let mut x = Natural::from(101u8); x.ceiling_sqrt_assign(); assert_eq!(x, 11); let mut x = Natural::from(1000000000u32); x.ceiling_sqrt_assign(); assert_eq!(x, 31623); let mut x = Natural::from(10000000000u64); x.ceiling_sqrt_assign(); assert_eq!(x, 100000); ``` ### impl<'a, 'b> CheckedLogBase<&'b Natural> for &'a Natural #### fn checked_log_base(self, base: &Natural) -> Option<u64> Returns the base-$b$ logarithm of a positive `Natural`. If the `Natural` is not a power of $b$, then `None` is returned. $$ f(x, b) = \begin{cases} \operatorname{Some}(\log_b x) & \text{if} \quad \log_b x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if `self` is 0 or `base` is less than 2.
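For an exact power of the base, `checked_log_base` and `ceiling_log_base` agree on the exponent; a minimal sketch using only impls documented in this section (the chosen values are arbitrary):

```
use malachite_base::num::arithmetic::traits::{CeilingLogBase, CheckedLogBase, Pow};
use malachite_nz::natural::Natural;

fn main() {
    let base = Natural::from(3u32);
    let x = Natural::from(3u32).pow(5); // 243, an exact power of the base
    assert_eq!((&x).checked_log_base(&base), Some(5));
    // For an exact power, the checked and ceiling logarithms agree.
    assert_eq!((&x).ceiling_log_base(&base), 5);
}
```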
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(80u32).checked_log_base(&Natural::from(3u32)), None); assert_eq!(Natural::from(81u32).checked_log_base(&Natural::from(3u32)), Some(4)); assert_eq!(Natural::from(82u32).checked_log_base(&Natural::from(3u32)), None); assert_eq!(Natural::from(4294967296u64).checked_log_base(&Natural::from(10u32)), None); ``` #### type Output = u64 ### impl<'a> CheckedLogBase2 for &'a Natural #### fn checked_log_base_2(self) -> Option<u64> Returns the base-2 logarithm of a positive `Natural`. If the `Natural` is not a power of 2, then `None` is returned. $$ f(x) = \begin{cases} \operatorname{Some}(\log_2 x) & \text{if} \quad \log_2 x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBase2; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::from(3u32).checked_log_base_2(), None); assert_eq!(Natural::from(4u32).checked_log_base_2(), Some(2)); assert_eq!( Natural::from_str("1267650600228229401496703205376").unwrap().checked_log_base_2(), Some(100) ); ``` #### type Output = u64 ### impl<'a> CheckedLogBasePowerOf2<u64> for &'a Natural #### fn checked_log_base_power_of_2(self, pow: u64) -> Option<u64> Returns the base-$2^k$ logarithm of a positive `Natural`. If the `Natural` is not a power of $2^k$, then `None` is returned. $$ f(x, k) = \begin{cases} \operatorname{Some}(\log_{2^k} x) & \text{if} \quad \log_{2^k} x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0 or `pow` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBasePowerOf2; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::from(100u32).checked_log_base_power_of_2(2), None); assert_eq!(Natural::from(4294967296u64).checked_log_base_power_of_2(8), Some(4)); ``` #### type Output = u64 ### impl<'a> CheckedRoot<u64> for &'a Natural #### fn checked_root(self, exp: u64) -> Option<Natural> Returns the $n$th root of a `Natural`, or `None` if the `Natural` is not a perfect $n$th power. The `Natural` is taken by reference. $$ f(x, n) = \begin{cases} \operatorname{Some}(\sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero.
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(999u16)).checked_root(3).to_debug_string(), "None"); assert_eq!((&Natural::from(1000u16)).checked_root(3).to_debug_string(), "Some(10)"); assert_eq!((&Natural::from(1001u16)).checked_root(3).to_debug_string(), "None"); assert_eq!((&Natural::from(100000000000u64)).checked_root(5).to_debug_string(), "None"); assert_eq!( (&Natural::from(10000000000u64)).checked_root(5).to_debug_string(), "Some(100)" ); ``` #### type Output = Natural ### impl CheckedRoot<u64> for Natural #### fn checked_root(self, exp: u64) -> Option<Natural> Returns the $n$th root of a `Natural`, or `None` if the `Natural` is not a perfect $n$th power. The `Natural` is taken by value. $$ f(x, n) = \begin{cases} \operatorname{Some}(\sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).checked_root(3).to_debug_string(), "None"); assert_eq!(Natural::from(1000u16).checked_root(3).to_debug_string(), "Some(10)"); assert_eq!(Natural::from(1001u16).checked_root(3).to_debug_string(), "None"); assert_eq!(Natural::from(100000000000u64).checked_root(5).to_debug_string(), "None"); assert_eq!(Natural::from(10000000000u64).checked_root(5).to_debug_string(), "Some(100)"); ``` #### type Output = Natural ### impl<'a> CheckedSqrt for &'a Natural #### fn checked_sqrt(self) -> Option<Natural> Returns the square root of a `Natural`, or `None` if it is not a perfect square. The `Natural` is taken by reference. $$ f(x) = \begin{cases} \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedSqrt; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(99u8)).checked_sqrt().to_debug_string(), "None"); assert_eq!((&Natural::from(100u8)).checked_sqrt().to_debug_string(), "Some(10)"); assert_eq!((&Natural::from(101u8)).checked_sqrt().to_debug_string(), "None"); assert_eq!((&Natural::from(1000000000u32)).checked_sqrt().to_debug_string(), "None"); assert_eq!( (&Natural::from(10000000000u64)).checked_sqrt().to_debug_string(), "Some(100000)" ); ``` #### type Output = Natural ### impl CheckedSqrt for Natural #### fn checked_sqrt(self) -> Option<Natural> Returns the square root of a `Natural`, or `None` if it is not a perfect square. The `Natural` is taken by value. $$ f(x) = \begin{cases} \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
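A short sketch relating `checked_sqrt` to `ceiling_sqrt`, using only impls documented in this section (the sample values are arbitrary): for a perfect square the two agree, and for a non-square `checked_sqrt` is `None` while `ceiling_sqrt` still returns the rounded-up root.

```
use malachite_base::num::arithmetic::traits::{CeilingSqrt, CheckedSqrt};
use malachite_nz::natural::Natural;

fn main() {
    let square = Natural::from(100u8);
    let non_square = Natural::from(101u8);
    // Perfect square: both report 10.
    assert_eq!((&square).checked_sqrt(), Some(Natural::from(10u8)));
    assert_eq!((&square).ceiling_sqrt(), 10);
    // Non-square: checked_sqrt is None, ceiling_sqrt rounds up.
    assert_eq!((&non_square).checked_sqrt(), None);
    assert_eq!((&non_square).ceiling_sqrt(), 11);
}
```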
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedSqrt; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).checked_sqrt().to_debug_string(), "None"); assert_eq!(Natural::from(100u8).checked_sqrt().to_debug_string(), "Some(10)"); assert_eq!(Natural::from(101u8).checked_sqrt().to_debug_string(), "None"); assert_eq!(Natural::from(1000000000u32).checked_sqrt().to_debug_string(), "None"); assert_eq!(Natural::from(10000000000u64).checked_sqrt().to_debug_string(), "Some(100000)"); ``` #### type Output = Natural ### impl<'a, 'b> CheckedSub<&'a Natural> for &'b Natural #### fn checked_sub(self, other: &'a Natural) -> Option<NaturalSubtracts a `Natural` by another `Natural`, taking both by reference and returning `None` if the result is negative. $$ f(x, y) = \begin{cases} \operatorname{Some}(x - y) & \text{if} \quad x \geq y, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSub, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).checked_sub(&Natural::from(123u32)).to_debug_string(), "None"); assert_eq!((&Natural::from(123u32)).checked_sub(&Natural::ZERO).to_debug_string(), "Some(123)"); assert_eq!((&Natural::from(456u32)).checked_sub(&Natural::from(123u32)).to_debug_string(), "Some(333)"); assert_eq!( (&(Natural::from(10u32).pow(12) * Natural::from(3u32))) .checked_sub(&Natural::from(10u32).pow(12)).to_debug_string(), "Some(2000000000000)" ); ``` #### type Output = Natural ### impl<'a> CheckedSub<&'a Natural> for Natural #### fn checked_sub(self, other: &'a Natural) -> Option<NaturalSubtracts a `Natural` by another `Natural`, taking the first by value and the second by reference and returning `None` if the result is negative. $$ f(x, y) = \begin{cases} \operatorname{Some}(x - y) & \text{if} \quad x \geq y, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSub, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.checked_sub(&Natural::from(123u32)).to_debug_string(), "None"); assert_eq!( Natural::from(123u32).checked_sub(&Natural::ZERO).to_debug_string(), "Some(123)" ); assert_eq!(Natural::from(456u32).checked_sub(&Natural::from(123u32)).to_debug_string(), "Some(333)"); assert_eq!( (Natural::from(10u32).pow(12) * Natural::from(3u32)) .checked_sub(&Natural::from(10u32).pow(12)).to_debug_string(), "Some(2000000000000)" ); ``` #### type Output = Natural ### impl<'a> CheckedSub<Natural> for &'a Natural #### fn checked_sub(self, other: Natural) -> Option<NaturalSubtracts a `Natural` by another `Natural`, taking the first by reference and the second by value and returning `None` if the result is negative. $$ f(x, y) = \begin{cases} \operatorname{Some}(x - y) & \text{if} \quad x \geq y, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
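A minimal sketch of the documented success condition (it assumes only the `CheckedSub` impls shown here plus ordinary `Natural` comparison): the subtraction yields `Some` exactly when the minuend is at least the subtrahend.

```
use malachite_base::num::arithmetic::traits::CheckedSub;
use malachite_nz::natural::Natural;

fn main() {
    for (a, b) in [(456u32, 123u32), (123, 456), (5, 5)] {
        let x = Natural::from(a);
        let y = Natural::from(b);
        // checked_sub succeeds exactly when x >= y.
        assert_eq!((&x).checked_sub(&y).is_some(), x >= y);
    }
}
```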
##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSub, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).checked_sub(Natural::from(123u32)).to_debug_string(), "None"); assert_eq!((&Natural::from(123u32)).checked_sub(Natural::ZERO).to_debug_string(), "Some(123)"); assert_eq!((&Natural::from(456u32)).checked_sub(Natural::from(123u32)).to_debug_string(), "Some(333)"); assert_eq!( (&(Natural::from(10u32).pow(12) * Natural::from(3u32))) .checked_sub(Natural::from(10u32).pow(12)).to_debug_string(), "Some(2000000000000)" ); ``` #### type Output = Natural ### impl CheckedSub<Natural> for Natural #### fn checked_sub(self, other: Natural) -> Option<NaturalSubtracts a `Natural` by another `Natural`, taking both by value and returning `None` if the result is negative. $$ f(x, y) = \begin{cases} \operatorname{Some}(x - y) & \text{if} \quad x \geq y, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSub, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.checked_sub(Natural::from(123u32)).to_debug_string(), "None"); assert_eq!( Natural::from(123u32).checked_sub(Natural::ZERO).to_debug_string(), "Some(123)" ); assert_eq!( Natural::from(456u32).checked_sub(Natural::from(123u32)).to_debug_string(), "Some(333)" ); assert_eq!( (Natural::from(10u32).pow(12) * Natural::from(3u32)) .checked_sub(Natural::from(10u32).pow(12)).to_debug_string(), "Some(2000000000000)" ); ``` #### type Output = Natural ### impl<'a> CheckedSubMul<&'a Natural, Natural> for Natural #### fn checked_sub_mul(self, y: &'a Natural, z: Natural) -> Option<NaturalSubtracts a `Natural` by the product of two other `Natural`s, taking the first and third by value and the second by reference and returning `None` if the result is negative. $$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).checked_sub_mul(&Natural::from(3u32), Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( Natural::from(10u32).checked_sub_mul(&Natural::from(3u32), Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( Natural::from(10u32).pow(12) .checked_sub_mul(&Natural::from(0x10000u32), Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl<'a, 'b, 'c> CheckedSubMul<&'a Natural, &'b Natural> for &'c Natural #### fn checked_sub_mul(self, y: &'a Natural, z: &'b Natural) -> Option<NaturalSubtracts a `Natural` by the product of two other `Natural`s, taking all three by reference and returning `None` if the result is negative. 
$$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(20u32)).checked_sub_mul(&Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( (&Natural::from(10u32)).checked_sub_mul(&Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( (&Natural::from(10u32).pow(12)) .checked_sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl<'a, 'b> CheckedSubMul<&'a Natural, &'b Natural> for Natural #### fn checked_sub_mul(self, y: &'a Natural, z: &'b Natural) -> Option<NaturalSubtracts a `Natural` by the product of two other `Natural`s, taking the first by value and the second and third by reference and returning `None` if the result is negative. $$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).checked_sub_mul(&Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( Natural::from(10u32).checked_sub_mul(&Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( Natural::from(10u32).pow(12) .checked_sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl<'a> CheckedSubMul<Natural, &'a Natural> for Natural #### fn checked_sub_mul(self, y: Natural, z: &'a Natural) -> Option<NaturalSubtracts a `Natural` by the product of two other `Natural`s, taking the first two by value and the third by reference and returning `None` if the result is negative. $$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. 
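A small sketch showing how `checked_sub_mul` relates to `checked_sub` (an illustrative example, not from the crate's docs; it assumes the standard `Mul` impl between `Natural` references): subtracting the product in one step matches forming `y * z` first and then calling `checked_sub`.

```
use malachite_base::num::arithmetic::traits::{CheckedSub, CheckedSubMul};
use malachite_nz::natural::Natural;

fn main() {
    let x = Natural::from(20u32);
    let y = Natural::from(3u32);
    let z = Natural::from(4u32);
    // checked_sub_mul(x, y, z) gives the same result as checked_sub applied to y * z.
    assert_eq!((&x).checked_sub_mul(&y, &z), (&x).checked_sub(&y * &z));
}
```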
##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).checked_sub_mul(Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( Natural::from(10u32).checked_sub_mul(Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( Natural::from(10u32).pow(12) .checked_sub_mul(Natural::from(0x10000u32), &Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl CheckedSubMul<Natural, Natural> for Natural #### fn checked_sub_mul(self, y: Natural, z: Natural) -> Option<Natural> Subtracts a `Natural` by the product of two other `Natural`s, taking all three by value and returning `None` if the result is negative. $$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).checked_sub_mul(Natural::from(3u32), Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( Natural::from(10u32).checked_sub_mul(Natural::from(3u32), Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( Natural::from(10u32).pow(12) .checked_sub_mul(Natural::from(0x10000u32), Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl Clone for Natural #### fn clone(&self) -> Natural Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl<'a> ConvertibleFrom<&'a Integer> for Natural #### fn convertible_from(value: &'a Integer) -> bool Determines whether an `Integer` can be converted to a `Natural` (when the `Integer` is non-negative). Takes the `Integer` by reference. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::convertible_from(&Integer::from(123)), true); assert_eq!(Natural::convertible_from(&Integer::from(-123)), false); assert_eq!(Natural::convertible_from(&Integer::from(10u32).pow(12)), true); assert_eq!(Natural::convertible_from(&-Integer::from(10u32).pow(12)), false); ``` ### impl<'a> ConvertibleFrom<&'a Natural> for f32 #### fn convertible_from(value: &'a Natural) -> bool Determines whether a `Natural` can be exactly converted to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for f64 #### fn convertible_from(value: &'a Natural) -> bool Determines whether a `Natural` can be exactly converted to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here.
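Since the float conversions above only link to external examples, here is a hedged sketch of the "exactly convertible" semantics (the specific values and the `f32::convertible_from`/`f64::convertible_from` call pattern are my own illustration, not copied from the crate): small integers are exact in both widths, while a 64-bit value with all bits set exceeds the precision of an `f64` mantissa.

```
use malachite_base::num::conversion::traits::ConvertibleFrom;
use malachite_nz::natural::Natural;

fn main() {
    // Small values are exactly representable in both float widths.
    assert_eq!(f32::convertible_from(&Natural::from(3u32)), true);
    assert_eq!(f64::convertible_from(&Natural::from(3u32)), true);
    // 2^64 - 1 needs 64 significant bits, more than an f64 mantissa holds exactly,
    // so the conversion is not exact.
    assert_eq!(f64::convertible_from(&Natural::from(u64::MAX)), false);
}
```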
### impl<'a> ConvertibleFrom<&'a Natural> for i128 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a signed primitive integer type that’s larger than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for i16 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a signed primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for i32 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a signed primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for i64 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a `SignedLimb` (the signed type whose width is the same as a limb’s). ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for i8 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a signed primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for isize #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to an `isize`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u128 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a primitive unsigned integer type that’s larger than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u16 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a primitive unsigned integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u32 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a primitive unsigned integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u64 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u8 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a primitive unsigned integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for usize #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a `usize`. 
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for Natural #### fn convertible_from(x: &Rational) -> bool Determines whether a `Rational` can be converted to a `Natural` (when the `Rational` is non-negative and an integer), taking the `Rational` by reference. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Natural::convertible_from(&Rational::from(123)), true); assert_eq!(Natural::convertible_from(&Rational::from(-123)), false); assert_eq!(Natural::convertible_from(&Rational::from_signeds(22, 7)), false); ``` ### impl ConvertibleFrom<Integer> for Natural #### fn convertible_from(value: Integer) -> bool Determines whether an `Integer` can be converted to a `Natural` (when the `Integer` is non-negative). Takes the `Integer` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::convertible_from(Integer::from(123)), true); assert_eq!(Natural::convertible_from(Integer::from(-123)), false); assert_eq!(Natural::convertible_from(Integer::from(10u32).pow(12)), true); assert_eq!(Natural::convertible_from(-Integer::from(10u32).pow(12)), false); ``` ### impl ConvertibleFrom<f32> for Natural #### fn convertible_from(value: f32) -> bool Determines whether a floating-point value can be exactly converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<f64> for Natural #### fn convertible_from(value: f64) -> bool Determines whether a floating-point value can be exactly converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i128> for Natural #### fn convertible_from(i: i128) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i16> for Natural #### fn convertible_from(i: i16) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i32> for Natural #### fn convertible_from(i: i32) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i64> for Natural #### fn convertible_from(i: i64) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i8> for Natural #### fn convertible_from(i: i8) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<isize> for Natural #### fn convertible_from(i: isize) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. 
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a, 'b> CoprimeWith<&'b Natural> for &'a Natural #### fn coprime_with(self, other: &'b Natural) -> bool Returns whether two `Natural`s are coprime; that is, whether they have no common factor other than 1. Both `Natural`s are taken by reference. Every `Natural` is coprime with 1. No `Natural` is coprime with 0, except 1. $f(x, y) = (\gcd(x, y) = 1)$. $f(x, y) = ((k,m,n \in \N \land x=km \land y=kn) \implies k=1)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CoprimeWith; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).coprime_with(Natural::from(5u32)), true); assert_eq!((&Natural::from(12u32)).coprime_with(Natural::from(90u32)), false); ``` ### impl<'a> CoprimeWith<&'a Natural> for Natural #### fn coprime_with(self, other: &'a Natural) -> bool Returns whether two `Natural`s are coprime; that is, whether they have no common factor other than 1. The first `Natural` is taken by value and the second by reference. Every `Natural` is coprime with 1. No `Natural` is coprime with 0, except 1. $f(x, y) = (\gcd(x, y) = 1)$. $f(x, y) = ((k,m,n \in \N \land x=km \land y=kn) \implies k=1)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CoprimeWith; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).coprime_with(&Natural::from(5u32)), true); assert_eq!(Natural::from(12u32).coprime_with(&Natural::from(90u32)), false); ``` ### impl<'a> CoprimeWith<Natural> for &'a Natural #### fn coprime_with(self, other: Natural) -> bool Returns whether two `Natural`s are coprime; that is, whether they have no common factor other than 1. The first `Natural` is taken by reference and the second by value. Every `Natural` is coprime with 1. No `Natural` is coprime with 0, except 1. $f(x, y) = (\gcd(x, y) = 1)$. $f(x, y) = ((k,m,n \in \N \land x=km \land y=kn) \implies k=1)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CoprimeWith; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).coprime_with(Natural::from(5u32)), true); assert_eq!((&Natural::from(12u32)).coprime_with(Natural::from(90u32)), false); ``` ### impl CoprimeWith<Natural> for Natural #### fn coprime_with(self, other: Natural) -> bool Returns whether two `Natural`s are coprime; that is, whether they have no common factor other than 1. Both `Natural`s are taken by value. Every `Natural` is coprime with 1. No `Natural` is coprime with 0, except 1. $f(x, y) = (\gcd(x, y) = 1)$. $f(x, y) = ((k,m,n \in \N \land x=km \land y=kn) \implies k=1)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
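A sketch connecting `coprime_with` to the gcd (an illustrative example; it assumes the `Gcd` trait, which malachite also provides but which is not shown in this excerpt): two `Natural`s are coprime exactly when their gcd is 1, including the edge cases with 0 and 1 noted above.

```
use malachite_base::num::arithmetic::traits::{CoprimeWith, Gcd};
use malachite_nz::natural::Natural;

fn main() {
    for (a, b) in [(3u32, 5u32), (12, 90), (1, 0), (0, 7)] {
        let x = Natural::from(a);
        let y = Natural::from(b);
        // coprime_with(x, y) is equivalent to gcd(x, y) == 1.
        assert_eq!((&x).coprime_with(&y), (&x).gcd(&y) == Natural::from(1u32));
    }
}
```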
##### Examples ``` use malachite_base::num::arithmetic::traits::CoprimeWith; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).coprime_with(Natural::from(5u32)), true); assert_eq!(Natural::from(12u32).coprime_with(Natural::from(90u32)), false); ``` ### impl CountOnes for &Natural #### fn count_ones(self) -> u64 Counts the number of ones in the binary expansion of a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::CountOnes; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.count_ones(), 0); // 105 = 1101001b assert_eq!(Natural::from(105u32).count_ones(), 4); // 10^12 = 1110100011010100101001010001000000000000b assert_eq!(Natural::from(10u32).pow(12).count_ones(), 13); ``` ### impl Debug for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Natural` to a `String`. This is the same as the `Display::fmt` implementation. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_debug_string(), "0"); assert_eq!(Natural::from(123u32).to_debug_string(), "123"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_debug_string(), "1000000000000" ); assert_eq!(format!("{:05?}", Natural::from(123u32)), "00123"); ``` ### impl Default for Natural #### fn default() -> Natural The default value of a `Natural`, 0. ### impl Digits<Natural> for Natural #### fn to_digits_asc(&self, base: &Natural) -> Vec<Natural, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::num::basic::traits::{Two, Zero}; use malachite_base::num::conversion::traits::Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.to_digits_asc(&Natural::from(6u32)).to_debug_string(), "[]"); assert_eq!(Natural::TWO.to_digits_asc(&Natural::from(6u32)).to_debug_string(), "[2]"); assert_eq!( Natural::from(123456u32).to_digits_asc(&Natural::from(3u32)).to_debug_string(), "[0, 1, 1, 0, 0, 1, 1, 2, 0, 0, 2]" ); ``` #### fn to_digits_desc(&self, base: &Natural) -> Vec<Natural, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. 
$$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::num::basic::traits::{Two, Zero}; use malachite_base::num::conversion::traits::Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.to_digits_desc(&Natural::from(6u32)).to_debug_string(), "[]"); assert_eq!(Natural::TWO.to_digits_desc(&Natural::from(6u32)).to_debug_string(), "[2]"); assert_eq!( Natural::from(123456u32).to_digits_desc(&Natural::from(3u32)).to_debug_string(), "[2, 0, 0, 2, 1, 1, 0, 0, 1, 1, 0]" ); ``` #### fn from_digits_asc<I>(base: &Natural, digits: I) -> Option<Natural>where I: Iterator<Item = Natural>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n, m) = O(nm (\log (nm))^2 \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `digits.count()`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::num::conversion::traits::Digits; use malachite_base::strings::ToDebugString; use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from_digits_asc( &Natural::from(64u32), vec_from_str::<Natural>("[0, 0, 0]").unwrap().into_iter() ).to_debug_string(), "Some(0)" ); assert_eq!( Natural::from_digits_asc( &Natural::from(3u32), vec_from_str::<Natural>("[0, 1, 1, 0, 0, 1, 1, 2, 0, 0, 2]").unwrap().into_iter() ).to_debug_string(), "Some(123456)" ); assert_eq!( Natural::from_digits_asc( &Natural::from(8u32), vec_from_str::<Natural>("[3, 7, 1]").unwrap().into_iter() ).to_debug_string(), "Some(123)" ); assert_eq!( Natural::from_digits_asc( &Natural::from(8u32), vec_from_str::<Natural>("[1, 10, 3]").unwrap().into_iter() ).to_debug_string(), "None" ); ``` #### fn from_digits_desc<I>(base: &Natural, digits: I) -> Option<Natural>where I: Iterator<Item = Natural>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n, m) = O(nm (\log (nm))^2 \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `digits.count()`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `base` is less than 2. 
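A round-trip sketch built from the `Digits<Natural>` methods documented above (the base and value are arbitrary): converting to ascending digits and back recovers the original number.

```
use malachite_base::num::conversion::traits::Digits;
use malachite_nz::natural::Natural;

fn main() {
    let n = Natural::from(123456u32);
    let base = Natural::from(3u32);
    // to_digits_asc followed by from_digits_asc is the identity.
    let digits = n.to_digits_asc(&base);
    assert_eq!(Natural::from_digits_asc(&base, digits.into_iter()), Some(n));
}
```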
##### Examples ``` use malachite_base::num::conversion::traits::Digits; use malachite_base::strings::ToDebugString; use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from_digits_desc( &Natural::from(64u32), vec_from_str::<Natural>("[0, 0, 0]").unwrap().into_iter() ).to_debug_string(), "Some(0)" ); assert_eq!( Natural::from_digits_desc( &Natural::from(3u32), vec_from_str::<Natural>("[2, 0, 0, 2, 1, 1, 0, 0, 1, 1, 0]").unwrap().into_iter() ).to_debug_string(), "Some(123456)" ); assert_eq!( Natural::from_digits_desc( &Natural::from(8u32), vec_from_str::<Natural>("[1, 7, 3]").unwrap().into_iter() ).to_debug_string(), "Some(123)" ); assert_eq!( Natural::from_digits_desc( &Natural::from(8u32), vec_from_str::<Natural>("[3, 10, 1]").unwrap().into_iter() ).to_debug_string(), "None" ); ``` ### impl Digits<u128> for Natural #### fn to_digits_asc(&self, base: &u128) -> Vec<u128, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u128) -> Vec<u128, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &u128, digits: I) -> Option<Natural>where I: Iterator<Item = u128>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u128, digits: I) -> Option<Natural>where I: Iterator<Item = u128>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<u16> for Natural #### fn to_digits_asc(&self, base: &u16) -> Vec<u16, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). 
If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u16) -> Vec<u16, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &u16, digits: I) -> Option<Natural>where I: Iterator<Item = u16>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u16, digits: I) -> Option<Natural>where I: Iterator<Item = u16>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<u32> for Natural #### fn to_digits_asc(&self, base: &u32) -> Vec<u32, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u32) -> Vec<u32, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. 
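Because the primitive-base impls above only link to external examples, here is a brief sketch of the `u32`-base form (my own illustration, mirroring the `Natural`-base examples earlier): with base 10 the digits come back as plain `u32`s, and the descending round trip restores the value.

```
use malachite_base::num::conversion::traits::Digits;
use malachite_nz::natural::Natural;

fn main() {
    let n = Natural::from(123456u32);
    // Descending (most- to least-significant) base-10 digits as u32s.
    let digits: Vec<u32> = n.to_digits_desc(&10u32);
    assert_eq!(digits, [1, 2, 3, 4, 5, 6]);
    assert_eq!(Natural::from_digits_desc(&10u32, digits.into_iter()), Some(n));
}
```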
##### Examples See here. #### fn from_digits_asc<I>(base: &u32, digits: I) -> Option<Natural>where I: Iterator<Item = u32>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u32, digits: I) -> Option<Natural>where I: Iterator<Item = u32>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<u64> for Natural #### fn to_digits_asc(&self, base: &u64) -> Vec<u64, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u64) -> Vec<u64, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &u64, digits: I) -> Option<Natural>where I: Iterator<Item = u64>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u64, digits: I) -> Option<Natural>where I: Iterator<Item = u64>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. 
$$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<u8> for Natural #### fn to_digits_asc(&self, base: &u8) -> Vec<u8, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u8) -> Vec<u8, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &u8, digits: I) -> Option<Natural>where I: Iterator<Item = u8>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u8, digits: I) -> Option<Natural>where I: Iterator<Item = u8>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<usize> for Natural #### fn to_digits_asc(&self, base: &usize) -> Vec<usize, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &usize) -> Vec<usize, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). 
If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &usize, digits: I) -> Option<Natural>where I: Iterator<Item = usize>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &usize, digits: I) -> Option<Natural>where I: Iterator<Item = usize>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Display for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Natural` to a `String`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_string(), "0"); assert_eq!(Natural::from(123u32).to_string(), "123"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_string(), "1000000000000" ); assert_eq!(format!("{:05}", Natural::from(123u32)), "00123"); ``` ### impl<'a, 'b> Div<&'b Natural> for &'a Natural #### fn div(self, other: &'b Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by reference. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(&Natural::from(23u32) / &Natural::from(10u32), 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( &Natural::from_str("1000000000000000000000000").unwrap() / &Natural::from_str("1234567890987").unwrap(), 810000006723u64 ); ``` #### type Output = Natural The resulting type after applying the `/` operator.### impl<'a> Div<&'a Natural> for Natural #### fn div(self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by value and the second by reference. 
The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32) / &Natural::from(10u32), 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() / &Natural::from_str("1234567890987").unwrap(), 810000006723u64 ); ``` #### type Output = Natural The resulting type after applying the `/` operator.### impl<'a> Div<Natural> for &'a Natural #### fn div(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by reference and the second by value. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(&Natural::from(23u32) / Natural::from(10u32), 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( &Natural::from_str("1000000000000000000000000").unwrap() / Natural::from_str("1234567890987").unwrap(), 810000006723u64 ); ``` #### type Output = Natural The resulting type after applying the `/` operator.### impl Div<Natural> for Natural #### fn div(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by value. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32) / Natural::from(10u32), 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() / Natural::from_str("1234567890987").unwrap(), 810000006723u64 ); ``` #### type Output = Natural The resulting type after applying the `/` operator.### impl<'a> DivAssign<&'a Natural> for Natural #### fn div_assign(&mut self, other: &'a Natural) Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); x /= &Natural::from(10u32); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x /= &Natural::from_str("1234567890987").unwrap(); assert_eq!(x, 810000006723u64); ``` ### impl DivAssign<Natural> for Natural #### fn div_assign(&mut self, other: Natural) Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); x /= Natural::from(10u32); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x /= Natural::from_str("1234567890987").unwrap(); assert_eq!(x, 810000006723u64); ``` ### impl<'a> DivAssignMod<&'a Natural> for Natural #### fn div_assign_mod(&mut self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and returning the remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero.
##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); assert_eq!(x.div_assign_mod(Natural::from(10u32)), 3); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!(x.div_assign_mod(Natural::from_str("1234567890987").unwrap()), 530068894399u64); assert_eq!(x, 810000006723u64); ``` #### type ModOutput = Natural ### impl<'a> DivAssignRem<&'a Natural> for Natural #### fn div_assign_rem(&mut self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and returning the remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. $$ For `Natural`s, `div_assign_rem` is equivalent to `div_assign_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignRem; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); assert_eq!(x.div_assign_rem(&Natural::from(10u32)), 3); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!( x.div_assign_rem(&Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); assert_eq!(x, 810000006723u64); ``` #### type RemOutput = Natural ### impl DivAssignRem<Natural> for Natural #### fn div_assign_rem(&mut self, other: Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value and returning the remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. $$ For `Natural`s, `div_assign_rem` is equivalent to `div_assign_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignRem; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); assert_eq!(x.div_assign_rem(Natural::from(10u32)), 3); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!(x.div_assign_rem(Natural::from_str("1234567890987").unwrap()), 530068894399u64); assert_eq!(x, 810000006723u64); ``` #### type RemOutput = Natural ### impl<'a, 'b> DivExact<&'b Natural> for &'a Natural #### fn div_exact(self, other: &'b Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by reference. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. 
$$ If you are unsure whether the division will be exact, use `&self / &other` instead. If you’re unsure and you want to know, use `(&self).div_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `(&self).div_round(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 assert_eq!((&Natural::from(56088u32)).div_exact(&Natural::from(456u32)), 123); // 123456789000 * 987654321000 = 121932631112635269000000 assert_eq!( (&Natural::from_str("121932631112635269000000").unwrap()) .div_exact(&Natural::from_str("987654321000").unwrap()), 123456789000u64 ); ``` #### type Output = Natural ### impl<'a> DivExact<&'a Natural> for Natural #### fn div_exact(self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by value and the second by reference. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self / &other` instead. If you’re unsure and you want to know, use `self.div_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 assert_eq!(Natural::from(56088u32).div_exact(&Natural::from(456u32)), 123); // 123456789000 * 987654321000 = 121932631112635269000000 assert_eq!( Natural::from_str("121932631112635269000000").unwrap() .div_exact(&Natural::from_str("987654321000").unwrap()), 123456789000u64 ); ``` #### type Output = Natural ### impl<'a> DivExact<Natural> for &'a Natural #### fn div_exact(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by reference and the second by value. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `&self / other` instead. If you’re unsure and you want to know, use `self.div_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `(&self).div_round(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. 
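The advice above suggests `div_mod` (or `div_round` with `RoundingMode::Exact`) when exactness is not known in advance. A minimal sketch of that check-first pattern, combining only the `div_mod` and `div_exact` implementations documented on this page (a sketch, not one of the crate's own examples):

```rust
use malachite_base::num::arithmetic::traits::{DivExact, DivMod};
use malachite_nz::natural::Natural;

let x = Natural::from(56088u32);
let y = Natural::from(456u32);

// When exactness is not known in advance, compute quotient and remainder first.
let (q, r) = (&x).div_mod(&y);
if r == 0u32 {
    // Safe: the remainder check just proved that y divides x exactly.
    assert_eq!((&x).div_exact(&y), q);
}
```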
##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 assert_eq!((&Natural::from(56088u32)).div_exact(Natural::from(456u32)), 123); // 123456789000 * 987654321000 = 121932631112635269000000 assert_eq!( (&Natural::from_str("121932631112635269000000").unwrap()) .div_exact(Natural::from_str("987654321000").unwrap()), 123456789000u64 ); ``` #### type Output = Natural ### impl DivExact<Natural> for Natural #### fn div_exact(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by value. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self / other` instead. If you’re unsure and you want to know, use `self.div_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 assert_eq!(Natural::from(56088u32).div_exact(Natural::from(456u32)), 123); // 123456789000 * 987654321000 = 121932631112635269000000 assert_eq!( Natural::from_str("121932631112635269000000").unwrap() .div_exact(Natural::from_str("987654321000").unwrap()), 123456789000u64 ); ``` #### type Output = Natural ### impl<'a> DivExactAssign<&'a Natural> for Natural #### fn div_exact_assign(&mut self, other: &'a Natural) Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ x \gets \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self /= &other` instead. If you’re unsure and you want to know, use `self.div_assign_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round_assign(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExactAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 let mut x = Natural::from(56088u32); x.div_exact_assign(&Natural::from(456u32)); assert_eq!(x, 123); // 123456789000 * 987654321000 = 121932631112635269000000 let mut x = Natural::from_str("121932631112635269000000").unwrap(); x.div_exact_assign(&Natural::from_str("987654321000").unwrap()); assert_eq!(x, 123456789000u64); ``` ### impl DivExactAssign<Natural> for Natural #### fn div_exact_assign(&mut self, other: Natural) Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value. The first `Natural` must be exactly divisible by the second. 
If it isn’t, this function may panic or return a meaningless result. $$ x \gets \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self /= other` instead. If you’re unsure and you want to know, use `self.div_assign_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round_assign(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExactAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 let mut x = Natural::from(56088u32); x.div_exact_assign(Natural::from(456u32)); assert_eq!(x, 123); // 123456789000 * 987654321000 = 121932631112635269000000 let mut x = Natural::from_str("121932631112635269000000").unwrap(); x.div_exact_assign(Natural::from_str("987654321000").unwrap()); assert_eq!(x, 123456789000u64); ``` ### impl<'a, 'b> DivMod<&'b Natural> for &'a Natural #### fn div_mod(self, other: &'b Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by reference and returning the quotient and remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( (&Natural::from(23u32)).div_mod(&Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .div_mod(&Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a> DivMod<&'a Natural> for Natural #### fn div_mod(self, other: &'a Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
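The invariant stated above, $x = qy + r$ with $0 \leq r < y$, can be checked directly. The following is a supplementary sketch, assuming the usual by-reference `*` and `+` operator implementations for `Natural`, which are not shown in this excerpt:

```rust
use malachite_base::num::arithmetic::traits::DivMod;
use malachite_nz::natural::Natural;

let x = Natural::from(1000u32);
let y = Natural::from(7u32);

let (q, r) = (&x).div_mod(&y);
// The defining identity: x = q * y + r, with the remainder strictly below y.
assert_eq!(&q * &y + &r, x);
assert!(r < y);
```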
##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( Natural::from(23u32).div_mod(&Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .div_mod(&Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a> DivMod<Natural> for &'a Natural #### fn div_mod(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( (&Natural::from(23u32)).div_mod(Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .div_mod(Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl DivMod<Natural> for Natural #### fn div_mod(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by value and returning the quotient and remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32).div_mod(Natural::from(10u32)).to_debug_string(), "(2, 3)"); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .div_mod(Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a, 'b> DivRem<&'b Natural> for &'a Natural #### fn div_rem(self, other: &'b Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by reference and returning the quotient and remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. 
$$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ For `Natural`s, `div_rem` is equivalent to `div_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( (&Natural::from(23u32)).div_rem(&Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .div_rem(&Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type RemOutput = Natural ### impl<'a> DivRem<&'a Natural> for Natural #### fn div_rem(self, other: &'a Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ For `Natural`s, `div_rem` is equivalent to `div_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( Natural::from(23u32).div_rem(&Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .div_rem(&Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type RemOutput = Natural ### impl<'a> DivRem<Natural> for &'a Natural #### fn div_rem(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ For `Natural`s, `div_rem` is equivalent to `div_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
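Since a `Natural` quotient is never negative, rounding towards zero and rounding towards negative infinity coincide, which is why `div_rem` is stated to be equivalent to `div_mod`. A short supplementary sketch (not a crate example):

```rust
use malachite_base::num::arithmetic::traits::{DivMod, DivRem};
use malachite_nz::natural::Natural;

let x = Natural::from(23u32);
let y = Natural::from(10u32);
// With no negative values, truncating and flooring division agree.
assert_eq!((&x).div_rem(&y), (&x).div_mod(&y));
```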
##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( (&Natural::from(23u32)).div_rem(Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .div_rem(Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type RemOutput = Natural ### impl DivRem<Natural> for Natural #### fn div_rem(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by value and returning the quotient and remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ For `Natural`s, `div_rem` is equivalent to `div_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32).div_rem(Natural::from(10u32)).to_debug_string(), "(2, 3)"); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .div_rem(Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type RemOutput = Natural ### impl<'a, 'b> DivRound<&'b Natural> for &'a Natural #### fn div_round(self, other: &'b Natural, rm: RoundingMode) -> (Natural, Ordering) Divides a `Natural` by another `Natural`, taking both by reference and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(4u32), RoundingMode::Down), (Natural::from(2u32), Ordering::Less) ); assert_eq!( (&Natural::from(10u32).pow(12)).div_round(&Natural::from(3u32), RoundingMode::Floor), (Natural::from(333333333333u64), Ordering::Less) ); assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(4u32), RoundingMode::Up), (Natural::from(3u32), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32).pow(12)).div_round(&Natural::from(3u32), RoundingMode::Ceiling), (Natural::from(333333333334u64), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(5u32), RoundingMode::Exact), (Natural::from(2u32), Ordering::Equal) ); assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(3u32), RoundingMode::Nearest), (Natural::from(3u32), Ordering::Less) ); assert_eq!( (&Natural::from(20u32)).div_round(&Natural::from(3u32), RoundingMode::Nearest), (Natural::from(7u32), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(4u32), RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less) ); assert_eq!( (&Natural::from(14u32)).div_round(&Natural::from(4u32), RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater) ); ``` #### type Output = Natural ### impl<'a> DivRound<&'a Natural> for Natural #### fn div_round(self, other: &'a Natural, rm: RoundingMode) -> (Natural, Ordering) Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!( Natural::from(10u32).div_round(&Natural::from(4u32), RoundingMode::Down), (Natural::from(2u32), Ordering::Less) ); assert_eq!( Natural::from(10u32).pow(12).div_round(&Natural::from(3u32), RoundingMode::Floor), (Natural::from(333333333333u64), Ordering::Less) ); assert_eq!( Natural::from(10u32).div_round(&Natural::from(4u32), RoundingMode::Up), (Natural::from(3u32), Ordering::Greater) ); assert_eq!( Natural::from(10u32).pow(12).div_round(&Natural::from(3u32), RoundingMode::Ceiling), (Natural::from(333333333334u64), Ordering::Greater) ); assert_eq!( Natural::from(10u32).div_round(&Natural::from(5u32), RoundingMode::Exact), (Natural::from(2u32), Ordering::Equal) ); assert_eq!( Natural::from(10u32).div_round(&Natural::from(3u32), RoundingMode::Nearest), (Natural::from(3u32), Ordering::Less) ); assert_eq!( Natural::from(20u32).div_round(&Natural::from(3u32), RoundingMode::Nearest), (Natural::from(7u32), Ordering::Greater) ); assert_eq!( Natural::from(10u32).div_round(&Natural::from(4u32), RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less) ); assert_eq!( Natural::from(14u32).div_round(&Natural::from(4u32), RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater) ); ``` #### type Output = Natural ### impl<'a> DivRound<Natural> for &'a Natural #### fn div_round(self, other: Natural, rm: RoundingMode) -> (Natural, Ordering) Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(4u32), RoundingMode::Down), (Natural::from(2u32), Ordering::Less) ); assert_eq!( (&Natural::from(10u32).pow(12)).div_round(Natural::from(3u32), RoundingMode::Floor), (Natural::from(333333333333u64), Ordering::Less) ); assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(4u32), RoundingMode::Up), (Natural::from(3u32), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32).pow(12)).div_round(Natural::from(3u32), RoundingMode::Ceiling), (Natural::from(333333333334u64), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(5u32), RoundingMode::Exact), (Natural::from(2u32), Ordering::Equal) ); assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(3u32), RoundingMode::Nearest), (Natural::from(3u32), Ordering::Less) ); assert_eq!( (&Natural::from(20u32)).div_round(Natural::from(3u32), RoundingMode::Nearest), (Natural::from(7u32), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(4u32), RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less) ); assert_eq!( (&Natural::from(14u32)).div_round(Natural::from(4u32), RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater) ); ``` #### type Output = Natural ### impl DivRound<Natural> for Natural #### fn div_round(self, other: Natural, rm: RoundingMode) -> (Natural, Ordering) Divides a `Natural` by another `Natural`, taking both by value and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!( Natural::from(10u32).div_round(Natural::from(4u32), RoundingMode::Down), (Natural::from(2u32), Ordering::Less) ); assert_eq!( Natural::from(10u32).pow(12).div_round(Natural::from(3u32), RoundingMode::Floor), (Natural::from(333333333333u64), Ordering::Less) ); assert_eq!( Natural::from(10u32).div_round(Natural::from(4u32), RoundingMode::Up), (Natural::from(3u32), Ordering::Greater) ); assert_eq!( Natural::from(10u32).pow(12).div_round(Natural::from(3u32), RoundingMode::Ceiling), (Natural::from(333333333334u64), Ordering::Greater) ); assert_eq!( Natural::from(10u32).div_round(Natural::from(5u32), RoundingMode::Exact), (Natural::from(2u32), Ordering::Equal) ); assert_eq!( Natural::from(10u32).div_round(Natural::from(3u32), RoundingMode::Nearest), (Natural::from(3u32), Ordering::Less) ); assert_eq!( Natural::from(20u32).div_round(Natural::from(3u32), RoundingMode::Nearest), (Natural::from(7u32), Ordering::Greater) ); assert_eq!( Natural::from(10u32).div_round(Natural::from(4u32), RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less) ); assert_eq!( Natural::from(14u32).div_round(Natural::from(4u32), RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater) ); ``` #### type Output = Natural ### impl<'a> DivRoundAssign<&'a Natural> for Natural #### fn div_round_assign(&mut self, other: &'a Natural, rm: RoundingMode) -> Ordering Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and rounding according to a specified rounding mode. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `DivRound` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRoundAssign, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(&Natural::from(4u32), RoundingMode::Down), Ordering::Less); assert_eq!(n, 2); let mut n = Natural::from(10u32).pow(12); assert_eq!(n.div_round_assign(&Natural::from(3u32), RoundingMode::Floor), Ordering::Less); assert_eq!(n, 333333333333u64); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(&Natural::from(4u32), RoundingMode::Up), Ordering::Greater); assert_eq!(n, 3); let mut n = Natural::from(10u32).pow(12); assert_eq!( n.div_round_assign(&Natural::from(3u32), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 333333333334u64); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(&Natural::from(5u32), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, 2); let mut n = Natural::from(10u32); assert_eq!( n.div_round_assign(&Natural::from(3u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(n, 3); let mut n = Natural::from(20u32); assert_eq!( n.div_round_assign(&Natural::from(3u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 7); let mut n = Natural::from(10u32); assert_eq!( n.div_round_assign(&Natural::from(4u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(n, 2); let mut n = Natural::from(14u32); assert_eq!( n.div_round_assign(&Natural::from(4u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 4); ``` ### impl DivRoundAssign<Natural> for Natural #### fn div_round_assign(&mut self, other: Natural, rm: RoundingMode) -> Ordering Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value and rounding according to a specified rounding mode. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `DivRound` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRoundAssign, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(4u32), RoundingMode::Down), Ordering::Less); assert_eq!(n, 2); let mut n = Natural::from(10u32).pow(12); assert_eq!(n.div_round_assign(Natural::from(3u32), RoundingMode::Floor), Ordering::Less); assert_eq!(n, 333333333333u64); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(4u32), RoundingMode::Up), Ordering::Greater); assert_eq!(n, 3); let mut n = Natural::from(10u32).pow(12); assert_eq!( n.div_round_assign(Natural::from(3u32), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 333333333334u64); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(5u32), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, 2); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(3u32), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 3); let mut n = Natural::from(20u32); assert_eq!( n.div_round_assign(Natural::from(3u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 7); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(4u32), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 2); let mut n = Natural::from(14u32); assert_eq!( n.div_round_assign(Natural::from(4u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 4); ``` ### impl<'a, 'b> DivisibleBy<&'b Natural> for &'a Natural #### fn divisible_by(self, other: &'b Natural) -> bool Returns whether a `Natural` is divisible by another `Natural`; in other words, whether the first is a multiple of the second. Both `Natural`s are taken by reference. This means that zero is divisible by any `Natural`, including zero; but a nonzero `Natural` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!((&Natural::ZERO).divisible_by(&Natural::ZERO), true); assert_eq!((&Natural::from(100u32)).divisible_by(&Natural::from(3u32)), false); assert_eq!((&Natural::from(102u32)).divisible_by(&Natural::from(3u32)), true); assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .divisible_by(&Natural::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleBy<&'a Natural> for Natural #### fn divisible_by(self, other: &'a Natural) -> bool Returns whether a `Natural` is divisible by another `Natural`; in other words, whether the first is a multiple of the second. The first `Natural` is taken by value and the second by reference. This means that zero is divisible by any `Natural`, including zero; but a nonzero `Natural` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.divisible_by(&Natural::ZERO), true); assert_eq!(Natural::from(100u32).divisible_by(&Natural::from(3u32)), false); assert_eq!(Natural::from(102u32).divisible_by(&Natural::from(3u32)), true); assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .divisible_by(&Natural::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleBy<Natural> for &'a Natural #### fn divisible_by(self, other: Natural) -> bool Returns whether a `Natural` is divisible by another `Natural`; in other words, whether the first is a multiple of the second. The first `Natural` is taken by reference and the second by value. This means that zero is divisible by any `Natural`, including zero; but a nonzero `Natural` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!((&Natural::ZERO).divisible_by(Natural::ZERO), true); assert_eq!((&Natural::from(100u32)).divisible_by(Natural::from(3u32)), false); assert_eq!((&Natural::from(102u32)).divisible_by(Natural::from(3u32)), true); assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .divisible_by(Natural::from_str("1000000000000").unwrap()), true ); ``` ### impl DivisibleBy<Natural> for Natural #### fn divisible_by(self, other: Natural) -> bool Returns whether a `Natural` is divisible by another `Natural`; in other words, whether the first is a multiple of the second. Both `Natural`s are taken by value. This means that zero is divisible by any `Natural`, including zero; but a nonzero `Natural` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.divisible_by(Natural::ZERO), true); assert_eq!(Natural::from(100u32).divisible_by(Natural::from(3u32)), false); assert_eq!(Natural::from(102u32).divisible_by(Natural::from(3u32)), true); assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .divisible_by(Natural::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleByPowerOf2 for &'a Natural #### fn divisible_by_power_of_2(self, pow: u64) -> bool Returns whether a `Natural` is divisible by $2^k$. $f(x, k) = (2^k|x)$. $f(x, k) = (\exists n \in \N : \ x = n2^k)$. If `self` is 0, the result is always true; otherwise, it is equivalent to `self.trailing_zeros().unwrap() >= pow`, but more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(pow, self.significant_bits())`.
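As a supplementary cross-check (a sketch, not one of the crate's own examples), the specialized power-of-2 test agrees with general divisibility by $2^k$ through the `DivisibleBy` implementations above:

```rust
use malachite_base::num::arithmetic::traits::{DivisibleBy, DivisibleByPowerOf2, Pow};
use malachite_nz::natural::Natural;

let x = Natural::from(96u32); // 96 = 2^5 * 3
for k in 0..8u64 {
    // divisible_by_power_of_2(k) is plain divisibility by 2^k, computed more cheaply.
    assert_eq!(
        (&x).divisible_by_power_of_2(k),
        (&x).divisible_by(Natural::from(2u32).pow(k))
    );
}
```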
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivisibleByPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.divisible_by_power_of_2(100), true); assert_eq!(Natural::from(100u32).divisible_by_power_of_2(2), true); assert_eq!(Natural::from(100u32).divisible_by_power_of_2(3), false); assert_eq!(Natural::from(10u32).pow(12).divisible_by_power_of_2(12), true); assert_eq!(Natural::from(10u32).pow(12).divisible_by_power_of_2(13), false); ``` ### impl DoubleFactorial for Natural #### fn double_factorial(n: u64) -> Natural Computes the double factorial of a number. $$ f(n) = n!! = n \times (n - 2) \times (n - 4) \times \cdots \times i, $$ where $i$ is 1 if $n$ is odd and $2$ if $n$ is even. $n!! = O(\sqrt{n}(n/e)^{n/2})$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ ##### Examples ``` use malachite_base::num::arithmetic::traits::DoubleFactorial; use malachite_nz::natural::Natural; assert_eq!(Natural::double_factorial(0), 1); assert_eq!(Natural::double_factorial(1), 1); assert_eq!(Natural::double_factorial(2), 2); assert_eq!(Natural::double_factorial(3), 3); assert_eq!(Natural::double_factorial(4), 8); assert_eq!(Natural::double_factorial(5), 15); assert_eq!(Natural::double_factorial(6), 48); assert_eq!(Natural::double_factorial(7), 105); assert_eq!( Natural::double_factorial(99).to_string(), "2725392139750729502980713245400918633290796330545803413734328823443106201171875" ); assert_eq!( Natural::double_factorial(100).to_string(), "34243224702511976248246432895208185975118675053719198827915654463488000000000000" ); ``` This is equivalent to `mpz_2fac_ui` from `mpz/2fac_ui.c`, GMP 6.2.1. ### impl<'a, 'b, 'c> EqMod<&'b Integer, &'c Natural> for &'a Integer #### fn eq_mod(self, other: &'b Integer, m: &'c Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. All three numbers are taken by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(&Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'a Integer, &'b Natural> for Integer #### fn eq_mod(self, other: &'a Integer, m: &'b Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first number is taken by value and the second and third by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. 
$f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(&Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'b Integer, Natural> for &'a Integer #### fn eq_mod(self, other: &'b Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first two numbers are taken by reference and the third by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(&Integer::from(223), Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<&'a Integer, Natural> for Integer #### fn eq_mod(self, other: &'a Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first and third numbers are taken by value and the second by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(&Integer::from(223), Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<&'a Natural, Natural> for Natural #### fn eq_mod(self, other: &'a Natural, m: Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first and third are taken by value and the second by reference. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(123u32).eq_mod(&Natural::from(223u32), Natural::from(100u32)), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( &Natural::from_str("2000000987654").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( &Natural::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b, 'c> EqMod<&'b Natural, &'c Natural> for &'a Natural #### fn eq_mod(self, other: &'b Natural, m: &'c Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. All three are taken by reference. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(123u32)).eq_mod(&Natural::from(223u32), &Natural::from(100u32)), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( &Natural::from_str("2000000987654").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( &Natural::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'a Natural, &'b Natural> for Natural #### fn eq_mod(self, other: &'a Natural, m: &'b Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first is taken by value and the second and third by reference. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. 
$f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(123u32).eq_mod(&Natural::from(223u32), &Natural::from(100u32)), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( &Natural::from_str("2000000987654").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( &Natural::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'b Natural, Natural> for &'a Natural #### fn eq_mod(self, other: &'b Natural, m: Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first and second are taken by reference and the third by value. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(123u32)).eq_mod(&Natural::from(223u32), Natural::from(100u32)), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( &Natural::from_str("2000000987654").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( &Natural::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<Integer, &'b Natural> for &'a Integer #### fn eq_mod(self, other: Integer, m: &'b Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first and third numbers are taken by reference and the second by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Integer, &'a Natural> for Integer #### fn eq_mod(self, other: Integer, m: &'a Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first two numbers are taken by value and the third by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Integer, Natural> for &'a Integer #### fn eq_mod(self, other: Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first number is taken by reference and the second and third by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(Integer::from(223), Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl EqMod<Integer, Natural> for Integer #### fn eq_mod(self, other: Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. All three numbers are taken by value. 
Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(Integer::from(223), Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<Natural, &'b Natural> for &'a Natural #### fn eq_mod(self, other: Natural, m: &'b Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first and third are taken by reference and the second by value. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(123u32)).eq_mod(Natural::from(223u32), &Natural::from(100u32)), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( Natural::from_str("2000000987654").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( Natural::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Natural, &'a Natural> for Natural #### fn eq_mod(self, other: Natural, m: &'a Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first two are taken by value and the third by reference. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(123u32).eq_mod(Natural::from(223u32), &Natural::from(100u32)), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( Natural::from_str("2000000987654").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( Natural::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Natural, Natural> for &'a Natural #### fn eq_mod(self, other: Natural, m: Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first is taken by reference and the second and third by value. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(123u32)).eq_mod(Natural::from(223u32), Natural::from(100u32)), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( Natural::from_str("2000000987654").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( Natural::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl EqMod<Natural, Natural> for Natural #### fn eq_mod(self, other: Natural, m: Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. All three are taken by value. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(123u32).eq_mod(Natural::from(223u32), Natural::from(100u32)), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( Natural::from_str("2000000987654").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( Natural::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqModPowerOf2<&'b Natural> for &'a Natural #### fn eq_mod_power_of_2(self, other: &'b Natural, pow: u64) -> bool Returns whether one `Natural` is equal to another modulo $2^k$; that is, whether their $k$ least-significant bits are equal. $f(x, y, k) = (x \equiv y \mod 2^k)$. $f(x, y, k) = (\exists n \in \Z : x - y = n2^k)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(pow, self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::EqModPowerOf2; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).eq_mod_power_of_2(&Natural::from(256u32), 8), true); assert_eq!( (&Natural::from(0b1101u32)).eq_mod_power_of_2(&Natural::from(0b10101u32), 3), true ); assert_eq!( (&Natural::from(0b1101u32)).eq_mod_power_of_2(&Natural::from(0b10101u32), 4), false ); ``` ### impl<'a, 'b> ExtendedGcd<&'a Natural> for &'b Natural #### fn extended_gcd(self, other: &'a Natural) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Natural`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. Both `Natural`s are taken by reference. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(a, b)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).extended_gcd(&Natural::from(5u32)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( (&Natural::from(240u32)).extended_gcd(&Natural::from(46u32)).to_debug_string(), "(2, -9, 47)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> ExtendedGcd<&'a Natural> for Natural #### fn extended_gcd(self, other: &'a Natural) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Natural`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. The first `Natural` is taken by value and the second by reference. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(a, b)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(3u32).extended_gcd(&Natural::from(5u32)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( Natural::from(240u32).extended_gcd(&Natural::from(46u32)).to_debug_string(), "(2, -9, 47)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> ExtendedGcd<Natural> for &'a Natural #### fn extended_gcd(self, other: Natural) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Natural`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. 
The first `Natural` is taken by reference and the second by value. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(a, b)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).extended_gcd(Natural::from(5u32)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( (&Natural::from(240u32)).extended_gcd(Natural::from(46u32)).to_debug_string(), "(2, -9, 47)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl ExtendedGcd<Natural> for Natural #### fn extended_gcd(self, other: Natural) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Natural`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. Both `Natural`s are taken by value. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(a, b)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(3u32).extended_gcd(Natural::from(5u32)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( Natural::from(240u32).extended_gcd(Natural::from(46u32)).to_debug_string(), "(2, -9, 47)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl Factorial for Natural #### fn factorial(n: u64) -> Natural Computes the factorial of a number. $$ f(n) = n! = 1 \times 2 \times 3 \times \cdots \times n. $$ $n! = O(\sqrt{n}(n/e)^n)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `n`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Factorial; use malachite_nz::natural::Natural; assert_eq!(Natural::factorial(0), 1); assert_eq!(Natural::factorial(1), 1); assert_eq!(Natural::factorial(2), 2); assert_eq!(Natural::factorial(3), 6); assert_eq!(Natural::factorial(4), 24); assert_eq!(Natural::factorial(5), 120); assert_eq!( Natural::factorial(100).to_string(), "9332621544394415268169923885626670049071596826438162146859296389521759999322991560894\ 1463976156518286253697920827223758251185210916864000000000000000000000000" ); ``` This is equivalent to `mpz_fac_ui` from `mpz/fac_ui.c`, GMP 6.2.1. 
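The Stirling-type bound $n! = O(\sqrt{n}(n/e)^n)$ quoted above implies that `factorial(n)` returns a value of roughly $n \log_2 n$ bits for large $n$. The following sanity check is a sketch written for this note rather than taken from the library's own documentation; it assumes the `SignificantBits` trait from `malachite_base` is available, as referenced by the complexity statements elsewhere on this page.

```
use malachite_base::num::arithmetic::traits::Factorial;
use malachite_base::num::logic::traits::SignificantBits;
use malachite_nz::natural::Natural;

// log2(100!) is about 524.8, so the 158-digit value shown above occupies 525 bits.
assert_eq!(Natural::factorial(100).significant_bits(), 525);
// 20! = 2432902008176640000 needs 62 bits and still fits in a u64.
assert_eq!(Natural::factorial(20).significant_bits(), 62);
```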
### impl<'a, 'b> FloorLogBase<&'b Natural> for &'a Natural #### fn floor_log_base(self, base: &Natural) -> u64 Returns the floor of the base-$b$ logarithm of a positive `Natural`. $f(x, b) = \lfloor\log_b x\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if `self` is 0 or `base` is less than 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorLogBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(80u32).floor_log_base(&Natural::from(3u32)), 3); assert_eq!(Natural::from(81u32).floor_log_base(&Natural::from(3u32)), 4); assert_eq!(Natural::from(82u32).floor_log_base(&Natural::from(3u32)), 4); assert_eq!(Natural::from(4294967296u64).floor_log_base(&Natural::from(10u32)), 9); ``` This is equivalent to `fmpz_flog` from `fmpz/flog.c`, FLINT 2.7.1. #### type Output = u64 ### impl<'a> FloorLogBase2 for &'a Natural #### fn floor_log_base_2(self) -> u64 Returns the floor of the base-2 logarithm of a positive `Natural`. $f(x) = \lfloor\log_2 x\rfloor$. ##### Worst-case complexity Constant time and additional memory. ##### Panics Panics if `self` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorLogBase2; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).floor_log_base_2(), 1); assert_eq!(Natural::from(100u32).floor_log_base_2(), 6); ``` #### type Output = u64 ### impl<'a> FloorLogBasePowerOf2<u64> for &'a Natural #### fn floor_log_base_power_of_2(self, pow: u64) -> u64 Returns the floor of the base-$2^k$ logarithm of a positive `Natural`. $f(x, k) = \lfloor\log_{2^k} x\rfloor$. ##### Worst-case complexity Constant time and additional memory. ##### Panics Panics if `self` is 0 or `pow` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorLogBasePowerOf2; use malachite_nz::natural::Natural; assert_eq!(Natural::from(100u32).floor_log_base_power_of_2(2), 3); assert_eq!(Natural::from(4294967296u64).floor_log_base_power_of_2(8), 4); ``` #### type Output = u64 ### impl<'a> FloorRoot<u64> for &'a Natural #### fn floor_root(self, exp: u64) -> Natural Returns the floor of the $n$th root of a `Natural`, taking the `Natural` by reference. $f(x, n) = \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRoot; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(999u16)).floor_root(3), 9); assert_eq!((&Natural::from(1000u16)).floor_root(3), 10); assert_eq!((&Natural::from(1001u16)).floor_root(3), 10); assert_eq!((&Natural::from(100000000000u64)).floor_root(5), 158); ``` #### type Output = Natural ### impl FloorRoot<u64> for Natural #### fn floor_root(self, exp: u64) -> Natural Returns the floor of the $n$th root of a `Natural`, taking the `Natural` by value. $f(x, n) = \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRoot; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).floor_root(3), 9); assert_eq!(Natural::from(1000u16).floor_root(3), 10); assert_eq!(Natural::from(1001u16).floor_root(3), 10); assert_eq!(Natural::from(100000000000u64).floor_root(5), 158); ``` #### type Output = Natural ### impl FloorRootAssign<u64> for Natural #### fn floor_root_assign(&mut self, exp: u64) Replaces a `Natural` with the floor of its $n$th root. $x \gets \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRootAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(999u16); x.floor_root_assign(3); assert_eq!(x, 9); let mut x = Natural::from(1000u16); x.floor_root_assign(3); assert_eq!(x, 10); let mut x = Natural::from(1001u16); x.floor_root_assign(3); assert_eq!(x, 10); let mut x = Natural::from(100000000000u64); x.floor_root_assign(5); assert_eq!(x, 158); ``` ### impl<'a> FloorSqrt for &'a Natural #### fn floor_sqrt(self) -> Natural Returns the floor of the square root of a `Natural`, taking it by reference. $f(x) = \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrt; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(99u8)).floor_sqrt(), 9); assert_eq!((&Natural::from(100u8)).floor_sqrt(), 10); assert_eq!((&Natural::from(101u8)).floor_sqrt(), 10); assert_eq!((&Natural::from(1000000000u32)).floor_sqrt(), 31622); assert_eq!((&Natural::from(10000000000u64)).floor_sqrt(), 100000); ``` #### type Output = Natural ### impl FloorSqrt for Natural #### fn floor_sqrt(self) -> Natural Returns the floor of the square root of a `Natural`, taking it by value. $f(x) = \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrt; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).floor_sqrt(), 9); assert_eq!(Natural::from(100u8).floor_sqrt(), 10); assert_eq!(Natural::from(101u8).floor_sqrt(), 10); assert_eq!(Natural::from(1000000000u32).floor_sqrt(), 31622); assert_eq!(Natural::from(10000000000u64).floor_sqrt(), 100000); ``` #### type Output = Natural ### impl FloorSqrtAssign for Natural #### fn floor_sqrt_assign(&mut self) Replaces a `Natural` with the floor of its square root. $x \gets \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrtAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(99u8); x.floor_sqrt_assign(); assert_eq!(x, 9); let mut x = Natural::from(100u8); x.floor_sqrt_assign(); assert_eq!(x, 10); let mut x = Natural::from(101u8); x.floor_sqrt_assign(); assert_eq!(x, 10); let mut x = Natural::from(1000000000u32); x.floor_sqrt_assign(); assert_eq!(x, 31622); let mut x = Natural::from(10000000000u64); x.floor_sqrt_assign(); assert_eq!(x, 100000); ``` ### impl<'a> From<&'a Natural> for Integer #### fn from(value: &'a Natural) -> Integer Converts a `Natural` to an `Integer`, taking the `Natural` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from(&Natural::from(123u32)), 123); assert_eq!(Integer::from(&Natural::from(10u32).pow(12)), 1000000000000u64); ``` ### impl<'a> From<&'a Natural> for Rational #### fn from(value: &'a Natural) -> Rational Converts a `Natural` to a `Rational`, taking the `Natural` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Rational::from(&Natural::from(123u32)), 123); ``` ### impl From<Natural> for Integer #### fn from(value: Natural) -> Integer Converts a `Natural` to an `Integer`, taking the `Natural` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from(Natural::from(123u32)), 123); assert_eq!(Integer::from(Natural::from(10u32).pow(12)), 1000000000000u64); ``` ### impl From<Natural> for Rational #### fn from(value: Natural) -> Rational Converts a `Natural` to a `Rational`, taking the `Natural` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Rational::from(Natural::from(123u32)), 123); ``` ### impl From<bool> for Natural #### fn from(b: bool) -> Natural Converts a `bool` to 0 or 1. This function is known as the Iverson bracket. $$ f(P) = [P] = \begin{cases} 1 & \text{if} \quad P, \\ 0 & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; assert_eq!(Natural::from(false), 0); assert_eq!(Natural::from(true), 1); ``` ### impl From<u128> for Natural #### fn from(u: u128) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is larger than a `Limb`’s. This implementation is general enough to also work for `usize`, regardless of whether it is equal in width to `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u16> for Natural #### fn from(u: u16) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is smaller than a `Limb`’s. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
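The `From` implementations for the unsigned primitive types above and below only point to examples on another page ("See here"). As a brief illustration, written for this note rather than taken from the library's documentation, those conversions can be exercised like this:

```
use malachite_nz::natural::Natural;

// Conversions from unsigned primitives of several widths, each constant-time
// as stated in the corresponding entries.
assert_eq!(Natural::from(123u8), 123);
assert_eq!(Natural::from(123u16), 123);
assert_eq!(Natural::from(123u64), 123);
// A value wider than a limb is preserved exactly.
assert_eq!(
    Natural::from(u128::MAX).to_string(),
    "340282366920938463463374607431768211455"
);
```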
### impl From<u32> for Natural #### fn from(u: u32) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is smaller than a `Limb`’s. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u64> for Natural #### fn from(u: u64) -> Natural Converts a `Limb` to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u8> for Natural #### fn from(u: u8) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is smaller than a `Limb`’s. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<usize> for Natural #### fn from(u: usize) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is larger than a `Limb`’s. This implementation is general enough to also work for `usize`, regardless of whether it is equal in width to `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl FromSciString for Natural #### fn from_sci_string_with_options( s: &str, options: FromSciStringOptions ) -> Option<Natural> Converts a string, possibly in scientific notation, to a `Natural`. Use `FromSciStringOptions` to specify the base (from 2 to 36, inclusive) and the rounding mode, in case rounding is necessary because the string represents a non-integer. If the base is greater than 10, the higher digits are represented by the letters `'a'` through `'z'` or `'A'` through `'Z'`; the case doesn’t matter and doesn’t need to be consistent. Exponents are allowed, and are indicated using the character `'e'` or `'E'`. If the base is 15 or greater, an ambiguity arises where it may not be clear whether `'e'` is a digit or an exponent indicator. To resolve this ambiguity, always use a `'+'` or `'-'` sign after the exponent indicator when the base is 15 or greater. The exponent itself is always parsed using base 10. Decimal (or other-base) points are allowed. These are most useful in conjunction with exponents, but they may be used on their own. If the string represents a non-integer, the rounding mode specified in `options` is used to round to an integer. If the string is unparseable, `None` is returned. `None` is also returned if the rounding mode in options is `Exact`, but rounding is necessary. ##### Worst-case complexity $T(n, m) = O(m^n n \log m (\log n + \log\log m))$ $M(n, m) = O(m^n n \log m)$ where $T$ is time, $M$ is additional memory, $n$ is `s.len()`, and $m$ is `options.base`. 
##### Examples ``` use malachite_base::num::conversion::string::options::FromSciStringOptions; use malachite_base::num::conversion::traits::FromSciString; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; assert_eq!(Natural::from_sci_string("123").unwrap(), 123); assert_eq!(Natural::from_sci_string("123.5").unwrap(), 124); assert_eq!(Natural::from_sci_string("-123.5"), None); assert_eq!(Natural::from_sci_string("1.23e10").unwrap(), 12300000000u64); let mut options = FromSciStringOptions::default(); assert_eq!(Natural::from_sci_string_with_options("123.5", options).unwrap(), 124); options.set_rounding_mode(RoundingMode::Floor); assert_eq!(Natural::from_sci_string_with_options("123.5", options).unwrap(), 123); options = FromSciStringOptions::default(); options.set_base(16); assert_eq!(Natural::from_sci_string_with_options("ff", options).unwrap(), 255); options = FromSciStringOptions::default(); options.set_base(36); assert_eq!(Natural::from_sci_string_with_options("1e5", options).unwrap(), 1805); assert_eq!(Natural::from_sci_string_with_options("1e+5", options).unwrap(), 60466176); assert_eq!(Natural::from_sci_string_with_options("1e-5", options).unwrap(), 0); ``` #### fn from_sci_string(s: &str) -> Option<Self> Converts a `&str`, possibly in scientific notation, to a number, using the default `FromSciStringOptions`. ### impl FromStr for Natural #### fn from_str(s: &str) -> Result<Natural, ()> Converts a string to a `Natural`. If the string does not represent a valid `Natural`, an `Err` is returned. To be valid, the string must be nonempty and only contain the `char`s `'0'` through `'9'`. Leading zeros are allowed. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::from_str("123456").unwrap(), 123456); assert_eq!(Natural::from_str("00123456").unwrap(), 123456); assert_eq!(Natural::from_str("0").unwrap(), 0); assert!(Natural::from_str("").is_err()); assert!(Natural::from_str("a").is_err()); assert!(Natural::from_str("-5").is_err()); ``` #### type Err = () The associated error which can be returned from parsing. ### impl FromStringBase for Natural #### fn from_string_base(base: u8, s: &str) -> Option<Natural> Converts a string, in a specified base, to a `Natural`. If the string does not represent a valid `Natural`, `None` is returned. To be valid, the string must be nonempty and only contain the `char`s `'0'` through `'9'`, `'a'` through `'z'`, and `'A'` through `'Z'`; and only characters that represent digits smaller than the base are allowed. Leading zeros are always allowed. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Panics Panics if `base` is less than 2 or greater than 36. 
##### Examples ``` use malachite_base::num::conversion::traits::{Digits, FromStringBase}; use malachite_nz::natural::Natural; assert_eq!(Natural::from_string_base(10, "123456").unwrap(), 123456); assert_eq!(Natural::from_string_base(10, "00123456").unwrap(), 123456); assert_eq!(Natural::from_string_base(16, "0").unwrap(), 0); assert_eq!(Natural::from_string_base(16, "deadbeef").unwrap(), 3735928559u32); assert_eq!(Natural::from_string_base(16, "deAdBeEf").unwrap(), 3735928559u32); assert!(Natural::from_string_base(10, "").is_none()); assert!(Natural::from_string_base(10, "a").is_none()); assert!(Natural::from_string_base(10, "-5").is_none()); assert!(Natural::from_string_base(2, "2").is_none()); ``` ### impl<'a, 'b> Gcd<&'a Natural> for &'b Natural #### fn gcd(self, other: &'a Natural) -> Natural Computes the GCD (greatest common divisor) of two `Natural`s, taking both by reference. The GCD of 0 and $n$, for any $n$, is $n$. In particular, $\gcd(0, 0) = 0$, which makes sense if we interpret “greatest” to mean “greatest by the divisibility order”. $$ f(x, y) = \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Gcd; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).gcd(&Natural::from(5u32)), 1); assert_eq!((&Natural::from(12u32)).gcd(&Natural::from(90u32)), 6); ``` #### type Output = Natural ### impl<'a> Gcd<&'a Natural> for Natural #### fn gcd(self, other: &'a Natural) -> Natural Computes the GCD (greatest common divisor) of two `Natural`s, taking the first by value and the second by reference. The GCD of 0 and $n$, for any $n$, is $n$. In particular, $\gcd(0, 0) = 0$, which makes sense if we interpret “greatest” to mean “greatest by the divisibility order”. $$ f(x, y) = \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Gcd; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).gcd(&Natural::from(5u32)), 1); assert_eq!(Natural::from(12u32).gcd(&Natural::from(90u32)), 6); ``` #### type Output = Natural ### impl<'a> Gcd<Natural> for &'a Natural #### fn gcd(self, other: Natural) -> Natural Computes the GCD (greatest common divisor) of two `Natural`s, taking the first by reference and the second by value. The GCD of 0 and $n$, for any $n$, is $n$. In particular, $\gcd(0, 0) = 0$, which makes sense if we interpret “greatest” to mean “greatest by the divisibility order”. $$ f(x, y) = \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Gcd; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).gcd(Natural::from(5u32)), 1); assert_eq!((&Natural::from(12u32)).gcd(Natural::from(90u32)), 6); ``` #### type Output = Natural ### impl Gcd<Natural> for Natural #### fn gcd(self, other: Natural) -> Natural Computes the GCD (greatest common divisor) of two `Natural`s, taking both by value. The GCD of 0 and $n$, for any $n$, is $n$. 
In particular, $\gcd(0, 0) = 0$, which makes sense if we interpret “greatest” to mean “greatest by the divisibility order”. $$ f(x, y) = \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Gcd; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).gcd(Natural::from(5u32)), 1); assert_eq!(Natural::from(12u32).gcd(Natural::from(90u32)), 6); ``` #### type Output = Natural ### impl<'a> GcdAssign<&'a Natural> for Natural #### fn gcd_assign(&mut self, other: &'a Natural) Replaces a `Natural` by its GCD (greatest common divisor) with another `Natural`, taking the `Natural` on the right-hand side by reference. $$ x \gets \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::GcdAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.gcd_assign(&Natural::from(5u32)); assert_eq!(x, 1); let mut x = Natural::from(12u32); x.gcd_assign(&Natural::from(90u32)); assert_eq!(x, 6); ``` ### impl GcdAssign<Natural> for Natural #### fn gcd_assign(&mut self, other: Natural) Replaces a `Natural` by its GCD (greatest common divisor) with another `Natural`, taking the `Natural` on the right-hand side by value. $$ x \gets \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::GcdAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.gcd_assign(Natural::from(5u32)); assert_eq!(x, 1); let mut x = Natural::from(12u32); x.gcd_assign(Natural::from(90u32)); assert_eq!(x, 6); ``` ### impl<'a, 'b> HammingDistance<&'a Natural> for &'b Natural #### fn hamming_distance(self, other: &'a Natural) -> u64 Determines the Hamming distance between two `Natural`s. Both `Natural`s have infinitely many implicit leading zeros. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_base::num::logic::traits::HammingDistance; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32).hamming_distance(&Natural::from(123u32)), 0); // 105 = 1101001b, 123 = 1111011 assert_eq!(Natural::from(105u32).hamming_distance(&Natural::from(123u32)), 2); let n = Natural::ONE << 100u32; assert_eq!(n.hamming_distance(&(&n - Natural::ONE)), 101); ``` ### impl Hash for Natural #### fn hash<__H>(&self, state: &mut __H) where __H: Hasher, Feeds this value into the given `Hasher`. #### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. #### fn integer_mantissa_and_exponent(self) -> (Natural, u64) Returns a `Natural`’s integer mantissa and exponent. When $x$ is nonzero, we can write $x = 2^{e_i}m_i$, where $e_i$ is an integer and $m_i$ is an odd integer. 
$$ f(x) = (\frac{|x|}{2^{e_i}}, e_i), $$ where $e_i$ is the unique integer such that $x/2^{e_i}$ is an odd integer. The inverse operation is `from_integer_mantissa_and_exponent`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::IntegerMantissaAndExponent; use malachite_nz::natural::Natural; assert_eq!( Natural::from(123u32).integer_mantissa_and_exponent(), (Natural::from(123u32), 0) ); assert_eq!( Natural::from(100u32).integer_mantissa_and_exponent(), (Natural::from(25u32), 2) ); ``` #### fn integer_mantissa(self) -> Natural Returns a `Natural`’s integer mantissa. When $x$ is nonzero, we can write $x = 2^{e_i}m_i$, where $e_i$ is an integer and $m_i$ is an odd integer. $$ f(x) = \frac{|x|}{2^{e_i}}, $$ where $e_i$ is the unique integer such that $x/2^{e_i}$ is an odd integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::IntegerMantissaAndExponent; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32).integer_mantissa(), 123); assert_eq!(Natural::from(100u32).integer_mantissa(), 25); ``` #### fn integer_exponent(self) -> u64 Returns a `Natural`’s integer exponent. When $x$ is nonzero, we can write $x = 2^{e_i}m_i$, where $e_i$ is an integer and $m_i$ is an odd integer. $$ f(x) = e_i, $$ where $e_i$ is the unique integer such that $x/2^{e_i}$ is an odd integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::IntegerMantissaAndExponent; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32).integer_exponent(), 0); assert_eq!(Natural::from(100u32).integer_exponent(), 2); ``` #### fn from_integer_mantissa_and_exponent( integer_mantissa: Natural, integer_exponent: u64 ) -> Option<NaturalConstructs a `Natural` from its integer mantissa and exponent. When $x$ is nonzero, we can write $x = 2^{e_i}m_i$, where $e_i$ is an integer and $m_i$ is an odd integer. $$ f(x) = 2^{e_i}m_i. $$ The input does not have to be reduced; that is, the mantissa does not have to be odd. The result is an `Option`, but for this trait implementation the result is always `Some`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `integer_mantissa.significant_bits() + integer_exponent`. ##### Examples ``` use malachite_base::num::conversion::traits::IntegerMantissaAndExponent; use malachite_nz::natural::Natural; let n = <&Natural as IntegerMantissaAndExponent<_, _, _>> ::from_integer_mantissa_and_exponent(Natural::from(123u32), 0).unwrap(); assert_eq!(n, 123); let n = <&Natural as IntegerMantissaAndExponent<_, _, _>> ::from_integer_mantissa_and_exponent(Natural::from(25u32), 2).unwrap(); assert_eq!(n, 100); ``` ### impl<'a> IsInteger for &'a Natural #### fn is_integer(self) -> bool Determines whether a `Natural` is an integer. It always returns `true`. $f(x) = \textrm{true}$. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_base::num::conversion::traits::IsInteger; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.is_integer(), true); assert_eq!(Natural::ONE.is_integer(), true); assert_eq!(Natural::from(100u32).is_integer(), true); ``` ### impl IsPowerOf2 for Natural #### fn is_power_of_2(&self) -> bool Determines whether a `Natural` is an integer power of 2. $f(x) = (\exists n \in \Z : 2^n = x)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{IsPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.is_power_of_2(), false); assert_eq!(Natural::from(123u32).is_power_of_2(), false); assert_eq!(Natural::from(0x80u32).is_power_of_2(), true); assert_eq!(Natural::from(10u32).pow(12).is_power_of_2(), false); assert_eq!(Natural::from_str("1099511627776").unwrap().is_power_of_2(), true); ``` ### impl<'a, 'b> JacobiSymbol<&'a Natural> for &'b Natural #### fn jacobi_symbol(self, other: &'a Natural) -> i8 Computes the Jacobi symbol of two `Natural`s, taking both by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).jacobi_symbol(&Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).jacobi_symbol(&Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).jacobi_symbol(&Natural::from(5u32)), 1); assert_eq!((&Natural::from(11u32)).jacobi_symbol(&Natural::from(9u32)), 1); ``` ### impl<'a> JacobiSymbol<&'a Natural> for Natural #### fn jacobi_symbol(self, other: &'a Natural) -> i8 Computes the Jacobi symbol of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).jacobi_symbol(&Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).jacobi_symbol(&Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).jacobi_symbol(&Natural::from(5u32)), 1); assert_eq!(Natural::from(11u32).jacobi_symbol(&Natural::from(9u32)), 1); ``` ### impl<'a> JacobiSymbol<Natural> for &'a Natural #### fn jacobi_symbol(self, other: Natural) -> i8 Computes the Jacobi symbol of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. 
##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).jacobi_symbol(Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).jacobi_symbol(Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).jacobi_symbol(Natural::from(5u32)), 1); assert_eq!((&Natural::from(11u32)).jacobi_symbol(Natural::from(9u32)), 1); ``` ### impl JacobiSymbol<Natural> for Natural #### fn jacobi_symbol(self, other: Natural) -> i8 Computes the Jacobi symbol of two `Natural`s, taking both by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).jacobi_symbol(Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).jacobi_symbol(Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).jacobi_symbol(Natural::from(5u32)), 1); assert_eq!(Natural::from(11u32).jacobi_symbol(Natural::from(9u32)), 1); ``` ### impl<'a, 'b> KroneckerSymbol<&'a Natural> for &'b Natural #### fn kronecker_symbol(self, other: &'a Natural) -> i8 Computes the Kronecker symbol of two `Natural`s, taking both by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).kronecker_symbol(&Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).kronecker_symbol(&Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(&Natural::from(5u32)), 1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(&Natural::from(9u32)), 1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(&Natural::from(8u32)), -1); ``` ### impl<'a> KroneckerSymbol<&'a Natural> for Natural #### fn kronecker_symbol(self, other: &'a Natural) -> i8 Computes the Kronecker symbol of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).kronecker_symbol(&Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).kronecker_symbol(&Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).kronecker_symbol(&Natural::from(5u32)), 1); assert_eq!(Natural::from(11u32).kronecker_symbol(&Natural::from(9u32)), 1); assert_eq!(Natural::from(11u32).kronecker_symbol(&Natural::from(8u32)), -1); ``` ### impl<'a> KroneckerSymbol<Natural> for &'a Natural #### fn kronecker_symbol(self, other: Natural) -> i8 Computes the Kronecker symbol of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). 
$$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).kronecker_symbol(Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).kronecker_symbol(Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(Natural::from(5u32)), 1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(Natural::from(9u32)), 1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(Natural::from(8u32)), -1); ``` ### impl KroneckerSymbol<Natural> for Natural #### fn kronecker_symbol(self, other: Natural) -> i8 Computes the Kronecker symbol of two `Natural`s, taking both by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).kronecker_symbol(Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).kronecker_symbol(Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).kronecker_symbol(Natural::from(5u32)), 1); assert_eq!(Natural::from(11u32).kronecker_symbol(Natural::from(9u32)), 1); assert_eq!(Natural::from(11u32).kronecker_symbol(Natural::from(8u32)), -1); ``` ### impl<'a, 'b> Lcm<&'a Natural> for &'b Natural #### fn lcm(self, other: &'a Natural) -> Natural Computes the LCM (least common multiple) of two `Natural`s, taking both by reference. $$ f(x, y) = \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Lcm; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).lcm(&Natural::from(5u32)), 15); assert_eq!((&Natural::from(12u32)).lcm(&Natural::from(90u32)), 180); ``` #### type Output = Natural ### impl<'a> Lcm<&'a Natural> for Natural #### fn lcm(self, other: &'a Natural) -> Natural Computes the LCM (least common multiple) of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Lcm; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).lcm(&Natural::from(5u32)), 15); assert_eq!(Natural::from(12u32).lcm(&Natural::from(90u32)), 180); ``` #### type Output = Natural ### impl<'a> Lcm<Natural> for &'a Natural #### fn lcm(self, other: Natural) -> Natural Computes the LCM (least common multiple) of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Lcm; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).lcm(Natural::from(5u32)), 15); assert_eq!((&Natural::from(12u32)).lcm(Natural::from(90u32)), 180); ``` #### type Output = Natural ### impl Lcm<Natural> for Natural #### fn lcm(self, other: Natural) -> Natural Computes the LCM (least common multiple) of two `Natural`s, taking both by value. $$ f(x, y) = \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Lcm; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).lcm(Natural::from(5u32)), 15); assert_eq!(Natural::from(12u32).lcm(Natural::from(90u32)), 180); ``` #### type Output = Natural ### impl<'a> LcmAssign<&'a Natural> for Natural #### fn lcm_assign(&mut self, other: &'a Natural) Replaces a `Natural` by its LCM (least common multiple) with another `Natural`, taking the `Natural` on the right-hand side by reference. $$ x \gets \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::LcmAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.lcm_assign(&Natural::from(5u32)); assert_eq!(x, 15); let mut x = Natural::from(12u32); x.lcm_assign(&Natural::from(90u32)); assert_eq!(x, 180); ``` ### impl LcmAssign<Natural> for Natural #### fn lcm_assign(&mut self, other: Natural) Replaces a `Natural` by its LCM (least common multiple) with another `Natural`, taking the `Natural` on the right-hand side by value. $$ x \gets \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::LcmAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.lcm_assign(Natural::from(5u32)); assert_eq!(x, 15); let mut x = Natural::from(12u32); x.lcm_assign(Natural::from(90u32)); assert_eq!(x, 180); ``` ### impl<'a, 'b> LegendreSymbol<&'a Natural> for &'b Natural #### fn legendre_symbol(self, other: &'a Natural) -> i8 Computes the Legendre symbol of two `Natural`s, taking both by reference. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. 
##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).legendre_symbol(&Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).legendre_symbol(&Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).legendre_symbol(&Natural::from(5u32)), 1); ``` ### impl<'a> LegendreSymbol<&'a Natural> for Natural #### fn legendre_symbol(self, other: &'a Natural) -> i8 Computes the Legendre symbol of two `Natural`s, taking the first by value and the second by reference. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).legendre_symbol(&Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).legendre_symbol(&Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).legendre_symbol(&Natural::from(5u32)), 1); ``` ### impl<'a> LegendreSymbol<Natural> for &'a Natural #### fn legendre_symbol(self, other: Natural) -> i8 Computes the Legendre symbol of two `Natural`s, taking the first by reference and the second by value. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).legendre_symbol(Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).legendre_symbol(Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).legendre_symbol(Natural::from(5u32)), 1); ``` ### impl LegendreSymbol<Natural> for Natural #### fn legendre_symbol(self, other: Natural) -> i8 Computes the Legendre symbol of two `Natural`s, taking both by value. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).legendre_symbol(Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).legendre_symbol(Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).legendre_symbol(Natural::from(5u32)), 1); ``` ### impl LowMask for Natural #### fn low_mask(bits: u64) -> Natural Returns a `Natural` whose least significant $b$ bits are `true` and whose other bits are `false`. $f(b) = 2^b - 1$.
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `bits`. ##### Examples ``` use malachite_base::num::logic::traits::LowMask; use malachite_nz::natural::Natural; assert_eq!(Natural::low_mask(0), 0); assert_eq!(Natural::low_mask(3), 7); assert_eq!(Natural::low_mask(100).to_string(), "1267650600228229401496703205375"); ``` ### impl LowerHex for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts a `Natural` to a hexadecimal `String` using lowercase characters. Using the `#` format flag prepends `"0x"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToLowerHexString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_lower_hex_string(), "0"); assert_eq!(Natural::from(123u32).to_lower_hex_string(), "7b"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_lower_hex_string(), "e8d4a51000" ); assert_eq!(format!("{:07x}", Natural::from(123u32)), "000007b"); assert_eq!(format!("{:#x}", Natural::ZERO), "0x0"); assert_eq!(format!("{:#x}", Natural::from(123u32)), "0x7b"); assert_eq!( format!("{:#x}", Natural::from_str("1000000000000").unwrap()), "0xe8d4a51000" ); assert_eq!(format!("{:#07x}", Natural::from(123u32)), "0x0007b"); ``` ### impl Min for Natural The minimum value of a `Natural`, 0. #### const MIN: Natural = Natural::ZERO The minimum value of `Self`. ### impl<'a, 'b> Mod<&'b Natural> for &'a Natural #### fn mod_op(self, other: &'b Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!((&Natural::from(23u32)).mod_op(&Natural::from(10u32)), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .mod_op(&Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); ``` #### type Output = Natural ### impl<'a> Mod<&'a Natural> for Natural #### fn mod_op(self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero.
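As a supplementary sketch (not part of the documented examples), the remainder returned by `mod_op` can be checked against the relation above, $x = qy + r$ with $0 \leq r < y$; the `/`, `*`, and `+` operators for `Natural` are assumed to be available, as they are elsewhere in this crate.

```
use malachite_base::num::arithmetic::traits::Mod;
use malachite_nz::natural::Natural;

let x = Natural::from(23u32);
let y = Natural::from(10u32);
let q = &x / &y;         // floor quotient: 2
let r = (&x).mod_op(&y); // remainder: 3
// The quotient and remainder satisfy x == q * y + r and 0 <= r < y.
assert_eq!(&q * &y + &r, x);
assert!(r < y);
```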
##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32).mod_op(&Natural::from(10u32)), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .mod_op(&Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); ``` #### type Output = Natural ### impl<'a> Mod<Natural> for &'a Natural #### fn mod_op(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!((&Natural::from(23u32)).mod_op(Natural::from(10u32)), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .mod_op(Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); ``` #### type Output = Natural ### impl Mod<Natural> for Natural #### fn mod_op(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32).mod_op(Natural::from(10u32)), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .mod_op(Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); ``` #### type Output = Natural ### impl<'a> ModAdd<&'a Natural, Natural> for Natural #### fn mod_add(self, other: &'a Natural, m: Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by value and the second by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
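A supplementary sketch, not taken from the documented examples: when both operands are already reduced modulo $m$, as the trait requires, `mod_add` agrees with adding and then reducing with the `%` operator (assumed available for `Natural`, as elsewhere in this crate).

```
use malachite_base::num::arithmetic::traits::ModAdd;
use malachite_nz::natural::Natural;

let x = Natural::from(7u32);
let y = Natural::from(5u32);
let m = Natural::from(10u32);
// Both operands are reduced modulo m, so mod_add matches (x + y) % m.
let z = (&x).mod_add(&y, &m);
assert_eq!(z, (x + y) % m); // (7 + 5) mod 10 == 2
```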
##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_add(&Natural::from(3u32), Natural::from(5u32)), 3); assert_eq!(Natural::from(7u32).mod_add(&Natural::from(5u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `m` are taken by value and `c` is taken by reference. #### type Output = Natural ### impl<'a, 'b, 'c> ModAdd<&'b Natural, &'c Natural> for &'a Natural #### fn mod_add(self, other: &'b Natural, m: &'c Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_add(&Natural::from(3u32), &Natural::from(5u32)), 3); assert_eq!((&Natural::from(7u32)).mod_add(&Natural::from(5u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModAdd<&'a Natural, &'b Natural> for Natural #### fn mod_add(self, other: &'a Natural, m: &'b Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by value and the second and third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_add(&Natural::from(3u32), &Natural::from(5u32)), 3); assert_eq!(Natural::from(7u32).mod_add(&Natural::from(5u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` is taken by value and `c` and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModAdd<&'b Natural, Natural> for &'a Natural #### fn mod_add(self, other: &'b Natural, m: Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by reference and the third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_add(&Natural::from(3u32), Natural::from(5u32)), 3); assert_eq!((&Natural::from(7u32)).mod_add(&Natural::from(5u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `c` are taken by reference and `m` is taken by value. 
#### type Output = Natural ### impl<'a, 'b> ModAdd<Natural, &'b Natural> for &'a Natural #### fn mod_add(self, other: Natural, m: &'b Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by reference and the second by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_add(Natural::from(3u32), &Natural::from(5u32)), 3); assert_eq!((&Natural::from(7u32)).mod_add(Natural::from(5u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `m` are taken by reference and `c` is taken by value. #### type Output = Natural ### impl<'a> ModAdd<Natural, &'a Natural> for Natural #### fn mod_add(self, other: Natural, m: &'a Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by value and the third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_add(Natural::from(3u32), &Natural::from(5u32)), 3); assert_eq!(Natural::from(7u32).mod_add(Natural::from(5u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `c` are taken by value and `m` is taken by reference. #### type Output = Natural ### impl<'a> ModAdd<Natural, Natural> for &'a Natural #### fn mod_add(self, other: Natural, m: Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by reference and the second and third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_add(Natural::from(3u32), Natural::from(5u32)), 3); assert_eq!((&Natural::from(7u32)).mod_add(Natural::from(5u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` is taken by reference and `c` and `m` are taken by value. #### type Output = Natural ### impl ModAdd<Natural, Natural> for Natural #### fn mod_add(self, other: Natural, m: Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_add(Natural::from(3u32), Natural::from(5u32)), 3); assert_eq!(Natural::from(7u32).mod_add(Natural::from(5u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value. #### type Output = Natural ### impl<'a> ModAddAssign<&'a Natural, Natural> for Natural #### fn mod_add_assign(&mut self, other: &'a Natural, m: Natural) Adds two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. $x \gets z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_add_assign(&Natural::from(3u32), Natural::from(5u32)); assert_eq!(x, 3); let mut x = Natural::from(7u32); x.mod_add_assign(&Natural::from(5u32), Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `m` are taken by value, `c` is taken by reference, and `a == b`. ### impl<'a, 'b> ModAddAssign<&'a Natural, &'b Natural> for Natural #### fn mod_add_assign(&mut self, other: &'a Natural, m: &'b Natural) Adds two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by reference. $x \gets z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_add_assign(&Natural::from(3u32), &Natural::from(5u32)); assert_eq!(x, 3); let mut x = Natural::from(7u32); x.mod_add_assign(&Natural::from(5u32), &Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` is taken by value, `c` and `m` are taken by reference, and `a == b`. ### impl<'a> ModAddAssign<Natural, &'a Natural> for Natural #### fn mod_add_assign(&mut self, other: Natural, m: &'a Natural) Adds two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. $x \gets z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModAddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_add_assign(Natural::from(3u32), &Natural::from(5u32)); assert_eq!(x, 3); let mut x = Natural::from(7u32); x.mod_add_assign(Natural::from(5u32), &Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `c` are taken by value, `m` is taken by reference, and `a == b`. ### impl ModAddAssign<Natural, Natural> for Natural #### fn mod_add_assign(&mut self, other: Natural, m: Natural) Adds two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by value. $x \gets z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_add_assign(Natural::from(3u32), Natural::from(5u32)); assert_eq!(x, 3); let mut x = Natural::from(7u32); x.mod_add_assign(Natural::from(5u32), Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value and `a == b`. ### impl<'a> ModAssign<&'a Natural> for Natural #### fn mod_assign(&mut self, other: &'a Natural) Divides a `Natural` by another `Natural`, taking the second `Natural` by reference and replacing the first by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); x.mod_assign(&Natural::from(10u32)); assert_eq!(x, 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x.mod_assign(&Natural::from_str("1234567890987").unwrap()); assert_eq!(x, 530068894399u64); ``` ### impl ModAssign<Natural> for Natural #### fn mod_assign(&mut self, other: Natural) Divides a `Natural` by another `Natural`, taking the second `Natural` by value and replacing the first by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero.
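A supplementary sketch, not from the documented examples: because a `Natural` remainder is never negative, `mod_assign` agrees with the `%=` operator (assumed to be implemented for `Natural`, like the other division operators in this crate).

```
use malachite_base::num::arithmetic::traits::ModAssign;
use malachite_nz::natural::Natural;

let mut a = Natural::from(23u32);
let mut b = Natural::from(23u32);
a.mod_assign(Natural::from(10u32)); // in-place remainder
b %= Natural::from(10u32);          // operator form
assert_eq!(a, b);
assert_eq!(a, 3);
```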
##### Examples ``` use malachite_base::num::arithmetic::traits::ModAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); x.mod_assign(Natural::from(10u32)); assert_eq!(x, 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x.mod_assign(Natural::from_str("1234567890987").unwrap()); assert_eq!(x, 530068894399u64); ``` ### impl<'a, 'b> ModInverse<&'a Natural> for &'b Natural #### fn mod_inverse(self, m: &'a Natural) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo another `Natural` $m$. Assumes the first `Natural` is already reduced modulo $m$. Both `Natural`s are taken by reference. Returns `None` if $x$ and $m$ are not coprime. $f(x, m) = y$, where $x, y < m$, $\gcd(x, y) = 1$, and $xy \equiv 1 \mod m$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), m.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModInverse; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).mod_inverse(&Natural::from(10u32)), Some(Natural::from(7u32)) ); assert_eq!((&Natural::from(4u32)).mod_inverse(&Natural::from(10u32)), None); ``` #### type Output = Natural ### impl<'a> ModInverse<&'a Natural> for Natural #### fn mod_inverse(self, m: &'a Natural) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo another `Natural` $m$. Assumes the first `Natural` is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. Returns `None` if $x$ and $m$ are not coprime. $f(x, m) = y$, where $x, y < m$, $\gcd(x, y) = 1$, and $xy \equiv 1 \mod m$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), m.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModInverse; use malachite_nz::natural::Natural; assert_eq!( Natural::from(3u32).mod_inverse(&Natural::from(10u32)), Some(Natural::from(7u32)) ); assert_eq!(Natural::from(4u32).mod_inverse(&Natural::from(10u32)), None); ``` #### type Output = Natural ### impl<'a> ModInverse<Natural> for &'a Natural #### fn mod_inverse(self, m: Natural) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo another `Natural` $m$. Assumes the first `Natural` is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. Returns `None` if $x$ and $m$ are not coprime. $f(x, m) = y$, where $x, y < m$, $\gcd(x, y) = 1$, and $xy \equiv 1 \mod m$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), m.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModInverse; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).mod_inverse(Natural::from(10u32)), Some(Natural::from(7u32)) ); assert_eq!((&Natural::from(4u32)).mod_inverse(Natural::from(10u32)), None); ``` #### type Output = Natural ### impl ModInverse<Natural> for Natural #### fn mod_inverse(self, m: Natural) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo another `Natural` $m$. Assumes the first `Natural` is already reduced modulo $m$.
Both `Natural`s are taken by value. Returns `None` if $x$ and $m$ are not coprime. $f(x, m) = y$, where $x, y < m$, $\gcd(x, y) = 1$, and $xy \equiv 1 \mod m$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), m.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModInverse; use malachite_nz::natural::Natural; assert_eq!( Natural::from(3u32).mod_inverse(Natural::from(10u32)), Some(Natural::from(7u32)) ); assert_eq!(Natural::from(4u32).mod_inverse(Natural::from(10u32)), None); ``` #### type Output = Natural ### impl ModIsReduced<Natural> for Natural #### fn mod_is_reduced(&self, m: &Natural) -> bool Returns whether a `Natural` is reduced modulo another `Natural` $m$; in other words, whether it is less than $m$. $m$ cannot be zero. $f(x, m) = (x < m)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `m` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModIsReduced, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_is_reduced(&Natural::from(5u32)), true); assert_eq!( Natural::from(10u32).pow(12).mod_is_reduced(&Natural::from(10u32).pow(12)), false ); assert_eq!( Natural::from(10u32).pow(12) .mod_is_reduced(&(Natural::from(10u32).pow(12) + Natural::ONE)), true ); ``` ### impl<'a> ModMul<&'a Natural, Natural> for Natural #### fn mod_mul(self, other: &'a Natural, m: Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by value and the second by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_mul(&Natural::from(4u32), Natural::from(15u32)), 12); assert_eq!(Natural::from(7u32).mod_mul(&Natural::from(6u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by value and `c` is taken by reference. #### type Output = Natural ### impl<'a, 'b, 'c> ModMul<&'b Natural, &'c Natural> for &'a Natural #### fn mod_mul(self, other: &'b Natural, m: &'c Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).mod_mul(&Natural::from(4u32), &Natural::from(15u32)), 12 ); assert_eq!((&Natural::from(7u32)).mod_mul(&Natural::from(6u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by reference. 
#### type Output = Natural ### impl<'a, 'b> ModMul<&'a Natural, &'b Natural> for Natural #### fn mod_mul(self, other: &'a Natural, m: &'b Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by value and the second and third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_mul(&Natural::from(4u32), &Natural::from(15u32)), 12); assert_eq!(Natural::from(7u32).mod_mul(&Natural::from(6u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by value and `c` and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModMul<&'b Natural, Natural> for &'a Natural #### fn mod_mul(self, other: &'b Natural, m: Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by reference and the third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_mul(&Natural::from(4u32), Natural::from(15u32)), 12); assert_eq!((&Natural::from(7u32)).mod_mul(&Natural::from(6u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by reference and `m` is taken by value. #### type Output = Natural ### impl<'a, 'b> ModMul<Natural, &'b Natural> for &'a Natural #### fn mod_mul(self, other: Natural, m: &'b Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by reference and the second by value. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_mul(Natural::from(4u32), &Natural::from(15u32)), 12); assert_eq!((&Natural::from(7u32)).mod_mul(Natural::from(6u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by reference and `c` is taken by value. #### type Output = Natural ### impl<'a> ModMul<Natural, &'a Natural> for Natural #### fn mod_mul(self, other: Natural, m: &'a Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by value and the third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
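A supplementary sketch, not from the documented examples: an inverse returned by `mod_inverse` can be verified with `mod_mul`, since $x x^{-1} \equiv 1 \mod m$ whenever the inverse exists.

```
use malachite_base::num::arithmetic::traits::{ModInverse, ModMul};
use malachite_nz::natural::Natural;

let x = Natural::from(3u32);
let m = Natural::from(10u32);
// mod_inverse returns Some(y) with x * y == 1 (mod m) when x and m are coprime.
let y = (&x).mod_inverse(&m).unwrap(); // 7
assert_eq!((&x).mod_mul(&y, &m), 1);   // 3 * 7 == 21 == 1 (mod 10)
```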
##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_mul(Natural::from(4u32), &Natural::from(15u32)), 12); assert_eq!(Natural::from(7u32).mod_mul(Natural::from(6u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by value and `m` is taken by reference. #### type Output = Natural ### impl<'a> ModMul<Natural, Natural> for &'a Natural #### fn mod_mul(self, other: Natural, m: Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by reference and the second and third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_mul(Natural::from(4u32), Natural::from(15u32)), 12); assert_eq!((&Natural::from(7u32)).mod_mul(Natural::from(6u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by reference and `c` and `m` are taken by value. #### type Output = Natural ### impl ModMul<Natural, Natural> for Natural #### fn mod_mul(self, other: Natural, m: Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by value. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_mul(Natural::from(4u32), Natural::from(15u32)), 12); assert_eq!(Natural::from(7u32).mod_mul(Natural::from(6u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value. #### type Output = Natural ### impl<'a> ModMulAssign<&'a Natural, Natural> for Natural #### fn mod_mul_assign(&mut self, other: &'a Natural, m: Natural) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. $x \gets z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_mul_assign(&Natural::from(4u32), Natural::from(15u32)); assert_eq!(x, 12); let mut x = Natural::from(7u32); x.mod_mul_assign(&Natural::from(6u32), Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by value, `c` is taken by reference, and `a == b`.
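A supplementary sketch, not from the documented examples: repeated in-place modular multiplication with the by-reference variant documented below, computing $3^5 \mod 7$.

```
use malachite_base::num::arithmetic::traits::ModMulAssign;
use malachite_nz::natural::Natural;

let m = Natural::from(7u32);
let base = Natural::from(3u32);
let mut acc = Natural::from(1u32);
// Multiply by the base five times, staying reduced modulo m throughout.
for _ in 0..5 {
    acc.mod_mul_assign(&base, &m);
}
assert_eq!(acc, 5); // 3^5 == 243 == 5 (mod 7)
```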
### impl<'a, 'b> ModMulAssign<&'a Natural, &'b Natural> for Natural #### fn mod_mul_assign(&mut self, other: &'a Natural, m: &'b Natural) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by reference. $x \gets z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_mul_assign(&Natural::from(4u32), &Natural::from(15u32)); assert_eq!(x, 12); let mut x = Natural::from(7u32); x.mod_mul_assign(&Natural::from(6u32), &Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by value, `c` and `m` are taken by reference, and `a == b`. ### impl<'a> ModMulAssign<Natural, &'a Natural> for Natural #### fn mod_mul_assign(&mut self, other: Natural, m: &'a Natural) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. $x \gets z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_mul_assign(Natural::from(4u32), &Natural::from(15u32)); assert_eq!(x, 12); let mut x = Natural::from(7u32); x.mod_mul_assign(Natural::from(6u32), &Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by value, `m` is taken by reference, and `a == b`. ### impl ModMulAssign<Natural, Natural> for Natural #### fn mod_mul_assign(&mut self, other: Natural, m: Natural) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by value. $x \gets z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_mul_assign(Natural::from(4u32), Natural::from(15u32)); assert_eq!(x, 12); let mut x = Natural::from(7u32); x.mod_mul_assign(Natural::from(6u32), Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value and `a == b`. ### impl<'a> ModMulPrecomputed<&'a Natural, Natural> for Natural #### fn precompute_mod_mul_data(m: &Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. 
#### fn mod_mul_precomputed( self, other: &'a Natural, m: Natural, data: &ModMulData ) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by value and the second by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( Natural::from(6u8).mod_mul_precomputed( &Natural::from(7u32), Natural::from(10u32), &data ), 2 ); assert_eq!( Natural::from(9u8).mod_mul_precomputed( &Natural::from(9u32), Natural::from(10u32), &data ), 1 ); assert_eq!( Natural::from(4u8).mod_mul_precomputed( &Natural::from(7u32), Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by value and `c` is taken by reference. #### type Output = Natural #### type Data = ModMulData ### impl<'a, 'b, 'c> ModMulPrecomputed<&'b Natural, &'c Natural> for &'a Natural #### fn precompute_mod_mul_data(m: &&Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: &'b Natural, m: &'c Natural, data: &ModMulData ) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( (&Natural::from(6u8)).mod_mul_precomputed( &Natural::from(7u32), &Natural::from(10u32), &data ), 2 ); assert_eq!( (&Natural::from(9u8)).mod_mul_precomputed( &Natural::from(9u32), &Natural::from(10u32), &data ), 1 ); assert_eq!( (&Natural::from(4u8)).mod_mul_precomputed( &Natural::from(7u32), &Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by reference. #### type Output = Natural #### type Data = ModMulData ### impl<'a, 'b> ModMulPrecomputed<&'a Natural, &'b Natural> for Natural #### fn precompute_mod_mul_data(m: &&Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1.
#### fn mod_mul_precomputed( self, other: &'a Natural, m: &'b Natural, data: &ModMulData ) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by value and the second and third by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( Natural::from(6u8).mod_mul_precomputed( &Natural::from(7u32), &Natural::from(10u32), &data ), 2 ); assert_eq!( Natural::from(9u8).mod_mul_precomputed( &Natural::from(9u32), &Natural::from(10u32), &data ), 1 ); assert_eq!( Natural::from(4u8).mod_mul_precomputed( &Natural::from(7u32), &Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by value and `c` and `m` are taken by reference. #### type Output = Natural #### type Data = ModMulData ### impl<'a, 'b> ModMulPrecomputed<&'b Natural, Natural> for &'a Natural #### fn precompute_mod_mul_data(m: &Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: &'b Natural, m: Natural, data: &ModMulData ) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by reference and the third by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( (&Natural::from(6u8)).mod_mul_precomputed( &Natural::from(7u32), Natural::from(10u32), &data ), 2 ); assert_eq!( (&Natural::from(9u8)).mod_mul_precomputed( &Natural::from(9u32), Natural::from(10u32), &data ), 1 ); assert_eq!( (&Natural::from(4u8)).mod_mul_precomputed( &Natural::from(7u32), Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by reference and `m` is taken by value. #### type Output = Natural #### type Data = ModMulData ### impl<'a, 'b> ModMulPrecomputed<Natural, &'b Natural> for &'a Natural #### fn precompute_mod_mul_data(m: &&Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory.
This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: Natural, m: &'b Natural, data: &ModMulData ) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by reference and the second by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( (&Natural::from(6u8)).mod_mul_precomputed( Natural::from(7u32), &Natural::from(10u32), &data ), 2 ); assert_eq!( (&Natural::from(9u8)).mod_mul_precomputed( Natural::from(9u32), &Natural::from(10u32), &data ), 1 ); assert_eq!( (&Natural::from(4u8)).mod_mul_precomputed( Natural::from(7u32), &Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by reference and `c` is taken by value. #### type Output = Natural #### type Data = ModMulData ### impl<'a> ModMulPrecomputed<Natural, &'a Natural> for Natural #### fn precompute_mod_mul_data(m: &&Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: Natural, m: &'a Natural, data: &ModMulData ) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by value and the third by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( Natural::from(6u8).mod_mul_precomputed( Natural::from(7u32), &Natural::from(10u32), &data ), 2 ); assert_eq!( Natural::from(9u8).mod_mul_precomputed( Natural::from(9u32), &Natural::from(10u32), &data ), 1 ); assert_eq!( Natural::from(4u8).mod_mul_precomputed( Natural::from(7u32), &Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by value and `m` is taken by reference. #### type Output = Natural #### type Data = ModMulData ### impl<'a> ModMulPrecomputed<Natural, Natural> for &'a Natural #### fn precompute_mod_mul_data(m: &Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory.
This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: Natural, m: Natural, data: &ModMulData ) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by reference and the second and third by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( (&Natural::from(6u8)).mod_mul_precomputed( Natural::from(7u32), Natural::from(10u32), &data ), 2 ); assert_eq!( (&Natural::from(9u8)).mod_mul_precomputed( Natural::from(9u32), Natural::from(10u32), &data ), 1 ); assert_eq!( (&Natural::from(4u8)).mod_mul_precomputed( Natural::from(7u32), Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by reference and `c` and `m` are taken by value. #### type Output = Natural #### type Data = ModMulData ### impl ModMulPrecomputed<Natural, Natural> for Natural #### fn precompute_mod_mul_data(m: &Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: Natural, m: Natural, data: &ModMulData ) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( Natural::from(6u8).mod_mul_precomputed( Natural::from(7u32), Natural::from(10u32), &data ), 2 ); assert_eq!( Natural::from(9u8).mod_mul_precomputed( Natural::from(9u32), Natural::from(10u32), &data ), 1 ); assert_eq!( Natural::from(4u8).mod_mul_precomputed( Natural::from(7u32), Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value. #### type Output = Natural #### type Data = ModMulData ### impl<'a> ModMulPrecomputedAssign<&'a Natural, Natural> for Natural #### fn mod_mul_precomputed_assign( &mut self, other: &'a Natural, m: Natural, data: &ModMulData ) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by reference and the second by value.
Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign}; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); let mut x = Natural::from(6u8); x.mod_mul_precomputed_assign(&Natural::from(7u32), Natural::from(10u32), &data); assert_eq!(x, 2); let mut x = Natural::from(9u8); x.mod_mul_precomputed_assign(&Natural::from(9u32), Natural::from(10u32), &data); assert_eq!(x, 1); let mut x = Natural::from(4u8); x.mod_mul_precomputed_assign(&Natural::from(7u32), Natural::from(10u32), &data); assert_eq!(x, 8); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by value, `c` is taken by reference, and `a == b`. ### impl<'a, 'b> ModMulPrecomputedAssign<&'a Natural, &'b Natural> for Natural #### fn mod_mul_precomputed_assign( &mut self, other: &'a Natural, m: &'b Natural, data: &ModMulData ) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign}; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); let mut x = Natural::from(6u8); x.mod_mul_precomputed_assign(&Natural::from(7u32), &Natural::from(10u32), &data); assert_eq!(x, 2); let mut x = Natural::from(9u8); x.mod_mul_precomputed_assign(&Natural::from(9u32), &Natural::from(10u32), &data); assert_eq!(x, 1); let mut x = Natural::from(4u8); x.mod_mul_precomputed_assign(&Natural::from(7u32), &Natural::from(10u32), &data); assert_eq!(x, 8); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by value, `c` and `m` are taken by reference, and `a == b`. ### impl<'a> ModMulPrecomputedAssign<Natural, &'a Natural> for Natural #### fn mod_mul_precomputed_assign( &mut self, other: Natural, m: &'a Natural, data: &ModMulData ) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
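A supplementary sketch, not from the documented examples, of the intended usage pattern: precompute the data once and reuse it for a series of multiplications with the same modulus, here with the in-place, by-reference-modulus variant described above.

```
use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign};
use malachite_nz::natural::Natural;

let m = Natural::from(10u32);
// Precompute once for the modulus m...
let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&m);
let mut acc = Natural::from(1u32);
// ...then reuse the same data for every multiplication modulo m.
for k in 2u32..=6 {
    acc.mod_mul_precomputed_assign(Natural::from(k), &m, &data);
}
assert_eq!(acc, 0); // 2 * 3 * 4 * 5 * 6 == 720, and 720 mod 10 == 0
```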
##### Examples ``` use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign}; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); let mut x = Natural::from(6u8); x.mod_mul_precomputed_assign(Natural::from(7u32), &Natural::from(10u32), &data); assert_eq!(x, 2); let mut x = Natural::from(9u8); x.mod_mul_precomputed_assign(Natural::from(9u32), &Natural::from(10u32), &data); assert_eq!(x, 1); let mut x = Natural::from(4u8); x.mod_mul_precomputed_assign(Natural::from(7u32), &Natural::from(10u32), &data); assert_eq!(x, 8); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by value, `m` is taken by reference, and `a == b`. ### impl ModMulPrecomputedAssign<Natural, Natural> for Natural #### fn mod_mul_precomputed_assign( &mut self, other: Natural, m: Natural, data: &ModMulData ) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign}; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); let mut x = Natural::from(6u8); x.mod_mul_precomputed_assign(Natural::from(7u32), Natural::from(10u32), &data); assert_eq!(x, 2); let mut x = Natural::from(9u8); x.mod_mul_precomputed_assign(Natural::from(9u32), Natural::from(10u32), &data); assert_eq!(x, 1); let mut x = Natural::from(4u8); x.mod_mul_precomputed_assign(Natural::from(7u32), Natural::from(10u32), &data); assert_eq!(x, 8); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value and `a == b`. ### impl<'a, 'b> ModNeg<&'b Natural> for &'a Natural #### fn mod_neg(self, m: &'b Natural) -> Natural Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNeg, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_neg(&Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).mod_neg(&Natural::from(10u32)), 3); assert_eq!( (&Natural::from(7u32)).mod_neg(&Natural::from(10u32).pow(12)), 999999999993u64 ); ``` #### type Output = Natural ### impl<'a> ModNeg<&'a Natural> for Natural #### fn mod_neg(self, m: &'a Natural) -> Natural Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNeg, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_neg(&Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).mod_neg(&Natural::from(10u32)), 3); assert_eq!(Natural::from(7u32).mod_neg(&Natural::from(10u32).pow(12)), 999999999993u64); ``` #### type Output = Natural ### impl<'a> ModNeg<Natural> for &'a Natural #### fn mod_neg(self, m: Natural) -> Natural Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNeg, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_neg(Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).mod_neg(Natural::from(10u32)), 3); assert_eq!((&Natural::from(7u32)).mod_neg(Natural::from(10u32).pow(12)), 999999999993u64); ``` #### type Output = Natural ### impl ModNeg<Natural> for Natural #### fn mod_neg(self, m: Natural) -> Natural Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, m) = y$, where $x, y < m$ and $-x \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNeg, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_neg(Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).mod_neg(Natural::from(10u32)), 3); assert_eq!(Natural::from(7u32).mod_neg(Natural::from(10u32).pow(12)), 999999999993u64); ``` #### type Output = Natural ### impl<'a> ModNegAssign<&'a Natural> for Natural #### fn mod_neg_assign(&mut self, m: &'a Natural) Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $-x \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNegAssign, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut n = Natural::ZERO; n.mod_neg_assign(&Natural::from(5u32)); assert_eq!(n, 0); let mut n = Natural::from(7u32); n.mod_neg_assign(&Natural::from(10u32)); assert_eq!(n, 3); let mut n = Natural::from(7u32); n.mod_neg_assign(&Natural::from(10u32).pow(12)); assert_eq!(n, 999999999993u64); ``` ### impl ModNegAssign<Natural> for Natural #### fn mod_neg_assign(&mut self, m: Natural) Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $-x \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
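As a quick sanity check on the `mod_neg` family above: modular negation is an involution, so applying it twice returns the original value. A hedged sketch using only impls documented in this section:

```
use malachite_base::num::arithmetic::traits::ModNeg;
use malachite_nz::natural::Natural;

// mod_neg is its own inverse: -(-x) ≡ x (mod m) for any x < m.
let m = Natural::from(10u32);
let x = Natural::from(7u32);
assert_eq!((&x).mod_neg(&m), 3);
assert_eq!((&x).mod_neg(&m).mod_neg(&m), x);
```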
##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNegAssign, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut n = Natural::ZERO; n.mod_neg_assign(Natural::from(5u32)); assert_eq!(n, 0); let mut n = Natural::from(7u32); n.mod_neg_assign(Natural::from(10u32)); assert_eq!(n, 3); let mut n = Natural::from(7u32); n.mod_neg_assign(Natural::from(10u32).pow(12)); assert_eq!(n, 999999999993u64); ``` ### impl<'a> ModPow<&'a Natural, Natural> for Natural #### fn mod_pow(self, exp: &'a Natural, m: Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first and third `Natural`s are taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(4u32).mod_pow(&Natural::from(13u32), Natural::from(497u32)), 445); assert_eq!(Natural::from(10u32).mod_pow(&Natural::from(1000u32), Natural::from(30u32)), 10); ``` #### type Output = Natural ### impl<'a, 'b, 'c> ModPow<&'b Natural, &'c Natural> for &'a Natural #### fn mod_pow(self, exp: &'b Natural, m: &'c Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. All three `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_pow(&Natural::from(13u32), &Natural::from(497u32)), 445 ); assert_eq!( (&Natural::from(10u32)).mod_pow(&Natural::from(1000u32), &Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl<'a, 'b> ModPow<&'a Natural, &'b Natural> for Natural #### fn mod_pow(self, exp: &'a Natural, m: &'b Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first `Natural` is taken by value and the second and third by reference. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(4u32).mod_pow(&Natural::from(13u32), &Natural::from(497u32)), 445); assert_eq!( Natural::from(10u32).mod_pow(&Natural::from(1000u32), &Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl<'a, 'b> ModPow<&'b Natural, Natural> for &'a Natural #### fn mod_pow(self, exp: &'b Natural, m: Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first two `Natural`s are taken by reference and the third by value. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_pow(&Natural::from(13u32), Natural::from(497u32)), 445 ); assert_eq!( (&Natural::from(10u32)).mod_pow(&Natural::from(1000u32), Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl<'a, 'b> ModPow<Natural, &'b Natural> for &'a Natural #### fn mod_pow(self, exp: Natural, m: &'b Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first and third `Natural`s are taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_pow(Natural::from(13u32), &Natural::from(497u32)), 445 ); assert_eq!( (&Natural::from(10u32)).mod_pow(Natural::from(1000u32), &Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl<'a> ModPow<Natural, &'a Natural> for Natural #### fn mod_pow(self, exp: Natural, m: &'a Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first two `Natural`s are taken by value and the third by reference. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(4u32).mod_pow(Natural::from(13u32), &Natural::from(497u32)), 445); assert_eq!(Natural::from(10u32).mod_pow(Natural::from(1000u32), &Natural::from(30u32)), 10); ``` #### type Output = Natural ### impl<'a> ModPow<Natural, Natural> for &'a Natural #### fn mod_pow(self, exp: Natural, m: Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first `Natural` is taken by reference and the second and third by value. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$.
$f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(4u32).mod_pow(Natural::from(13u32), Natural::from(497u32)), 445); assert_eq!(Natural::from(10u32).mod_pow(Natural::from(1000u32), Natural::from(30u32)), 10); ``` #### type Output = Natural ### impl<'a> ModPowAssign<&'a Natural, Natural> for Natural #### fn mod_pow_assign(&mut self, exp: &'a Natural, m: Natural) Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$, in place. Assumes the input is already reduced mod $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. $x \gets y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_pow_assign(&Natural::from(13u32), Natural::from(497u32)); assert_eq!(x, 445); let mut x = Natural::from(10u32); x.mod_pow_assign(&Natural::from(1000u32), Natural::from(30u32)); assert_eq!(x, 10); ``` ### impl<'a, 'b> ModPowAssign<&'a Natural, &'b Natural> for Natural #### fn mod_pow_assign(&mut self, exp: &'a Natural, m: &'b Natural) Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$, in place. Assumes the input is already reduced mod $m$. Both `Natural`s on the right-hand side are taken by reference. $x \gets y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_pow_assign(&Natural::from(13u32), &Natural::from(497u32)); assert_eq!(x, 445); let mut x = Natural::from(10u32); x.mod_pow_assign(&Natural::from(1000u32), &Natural::from(30u32)); assert_eq!(x, 10); ``` ### impl<'a> ModPowAssign<Natural, &'a Natural> for Natural #### fn mod_pow_assign(&mut self, exp: Natural, m: &'a Natural) Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$, in place. Assumes the input is already reduced mod $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. $x \gets y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. 
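A typical application of the `mod_pow` family above is a number-theoretic identity check; for instance, Fermat's little theorem states $a^{p-1} \equiv 1 \mod p$ for prime $p$ and $\gcd(a, p) = 1$. A small illustrative sketch (the choice $p = 101$, $a = 2$ is an assumption for the example, not from this documentation):

```
use malachite_base::num::arithmetic::traits::ModPow;
use malachite_nz::natural::Natural;

// Fermat's little theorem with p = 101 (prime) and a = 2: 2^100 ≡ 1 (mod 101).
assert_eq!(
    Natural::from(2u32).mod_pow(Natural::from(100u32), Natural::from(101u32)),
    1
);
```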
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_pow_assign(Natural::from(13u32), &Natural::from(497u32)); assert_eq!(x, 445); let mut x = Natural::from(10u32); x.mod_pow_assign(Natural::from(1000u32), &Natural::from(30u32)); assert_eq!(x, 10); ``` ### impl ModPowAssign<Natural, Natural> for Natural #### fn mod_pow_assign(&mut self, exp: Natural, m: Natural) Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$, in place. Assumes the input is already reduced mod $m$. Both `Natural`s on the right-hand side are taken by value. $x \gets y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_pow_assign(Natural::from(13u32), Natural::from(497u32)); assert_eq!(x, 445); let mut x = Natural::from(10u32); x.mod_pow_assign(Natural::from(1000u32), Natural::from(30u32)); assert_eq!(x, 10); ``` ### impl<'a> ModPowerOf2 for &'a Natural #### fn mod_power_of_2(self, pow: u64) -> Natural Divides a `Natural` by $2^k$, returning just the remainder. The `Natural` is taken by reference. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 assert_eq!((&Natural::from(260u32)).mod_power_of_2(8), 4); // 100 * 2^4 + 11 = 1611 assert_eq!((&Natural::from(1611u32)).mod_power_of_2(4), 11); ``` #### type Output = Natural ### impl ModPowerOf2 for Natural #### fn mod_power_of_2(self, pow: u64) -> Natural Divides a `Natural` by $2^k$, returning just the remainder. The `Natural` is taken by value. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 assert_eq!(Natural::from(260u32).mod_power_of_2(8), 4); // 100 * 2^4 + 11 = 1611 assert_eq!(Natural::from(1611u32).mod_power_of_2(4), 11); ``` #### type Output = Natural ### impl<'a, 'b> ModPowerOf2Add<&'a Natural> for &'b Natural #### fn mod_power_of_2_add(self, other: &'a Natural, pow: u64) -> Natural Adds two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by reference. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Add; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_power_of_2_add(&Natural::from(2u32), 5), 2); assert_eq!((&Natural::from(10u32)).mod_power_of_2_add(&Natural::from(14u32), 4), 8); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Add<&'a Natural> for Natural #### fn mod_power_of_2_add(self, other: &'a Natural, pow: u64) -> Natural Adds two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by value and the second by reference. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Add; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_power_of_2_add(&Natural::from(2u32), 5), 2); assert_eq!(Natural::from(10u32).mod_power_of_2_add(&Natural::from(14u32), 4), 8); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Add<Natural> for &'a Natural #### fn mod_power_of_2_add(self, other: Natural, pow: u64) -> Natural Adds two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by reference and the second by value. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Add; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_power_of_2_add(Natural::from(2u32), 5), 2); assert_eq!((&Natural::from(10u32)).mod_power_of_2_add(Natural::from(14u32), 4), 8); ``` #### type Output = Natural ### impl ModPowerOf2Add<Natural> for Natural #### fn mod_power_of_2_add(self, other: Natural, pow: u64) -> Natural Adds two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by value. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Add; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_power_of_2_add(Natural::from(2u32), 5), 2); assert_eq!(Natural::from(10u32).mod_power_of_2_add(Natural::from(14u32), 4), 8); ``` #### type Output = Natural ### impl<'a> ModPowerOf2AddAssign<&'a Natural> for Natural #### fn mod_power_of_2_add_assign(&mut self, other: &'a Natural, pow: u64) Adds two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by reference. $x \gets z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2AddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_power_of_2_add_assign(&Natural::from(2u32), 5); assert_eq!(x, 2); let mut x = Natural::from(10u32); x.mod_power_of_2_add_assign(&Natural::from(14u32), 4); assert_eq!(x, 8); ``` ### impl ModPowerOf2AddAssign<Natural> for Natural #### fn mod_power_of_2_add_assign(&mut self, other: Natural, pow: u64) Adds two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by value. $x \gets z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2AddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_power_of_2_add_assign(Natural::from(2u32), 5); assert_eq!(x, 2); let mut x = Natural::from(10u32); x.mod_power_of_2_add_assign(Natural::from(14u32), 4); assert_eq!(x, 8); ``` ### impl ModPowerOf2Assign for Natural #### fn mod_power_of_2_assign(&mut self, pow: u64) Divides a `Natural` by $2^k$, replacing the `Natural` by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Assign; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 let mut x = Natural::from(260u32); x.mod_power_of_2_assign(8); assert_eq!(x, 4); // 100 * 2^4 + 11 = 1611 let mut x = Natural::from(1611u32); x.mod_power_of_2_assign(4); assert_eq!(x, 11); ``` ### impl<'a> ModPowerOf2Inverse for &'a Natural #### fn mod_power_of_2_inverse(self, pow: u64) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo $2^k$. Assumes the `Natural` is already reduced modulo $2^k$. The `Natural` is taken by reference. Returns `None` if $x$ is even. $f(x, k) = y$, where $x, y < 2^k$, $x$ is odd, and $xy \equiv 1 \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Inverse; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_inverse(8), Some(Natural::from(171u32))); assert_eq!((&Natural::from(4u32)).mod_power_of_2_inverse(8), None); ``` #### type Output = Natural ### impl ModPowerOf2Inverse for Natural #### fn mod_power_of_2_inverse(self, pow: u64) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo $2^k$. Assumes the `Natural` is already reduced modulo $2^k$. The `Natural` is taken by value. Returns `None` if $x$ is even. $f(x, k) = y$, where $x, y < 2^k$, $x$ is odd, and $xy \equiv 1 \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Inverse; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_inverse(8), Some(Natural::from(171u32))); assert_eq!(Natural::from(4u32).mod_power_of_2_inverse(8), None); ``` #### type Output = Natural ### impl ModPowerOf2IsReduced for Natural #### fn mod_power_of_2_is_reduced(&self, pow: u64) -> bool Returns whether a `Natural` is reduced modulo $2^k$; in other words, whether it has no more than $k$ significant bits. $f(x, k) = (x < 2^k)$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModPowerOf2IsReduced, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_power_of_2_is_reduced(5), true); assert_eq!(Natural::from(10u32).pow(12).mod_power_of_2_is_reduced(39), false); assert_eq!(Natural::from(10u32).pow(12).mod_power_of_2_is_reduced(40), true); ``` ### impl<'a, 'b> ModPowerOf2Mul<&'b Natural> for &'a Natural #### fn mod_power_of_2_mul(self, other: &'b Natural, pow: u64) -> Natural Multiplies two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by reference. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Mul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_mul(&Natural::from(2u32), 5), 6); assert_eq!((&Natural::from(10u32)).mod_power_of_2_mul(&Natural::from(14u32), 4), 12); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Mul<&'a Natural> for Natural #### fn mod_power_of_2_mul(self, other: &'a Natural, pow: u64) -> Natural Multiplies two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by value and the second by reference. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Mul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_mul(&Natural::from(2u32), 5), 6); assert_eq!(Natural::from(10u32).mod_power_of_2_mul(&Natural::from(14u32), 4), 12); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Mul<Natural> for &'a Natural #### fn mod_power_of_2_mul(self, other: Natural, pow: u64) -> Natural Multiplies two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by reference and the second by value. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
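`mod_power_of_2_inverse` and `mod_power_of_2_mul` above fit together: multiplying an odd value by its inverse modulo $2^k$ gives 1. A hedged sketch combining the two impls documented in this section:

```
use malachite_base::num::arithmetic::traits::{ModPowerOf2Inverse, ModPowerOf2Mul};
use malachite_nz::natural::Natural;

// 171 is the inverse of 3 modulo 2^8, so multiplying them back modulo 2^8 gives 1.
let x = Natural::from(3u32);
let inv = (&x).mod_power_of_2_inverse(8).unwrap();
assert_eq!(inv, 171);
assert_eq!((&x).mod_power_of_2_mul(&inv, 8), 1);
```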
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Mul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_mul(Natural::from(2u32), 5), 6); assert_eq!((&Natural::from(10u32)).mod_power_of_2_mul(Natural::from(14u32), 4), 12); ``` #### type Output = Natural ### impl ModPowerOf2Mul<Natural> for Natural #### fn mod_power_of_2_mul(self, other: Natural, pow: u64) -> Natural Multiplies two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by value. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Mul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_mul(Natural::from(2u32), 5), 6); assert_eq!(Natural::from(10u32).mod_power_of_2_mul(Natural::from(14u32), 4), 12); ``` #### type Output = Natural ### impl<'a> ModPowerOf2MulAssign<&'a Natural> for Natural #### fn mod_power_of_2_mul_assign(&mut self, other: &'a Natural, pow: u64) Multiplies two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by reference. $x \gets z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2MulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_power_of_2_mul_assign(&Natural::from(2u32), 5); assert_eq!(x, 6); let mut x = Natural::from(10u32); x.mod_power_of_2_mul_assign(&Natural::from(14u32), 4); assert_eq!(x, 12); ``` ### impl ModPowerOf2MulAssign<Natural> for Natural #### fn mod_power_of_2_mul_assign(&mut self, other: Natural, pow: u64) Multiplies two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by value. $x \gets z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2MulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_power_of_2_mul_assign(Natural::from(2u32), 5); assert_eq!(x, 6); let mut x = Natural::from(10u32); x.mod_power_of_2_mul_assign(Natural::from(14u32), 4); assert_eq!(x, 12); ``` ### impl<'a> ModPowerOf2Neg for &'a Natural #### fn mod_power_of_2_neg(self, pow: u64) -> Natural Negates a `Natural` modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, k) = y$, where $x, y < 2^k$ and $-x \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
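Composing `mod_power_of_2_neg` with `mod_power_of_2_add` yields a modular difference. A sketch built only from the operations documented in this section (whether a dedicated modular-subtraction method exists is not addressed here):

```
use malachite_base::num::arithmetic::traits::{ModPowerOf2Add, ModPowerOf2Neg};
use malachite_nz::natural::Natural;

// (10 - 14) mod 2^4 = 12, computed as 10 + ((-14) mod 2^4) = 10 + 2.
let diff = Natural::from(10u32)
    .mod_power_of_2_add(Natural::from(14u32).mod_power_of_2_neg(4), 4);
assert_eq!(diff, 12);
```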
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Neg; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_power_of_2_neg(5), 0); assert_eq!((&Natural::ZERO).mod_power_of_2_neg(100), 0); assert_eq!((&Natural::from(100u32)).mod_power_of_2_neg(8), 156); assert_eq!( (&Natural::from(100u32)).mod_power_of_2_neg(100).to_string(), "1267650600228229401496703205276" ); ``` #### type Output = Natural ### impl ModPowerOf2Neg for Natural #### fn mod_power_of_2_neg(self, pow: u64) -> Natural Negates a `Natural` modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, k) = y$, where $x, y < 2^k$ and $-x \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Neg; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_power_of_2_neg(5), 0); assert_eq!(Natural::ZERO.mod_power_of_2_neg(100), 0); assert_eq!(Natural::from(100u32).mod_power_of_2_neg(8), 156); assert_eq!( Natural::from(100u32).mod_power_of_2_neg(100).to_string(), "1267650600228229401496703205276" ); ``` #### type Output = Natural ### impl ModPowerOf2NegAssign for Natural #### fn mod_power_of_2_neg_assign(&mut self, pow: u64) Negates a `Natural` modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^p$ and $-x \equiv y \mod 2^p$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2NegAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut n = Natural::ZERO; n.mod_power_of_2_neg_assign(5); assert_eq!(n, 0); let mut n = Natural::ZERO; n.mod_power_of_2_neg_assign(100); assert_eq!(n, 0); let mut n = Natural::from(100u32); n.mod_power_of_2_neg_assign(8); assert_eq!(n, 156); let mut n = Natural::from(100u32); n.mod_power_of_2_neg_assign(100); assert_eq!(n.to_string(), "1267650600228229401496703205276"); ``` ### impl<'a, 'b> ModPowerOf2Pow<&'b Natural> for &'a Natural #### fn mod_power_of_2_pow(self, exp: &Natural, pow: u64) -> Natural Raises a `Natural` to a `Natural` power modulo $2^k$. Assumes the input is already reduced mod $2^k$. Both `Natural`s are taken by reference. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Pow; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_pow(&Natural::from(10u32), 8), 169); assert_eq!( (&Natural::from(11u32)).mod_power_of_2_pow(&Natural::from(1000u32), 30), 289109473 ); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Pow<&'a Natural> for Natural #### fn mod_power_of_2_pow(self, exp: &Natural, pow: u64) -> Natural Raises a `Natural` to a `Natural` power modulo $2^k$. Assumes the input is already reduced mod $2^k$. The first `Natural` is taken by value and the second by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Pow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_pow(&Natural::from(10u32), 8), 169); assert_eq!( Natural::from(11u32).mod_power_of_2_pow(&Natural::from(1000u32), 30), 289109473 ); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Pow<Natural> for &'a Natural #### fn mod_power_of_2_pow(self, exp: Natural, pow: u64) -> Natural Raises a `Natural` to a `Natural` power modulo $2^k$. Assumes the input is already reduced mod $2^k$. The first `Natural` is taken by reference and the second by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Pow; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_pow(Natural::from(10u32), 8), 169); assert_eq!( (&Natural::from(11u32)).mod_power_of_2_pow(Natural::from(1000u32), 30), 289109473 ); ``` #### type Output = Natural ### impl ModPowerOf2Pow<Natural> for Natural #### fn mod_power_of_2_pow(self, exp: Natural, pow: u64) -> Natural Raises a `Natural` to a `Natural` power modulo $2^k$. Assumes the input is already reduced mod $2^k$. Both `Natural`s are taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Pow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_pow(Natural::from(10u32), 8), 169); assert_eq!( Natural::from(11u32).mod_power_of_2_pow(Natural::from(1000u32), 30), 289109473 ); ``` #### type Output = Natural ### impl<'a> ModPowerOf2PowAssign<&'a Natural> for Natural #### fn mod_power_of_2_pow_assign(&mut self, exp: &Natural, pow: u64) Raises a `Natural` to a `Natural` power modulo $2^k$, in place. Assumes the input is already reduced mod $2^k$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2PowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_power_of_2_pow_assign(&Natural::from(10u32), 8); assert_eq!(x, 169); let mut x = Natural::from(11u32); x.mod_power_of_2_pow_assign(&Natural::from(1000u32), 30); assert_eq!(x, 289109473); ``` ### impl ModPowerOf2PowAssign<Natural> for Natural #### fn mod_power_of_2_pow_assign(&mut self, exp: Natural, pow: u64) Raises a `Natural` to a `Natural` power modulo $2^k$, in place. Assumes the input is already reduced mod $2^k$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2PowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_power_of_2_pow_assign(Natural::from(10u32), 8); assert_eq!(x, 169); let mut x = Natural::from(11u32); x.mod_power_of_2_pow_assign(Natural::from(1000u32), 30); assert_eq!(x, 289109473); ``` ### impl<'a> ModPowerOf2Shl<i128> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i128, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i128> for Natural #### fn mod_power_of_2_shl(self, bits: i128, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<i16> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i16, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i16> for Natural #### fn mod_power_of_2_shl(self, bits: i16, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<i32> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i32, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i32> for Natural #### fn mod_power_of_2_shl(self, bits: i32, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<i64> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i64, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i64> for Natural #### fn mod_power_of_2_shl(self, bits: i64, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<i8> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i8, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i8> for Natural #### fn mod_power_of_2_shl(self, bits: i8, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<isize> for &'a Natural #### fn mod_power_of_2_shl(self, bits: isize, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<isize> for Natural #### fn mod_power_of_2_shl(self, bits: isize, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u128> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u128, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. 
The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u128> for Natural #### fn mod_power_of_2_shl(self, bits: u128, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u16> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u16, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u16> for Natural #### fn mod_power_of_2_shl(self, bits: u16, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u32> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u32, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u32> for Natural #### fn mod_power_of_2_shl(self, bits: u32, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u64> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u64, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u64> for Natural #### fn mod_power_of_2_shl(self, bits: u64, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. 
The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u8> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u8, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u8> for Natural #### fn mod_power_of_2_shl(self, bits: u8, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<usize> for &'a Natural #### fn mod_power_of_2_shl(self, bits: usize, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<usize> for Natural #### fn mod_power_of_2_shl(self, bits: usize, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2ShlAssign<i128> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i128, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<i16> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i16, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<i32> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i32, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<i64> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i64, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<i8> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i8, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<isize> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: isize, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u128> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u128, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u16> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u16, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u32> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u32, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u64> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u64, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u8> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u8, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. 
Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<usize> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: usize, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl<'a> ModPowerOf2Shr<i128> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i128, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i128> for Natural #### fn mod_power_of_2_shr(self, bits: i128, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<i16> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i16, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i16> for Natural #### fn mod_power_of_2_shr(self, bits: i16, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<i32> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i32, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i32> for Natural #### fn mod_power_of_2_shr(self, bits: i32, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. 
The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<i64> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i64, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i64> for Natural #### fn mod_power_of_2_shr(self, bits: i64, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<i8> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i8, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i8> for Natural #### fn mod_power_of_2_shr(self, bits: i8, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<isize> for &'a Natural #### fn mod_power_of_2_shr(self, bits: isize, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<isize> for Natural #### fn mod_power_of_2_shr(self, bits: isize, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. 
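A minimal sketch tying the power-of-2 shift traits together, with illustrative values (not taken from the crate's own examples) and assuming the traits are importable from `malachite_base::num::arithmetic::traits` like the other imports on this page:

```
use malachite_base::num::arithmetic::traits::{
    ModPowerOf2Shl, ModPowerOf2ShlAssign, ModPowerOf2Shr,
};
use malachite_nz::natural::Natural;

// Left shift modulo 2^5: 6 * 2^3 = 48, and 48 mod 32 = 16.
assert_eq!(Natural::from(6u32).mod_power_of_2_shl(3u8, 5), 16);

// The in-place variant gives the same result.
let mut n = Natural::from(6u32);
n.mod_power_of_2_shl_assign(3u8, 5);
assert_eq!(n, 16);

// Right shift modulo 2^5: floor(21 / 2^2) = 5.
assert_eq!(Natural::from(21u32).mod_power_of_2_shr(2i8, 5), 5);

// A negative signed shift amount reverses the direction:
// shifting right by -3 is the same as shifting left by 3.
assert_eq!(Natural::from(6u32).mod_power_of_2_shr(-3i8, 5), 16);
```

Note that the operands are expected to be already reduced modulo $2^k$; here both 6 and 21 are below $2^5 = 32$.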
#### type Output = Natural ### impl ModPowerOf2ShrAssign<i128> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i128, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<i16> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i16, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<i32> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i32, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<i64> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i64, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<i8> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i8, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<isize> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: isize, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl<'a> ModPowerOf2Square for &'a Natural #### fn mod_power_of_2_square(self, pow: u64) -> Natural Squares a `Natural` modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, k) = y$, where $x, y < 2^k$ and $x^2 \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. 
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Square;
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;
use std::str::FromStr;

assert_eq!((&Natural::ZERO).mod_power_of_2_square(2), 0);
assert_eq!((&Natural::from(5u32)).mod_power_of_2_square(3), 1);
assert_eq!(
    (&Natural::from_str("12345678987654321").unwrap())
        .mod_power_of_2_square(64).to_string(),
    "16556040056090124897"
);
```
#### type Output = Natural

### impl ModPowerOf2Square for Natural

#### fn mod_power_of_2_square(self, pow: u64) -> Natural

Squares a `Natural` modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value.

$f(x, k) = y$, where $x, y < 2^k$ and $x^2 \equiv y \mod 2^k$.

##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.

##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Square;
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;
use std::str::FromStr;

assert_eq!(Natural::ZERO.mod_power_of_2_square(2), 0);
assert_eq!(Natural::from(5u32).mod_power_of_2_square(3), 1);
assert_eq!(
    Natural::from_str("12345678987654321").unwrap().mod_power_of_2_square(64).to_string(),
    "16556040056090124897"
);
```
#### type Output = Natural

### impl ModPowerOf2SquareAssign for Natural

#### fn mod_power_of_2_square_assign(&mut self, pow: u64)

Squares a `Natural` modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$.

$x \gets y$, where $x, y < 2^k$ and $x^2 \equiv y \mod 2^k$.

##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.

##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2SquareAssign;
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;
use std::str::FromStr;

let mut n = Natural::ZERO;
n.mod_power_of_2_square_assign(2);
assert_eq!(n, 0);

let mut n = Natural::from(5u32);
n.mod_power_of_2_square_assign(3);
assert_eq!(n, 1);

let mut n = Natural::from_str("12345678987654321").unwrap();
n.mod_power_of_2_square_assign(64);
assert_eq!(n.to_string(), "16556040056090124897");
```
### impl<'a, 'b> ModPowerOf2Sub<&'a Natural> for &'b Natural

#### fn mod_power_of_2_sub(self, other: &'a Natural, pow: u64) -> Natural

Subtracts two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by reference.

$f(x, y, k) = z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.

##### Worst-case complexity
$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.

##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Sub;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

assert_eq!((&Natural::from(10u32)).mod_power_of_2_sub(&Natural::TWO, 4), 8);
assert_eq!((&Natural::from(56u32)).mod_power_of_2_sub(&Natural::from(123u32), 9), 445);
```
#### type Output = Natural

### impl<'a> ModPowerOf2Sub<&'a Natural> for Natural

#### fn mod_power_of_2_sub(self, other: &'a Natural, pow: u64) -> Natural

Subtracts two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by value and the second by reference.

$f(x, y, k) = z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.

##### Worst-case complexity
$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Sub;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

assert_eq!(Natural::from(10u32).mod_power_of_2_sub(&Natural::TWO, 4), 8);
assert_eq!(Natural::from(56u32).mod_power_of_2_sub(&Natural::from(123u32), 9), 445);
```
#### type Output = Natural

### impl<'a> ModPowerOf2Sub<Natural> for &'a Natural

#### fn mod_power_of_2_sub(self, other: Natural, pow: u64) -> Natural

Subtracts two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by reference and the second by value.

$f(x, y, k) = z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.

##### Worst-case complexity
$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.

##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Sub;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

assert_eq!((&Natural::from(10u32)).mod_power_of_2_sub(Natural::TWO, 4), 8);
assert_eq!((&Natural::from(56u32)).mod_power_of_2_sub(Natural::from(123u32), 9), 445);
```
#### type Output = Natural

### impl ModPowerOf2Sub<Natural> for Natural

#### fn mod_power_of_2_sub(self, other: Natural, pow: u64) -> Natural

Subtracts two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by value.

$f(x, y, k) = z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.

##### Worst-case complexity
$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.

##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Sub;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

assert_eq!(Natural::from(10u32).mod_power_of_2_sub(Natural::TWO, 4), 8);
assert_eq!(Natural::from(56u32).mod_power_of_2_sub(Natural::from(123u32), 9), 445);
```
#### type Output = Natural

### impl<'a> ModPowerOf2SubAssign<&'a Natural> for Natural

#### fn mod_power_of_2_sub_assign(&mut self, other: &'a Natural, pow: u64)

Subtracts two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by reference.

$x \gets z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.

##### Worst-case complexity
$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.

##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2SubAssign;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

let mut x = Natural::from(10u32);
x.mod_power_of_2_sub_assign(&Natural::TWO, 4);
assert_eq!(x, 8);

let mut x = Natural::from(56u32);
x.mod_power_of_2_sub_assign(&Natural::from(123u32), 9);
assert_eq!(x, 445);
```
### impl ModPowerOf2SubAssign<Natural> for Natural

#### fn mod_power_of_2_sub_assign(&mut self, other: Natural, pow: u64)

Subtracts two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by value.

$x \gets z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.

##### Worst-case complexity
$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2SubAssign; use malachite_base::num::basic::traits::Two; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.mod_power_of_2_sub_assign(Natural::TWO, 4); assert_eq!(x, 8); let mut x = Natural::from(56u32); x.mod_power_of_2_sub_assign(Natural::from(123u32), 9); assert_eq!(x, 445); ``` ### impl ModShl<i128, Natural> for Natural #### fn mod_shl(self, bits: i128, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<i128, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i128, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i128, &'a Natural> for Natural #### fn mod_shl(self, bits: i128, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i128, Natural> for &'a Natural #### fn mod_shl(self, bits: i128, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<i16, Natural> for Natural #### fn mod_shl(self, bits: i16, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
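A minimal sketch of how a signed shift amount behaves in `mod_shl`, using illustrative values (not taken from the crate's own examples) and assuming `ModShl` is importable from `malachite_base::num::arithmetic::traits` like the other traits on this page:

```
use malachite_base::num::arithmetic::traits::ModShl;
use malachite_nz::natural::Natural;

// 7 * 2^2 = 28, and 28 mod 10 = 8.
assert_eq!(Natural::from(7u32).mod_shl(2i16, Natural::from(10u32)), 8);

// A negative shift amount shifts right instead: floor(7 / 2) = 3.
assert_eq!(Natural::from(7u32).mod_shl(-1i16, Natural::from(10u32)), 3);
```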
#### type Output = Natural ### impl<'a, 'b> ModShl<i16, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i16, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i16, &'a Natural> for Natural #### fn mod_shl(self, bits: i16, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i16, Natural> for &'a Natural #### fn mod_shl(self, bits: i16, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<i32, Natural> for Natural #### fn mod_shl(self, bits: i32, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<i32, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i32, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i32, &'a Natural> for Natural #### fn mod_shl(self, bits: i32, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i32, Natural> for &'a Natural #### fn mod_shl(self, bits: i32, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<i64, Natural> for Natural #### fn mod_shl(self, bits: i64, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<i64, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i64, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i64, &'a Natural> for Natural #### fn mod_shl(self, bits: i64, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i64, Natural> for &'a Natural #### fn mod_shl(self, bits: i64, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<i8, Natural> for Natural #### fn mod_shl(self, bits: i8, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. 
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<i8, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i8, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i8, &'a Natural> for Natural #### fn mod_shl(self, bits: i8, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i8, Natural> for &'a Natural #### fn mod_shl(self, bits: i8, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<isize, Natural> for Natural #### fn mod_shl(self, bits: isize, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<isize, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: isize, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<isize, &'a Natural> for Natural #### fn mod_shl(self, bits: isize, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. 
Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<isize, Natural> for &'a Natural #### fn mod_shl(self, bits: isize, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<u128, Natural> for Natural #### fn mod_shl(self, bits: u128, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u128, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u128, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u128, &'a Natural> for Natural #### fn mod_shl(self, bits: u128, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u128, Natural> for &'a Natural #### fn mod_shl(self, bits: u128, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
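The by-value and by-reference implementations differ only in whether the operands are consumed. A minimal sketch with illustrative values, under the same import assumption as above:

```
use malachite_base::num::arithmetic::traits::ModShl;
use malachite_nz::natural::Natural;

let x = Natural::from(5u32);
let m = Natural::from(21u32);

// By reference on both sides: neither x nor m is moved.
// 5 * 2^4 = 80, and 80 mod 21 = 17.
assert_eq!((&x).mod_shl(4u128, &m), 17);

// By value: x and m are consumed by the call.
assert_eq!(x.mod_shl(4u128, m), 17);
```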
#### type Output = Natural ### impl ModShl<u16, Natural> for Natural #### fn mod_shl(self, bits: u16, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u16, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u16, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u16, &'a Natural> for Natural #### fn mod_shl(self, bits: u16, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u16, Natural> for &'a Natural #### fn mod_shl(self, bits: u16, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<u32, Natural> for Natural #### fn mod_shl(self, bits: u32, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u32, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u32, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
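Since the result of `mod_shl` is always reduced below $m$, calls can be chained without intermediate growth. A minimal sketch with illustrative values, under the same import assumption:

```
use malachite_base::num::arithmetic::traits::ModShl;
use malachite_nz::natural::Natural;

let m = Natural::from(1_000_003u32);
let x = Natural::from(123u32);

// Shifting by 20 bits in one call...
let direct = (&x).mod_shl(20u32, &m);

// ...matches doubling modulo m twenty times.
let mut step = x.clone();
for _ in 0..20 {
    step = (&step).mod_shl(1u32, &m);
}
assert_eq!(direct, step);
```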
#### type Output = Natural ### impl<'a> ModShl<u32, &'a Natural> for Natural #### fn mod_shl(self, bits: u32, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u32, Natural> for &'a Natural #### fn mod_shl(self, bits: u32, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<u64, Natural> for Natural #### fn mod_shl(self, bits: u64, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u64, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u64, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u64, &'a Natural> for Natural #### fn mod_shl(self, bits: u64, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u64, Natural> for &'a Natural #### fn mod_shl(self, bits: u64, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. 
##### Examples See here. #### type Output = Natural ### impl ModShl<u8, Natural> for Natural #### fn mod_shl(self, bits: u8, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u8, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u8, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u8, &'a Natural> for Natural #### fn mod_shl(self, bits: u8, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u8, Natural> for &'a Natural #### fn mod_shl(self, bits: u8, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<usize, Natural> for Natural #### fn mod_shl(self, bits: usize, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<usize, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: usize, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
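When the modulus is itself a power of 2, `mod_shl` agrees with the specialized `mod_power_of_2_shl` documented earlier, which takes only the exponent. A minimal sketch with illustrative values, under the same import assumptions:

```
use malachite_base::num::arithmetic::traits::{ModPowerOf2Shl, ModShl};
use malachite_nz::natural::Natural;

// With m = 32 = 2^5, both calls compute 6 * 2^3 mod 32 = 16.
let x = Natural::from(6u32);
assert_eq!(
    (&x).mod_shl(3u8, Natural::from(32u32)),
    (&x).mod_power_of_2_shl(3u8, 5)
);
```

Per the worst-case bounds quoted above, the power-of-2 form is linear in `pow`, while the general form's bounds are stated in terms of `m.significant_bits()` and `bits`.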
#### type Output = Natural ### impl<'a> ModShl<usize, &'a Natural> for Natural #### fn mod_shl(self, bits: usize, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<usize, Natural> for &'a Natural #### fn mod_shl(self, bits: usize, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShlAssign<i128, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i128, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i128, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i128, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<i16, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i16, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i16, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i16, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<i32, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i32, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i32, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i32, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<i64, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i64, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i64, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i64, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<i8, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i8, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i8, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i8, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<isize, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: isize, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<isize, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: isize, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u128, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u128, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u128, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u128, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u16, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u16, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u16, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u16, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u32, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u32, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u32, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u32, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u64, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u64, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u64, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u64, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u8, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u8, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u8, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u8, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<usize, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: usize, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<usize, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: usize, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShr<i128, Natural> for Natural #### fn mod_shr(self, bits: i128, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i128, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i128, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i128, &'a Natural> for Natural #### fn mod_shr(self, bits: i128, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i128, Natural> for &'a Natural #### fn mod_shr(self, bits: i128, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. 
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<i16, Natural> for Natural #### fn mod_shr(self, bits: i16, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i16, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i16, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i16, &'a Natural> for Natural #### fn mod_shr(self, bits: i16, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i16, Natural> for &'a Natural #### fn mod_shr(self, bits: i16, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<i32, Natural> for Natural #### fn mod_shr(self, bits: i32, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i32, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i32, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. 
Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i32, &'a Natural> for Natural #### fn mod_shr(self, bits: i32, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i32, Natural> for &'a Natural #### fn mod_shr(self, bits: i32, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<i64, Natural> for Natural #### fn mod_shr(self, bits: i64, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i64, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i64, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i64, &'a Natural> for Natural #### fn mod_shr(self, bits: i64, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
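A minimal sketch of `mod_shr` with a signed shift amount, assuming the `ModShr<i32, &Natural>` implementation documented above together with the companion `ModShl` trait from the same module (not shown in this section):

```
use malachite_base::num::arithmetic::traits::{ModShl, ModShr};
use malachite_nz::natural::Natural;

let m = Natural::from(10u32);
// A non-negative shift amount is just floor division by a power of 2:
// floor(9 / 2^2) = 2.
assert_eq!((&Natural::from(9u32)).mod_shr(2i32, &m), 2);
// A negative shift amount shifts left instead: 9 * 2^1 = 18 ≡ 8 (mod 10),
// so mod_shr(-1) agrees with mod_shl(1).
assert_eq!((&Natural::from(9u32)).mod_shr(-1i32, &m), 8);
assert_eq!(
    (&Natural::from(9u32)).mod_shr(-1i32, &m),
    (&Natural::from(9u32)).mod_shl(1i32, &m)
);
```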
#### type Output = Natural ### impl<'a> ModShr<i64, Natural> for &'a Natural #### fn mod_shr(self, bits: i64, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<i8, Natural> for Natural #### fn mod_shr(self, bits: i8, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i8, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i8, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i8, &'a Natural> for Natural #### fn mod_shr(self, bits: i8, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i8, Natural> for &'a Natural #### fn mod_shr(self, bits: i8, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<isize, Natural> for Natural #### fn mod_shr(self, bits: isize, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<isize, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: isize, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<isize, &'a Natural> for Natural #### fn mod_shr(self, bits: isize, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<isize, Natural> for &'a Natural #### fn mod_shr(self, bits: isize, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShrAssign<i128, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i128, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i128, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i128, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<i16, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i16, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. 
The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i16, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i16, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<i32, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i32, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i32, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i32, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<i64, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i64, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i64, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i64, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<i8, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i8, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. 
Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i8, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i8, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<isize, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: isize, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<isize, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: isize, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a, 'b> ModSquare<&'b Natural> for &'a Natural #### fn mod_square(self, m: &'b Natural) -> Natural Squares a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, m) = y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquare; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(2u32)).mod_square(&Natural::from(10u32)), 4); assert_eq!((&Natural::from(100u32)).mod_square(&Natural::from(497u32)), 60); ``` #### type Output = Natural ### impl<'a> ModSquare<&'a Natural> for Natural #### fn mod_square(self, m: &'a Natural) -> Natural Squares a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, m) = y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquare; use malachite_nz::natural::Natural; assert_eq!(Natural::from(2u32).mod_square(&Natural::from(10u32)), 4); assert_eq!(Natural::from(100u32).mod_square(&Natural::from(497u32)), 60); ``` #### type Output = Natural ### impl<'a> ModSquare<Natural> for &'a Natural #### fn mod_square(self, m: Natural) -> Natural Squares a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, m) = y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquare; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(2u32)).mod_square(Natural::from(10u32)), 4); assert_eq!((&Natural::from(100u32)).mod_square(Natural::from(497u32)), 60); ``` #### type Output = Natural ### impl ModSquare<Natural> for Natural #### fn mod_square(self, m: Natural) -> Natural Squares a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, m) = y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquare; use malachite_nz::natural::Natural; assert_eq!(Natural::from(2u32).mod_square(Natural::from(10u32)), 4); assert_eq!(Natural::from(100u32).mod_square(Natural::from(497u32)), 60); ``` #### type Output = Natural ### impl<'a> ModSquareAssign<&'a Natural> for Natural #### fn mod_square_assign(&mut self, m: &'a Natural) Squares a `Natural` modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquareAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(2u32); x.mod_square_assign(&Natural::from(10u32)); assert_eq!(x, 4); let mut x = Natural::from(100u32); x.mod_square_assign(&Natural::from(497u32)); assert_eq!(x, 60); ``` ### impl ModSquareAssign<Natural> for Natural #### fn mod_square_assign(&mut self, m: Natural) Squares a `Natural` modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
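As a sketch of how `mod_square` relates to general modular multiplication (assuming the companion `ModMul` trait from the same arithmetic module; this is not one of the crate's own examples):

```
use malachite_base::num::arithmetic::traits::{ModMul, ModSquare};
use malachite_nz::natural::Natural;

let x = Natural::from(100u32);
let m = Natural::from(497u32);
// 100^2 = 10000 = 20 * 497 + 60, so the result is 60.
assert_eq!((&x).mod_square(&m), 60);
// Squaring modulo m is the special case of modular multiplication with
// both factors equal.
assert_eq!((&x).mod_square(&m), (&x).mod_mul(&x, &m));
```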
##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquareAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(2u32); x.mod_square_assign(Natural::from(10u32)); assert_eq!(x, 4); let mut x = Natural::from(100u32); x.mod_square_assign(Natural::from(497u32)); assert_eq!(x, 60); ``` ### impl<'a> ModSub<&'a Natural, Natural> for Natural #### fn mod_sub(self, other: &'a Natural, m: Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by value and the second by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( Natural::from(4u32).mod_sub(&Natural::from(3u32), Natural::from(5u32)).to_string(), "1" ); assert_eq!( Natural::from(7u32).mod_sub(&Natural::from(9u32), Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `m` are taken by value and `c` is taken by reference. #### type Output = Natural ### impl<'a, 'b, 'c> ModSub<&'b Natural, &'c Natural> for &'a Natural #### fn mod_sub(self, other: &'b Natural, m: &'c Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_sub(&Natural::from(3u32), &Natural::from(5u32)).to_string(), "1" ); assert_eq!( (&Natural::from(7u32)).mod_sub(&Natural::from(9u32), &Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModSub<&'a Natural, &'b Natural> for Natural #### fn mod_sub(self, other: &'a Natural, m: &'b Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by value and the second and third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( Natural::from(4u32).mod_sub(&Natural::from(3u32), &Natural::from(5u32)).to_string(), "1" ); assert_eq!( Natural::from(7u32).mod_sub(&Natural::from(9u32), &Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` is taken by value and `c` and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModSub<&'b Natural, Natural> for &'a Natural #### fn mod_sub(self, other: &'b Natural, m: Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$.
The first two `Natural`s are taken by reference and the third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_sub(&Natural::from(3u32), Natural::from(5u32)).to_string(), "1" ); assert_eq!( (&Natural::from(7u32)).mod_sub(&Natural::from(9u32), Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `c` are taken by reference and `m` is taken by value. #### type Output = Natural ### impl<'a, 'b> ModSub<Natural, &'b Natural> for &'a Natural #### fn mod_sub(self, other: Natural, m: &'b Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by reference and the second by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_sub(Natural::from(3u32), &Natural::from(5u32)).to_string(), "1" ); assert_eq!( (&Natural::from(7u32)).mod_sub(Natural::from(9u32), &Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `m` are taken by reference and `c` is taken by value. #### type Output = Natural ### impl<'a> ModSub<Natural, &'a Natural> for Natural #### fn mod_sub(self, other: Natural, m: &'a Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by value and the third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( Natural::from(4u32).mod_sub(Natural::from(3u32), &Natural::from(5u32)).to_string(), "1" ); assert_eq!( Natural::from(7u32).mod_sub(Natural::from(9u32), &Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `c` are taken by value and `m` is taken by reference. #### type Output = Natural ### impl<'a> ModSub<Natural, Natural> for &'a Natural #### fn mod_sub(self, other: Natural, m: Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by reference and the second and third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_sub(Natural::from(3u32), Natural::from(5u32)).to_string(), "1" ); assert_eq!( (&Natural::from(7u32)).mod_sub(Natural::from(9u32), Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` is taken by reference and `c` and `m` are taken by value. #### type Output = Natural ### impl ModSub<Natural, Natural> for Natural #### fn mod_sub(self, other: Natural, m: Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( Natural::from(4u32).mod_sub(Natural::from(3u32), Natural::from(5u32)).to_string(), "1" ); assert_eq!( Natural::from(7u32).mod_sub(Natural::from(9u32), Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value. #### type Output = Natural ### impl<'a> ModSubAssign<&'a Natural, Natural> for Natural #### fn mod_sub_assign(&mut self, other: &'a Natural, m: Natural) Subtracts two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. $x \gets z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSubAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_sub_assign(&Natural::from(3u32), Natural::from(5u32)); assert_eq!(x.to_string(), "1"); let mut x = Natural::from(7u32); x.mod_sub_assign(&Natural::from(9u32), Natural::from(10u32)); assert_eq!(x.to_string(), "8"); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `m` are taken by value, `c` is taken by reference, and `a == b`. ### impl<'a, 'b> ModSubAssign<&'a Natural, &'b Natural> for Natural #### fn mod_sub_assign(&mut self, other: &'a Natural, m: &'b Natural) Subtracts two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by reference. $x \gets z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSubAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_sub_assign(&Natural::from(3u32), &Natural::from(5u32)); assert_eq!(x.to_string(), "1"); let mut x = Natural::from(7u32); x.mod_sub_assign(&Natural::from(9u32), &Natural::from(10u32)); assert_eq!(x.to_string(), "8"); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` is taken by value, `c` and `m` are taken by reference, and `a == b`. 
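A brief sketch of the wrap-around behavior of `mod_sub`, assuming the companion `ModAdd` trait (with the same by-value/by-reference variants) to show that modular addition undoes modular subtraction:

```
use malachite_base::num::arithmetic::traits::{ModAdd, ModSub};
use malachite_nz::natural::Natural;

let m = Natural::from(10u32);
// 7 - 9 is negative, so the result wraps around: 7 - 9 ≡ 8 (mod 10).
let d = Natural::from(7u32).mod_sub(Natural::from(9u32), &m);
assert_eq!(d, 8);
// Adding 9 back modulo 10 recovers the original value 7.
assert_eq!(d.mod_add(Natural::from(9u32), &m), 7);
```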
### impl<'a> ModSubAssign<Natural, &'a Natural> for Natural #### fn mod_sub_assign(&mut self, other: Natural, m: &'a Natural) Subtracts two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. $x \gets z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSubAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_sub_assign(Natural::from(3u32), &Natural::from(5u32)); assert_eq!(x.to_string(), "1"); let mut x = Natural::from(7u32); x.mod_sub_assign(Natural::from(9u32), &Natural::from(10u32)); assert_eq!(x.to_string(), "8"); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `c` are taken by value, `m` is taken by reference, and `a == b`. ### impl ModSubAssign<Natural, Natural> for Natural #### fn mod_sub_assign(&mut self, other: Natural, m: Natural) Subtracts two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by value. $x \gets z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSubAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_sub_assign(Natural::from(3u32), Natural::from(5u32)); assert_eq!(x.to_string(), "1"); let mut x = Natural::from(7u32); x.mod_sub_assign(Natural::from(9u32), Natural::from(10u32)); assert_eq!(x.to_string(), "8"); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value and `a == b`. ### impl<'a, 'b> Mul<&'a Natural> for &'b Natural #### fn mul(self, other: &'a Natural) -> Natural Multiplies two `Natural`s, taking both by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(&Natural::ONE * &Natural::from(123u32), 123); assert_eq!(&Natural::from(123u32) * &Natural::ZERO, 0); assert_eq!(&Natural::from(123u32) * &Natural::from(456u32), 56088); assert_eq!( (&Natural::from_str("123456789000").unwrap() * &Natural::from_str("987654321000") .unwrap()).to_string(), "121932631112635269000000" ); ``` #### type Output = Natural The resulting type after applying the `*` operator.### impl<'a> Mul<&'a Natural> for Natural #### fn mul(self, other: &'a Natural) -> Natural Multiplies two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
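The by-value/by-reference `Mul` implementations in this section differ only in ownership; here is a small sketch (not part of the crate's documented examples) of the usual reason to prefer the borrowing form:

```
use malachite_nz::natural::Natural;

let a = Natural::from(123u32);
let b = Natural::from(456u32);
// Multiplying through references leaves both operands usable afterwards.
let p = &a * &b;
assert_eq!(p, 56088);
// The by-value form consumes its operands, which lets the implementation
// reuse their storage once they are no longer needed.
assert_eq!(a * b, 56088);
```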
##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ONE * &Natural::from(123u32), 123); assert_eq!(Natural::from(123u32) * &Natural::ZERO, 0); assert_eq!(Natural::from(123u32) * &Natural::from(456u32), 56088); assert_eq!( (Natural::from_str("123456789000").unwrap() * &Natural::from_str("987654321000") .unwrap()).to_string(), "121932631112635269000000" ); ``` #### type Output = Natural The resulting type after applying the `*` operator.### impl<'a> Mul<Natural> for &'a Natural #### fn mul(self, other: Natural) -> Natural Multiplies two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(&Natural::ONE * Natural::from(123u32), 123); assert_eq!(&Natural::from(123u32) * Natural::ZERO, 0); assert_eq!(&Natural::from(123u32) * Natural::from(456u32), 56088); assert_eq!( (&Natural::from_str("123456789000").unwrap() * Natural::from_str("987654321000") .unwrap()).to_string(), "121932631112635269000000" ); ``` #### type Output = Natural The resulting type after applying the `*` operator.### impl Mul<Natural> for Natural #### fn mul(self, other: Natural) -> Natural Multiplies two `Natural`s, taking both by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ONE * Natural::from(123u32), 123); assert_eq!(Natural::from(123u32) * Natural::ZERO, 0); assert_eq!(Natural::from(123u32) * Natural::from(456u32), 56088); assert_eq!( (Natural::from_str("123456789000").unwrap() * Natural::from_str("987654321000") .unwrap()).to_string(), "121932631112635269000000" ); ``` #### type Output = Natural The resulting type after applying the `*` operator.### impl<'a> MulAssign<&'a Natural> for Natural #### fn mul_assign(&mut self, other: &'a Natural) Multiplies a `Natural` by a `Natural` in place, taking the `Natural` on the right-hand side by reference. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; use std::str::FromStr; let mut x = Natural::ONE; x *= &Natural::from_str("1000").unwrap(); x *= &Natural::from_str("2000").unwrap(); x *= &Natural::from_str("3000").unwrap(); x *= &Natural::from_str("4000").unwrap(); assert_eq!(x.to_string(), "24000000000000"); ``` ### impl MulAssign<Natural> for Natural #### fn mul_assign(&mut self, other: Natural) Multiplies a `Natural` by a `Natural` in place, taking the `Natural` on the right-hand side by value. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; use std::str::FromStr; let mut x = Natural::ONE; x *= Natural::from_str("1000").unwrap(); x *= Natural::from_str("2000").unwrap(); x *= Natural::from_str("3000").unwrap(); x *= Natural::from_str("4000").unwrap(); assert_eq!(x.to_string(), "24000000000000"); ``` ### impl Multifactorial for Natural #### fn multifactorial(n: u64, m: u64) -> Natural Computes a multifactorial of a number. $$ f(n, m) = n!^{(m)} = n \times (n - m) \times (n - 2m) \times \cdots \times i. $$ If $n$ is divisible by $m$, then $i$ is $m$; otherwise, $i$ is the remainder when $n$ is divided by $m$. $n!^{(m)} = O(\sqrt{n}(n/e)^{n/m})$. ##### Worst-case complexity $T(n, m) = O(n (\log n)^2 \log\log n)$ $M(n, m) = O(n \log n)$ ##### Examples ``` use malachite_base::num::arithmetic::traits::Multifactorial; use malachite_nz::natural::Natural; assert_eq!(Natural::multifactorial(0, 1), 1); assert_eq!(Natural::multifactorial(1, 1), 1); assert_eq!(Natural::multifactorial(2, 1), 2); assert_eq!(Natural::multifactorial(3, 1), 6); assert_eq!(Natural::multifactorial(4, 1), 24); assert_eq!(Natural::multifactorial(5, 1), 120); assert_eq!(Natural::multifactorial(0, 2), 1); assert_eq!(Natural::multifactorial(1, 2), 1); assert_eq!(Natural::multifactorial(2, 2), 2); assert_eq!(Natural::multifactorial(3, 2), 3); assert_eq!(Natural::multifactorial(4, 2), 8); assert_eq!(Natural::multifactorial(5, 2), 15); assert_eq!(Natural::multifactorial(6, 2), 48); assert_eq!(Natural::multifactorial(7, 2), 105); assert_eq!(Natural::multifactorial(0, 3), 1); assert_eq!(Natural::multifactorial(1, 3), 1); assert_eq!(Natural::multifactorial(2, 3), 2); assert_eq!(Natural::multifactorial(3, 3), 3); assert_eq!(Natural::multifactorial(4, 3), 4); assert_eq!(Natural::multifactorial(5, 3), 10); assert_eq!(Natural::multifactorial(6, 3), 18); assert_eq!(Natural::multifactorial(7, 3), 28); assert_eq!(Natural::multifactorial(8, 3), 80); assert_eq!(Natural::multifactorial(9, 3), 162); assert_eq!( Natural::multifactorial(100, 3).to_string(), "174548867015437739741494347897360069928419328000000000" ); ``` ### impl Named for Natural #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl<'a> Neg for &'a Natural #### fn neg(self) -> Integer Negates a `Natural`, taking it by reference and returning an `Integer`. $$ f(x) = -x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(-&Natural::ZERO, 0); assert_eq!(-&Natural::from(123u32), -123); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl Neg for Natural #### fn neg(self) -> Integer Negates a `Natural`, taking it by value and returning an `Integer`. $$ f(x) = -x. $$ ##### Worst-case complexity Constant time and additional memory. 
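Returning to the `Multifactorial` implementation documented above, the stopping term $i$ in the product can be made concrete with a naive reference implementation; the `naive_multifactorial` helper below is purely illustrative and not part of the crate:

```
use malachite_base::num::arithmetic::traits::Multifactorial;
use malachite_base::num::basic::traits::One;
use malachite_nz::natural::Natural;

// Multiply n, n - m, n - 2m, ... and stop once the next term would pass 0.
fn naive_multifactorial(mut n: u64, m: u64) -> Natural {
    let mut product = Natural::ONE;
    while n > 0 {
        product *= Natural::from(n);
        n = n.saturating_sub(m);
    }
    product
}

// 9!^(3) = 9 * 6 * 3 = 162, matching the documented value.
assert_eq!(naive_multifactorial(9, 3), Natural::multifactorial(9, 3));
assert_eq!(naive_multifactorial(9, 3), 162);
```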
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(-Natural::ZERO, 0); assert_eq!(-Natural::from(123u32), -123); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl<'a, 'b> NegMod<&'b Natural> for &'a Natural #### fn neg_mod(self, other: &'b Natural) -> Natural Divides the negative of a `Natural` by another `Natural`, taking both by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!((&Natural::from(23u32)).neg_mod(&Natural::from(10u32)), 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .neg_mod(&Natural::from_str("1234567890987").unwrap()), 704498996588u64 ); ``` #### type Output = Natural ### impl<'a> NegMod<&'a Natural> for Natural #### fn neg_mod(self, other: &'a Natural) -> Natural Divides the negative of a `Natural` by another `Natural`, taking the first by value and the second by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!(Natural::from(23u32).neg_mod(&Natural::from(10u32)), 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .neg_mod(&Natural::from_str("1234567890987").unwrap()), 704498996588u64 ); ``` #### type Output = Natural ### impl<'a> NegMod<Natural> for &'a Natural #### fn neg_mod(self, other: Natural) -> Natural Divides the negative of a `Natural` by another `Natural`, taking the first by reference and the second by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
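As a complement to the `NegMod` implementations above, a short sketch (assuming the companion `Mod` trait, whose method is `mod_op`) of how the ordinary and the negative remainders relate when the divisor does not divide the dividend:

```
use malachite_base::num::arithmetic::traits::{Mod, NegMod};
use malachite_nz::natural::Natural;

let x = Natural::from(23u32);
let y = Natural::from(10u32);
// 23 = 2 * 10 + 3, so the ordinary remainder is 3 ...
assert_eq!((&x).mod_op(&y), 3);
// ... and rounding the quotient up instead gives 3 * 10 - 23 = 7.
assert_eq!((&x).neg_mod(&y), 7);
// When y does not divide x, the two remainders sum to y.
assert_eq!((&x).mod_op(&y) + (&x).neg_mod(&y), y);
```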
##### Examples ``` use malachite_base::num::arithmetic::traits::NegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!((&Natural::from(23u32)).neg_mod(Natural::from(10u32)), 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .neg_mod(Natural::from_str("1234567890987").unwrap()), 704498996588u64 ); ``` #### type Output = Natural ### impl NegMod<Natural> for Natural #### fn neg_mod(self, other: Natural) -> Natural Divides the negative of a `Natural` by another `Natural`, taking both by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!(Natural::from(23u32).neg_mod(Natural::from(10u32)), 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .neg_mod(Natural::from_str("1234567890987").unwrap()), 704498996588u64 ); ``` #### type Output = Natural ### impl<'a> NegModAssign<&'a Natural> for Natural #### fn neg_mod_assign(&mut self, other: &'a Natural) Divides the negative of a `Natural` by another `Natural`, taking the second `Natural`s by reference and replacing the first by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ x \gets y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegModAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 let mut x = Natural::from(23u32); x.neg_mod_assign(&Natural::from(10u32)); assert_eq!(x, 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x.neg_mod_assign(&Natural::from_str("1234567890987").unwrap()); assert_eq!(x, 704498996588u64); ``` ### impl NegModAssign<Natural> for Natural #### fn neg_mod_assign(&mut self, other: Natural) Divides the negative of a `Natural` by another `Natural`, taking the second `Natural`s by value and replacing the first by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ x \gets y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::NegModAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 let mut x = Natural::from(23u32); x.neg_mod_assign(Natural::from(10u32)); assert_eq!(x, 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x.neg_mod_assign(Natural::from_str("1234567890987").unwrap()); assert_eq!(x, 704498996588u64); ``` ### impl<'a> NegModPowerOf2 for &'a Natural #### fn neg_mod_power_of_2(self, pow: u64) -> Natural Divides the negative of a `Natural` by a $2^k$, returning just the remainder. The `Natural` is taken by reference. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k - r$ and $0 \leq r < 2^k$. $$ f(x, k) = 2^k\left \lceil \frac{x}{2^k} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegModPowerOf2; use malachite_nz::natural::Natural; // 2 * 2^8 - 252 = 260 assert_eq!((&Natural::from(260u32)).neg_mod_power_of_2(8), 252); // 101 * 2^4 - 5 = 1611 assert_eq!((&Natural::from(1611u32)).neg_mod_power_of_2(4), 5); ``` #### type Output = Natural ### impl NegModPowerOf2 for Natural #### fn neg_mod_power_of_2(self, pow: u64) -> Natural Divides the negative of a `Natural` by a $2^k$, returning just the remainder. The `Natural` is taken by value. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k - r$ and $0 \leq r < 2^k$. $$ f(x, k) = 2^k\left \lceil \frac{x}{2^k} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegModPowerOf2; use malachite_nz::natural::Natural; // 2 * 2^8 - 252 = 260 assert_eq!(Natural::from(260u32).neg_mod_power_of_2(8), 252); // 101 * 2^4 - 5 = 1611 assert_eq!(Natural::from(1611u32).neg_mod_power_of_2(4), 5); ``` #### type Output = Natural ### impl NegModPowerOf2Assign for Natural #### fn neg_mod_power_of_2_assign(&mut self, pow: u64) Divides the negative of a `Natural` by $2^k$, returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k - r$ and $0 \leq r < 2^k$. $$ x \gets 2^k\left \lceil \frac{x}{2^k} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegModPowerOf2Assign; use malachite_nz::natural::Natural; // 2 * 2^8 - 252 = 260 let mut x = Natural::from(260u32); x.neg_mod_power_of_2_assign(8); assert_eq!(x, 252); // 101 * 2^4 - 5 = 1611 let mut x = Natural::from(1611u32); x.neg_mod_power_of_2_assign(4); assert_eq!(x, 5); ``` ### impl<'a> NextPowerOf2 for &'a Natural #### fn next_power_of_2(self) -> Natural Finds the smallest power of 2 greater than or equal to a `Natural`. The `Natural` is taken by reference. $f(x) = 2^{\lceil \log_2 x \rceil}$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
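To connect `neg_mod_power_of_2` with the general `neg_mod` above, here is a small sketch using the by-reference implementations shown in this section; the equality below follows from $2^8 = 256$ and is not one of the crate's own examples:

```
use malachite_base::num::arithmetic::traits::{NegMod, NegModPowerOf2};
use malachite_nz::natural::Natural;

let x = Natural::from(260u32);
// 2 * 2^8 - 260 = 252.
assert_eq!((&x).neg_mod_power_of_2(8), 252);
// The result agrees with neg_mod taken with respect to 2^8 = 256, but the
// power-of-2 form avoids a general division.
assert_eq!((&x).neg_mod_power_of_2(8), (&x).neg_mod(Natural::from(256u32)));
```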
##### Examples ``` use malachite_base::num::arithmetic::traits::{NextPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).next_power_of_2(), 1); assert_eq!((&Natural::from(123u32)).next_power_of_2(), 128); assert_eq!((&Natural::from(10u32).pow(12)).next_power_of_2(), 1099511627776u64); ``` #### type Output = Natural ### impl NextPowerOf2 for Natural #### fn next_power_of_2(self) -> Natural Finds the smallest power of 2 greater than or equal to a `Natural`. The `Natural` is taken by value. $f(x) = 2^{\lceil \log_2 x \rceil}$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{NextPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.next_power_of_2(), 1); assert_eq!(Natural::from(123u32).next_power_of_2(), 128); assert_eq!(Natural::from(10u32).pow(12).next_power_of_2(), 1099511627776u64); ``` #### type Output = Natural ### impl NextPowerOf2Assign for Natural #### fn next_power_of_2_assign(&mut self) Replaces a `Natural` with the smallest power of 2 greater than or equal to it. $x \gets 2^{\lceil \log_2 x \rceil}$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{NextPowerOf2Assign, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.next_power_of_2_assign(); assert_eq!(x, 1); let mut x = Natural::from(123u32); x.next_power_of_2_assign(); assert_eq!(x, 128); let mut x = Natural::from(10u32).pow(12); x.next_power_of_2_assign(); assert_eq!(x, 1099511627776u64); ``` ### impl<'a> Not for &'a Natural #### fn not(self) -> Integer Returns the bitwise negation of a `Natural`, taking it by reference and returning an `Integer`. The `Natural` is bitwise-negated as if it were represented in two’s complement. $$ f(n) = -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(!&Natural::ZERO, -1); assert_eq!(!&Natural::from(123u32), -124); ``` #### type Output = Integer The resulting type after applying the `!` operator. ### impl Not for Natural #### fn not(self) -> Integer Returns the bitwise negation of a `Natural`, taking it by value and returning an `Integer`. The `Natural` is bitwise-negated as if it were represented in two’s complement. $$ f(n) = -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(!Natural::ZERO, -1); assert_eq!(!Natural::from(123u32), -124); ``` #### type Output = Integer The resulting type after applying the `!` operator. ### impl Octal for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts a `Natural` to an octal `String`. Using the `#` format flag prepends `"0o"` to the string.
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToOctalString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_octal_string(), "0"); assert_eq!(Natural::from(123u32).to_octal_string(), "173"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_octal_string(), "16432451210000" ); assert_eq!(format!("{:07o}", Natural::from(123u32)), "0000173"); assert_eq!(format!("{:#o}", Natural::ZERO), "0o0"); assert_eq!(format!("{:#o}", Natural::from(123u32)), "0o173"); assert_eq!( format!("{:#o}", Natural::from_str("1000000000000").unwrap()), "0o16432451210000" ); assert_eq!(format!("{:#07o}", Natural::from(123u32)), "0o00173"); ``` ### impl One for Natural The constant 1. #### const ONE: Natural = _ ### impl Ord for Natural #### fn cmp(&self, other: &Natural) -> Ordering Compares two `Natural`s. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; assert!(Natural::from(123u32) > Natural::from(122u32)); assert!(Natural::from(123u32) >= Natural::from(122u32)); assert!(Natural::from(123u32) < Natural::from(124u32)); assert!(Natural::from(123u32) <= Natural::from(124u32)); ``` #### fn max(self, other: Self) -> Self where Self: Sized Compares and returns the maximum of two values. #### fn min(self, other: Self) -> Self where Self: Sized Compares and returns the minimum of two values. #### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self> Restrict a value to a certain interval. ### impl<'a> OverflowingFrom<&'a Natural> for i128 #### fn overflowing_from(value: &Natural) -> (i128, bool) Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for i16 #### fn overflowing_from(value: &Natural) -> (i16, bool) Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for i32 #### fn overflowing_from(value: &Natural) -> (i32, bool) Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for i64 #### fn overflowing_from(value: &Natural) -> (i64, bool) Converts a `Natural` to a `SignedLimb` (the signed type whose width is the same as a limb’s), wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here.
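Since the linked examples are not reproduced in this excerpt, the following sketch illustrates the overflowing conversions; it assumes `OverflowingFrom` is exported from `malachite_base::num::conversion::traits` like the crate’s other conversion traits:

```
use malachite_base::num::conversion::traits::OverflowingFrom;
use malachite_nz::natural::Natural;

// A value that fits is returned unchanged, and the flag is false.
assert_eq!(i16::overflowing_from(&Natural::from(123u32)), (123, false));

// 70000 does not fit in an i16; it wraps (70000 - 2^16 = 4464) and the flag is true.
assert_eq!(i16::overflowing_from(&Natural::from(70000u32)), (4464, true));
```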
### impl<'a> OverflowingFrom<&'a Natural> for i8 #### fn overflowing_from(value: &Natural) -> (i8, bool) Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for isize #### fn overflowing_from(value: &Natural) -> (isize, bool) Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u128 #### fn overflowing_from(value: &Natural) -> (u128, bool) Converts a `Natural` to a value of an unsigned primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u16 #### fn overflowing_from(value: &Natural) -> (u16, bool) Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u32 #### fn overflowing_from(value: &Natural) -> (u32, bool) Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u64 #### fn overflowing_from(value: &Natural) -> (u64, bool) Converts a `Natural` to a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u8 #### fn overflowing_from(value: &Natural) -> (u8, bool) Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for usize #### fn overflowing_from(value: &Natural) -> (usize, bool) Converts a `Natural` to a `usize`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> Parity for &'a Natural #### fn even(self) -> bool Tests whether a `Natural` is even. $f(x) = (2|x)$. $f(x) = (\exists k \in \N : x = 2k)$. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Parity, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.even(), true); assert_eq!(Natural::from(123u32).even(), false); assert_eq!(Natural::from(0x80u32).even(), true); assert_eq!(Natural::from(10u32).pow(12).even(), true); assert_eq!((Natural::from(10u32).pow(12) + Natural::ONE).even(), false); ``` #### fn odd(self) -> bool Tests whether a `Natural` is odd. $f(x) = (2\nmid x)$. $f(x) = (\exists k \in \N : x = 2k+1)$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Parity, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.odd(), false); assert_eq!(Natural::from(123u32).odd(), true); assert_eq!(Natural::from(0x80u32).odd(), false); assert_eq!(Natural::from(10u32).pow(12).odd(), false); assert_eq!((Natural::from(10u32).pow(12) + Natural::ONE).odd(), true); ``` ### impl PartialEq<Integer> for Natural #### fn eq(&self, other: &Integer) -> bool Determines whether a `Natural` is equal to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())` ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32) == Integer::from(123)); assert!(Natural::from(123u32) != Integer::from(5)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Natural> for Integer #### fn eq(&self, other: &Natural) -> bool Determines whether an `Integer` is equal to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())` ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123) == Natural::from(123u32)); assert!(Integer::from(123) != Natural::from(5u32)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Natural> for Rational #### fn eq(&self, other: &Natural) -> bool Determines whether a `Rational` is equal to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Rational::from(123) == Natural::from(123u32)); assert!(Rational::from_signeds(22, 7) != Natural::from(5u32)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Rational> for Natural #### fn eq(&self, other: &Rational) -> bool Determines whether a `Natural` is equal to a `Rational`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Natural::from(123u32) == Rational::from(123)); assert!(Natural::from(5u32) != Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f32> for Natural #### fn eq(&self, other: &f32) -> bool Determines whether a `Natural` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f64> for Natural #### fn eq(&self, other: &f64) -> bool Determines whether a `Natural` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i128> for Natural #### fn eq(&self, other: &i128) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i16> for Natural #### fn eq(&self, other: &i16) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i32> for Natural #### fn eq(&self, other: &i32) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i64> for Natural #### fn eq(&self, other: &i64) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i8> for Natural #### fn eq(&self, other: &i8) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<isize> for Natural #### fn eq(&self, other: &isize) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u128> for Natural #### fn eq(&self, other: &u128) -> bool Determines whether a `Natural` is equal to a value of an unsigned primitive integer type that’s larger than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u16> for Natural #### fn eq(&self, other: &u16) -> bool Determines whether a `Natural` is equal to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u32> for Natural #### fn eq(&self, other: &u32) -> bool Determines whether a `Natural` is equal to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u64> for Natural #### fn eq(&self, other: &u64) -> bool Determines whether a `Natural` is equal to a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u8> for Natural #### fn eq(&self, other: &u8) -> bool Determines whether a `Natural` is equal to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<usize> for Natural #### fn eq(&self, other: &usize) -> bool Determines whether a `Natural` is equal to a `usize`. ##### Worst-case complexity Constant time and additional memory. See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Natural> for Natural #### fn eq(&self, other: &Natural) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl PartialOrd<Integer> for Natural #### fn partial_cmp(&self, other: &Integer) -> Option<Ordering> Compares a `Natural` to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32) > Integer::from(122)); assert!(Natural::from(123u32) >= Integer::from(122)); assert!(Natural::from(123u32) < Integer::from(124)); assert!(Natural::from(123u32) <= Integer::from(124)); assert!(Natural::from(123u32) > Integer::from(-123)); assert!(Natural::from(123u32) >= Integer::from(-123)); ``` #### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. #### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. #### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. #### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. ### impl PartialOrd<Natural> for Integer #### fn partial_cmp(&self, other: &Natural) -> Option<Ordering> Compares an `Integer` to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123) > Natural::from(122u32)); assert!(Integer::from(123) >= Natural::from(122u32)); assert!(Integer::from(123) < Natural::from(124u32)); assert!(Integer::from(123) <= Natural::from(124u32)); assert!(Integer::from(-123) < Natural::from(123u32)); assert!(Integer::from(-123) <= Natural::from(123u32)); ``` #### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. #### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. #### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. #### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. ### impl PartialOrd<Natural> for Rational #### fn partial_cmp(&self, other: &Natural) -> Option<Ordering> Compares a `Rational` to a `Natural`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Rational::from_signeds(22, 7) > Natural::from(3u32)); assert!(Rational::from_signeds(22, 7) < Natural::from(4u32)); ``` #### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. #### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Rational) -> Option<OrderingCompares a `Natural` to a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Natural::from(3u32) < Rational::from_signeds(22, 7)); assert!(Natural::from(4u32) > Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f32) -> Option<OrderingCompares a `Natural` to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f64) -> Option<OrderingCompares a `Natural` to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i128) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. 
Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i16) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i32) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i64) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i8) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. 
Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &isize) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u128) -> Option<OrderingCompares a `Natural` to a value of an unsigned primitive integer type that’s larger than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u16) -> Option<OrderingCompares a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u32) -> Option<OrderingCompares a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. 
Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u64) -> Option<OrderingCompares a `Natural` to a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u8) -> Option<OrderingCompares a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &usize) -> Option<OrderingCompares a `Natural` to a `usize`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Natural) -> Option<OrderingCompares two `Natural`s. See the documentation for the `Ord` implementation. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. 
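The primitive-integer and primitive-float comparisons above all point to external examples. As a compact illustration, a sketch relying only on the `PartialEq` and `PartialOrd` implementations listed on this page:

```
use malachite_nz::natural::Natural;

let n = Natural::from(1000u32);

// PartialEq against primitive integers.
assert!(n == 1000u16);
assert!(n != 999u64);

// PartialOrd against unsigned and signed primitives; a negative value always
// compares less than any Natural.
assert!(n > 999u32);
assert!(n < 1001i64);
assert!(n > -5i32);

// PartialOrd against primitive floats.
assert!(n < 1000.5f64);
```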
#### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a `Natural` and an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32).gt_abs(&Integer::from(122))); assert!(Natural::from(123u32).ge_abs(&Integer::from(122))); assert!(Natural::from(123u32).lt_abs(&Integer::from(124))); assert!(Natural::from(123u32).le_abs(&Integer::from(124))); assert!(Natural::from(123u32).lt_abs(&Integer::from(-124))); assert!(Natural::from(123u32).le_abs(&Integer::from(-124))); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute values of an `Integer` and a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123).gt_abs(&Natural::from(122u32))); assert!(Integer::from(123).ge_abs(&Natural::from(122u32))); assert!(Integer::from(123).lt_abs(&Natural::from(124u32))); assert!(Integer::from(123).le_abs(&Natural::from(124u32))); assert!(Integer::from(-124).gt_abs(&Natural::from(123u32))); assert!(Integer::from(-124).ge_abs(&Natural::from(123u32))); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute values of a `Rational` and a `Natural`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::natural::Natural; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Rational::from_signeds(22, 7).partial_cmp_abs(&Natural::from(3u32)), Some(Ordering::Greater) ); assert_eq!( Rational::from_signeds(-22, 7).partial_cmp_abs(&Natural::from(3u32)), Some(Ordering::Greater) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a primitive float to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a primitive float to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a `Natural` and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::natural::Natural; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Natural::from(3u32).partial_cmp_abs(&Rational::from_signeds(22, 7)), Some(Ordering::Less) ); assert_eq!( Natural::from(3u32).partial_cmp_abs(&Rational::from_signeds(-22, 7)), Some(Ordering::Less) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f32) -> Option<OrderingCompares a `Natural` to the absolute value of a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f64) -> Option<OrderingCompares a `Natural` to the absolute value of a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i128) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i16) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i32) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i64) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i8) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &isize) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u128) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. 
Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u16) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u32) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u64) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u8) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &usize) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn pow(self, exp: u64) -> Natural Raises a `Natural` to a power, taking the `Natural` by reference. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(3u32)).pow(100).to_string(), "515377520732011331036461129765621272702107522001" ); assert_eq!( (&Natural::from_str("12345678987654321").unwrap()).pow(3).to_string(), "1881676411868862234942354805142998028003108518161" ); ``` #### type Output = Natural ### impl Pow<u64> for Natural #### fn pow(self, exp: u64) -> Natural Raises a `Natural` to a power, taking the `Natural` by value. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(3u32).pow(100).to_string(), "515377520732011331036461129765621272702107522001" ); assert_eq!( Natural::from_str("12345678987654321").unwrap().pow(3).to_string(), "1881676411868862234942354805142998028003108518161" ); ``` #### type Output = Natural ### impl PowAssign<u64> for Natural #### fn pow_assign(&mut self, exp: u64) Raises a `Natural` to a power in place. $x \gets x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowAssign; use malachite_nz::natural::Natural; use std::str::FromStr; let mut x = Natural::from(3u32); x.pow_assign(100); assert_eq!(x.to_string(), "515377520732011331036461129765621272702107522001"); let mut x = Natural::from_str("12345678987654321").unwrap(); x.pow_assign(3); assert_eq!(x.to_string(), "1881676411868862234942354805142998028003108518161"); ``` ### impl PowerOf2<u64> for Natural #### fn power_of_2(pow: u64) -> Natural Raises 2 to an integer power. $f(k) = 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowerOf2; use malachite_nz::natural::Natural; assert_eq!(Natural::power_of_2(0), 1); assert_eq!(Natural::power_of_2(3), 8); assert_eq!(Natural::power_of_2(100).to_string(), "1267650600228229401496703205376"); ``` ### impl<'a> PowerOf2DigitIterable<Natural> for &'a Natural #### fn power_of_2_digits(self, log_base: u64) -> NaturalPowerOf2DigitIterator<'aReturns a double-ended iterator over the base-$2^k$ digits of a `Natural`. The base-2 logarithm of the base is specified. The type of each digit is `Natural`. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward. If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead. 
##### Worst-case complexity
Constant time and additional memory.
##### Examples
```
use itertools::Itertools;
use malachite_base::num::basic::traits::Zero;
use malachite_base::num::conversion::traits::PowerOf2DigitIterable;
use malachite_nz::natural::Natural;

let n = Natural::ZERO;
assert!(PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2).next().is_none());

// 107 = 1223_4
let n = Natural::from(107u32);
assert_eq!(
    PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2).collect_vec(),
    vec![
        Natural::from(3u32),
        Natural::from(2u32),
        Natural::from(2u32),
        Natural::from(1u32)
    ]
);

let n = Natural::ZERO;
assert!(PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2)
    .next_back()
    .is_none());

// 107 = 1223_4
let n = Natural::from(107u32);
assert_eq!(
    PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2)
        .rev()
        .collect_vec(),
    vec![
        Natural::from(1u32),
        Natural::from(2u32),
        Natural::from(2u32),
        Natural::from(3u32)
    ]
);
```
#### type PowerOf2DigitIterator = NaturalPowerOf2DigitIterator<'a>
### impl<'a> PowerOf2DigitIterable<u128> for &'a Natural
#### fn power_of_2_digits(self, log_base: u64) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u128>
Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`.
The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward.
If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead.
##### Worst-case complexity
Constant time and additional memory.
##### Examples
See here.
#### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u128>
### impl<'a> PowerOf2DigitIterable<u16> for &'a Natural
#### fn power_of_2_digits(self, log_base: u64) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u16>
Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`.
The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward.
If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead.
##### Worst-case complexity
Constant time and additional memory.
##### Examples
See here.
#### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u16>
### impl<'a> PowerOf2DigitIterable<u32> for &'a Natural
#### fn power_of_2_digits(self, log_base: u64) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u32>
Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`.
The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward.
If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead.
##### Worst-case complexity
Constant time and additional memory.
##### Examples
See here.
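Since the examples for the primitive-digit iterators are only linked ("See here"), here is a minimal illustrative sketch for the `u32` case. It assumes the `u32` iterator behaves exactly as described above and mirrors the `Natural`-digit example; it is not the crate's own example.
```
use itertools::Itertools;
use malachite_base::num::conversion::traits::PowerOf2DigitIterable;
use malachite_nz::natural::Natural;

// 107 = 1223_4, so the ascending base-2^2 digits are [3, 2, 2, 1], yielded as u32s.
let n = Natural::from(107u32);
assert_eq!(
    PowerOf2DigitIterable::<u32>::power_of_2_digits(&n, 2).collect_vec(),
    vec![3u32, 2, 2, 1]
);

// The iterator is double-ended, so the most-significant-first order is also available.
assert_eq!(
    PowerOf2DigitIterable::<u32>::power_of_2_digits(&n, 2).rev().collect_vec(),
    vec![1u32, 2, 2, 3]
);
```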
#### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u32>
### impl<'a> PowerOf2DigitIterable<u64> for &'a Natural
#### fn power_of_2_digits(self, log_base: u64) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u64>
Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`.
The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward.
If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead.
##### Worst-case complexity
Constant time and additional memory.
##### Examples
See here.
#### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u64>
### impl<'a> PowerOf2DigitIterable<u8> for &'a Natural
#### fn power_of_2_digits(self, log_base: u64) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u8>
Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`.
The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward.
If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead.
##### Worst-case complexity
Constant time and additional memory.
##### Examples
See here.
#### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u8>
### impl<'a> PowerOf2DigitIterable<usize> for &'a Natural
#### fn power_of_2_digits(self, log_base: u64) -> NaturalPowerOf2DigitPrimitiveIterator<'a, usize>
Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`.
The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward.
If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead.
##### Worst-case complexity
Constant time and additional memory.
##### Examples
See here.
#### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, usize>
### impl<'a> PowerOf2DigitIterator<Natural> for NaturalPowerOf2DigitIterator<'a>
#### fn get(&self, index: u64) -> Natural
Retrieves the base-$2^k$ digits of a `Natural` by index.
$f(x, k, i) = d_i$, where $0 \leq d_i < 2^k$ for all $i$ and
$$
\sum_{i=0}^\infty 2^{ki}d_i = x.
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `log_base`.
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::{PowerOf2DigitIterable, PowerOf2DigitIterator}; use malachite_nz::natural::Natural; let n = Natural::ZERO; assert_eq!(PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2).get(0), 0); // 107 = 1223_4 let n = Natural::from(107u32); let digits = PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2); assert_eq!(digits.get(0), 3); assert_eq!(digits.get(1), 2); assert_eq!(digits.get(2), 2); assert_eq!(digits.get(3), 1); assert_eq!(digits.get(4), 0); assert_eq!(digits.get(100), 0); ``` ### impl PowerOf2Digits<Natural> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<Natural, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. The type of each digit is `Natural`. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::num::basic::traits::{Two, Zero}; use malachite_base::num::conversion::traits::PowerOf2Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_asc(&Natural::ZERO, 6) .to_debug_string(), "[]" ); assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_asc(&Natural::TWO, 6) .to_debug_string(), "[2]" ); // 123_10 = 173_8 assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_asc(&Natural::from(123u32), 3) .to_debug_string(), "[3, 7, 1]" ); ``` #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<Natural, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. The type of each digit is `Natural`. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::num::basic::traits::{Two, Zero}; use malachite_base::num::conversion::traits::PowerOf2Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_desc(&Natural::ZERO, 6) .to_debug_string(), "[]" ); assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_desc(&Natural::TWO, 6) .to_debug_string(), "[2]" ); // 123_10 = 173_8 assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_desc(&Natural::from(123u32), 3) .to_debug_string(), "[1, 7, 3]" ); ``` #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = Natural>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. The type of each digit is `Natural`. 
If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n, m) = O(nm)$ $M(n, m) = O(nm)$ where $T$ is time, $M$ is additional memory, $n$ is `digits.count()`, and $m$ is `log_base`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::num::basic::traits::{One, Two, Zero}; use malachite_base::num::conversion::traits::PowerOf2Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; let digits = &[Natural::ZERO, Natural::ZERO, Natural::ZERO]; assert_eq!( Natural::from_power_of_2_digits_asc(6, digits.iter().cloned()).to_debug_string(), "Some(0)" ); let digits = &[Natural::TWO, Natural::ZERO]; assert_eq!( Natural::from_power_of_2_digits_asc(6, digits.iter().cloned()).to_debug_string(), "Some(2)" ); let digits = &[Natural::from(3u32), Natural::from(7u32), Natural::ONE]; assert_eq!( Natural::from_power_of_2_digits_asc(3, digits.iter().cloned()).to_debug_string(), "Some(123)" ); let digits = &[Natural::from(100u32)]; assert_eq!( Natural::from_power_of_2_digits_asc(3, digits.iter().cloned()).to_debug_string(), "None" ); ``` #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = Natural>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. The type of each digit is `Natural`. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n, m) = O(nm)$ $M(n, m) = O(nm)$ where $T$ is time, $M$ is additional memory, $n$ is `digits.count()`, and $m$ is `log_base`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::num::basic::traits::{One, Two, Zero}; use malachite_base::num::conversion::traits::PowerOf2Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; let digits = &[Natural::ZERO, Natural::ZERO, Natural::ZERO]; assert_eq!( Natural::from_power_of_2_digits_desc(6, digits.iter().cloned()).to_debug_string(), "Some(0)" ); let digits = &[Natural::ZERO, Natural::TWO]; assert_eq!( Natural::from_power_of_2_digits_desc(6, digits.iter().cloned()).to_debug_string(), "Some(2)" ); let digits = &[Natural::ONE, Natural::from(7u32), Natural::from(3u32)]; assert_eq!( Natural::from_power_of_2_digits_desc(3, digits.iter().cloned()).to_debug_string(), "Some(123)" ); let digits = &[Natural::from(100u32)]; assert_eq!( Natural::from_power_of_2_digits_desc(3, digits.iter().cloned()).to_debug_string(), "None" ); ``` ### impl PowerOf2Digits<u128> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u128, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. 
##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u128, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u128>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u128>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<u16> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u16, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u16, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. 
Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u16>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u16>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<u32> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u32, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u32, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. 
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero.
##### Examples
See here.
#### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural> where I: Iterator<Item = u32>,
Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type.
If some digit is greater than $2^k$, `None` is returned.
$$
f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i.
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`.
##### Panics
Panics if `log_base` is zero or greater than the width of the digit type.
##### Examples
See here.
#### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural> where I: Iterator<Item = u32>,
Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type.
If some digit is greater than $2^k$, `None` is returned.
$$
f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i.
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`.
##### Panics
Panics if `log_base` is zero or greater than the width of the digit type.
##### Examples
See here.
### impl PowerOf2Digits<u64> for Natural
#### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u64, Global>
Returns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified.
Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit.
$f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and
$$
\sum_{i=0}^{n-1}2^{ki}d_i = x.
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero.
##### Examples
See here.
#### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u64, Global>
Returns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified.
Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit.
$f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and
$$
\sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x.
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero.
##### Examples
See here.
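The `u64` digit conversions above only link their examples ("See here"). The following is a small illustrative sketch, assuming the `u64` variants behave like the `Natural`-digit versions documented earlier (123 is 173 in base 8); the round trip uses the inverse conversion described just below. It is not the crate's own example.
```
use malachite_base::num::conversion::traits::PowerOf2Digits;
use malachite_nz::natural::Natural;

// 123 = 173_8, so the base-2^3 digits are [3, 7, 1] ascending and [1, 7, 3] descending.
let n = Natural::from(123u32);
assert_eq!(PowerOf2Digits::<u64>::to_power_of_2_digits_asc(&n, 3), vec![3u64, 7, 1]);
assert_eq!(PowerOf2Digits::<u64>::to_power_of_2_digits_desc(&n, 3), vec![1u64, 7, 3]);

// Zero has no digits, and converting the ascending digits back recovers the value.
assert!(PowerOf2Digits::<u64>::to_power_of_2_digits_asc(&Natural::from(0u32), 3).is_empty());
assert_eq!(
    Natural::from_power_of_2_digits_asc(3, vec![3u64, 7, 1].into_iter()),
    Some(Natural::from(123u32))
);
```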
#### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u64>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u64>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<u8> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u8, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u8, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u8>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. 
Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u8>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<usize> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<usize, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<usize, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = usize>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. 
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`.
##### Panics
Panics if `log_base` is zero or greater than the width of the digit type.
##### Examples
See here.
#### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural> where I: Iterator<Item = usize>,
Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type.
If some digit is greater than $2^k$, `None` is returned.
$$
f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i.
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`.
##### Panics
Panics if `log_base` is zero or greater than the width of the digit type.
##### Examples
See here.
### impl Primes for Natural
#### fn primes_less_than(n: &Natural) -> NaturalPrimesLessThanIterator
Returns an iterator that generates all primes less than a given value.
The iterator produced by `primes_less_than(n)` generates the same primes as the iterator produced by `primes().take_while(|&p| p < n)`, but the latter would be slower because it doesn’t know in advance how large its prime sieve should be, and might have to create larger and larger prime sieves.
##### Worst-case complexity (amortized)
$T(i) = O(\log \log i)$
$M(i) = O(1)$
where $T$ is time, $M$ is additional memory, and $i$ is the iteration index.
##### Examples
See here.
#### fn primes_less_than_or_equal_to(n: &Natural) -> NaturalPrimesLessThanIterator
Returns an iterator that generates all primes less than or equal to a given value.
The iterator produced by `primes_less_than_or_equal_to(n)` generates the same primes as the iterator produced by `primes().take_while(|&p| p <= n)`, but the latter would be slower because it doesn’t know in advance how large its prime sieve should be, and might have to create larger and larger prime sieves.
##### Worst-case complexity (amortized)
$T(i) = O(\log \log i)$
$M(i) = O(1)$
where $T$ is time, $M$ is additional memory, and $i$ is the iteration index.
##### Examples
See here.
#### fn primes() -> NaturalPrimesIterator
Returns all `Natural` primes.
##### Worst-case complexity (amortized)
$T(i) = O(\log \log i)$
$M(i) = O(1)$
where $T$ is time, $M$ is additional memory, and $i$ is the iteration index.
##### Examples
See here.
#### type I = NaturalPrimesIterator
#### type LI = NaturalPrimesLessThanIterator
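The prime generators above also only link their examples. Below is a small sketch of `primes_less_than` and `primes()`; the method names and semantics come from the impl above, but the `use` path for the `Primes` trait is an assumption, since it is not shown in this extract.
```
use itertools::Itertools;
// Assumed import path for the `Primes` trait; adjust to wherever the trait lives in malachite_base.
use malachite_base::num::factorization::traits::Primes;
use malachite_nz::natural::Natural;

// All primes below 20, generated from a sieve sized up front.
let below_20 = Natural::primes_less_than(&Natural::from(20u32)).collect_vec();
assert_eq!(
    below_20,
    [2u32, 3, 5, 7, 11, 13, 17, 19].iter().copied().map(Natural::from).collect_vec()
);

// The unbounded generator: the first five primes are 2, 3, 5, 7, 11.
let first_five = Natural::primes().take(5).collect_vec();
assert_eq!(first_five.last(), Some(&Natural::from(11u32)));
```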
### impl Primorial for Natural
#### fn primorial(n: u64) -> Natural
Computes the primorial of a `Natural`: the product of all primes less than or equal to it.
The `product_of_first_n_primes` function is similar; it computes the primorial of the $n$th prime.
$$
f(n) = n\# = \prod_{p \leq n \atop p\ \text{prime}} p.
$$
$n\# = O(e^{(1+o(1))n})$.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
##### Examples
```
use malachite_base::num::arithmetic::traits::Primorial;
use malachite_nz::natural::Natural;

assert_eq!(Natural::primorial(0), 1);
assert_eq!(Natural::primorial(1), 1);
assert_eq!(Natural::primorial(2), 2);
assert_eq!(Natural::primorial(3), 6);
assert_eq!(Natural::primorial(4), 6);
assert_eq!(Natural::primorial(5), 30);
assert_eq!(Natural::primorial(100).to_string(), "2305567963945518424753102147331756070");
```
This is equivalent to `mpz_primorial_ui` from `mpz/primorial_ui.c`, GMP 6.2.1.
#### fn product_of_first_n_primes(n: u64) -> Natural
Computes the product of the first $n$ primes.
The `primorial` function is similar; it computes the product of all primes less than or equal to $n$.
$$
f(n) = p_n\# = \prod_{k=1}^n p_k,
$$
where $p_n$ is the $n$th prime number.
$p_n\# = O\left(\left(\frac{1}{e}k\log k\left(\frac{\log k}{e^2}k\right)^{1/\log k}\right)^k\omega(1)\right)$. This asymptotic approximation is due to <NAME>ichels.
##### Worst-case complexity
$T(n) = O(n (\log n)^2 \log\log n)$
$M(n) = O(n \log n)$
##### Examples
```
use malachite_base::num::arithmetic::traits::Primorial;
use malachite_nz::natural::Natural;

assert_eq!(Natural::product_of_first_n_primes(0), 1);
assert_eq!(Natural::product_of_first_n_primes(1), 2);
assert_eq!(Natural::product_of_first_n_primes(2), 6);
assert_eq!(Natural::product_of_first_n_primes(3), 30);
assert_eq!(Natural::product_of_first_n_primes(4), 210);
assert_eq!(Natural::product_of_first_n_primes(5), 2310);
assert_eq!(
    Natural::product_of_first_n_primes(100).to_string(),
    "4711930799906184953162487834760260422020574773409675520188634839616415335845034221205\
    28925670554468197243910409777715799180438028421831503871944494399049257903072063599053\
    8452312528339864352999310398481791730017201031090"
);
```
### impl<'a> Product<&'a Natural> for Natural
#### fn product<I>(xs: I) -> Natural where I: Iterator<Item = &'a Natural>,
Multiplies together all the `Natural`s in an iterator of `Natural` references.
$$
f((x_i)_ {i=0}^{n-1}) = \prod_ {i=0}^{n-1} x_i.
$$
##### Worst-case complexity
$T(n) = O(n (\log n)^2 \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `Natural::sum(xs.map(Natural::significant_bits))`.
##### Examples
```
use malachite_base::vecs::vec_from_str;
use malachite_nz::natural::Natural;
use std::iter::Product;

assert_eq!(Natural::product(vec_from_str::<Natural>("[2, 3, 5, 7]").unwrap().iter()), 210);
```
### impl Product<Natural> for Natural
#### fn product<I>(xs: I) -> Natural where I: Iterator<Item = Natural>,
Multiplies together all the `Natural`s in an iterator.
$$
f((x_i)_ {i=0}^{n-1}) = \prod_ {i=0}^{n-1} x_i.
$$
##### Worst-case complexity
$T(n) = O(n (\log n)^2 \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `Natural::sum(xs.map(Natural::significant_bits))`.
##### Examples
```
use malachite_base::vecs::vec_from_str;
use malachite_nz::natural::Natural;
use std::iter::Product;

assert_eq!(
    Natural::product(vec_from_str::<Natural>("[2, 3, 5, 7]").unwrap().into_iter()),
    210
);
```
### impl<'a, 'b> Rem<&'b Natural> for &'a Natural
#### fn rem(self, other: &'b Natural) -> Natural
Divides a `Natural` by another `Natural`, taking both by reference and returning just the remainder.
If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.
$$
f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$
For `Natural`s, `rem` is equivalent to `mod_op`.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `other` is zero.
##### Examples
```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
assert_eq!(&Natural::from(23u32) % &Natural::from(10u32), 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
assert_eq!(
    &Natural::from_str("1000000000000000000000000").unwrap() %
        &Natural::from_str("1234567890987").unwrap(),
    530068894399u64
);
```
#### type Output = Natural
The resulting type after applying the `%` operator.
### impl<'a> Rem<&'a Natural> for Natural
#### fn rem(self, other: &'a Natural) -> Natural
Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning just the remainder.
If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.
$$
f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$
For `Natural`s, `rem` is equivalent to `mod_op`.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `other` is zero.
##### Examples
```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
assert_eq!(Natural::from(23u32) % &Natural::from(10u32), 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
assert_eq!(
    Natural::from_str("1000000000000000000000000").unwrap() %
        &Natural::from_str("1234567890987").unwrap(),
    530068894399u64
);
```
#### type Output = Natural
The resulting type after applying the `%` operator.
### impl<'a> Rem<Natural> for &'a Natural
#### fn rem(self, other: Natural) -> Natural
Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning just the remainder.
If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.
$$
f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$
For `Natural`s, `rem` is equivalent to `mod_op`.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `other` is zero.
##### Examples
```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
assert_eq!(&Natural::from(23u32) % Natural::from(10u32), 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
assert_eq!(
    &Natural::from_str("1000000000000000000000000").unwrap() %
        Natural::from_str("1234567890987").unwrap(),
    530068894399u64
);
```
#### type Output = Natural
The resulting type after applying the `%` operator.
### impl Rem<Natural> for Natural
#### fn rem(self, other: Natural) -> Natural
Divides a `Natural` by another `Natural`, taking both by value and returning just the remainder.
If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.
$$
f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$
For `Natural`s, `rem` is equivalent to `mod_op`.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `other` is zero.
##### Examples
```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
assert_eq!(Natural::from(23u32) % Natural::from(10u32), 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
assert_eq!(
    Natural::from_str("1000000000000000000000000").unwrap() %
        Natural::from_str("1234567890987").unwrap(),
    530068894399u64
);
```
#### type Output = Natural
The resulting type after applying the `%` operator.
### impl<'a> RemAssign<&'a Natural> for Natural
#### fn rem_assign(&mut self, other: &'a Natural)
Divides a `Natural` by another `Natural`, taking the second `Natural` by reference and replacing the first by the remainder.
If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.
$$
x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$
For `Natural`s, `rem_assign` is equivalent to `mod_assign`.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `other` is zero.
##### Examples
```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
let mut x = Natural::from(23u32);
x %= &Natural::from(10u32);
assert_eq!(x, 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
let mut x = Natural::from_str("1000000000000000000000000").unwrap();
x %= &Natural::from_str("1234567890987").unwrap();
assert_eq!(x, 530068894399u64);
```
### impl RemAssign<Natural> for Natural
#### fn rem_assign(&mut self, other: Natural)
Divides a `Natural` by another `Natural`, taking the second `Natural` by value and replacing the first by the remainder.
If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.
$$
x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$
For `Natural`s, `rem_assign` is equivalent to `mod_assign`.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics
Panics if `other` is zero.
##### Examples
```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
let mut x = Natural::from(23u32);
x %= Natural::from(10u32);
assert_eq!(x, 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
let mut x = Natural::from_str("1000000000000000000000000").unwrap();
x %= Natural::from_str("1234567890987").unwrap();
assert_eq!(x, 530068894399u64);
```
### impl<'a> RemPowerOf2 for &'a Natural
#### fn rem_power_of_2(self, pow: u64) -> Natural
Divides a `Natural` by $2^k$, returning just the remainder. The `Natural` is taken by reference.
If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$.
$$
f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor.
$$
For `Natural`s, `rem_power_of_2` is equivalent to `mod_power_of_2`.
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::RemPowerOf2;
use malachite_nz::natural::Natural;

// 1 * 2^8 + 4 = 260
assert_eq!((&Natural::from(260u32)).rem_power_of_2(8), 4);
// 100 * 2^4 + 11 = 1611
assert_eq!((&Natural::from(1611u32)).rem_power_of_2(4), 11);
```
#### type Output = Natural
### impl RemPowerOf2 for Natural
#### fn rem_power_of_2(self, pow: u64) -> Natural
Divides a `Natural` by $2^k$, returning just the remainder.
The `Natural` is taken by value. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ For `Natural`s, `rem_power_of_2` is equivalent to `mod_power_of_2`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 assert_eq!(Natural::from(260u32).rem_power_of_2(8), 4); // 100 * 2^4 + 11 = 1611 assert_eq!(Natural::from(1611u32).rem_power_of_2(4), 11); ``` #### type Output = Natural ### impl RemPowerOf2Assign for Natural #### fn rem_power_of_2_assign(&mut self, pow: u64) Divides a `Natural` by $2^k$, replacing the first `Natural` by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ x \gets x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ For `Natural`s, `rem_power_of_2_assign` is equivalent to `mod_power_of_2_assign`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2Assign; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 let mut x = Natural::from(260u32); x.rem_power_of_2_assign(8); assert_eq!(x, 4); // 100 * 2^4 + 11 = 1611 let mut x = Natural::from(1611u32); x.rem_power_of_2_assign(4); assert_eq!(x, 11); ``` ### impl RootAssignRem<u64> for Natural #### fn root_assign_rem(&mut self, exp: u64) -> Natural Replaces a `Natural` with the floor of its $n$th root, and returns the remainder (the difference between the original `Natural` and the $n$th power of the floor). $f(x, n) = x - \lfloor\sqrt[n]{x}\rfloor^n$, $x \gets \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::RootAssignRem; use malachite_nz::natural::Natural; let mut x = Natural::from(999u16); assert_eq!(x.root_assign_rem(3), 270); assert_eq!(x, 9); let mut x = Natural::from(1000u16); assert_eq!(x.root_assign_rem(3), 0); assert_eq!(x, 10); let mut x = Natural::from(1001u16); assert_eq!(x.root_assign_rem(3), 1); assert_eq!(x, 10); let mut x = Natural::from(100000000000u64); assert_eq!(x.root_assign_rem(5), 1534195232); assert_eq!(x, 158); ``` #### type RemOutput = Natural ### impl<'a> RootRem<u64> for &'a Natural #### fn root_rem(self, exp: u64) -> (Natural, Natural) Returns the floor of the $n$th root of a `Natural`, and the remainder (the difference between the `Natural` and the $n$th power of the floor). The `Natural` is taken by reference. $f(x, n) = (\lfloor\sqrt[n]{x}\rfloor, x - \lfloor\sqrt[n]{x}\rfloor^n)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RootRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(999u16)).root_rem(3).to_debug_string(), "(9, 270)"); assert_eq!((&Natural::from(1000u16)).root_rem(3).to_debug_string(), "(10, 0)"); assert_eq!((&Natural::from(1001u16)).root_rem(3).to_debug_string(), "(10, 1)"); assert_eq!( (&Natural::from(100000000000u64)).root_rem(5).to_debug_string(), "(158, 1534195232)" ); ``` #### type RootOutput = Natural #### type RemOutput = Natural ### impl RootRem<u64> for Natural #### fn root_rem(self, exp: u64) -> (Natural, Natural) Returns the floor of the $n$th root of a `Natural`, and the remainder (the difference between the `Natural` and the $n$th power of the floor). The `Natural` is taken by value. $f(x, n) = (\lfloor\sqrt[n]{x}\rfloor, x - \lfloor\sqrt[n]{x}\rfloor^n)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::RootRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).root_rem(3).to_debug_string(), "(9, 270)"); assert_eq!(Natural::from(1000u16).root_rem(3).to_debug_string(), "(10, 0)"); assert_eq!(Natural::from(1001u16).root_rem(3).to_debug_string(), "(10, 1)"); assert_eq!( Natural::from(100000000000u64).root_rem(5).to_debug_string(), "(158, 1534195232)" ); ``` #### type RootOutput = Natural #### type RemOutput = Natural ### impl<'a, 'b> RoundToMultiple<&'b Natural> for &'a Natural #### fn round_to_multiple( self, other: &'b Natural, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of another `Natural`, according to a specified rounding mode. Both `Natural`s are taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \N$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(5u32)).round_to_multiple(&Natural::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(4u32), RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(4u32), RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(5u32), RoundingMode::Exact) .to_debug_string(), "(10, Equal)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(9, Less)" ); assert_eq!( (&Natural::from(20u32)).round_to_multiple(&Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(21, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(14u32)).round_to_multiple(&Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(16, Greater)" ); ``` #### type Output = Natural ### impl<'a> RoundToMultiple<&'a Natural> for Natural #### fn round_to_multiple( self, other: &'a Natural, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of another `Natural`, according to a specified rounding mode. The first `Natural` is taken by value and the second by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \N$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(5u32).round_to_multiple(&Natural::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(4u32), RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(4u32), RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(5u32), RoundingMode::Exact) .to_debug_string(), "(10, Equal)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(9, Less)" ); assert_eq!( Natural::from(20u32).round_to_multiple(&Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(21, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(14u32).round_to_multiple(&Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(16, Greater)" ); ``` #### type Output = Natural ### impl<'a> RoundToMultiple<Natural> for &'a Natural #### fn round_to_multiple( self, other: Natural, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of another `Natural`, according to a specified rounding mode. The first `Natural` is taken by reference and the second by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \N$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(5u32)).round_to_multiple(Natural::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(4u32), RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(4u32), RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(5u32), RoundingMode::Exact) .to_debug_string(), "(10, Equal)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(9, Less)" ); assert_eq!( (&Natural::from(20u32)).round_to_multiple(Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(21, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(14u32)).round_to_multiple(Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(16, Greater)" ); ``` #### type Output = Natural ### impl RoundToMultiple<Natural> for Natural #### fn round_to_multiple( self, other: Natural, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of another `Natural`, according to a specified rounding mode. Both `Natural`s are taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \N$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(5u32).round_to_multiple(Natural::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(4u32), RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(4u32), RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(5u32), RoundingMode::Exact) .to_debug_string(), "(10, Equal)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(9, Less)" ); assert_eq!( Natural::from(20u32).round_to_multiple(Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(21, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(14u32).round_to_multiple(Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(16, Greater)" ); ``` #### type Output = Natural ### impl<'a> RoundToMultipleAssign<&'a Natural> for Natural #### fn round_to_multiple_assign( &mut self, other: &'a Natural, rm: RoundingMode ) -> Ordering Rounds a `Natural` to a multiple of another `Natural` in place, according to a specified rounding mode. The `Natural` on the right-hand side is taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_assign(other, RoundingMode::Exact);` * `assert!(x.divisible_by(other));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut x = Natural::from(5u32); assert_eq!(x.round_to_multiple_assign(&Natural::ZERO, RoundingMode::Down), Ordering::Less); assert_eq!(x, 0); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(4u32), RoundingMode::Down), Ordering::Less ); assert_eq!(x, 8); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(4u32), RoundingMode::Up), Ordering::Greater ); assert_eq!(x, 12); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(5u32), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, 10); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(3u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, 9); let mut x = Natural::from(20u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(3u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, 21); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(4u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, 8); let mut x = Natural::from(14u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(4u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, 16); ``` ### impl RoundToMultipleAssign<Natural> for Natural #### fn round_to_multiple_assign( &mut self, other: Natural, rm: RoundingMode ) -> Ordering Rounds a `Natural` to a multiple of another `Natural` in place, according to a specified rounding mode. The `Natural` on the right-hand side is taken by value. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_assign(other, RoundingMode::Exact);` * `assert!(x.divisible_by(other));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut x = Natural::from(5u32); assert_eq!(x.round_to_multiple_assign(Natural::ZERO, RoundingMode::Down), Ordering::Less); assert_eq!(x, 0); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Down), Ordering::Less ); assert_eq!(x, 8); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Up), Ordering::Greater ); assert_eq!(x, 12); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(5u32), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, 10); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(3u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, 9); let mut x = Natural::from(20u32); assert_eq!( x.round_to_multiple_assign(Natural::from(3u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, 21); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, 8); let mut x = Natural::from(14u32); assert_eq!( x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, 16); ``` ### impl<'a> RoundToMultipleOfPowerOf2<u64> for &'a Natural #### fn round_to_multiple_of_power_of_2( self, pow: u64, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of $2^k$ according to a specified rounding mode. The `Natural` is taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2(pow, RoundingMode::Exact)` * `{ assert!(x.divisible_by_power_of_2(pow)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Floor) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Ceiling) .to_debug_string(), "(12, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(12u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Exact) .to_debug_string(), "(12, Equal)" ); ``` #### type Output = Natural ### impl RoundToMultipleOfPowerOf2<u64> for Natural #### fn round_to_multiple_of_power_of_2( self, pow: u64, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of $2^k$ according to a specified rounding mode. The `Natural` is taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2(pow, RoundingMode::Exact)` * `{ assert!(x.divisible_by_power_of_2(pow)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Floor) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Ceiling) .to_debug_string(), "(12, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(12u32).round_to_multiple_of_power_of_2(2, RoundingMode::Exact) .to_debug_string(), "(12, Equal)" ); ``` #### type Output = Natural ### impl RoundToMultipleOfPowerOf2Assign<u64> for Natural #### fn round_to_multiple_of_power_of_2_assign( &mut self, pow: u64, rm: RoundingMode ) -> Ordering Rounds a `Natural` to a multiple of $2^k$ in place, according to a specified rounding mode. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultipleOfPowerOf2` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2_assign(pow, RoundingMode::Exact);` * `assert!(x.divisible_by_power_of_2(pow));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2Assign; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Floor), Ordering::Less ); assert_eq!(n, 8); let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 12); let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Down), Ordering::Less ); assert_eq!(n, 8); let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Up), Ordering::Greater ); assert_eq!(n, 12); let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Nearest), Ordering::Less ); assert_eq!(n, 8); let mut n = Natural::from(12u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Exact), Ordering::Equal ); assert_eq!(n, 12); ``` ### impl<'a> RoundingFrom<&'a Natural> for f32 #### fn rounding_from(value: &'a Natural, rm: RoundingMode) -> (f32, Ordering) Converts a `Natural` to a primitive float according to a specified `RoundingMode`. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. * If the rounding mode is `Floor` or `Down`, the largest float less than or equal to the `Natural` is returned. 
If the `Natural` is greater than the maximum finite float, then the maximum finite float is returned. * If the rounding mode is `Ceiling` or `Up`, the smallest float greater than or equal to the `Natural` is returned. If the `Natural` is greater than the maximum finite float, then positive infinity is returned. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Natural` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Natural` is greater than the maximum finite float, then the maximum finite float is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Natural> for f64 #### fn rounding_from(value: &'a Natural, rm: RoundingMode) -> (f64, Ordering) Converts a `Natural` to a primitive float according to a specified `RoundingMode`. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. * If the rounding mode is `Floor` or `Down`, the largest float less than or equal to the `Natural` is returned. If the `Natural` is greater than the maximum finite float, then the maximum finite float is returned. * If the rounding mode is `Ceiling` or `Up`, the smallest float greater than or equal to the `Natural` is returned. If the `Natural` is greater than the maximum finite float, then positive infinity is returned. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Natural` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Natural` is greater than the maximum finite float, then the maximum finite float is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Rational> for Natural #### fn rounding_from(x: &Rational, rm: RoundingMode) -> (Natural, Ordering) Converts a `Rational` to a `Natural`, using a specified `RoundingMode` and taking the `Rational` by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. If the `Rational` is negative, then it will be rounded to zero when the `RoundingMode` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, or if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`. 
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Natural::rounding_from(&Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Down).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Ceiling).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Nearest).to_debug_string(), "(0, Greater)" ); ``` ### impl RoundingFrom<Rational> for Natural #### fn rounding_from(x: Rational, rm: RoundingMode) -> (Natural, Ordering) Converts a `Rational` to a `Natural`, using a specified `RoundingMode` and taking the `Rational` by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. If the `Rational` is negative, then it will be rounded to zero when the `RoundingMode` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, or if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`. 
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Natural::rounding_from(Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Down).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Ceiling).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Nearest).to_debug_string(), "(0, Greater)" ); ``` ### impl RoundingFrom<f32> for Natural #### fn rounding_from(value: f32, rm: RoundingMode) -> (Natural, Ordering) Converts a floating-point value to a `Natural`, using the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. The floating-point value cannot be NaN or infinite, and it cannot round to a negative integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Panics Panics if `value` is NaN or infinite, if it would round to a negative integer, or if the rounding mode is `Exact` and `value` is not an integer. ##### Examples See here. ### impl RoundingFrom<f64> for Natural #### fn rounding_from(value: f64, rm: RoundingMode) -> (Natural, Ordering) Converts a floating-point value to a `Natural`, using the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. The floating-point value cannot be NaN or infinite, and it cannot round to a negative integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Panics Panics if `value` is NaN or infinite, if it would round to a negative integer, or if the rounding mode is `Exact` and `value` is not an integer. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for Natural #### fn saturating_from(value: &'a Integer) -> Natural Converts an `Integer` to a `Natural`, taking the `Integer` by reference. If the `Integer` is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SaturatingFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::saturating_from(&Integer::from(123)), 123); assert_eq!(Natural::saturating_from(&Integer::from(-123)), 0); assert_eq!(Natural::saturating_from(&Integer::from(10u32).pow(12)), 1000000000000u64); assert_eq!(Natural::saturating_from(&-Integer::from(10u32).pow(12)), 0); ```
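The `RoundingFrom` impls between `Natural` and primitive floats above only cross-reference their examples ("See here"); the following is a minimal illustrative sketch, not taken from the crate itself, assuming the tuple-returning signatures documented above:

```
use malachite_base::num::conversion::traits::RoundingFrom;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// Natural -> f32: 123 is exactly representable, so Exact succeeds.
assert_eq!(
    f32::rounding_from(&Natural::from(123u32), RoundingMode::Exact),
    (123.0, Ordering::Equal)
);
// 2^24 + 1 is not representable as an f32; Floor rounds down to 2^24.
assert_eq!(
    f32::rounding_from(&Natural::from(16777217u32), RoundingMode::Floor),
    (16777216.0, Ordering::Less)
);
// f32 -> Natural: a fractional value rounds according to the mode.
assert_eq!(
    Natural::rounding_from(2.25f32, RoundingMode::Floor),
    (Natural::from(2u32), Ordering::Less)
);
assert_eq!(
    Natural::rounding_from(2.25f32, RoundingMode::Ceiling),
    (Natural::from(3u32), Ordering::Greater)
);
```

Per the Panics sections above, requesting `Exact` for a value that cannot be represented exactly would panic instead.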
### impl<'a> SaturatingFrom<&'a Natural> for i128 #### fn saturating_from(value: &Natural) -> i128 Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for i16 #### fn saturating_from(value: &Natural) -> i16 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for i32 #### fn saturating_from(value: &Natural) -> i32 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for i64 #### fn saturating_from(value: &Natural) -> i64 Converts a `Natural` to a `SignedLimb` (the signed type whose width is the same as a limb’s). If the `Natural` is too large to fit in a `SignedLimb`, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for i8 #### fn saturating_from(value: &Natural) -> i8 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for isize #### fn saturating_from(value: &Natural) -> isize Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u128 #### fn saturating_from(value: &Natural) -> u128 Converts a `Natural` to a value of an unsigned primitive integer type that’s larger than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u16 #### fn saturating_from(value: &Natural) -> u16 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here.
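The per-type `SaturatingFrom<&Natural>` impls above likewise only cross-reference their examples; a minimal sketch of the saturating behavior, assuming the trait paths used elsewhere on this page:

```
use malachite_base::num::arithmetic::traits::Pow;
use malachite_base::num::conversion::traits::SaturatingFrom;
use malachite_nz::natural::Natural;

// Values that fit in the target type are converted exactly.
assert_eq!(u16::saturating_from(&Natural::from(123u32)), 123);
assert_eq!(i16::saturating_from(&Natural::from(123u32)), 123);
// Values that do not fit saturate at the target type's maximum.
assert_eq!(u16::saturating_from(&Natural::from(100000u32)), u16::MAX);
assert_eq!(i32::saturating_from(&Natural::from(10u32).pow(12)), i32::MAX);
```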
### impl<'a> SaturatingFrom<&'a Natural> for u32 #### fn saturating_from(value: &Natural) -> u32 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u64 #### fn saturating_from(value: &Natural) -> u64 Converts a `Natural` to a `Limb`. If the `Natural` is too large to fit in a `Limb`, the maximum representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u8 #### fn saturating_from(value: &Natural) -> u8 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for usize #### fn saturating_from(value: &Natural) -> usize Converts a `Natural` to a `usize`. If the `Natural` is too large to fit in a `usize`, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<Integer> for Natural #### fn saturating_from(value: Integer) -> Natural Converts an `Integer` to a `Natural`, taking the `Integer` by value. If the `Integer` is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SaturatingFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::saturating_from(Integer::from(123)), 123); assert_eq!(Natural::saturating_from(Integer::from(-123)), 0); assert_eq!(Natural::saturating_from(Integer::from(10u32).pow(12)), 1000000000000u64); assert_eq!(Natural::saturating_from(-Integer::from(10u32).pow(12)), 0); ``` ### impl SaturatingFrom<i128> for Natural #### fn saturating_from(i: i128) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<i16> for Natural #### fn saturating_from(i: i16) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<i32> for Natural #### fn saturating_from(i: i32) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<i64> for Natural #### fn saturating_from(i: i64) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<i8> for Natural #### fn saturating_from(i: i8) -> Natural Converts a signed primitive integer to a `Natural`.
If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<isize> for Natural #### fn saturating_from(i: isize) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a, 'b> SaturatingSub<&'a Natural> for &'b Natural #### fn saturating_sub(self, other: &'a Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking both by reference and returning 0 if the result is negative. $$ f(x, y) = \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSub}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).saturating_sub(&Natural::from(123u32)), 0); assert_eq!((&Natural::from(123u32)).saturating_sub(&Natural::ZERO), 123); assert_eq!((&Natural::from(456u32)).saturating_sub(&Natural::from(123u32)), 333); assert_eq!( (&(Natural::from(10u32).pow(12) * Natural::from(3u32))) .saturating_sub(&Natural::from(10u32).pow(12)), 2000000000000u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSub<&'a Natural> for Natural #### fn saturating_sub(self, other: &'a Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking the first by value and the second by reference and returning 0 if the result is negative. $$ f(x, y) = \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSub}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.saturating_sub(&Natural::from(123u32)), 0); assert_eq!(Natural::from(123u32).saturating_sub(&Natural::ZERO), 123); assert_eq!(Natural::from(456u32).saturating_sub(&Natural::from(123u32)), 333); assert_eq!( (Natural::from(10u32).pow(12) * Natural::from(3u32)) .saturating_sub(&Natural::from(10u32).pow(12)), 2000000000000u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSub<Natural> for &'a Natural #### fn saturating_sub(self, other: Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking the first by reference and the second by value and returning 0 if the result is negative. $$ f(x, y) = \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSub}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).saturating_sub(Natural::from(123u32)), 0); assert_eq!((&Natural::from(123u32)).saturating_sub(Natural::ZERO), 123); assert_eq!((&Natural::from(456u32)).saturating_sub(Natural::from(123u32)), 333); assert_eq!( (&(Natural::from(10u32).pow(12) * Natural::from(3u32))) .saturating_sub(Natural::from(10u32).pow(12)), 2000000000000u64 ); ``` #### type Output = Natural ### impl SaturatingSub<Natural> for Natural #### fn saturating_sub(self, other: Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking both by value and returning 0 if the result is negative.
$$ f(x, y) = \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSub}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.saturating_sub(Natural::from(123u32)), 0); assert_eq!(Natural::from(123u32).saturating_sub(Natural::ZERO), 123); assert_eq!(Natural::from(456u32).saturating_sub(Natural::from(123u32)), 333); assert_eq!( (Natural::from(10u32).pow(12) * Natural::from(3u32)) .saturating_sub(Natural::from(10u32).pow(12)), 2000000000000u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSubAssign<&'a Natural> for Natural #### fn saturating_sub_assign(&mut self, other: &'a Natural) Subtracts a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and setting the left-hand side to 0 if the result is negative. $$ x \gets \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SaturatingSubAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::from(123u32); x.saturating_sub_assign(&Natural::from(123u32)); assert_eq!(x, 0); let mut x = Natural::from(123u32); x.saturating_sub_assign(&Natural::ZERO); assert_eq!(x, 123); let mut x = Natural::from(456u32); x.saturating_sub_assign(&Natural::from(123u32)); assert_eq!(x, 333); let mut x = Natural::from(123u32); x.saturating_sub_assign(&Natural::from(456u32)); assert_eq!(x, 0); ``` ### impl SaturatingSubAssign<Natural> for Natural #### fn saturating_sub_assign(&mut self, other: Natural) Subtracts a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value and setting the left-hand side to 0 if the result is negative. $$ x \gets \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SaturatingSubAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::from(123u32); x.saturating_sub_assign(Natural::from(123u32)); assert_eq!(x, 0); let mut x = Natural::from(123u32); x.saturating_sub_assign(Natural::ZERO); assert_eq!(x, 123); let mut x = Natural::from(456u32); x.saturating_sub_assign(Natural::from(123u32)); assert_eq!(x, 333); let mut x = Natural::from(123u32); x.saturating_sub_assign(Natural::from(456u32)); assert_eq!(x, 0); ``` ### impl<'a> SaturatingSubMul<&'a Natural, Natural> for Natural #### fn saturating_sub_mul(self, y: &'a Natural, z: Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first and third by value and the second by reference and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).saturating_sub_mul(&Natural::from(3u32), Natural::from(4u32)), 8 ); assert_eq!( Natural::from(10u32).saturating_sub_mul(&Natural::from(3u32), Natural::from(4u32)), 0 ); assert_eq!( Natural::from(10u32).pow(12) .saturating_sub_mul(&Natural::from(0x10000u32), Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a, 'b, 'c> SaturatingSubMul<&'b Natural, &'c Natural> for &'a Natural #### fn saturating_sub_mul(self, y: &'b Natural, z: &'c Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking all three by reference and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(20u32)).saturating_sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 8 ); assert_eq!( (&Natural::from(10u32)).saturating_sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 0 ); assert_eq!( (&Natural::from(10u32).pow(12)) .saturating_sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a, 'b> SaturatingSubMul<&'a Natural, &'b Natural> for Natural #### fn saturating_sub_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first by value and the second and third by reference and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).saturating_sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 8 ); assert_eq!( Natural::from(10u32).saturating_sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 0 ); assert_eq!( Natural::from(10u32).pow(12) .saturating_sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSubMul<Natural, &'a Natural> for Natural #### fn saturating_sub_mul(self, y: Natural, z: &'a Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first two by value and the third by reference and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).saturating_sub_mul(Natural::from(3u32), &Natural::from(4u32)), 8 ); assert_eq!( Natural::from(10u32).saturating_sub_mul(Natural::from(3u32), &Natural::from(4u32)), 0 ); assert_eq!( Natural::from(10u32).pow(12) .saturating_sub_mul(Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl SaturatingSubMul<Natural, Natural> for Natural #### fn saturating_sub_mul(self, y: Natural, z: Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking all three by value and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).saturating_sub_mul(Natural::from(3u32), Natural::from(4u32)), 8 ); assert_eq!( Natural::from(10u32).saturating_sub_mul(Natural::from(3u32), Natural::from(4u32)), 0 ); assert_eq!( Natural::from(10u32).pow(12) .saturating_sub_mul(Natural::from(0x10000u32), Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSubMulAssign<&'a Natural, Natural> for Natural #### fn saturating_sub_mul_assign(&mut self, y: &'a Natural, z: Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking the first `Natural` on the right-hand side by reference and the second by value and replacing the left-hand side `Natural` with 0 if the result is negative. $$ x \gets \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.saturating_sub_mul_assign(&Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32); x.saturating_sub_mul_assign(&Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 0); let mut x = Natural::from(10u32).pow(12); x.saturating_sub_mul_assign(&Natural::from(0x10000u32), Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a, 'b> SaturatingSubMulAssign<&'a Natural, &'b Natural> for Natural #### fn saturating_sub_mul_assign(&mut self, y: &'a Natural, z: &'b Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking both `Natural`s on the right-hand side by reference and replacing the left-hand side `Natural` with 0 if the result is negative. $$ x \gets \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.saturating_sub_mul_assign(&Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32); x.saturating_sub_mul_assign(&Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 0); let mut x = Natural::from(10u32).pow(12); x.saturating_sub_mul_assign(&Natural::from(0x10000u32), &Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a> SaturatingSubMulAssign<Natural, &'a Natural> for Natural #### fn saturating_sub_mul_assign(&mut self, y: Natural, z: &'a Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking the first `Natural` on the right-hand side by value and the second by reference and replacing the left-hand side `Natural` with 0 if the result is negative. $$ x \gets \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.saturating_sub_mul_assign(Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32); x.saturating_sub_mul_assign(Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 0); let mut x = Natural::from(10u32).pow(12); x.saturating_sub_mul_assign(Natural::from(0x10000u32), &Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl SaturatingSubMulAssign<Natural, Natural> for Natural #### fn saturating_sub_mul_assign(&mut self, y: Natural, z: Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking both `Natural`s on the right-hand side by value and replacing the left-hand side `Natural` with 0 if the result is negative. $$ x \gets \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.saturating_sub_mul_assign(Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32); x.saturating_sub_mul_assign(Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 0); let mut x = Natural::from(10u32).pow(12); x.saturating_sub_mul_assign(Natural::from(0x10000u32), Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a> SciMantissaAndExponent<f32, u64, Natural> for &'a Natural #### fn sci_mantissa_and_exponent(self) -> (f32, u64) Returns a `Natural`’s scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor). 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn from_sci_mantissa_and_exponent( sci_mantissa: f32, sci_exponent: u64 ) -> Option<Natural> Constructs a `Natural` from its scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. Some combinations of mantissas and exponents do not specify a `Natural`, in which case the resulting value is rounded to a `Natural` using the `Nearest` rounding mode. To specify other rounding modes, use `from_sci_mantissa_and_exponent_round`. $$ f(x) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. ##### Examples See here. #### fn sci_mantissa(self) -> M Extracts the scientific mantissa from a number. #### fn sci_exponent(self) -> E Extracts the scientific exponent from a number. ### impl<'a> SciMantissaAndExponent<f64, u64, Natural> for &'a Natural #### fn sci_mantissa_and_exponent(self) -> (f64, u64) Returns a `Natural`’s scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn from_sci_mantissa_and_exponent( sci_mantissa: f64, sci_exponent: u64 ) -> Option<Natural> Constructs a `Natural` from its scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. Some combinations of mantissas and exponents do not specify a `Natural`, in which case the resulting value is rounded to a `Natural` using the `Nearest` rounding mode. To specify other rounding modes, use `from_sci_mantissa_and_exponent_round`. $$ f(x) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. ##### Examples See here. #### fn sci_mantissa(self) -> M Extracts the scientific mantissa from a number. #### fn sci_exponent(self) -> E Extracts the scientific exponent from a number. ### impl<'a> Shl<i128> for &'a Natural #### fn shl(self, bits: i128) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here.
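The `SciMantissaAndExponent` and `Shl` impls in this stretch also defer their examples elsewhere ("See here"); a minimal sketch, assuming `SciMantissaAndExponent` is exported from `malachite_base::num::conversion::traits` like the other conversion traits on this page (the fully qualified call merely selects the `f32` impl over the `f64` one):

```
use malachite_base::num::conversion::traits::SciMantissaAndExponent;
use malachite_nz::natural::Natural;

// 3 = 1.5 * 2^1, so the scientific mantissa is 1.5 and the exponent is 1.
let (mantissa, exponent) =
    <&Natural as SciMantissaAndExponent<f32, u64, Natural>>::sci_mantissa_and_exponent(
        &Natural::from(3u32),
    );
assert_eq!(mantissa, 1.5);
assert_eq!(exponent, 1);

// Left shifts multiply by a power of 2; a negative shift divides and takes the floor.
assert_eq!((&Natural::from(123u32)) << 2i128, 492);
assert_eq!((&Natural::from(123u32)) << -3i128, 15);
```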
#### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i128> for Natural #### fn shl(self, bits: i128) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<i16> for &'a Natural #### fn shl(self, bits: i16) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i16> for Natural #### fn shl(self, bits: i16) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<i32> for &'a Natural #### fn shl(self, bits: i32) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i32> for Natural #### fn shl(self, bits: i32) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<i64> for &'a Natural #### fn shl(self, bits: i64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i64> for Natural #### fn shl(self, bits: i64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. 
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<i8> for &'a Natural #### fn shl(self, bits: i8) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i8> for Natural #### fn shl(self, bits: i8) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<isize> for &'a Natural #### fn shl(self, bits: isize) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<isize> for Natural #### fn shl(self, bits: isize) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u128> for &'a Natural #### fn shl(self, bits: u128) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u128> for Natural #### fn shl(self, bits: u128) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u16> for &'a Natural #### fn shl(self, bits: u16) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. 
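A minimal sketch of the signed `<<` impls described above; the comparisons against suffixed integer literals mirror the style of the `SaturatingSubMulAssign` examples earlier on this page:

```
use malachite_nz::natural::Natural;

// A non-negative shift amount multiplies by a power of 2: 5 * 2^3 = 40.
assert_eq!(Natural::from(5u32) << 3i8, 40u32);
// A negative amount divides by a power of 2 and takes the floor: floor(5 / 2) = 2.
assert_eq!(Natural::from(5u32) << -1i8, 2u32);
```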
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u16> for Natural #### fn shl(self, bits: u16) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u32> for &'a Natural #### fn shl(self, bits: u32) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u32> for Natural #### fn shl(self, bits: u32) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u64> for &'a Natural #### fn shl(self, bits: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u64> for Natural #### fn shl(self, bits: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u8> for &'a Natural #### fn shl(self, bits: u8) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u8> for Natural #### fn shl(self, bits: u8) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<usize> for &'a Natural #### fn shl(self, bits: usize) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. 
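A minimal sketch of the unsigned `<<` impls above, showing that the by-reference form leaves the operand usable while the by-value form consumes it:

```
use malachite_nz::natural::Natural;

let x = Natural::from(123u32);
// By reference: `x` is still available afterwards. 123 * 2^8 = 31488.
assert_eq!(&x << 8u32, 31488u32);
// By value: this call consumes `x`.
assert_eq!(x << 8u32, 31488u32);
```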
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<usize> for Natural #### fn shl(self, bits: usize) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl ShlAssign<i128> for Natural #### fn shl_assign(&mut self, bits: i128) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<i16> for Natural #### fn shl_assign(&mut self, bits: i16) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<i32> for Natural #### fn shl_assign(&mut self, bits: i32) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<i64> for Natural #### fn shl_assign(&mut self, bits: i64) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<i8> for Natural #### fn shl_assign(&mut self, bits: i8) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<isize> for Natural #### fn shl_assign(&mut self, bits: isize) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<u128> for Natural #### fn shl_assign(&mut self, bits: u128) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. 
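A minimal sketch of the in-place `<<=` (`ShlAssign`) impls described above, using a signed shift amount so both directions are visible:

```
use malachite_nz::natural::Natural;

let mut x = Natural::from(3u32);
x <<= 10i64; // multiply by 2^10: 3 * 1024 = 3072
assert_eq!(x, 3072u32);
x <<= -2i64; // divide by 2^2 and take the floor: 3072 / 4 = 768
assert_eq!(x, 768u32);
```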
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u16> for Natural #### fn shl_assign(&mut self, bits: u16) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u32> for Natural #### fn shl_assign(&mut self, bits: u32) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u64> for Natural #### fn shl_assign(&mut self, bits: u64) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u8> for Natural #### fn shl_assign(&mut self, bits: u8) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<usize> for Natural #### fn shl_assign(&mut self, bits: usize) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ShlRound<i128> for &'a Natural #### fn shl_round(self, bits: i128, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i128> for Natural #### fn shl_round(self, bits: i128, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShlRound<i16> for &'a Natural #### fn shl_round(self, bits: i16, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. 
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i16> for Natural #### fn shl_round(self, bits: i16, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShlRound<i32> for &'a Natural #### fn shl_round(self, bits: i32, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. 
If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i32> for Natural #### fn shl_round(self, bits: i32, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. 
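A minimal sketch of `ShlRound` with an `i32` shift amount, assuming the trait lives in `malachite_base::num::arithmetic::traits` and `RoundingMode` in `malachite_base::rounding_modes` (the import paths are not shown on this page; only the signature above is):

```
// Assumed import paths; the shl_round signature is taken from this page.
use malachite_base::num::arithmetic::traits::ShlRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// 7 << -1 is exactly 3.5; Floor and Ceiling bracket it.
let (shifted, o) = Natural::from(7u32).shl_round(-1i32, RoundingMode::Floor);
assert_eq!(shifted, 3u32);
assert_eq!(o, Ordering::Less);

let (shifted, o) = Natural::from(7u32).shl_round(-1i32, RoundingMode::Ceiling);
assert_eq!(shifted, 4u32);
assert_eq!(o, Ordering::Greater);

// A non-negative shift never needs rounding, so Exact is safe and reports Equal.
let (shifted, o) = Natural::from(7u32).shl_round(3i32, RoundingMode::Exact);
assert_eq!(shifted, 56u32);
assert_eq!(o, Ordering::Equal);
```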
#### type Output = Natural ### impl<'a> ShlRound<i64> for &'a Natural #### fn shl_round(self, bits: i64, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i64> for Natural #### fn shl_round(self, bits: i64, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. 
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShlRound<i8> for &'a Natural #### fn shl_round(self, bits: i8, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i8> for Natural #### fn shl_round(self, bits: i8, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. 
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShlRound<isize> for &'a Natural #### fn shl_round(self, bits: isize, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<isize> for Natural #### fn shl_round(self, bits: isize, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRoundAssign<i128> for Natural #### fn shl_round_assign(&mut self, bits: i128, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<i16> for Natural #### fn shl_round_assign(&mut self, bits: i16, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. 
##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<i32> for Natural #### fn shl_round_assign(&mut self, bits: i32, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<i64> for Natural #### fn shl_round_assign(&mut self, bits: i64, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<i8> for Natural #### fn shl_round_assign(&mut self, bits: i8, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<isize> for Natural #### fn shl_round_assign(&mut self, bits: isize, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. 
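Before the remaining details, a minimal sketch of the in-place `ShlRoundAssign` just described, under the same import-path assumptions as the `ShlRound` sketch above:

```
// Assumed import paths, as in the ShlRound sketch above.
use malachite_base::num::arithmetic::traits::ShlRoundAssign;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

let mut x = Natural::from(11u32);
// 11 << -2 is exactly 2.75; Nearest rounds to 3, which is greater than the exact value.
let o = x.shl_round_assign(-2i8, RoundingMode::Nearest);
assert_eq!(x, 3u32);
assert_eq!(o, Ordering::Greater);
```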
Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl<'a> Shr<i128> for &'a Natural #### fn shr(self, bits: i128) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i128> for Natural #### fn shr(self, bits: i128) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<i16> for &'a Natural #### fn shr(self, bits: i16) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i16> for Natural #### fn shr(self, bits: i16) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<i32> for &'a Natural #### fn shr(self, bits: i32) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i32> for Natural #### fn shr(self, bits: i32) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<i64> for &'a Natural #### fn shr(self, bits: i64) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i64> for Natural #### fn shr(self, bits: i64) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<i8> for &'a Natural #### fn shr(self, bits: i8) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i8> for Natural #### fn shr(self, bits: i8) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<isize> for &'a Natural #### fn shr(self, bits: isize) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<isize> for Natural #### fn shr(self, bits: isize) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
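A minimal sketch of the signed `>>` impls described above:

```
use malachite_nz::natural::Natural;

// A non-negative shift amount divides by a power of 2 and takes the floor: floor(100 / 8) = 12.
assert_eq!(Natural::from(100u32) >> 3i32, 12u32);
// A negative amount multiplies by a power of 2: 100 * 8 = 800.
assert_eq!(Natural::from(100u32) >> -3i32, 800u32);
```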
#### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u128> for &'a Natural #### fn shr(self, bits: u128) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u128> for Natural #### fn shr(self, bits: u128) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u16> for &'a Natural #### fn shr(self, bits: u16) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u16> for Natural #### fn shr(self, bits: u16) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u32> for &'a Natural #### fn shr(self, bits: u32) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u32> for Natural #### fn shr(self, bits: u32) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u64> for &'a Natural #### fn shr(self, bits: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
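A minimal sketch of the unsigned `>>` impls above; `Pow` is imported as in the `SaturatingSubMulAssign` examples earlier on this page:

```
use malachite_base::num::arithmetic::traits::Pow;
use malachite_nz::natural::Natural;

let x = Natural::from(10u32).pow(12);
// Shifting a reference leaves `x` usable afterwards: 10^12 / 2^10 = 976562500 exactly.
assert_eq!(&x >> 10u32, 976562500u32);
assert_eq!(x, 1000000000000u64);
```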
#### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u64> for Natural #### fn shr(self, bits: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u8> for &'a Natural #### fn shr(self, bits: u8) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u8> for Natural #### fn shr(self, bits: u8) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<usize> for &'a Natural #### fn shr(self, bits: usize) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<usize> for Natural #### fn shr(self, bits: usize) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl ShrAssign<i128> for Natural #### fn shr_assign(&mut self, bits: i128) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i16> for Natural #### fn shr_assign(&mut self, bits: i16) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
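A minimal sketch of the in-place `>>=` (`ShrAssign`) impls with a signed shift amount:

```
use malachite_nz::natural::Natural;

let mut x = Natural::from(1000u32);
x >>= 4i16; // floor(1000 / 16) = 62
assert_eq!(x, 62u32);
x >>= -4i16; // a negative amount multiplies: 62 * 16 = 992
assert_eq!(x, 992u32);
```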
### impl ShrAssign<i32> for Natural #### fn shr_assign(&mut self, bits: i32) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i64> for Natural #### fn shr_assign(&mut self, bits: i64) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i8> for Natural #### fn shr_assign(&mut self, bits: i8) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<isize> for Natural #### fn shr_assign(&mut self, bits: isize) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u128> for Natural #### fn shr_assign(&mut self, bits: u128) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u16> for Natural #### fn shr_assign(&mut self, bits: u16) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u32> for Natural #### fn shr_assign(&mut self, bits: u32) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u64> for Natural #### fn shr_assign(&mut self, bits: u64) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u8> for Natural #### fn shr_assign(&mut self, bits: u8) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. 
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(1)$
where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`.
##### Examples
See here.
### impl ShrAssign<usize> for Natural
#### fn shr_assign(&mut self, bits: usize)
Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place.
$$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(1)$
where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`.
##### Examples
See here.
### impl<'a> ShrRound<i128> for &'a Natural
#### fn shr_round(self, bits: i128, rm: RoundingMode) -> (Natural, Ordering)
Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost.
Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:
$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$
$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$
$$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$
$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`.
##### Panics
Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.
##### Examples
See here.
#### type Output = Natural
### impl ShrRound<i128> for Natural
#### fn shr_round(self, bits: i128, rm: RoundingMode) -> (Natural, Ordering)
Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost.
Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<i16> for &'a Natural #### fn shr_round(self, bits: i16, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<i16> for Natural #### fn shr_round(self, bits: i16, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<i32> for &'a Natural #### fn shr_round(self, bits: i32, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. 
##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<i32> for Natural #### fn shr_round(self, bits: i32, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<i64> for &'a Natural #### fn shr_round(self, bits: i64, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. 
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`.
##### Panics
Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.
##### Examples
See here.
#### type Output = Natural
### impl ShrRound<i64> for Natural
#### fn shr_round(self, bits: i64, rm: RoundingMode) -> (Natural, Ordering)
Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost.
Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:
$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$
$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$
$$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$
$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`.
##### Panics
Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.
##### Examples
See here.
#### type Output = Natural
### impl<'a> ShrRound<i8> for &'a Natural
#### fn shr_round(self, bits: i8, rm: RoundingMode) -> (Natural, Ordering)
Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost.
Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<i8> for Natural #### fn shr_round(self, bits: i8, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<isize> for &'a Natural #### fn shr_round(self, bits: isize, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<isize> for Natural #### fn shr_round(self, bits: isize, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. 
##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<u128> for &'a Natural #### fn shr_round(self, bits: u128, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<u128> for Natural #### fn shr_round(self, bits: u128, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. 
Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<u16> for &'a Natural #### fn shr_round(self, bits: u16, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<u16> for Natural #### fn shr_round(self, bits: u16, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. 
##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<u32> for &'a Natural #### fn shr_round(self, bits: u32, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<u32> for Natural #### fn shr_round(self, bits: u32, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. 
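Since the examples above only link elsewhere, here is a minimal illustrative sketch of the unsigned `shr_round` behaviour and the returned `Ordering`, assuming the usual `malachite_base::rounding_modes::RoundingMode` import path:

```
use malachite_base::num::arithmetic::traits::ShrRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// 0x101 / 2^8 = 1.00390625; the Ordering compares the result to the exact quotient.
let x = Natural::from(0x101u32);
assert_eq!((&x).shr_round(8u32, RoundingMode::Floor), (Natural::from(1u32), Ordering::Less));
assert_eq!((&x).shr_round(8u32, RoundingMode::Ceiling), (Natural::from(2u32), Ordering::Greater));
assert_eq!((&x).shr_round(8u32, RoundingMode::Nearest), (Natural::from(1u32), Ordering::Less));
// 96 = 3 * 2^5, so an Exact shift by 5 is allowed and reports Equal.
assert_eq!(Natural::from(96u32).shr_round(5u32, RoundingMode::Exact), (Natural::from(3u32), Ordering::Equal));
```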
#### type Output = Natural ### impl<'a> ShrRound<u64> for &'a Natural #### fn shr_round(self, bits: u64, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<u64> for Natural #### fn shr_round(self, bits: u64, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. 
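A small illustrative sketch (assuming the `RoundingMode` import path) of the equivalence noted above between `RoundingMode::Floor`/`RoundingMode::Down` and the plain `>>` operator:

```
use malachite_base::num::arithmetic::traits::ShrRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;

let x = Natural::from(1000000u32);
// For a Natural, Floor and Down coincide with `>>`.
let (floored, _) = (&x).shr_round(7u64, RoundingMode::Floor);
let (down, _) = (&x).shr_round(7u64, RoundingMode::Down);
assert_eq!(floored, &x >> 7u64);
assert_eq!(down, &x >> 7u64);
```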
#### type Output = Natural ### impl<'a> ShrRound<u8> for &'a Natural #### fn shr_round(self, bits: u8, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<u8> for Natural #### fn shr_round(self, bits: u8, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. 
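An illustrative sketch of the tie-breaking rule in the `Nearest` case above: exact halves round to the even neighbour.

```
use malachite_base::num::arithmetic::traits::ShrRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// 10 / 4 = 2.5 rounds down to the even value 2; 14 / 4 = 3.5 rounds up to the even value 4.
assert_eq!(Natural::from(10u32).shr_round(2u8, RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less));
assert_eq!(Natural::from(14u32).shr_round(2u8, RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater));
```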
#### type Output = Natural ### impl<'a> ShrRound<usize> for &'a Natural #### fn shr_round(self, bits: usize, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<usize> for Natural #### fn shr_round(self, bits: usize, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. 
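One more illustrative sketch for the unsigned impls above (assuming the `One`, `Zero`, and `RoundingMode` import paths): shifting past all significant bits yields 0 or 1 depending on the rounding mode.

```
use malachite_base::num::arithmetic::traits::ShrRound;
use malachite_base::num::basic::traits::{One, Zero};
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

let x = Natural::from(123u32);
// 123 / 2^100 is a tiny positive value, so Floor gives 0 and Ceiling gives 1.
assert_eq!((&x).shr_round(100usize, RoundingMode::Floor), (Natural::ZERO, Ordering::Less));
assert_eq!(x.shr_round(100usize, RoundingMode::Ceiling), (Natural::ONE, Ordering::Greater));
```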
#### type Output = Natural ### impl ShrRoundAssign<i128> for Natural #### fn shr_round_assign(&mut self, bits: i128, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i16> for Natural #### fn shr_round_assign(&mut self, bits: i16, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i32> for Natural #### fn shr_round_assign(&mut self, bits: i32, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i64> for Natural #### fn shr_round_assign(&mut self, bits: i64, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. 
If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i8> for Natural #### fn shr_round_assign(&mut self, bits: i8, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<isize> for Natural #### fn shr_round_assign(&mut self, bits: isize, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u128> for Natural #### fn shr_round_assign(&mut self, bits: u128, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. 
Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u16> for Natural #### fn shr_round_assign(&mut self, bits: u16, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u32> for Natural #### fn shr_round_assign(&mut self, bits: u32, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u64> for Natural #### fn shr_round_assign(&mut self, bits: u64, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u8> for Natural #### fn shr_round_assign(&mut self, bits: u8, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. 
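A minimal illustrative sketch (assuming the `RoundingMode` import path) of the in-place variant documented above; the returned `Ordering` compares the new value to the exact quotient.

```
use malachite_base::num::arithmetic::traits::ShrRoundAssign;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

let mut x = Natural::from(1001u32);
// 1001 / 8 = 125.125, rounded up to 126.
assert_eq!(x.shr_round_assign(3u8, RoundingMode::Ceiling), Ordering::Greater);
assert_eq!(x, 126u32);
// 126 / 2 = 63 exactly, so Exact is allowed and reports Equal.
assert_eq!(x.shr_round_assign(1u8, RoundingMode::Exact), Ordering::Equal);
assert_eq!(x, 63u32);
```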
### impl ShrRoundAssign<usize> for Natural #### fn shr_round_assign(&mut self, bits: usize, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl Sign for Natural #### fn sign(&self) -> Ordering Compares a `Natural` to zero. Returns `Greater` or `Equal` depending on whether the `Natural` is positive or zero, respectively. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Sign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!(Natural::ZERO.sign(), Ordering::Equal); assert_eq!(Natural::from(123u32).sign(), Ordering::Greater); ``` ### impl<'a> SignificantBits for &'a Natural #### fn significant_bits(self) -> u64 Returns the number of significant bits of a `Natural`. $$ f(n) = \begin{cases} 0 & \text{if} \quad n = 0, \\ \lfloor \log_2 n \rfloor + 1 & \text{if} \quad n > 0. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::logic::traits::SignificantBits; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.significant_bits(), 0); assert_eq!(Natural::from(100u32).significant_bits(), 7); ``` ### impl SqrtAssignRem for Natural #### fn sqrt_assign_rem(&mut self) -> Natural Replaces a `Natural` with the floor of its square root and returns the remainder (the difference between the original `Natural` and the square of the floor). $f(x) = x - \lfloor\sqrt{x}\rfloor^2$, $x \gets \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SqrtAssignRem; use malachite_nz::natural::Natural; let mut x = Natural::from(99u8); assert_eq!(x.sqrt_assign_rem(), 18); assert_eq!(x, 9); let mut x = Natural::from(100u8); assert_eq!(x.sqrt_assign_rem(), 0); assert_eq!(x, 10); let mut x = Natural::from(101u8); assert_eq!(x.sqrt_assign_rem(), 1); assert_eq!(x, 10); let mut x = Natural::from(1000000000u32); assert_eq!(x.sqrt_assign_rem(), 49116); assert_eq!(x, 31622); let mut x = Natural::from(10000000000u64); assert_eq!(x.sqrt_assign_rem(), 0); assert_eq!(x, 100000); ``` #### type RemOutput = Natural ### impl<'a> SqrtRem for &'a Natural #### fn sqrt_rem(self) -> (Natural, Natural) Returns the floor of the square root of a `Natural` and the remainder (the difference between the `Natural` and the square of the floor). The `Natural` is taken by reference. $f(x) = (\lfloor\sqrt{x}\rfloor, x - \lfloor\sqrt{x}\rfloor^2)$. 
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SqrtRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(99u8)).sqrt_rem().to_debug_string(), "(9, 18)"); assert_eq!((&Natural::from(100u8)).sqrt_rem().to_debug_string(), "(10, 0)"); assert_eq!((&Natural::from(101u8)).sqrt_rem().to_debug_string(), "(10, 1)"); assert_eq!((&Natural::from(1000000000u32)).sqrt_rem().to_debug_string(), "(31622, 49116)"); assert_eq!((&Natural::from(10000000000u64)).sqrt_rem().to_debug_string(), "(100000, 0)"); ``` #### type SqrtOutput = Natural #### type RemOutput = Natural ### impl SqrtRem for Natural #### fn sqrt_rem(self) -> (Natural, Natural) Returns the floor of the square root of a `Natural` and the remainder (the difference between the `Natural` and the square of the floor). The `Natural` is taken by value. $f(x) = (\lfloor\sqrt{x}\rfloor, x - \lfloor\sqrt{x}\rfloor^2)$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SqrtRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).sqrt_rem().to_debug_string(), "(9, 18)"); assert_eq!(Natural::from(100u8).sqrt_rem().to_debug_string(), "(10, 0)"); assert_eq!(Natural::from(101u8).sqrt_rem().to_debug_string(), "(10, 1)"); assert_eq!(Natural::from(1000000000u32).sqrt_rem().to_debug_string(), "(31622, 49116)"); assert_eq!(Natural::from(10000000000u64).sqrt_rem().to_debug_string(), "(100000, 0)"); ``` #### type SqrtOutput = Natural #### type RemOutput = Natural ### impl<'a> Square for &'a Natural #### fn square(self) -> Natural Squares a `Natural`, taking it by reference. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).square(), 0); assert_eq!((&Natural::from(123u32)).square(), 15129); ``` #### type Output = Natural ### impl Square for Natural #### fn square(self) -> Natural Squares a `Natural`, taking it by value. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.square(), 0); assert_eq!(Natural::from(123u32).square(), 15129); ``` #### type Output = Natural ### impl SquareAssign for Natural #### fn square_assign(&mut self) Squares a `Natural` in place. $$ x \gets x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::SquareAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.square_assign(); assert_eq!(x, 0); let mut x = Natural::from(123u32); x.square_assign(); assert_eq!(x, 15129); ``` ### impl<'a, 'b> Sub<&'a Natural> for &'b Natural #### fn sub(self, other: &'a Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking both by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) - &Natural::ZERO, 123); assert_eq!(&Natural::from(456u32) - &Natural::from(123u32), 333); assert_eq!( &(Natural::from(10u32).pow(12) * Natural::from(3u32)) - &Natural::from(10u32).pow(12), 2000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `-` operator.### impl<'a> Sub<&'a Natural> for Natural #### fn sub(self, other: &'a Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking the first by value and the second by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) - &Natural::ZERO, 123); assert_eq!(Natural::from(456u32) - &Natural::from(123u32), 333); assert_eq!( Natural::from(10u32).pow(12) * Natural::from(3u32) - &Natural::from(10u32).pow(12), 2000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `-` operator.### impl<'a> Sub<Natural> for &'a Natural #### fn sub(self, other: Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking the first by reference and the second by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) - Natural::ZERO, 123); assert_eq!(&Natural::from(456u32) - Natural::from(123u32), 333); assert_eq!( &(Natural::from(10u32).pow(12) * Natural::from(3u32)) - Natural::from(10u32).pow(12), 2000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `-` operator.### impl Sub<Natural> for Natural #### fn sub(self, other: Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking both by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) - Natural::ZERO, 123); assert_eq!(Natural::from(456u32) - Natural::from(123u32), 333); assert_eq!( Natural::from(10u32).pow(12) * Natural::from(3u32) - Natural::from(10u32).pow(12), 2000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `-` operator.### impl<'a> SubAssign<&'a Natural> for Natural #### fn sub_assign(&mut self, other: &'a Natural) Subtracts a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32).pow(12) * Natural::from(10u32); x -= &Natural::from(10u32).pow(12); x -= &(Natural::from(10u32).pow(12) * Natural::from(2u32)); x -= &(Natural::from(10u32).pow(12) * Natural::from(3u32)); x -= &(Natural::from(10u32).pow(12) * Natural::from(4u32)); assert_eq!(x, 0); ``` ### impl SubAssign<Natural> for Natural #### fn sub_assign(&mut self, other: Natural) Subtracts a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32).pow(12) * Natural::from(10u32); x -= Natural::from(10u32).pow(12); x -= Natural::from(10u32).pow(12) * Natural::from(2u32); x -= Natural::from(10u32).pow(12) * Natural::from(3u32); x -= Natural::from(10u32).pow(12) * Natural::from(4u32); assert_eq!(x, 0); ``` ### impl<'a> SubMul<&'a Natural, Natural> for Natural #### fn sub_mul(self, y: &'a Natural, z: Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first and third by value and the second by reference. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(20u32).sub_mul(&Natural::from(3u32), Natural::from(4u32)), 8); assert_eq!( Natural::from(10u32).pow(12) .sub_mul(&Natural::from(0x10000u32), Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a, 'b, 'c> SubMul<&'a Natural, &'b Natural> for &'c Natural #### fn sub_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking all three by reference. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(20u32)).sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 8); assert_eq!( (&Natural::from(10u32).pow(12)) .sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a, 'b> SubMul<&'a Natural, &'b Natural> for Natural #### fn sub_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first by value and the second and third by reference. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(20u32).sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 8); assert_eq!( Natural::from(10u32).pow(12) .sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a> SubMul<Natural, &'a Natural> for Natural #### fn sub_mul(self, y: Natural, z: &'a Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first two by value and the third by reference. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(20u32).sub_mul(Natural::from(3u32), &Natural::from(4u32)), 8); assert_eq!( Natural::from(10u32).pow(12) .sub_mul(Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl SubMul<Natural, Natural> for Natural #### fn sub_mul(self, y: Natural, z: Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking all three by value. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(20u32).sub_mul(Natural::from(3u32), Natural::from(4u32)), 8); assert_eq!( Natural::from(10u32).pow(12) .sub_mul(Natural::from(0x10000u32), Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a> SubMulAssign<&'a Natural, Natural> for Natural #### fn sub_mul_assign(&mut self, y: &'a Natural, z: Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking the first `Natural` on the right-hand side by reference and the second by value. $$ x \gets x - yz. 
$$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.sub_mul_assign(&Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32).pow(12); x.sub_mul_assign(&Natural::from(0x10000u32), Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a, 'b> SubMulAssign<&'a Natural, &'b Natural> for Natural #### fn sub_mul_assign(&mut self, y: &'a Natural, z: &'b Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking both `Natural`s on the right-hand side by reference. $$ x \gets x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.sub_mul_assign(&Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32).pow(12); x.sub_mul_assign(&Natural::from(0x10000u32), &Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a> SubMulAssign<Natural, &'a Natural> for Natural #### fn sub_mul_assign(&mut self, y: Natural, z: &'a Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking the first `Natural` on the right-hand side by value and the second by reference. $$ x \gets x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.sub_mul_assign(Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32).pow(12); x.sub_mul_assign(Natural::from(0x10000u32), &Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl SubMulAssign<Natural, Natural> for Natural #### fn sub_mul_assign(&mut self, y: Natural, z: Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking both `Natural`s on the right-hand side by value. $$ x \gets x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.sub_mul_assign(Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32).pow(12); x.sub_mul_assign(Natural::from(0x10000u32), Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl Subfactorial for Natural #### fn subfactorial(n: u64) -> Natural Computes the subfactorial of a number. The subfactorial of $n$ counts the number of derangements of a set of size $n$; a derangement is a permutation with no fixed points. $$ f(n) = \ !n = \lfloor n!/e \rfloor. $$ $!n = O(n!) = O(\sqrt{n}(n/e)^n)$. ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ ##### Examples ``` use malachite_base::num::arithmetic::traits::Subfactorial; use malachite_nz::natural::Natural; assert_eq!(Natural::subfactorial(0), 1); assert_eq!(Natural::subfactorial(1), 0); assert_eq!(Natural::subfactorial(2), 1); assert_eq!(Natural::subfactorial(3), 2); assert_eq!(Natural::subfactorial(4), 9); assert_eq!(Natural::subfactorial(5), 44); assert_eq!( Natural::subfactorial(100).to_string(), "3433279598416380476519597752677614203236578380537578498354340028268518079332763243279\ 1396429850988990237345920155783984828001486412574060553756854137069878601" ); ``` ### impl<'a> Sum<&'a Natural> for Natural #### fn sum<I>(xs: I) -> Naturalwhere I: Iterator<Item = &'a Natural>, Adds up all the `Natural`s in an iterator of `Natural` references. $$ f((x_i)_ {i=0}^{n-1}) = \sum_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Natural::sum(xs.map(Natural::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use std::iter::Sum; assert_eq!(Natural::sum(vec_from_str::<Natural>("[2, 3, 5, 7]").unwrap().iter()), 17); ``` ### impl Sum<Natural> for Natural #### fn sum<I>(xs: I) -> Naturalwhere I: Iterator<Item = Natural>, Adds up all the `Natural`s in an iterator. $$ f((x_i)_ {i=0}^{n-1}) = \sum_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Natural::sum(xs.map(Natural::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use std::iter::Sum; assert_eq!(Natural::sum(vec_from_str::<Natural>("[2, 3, 5, 7]").unwrap().into_iter()), 17); ``` ### impl ToSci for Natural #### fn fmt_sci_valid(&self, options: ToSciOptions) -> bool Determines whether a `Natural` can be converted to a string using `to_sci` and a particular set of options. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; let mut options = ToSciOptions::default(); assert!(Natural::from(123u8).fmt_sci_valid(options)); assert!(Natural::from(u128::MAX).fmt_sci_valid(options)); // u128::MAX has more than 16 significant digits options.set_rounding_mode(RoundingMode::Exact); assert!(!Natural::from(u128::MAX).fmt_sci_valid(options)); options.set_precision(50); assert!(Natural::from(u128::MAX).fmt_sci_valid(options)); ``` #### fn fmt_sci( &self, f: &mut Formatter<'_>, options: ToSciOptions ) -> Result<(), ErrorConverts a `Natural` to a string using a specified base, possibly formatting the number using scientific notation. See `ToSciOptions` for details on the available options. Note that setting `neg_exp_threshold` has no effect, since there is never a need to use negative exponents when representing a `Natural`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `options.rounding_mode` is `Exact`, but the size options are such that the input must be rounded. ##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; assert_eq!(format!("{}", Natural::from(u128::MAX).to_sci()), "3.402823669209385e38"); assert_eq!(Natural::from(u128::MAX).to_sci().to_string(), "3.402823669209385e38"); let n = Natural::from(123456u32); let mut options = ToSciOptions::default(); assert_eq!(format!("{}", n.to_sci_with_options(options)), "123456"); assert_eq!(n.to_sci_with_options(options).to_string(), "123456"); options.set_precision(3); assert_eq!(n.to_sci_with_options(options).to_string(), "1.23e5"); options.set_rounding_mode(RoundingMode::Ceiling); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24e5"); options.set_e_uppercase(); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24E5"); options.set_force_exponent_plus_sign(true); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24E+5"); options = ToSciOptions::default(); options.set_base(36); assert_eq!(n.to_sci_with_options(options).to_string(), "2n9c"); options.set_uppercase(); assert_eq!(n.to_sci_with_options(options).to_string(), "2N9C"); options.set_base(2); options.set_precision(10); assert_eq!(n.to_sci_with_options(options).to_string(), "1.1110001e16"); options.set_include_trailing_zeros(true); assert_eq!(n.to_sci_with_options(options).to_string(), "1.111000100e16"); ``` #### fn to_sci_with_options(&self, options: ToSciOptions) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation.#### fn to_sci(&self) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation, using the default `ToSciOptions`.### impl ToStringBase for Natural #### fn to_string_base(&self, base: u8) -> String Converts a `Natural` to a `String` using a specified base. Digits from 0 to 9 become `char`s from `'0'` to `'9'`. Digits from 10 to 35 become the lowercase `char`s `'a'` to `'z'`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Panics Panics if `base` is less than 2 or greater than 36. ##### Examples ``` use malachite_base::num::conversion::traits::ToStringBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(1000u32).to_string_base(2), "1111101000"); assert_eq!(Natural::from(1000u32).to_string_base(10), "1000"); assert_eq!(Natural::from(1000u32).to_string_base(36), "rs"); ``` #### fn to_string_base_upper(&self, base: u8) -> String Converts a `Natural` to a `String` using a specified base. Digits from 0 to 9 become `char`s from `'0'` to `'9'`. Digits from 10 to 35 become the uppercase `char`s `'A'` to `'Z'`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2 or greater than 36. ##### Examples ``` use malachite_base::num::conversion::traits::ToStringBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(1000u32).to_string_base_upper(2), "1111101000"); assert_eq!(Natural::from(1000u32).to_string_base_upper(10), "1000"); assert_eq!(Natural::from(1000u32).to_string_base_upper(36), "RS"); ``` ### impl<'a> TryFrom<&'a Integer> for Natural #### fn try_from( value: &'a Integer ) -> Result<Natural, <Natural as TryFrom<&'a Integer>>::ErrorConverts an `Integer` to a `Natural`, taking the `Natural` by reference. If the `Integer` is negative, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::try_from(&Integer::from(123)).to_debug_string(), "Ok(123)"); assert_eq!( Natural::try_from(&Integer::from(-123)).to_debug_string(), "Err(NaturalFromIntegerError)" ); assert_eq!( Natural::try_from(&Integer::from(10u32).pow(12)).to_debug_string(), "Ok(1000000000000)" ); assert_eq!( Natural::try_from(&(-Integer::from(10u32).pow(12))).to_debug_string(), "Err(NaturalFromIntegerError)" ); ``` #### type Error = NaturalFromIntegerError The type returned in the event of a conversion error.### impl<'a> TryFrom<&'a Rational> for Natural #### fn try_from( x: &Rational ) -> Result<Natural, <Natural as TryFrom<&'a Rational>>::ErrorConverts a `Rational` to a `Natural`, taking the `Rational` by reference. If the `Rational` is negative or not an integer, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::conversion::natural_from_rational::NaturalFromRationalError; use malachite_q::Rational; assert_eq!(Natural::try_from(&Rational::from(123)).unwrap(), 123); assert_eq!(Natural::try_from(&Rational::from(-123)), Err(NaturalFromRationalError)); assert_eq!( Natural::try_from(&Rational::from_signeds(22, 7)), Err(NaturalFromRationalError) ); ``` #### type Error = NaturalFromRationalError The type returned in the event of a conversion error.### impl TryFrom<Integer> for Natural #### fn try_from( value: Integer ) -> Result<Natural, <Natural as TryFrom<Integer>>::ErrorConverts an `Integer` to a `Natural`, taking the `Natural` by value. If the `Integer` is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::try_from(Integer::from(123)).to_debug_string(), "Ok(123)"); assert_eq!( Natural::try_from(Integer::from(-123)).to_debug_string(), "Err(NaturalFromIntegerError)" ); assert_eq!( Natural::try_from(Integer::from(10u32).pow(12)).to_debug_string(), "Ok(1000000000000)" ); assert_eq!( Natural::try_from(-Integer::from(10u32).pow(12)).to_debug_string(), "Err(NaturalFromIntegerError)" ); ``` #### type Error = NaturalFromIntegerError The type returned in the event of a conversion error.### impl TryFrom<Rational> for Natural #### fn try_from( x: Rational ) -> Result<Natural, <Natural as TryFrom<Rational>>::ErrorConverts a `Rational` to a `Natural`, taking the `Rational` by value. If the `Rational` is negative or not an integer, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::conversion::natural_from_rational::NaturalFromRationalError; use malachite_q::Rational; assert_eq!(Natural::try_from(Rational::from(123)).unwrap(), 123); assert_eq!(Natural::try_from(Rational::from(-123)), Err(NaturalFromRationalError)); assert_eq!( Natural::try_from(Rational::from_signeds(22, 7)), Err(NaturalFromRationalError) ); ``` #### type Error = NaturalFromRationalError The type returned in the event of a conversion error.### impl TryFrom<SerdeNatural> for Natural #### type Error = String The type returned in the event of a conversion error.#### fn try_from(s: SerdeNatural) -> Result<Natural, StringPerforms the conversion.### impl TryFrom<f32> for Natural #### fn try_from(value: f32) -> Result<Natural, <Natural as TryFrom<f32>>::ErrorConverts a floating-point value to a `Natural`. If the input isn’t exactly equal to some `Natural`, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Examples See here. #### type Error = UnsignedFromFloatError The type returned in the event of a conversion error.### impl TryFrom<f64> for Natural #### fn try_from(value: f64) -> Result<Natural, <Natural as TryFrom<f64>>::ErrorConverts a floating-point value to a `Natural`. If the input isn’t exactly equal to some `Natural`, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Examples See here. #### type Error = UnsignedFromFloatError The type returned in the event of a conversion error.### impl TryFrom<i128> for Natural #### fn try_from(i: i128) -> Result<Natural, <Natural as TryFrom<i128>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<i16> for Natural #### fn try_from(i: i16) -> Result<Natural, <Natural as TryFrom<i16>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
#### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<i32> for Natural #### fn try_from(i: i32) -> Result<Natural, <Natural as TryFrom<i32>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<i64> for Natural #### fn try_from(i: i64) -> Result<Natural, <Natural as TryFrom<i64>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<i8> for Natural #### fn try_from(i: i8) -> Result<Natural, <Natural as TryFrom<i8>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<isize> for Natural #### fn try_from(i: isize) -> Result<Natural, <Natural as TryFrom<isize>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl Two for Natural The constant 2. #### const TWO: Natural = _ ### impl UpperHex for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Natural` to a hexadecimal `String` using uppercase characters. Using the `#` format flag prepends `"0x"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToUpperHexString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_upper_hex_string(), "0"); assert_eq!(Natural::from(123u32).to_upper_hex_string(), "7B"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_upper_hex_string(), "E8D4A51000" ); assert_eq!(format!("{:07X}", Natural::from(123u32)), "000007B"); assert_eq!(format!("{:#X}", Natural::ZERO), "0x0"); assert_eq!(format!("{:#X}", Natural::from(123u32)), "0x7B"); assert_eq!( format!("{:#X}", Natural::from_str("1000000000000").unwrap()), "0xE8D4A51000" ); assert_eq!(format!("{:#07X}", Natural::from(123u32)), "0x0007B"); ``` ### impl<'a> WrappingFrom<&'a Natural> for i128 #### fn wrapping_from(value: &Natural) -> i128 Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for i16 #### fn wrapping_from(value: &Natural) -> i16 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
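The example links for the signed `TryFrom` and `WrappingFrom` conversions above also do not resolve here; the following is a small sketch of both, assuming `WrappingFrom` lives under `malachite_base::num::conversion::traits` like the other conversion traits imported in this documentation. The error value is only checked with `is_err()` so that nothing is assumed about its `Debug` output.

```
use malachite_base::num::conversion::traits::WrappingFrom;
use malachite_nz::natural::Natural;

// Converting a signed primitive succeeds only when it is non-negative.
assert_eq!(Natural::try_from(123i64).unwrap(), 123);
assert!(Natural::try_from(-123i64).is_err());

// The wrapping conversions from a Natural never fail; out-of-range values wrap.
assert_eq!(i16::wrapping_from(&Natural::from(123u32)), 123);
assert_eq!(i16::wrapping_from(&Natural::from(65535u32)), -1);
```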
### impl<'a> WrappingFrom<&'a Natural> for i32 #### fn wrapping_from(value: &Natural) -> i32 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for i64 #### fn wrapping_from(value: &Natural) -> i64 Converts a `Natural` to a `SignedLimb` (the signed type whose width is the same as a limb’s), wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for i8 #### fn wrapping_from(value: &Natural) -> i8 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for isize #### fn wrapping_from(value: &Natural) -> isize Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u128 #### fn wrapping_from(value: &Natural) -> u128 Converts a `Natural` to a `usize` or a value of an unsigned primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u16 #### fn wrapping_from(value: &Natural) -> u16 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u32 #### fn wrapping_from(value: &Natural) -> u32 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u64 #### fn wrapping_from(value: &Natural) -> u64 Converts a `Natural` to a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u8 #### fn wrapping_from(value: &Natural) -> u8 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for usize #### fn wrapping_from(value: &Natural) -> usize Converts a `Natural` to a `usize` or a value of an unsigned primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl Zero for Natural The constant 0. 
#### const ZERO: Natural = _ ### impl Eq for Natural ### impl StructuralEq for Natural ### impl StructuralPartialEq for Natural Auto Trait Implementations --- ### impl RefUnwindSafe for Natural ### impl Send for Natural ### impl Sync for Natural ### impl Unpin for Natural ### impl UnwindSafe for Natural Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. U: TryFrom<T>, #### fn exact_from(value: T) -> U ### impl<T, U> ExactInto<U> for Twhere U: ExactFrom<T>, #### fn exact_into(self) -> U ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> OverflowingInto<U> for Twhere U: OverflowingFrom<T>, #### fn overflowing_into(self) -> (U, bool) ### impl<T, U> RoundingInto<U> for Twhere U: RoundingFrom<T>, #### fn rounding_into(self, rm: RoundingMode) -> (U, Ordering) ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> SaturatingInto<U> for Twhere U: SaturatingFrom<T>, #### fn saturating_into(self) -> U ### impl<T> ToBinaryString for Twhere T: Binary, #### fn to_binary_string(&self) -> String Returns the `String` produced by `T`s `Binary` implementation. ##### Examples ``` use malachite_base::strings::ToBinaryString; assert_eq!(5u64.to_binary_string(), "101"); assert_eq!((-100i16).to_binary_string(), "1111111110011100"); ``` ### impl<T> ToDebugString for Twhere T: Debug, #### fn to_debug_string(&self) -> String Returns the `String` produced by `T`s `Debug` implementation. ##### Examples ``` use malachite_base::strings::ToDebugString; assert_eq!([1, 2, 3].to_debug_string(), "[1, 2, 3]"); assert_eq!( [vec![2, 3], vec![], vec![4]].to_debug_string(), "[[2, 3], [], [4]]" ); assert_eq!(Some(5).to_debug_string(), "Some(5)"); ``` ### impl<T> ToLowerHexString for Twhere T: LowerHex, #### fn to_lower_hex_string(&self) -> String Returns the `String` produced by `T`s `LowerHex` implementation. ##### Examples ``` use malachite_base::strings::ToLowerHexString; assert_eq!(50u64.to_lower_hex_string(), "32"); assert_eq!((-100i16).to_lower_hex_string(), "ff9c"); ``` ### impl<T> ToOctalString for Twhere T: Octal, #### fn to_octal_string(&self) -> String Returns the `String` produced by `T`s `Octal` implementation. ##### Examples ``` use malachite_base::strings::ToOctalString; assert_eq!(50u64.to_octal_string(), "62"); assert_eq!((-100i16).to_octal_string(), "177634"); ``` ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. T: UpperHex, #### fn to_upper_hex_string(&self) -> String Returns the `String` produced by `T`s `UpperHex` implementation. 
##### Examples ``` use malachite_base::strings::ToUpperHexString; assert_eq!(50u64.to_upper_hex_string(), "32"); assert_eq!((-100i16).to_upper_hex_string(), "FF9C"); ``` ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T, U> WrappingInto<U> for Twhere U: WrappingFrom<T>, #### fn wrapping_into(self) -> U
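The blanket conversion traits listed above (`ExactInto`, `OverflowingInto`, `SaturatingInto`, `WrappingInto`) simply defer to the corresponding `*From` impls on the target types. A minimal sketch of how those behave for an out-of-range `Natural`, assuming the usual `malachite_base::num::conversion::traits` paths and the primitive-from-`Natural` impls:

```
use malachite_base::num::conversion::traits::{OverflowingFrom, SaturatingFrom, WrappingFrom};
use malachite_nz::natural::Natural;

let n = Natural::from(1000u32);
// Saturating clamps to the target type's range.
assert_eq!(u8::saturating_from(&n), 255);
// Wrapping reduces modulo 2^8.
assert_eq!(u8::wrapping_from(&n), 232);
// Overflowing returns the wrapped value together with an overflow flag.
assert_eq!(u8::overflowing_from(&n), (232, true));
```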
Module
Module malachite::nevers
===
`Never`, a type that cannot be instantiated.
Structs
---
* NeverError
Enums
---
* Never: `Never` is a type that cannot be instantiated.
Functions
---
* nevers: Generates all (none) of the `Never`s.
Enum malachite::nevers::Never
===
```
pub enum Never {}
```
`Never` is a type that cannot be instantiated. This is a bottom type.
Examples
---
```
use malachite_base::nevers::Never;

let x: Option<Never> = None;
```
Trait Implementations
---
### impl Clone for Never
#### fn clone(&self) -> Never
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Never
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.
### impl Display for Never
#### fn fmt(&self, _f: &mut Formatter<'_>) -> Result<(), Error>
Would convert a `Never` to a `String`.
### impl FromStr for Never
#### fn from_str(_: &str) -> Result<Never, NeverError>
Would convert a `String` to a `Never`. Since a `Never` can never be instantiated, `from_str` never succeeds.
##### Worst-case complexity
Constant time and additional memory.
##### Examples
```
use malachite_base::nevers::{Never, NeverError};
use std::str::FromStr;

assert_eq!(Never::from_str("abc"), Err(NeverError));
```
#### type Err = NeverError
The associated error which can be returned from parsing.
### impl Hash for Never
#### fn hash<__H>(&self, state: &mut __H) where __H: Hasher,
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized,
Feeds a slice of this type into the given `Hasher`.
### impl Ord for Never
#### fn cmp(&self, other: &Never) -> Ordering
This method returns an `Ordering` between `self` and `other`.
#### fn max(self, other: Self) -> Self where Self: Sized,
Compares and returns the maximum of two values.
#### fn min(self, other: Self) -> Self where Self: Sized,
Compares and returns the minimum of two values.
#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>,
Restricts a value to a certain interval.
### impl PartialEq<Never> for Never
#### fn eq(&self, other: &Never) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl PartialOrd<Never> for Never
#### fn partial_cmp(&self, other: &Never) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
### impl Eq for Never
### impl StructuralEq for Never
### impl StructuralPartialEq for Never
Auto Trait Implementations
---
### impl RefUnwindSafe for Never
### impl Send for Never
### impl Sync for Never
### impl Unpin for Never
### impl UnwindSafe for Never
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T, U> ExactFrom<T> for U where U: TryFrom<T>,
#### fn exact_from(value: T) -> U
### impl<T, U> ExactInto<U> for T where U: ExactFrom<T>,
#### fn exact_into(self) -> U
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> OverflowingInto<U> for T where U: OverflowingFrom<T>,
#### fn overflowing_into(self) -> (U, bool)
### impl<T, U> RoundingInto<U> for T where U: RoundingFrom<T>,
#### fn rounding_into(self, rm: RoundingMode) -> (U, Ordering)
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T, U> SaturatingInto<U> for T where U: SaturatingFrom<T>,
#### fn saturating_into(self) -> U
### impl<T> ToDebugString for T where T: Debug,
#### fn to_debug_string(&self) -> String
Returns the `String` produced by `T`'s `Debug` implementation.
##### Examples
```
use malachite_base::strings::ToDebugString;

assert_eq!([1, 2, 3].to_debug_string(), "[1, 2, 3]");
assert_eq!(
    [vec![2, 3], vec![], vec![4]].to_debug_string(),
    "[[2, 3], [], [4]]"
);
assert_eq!(Some(5).to_debug_string(), "Some(5)");
```
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T, U> WrappingInto<U> for T where U: WrappingFrom<T>,
#### fn wrapping_into(self) -> U
Module malachite::num
===
Functions for working with primitive integers and floats.
Modules
---
* arithmetic: Traits for arithmetic.
* basic: Traits for primitive integers or floats and some of their basic functionality.
* comparison: Traits for comparing the absolute values of numbers for equality or order.
* conversion: Traits for converting to and from numbers, converting to and from strings, and extracting digits.
* exhaustive: Iterators that generate numbers without repetition.
* factorization: Traits for generating primes, primality testing, and factorization (TODO!)
* float: `NiceFloat`, a wrapper around primitive floats.
* iterators: Iterators related to numbers.
* logic: Traits for logic and bit manipulation.
* random: Iterators that generate numbers randomly.
Module malachite::options
===
Functions for working with `Option`s.
Modules
---
* exhaustive: Iterators that generate `Option`s without repetition.
* random: Iterators that generate `Option`s randomly.
Functions
---
* option_from_str: Converts a string to an `Option<T>`, where `T` implements `FromStr`.
* option_from_str_custom: Converts a string to an `Option<T>`, given a function to parse a string into a `T`.
Module malachite::orderings
===
Functions for working with `Ordering`s.
Modules
---
* exhaustive: Iterators that generate `Ordering`s without repetition.
* random: Iterators that generate `Ordering`s randomly.
Functions
---
* ordering_from_str: Converts a string to an `Ordering`.
Module malachite::random
===
Functions for generating random values.
Structs
---
* Seed: A type representing a random seed.
Constants
---
* EXAMPLE_SEED: A random seed used for reproducible testing.
Module malachite::rational_sequences
===
`RationalSequence`, a type representing a sequence that is finite or eventually repeating, just like the digits of a rational number.
Modules
---
* access: Functions for getting and setting elements in a `RationalSequence`.
* cmp: Functions for comparing `RationalSequence`s.
* conversion: Functions for converting a `RationalSequence` to and from a `Vec` or a slice.
* exhaustive: Functions for generating all `RationalSequence`s over a set of elements.
* random: Functions for generating random `RationalSequence`s from a set of elements.
* to_string: Functions for displaying a `RationalSequence`.
Structs
---
* RationalSequence: A `RationalSequence` is a sequence that is either finite or eventually repeating, just like the digits of a rational number.
Struct malachite::rational_sequences::RationalSequence
===
```
pub struct RationalSequence<T>
where
    T: Eq,
{ /* private fields */ }
```
A `RationalSequence` is a sequence that is either finite or eventually repeating, just like the digits of a rational number. In testing, the set of rational sequences may be used as a proxy for the set of all sequences, which is too large to work with.
Implementations
---
### impl<T> RationalSequence<T> where T: Eq,
#### pub fn is_empty(&self) -> bool
Returns whether this `RationalSequence` is empty.
##### Worst-case complexity
Constant time and additional memory.
##### Examples
```
use malachite_base::rational_sequences::RationalSequence;

assert_eq!(RationalSequence::<u8>::from_slice(&[]).is_empty(), true);
assert_eq!(RationalSequence::<u8>::from_slice(&[1, 2, 3]).is_empty(), false);
assert_eq!(RationalSequence::<u8>::from_slices(&[], &[3, 4]).is_empty(), false);
assert_eq!(RationalSequence::<u8>::from_slices(&[1, 2], &[3, 4]).is_empty(), false);
```
#### pub fn is_finite(&self) -> bool
Returns whether this `RationalSequence` is finite (has no repeating part).
##### Worst-case complexity
Constant time and additional memory.
##### Examples
```
use malachite_base::rational_sequences::RationalSequence;

assert_eq!(RationalSequence::<u8>::from_slice(&[]).is_finite(), true);
assert_eq!(RationalSequence::<u8>::from_slice(&[1, 2, 3]).is_finite(), true);
assert_eq!(RationalSequence::<u8>::from_slices(&[], &[3, 4]).is_finite(), false);
assert_eq!(RationalSequence::<u8>::from_slices(&[1, 2], &[3, 4]).is_finite(), false);
```
#### pub fn len(&self) -> Option<usize>
Returns the length of this `RationalSequence`. If the sequence is infinite, `None` is returned. For a measure of length that always exists, try `component_len`.
##### Worst-case complexity
Constant time and additional memory.
##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::<u8>::from_slice(&[]).len(), Some(0)); assert_eq!(RationalSequence::<u8>::from_slice(&[1, 2, 3]).len(), Some(3)); assert_eq!(RationalSequence::<u8>::from_slices(&[], &[3, 4]).len(), None); assert_eq!(RationalSequence::<u8>::from_slices(&[1, 2], &[3, 4]).len(), None); ``` #### pub fn component_len(&self) -> usize Returns the sum of the lengths of the non-repeating and repeating parts of this `RationalSequence`. This is often a more useful way of measuring the complexity of a sequence than `len`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::<u8>::from_slice(&[]).component_len(), 0); assert_eq!(RationalSequence::<u8>::from_slice(&[1, 2, 3]).component_len(), 3); assert_eq!(RationalSequence::<u8>::from_slices(&[], &[3, 4]).component_len(), 2); assert_eq!(RationalSequence::<u8>::from_slices(&[1, 2], &[3, 4]).component_len(), 4); ``` #### pub fn iter(&self) -> Chain<Iter<'_, T>, Cycle<Iter<'_, T>>Returns an iterator of references to the elements of this sequence. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use itertools::Itertools; use malachite_base::rational_sequences::RationalSequence; let empty: &[u8] = &[]; assert_eq!(RationalSequence::<u8>::from_slice(empty).iter().cloned().collect_vec(), empty); assert_eq!( RationalSequence::<u8>::from_slice(&[1, 2, 3]).iter().cloned().collect_vec(), &[1, 2, 3] ); assert_eq!( RationalSequence::<u8>::from_slices(&[], &[3, 4]).iter().cloned().take(10) .collect_vec(), &[3, 4, 3, 4, 3, 4, 3, 4, 3, 4] ); assert_eq!( RationalSequence::<u8>::from_slices(&[1, 2], &[3, 4]).iter().cloned().take(10) .collect_vec(), &[1, 2, 3, 4, 3, 4, 3, 4, 3, 4] ); ``` ### impl<T> RationalSequence<T>where T: Eq, #### pub fn get(&self, i: usize) -> Option<&TGets a reference to an element of a `RationalSequence` at an index. If the index is greater than or equal to the length of the sequence, `None` is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::from_slices(&[1, 2], &[3, 4]).get(1), Some(&2)); assert_eq!(RationalSequence::from_slices(&[1, 2], &[3, 4]).get(10), Some(&3)); ``` ### impl<T> RationalSequence<T>where T: Clone + Eq, #### pub fn mutate<F, U>(&mut self, i: usize, f: F) -> Uwhere F: FnOnce(&mut T) -> U, Mutates an element of a `RationalSequence` at an index using a provided closure, and then returns whatever the closure returns. If the index is greater than or equal to the length of the sequence, this function panics. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Panics Panics if `index` is greater than or equal to the length of this sequence. 
##### Examples ``` use malachite_base::rational_sequences::RationalSequence; let mut xs = RationalSequence::from_slices(&[1, 2], &[3, 4]); assert_eq!(xs.mutate(1, |x| { *x = 100; 25 }), 25); assert_eq!(xs, RationalSequence::from_slices(&[1, 100], &[3, 4])); let mut xs = RationalSequence::from_slices(&[1, 2], &[3, 4]); assert_eq!(xs.mutate(6, |x| { *x = 100; 25 }), 25); assert_eq!(xs, RationalSequence::from_slices(&[1, 2, 3, 4, 3, 4, 100], &[4, 3])); ``` ### impl<T> RationalSequence<T>where T: Eq, #### pub fn from_vec(non_repeating: Vec<T, Global>) -> RationalSequence<TConverts a `Vec` to a finite `RationalSequence`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::<u8>::from_vec(vec![]).to_string(), "[]"); assert_eq!(RationalSequence::<u8>::from_vec(vec![1, 2]).to_string(), "[1, 2]"); ``` #### pub fn from_vecs( non_repeating: Vec<T, Global>, repeating: Vec<T, Global> ) -> RationalSequence<TConverts two `Vec`s to a finite `RationalSequence`. The first `Vec` is the nonrepeating part and the second is the repeating part. ##### Worst-case complexity $T(n, m) = O(n + m^{1+\epsilon})$ for all $\epsilon > 0$ $M(n, m) = O(1)$ where $T$ is time, $M$ is additional memory, $n$ is `non_repeating.len()`, and $m$ is `repeating.len()`. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::<u8>::from_vecs(vec![], vec![]).to_string(), "[]"); assert_eq!(RationalSequence::<u8>::from_vecs(vec![], vec![1, 2]).to_string(), "[[1, 2]]"); assert_eq!(RationalSequence::<u8>::from_vecs(vec![1, 2], vec![]).to_string(), "[1, 2]"); assert_eq!( RationalSequence::<u8>::from_vecs(vec![1, 2], vec![3, 4]).to_string(), "[1, 2, [3, 4]]" ); assert_eq!( RationalSequence::<u8>::from_vecs(vec![1, 2, 3], vec![4, 3]).to_string(), "[1, 2, [3, 4]]" ); ``` #### pub fn into_vecs(self) -> (Vec<T, Global>, Vec<T, Global>) Converts a `RationalSequence` to a pair of `Vec`s containing the non-repeating and repeating parts, taking the `RationalSequence` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!( RationalSequence::from_slices(&[1, 2], &[3, 4]).into_vecs(), (vec![1, 2], vec![3, 4]) ); ``` #### pub fn slices_ref(&self) -> (&[T], &[T]) Returns references to the non-repeating and repeating parts of a `RationalSequence`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!( RationalSequence::from_slices(&[1u8, 2], &[3, 4]).slices_ref(), (&[1u8, 2][..], &[3u8, 4][..]) ); ``` ### impl<T> RationalSequence<T>where T: Clone + Eq, #### pub fn from_slice(non_repeating: &[T]) -> RationalSequence<TConverts a slice to a finite `RationalSequence`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::<u8>::from_slice(&[]).to_string(), "[]"); assert_eq!(RationalSequence::<u8>::from_slice(&[1, 2]).to_string(), "[1, 2]"); ``` #### pub fn from_slices(non_repeating: &[T], repeating: &[T]) -> RationalSequence<TConverts two slices to a finite `RationalSequence`. The first slice is the nonrepeating part and the second is the repeating part. 
##### Worst-case complexity $T(n, m) = O(n + m^{1+\epsilon})$ for all $\epsilon > 0$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `non_repeating.len()`, and $m$ is `repeating.len()`. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::<u8>::from_slices(&[], &[]).to_string(), "[]"); assert_eq!(RationalSequence::<u8>::from_slices(&[], &[1, 2]).to_string(), "[[1, 2]]"); assert_eq!(RationalSequence::<u8>::from_slices(&[1, 2], &[]).to_string(), "[1, 2]"); assert_eq!( RationalSequence::<u8>::from_slices(&[1, 2], &[3, 4]).to_string(), "[1, 2, [3, 4]]" ); assert_eq!( RationalSequence::<u8>::from_slices(&[1, 2, 3], &[4, 3]).to_string(), "[1, 2, [3, 4]]" ); ``` #### pub fn to_vecs(&self) -> (Vec<T, Global>, Vec<T, Global>) Converts a `RationalSequence` to a pair of `Vec`s containing the non-repeating and repeating parts, taking the `RationalSequence` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.component_len()`. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!( RationalSequence::from_slices(&[1, 2], &[3, 4]).to_vecs(), (vec![1, 2], vec![3, 4]) ); ``` Trait Implementations --- ### impl<T> Clone for RationalSequence<T>where T: Clone + Eq, #### fn clone(&self) -> RationalSequence<TReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. T: Display + Eq, #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `RationalSequence` to a `String`. This is the same implementation as for `Display`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.component_len()`. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; use malachite_base::strings::ToDebugString; assert_eq!(RationalSequence::<u8>::from_vecs(vec![], vec![]).to_debug_string(), "[]"); assert_eq!( RationalSequence::<u8>::from_vecs(vec![], vec![1, 2]).to_debug_string(), "[[1, 2]]" ); assert_eq!( RationalSequence::<u8>::from_vecs(vec![1, 2], vec![]).to_debug_string(), "[1, 2]" ); assert_eq!( RationalSequence::<u8>::from_vecs(vec![1, 2], vec![3, 4]).to_string(), "[1, 2, [3, 4]]" ); ``` ### impl<T> Default for RationalSequence<T>where T: Default + Eq, #### fn default() -> RationalSequence<TReturns the “default value” for a type. T: Display + Eq, #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `RationalSequence` to a `String`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.component_len()`. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::<u8>::from_vecs(vec![], vec![]).to_string(), "[]"); assert_eq!(RationalSequence::<u8>::from_vecs(vec![], vec![1, 2]).to_string(), "[[1, 2]]"); assert_eq!(RationalSequence::<u8>::from_vecs(vec![1, 2], vec![]).to_string(), "[1, 2]"); assert_eq!( RationalSequence::<u8>::from_vecs(vec![1, 2], vec![3, 4]).to_string(), "[1, 2, [3, 4]]" ); ``` ### impl<T> Hash for RationalSequence<T>where T: Hash + Eq, #### fn hash<__H>(&self, state: &mut __H)where __H: Hasher, Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. 
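Before the remaining trait implementations, here is a small, hedged end-to-end sketch that ties together the constructors and accessors documented above. It uses only items described on this page; the choice of the digits of 1/7 is purely illustrative and not taken from the documentation.
```
use malachite_base::rational_sequences::RationalSequence;

fn main() {
    // The decimal digits of 1/7 = 0.(142857): empty non-repeating part,
    // repeating part [1, 4, 2, 8, 5, 7].
    let digits = RationalSequence::<u8>::from_slices(&[], &[1, 4, 2, 8, 5, 7]);

    // Display wraps the repeating part in an extra pair of brackets.
    assert_eq!(digits.to_string(), "[[1, 4, 2, 8, 5, 7]]");

    // The sequence is infinite, so `len` is `None`, but `component_len` is 6.
    assert_eq!(digits.len(), None);
    assert_eq!(digits.component_len(), 6);

    // Indexing past the non-repeating part wraps around the repeating part.
    assert_eq!(digits.get(6), Some(&1));

    // `iter` cycles through the repeating part forever.
    let first_eight: Vec<u8> = digits.iter().cloned().take(8).collect();
    assert_eq!(first_eight, vec![1, 4, 2, 8, 5, 7, 1, 4]);
}
```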
T: Eq, #### fn index(&self, i: usize) -> &T Gets a reference to an element of a `RationalSequence` at an index. If the index is greater than or equal to the length of the sequence, this function panics. ##### Worst-case complexity Constant time and additional memory. ##### Panics Panics if `index` is greater than or equal to the length of this sequence. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert_eq!(RationalSequence::from_slices(&[1, 2], &[3, 4])[1], 2); assert_eq!(RationalSequence::from_slices(&[1, 2], &[3, 4])[10], 3); ``` #### type Output = T The returned type after indexing.### impl<T> Ord for RationalSequence<T>where T: Eq + Ord, #### fn cmp(&self, other: &RationalSequence<T>) -> Ordering Compares a `RationalSequence` to another `RationalSequence`. The comparison is made lexicographically with respect to the element type’s ordering. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.component_len()`. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; assert!( RationalSequence::from_slice(&[1, 2]) < RationalSequence::from_slices(&[1, 2], &[1]) ); assert!( RationalSequence::from_slice(&[1, 2, 3]) < RationalSequence::from_slices(&[1, 2], &[3, 4]) ); ``` 1.21.0 · source#### fn max(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere Self: Sized + PartialOrd<Self>, Restrict a value to a certain interval. T: PartialEq<T> + Eq, #### fn eq(&self, other: &RationalSequence<T>) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl<T> PartialOrd<RationalSequence<T>> for RationalSequence<T>where T: Eq + Ord, #### fn partial_cmp(&self, other: &RationalSequence<T>) -> Option<OrderingCompares a `RationalSequence` to another `RationalSequence`. See here for more information. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. T: Eq, ### impl<T> StructuralEq for RationalSequence<T>where T: Eq, ### impl<T> StructuralPartialEq for RationalSequence<T>where T: Eq, Auto Trait Implementations --- ### impl<T> RefUnwindSafe for RationalSequence<T>where T: RefUnwindSafe, ### impl<T> Send for RationalSequence<T>where T: Send, ### impl<T> Sync for RationalSequence<T>where T: Sync, ### impl<T> Unpin for RationalSequence<T>where T: Unpin, ### impl<T> UnwindSafe for RationalSequence<T>where T: UnwindSafe, Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. 
T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. U: TryFrom<T>, #### fn exact_from(value: T) -> U ### impl<T, U> ExactInto<U> for Twhere U: ExactFrom<T>, #### fn exact_into(self) -> U ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> OverflowingInto<U> for Twhere U: OverflowingFrom<T>, #### fn overflowing_into(self) -> (U, bool) ### impl<T, U> RoundingInto<U> for Twhere U: RoundingFrom<T>, #### fn rounding_into(self, rm: RoundingMode) -> (U, Ordering) ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> SaturatingInto<U> for Twhere U: SaturatingFrom<T>, #### fn saturating_into(self) -> U ### impl<T> ToDebugString for Twhere T: Debug, #### fn to_debug_string(&self) -> String Returns the `String` produced by `T`s `Debug` implementation. ##### Examples ``` use malachite_base::strings::ToDebugString; assert_eq!([1, 2, 3].to_debug_string(), "[1, 2, 3]"); assert_eq!( [vec![2, 3], vec![], vec![4]].to_debug_string(), "[[2, 3], [], [4]]" ); assert_eq!(Some(5).to_debug_string(), "Some(5)"); ``` ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T, U> WrappingInto<U> for Twhere U: WrappingFrom<T>, #### fn wrapping_into(self) -> U Module malachite::rounding_modes === `RoundingMode`, an enum used to specify rounding behavior. Modules --- * exhaustiveIterators that generate `RoundingMode`s without repetition. * from_strFunctions for converting a string to a `RoundingMode`. * negFunctions for negating a `RoundingMode`. * randomIterators that generate `RoundingMode`s randomly. * to_stringFunctions for displaying a `RoundingMode`. Enums --- * RoundingModeAn enum that specifies how a value should be rounded. Constants --- * ROUNDING_MODESA list of all six rounding modes. Enum malachite::rounding_modes::RoundingMode === ``` pub enum RoundingMode { Down, Up, Floor, Ceiling, Nearest, Exact, } ``` An enum that specifies how a value should be rounded. A `RoundingMode` can often be specified when a function conceptually returns a value of one type, but must be rounded to another type. The most common case is a conceptually real-valued function whose result must be rounded to an integer, like `div_round`. Examples --- Here are some examples of how floating-point values would be rounded to integer values using the different `RoundingMode`s. 
| x | `Floor` | `Ceiling` | `Down` | `Up` | `Nearest` | `Exact` | | --- | --- | --- | --- | --- | --- | --- | | 3.0 | 3 | 3 | 3 | 3 | 3 | 3 | | 3.2 | 3 | 4 | 3 | 4 | 3 | `panic!()` | | 3.8 | 3 | 4 | 3 | 4 | 4 | `panic!()` | | 3.5 | 3 | 4 | 3 | 4 | 4 | `panic!()` | | 4.5 | 4 | 5 | 4 | 5 | 4 | `panic!()` | | -3.2 | -4 | -3 | -3 | -4 | -3 | `panic!()` | | -3.8 | -4 | -3 | -3 | -4 | -4 | `panic!()` | | -3.5 | -4 | -3 | -3 | -4 | -4 | `panic!()` | | -4.5 | -5 | -4 | -4 | -5 | -4 | `panic!()` | Sometimes a `RoundingMode` is used in an unusual context, such as rounding an integer to a floating-point number, in which case further explanation of its behavior is provided at the usage site. A `RoundingMode` takes up 1 byte of space. Variants --- ### Down Applies the function $x \mapsto \operatorname{sgn}(x) \lfloor |x| \rfloor$. In other words, the value is rounded towards $0$. ### Up Applies the function $x \mapsto \operatorname{sgn}(x) \lceil |x| \rceil$. In other words, the value is rounded away from $0$. ### Floor Applies the floor function: $x \mapsto \lfloor x \rfloor$. In other words, the value is rounded towards $-\infty$. ### Ceiling Applies the ceiling function: $x \mapsto \lceil x \rceil$. In other words, the value is rounded towards $\infty$. ### Nearest Applies the function $$ x \mapsto \begin{cases} \lfloor x \rfloor & x - \lfloor x \rfloor < \frac{1}{2} \\ \lceil x \rceil & x - \lfloor x \rfloor > \frac{1}{2} \\ \lfloor x \rfloor & x - \lfloor x \rfloor = \frac{1}{2} \ \text{and} \ \lfloor x \rfloor \ \text{is even} \\ \lceil x \rceil & x - \lfloor x \rfloor = \frac{1}{2} \ \text{and} \ \lfloor x \rfloor \ \text{is odd.} \end{cases} $$ In other words, it rounds to the nearest integer, and when there’s a tie, it rounds to the nearest even integer. This is also called *bankers’ rounding* and is often used as a default. ### Exact Panics if the value is not already rounded. Trait Implementations --- ### impl Clone for RoundingMode #### fn clone(&self) -> RoundingMode Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `RoundingMode` to a `String`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::rounding_modes::RoundingMode; assert_eq!(RoundingMode::Down.to_string(), "Down"); assert_eq!(RoundingMode::Up.to_string(), "Up"); assert_eq!(RoundingMode::Floor.to_string(), "Floor"); assert_eq!(RoundingMode::Ceiling.to_string(), "Ceiling"); assert_eq!(RoundingMode::Nearest.to_string(), "Nearest"); assert_eq!(RoundingMode::Exact.to_string(), "Exact"); ``` ### impl FromStr for RoundingMode #### fn from_str(src: &str) -> Result<RoundingMode, StringConverts a string to a `RoundingMode`. If the string does not represent a valid `RoundingMode`, an `Err` is returned with the unparseable string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ = `src.len()`. The worst case occurs when the input string is invalid and must be copied into an `Err`. 
##### Examples ``` use malachite_base::rounding_modes::RoundingMode; use std::str::FromStr; assert_eq!(RoundingMode::from_str("Down"), Ok(RoundingMode::Down)); assert_eq!(RoundingMode::from_str("Up"), Ok(RoundingMode::Up)); assert_eq!(RoundingMode::from_str("Floor"), Ok(RoundingMode::Floor)); assert_eq!(RoundingMode::from_str("Ceiling"), Ok(RoundingMode::Ceiling)); assert_eq!(RoundingMode::from_str("Nearest"), Ok(RoundingMode::Nearest)); assert_eq!(RoundingMode::from_str("Exact"), Ok(RoundingMode::Exact)); assert_eq!(RoundingMode::from_str("abc"), Err("abc".to_string())); ``` #### type Err = String The associated error which can be returned from parsing.### impl Hash for RoundingMode #### fn hash<__H>(&self, state: &mut __H)where __H: Hasher, Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl Neg for RoundingMode Returns the negative of a `RoundingMode`. The negative is defined so that if a `RoundingMode` $m$ is used to round the result of an odd function $f$, then $f(x, -m) = -f(-x, m)$. `Floor` and `Ceiling` are swapped, and the other modes are unchanged. #### Worst-case complexity Constant time and additional memory. #### Examples ``` use malachite_base::rounding_modes::RoundingMode; assert_eq!(-RoundingMode::Down, RoundingMode::Down); assert_eq!(-RoundingMode::Up, RoundingMode::Up); assert_eq!(-RoundingMode::Floor, RoundingMode::Ceiling); assert_eq!(-RoundingMode::Ceiling, RoundingMode::Floor); assert_eq!(-RoundingMode::Nearest, RoundingMode::Nearest); assert_eq!(-RoundingMode::Exact, RoundingMode::Exact); ``` #### type Output = RoundingMode The resulting type after applying the `-` operator.#### fn neg(self) -> RoundingMode Performs the unary `-` operation. #### fn neg_assign(&mut self) Replaces a `RoundingMode` with its negative. The negative is defined so that if a `RoundingMode` $m$ is used to round the result of an odd function $f$, then $f(x, -m) = -f(-x, m)$. `Floor` and `Ceiling` are swapped, and the other modes are unchanged. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegAssign; use malachite_base::rounding_modes::RoundingMode; let mut rm = RoundingMode::Down; rm.neg_assign(); assert_eq!(rm, RoundingMode::Down); let mut rm = RoundingMode::Floor; rm.neg_assign(); assert_eq!(rm, RoundingMode::Ceiling); ``` ### impl Ord for RoundingMode #### fn cmp(&self, other: &RoundingMode) -> Ordering This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere Self: Sized + PartialOrd<Self>, Restrict a value to a certain interval. #### fn eq(&self, other: &RoundingMode) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<RoundingMode> for RoundingMode #### fn partial_cmp(&self, other: &RoundingMode) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. ### impl Eq for RoundingMode ### impl StructuralEq for RoundingMode ### impl StructuralPartialEq for RoundingMode Auto Trait Implementations --- ### impl RefUnwindSafe for RoundingMode ### impl Send for RoundingMode ### impl Sync for RoundingMode ### impl Unpin for RoundingMode ### impl UnwindSafe for RoundingMode Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. U: TryFrom<T>, #### fn exact_from(value: T) -> U ### impl<T, U> ExactInto<U> for Twhere U: ExactFrom<T>, #### fn exact_into(self) -> U ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> OverflowingInto<U> for Twhere U: OverflowingFrom<T>, #### fn overflowing_into(self) -> (U, bool) ### impl<T, U> RoundingInto<U> for Twhere U: RoundingFrom<T>, #### fn rounding_into(self, rm: RoundingMode) -> (U, Ordering) ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> SaturatingInto<U> for Twhere U: SaturatingFrom<T>, #### fn saturating_into(self) -> U ### impl<T> ToDebugString for Twhere T: Debug, #### fn to_debug_string(&self) -> String Returns the `String` produced by `T`s `Debug` implementation. ##### Examples ``` use malachite_base::strings::ToDebugString; assert_eq!([1, 2, 3].to_debug_string(), "[1, 2, 3]"); assert_eq!( [vec![2, 3], vec![], vec![4]].to_debug_string(), "[[2, 3], [], [4]]" ); assert_eq!(Some(5).to_debug_string(), "Some(5)"); ``` ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. 
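To complement the rounding table and the `RoundingInto`/`RoundingFrom` blanket implementations listed above, here is a hedged sketch of rounding a float to an integer under several modes. It assumes that `RoundingFrom<f32>` is implemented for `i32` and exported as `malachite_base::num::conversion::traits::RoundingFrom` in your version of the crate; neither assumption is spelled out on this page.
```
use malachite_base::num::conversion::traits::RoundingFrom;
use malachite_base::rounding_modes::RoundingMode;

fn main() {
    // Round -3.5f32 to an i32 under the non-panicking modes; the expected
    // values follow the table above (Exact is omitted because it panics on
    // values that are not already integers).
    for rm in [
        RoundingMode::Floor,
        RoundingMode::Ceiling,
        RoundingMode::Down,
        RoundingMode::Up,
        RoundingMode::Nearest,
    ] {
        // Per the blanket `RoundingInto` impl, rounding also reports whether
        // the result is Less than, Equal to, or Greater than the input.
        let (rounded, ordering) = i32::rounding_from(-3.5f32, rm);
        println!("{}: {} ({:?})", rm, rounded, ordering);
    }
}
```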
U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T, U> WrappingInto<U> for Twhere U: WrappingFrom<T>, #### fn wrapping_into(self) -> U Module malachite::sets === Functions for working with `HashSet`s and `BTreeSet`s. Modules --- * exhaustiveIterators that generate sets without repetition. * randomIterators that generate sets randomly. Module malachite::slices === Functions for working with slices. Structs --- * ExhaustiveSlicePermutationsGenerates every permutation of a slice. * RandomSlicePermutationsUniformly generates a random permutation of references to a slice. * RandomValuesFromSliceUniformly generates a random reference to a value from a nonempty slice. Functions --- * exhaustive_slice_permutationsGenerates every permutation of a slice. * min_repeating_lenGiven a slice with nonzero length $\ell$, returns the smallest $n$ such that the slice consists of $n/\ell$ copies of a length-$\ell$ subslice. * random_slice_permutationsUniformly generates a random permutation of references to a slice. * random_values_from_sliceUniformly generates a random reference to a value from a nonempty slice. * slice_leading_zerosCounts the number of zeros that a slice starts with. * slice_move_leftGiven a slice and an starting index, copies the subslice starting from that index to the beginning of the slice. * slice_set_zeroSets all values in a slice to 0. * slice_test_zeroTests whether all values in a slice are equal to 0. * slice_trailing_zerosCounts the number of zeros that a slice ends with. Module malachite::strings === Functions for working with `String`s. Modules --- * exhaustiveIterators that generate `String`s without repetition. * randomIterators that generate `String`s randomly. Structs --- * StringsFromCharVecsGenerates `String`s, given an iterator that generates `Vec<char>`s. Traits --- * ToBinaryStringA trait that provides an ergonomic way to create the string specified by a `Binary` implementation. * ToDebugStringA trait that provides an ergonomic way to create the string specified by a `Debug` implementation. * ToLowerHexStringA trait that provides an ergonomic way to create the string specified by a `LowerHex` implementation. * ToOctalStringA trait that provides an ergonomic way to create the string specified by an `Octal` implementation. * ToUpperHexStringA trait that provides an ergonomic way to create the string specified by an `UpperHex` implementation. Functions --- * string_is_subsetReturns whether all of the first string slice’s characters are present in the second string slice. * string_sortSorts the characters of a string slice and returns them in a new `String`. * string_uniqueTakes a string slice’s unique characters and returns them in a new `String`. * strings_from_char_vecsGenerates `String`s, given an iterator that generates `Vec<char>`s. Module malachite::tuples === Functions for working with tuples. Modules --- * exhaustiveIterators that generate tuples without repetition. * randomIterators that generate tuples randomly. Structs --- * SingletonsGenerates all singletons (1-element tuples) with values from a given iterator. 
Functions --- * singletonsGenerates all singletons (1-element tuples) with values from a given iterator. Module malachite::unions === Unions (sum types). These are essentially generic enums. unwrap --- ``` use malachite_base::unions::UnionFromStrError; use malachite_base::union_struct; use std::fmt::{self, Display, Formatter}; use std::str::FromStr; union_struct!( (pub(crate)), Union3, Union3<T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c] ); let mut u: Union3<char, char, char>; u = Union3::A('a'); assert_eq!(u.unwrap(), 'a'); u = Union3::B('b'); assert_eq!(u.unwrap(), 'b'); u = Union3::C('c'); assert_eq!(u.unwrap(), 'c'); ``` fmt --- ``` use malachite_base::unions::UnionFromStrError; use malachite_base::union_struct; use std::fmt::{self, Display, Formatter}; use std::str::FromStr; union_struct!( (pub(crate)), Union3, Union3<T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c] ); let mut u: Union3<char, u32, bool>; u = Union3::A('a'); assert_eq!(u.to_string(), "A(a)"); u = Union3::B(5); assert_eq!(u.to_string(), "B(5)"); u = Union3::C(false); assert_eq!(u.to_string(), "C(false)"); ``` from_str --- ``` use malachite_base::unions::UnionFromStrError; use malachite_base::union_struct; use std::fmt::{self, Display, Formatter}; use std::str::FromStr; union_struct!( (pub(crate)), Union3, Union3<T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c] ); let u3: Union3<bool, u32, char> = Union3::from_str("B(5)").unwrap(); assert_eq!(u3, Union3::B(5)); let result: Result<Union3<char, u32, bool>, _> = Union3::from_str("xyz"); assert_eq!(result, Err(UnionFromStrError::Generic("xyz".to_string()))); let result: Result<Union3<char, u32, bool>, _> = Union3::from_str("A(ab)"); if let Err(UnionFromStrError::Specific(Union3::A(_e))) = result { } else { panic!("wrong error variant") } ``` Modules --- * exhaustiveIterators that generate unions without repetition. * randomIterators that generate unions randomly. Enums --- * Union2This is a union, or sum type, of $n$ values. It is essentially a generic enum. * UnionFromStrErrorThis is the error type for the unions’ `FromStr` implementations. Module malachite::vecs === Functions for working with `Vec`s. Modules --- * exhaustiveIterators that generate `Vec`s without repetition. * randomIterators that generate `Vec`s randomly. Structs --- * ExhaustiveVecPermutationsGenerates every permutation of a `Vec`. * RandomValuesFromVecUniformly generates a random value from a nonempty `Vec`. * RandomVecPermutationsUniformly generates a random `Vec` of values cloned from an original `Vec`. Functions --- * exhaustive_vec_permutationsGenerates every permutation of a `Vec`. * random_values_from_vecUniformly generates a random value from a nonempty `Vec`. * random_vec_permutationsUniformly generates a random `Vec` of values cloned from an original `Vec`. * vec_delete_leftDeletes several values from the left (beginning) of a `Vec`. * vec_from_strConverts a string to an `Vec<T>`, where `T` implements `FromStr`. * vec_from_str_customConverts a string to an `Vec<T>`, given a function to parse a string into a `T`. * vec_pad_leftInserts several copies of a value at the left (beginning) of a `Vec`. Macro malachite::custom_tuples === ``` macro_rules! custom_tuples { ( ($($vis:tt)*), $exhaustive_struct: ident, $out_t: ty, $nones: expr, $unwrap_tuple: ident, $exhaustive_fn: ident, $exhaustive_custom_fn: ident, $([$t: ident, $it: ident, $xs: ident, $xs_done: ident, $([$i: tt, $out_x: ident]),*]),* ) => { ... }; } ``` Defines custom exhaustive tuple generators. 
You can define custom tuple generators like `exhaustive_triples_xyx` or `exhaustive_triples_xyx_custom_output` in your program using the code below. See usage examples here and here. ``` use malachite_base::iterators::bit_distributor::{BitDistributor, BitDistributorOutputType}; use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::num::conversion::traits::{ExactFrom, WrappingFrom}; use malachite_base::num::logic::traits::SignificantBits; use std::cmp::max; #[allow(clippy::missing_const_for_fn)] fn unwrap_triple<X, Y, Z>((a, b, c): (Option<X>, Option<Y>, Option<Z>)) -> (X, Y, Z) { (a.unwrap(), b.unwrap(), c.unwrap()) } #[allow(clippy::missing_const_for_fn)] fn unwrap_quadruple<X, Y, Z, W>( (a, b, c, d): (Option<X>, Option<Y>, Option<Z>, Option<W>), ) -> (X, Y, Z, W) { (a.unwrap(), b.unwrap(), c.unwrap(), d.unwrap()) } #[allow(clippy::missing_const_for_fn)] fn unwrap_quintuple<X, Y, Z, W, V>( (a, b, c, d, e): (Option<X>, Option<Y>, Option<Z>, Option<W>, Option<V>), ) -> (X, Y, Z, W, V) { (a.unwrap(), b.unwrap(), c.unwrap(), d.unwrap(), e.unwrap()) } custom_tuples!( (pub(crate)), ExhaustiveTriplesXXY, (X, X, Y), (None, None, None), unwrap_triple, exhaustive_triples_xxy, exhaustive_triples_xxy_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0], [1, output_type_xs_1]], [Y, J, ys, ys_done, [2, output_type_ys_2]] ); custom_tuples!( (pub(crate)), ExhaustiveTriplesXYX, (X, Y, X), (None, None, None), unwrap_triple, exhaustive_triples_xyx, exhaustive_triples_xyx_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0], [2, output_type_ys_1]], [Y, J, ys, ys_done, [1, output_type_xs_2]] ); custom_tuples!( (pub(crate)), ExhaustiveTriplesXYY, (X, Y, Y), (None, None, None), unwrap_triple, exhaustive_triples_xyy, exhaustive_triples_xyy_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0]], [Y, J, ys, ys_done, [1, output_type_ys_1], [2, output_type_ys_2]] ); custom_tuples!( (pub(crate)), ExhaustiveQuadruplesXXXY, (X, X, X, Y), (None, None, None, None), unwrap_quadruple, exhaustive_quadruples_xxxy, exhaustive_quadruples_xxxy_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0], [1, output_type_xs_1], [2, output_type_xs_2]], [Y, J, ys, ys_done, [3, output_type_ys_3]] ); custom_tuples!( (pub(crate)), ExhaustiveQuadruplesXXYX, (X, X, Y, X), (None, None, None, None), unwrap_quadruple, exhaustive_quadruples_xxyx, exhaustive_quadruples_xxyx_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0], [1, output_type_xs_1], [3, output_type_xs_3]], [Y, J, ys, ys_done, [2, output_type_ys_2]] ); custom_tuples!( (pub(crate)), ExhaustiveQuadruplesXXYZ, (X, X, Y, Z), (None, None, None, None), unwrap_quadruple, exhaustive_quadruples_xxyz, exhaustive_quadruples_xxyz_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0], [1, output_type_xs_1]], [Y, J, ys, ys_done, [2, output_type_ys_2]], [Z, K, zs, zs_done, [3, output_type_zs_3]] ); custom_tuples!( (pub(crate)), ExhaustiveQuadruplesXYXZ, (X, Y, X, Z), (None, None, None, None), unwrap_quadruple, exhaustive_quadruples_xyxz, exhaustive_quadruples_xyxz_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0], [2, output_type_xs_2]], [Y, J, ys, ys_done, [1, output_type_ys_1]], [Z, K, zs, zs_done, [3, output_type_zs_3]] ); custom_tuples!( (pub(crate)), ExhaustiveQuadruplesXYYX, (X, Y, Y, X), (None, None, None, None), unwrap_quadruple, exhaustive_quadruples_xyyx, exhaustive_quadruples_xyyx_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0], [3, output_type_xs_3]], [Y, J, ys, ys_done, [1, output_type_ys_1], [2, output_type_ys_2]] ); 
custom_tuples!( (pub(crate)), ExhaustiveQuadruplesXYYZ, (X, Y, Y, Z), (None, None, None, None), unwrap_quadruple, exhaustive_quadruples_xyyz, exhaustive_quadruples_xyyz_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0]], [Y, J, ys, ys_done, [1, output_type_ys_1], [2, output_type_ys_2]], [Z, K, zs, zs_done, [3, output_type_zs_3]] ); custom_tuples!( (pub(crate)), ExhaustiveQuadruplesXYZZ, (X, Y, Z, Z), (None, None, None, None), unwrap_quadruple, exhaustive_quadruples_xyzz, exhaustive_quadruples_xyzz_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0]], [Y, J, ys, ys_done, [1, output_type_ys_1]], [Z, K, zs, zs_done, [2, output_type_zs_2], [3, output_type_zs_3]] ); custom_tuples!( (pub(crate)), ExhaustiveQuintuplesXYYYZ, (X, Y, Y, Y, Z), (None, None, None, None, None), unwrap_quintuple, exhaustive_quintuples_xyyyz, exhaustive_quintuples_xyyyz_custom_output, [X, I, xs, xs_done, [0, output_type_xs_0]], [Y, J, ys, ys_done, [1, output_type_ys_1], [2, output_type_ys_2], [3, output_type_ys_3]], [Z, K, zs, zs_done, [4, output_type_zs_4]] ); ``` Macro malachite::exhaustive_ordered_unique_tuples === ``` macro_rules! exhaustive_ordered_unique_tuples { ( ($($vis:tt)*), $struct: ident, $k: expr, $out_t: ty, $fn: ident, [$($i: expr),*] ) => { ... }; } ``` Defines exhaustive ordered unique tuple generators. Malachite provides `exhaustive_ordered_unique_pairs`, but you can also define `exhaustive_ordered_unique_triples`, `exhaustive_ordered_unique_quadruples`, and so on, in your program using the code below. The documentation for `exhaustive_ordered_unique_pairs` describes these other functions as well. See usage examples here. ``` use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::vecs::exhaustive::next_bit_pattern; exhaustive_ordered_unique_tuples!( (pub(crate)), ExhaustiveOrderedUniqueTriples, 3, (I::Item, I::Item, I::Item), exhaustive_ordered_unique_triples, [0, 1, 2] ); exhaustive_ordered_unique_tuples!( (pub(crate)), ExhaustiveOrderedUniqueQuadruples, 4, (I::Item, I::Item, I::Item, I::Item), exhaustive_ordered_unique_quadruples, [0, 1, 2, 3] ); exhaustive_ordered_unique_tuples!( (pub(crate)), ExhaustiveOrderedUniqueQuintuples, 5, (I::Item, I::Item, I::Item, I::Item, I::Item), exhaustive_ordered_unique_quintuples, [0, 1, 2, 3, 4] ); exhaustive_ordered_unique_tuples!( (pub(crate)), ExhaustiveOrderedUniqueSextuples, 6, (I::Item, I::Item, I::Item, I::Item, I::Item, I::Item), exhaustive_ordered_unique_sextuples, [0, 1, 2, 3, 4, 5] ); exhaustive_ordered_unique_tuples!( (pub(crate)), ExhaustiveOrderedUniqueSeptuples, 7, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), exhaustive_ordered_unique_septuples, [0, 1, 2, 3, 4, 5, 6] ); exhaustive_ordered_unique_tuples!( (pub(crate)), ExhaustiveOrderedUniqueOctuples, 8, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), exhaustive_ordered_unique_octuples, [0, 1, 2, 3, 4, 5, 6, 7] ); ``` Macro malachite::exhaustive_tuples === ``` macro_rules! exhaustive_tuples { ( ($($vis:tt)*), $exhaustive_struct: ident, $exhaustive_fn: ident, $exhaustive_fn_custom_output: ident, $([$i: tt, $t: ident, $it: ident, $xs: ident, $xs_done: ident, $out_x: ident]),* ) => { ... }; } ``` Defines exhaustive tuple generators. Malachite provides `exhaustive_pairs` and `exhaustive_pairs_custom_output`, but you can also define `exhaustive_triples`, `exhaustive_quadruples`, and so on, and `exhaustive_triples_custom_output`, `exhaustive_quadruples_custom_output`, and so on, in your program using the code below. 
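Before the macro invocations below, here is a hedged sketch of the provided two-iterator base case, `exhaustive_pairs`. The exact re-export paths (`malachite_base::tuples::exhaustive::exhaustive_pairs` and `malachite_base::num::exhaustive::exhaustive_unsigneds`) are assumptions, since they are not spelled out on this page.
```
use malachite_base::num::exhaustive::exhaustive_unsigneds;
use malachite_base::tuples::exhaustive::exhaustive_pairs;

fn main() {
    // Pair every u8 with every u8, interleaving both coordinates rather than
    // enumerating in lexicographic order; take a small prefix and print it.
    let pairs: Vec<(u8, u8)> =
        exhaustive_pairs(exhaustive_unsigneds::<u8>(), exhaustive_unsigneds::<u8>())
            .take(6)
            .collect();
    println!("{:?}", pairs);
}
```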
The documentation for `exhaustive_pairs` and `exhaustive_pairs_custom_output` describes these other functions as well. See usage examples here and here. ``` use malachite_base::iterators::bit_distributor::{BitDistributor, BitDistributorOutputType}; use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::num::conversion::traits::{ExactFrom, WrappingFrom}; use malachite_base::num::logic::traits::SignificantBits; use std::cmp::max; exhaustive_tuples!( (pub(crate)), ExhaustiveTriples, exhaustive_triples, exhaustive_triples_custom_output, [0, X, I, xs, xs_done, output_type_x], [1, Y, J, ys, ys_done, output_type_y], [2, Z, K, zs, zs_done, output_type_z] ); exhaustive_tuples!( (pub(crate)), ExhaustiveQuadruples, exhaustive_quadruples, exhaustive_quadruples_custom_output, [0, X, I, xs, xs_done, output_type_x], [1, Y, J, ys, ys_done, output_type_y], [2, Z, K, zs, zs_done, output_type_z], [3, W, L, ws, ws_done, output_type_w] ); exhaustive_tuples!( (pub(crate)), ExhaustiveQuintuples, exhaustive_quintuples, exhaustive_quintuples_custom_output, [0, X, I, xs, xs_done, output_type_x], [1, Y, J, ys, ys_done, output_type_y], [2, Z, K, zs, zs_done, output_type_z], [3, W, L, ws, ws_done, output_type_w], [4, V, M, vs, vs_done, output_type_v] ); exhaustive_tuples!( (pub(crate)), ExhaustiveSextuples, exhaustive_sextuples, exhaustive_sextuples_custom_output, [0, X, I, xs, xs_done, output_type_x], [1, Y, J, ys, ys_done, output_type_y], [2, Z, K, zs, zs_done, output_type_z], [3, W, L, ws, ws_done, output_type_w], [4, V, M, vs, vs_done, output_type_v], [5, U, N, us, us_done, output_type_u] ); exhaustive_tuples!( (pub(crate)), ExhaustiveSeptuples, exhaustive_septuples, exhaustive_septuples_custom_output, [0, X, I, xs, xs_done, output_type_x], [1, Y, J, ys, ys_done, output_type_y], [2, Z, K, zs, zs_done, output_type_z], [3, W, L, ws, ws_done, output_type_w], [4, V, M, vs, vs_done, output_type_v], [5, U, N, us, us_done, output_type_u], [6, T, O, ts, ts_done, output_type_t] ); exhaustive_tuples!( (pub(crate)), ExhaustiveOctuples, exhaustive_octuples, exhaustive_octuples_custom_output, [0, X, I, xs, xs_done, output_type_x], [1, Y, J, ys, ys_done, output_type_y], [2, Z, K, zs, zs_done, output_type_z], [3, W, L, ws, ws_done, output_type_w], [4, V, M, vs, vs_done, output_type_v], [5, U, N, us, us_done, output_type_u], [6, T, O, ts, ts_done, output_type_t], [7, S, P, ss, ss_done, output_type_s] ); ``` Macro malachite::exhaustive_tuples_1_input === ``` macro_rules! exhaustive_tuples_1_input { ( ($($vis:tt)*), $exhaustive_struct: ident, $exhaustive_fn: ident, $exhaustive_fn_from_single: ident, $out_type: ty, $([$i: tt, $out_x: ident]),* ) => { ... }; } ``` Defines exhaustive tuple generators that generate tuples from a single iterator. Malachite provides `exhaustive_pairs_from_single` and `exhaustive_pairs_1_input`, but you can also define `exhaustive_triples_from_single`, `exhaustive_quadruples_from_single`, and so on, and `exhaustive_triples_1_input`, `exhaustive_quadruples_1_input`, and so on, in your program using the code below. The documentation for `exhaustive_pairs_from_single` and `exhaustive_pairs_1_input` describes these other functions as well. See usage examples here and here. 
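Before the full macro invocation below, a hedged sketch of the provided single-iterator base case, `exhaustive_pairs_from_single`; the re-export paths used here are assumptions, as they are not shown on this page.
```
use malachite_base::num::exhaustive::exhaustive_unsigneds;
use malachite_base::tuples::exhaustive::exhaustive_pairs_from_single;

fn main() {
    // All pairs (x, y) whose components are drawn from a single iterator.
    let pairs: Vec<(u8, u8)> =
        exhaustive_pairs_from_single(exhaustive_unsigneds::<u8>())
            .take(6)
            .collect();
    println!("{:?}", pairs);
}
```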
``` use malachite_base::iterators::bit_distributor::{BitDistributor, BitDistributorOutputType}; use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::num::arithmetic::traits::CheckedPow; use malachite_base::num::conversion::traits::{ExactFrom, WrappingFrom}; use malachite_base::num::logic::traits::SignificantBits; use std::cmp::max; use std::marker::PhantomData; exhaustive_tuples_1_input!( (pub(crate)), ExhaustiveTriples1Input, exhaustive_triples_1_input, exhaustive_triples_from_single, (I::Item, I::Item, I::Item), [0, output_type_x], [1, output_type_y], [2, output_type_z] ); exhaustive_tuples_1_input!( (pub(crate)), ExhaustiveQuadruples1Input, exhaustive_quadruples_1_input, exhaustive_quadruples_from_single, (I::Item, I::Item, I::Item, I::Item), [0, output_type_x], [1, output_type_y], [2, output_type_z], [3, output_type_w] ); exhaustive_tuples_1_input!( (pub(crate)), ExhaustiveQuintuples1Input, exhaustive_quintuples_1_input, exhaustive_quintuples_from_single, (I::Item, I::Item, I::Item, I::Item, I::Item), [0, output_type_x], [1, output_type_y], [2, output_type_z], [3, output_type_w], [4, output_type_v] ); exhaustive_tuples_1_input!( (pub(crate)), ExhaustiveSextuples1Input, exhaustive_sextuples_1_input, exhaustive_sextuples_from_single, (I::Item, I::Item, I::Item, I::Item, I::Item, I::Item), [0, output_type_x], [1, output_type_y], [2, output_type_z], [3, output_type_w], [4, output_type_v], [5, output_type_u] ); exhaustive_tuples_1_input!( (pub(crate)), ExhaustiveSeptuples1Input, exhaustive_septuples_1_input, exhaustive_septuples_from_single, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), [0, output_type_x], [1, output_type_y], [2, output_type_z], [3, output_type_w], [4, output_type_v], [5, output_type_u], [6, output_type_t] ); exhaustive_tuples_1_input!( (pub(crate)), ExhaustiveOctuples1Input, exhaustive_octuples_1_input, exhaustive_octuples_from_single, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), [0, output_type_x], [1, output_type_y], [2, output_type_z], [3, output_type_w], [4, output_type_v], [5, output_type_u], [6, output_type_t], [7, output_type_s] ); ``` Macro malachite::exhaustive_unions === ``` macro_rules! exhaustive_unions { ( ($($vis:tt)*), $union: ident, $lex_struct: ident, $exhaustive_struct: ident, $lex_fn: ident, $exhaustive_fn: ident, $n: expr, $([$i: expr, $t: ident, $it: ident, $variant: ident, $xs: ident, $xs_done:ident]),* ) => { ... }; } ``` Defines exhaustive union generators. Malachite provides `lex_union2s` and `exhaustive_union2s`, but you can also define `lex_union3s`, `lex_union4s`, and so on, and `exhaustive_union3s`, `exhaustive_union4s`, and so on, in your program using the code below. The documentation for `lex_union2s` and `exhaustive_union2s` describes these other functions as well. See usage examples here and here. 
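Before the macro example below, a hedged sketch of the provided base case, `exhaustive_union2s`. The paths `malachite_base::unions::exhaustive::exhaustive_union2s` and `malachite_base::bools::exhaustive::exhaustive_bools` are assumptions, as is the generated `Display` output for `Union2`; only the existence of `exhaustive_union2s` and `Union2` is stated on this page.
```
use malachite_base::bools::exhaustive::exhaustive_bools;
use malachite_base::num::exhaustive::exhaustive_unsigneds;
use malachite_base::unions::exhaustive::exhaustive_union2s;

fn main() {
    // Interleave values from two differently-typed iterators into Union2s.
    for u in exhaustive_union2s(exhaustive_bools(), exhaustive_unsigneds::<u8>()).take(6) {
        // Display for unions is documented above to look like "A(false)" or "B(5)".
        println!("{}", u);
    }
}
```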
``` use malachite_base::unions::UnionFromStrError; use std::fmt::{self, Display, Formatter}; use std::str::FromStr; union_struct!( (pub(crate)), Union3, Union3<T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c] ); union_struct!( (pub(crate)), Union4, Union4<T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d] ); union_struct!( (pub(crate)), Union5, Union5<T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e] ); union_struct!( (pub(crate)), Union6, Union6<T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f] ); union_struct!( (pub(crate)), Union7, Union7<T, T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f], [G, G, 'G', g] ); union_struct!( (pub(crate)), Union8, Union8<T, T, T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f], [G, G, 'G', g], [H, H, 'H', h] ); exhaustive_unions!( (pub(crate)), Union3, LexUnion3s, ExhaustiveUnion3s, lex_union3s, exhaustive_union3s, 3, [0, X, I, A, xs, xs_done], [1, Y, J, B, ys, ys_done], [2, Z, K, C, zs, zs_done] ); exhaustive_unions!( (pub(crate)), Union4, LexUnion4s, ExhaustiveUnion4s, lex_union4s, exhaustive_union4s, 4, [0, X, I, A, xs, xs_done], [1, Y, J, B, ys, ys_done], [2, Z, K, C, zs, zs_done], [3, W, L, D, ws, ws_done] ); exhaustive_unions!( (pub(crate)), Union5, LexUnion5s, ExhaustiveUnion5s, lex_union5s, exhaustive_union5s, 5, [0, X, I, A, xs, xs_done], [1, Y, J, B, ys, ys_done], [2, Z, K, C, zs, zs_done], [3, W, L, D, ws, ws_done], [4, V, M, E, vs, vs_done] ); exhaustive_unions!( (pub(crate)), Union6, LexUnion6s, ExhaustiveUnion6s, lex_union6s, exhaustive_union6s, 6, [0, X, I, A, xs, xs_done], [1, Y, J, B, ys, ys_done], [2, Z, K, C, zs, zs_done], [3, W, L, D, ws, ws_done], [4, V, M, E, vs, vs_done], [5, U, N, F, us, us_done] ); exhaustive_unions!( (pub(crate)), Union7, LexUnion7s, ExhaustiveUnion7s, lex_union7s, exhaustive_union7s, 7, [0, X, I, A, xs, xs_done], [1, Y, J, B, ys, ys_done], [2, Z, K, C, zs, zs_done], [3, W, L, D, ws, ws_done], [4, V, M, E, vs, vs_done], [5, U, N, F, us, us_done], [6, T, O, G, ts, ts_done] ); exhaustive_unions!( (pub(crate)), Union8, LexUnion8s, ExhaustiveUnion8s, lex_union8s, exhaustive_union8s, 8, [0, X, I, A, xs, xs_done], [1, Y, J, B, ys, ys_done], [2, Z, K, C, zs, zs_done], [3, W, L, D, ws, ws_done], [4, V, M, E, vs, vs_done], [5, U, N, F, us, us_done], [6, T, O, G, ts, ts_done], [7, S, P, H, ss, ss_done] ); ``` Macro malachite::exhaustive_unique_tuples === ``` macro_rules! exhaustive_unique_tuples { ( ($($vis:tt)*), $struct: ident, $k: expr, $out_t: ty, $fn: ident, [$($i: expr),*] ) => { ... }; } ``` Defines exhaustive unique tuple generators. Malachite provides `exhaustive_unique_pairs`, but you can also define `exhaustive_unique_triples`, `exhaustive_unique_quadruples`, and so on, in your program using the code below. See usage examples here. 
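For intuition, "unique" here means that the components of each generated tuple are pairwise distinct. A std-only brute-force sketch over a slice illustrates just that filter; the real generators work on iterators and interleave their output rather than nesting loops.

```
// Std-only sketch: ordered triples with pairwise-distinct components.
fn unique_triples<T: Copy + PartialEq>(xs: &[T]) -> Vec<(T, T, T)> {
    let mut out = Vec::new();
    for &a in xs {
        for &b in xs {
            for &c in xs {
                if a != b && a != c && b != c {
                    out.push((a, b, c));
                }
            }
        }
    }
    out
}

fn main() {
    // 3 * 2 * 1 = 6 unique triples over a 3-element set.
    assert_eq!(unique_triples(&[1, 2, 3]).len(), 6);
    println!("{:?}", unique_triples(&[1, 2, 3]));
}
```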
``` use malachite_base::num::iterators::{RulerSequence, ruler_sequence}; use malachite_base::tuples::exhaustive::{ExhaustiveDependentPairs, exhaustive_dependent_pairs}; use malachite_base::vecs::ExhaustiveVecPermutations; use malachite_base::vecs::exhaustive::{ ExhaustiveOrderedUniqueCollections, ExhaustiveUniqueVecsGenerator, exhaustive_ordered_unique_vecs_fixed_length }; exhaustive_unique_tuples!( (pub(crate)), ExhaustiveUniqueTriples, 3, (I::Item, I::Item, I::Item), exhaustive_unique_triples, [0, 1, 2] ); exhaustive_unique_tuples!( (pub(crate)), ExhaustiveUniqueQuadruples, 4, (I::Item, I::Item, I::Item, I::Item), exhaustive_unique_quadruples, [0, 1, 2, 3] ); exhaustive_unique_tuples!( (pub(crate)), ExhaustiveUniqueQuintuples, 5, (I::Item, I::Item, I::Item, I::Item, I::Item), exhaustive_unique_quintuples, [0, 1, 2, 3, 4] ); exhaustive_unique_tuples!( (pub(crate)), ExhaustiveUniqueSextuples, 6, (I::Item, I::Item, I::Item, I::Item, I::Item, I::Item), exhaustive_unique_sextuples, [0, 1, 2, 3, 4, 5] ); exhaustive_unique_tuples!( (pub(crate)), ExhaustiveUniqueSeptuples, 7, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), exhaustive_unique_septuples, [0, 1, 2, 3, 4, 5, 6] ); exhaustive_unique_tuples!( (pub(crate)), ExhaustiveUniqueOctuples, 8, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), exhaustive_unique_octuples, [0, 1, 2, 3, 4, 5, 6, 7] ); ``` Macro malachite::exhaustive_vecs_fixed_length === ``` macro_rules! exhaustive_vecs_fixed_length { ( ($($vis:tt)*), $exhaustive_struct: ident, $exhaustive_custom_fn: ident, $exhaustive_1_to_1_fn: ident, $([$i: expr, $it: ident, $xs: ident, $xs_done: ident, $outputs: ident]),* ) => { ... }; } ``` Defines exhaustive fixed-length `Vec` generators. Malachite provides `exhaustive_vecs_length_2` and `exhaustive_vecs_fixed_length_2_inputs`, but you can also define `exhaustive_vecs_length_3`, `exhaustive_vecs_length_4`, and so on, and `exhaustive_vecs_fixed_length_3_inputs`, `exhaustive_vecs_fixed_length_4_inputs`, and so on, in your program using the code below. The documentation for `exhaustive_vecs_length_2` and `exhaustive_vecs_fixed_length_2_inputs` describes these other functions as well. See usage examples here and here. 
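The `_inputs` variants are driven by an output-to-input map that wires each output position of the `Vec` to one of the element sources (hence the `validate_oi_map` import in the definition below). A std-only sketch of that idea, using plain lexicographic order for readability; the real generators interleave the positions fairly instead.

```
// Std-only sketch of the output-to-input map: position p of every generated
// Vec draws its elements from inputs[output_to_input[p]].
fn vecs_from_map<T: Copy>(inputs: &[&[T]], output_to_input: &[usize]) -> Vec<Vec<T>> {
    let mut out = vec![Vec::new()];
    for &src in output_to_input {
        let mut next = Vec::new();
        for prefix in &out {
            for &x in inputs[src] {
                let mut v = prefix.clone();
                v.push(x);
                next.push(v);
            }
        }
        out = next;
    }
    out
}

fn main() {
    let xs: &[u32] = &[0, 1];
    let ys: &[u32] = &[7];
    // Length-3 vecs whose positions 0 and 2 come from `xs` and position 1 from `ys`.
    let vecs = vecs_from_map(&[xs, ys], &[0, 1, 0]);
    assert_eq!(vecs, vec![vec![0, 7, 0], vec![0, 7, 1], vec![1, 7, 0], vec![1, 7, 1]]);
}
```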
``` use itertools::Itertools; use malachite_base::exhaustive_vecs_fixed_length; use malachite_base::iterators::bit_distributor::{BitDistributor, BitDistributorOutputType}; use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::num::conversion::traits::{ExactFrom, WrappingFrom}; use malachite_base::num::logic::traits::SignificantBits; use malachite_base::vecs::exhaustive::validate_oi_map; use std::cmp::max; exhaustive_vecs_fixed_length!( (pub(crate)), ExhaustiveFixedLengthVecs3Inputs, exhaustive_vecs_fixed_length_3_inputs, exhaustive_vecs_length_3, [0, I, xs, xs_done, xs_outputs], [1, J, ys, ys_done, ys_outputs], [2, K, zs, zs_done, zs_outputs] ); exhaustive_vecs_fixed_length!( (pub(crate)), ExhaustiveFixedLengthVecs4Inputs, exhaustive_vecs_fixed_length_4_inputs, exhaustive_vecs_length_4, [0, I, xs, xs_done, xs_outputs], [1, J, ys, ys_done, ys_outputs], [2, K, zs, zs_done, zs_outputs], [3, L, ws, ws_done, ws_outputs] ); exhaustive_vecs_fixed_length!( (pub(crate)), ExhaustiveFixedLengthVecs5Inputs, exhaustive_vecs_fixed_length_5_inputs, exhaustive_vecs_length_5, [0, I, xs, xs_done, xs_outputs], [1, J, ys, ys_done, ys_outputs], [2, K, zs, zs_done, zs_outputs], [3, L, ws, ws_done, ws_outputs], [4, M, vs, vs_done, vs_outputs] ); exhaustive_vecs_fixed_length!( (pub(crate)), ExhaustiveFixedLengthVecs6Inputs, exhaustive_vecs_fixed_length_6_inputs, exhaustive_vecs_length_6, [0, I, xs, xs_done, xs_outputs], [1, J, ys, ys_done, ys_outputs], [2, K, zs, zs_done, zs_outputs], [3, L, ws, ws_done, ws_outputs], [4, M, vs, vs_done, vs_outputs], [5, N, us, us_done, us_outputs] ); exhaustive_vecs_fixed_length!( (pub(crate)), ExhaustiveFixedLengthVecs7, exhaustive_vecs_fixed_length_7_inputs, exhaustive_vecs_length_7, [0, I, xs, xs_done, xs_outputs], [1, J, ys, ys_done, ys_outputs], [2, K, zs, zs_done, zs_outputs], [3, L, ws, ws_done, ws_outputs], [4, M, vs, vs_done, vs_outputs], [5, N, us, us_done, us_outputs], [6, O, ts, ts_done, ts_outputs] ); exhaustive_vecs_fixed_length!( (pub(crate)), ExhaustiveFixedLengthVecs8Inputs, exhaustive_vecs_fixed_length_8_inputs, exhaustive_vecs_length_8, [0, I, xs, xs_done, xs_outputs], [1, J, ys, ys_done, ys_outputs], [2, K, zs, zs_done, zs_outputs], [3, L, ws, ws_done, ws_outputs], [4, M, vs, vs_done, vs_outputs], [5, N, us, us_done, us_outputs], [6, O, ts, ts_done, ts_outputs], [7, P, ss, ss_done, ss_outputs] ); ``` Macro malachite::impl_named === ``` macro_rules! impl_named { ($t:ident) => { ... }; } ``` Automatically implements `Named` for a type. It doesn’t work very well for types whose names contain several tokens, like `(u8, u8)`, `&str`, or `Vec<bool>`. Examples --- ``` use malachite_base::named::Named; assert_eq!(u8::NAME, "u8"); assert_eq!(String::NAME, "String"); ``` Macro malachite::lex_custom_tuples === ``` macro_rules! lex_custom_tuples { ( ($($vis:tt)*), $exhaustive_struct: ident, $out_t: ty, $nones: expr, $unwrap_tuple: ident, $exhaustive_fn: ident, $([$t: ident, $it: ident, $xs: ident, $([$i: tt, $x: ident]),*]),* ) => { ... }; } ``` Defines custom lexicographic tuple generators. You can define custom tuple generators like `lex_triples_xxy` in your program using the code below. See usage examples here. ``` use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::lex_tuples; fn unwrap_triple<X, Y, Z>((a, b, c): (Option<X>, Option<Y>, Option<Z>)) -> (X, Y, Z) { (a.unwrap(), b.unwrap(), c.unwrap()) } lex_custom_tuples! 
{ (pub(crate)), LexTriplesXXY, (X, X, Y), (None, None, None), unwrap_triple, lex_triples_xxy, [X, I, xs, [0, x_0], [1, x_1]], [Y, J, ys, [2, y_2]] } lex_custom_tuples!( (pub(crate)), LexTriplesXYX, (X, Y, X), (None, None, None), unwrap_triple, lex_triples_xyx, [X, I, xs, [0, x_0], [2, x_2]], [Y, J, ys, [1, y_1]] ); lex_custom_tuples!( (pub(crate)), LexTriplesXYY, (X, Y, Y), (None, None, None), unwrap_triple, lex_triples_xyy, [X, I, xs, [0, x_0]], [Y, J, ys, [1, y_1], [2, y_2]] ); ``` Macro malachite::lex_ordered_unique_tuples === ``` macro_rules! lex_ordered_unique_tuples { ( ($($vis:tt)*), $struct: ident, $k: expr, $out_t: ty, $fn: ident, [$($i: expr),*] ) => { ... }; } ``` Defines lexicographic ordered unique tuple generators. Malachite provides `lex_ordered_unique_pairs`, but you can also define `lex_ordered_unique_triples`, `lex_ordered_unique_quadruples`, and so on, in your program using the code below. The documentation for `lex_ordered_unique_pairs` describes these other functions as well. See usage examples here. ``` use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::vecs::exhaustive::fixed_length_ordered_unique_indices_helper; use std::marker::PhantomData; lex_ordered_unique_tuples!( (pub(crate)), LexOrderedUniqueTriples, 3, (I::Item, I::Item, I::Item), lex_ordered_unique_triples, [0, 1, 2] ); lex_ordered_unique_tuples!( (pub(crate)), LexOrderedUniqueQuadruples, 4, (I::Item, I::Item, I::Item, I::Item), lex_ordered_unique_quadruples, [0, 1, 2, 3] ); lex_ordered_unique_tuples!( (pub(crate)), LexOrderedUniqueQuintuples, 5, (I::Item, I::Item, I::Item, I::Item, I::Item), lex_ordered_unique_quintuples, [0, 1, 2, 3, 4] ); lex_ordered_unique_tuples!( (pub(crate)), LexOrderedUniqueSextuples, 6, (I::Item, I::Item, I::Item, I::Item, I::Item, I::Item), lex_ordered_unique_sextuples, [0, 1, 2, 3, 4, 5] ); lex_ordered_unique_tuples!( (pub(crate)), LexOrderedUniqueSeptuples, 7, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), lex_ordered_unique_septuples, [0, 1, 2, 3, 4, 5, 6] ); lex_ordered_unique_tuples!( (pub(crate)), LexOrderedUniqueOctuples, 8, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), lex_ordered_unique_octuples, [0, 1, 2, 3, 4, 5, 6, 7] ); ``` Macro malachite::lex_tuples === ``` macro_rules! lex_tuples { ( ($($vis:tt)*), $k: expr, $exhaustive_struct: ident, $exhaustive_struct_from_single: ident, $exhaustive_fn: ident, $exhaustive_fn_from_single: ident, $single_out: tt, $([$i: expr, $t: ident, $it: ident, $xs: ident, $x:ident]),* ) => { ... }; } ``` Defines lexicographic tuple generators. Malachite provides `lex_pairs` and `lex_pairs_from_single`, but you can also define `lex_triples`, `lex_quadruples`, and so on, and `lex_triples_from_single`, `lex_quadruples_from_single`, and so on, in your program using the code below. The documentation for `lex_pairs` and `lex_pairs_from_single` describes these other functions as well. See usage examples here and here. 
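Before the macro definition below, a std-only sketch of what lexicographic pair order means, and why the `exhaustive_*` generators exist alongside the `lex_*` ones.

```
// Std-only sketch of lexicographic pair order: the second coordinate varies
// fastest, exactly what nested iteration gives. With an infinite second
// iterator this order would never advance the first coordinate, which is why
// the bit-distributor-based `exhaustive_*` generators are provided as well.
fn main() {
    let xs = ['a', 'b'];
    let ys = [1, 2, 3];
    let pairs: Vec<(char, i32)> = xs
        .iter()
        .flat_map(|&x| ys.iter().map(move |&y| (x, y)))
        .collect();
    assert_eq!(
        pairs,
        [('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('b', 3)]
    );
}
```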
``` use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::lex_tuples; fn clone_helper<T: Clone>(x: &T, _i: usize) -> T { x.clone() } lex_tuples!( (pub(crate)), 3, LexTriples, LexTriplesFromSingle, lex_triples, lex_triples_from_single, (T, T, T), [0, X, I, xs, x], [1, Y, J, ys, y], [2, Z, K, zs, z] ); lex_tuples!( (pub(crate)), 4, LexQuadruples, LexQuadruplesFromSingle, lex_quadruples, lex_quadruples_from_single, (T, T, T, T), [0, X, I, xs, x], [1, Y, J, ys, y], [2, Z, K, zs, z], [3, W, L, ws, w] ); lex_tuples!( (pub(crate)), 5, LexQuintuples, LexQuintuplesFromSingle, lex_quintuples, lex_quintuples_from_single, (T, T, T, T, T), [0, X, I, xs, x], [1, Y, J, ys, y], [2, Z, K, zs, z], [3, W, L, ws, w], [4, V, M, vs, v] ); lex_tuples!( (pub(crate)), 6, LexSextuples, LexSextuplesFromSingle, lex_sextuples, lex_sextuples_from_single, (T, T, T, T, T, T), [0, X, I, xs, x], [1, Y, J, ys, y], [2, Z, K, zs, z], [3, W, L, ws, w], [4, V, M, vs, v], [5, U, N, us, u] ); lex_tuples!( (pub(crate)), 7, LexSeptuples, LexSeptuplesFromSingle, lex_septuples, lex_septuples_from_single, (T, T, T, T, T, T, T), [0, X, I, xs, x], [1, Y, J, ys, y], [2, Z, K, zs, z], [3, W, L, ws, w], [4, V, M, vs, v], [5, U, N, us, u], [6, T, O, ts, t] ); lex_tuples!( (pub(crate)), 8, LexOctuples, LexOctuplesFromSingle, lex_octuples, lex_octuples_from_single, (T, T, T, T, T, T, T, T), [0, X, I, xs, x], [1, Y, J, ys, y], [2, Z, K, zs, z], [3, W, L, ws, w], [4, V, M, vs, v], [5, U, N, us, u], [6, T, O, ts, t], [7, S, P, ss, s] ); ``` Macro malachite::lex_unique_tuples === ``` macro_rules! lex_unique_tuples { ( ($($vis:tt)*), $struct: ident, $k: expr, $out_t: ty, $fn: ident, [$($i: expr),*] ) => { ... }; } ``` Defines lexicographic unique tuple generators. Malachite provides `lex_unique_pairs`, but you can also define `lex_unique_triples`, `lex_unique_quadruples`, and so on, in your program using the code below. The documentation for `lex_unique_pairs` describes these other functions as well. See usage examples here. ``` use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::vecs::exhaustive::{UniqueIndices, unique_indices}; lex_unique_tuples!( (pub(crate)), LexUniqueTriples, 3, (I::Item, I::Item, I::Item), lex_unique_triples, [0, 1, 2] ); lex_unique_tuples!( (pub(crate)), LexUniqueQuadruples, 4, (I::Item, I::Item, I::Item, I::Item), lex_unique_quadruples, [0, 1, 2, 3] ); lex_unique_tuples!( (pub(crate)), LexUniqueQuintuples, 5, (I::Item, I::Item, I::Item, I::Item, I::Item), lex_unique_quintuples, [0, 1, 2, 3, 4] ); lex_unique_tuples!( (pub(crate)), LexUniqueSextuples, 6, (I::Item, I::Item, I::Item, I::Item, I::Item, I::Item), lex_unique_sextuples, [0, 1, 2, 3, 4, 5] ); lex_unique_tuples!( (pub(crate)), LexUniqueSeptuples, 7, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), lex_unique_septuples, [0, 1, 2, 3, 4, 5, 6] ); lex_unique_tuples!( (pub(crate)), LexUniqueOctuples, 8, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), lex_unique_octuples, [0, 1, 2, 3, 4, 5, 6, 7] ); ``` Macro malachite::lex_vecs_fixed_length === ``` macro_rules! lex_vecs_fixed_length { ( ($($vis:tt)*), $exhaustive_struct: ident, $exhaustive_custom_fn: ident, $exhaustive_1_to_1_fn: ident, $([$i: expr, $it: ident, $xs: ident, $xs_outputs: ident]),* ) => { ... }; } ``` Defines lexicographic fixed-length `Vec` generators. 
Malachite provides `lex_vecs_length_2` and `lex_vecs_fixed_length_2_inputs`, but you can also define `lex_vecs_length_3`, `lex_vecs_length_4`, and so on, and `lex_vecs_fixed_length_3_inputs`, `lex_vecs_fixed_length_4_inputs`, and so on, in your program using the code below. The documentation for `lex_vecs_length_2` and `lex_vecs_fixed_length_2_inputs` describes these other functions as well. See usage examples here and here. ``` use malachite_base::iterators::iterator_cache::IteratorCache; use malachite_base::vecs::exhaustive::{validate_oi_map, LexFixedLengthVecsOutput}; lex_vecs_fixed_length!( (pub(crate)), LexFixedLengthVecs3Inputs, lex_vecs_fixed_length_3_inputs, lex_vecs_length_3, [0, I, xs, xs_outputs], [1, J, ys, ys_outputs], [2, K, zs, zs_outputs] ); lex_vecs_fixed_length!( (pub(crate)), LexFixedLengthVecs4Inputs, lex_vecs_fixed_length_4_inputs, lex_vecs_length_4, [0, I, xs, xs_outputs], [1, J, ys, ys_outputs], [2, K, zs, zs_outputs], [3, L, ws, ws_outputs] ); lex_vecs_fixed_length!( (pub(crate)), LexFixedLengthVecs5Inputs, lex_vecs_fixed_length_5_inputs, lex_vecs_length_5, [0, I, xs, xs_outputs], [1, J, ys, ys_outputs], [2, K, zs, zs_outputs], [3, L, ws, ws_outputs], [4, M, vs, vs_outputs] ); lex_vecs_fixed_length!( (pub(crate)), LexFixedLengthVecs6Inputs, lex_vecs_fixed_length_6_inputs, lex_vecs_length_6, [0, I, xs, xs_outputs], [1, J, ys, ys_outputs], [2, K, zs, zs_outputs], [3, L, ws, ws_outputs], [4, M, vs, vs_outputs], [5, N, us, us_outputs] ); lex_vecs_fixed_length!( (pub(crate)), LexFixedLengthVecs7Inputs, lex_vecs_fixed_length_7_inputs, lex_vecs_length_7, [0, I, xs, xs_outputs], [1, J, ys, ys_outputs], [2, K, zs, zs_outputs], [3, L, ws, ws_outputs], [4, M, vs, vs_outputs], [5, N, us, us_outputs], [6, O, ts, ts_outputs] ); lex_vecs_fixed_length!( (pub(crate)), LexFixedLengthVecs8Inputs, lex_vecs_fixed_length_8_inputs, lex_vecs_length_8, [0, I, xs, xs_outputs], [1, J, ys, ys_outputs], [2, K, zs, zs_outputs], [3, L, ws, ws_outputs], [4, M, vs, vs_outputs], [5, N, us, us_outputs], [6, O, ts, ts_outputs], [7, P, ss, ss_outputs] ); ``` Macro malachite::max === ``` macro_rules! max { ($first: expr $(,$next: expr)*) => { ... }; } ``` Computes the maximum of a list of expressions. The list must be nonempty, the expressions must all have the same type, and that type must implement `Ord`. Each expression is only evaluated once. Examples --- ``` use malachite_base::max; assert_eq!(max!(3), 3); assert_eq!(max!(3, 1), 3); assert_eq!(max!(3, 1, 4), 4); ``` Macro malachite::min === ``` macro_rules! min { ($first: expr $(,$next: expr)*) => { ... }; } ``` Computes the minimum of a list of expressions. The list must be nonempty, the expressions must all have the same type, and that type must implement `Ord`. Each expression is only evaluated once. Examples --- ``` use malachite_base::min; assert_eq!(min!(3), 3); assert_eq!(min!(3, 1), 1); assert_eq!(min!(3, 1, 4), 1); ``` Macro malachite::random_custom_tuples === ``` macro_rules! random_custom_tuples { ( ($($vis:tt)*), $random_struct: ident, $out_t: ty, $random_fn: ident, $([$t: ident, $it: ident, $xs: ident, $xs_gen: ident, $([$x: ident, $x_ord: ident]),*]),* ) => { ... }; } ``` Defines custom random tuple generators. You can define custom tuple generators like `random_triples_xyx` in your program using the code below. See usage examples here. 
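To show the "xyx" output shape that a generator such as `random_triples_xyx` produces, here is a std-only sketch in which plain counters stand in for the random element sources; the sources and their values are purely illustrative.

```
// Std-only sketch of the "xyx" shape: positions 0 and 2 draw from the X
// source, position 1 from the Y source. The real macro wires up seeded
// random generators instead of these deterministic stand-ins.
fn main() {
    let mut xs = (0u32..).map(|n| n * 10); // stand-in X source
    let mut ys = 'a'..='z';                // stand-in Y source
    let triples: Vec<(u32, char, u32)> = (0..3)
        .map(|_| {
            // Two items are pulled from `xs` and one from `ys` per triple.
            (xs.next().unwrap(), ys.next().unwrap(), xs.next().unwrap())
        })
        .collect();
    assert_eq!(triples, [(0, 'a', 10), (20, 'b', 30), (40, 'c', 50)]);
}
```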
``` use malachite_base::random::Seed; random_custom_tuples!( (pub(crate)), RandomTriplesXXY, (X, X, Y), random_triples_xxy, [X, I, xs, xs_gen, [x_0, x_0], [x_1, x_1]], [Y, J, ys, ys_gen, [y_2, y_2]] ); random_custom_tuples!( (pub(crate)), RandomTriplesXYX, (X, Y, X), random_triples_xyx, [X, I, xs, xs_gen, [x_0, x_0], [x_2, y_1]], [Y, J, ys, ys_gen, [y_1, x_2]] ); random_custom_tuples!( (pub(crate)), RandomTriplesXYY, (X, Y, Y), random_triples_xyy, [X, I, xs, xs_gen, [x_0, x_0]], [Y, J, ys, ys_gen, [y_1, y_1], [y_2, y_2]] ); random_custom_tuples!( (pub(crate)), RandomQuadruplesXXXY, (X, X, X, Y), random_quadruples_xxxy, [X, I, xs, xs_gen, [x_0, x_0], [x_1, x_1], [x_2, x_2]], [Y, J, ys, ys_gen, [y_3, y_3]] ); random_custom_tuples!( (pub(crate)), RandomQuadruplesXXYX, (X, X, Y, X), random_quadruples_xxyx, [X, I, xs, xs_gen, [x_0, x_0], [x_1, x_1], [x_3, y_2]], [Y, J, ys, ys_gen, [y_2, x_3]] ); random_custom_tuples!( (pub(crate)), RandomQuadruplesXXYZ, (X, X, Y, Z), random_quadruples_xxyz, [X, I, xs, xs_gen, [x_0, x_0], [x_1, x_1]], [Y, J, ys, ys_gen, [y_2, y_2]], [Z, K, zs, zs_gen, [z_3, z_3]] ); random_custom_tuples!( (pub(crate)), RandomQuadruplesXYXZ, (X, Y, X, Z), random_quadruples_xyxz, [X, I, xs, xs_gen, [x_0, x_0], [x_2, y_1]], [Y, J, ys, ys_gen, [y_1, x_2]], [Z, K, zs, zs_gen, [z_3, z_3]] ); random_custom_tuples!( (pub(crate)), RandomQuadruplesXYYX, (X, Y, Y, X), random_quadruples_xyyx, [X, I, xs, xs_gen, [x_0, x_0], [x_3, y_1]], [Y, J, ys, ys_gen, [y_1, y_2], [y_2, x_3]] ); random_custom_tuples!( (pub(crate)), RandomQuadruplesXYYZ, (X, Y, Y, Z), random_quadruples_xyyz, [X, I, xs, xs_gen, [x_0, x_0]], [Y, J, ys, ys_gen, [y_1, y_1], [y_2, y_2]], [Z, K, zs, zs_gen, [z_3, z_3]] ); random_custom_tuples!( (pub(crate)), RandomQuadruplesXYZZ, (X, Y, Z, Z), random_quadruples_xyzz, [X, I, xs, xs_gen, [x_0, x_0]], [Y, J, ys, ys_gen, [y_1, y_1]], [Z, K, zs, zs_gen, [z_2, z_2], [z_3, z_3]] ); random_custom_tuples!( (pub(crate)), RandomQuintuplesXYYYZ, (X, Y, Y, Y, Z), random_quintuples_xyyyz, [X, I, xs, xs_gen, [x_0, x_0]], [Y, J, ys, ys_gen, [y_1, y_1], [y_2, y_2], [y_3, y_3]], [Z, K, zs, zs_gen, [z_4, z_4]] ); ``` Macro malachite::random_ordered_unique_tuples === ``` macro_rules! random_ordered_unique_tuples { ( ($($vis:tt)*), $struct: ident, $k: expr, $out_t: ty, $fn: ident, [$($i: expr),*] ) => { ... }; } ``` Defines random ordered unique tuple generators. Malachite provides `random_ordered_unique_pairs`, but you can also define `random_ordered_unique_triples`, `random_ordered_unique_quadruples`, and so on, in your program using the code below. See usage examples here. 
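A std-only sketch of the "ordered unique" idea (distinct components, emitted in ascending order), mirroring the `random_b_tree_sets_fixed_length` building block that the definition below imports; the deterministic source here is only a stand-in for a random one.

```
use std::collections::BTreeSet;

// Keep drawing candidates until 3 distinct values have been seen, then emit
// them in ascending order; a BTreeSet gives both properties at once.
fn ordered_unique_triple<I: Iterator<Item = u32>>(src: &mut I) -> (u32, u32, u32) {
    let mut set = BTreeSet::new();
    while set.len() < 3 {
        set.insert(src.next().expect("source ran dry"));
    }
    let mut it = set.into_iter();
    (it.next().unwrap(), it.next().unwrap(), it.next().unwrap())
}

fn main() {
    // A deterministic stand-in for a random source, with repeats.
    let mut src = [5u32, 2, 5, 9, 2, 1].iter().copied();
    assert_eq!(ordered_unique_triple(&mut src), (2, 5, 9));
}
```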
``` use malachite_base::sets::random::{ random_b_tree_sets_fixed_length, RandomBTreeSetsFixedLength }; random_ordered_unique_tuples!( (pub(crate)), RandomOrderedUniqueTriples, 3, (I::Item, I::Item, I::Item), random_ordered_unique_triples, [0, 1, 2] ); random_ordered_unique_tuples!( (pub(crate)), RandomOrderedUniqueQuadruples, 4, (I::Item, I::Item, I::Item, I::Item), random_ordered_unique_quadruples, [0, 1, 2, 3] ); random_ordered_unique_tuples!( (pub(crate)), RandomOrderedUniqueQuintuples, 5, (I::Item, I::Item, I::Item, I::Item, I::Item), random_ordered_unique_quintuples, [0, 1, 2, 3, 4] ); random_ordered_unique_tuples!( (pub(crate)), RandomOrderedUniqueSextuples, 6, (I::Item, I::Item, I::Item, I::Item, I::Item, I::Item), random_ordered_unique_sextuples, [0, 1, 2, 3, 4, 5] ); random_ordered_unique_tuples!( (pub(crate)), RandomOrderedUniqueSeptuples, 7, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), random_ordered_unique_septuples, [0, 1, 2, 3, 4, 5, 6] ); random_ordered_unique_tuples!( (pub(crate)), RandomOrderedUniqueOctuples, 8, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), random_ordered_unique_octuples, [0, 1, 2, 3, 4, 5, 6, 7] ); ``` Macro malachite::random_tuples === ``` macro_rules! random_tuples { ( ($($vis:tt)*), $random_struct: ident, $random_struct_from_single: ident, $random_fn: ident, $random_fn_from_single: ident, $single_out: tt, $([$i: expr, $t: ident, $it: ident, $xs: ident, $xs_gen:ident]),* ) => { ... }; } ``` Defines random tuple generators. Malachite provides `random_pairs` and `random_pairs_from_single`, but you can also define `random_triples`, `random_quadruples`, and so on, and `random_triples_from_single`, `random_quadruples_from_single`, and so on, in your program using the code below. The documentation for `random_pairs` and `random_pairs_from_single` describes these other functions as well. See usage examples here and here. 
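A hedged usage sketch of the pair generators Malachite already provides. The module paths (`malachite_base::random::EXAMPLE_SEED`, `malachite_base::num::random::random_primitive_ints`, `malachite_base::tuples::random::random_pairs_from_single`) are assumptions based on recent `malachite_base` releases, and no specific output values are asserted since they depend on the seed.

```
// Hedged sketch: pair up consecutive outputs of a single seeded u8 generator.
use malachite_base::num::random::random_primitive_ints;
use malachite_base::random::EXAMPLE_SEED;
use malachite_base::tuples::random::random_pairs_from_single;

fn main() {
    let pairs: Vec<(u8, u8)> =
        random_pairs_from_single(random_primitive_ints::<u8>(EXAMPLE_SEED))
            .take(4)
            .collect();
    // The generator is infinite, so only a prefix is taken; the values are
    // seed-dependent and therefore just printed rather than asserted.
    assert_eq!(pairs.len(), 4);
    println!("{:?}", pairs);
}
```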
``` use malachite_base::num::random::{random_unsigned_range, RandomUnsignedRange}; use malachite_base::random::Seed; use malachite_base::tuples::random::next_helper; random_tuples!( (pub(crate)), RandomTriples, RandomTriplesFromSingle, random_triples, random_triples_from_single, (I::Item, I::Item, I::Item), [0, X, I, xs, xs_gen], [1, Y, J, ys, ys_gen], [2, Z, K, zs, zs_gen] ); random_tuples!( (pub(crate)), RandomQuadruples, RandomQuadruplesFromSingle, random_quadruples, random_quadruples_from_single, (I::Item, I::Item, I::Item, I::Item), [0, X, I, xs, xs_gen], [1, Y, J, ys, ys_gen], [2, Z, K, zs, zs_gen], [3, W, L, ws, ws_gen] ); random_tuples!( (pub(crate)), RandomQuintuples, RandomQuintuplesFromSingle, random_quintuples, random_quintuples_from_single, (I::Item, I::Item, I::Item, I::Item, I::Item), [0, X, I, xs, xs_gen], [1, Y, J, ys, ys_gen], [2, Z, K, zs, zs_gen], [3, W, L, ws, ws_gen], [4, V, M, vs, vs_gen] ); random_tuples!( (pub(crate)), RandomSextuples, RandomSextuplesFromSingle, random_sextuples, random_sextuples_from_single, (I::Item, I::Item, I::Item, I::Item, I::Item, I::Item), [0, X, I, xs, xs_gen], [1, Y, J, ys, ys_gen], [2, Z, K, zs, zs_gen], [3, W, L, ws, ws_gen], [4, V, M, vs, vs_gen], [5, U, N, us, us_gen] ); random_tuples!( (pub(crate)), RandomSeptuples, RandomSeptuplesFromSingle, random_septuples, random_septuples_from_single, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), [0, X, I, xs, xs_gen], [1, Y, J, ys, ys_gen], [2, Z, K, zs, zs_gen], [3, W, L, ws, ws_gen], [4, V, M, vs, vs_gen], [5, U, N, us, us_gen], [6, T, O, ts, ts_gen] ); random_tuples!( (pub(crate)), RandomOctuples, RandomOctuplesFromSingle, random_octuples, random_octuples_from_single, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), [0, X, I, xs, xs_gen], [1, Y, J, ys, ys_gen], [2, Z, K, zs, zs_gen], [3, W, L, ws, ws_gen], [4, V, M, vs, vs_gen], [5, U, N, us, us_gen], [6, T, O, ts, ts_gen], [7, S, P, ss, ss_gen] ); ``` Macro malachite::random_unions === ``` macro_rules! random_unions { ( ($($vis:tt)*), $union: ident, $random_struct: ident, $random_fn: ident, $n: expr, $([$i: expr, $t: ident, $it: ident, $variant: ident, $xs: ident, $xs_gen: ident]),* ) => { ... }; } ``` Defines random union generators. Malachite provides `random_union2s`, but you can also define `random_union3s`, `random_union4s`, and so on, in your program using the code below. The documentation for `random_union2s` describes these other functions as well. See usage examples here. 
``` use malachite_base::num::random::{random_unsigned_range, RandomUnsignedRange}; use malachite_base::random::Seed; use malachite_base::unions::UnionFromStrError; use std::fmt::{self, Display, Formatter}; use std::str::FromStr; union_struct!( (pub(crate)), Union3, Union3<T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c] ); union_struct!( (pub(crate)), Union4, Union4<T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d] ); union_struct!( (pub(crate)), Union5, Union5<T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e] ); union_struct!( (pub(crate)), Union6, Union6<T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f] ); union_struct!( (pub(crate)), Union7, Union7<T, T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f], [G, G, 'G', g] ); union_struct!( (pub(crate)), Union8, Union8<T, T, T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f], [G, G, 'G', g], [H, H, 'H', h] ); random_unions!( (pub(crate)), Union3, RandomUnion3s, random_union3s, 3, [0, X, I, A, xs, xs_gen], [1, Y, J, B, ys, ys_gen], [2, Z, K, C, zs, zs_gen] ); random_unions!( (pub(crate)), Union4, RandomUnion4s, random_union4s, 4, [0, X, I, A, xs, xs_gen], [1, Y, J, B, ys, ys_gen], [2, Z, K, C, zs, zs_gen], [3, W, L, D, ws, ws_gen] ); random_unions!( (pub(crate)), Union5, RandomUnion5s, random_union5s, 5, [0, X, I, A, xs, xs_gen], [1, Y, J, B, ys, ys_gen], [2, Z, K, C, zs, zs_gen], [3, W, L, D, ws, ws_gen], [4, V, M, E, vs, vs_gen] ); random_unions!( (pub(crate)), Union6, RandomUnion6s, random_union6s, 6, [0, X, I, A, xs, xs_gen], [1, Y, J, B, ys, ys_gen], [2, Z, K, C, zs, zs_gen], [3, W, L, D, ws, ws_gen], [4, V, M, E, vs, vs_gen], [5, U, N, F, us, us_gen] ); random_unions!( (pub(crate)), Union7, RandomUnion7s, random_union7s, 7, [0, X, I, A, xs, xs_gen], [1, Y, J, B, ys, ys_gen], [2, Z, K, C, zs, zs_gen], [3, W, L, D, ws, ws_gen], [4, V, M, E, vs, vs_gen], [5, U, N, F, us, us_gen], [6, T, O, G, ts, ts_gen] ); random_unions!( (pub(crate)), Union8, RandomUnion8s, random_union8s, 8, [0, X, I, A, xs, xs_gen], [1, Y, J, B, ys, ys_gen], [2, Z, K, C, zs, zs_gen], [3, W, L, D, ws, ws_gen], [4, V, M, E, vs, vs_gen], [5, U, N, F, us, us_gen], [6, T, O, G, ts, ts_gen], [7, S, P, H, ss, ss_gen] ); ``` Macro malachite::random_unique_tuples === ``` macro_rules! random_unique_tuples { ( ($($vis:tt)*), $struct: ident, $k: expr, $out_t: ty, $fn: ident, [$($i: tt),*] ) => { ... }; } ``` Defines random unique tuple generators. Malachite provides `random_unique_pairs`, but you can also define `random_unique_triples`, `random_unique_quadruples`, and so on, in your program using the code below. See usage examples here. 
``` use std::collections::HashMap; use std::hash::Hash; random_unique_tuples!( (pub(crate)), RandomOrderedUniqueTriples, 3, (I::Item, I::Item, I::Item), random_unique_triples, [0, 1, 2] ); random_unique_tuples!( (pub(crate)), RandomOrderedUniqueQuadruples, 4, (I::Item, I::Item, I::Item, I::Item), random_unique_quadruples, [0, 1, 2, 3] ); random_unique_tuples!( (pub(crate)), RandomOrderedUniqueQuintuples, 5, (I::Item, I::Item, I::Item, I::Item, I::Item), random_unique_quintuples, [0, 1, 2, 3, 4] ); random_unique_tuples!( (pub(crate)), RandomOrderedUniqueSextuples, 6, (I::Item, I::Item, I::Item, I::Item, I::Item, I::Item), random_unique_sextuples, [0, 1, 2, 3, 4, 5] ); random_unique_tuples!( (pub(crate)), RandomOrderedUniqueSeptuples, 7, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), random_unique_septuples, [0, 1, 2, 3, 4, 5, 6] ); random_unique_tuples!( (pub(crate)), RandomOrderedUniqueOctuples, 8, ( I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item, I::Item ), random_unique_octuples, [0, 1, 2, 3, 4, 5, 6, 7] ); ``` Macro malachite::random_vecs_fixed_length === ``` macro_rules! random_vecs_fixed_length { ( ($($vis:tt)*), $random_struct: ident, $random_fn: ident, $random_1_to_1_fn: ident, $([$i: expr, $it: ident, $xs: ident, $xs_gen: ident]),* ) => { ... }; } ``` Defines random fixed-length `Vec` generators. Malachite provides `random_vecs_length_2` and `random_vecs_fixed_length_2_inputs`, but you can also define `random_vecs_length_3`, `random_vecs_length_4`, and so on, and `random_vecs_fixed_length_3_inputs`, `random_vecs_fixed_length_4_inputs`, and so on, in your program using the code below. The documentation for `random_vecs_length_2` and `random_vecs_fixed_length_2_inputs` describes these other functions as well. See usage examples here and here. ``` use malachite_base::random::Seed; use malachite_base::vecs::exhaustive::validate_oi_map; random_vecs_fixed_length!( (pub(crate)), RandomFixedLengthVecs3Inputs, random_vecs_fixed_length_3_inputs, random_vecs_length_3, [0, I, xs, xs_gen], [1, J, ys, ys_gen], [2, K, zs, zs_gen] ); random_vecs_fixed_length!( (pub(crate)), RandomFixedLengthVecs4Inputs, random_vecs_fixed_length_4_inputs, random_vecs_length_4, [0, I, xs, xs_gen], [1, J, ys, ys_gen], [2, K, zs, zs_gen], [3, L, ws, ws_gen] ); random_vecs_fixed_length!( (pub(crate)), RandomFixedLengthVecs5Inputs, random_vecs_fixed_length_5_inputs, random_vecs_length_5, [0, I, xs, xs_gen], [1, J, ys, ys_gen], [2, K, zs, zs_gen], [3, L, ws, ws_gen], [4, M, vs, vs_gen] ); random_vecs_fixed_length!( (pub(crate)), RandomFixedLengthVecs6Inputs, random_vecs_fixed_length_6_inputs, random_vecs_length_6, [0, I, xs, xs_gen], [1, J, ys, ys_gen], [2, K, zs, zs_gen], [3, L, ws, ws_gen], [4, M, vs, vs_gen], [5, N, us, us_gen] ); random_vecs_fixed_length!( (pub(crate)), RandomFixedLengthVecs7Inputs, random_vecs_fixed_length_7_inputs, random_vecs_length_7, [0, I, xs, xs_gen], [1, J, ys, ys_gen], [2, K, zs, zs_gen], [3, L, ws, ws_gen], [4, M, vs, vs_gen], [5, N, us, us_gen], [6, O, ts, ts_gen] ); random_vecs_fixed_length!( (pub(crate)), RandomFixedLengthVecs8Inputs, random_vecs_fixed_length_8_inputs, random_vecs_length_8, [0, I, xs, xs_gen], [1, J, ys, ys_gen], [2, K, zs, zs_gen], [3, L, ws, ws_gen], [4, M, vs, vs_gen], [5, N, us, us_gen], [6, O, ts, ts_gen], [7, P, ss, ss_gen] ); ``` Macro malachite::split_into_chunks === ``` macro_rules! split_into_chunks { ($xs: expr, $n: expr, [$($xs_i: ident),*], $xs_last: ident) => { ... 
}; } ``` Splits an immutable slice into adjacent immutable chunks. An input slice $\mathbf{x}$, a chunk length $n$, and $k + 1$ output slice names $\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_k$ are given. The last output slice name, $\mathbf{x}_k$, is specified via a separate argument called `xs_last`. The first $k$ output slice names are assigned adjacent length-$n$ chunks from $\mathbf{x}$. If $|\mathbf{x}| < kn$, the generated code panics. The last slice, $\mathbf{x}_k$, which is assigned to `xs_last`, has length $|\mathbf{x}| - kn$. This length may be greater than $n$. Worst-case complexity --- $T(k) = O(k)$ $M(k) = O(1)$ where $T$ is time, $M$ is additional memory, and $k$ is the number of output slice names `xs_i`. Examples --- ``` #[macro_use] extern crate malachite_base; fn main() { let xs = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]; split_into_chunks!(xs, 3, [xs_1, xs_2, xs_3], xs_4); assert_eq!(xs_1, &[0, 1, 2]); assert_eq!(xs_2, &[3, 4, 5]); assert_eq!(xs_3, &[6, 7, 8]); assert_eq!(xs_4, &[9, 10, 11, 12]); } ``` Macro malachite::split_into_chunks_mut === ``` macro_rules! split_into_chunks_mut { ($xs: expr, $n: expr, [$($xs_i: ident),*], $xs_last: ident) => { ... }; } ``` Splits a mutable slice into adjacent mutable chunks. An input slice $\mathbf{x}$, a chunk length $n$, and $k + 1$ output slice names $\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_k$ are given. The last output slice name, $\mathbf{x}_k$, is specified via a separate argument called `xs_last`. The first $k$ output slice names are assigned adjacent length-$n$ chunks from $\mathbf{x}$. If $|\mathbf{x}| < kn$, the generated code panics. The last slice, $\mathbf{x}_k$, which is assigned to `xs_last`, has length $|\mathbf{x}| - kn$. This length may be greater than $n$. Worst-case complexity --- $T(k) = O(k)$ $M(k) = O(1)$ where $T$ is time, $M$ is additional memory, and $k$ is the number of output slice names `xs_i`. Examples --- ``` #[macro_use] extern crate malachite_base; use malachite_base::slices::slice_set_zero; fn main() { let xs = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]; split_into_chunks_mut!(xs, 3, [xs_1, xs_2, xs_3], xs_4); assert_eq!(xs_1, &[0, 1, 2]); assert_eq!(xs_2, &[3, 4, 5]); assert_eq!(xs_3, &[6, 7, 8]); assert_eq!(xs_4, &[9, 10, 11, 12]); slice_set_zero(xs_2); assert_eq!(xs, &[0, 1, 2, 0, 0, 0, 6, 7, 8, 9, 10, 11, 12]); } ``` Macro malachite::union_struct === ``` macro_rules! union_struct { ( ($($vis:tt)*), $name: ident, $single: ty, $([$t: ident, $cons: ident, $c: expr, $x: ident]),* ) => { ... }; } ``` Defines unions. Malachite provides `Union2`, but you can also define `Union3`, `Union4`, and so on, in your program using the code below. The documentation for `Union2` and describes these other `enum`s as well. 
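Before the macro definition below, a hedged sketch of working with the provided `Union2` directly; the variant names `A` and `B` are assumed to match the pattern shown in the `union_struct!` invocations, and the `Display`/`FromStr` support mentioned in the definition is left to comments rather than exercised here.

```
// Hedged sketch: `Union2` is the two-variant sum type malachite_base ships;
// the `Union3`, `Union4`, ... generated by `union_struct!` have the same
// shape, with variants A, B, C, ... plus Display/FromStr implementations.
use malachite_base::unions::Union2;

fn main() {
    let values: Vec<Union2<u32, bool>> = vec![Union2::A(5), Union2::B(true)];
    for v in &values {
        match v {
            Union2::A(x) => println!("u32 variant: {x}"),
            Union2::B(b) => println!("bool variant: {b}"),
        }
    }
}
```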
``` use malachite_base::unions::UnionFromStrError; use std::fmt::{self, Display, Formatter}; use std::str::FromStr; union_struct!( (pub(crate)), Union3, Union3<T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c] ); union_struct!( (pub(crate)), Union4, Union4<T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d] ); union_struct!( (pub(crate)), Union5, Union5<T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e] ); union_struct!( (pub(crate)), Union6, Union6<T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f] ); union_struct!( (pub(crate)), Union7, Union7<T, T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f], [G, G, 'G', g] ); union_struct!( (pub(crate)), Union8, Union8<T, T, T, T, T, T, T, T>, [A, A, 'A', a], [B, B, 'B', b], [C, C, 'C', c], [D, D, 'D', d], [E, E, 'E', e], [F, F, 'F', f], [G, G, 'G', g], [H, H, 'H', h] ); } ``` Struct malachite::Integer === ``` pub struct Integer { /* private fields */ } ``` An integer. Any `Integer` whose absolute value is small enough to fit into a `Limb` is represented inline. Only integers outside this range incur the costs of heap-allocation. Implementations --- ### impl Integer #### pub const fn unsigned_abs_ref(&self) -> &Natural Finds the absolute value of an `Integer`, taking the `Integer` by reference and returning a reference to the internal `Natural` absolute value. $$ f(x) = |x|. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(*Integer::ZERO.unsigned_abs_ref(), 0); assert_eq!(*Integer::from(123).unsigned_abs_ref(), 123); assert_eq!(*Integer::from(-123).unsigned_abs_ref(), 123); ``` #### pub fn mutate_unsigned_abs<F, T>(&mut self, f: F) -> Twhere F: FnOnce(&mut Natural) -> T, Mutates the absolute value of an `Integer` using a provided closure, and then returns whatever the closure returns. This function is similar to the `unsigned_abs_ref` function, which returns a reference to the absolute value. A function that returns a *mutable* reference would be too dangerous, as it could leave the `Integer` in an invalid state (specifically, with a negative sign but a zero absolute value). So rather than returning a mutable reference, this function allows mutation of the absolute value using a closure. After the closure executes, this function ensures that the `Integer` remains valid. There is only constant time and memory overhead on top of the time and memory used by the closure. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignMod; use malachite_base::num::basic::traits::Two; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; let mut n = Integer::from(-123); let remainder = n.mutate_unsigned_abs(|x| x.div_assign_mod(Natural::TWO)); assert_eq!(n, -61); assert_eq!(remainder, 1); let mut n = Integer::from(-123); n.mutate_unsigned_abs(|x| *x >>= 10); assert_eq!(n, 0); ``` ### impl Integer #### pub fn from_sign_and_abs(sign: bool, abs: Natural) -> Integer Converts a sign and a `Natural` to an `Integer`, taking the `Natural` by value. The `Natural` becomes the `Integer`’s absolute value, and the sign indicates whether the `Integer` should be non-negative. If the `Natural` is zero, then the `Integer` will be non-negative regardless of the sign. 
##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from_sign_and_abs(true, Natural::from(123u32)), 123); assert_eq!(Integer::from_sign_and_abs(false, Natural::from(123u32)), -123); ``` #### pub fn from_sign_and_abs_ref(sign: bool, abs: &Natural) -> Integer Converts a sign and an `Natural` to an `Integer`, taking the `Natural` by reference. The `Natural` becomes the `Integer`’s absolute value, and the sign indicates whether the `Integer` should be non-negative. If the `Natural` is zero, then the `Integer` will be non-negative regardless of the sign. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, $n$ is `abs.significant_bits()`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from_sign_and_abs_ref(true, &Natural::from(123u32)), 123); assert_eq!(Integer::from_sign_and_abs_ref(false, &Natural::from(123u32)), -123); ``` ### impl Integer #### pub const fn const_from_unsigned(x: u64) -> Integer Converts a `Limb` to an `Integer`. This function is const, so it may be used to define constants. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; const TEN: Integer = Integer::const_from_unsigned(10); assert_eq!(TEN, 10); ``` #### pub const fn const_from_signed(x: i64) -> Integer Converts a `SignedLimb` to an `Integer`. This function is const, so it may be used to define constants. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; const TEN: Integer = Integer::const_from_signed(10); assert_eq!(TEN, 10); const NEGATIVE_TEN: Integer = Integer::const_from_signed(-10); assert_eq!(NEGATIVE_TEN, -10); ``` ### impl Integer #### pub fn from_twos_complement_limbs_asc(xs: &[u64]) -> Integer Converts a slice of limbs to an `Integer`, in ascending order, so that less significant limbs have lower indices in the input slice. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is non-negative, and if the bit is one it is negative. If the slice is empty, zero is returned. This function borrows a slice. If taking ownership of a `Vec` is possible instead, `from_owned_twos_complement_limbs_asc` is more efficient. This function is more efficient than `from_twos_complement_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Integer::from_twos_complement_limbs_asc(&[]), 0); assert_eq!(Integer::from_twos_complement_limbs_asc(&[123]), 123); assert_eq!(Integer::from_twos_complement_limbs_asc(&[4294967173]), -123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from_twos_complement_limbs_asc(&[3567587328, 232]), 1000000000000u64 ); assert_eq!( Integer::from_twos_complement_limbs_asc(&[727379968, 4294967063]), -1000000000000i64 ); } ``` #### pub fn from_twos_complement_limbs_desc(xs: &[u64]) -> Integer Converts a slice of limbs to an `Integer`, in descending order, so that less significant limbs have higher indices in the input slice. 
The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is non-negative, and if the bit is one it is negative. If the slice is empty, zero is returned. This function borrows a slice. If taking ownership of a `Vec` is possible instead, `from_owned_twos_complement_limbs_desc` is more efficient. This function is less efficient than `from_twos_complement_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Integer::from_twos_complement_limbs_desc(&[]), 0); assert_eq!(Integer::from_twos_complement_limbs_desc(&[123]), 123); assert_eq!(Integer::from_twos_complement_limbs_desc(&[4294967173]), -123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from_twos_complement_limbs_desc(&[232, 3567587328]), 1000000000000u64 ); assert_eq!( Integer::from_twos_complement_limbs_desc(&[4294967063, 727379968]), -1000000000000i64 ); } ``` #### pub fn from_owned_twos_complement_limbs_asc(xs: Vec<u64, Global>) -> Integer Converts a slice of limbs to an `Integer`, in ascending order, so that less significant limbs have lower indices in the input slice. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is non-negative, and if the bit is one it is negative. If the slice is empty, zero is returned. This function takes ownership of a `Vec`. If it’s necessary to borrow a slice instead, use `from_twos_complement_limbs_asc` This function is more efficient than `from_owned_twos_complement_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Integer::from_owned_twos_complement_limbs_asc(vec![]), 0); assert_eq!(Integer::from_owned_twos_complement_limbs_asc(vec![123]), 123); assert_eq!(Integer::from_owned_twos_complement_limbs_asc(vec![4294967173]), -123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from_owned_twos_complement_limbs_asc(vec![3567587328, 232]), 1000000000000i64 ); assert_eq!( Integer::from_owned_twos_complement_limbs_asc(vec![727379968, 4294967063]), -1000000000000i64 ); } ``` #### pub fn from_owned_twos_complement_limbs_desc(xs: Vec<u64, Global>) -> Integer Converts a slice of limbs to an `Integer`, in descending order, so that less significant limbs have higher indices in the input slice. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is non-negative, and if the bit is one it is negative. If the slice is empty, zero is returned. This function takes ownership of a `Vec`. If it’s necessary to borrow a slice instead, use `from_twos_complement_limbs_desc`. This function is less efficient than `from_owned_twos_complement_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. 
##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Integer::from_owned_twos_complement_limbs_desc(vec![]), 0); assert_eq!(Integer::from_owned_twos_complement_limbs_desc(vec![123]), 123); assert_eq!(Integer::from_owned_twos_complement_limbs_desc(vec![4294967173]), -123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from_owned_twos_complement_limbs_desc(vec![232, 3567587328]), 1000000000000i64 ); assert_eq!( Integer::from_owned_twos_complement_limbs_desc(vec![4294967063, 727379968]), -1000000000000i64 ); } ``` ### impl Integer #### pub fn to_twos_complement_limbs_asc(&self) -> Vec<u64, Global> Returns the limbs of an `Integer`, in ascending order, so that less significant limbs have lower indices in the output vector. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is positive, and if the bit is one it is negative. There are no trailing zero limbs if the `Integer` is positive or trailing `Limb::MAX` limbs if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no limbs. This function borrows `self`. If taking ownership of `self` is possible, `into_twos_complement_limbs_asc` is more efficient. This function is more efficient than `to_twos_complement_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.to_twos_complement_limbs_asc().is_empty()); assert_eq!(Integer::from(123).to_twos_complement_limbs_asc(), &[123]); assert_eq!(Integer::from(-123).to_twos_complement_limbs_asc(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).to_twos_complement_limbs_asc(), &[3567587328, 232] ); assert_eq!( (-Integer::from(10u32).pow(12)).to_twos_complement_limbs_asc(), &[727379968, 4294967063] ); } ``` #### pub fn to_twos_complement_limbs_desc(&self) -> Vec<u64, Global> Returns the limbs of an `Integer`, in descending order, so that less significant limbs have higher indices in the output vector. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is positive, and if the bit is one it is negative. There are no leading zero limbs if the `Integer` is non-negative or leading `Limb::MAX` limbs if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no limbs. This is similar to how `BigInteger`s in Java are represented. This function borrows `self`. If taking ownership of `self` is possible, `into_twos_complement_limbs_desc` is more efficient. This function is less efficient than `to_twos_complement_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.to_twos_complement_limbs_desc().is_empty()); assert_eq!(Integer::from(123).to_twos_complement_limbs_desc(), &[123]); assert_eq!(Integer::from(-123).to_twos_complement_limbs_desc(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).to_twos_complement_limbs_desc(), &[232, 3567587328] ); assert_eq!( (-Integer::from(10u32).pow(12)).to_twos_complement_limbs_desc(), &[4294967063, 727379968] ); } ``` #### pub fn into_twos_complement_limbs_asc(self) -> Vec<u64, Global> Returns the limbs of an `Integer`, in ascending order, so that less significant limbs have lower indices in the output vector. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is positive, and if the bit is one it is negative. There are no trailing zero limbs if the `Integer` is positive or trailing `Limb::MAX` limbs if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no limbs. This function takes ownership of `self`. If it’s necessary to borrow `self` instead, use `to_twos_complement_limbs_asc`. This function is more efficient than `into_twos_complement_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.into_twos_complement_limbs_asc().is_empty()); assert_eq!(Integer::from(123).into_twos_complement_limbs_asc(), &[123]); assert_eq!(Integer::from(-123).into_twos_complement_limbs_asc(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).into_twos_complement_limbs_asc(), &[3567587328, 232] ); assert_eq!( (-Integer::from(10u32).pow(12)).into_twos_complement_limbs_asc(), &[727379968, 4294967063] ); } ``` #### pub fn into_twos_complement_limbs_desc(self) -> Vec<u64, Global> Returns the limbs of an `Integer`, in descending order, so that less significant limbs have higher indices in the output vector. The limbs are in two’s complement, and the most significant bit of the limbs indicates the sign; if the bit is zero, the `Integer` is positive, and if the bit is one it is negative. There are no leading zero limbs if the `Integer` is non-negative or leading `Limb::MAX` limbs if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no limbs. This is similar to how `BigInteger`s in Java are represented. This function takes ownership of `self`. If it’s necessary to borrow `self` instead, use `to_twos_complement_limbs_desc`. This function is less efficient than `into_twos_complement_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.into_twos_complement_limbs_desc().is_empty()); assert_eq!(Integer::from(123).into_twos_complement_limbs_desc(), &[123]); assert_eq!(Integer::from(-123).into_twos_complement_limbs_desc(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).into_twos_complement_limbs_desc(), &[232, 3567587328] ); assert_eq!( (-Integer::from(10u32).pow(12)).into_twos_complement_limbs_desc(), &[4294967063, 727379968] ); } ``` #### pub fn twos_complement_limbs(&self) -> TwosComplementLimbIterator<'_> Returns a double-ended iterator over the twos-complement limbs of an `Integer`. The forward order is ascending, so that less significant limbs appear first. There may be a most-significant sign-extension limb. If it’s necessary to get a `Vec` of all the twos-complement limbs, consider using `to_twos_complement_limbs_asc`, `to_twos_complement_limbs_desc`, `into_twos_complement_limbs_asc`, or `into_twos_complement_limbs_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use itertools::Itertools; use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Integer::ZERO.twos_complement_limbs().next().is_none()); assert_eq!(Integer::from(123).twos_complement_limbs().collect_vec(), &[123]); assert_eq!(Integer::from(-123).twos_complement_limbs().collect_vec(), &[4294967173]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).twos_complement_limbs().collect_vec(), &[3567587328, 232] ); // Sign-extension for a non-negative `Integer` assert_eq!( Integer::from(4294967295i64).twos_complement_limbs().collect_vec(), &[4294967295, 0] ); assert_eq!( (-Integer::from(10u32).pow(12)).twos_complement_limbs().collect_vec(), &[727379968, 4294967063] ); // Sign-extension for a negative `Integer` assert_eq!( (-Integer::from(4294967295i64)).twos_complement_limbs().collect_vec(), &[1, 4294967295] ); assert!(Integer::ZERO.twos_complement_limbs().rev().next().is_none()); assert_eq!(Integer::from(123).twos_complement_limbs().rev().collect_vec(), &[123]); assert_eq!( Integer::from(-123).twos_complement_limbs().rev().collect_vec(), &[4294967173] ); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Integer::from(10u32).pow(12).twos_complement_limbs().rev().collect_vec(), &[232, 3567587328] ); // Sign-extension for a non-negative `Integer` assert_eq!( Integer::from(4294967295i64).twos_complement_limbs().rev().collect_vec(), &[0, 4294967295] ); assert_eq!( (-Integer::from(10u32).pow(12)).twos_complement_limbs().rev().collect_vec(), &[4294967063, 727379968] ); // Sign-extension for a negative `Integer` assert_eq!( (-Integer::from(4294967295i64)).twos_complement_limbs().rev().collect_vec(), &[4294967295, 1] ); } ``` ### impl Integer #### pub fn checked_count_ones(&self) -> Option<u64> Counts the number of ones in the binary expansion of an `Integer`. If the `Integer` is negative, then the number of ones is infinite, so `None` is returned. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.checked_count_ones(), Some(0)); // 105 = 1101001b assert_eq!(Integer::from(105).checked_count_ones(), Some(4)); assert_eq!(Integer::from(-105).checked_count_ones(), None); // 10^12 = 1110100011010100101001010001000000000000b assert_eq!(Integer::from(10u32).pow(12).checked_count_ones(), Some(13)); ``` ### impl Integer #### pub fn checked_count_zeros(&self) -> Option<u64> Counts the number of zeros in the binary expansion of an `Integer`. If the `Integer` is non-negative, then the number of zeros is infinite, so `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.checked_count_zeros(), None); // -105 = 10010111 in two's complement assert_eq!(Integer::from(-105).checked_count_zeros(), Some(3)); assert_eq!(Integer::from(105).checked_count_zeros(), None); // -10^12 = 10001011100101011010110101111000000000000 in two's complement assert_eq!((-Integer::from(10u32).pow(12)).checked_count_zeros(), Some(24)); ``` ### impl Integer #### pub fn trailing_zeros(&self) -> Option<u64> Returns the number of trailing zeros in the binary expansion of an `Integer` (equivalently, the multiplicity of 2 in its prime factorization), or `None` if the `Integer` is 0. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.trailing_zeros(), None); assert_eq!(Integer::from(3).trailing_zeros(), Some(0)); assert_eq!(Integer::from(-72).trailing_zeros(), Some(3)); assert_eq!(Integer::from(100).trailing_zeros(), Some(2)); assert_eq!((-Integer::from(10u32).pow(12)).trailing_zeros(), Some(12)); ``` Trait Implementations --- ### impl<'a> Abs for &'a Integer #### fn abs(self) -> Integer Takes the absolute value of an `Integer`, taking the `Integer` by reference. $$ f(x) = |x|. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Abs; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!((&Integer::ZERO).abs(), 0); assert_eq!((&Integer::from(123)).abs(), 123); assert_eq!((&Integer::from(-123)).abs(), 123); ``` #### type Output = Integer ### impl Abs for Integer #### fn abs(self) -> Integer Takes the absolute value of an `Integer`, taking the `Integer` by value. $$ f(x) = |x|. $$ ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Abs; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.abs(), 0); assert_eq!(Integer::from(123).abs(), 123); assert_eq!(Integer::from(-123).abs(), 123); ``` #### type Output = Integer ### impl AbsAssign for Integer #### fn abs_assign(&mut self) Replaces an `Integer` with its absolute value. $$ x \gets |x|. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::AbsAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.abs_assign(); assert_eq!(x, 0); let mut x = Integer::from(123); x.abs_assign(); assert_eq!(x, 123); let mut x = Integer::from(-123); x.abs_assign(); assert_eq!(x, 123); ``` ### impl<'a, 'b> Add<&'a Integer> for &'b Integer #### fn add(self, other: &'a Integer) -> Integer Adds two `Integer`s, taking both by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(&Integer::ZERO + &Integer::from(123), 123); assert_eq!(&Integer::from(-123) + &Integer::ZERO, -123); assert_eq!(&Integer::from(-123) + &Integer::from(456), 333); assert_eq!( &-Integer::from(10u32).pow(12) + &(Integer::from(10u32).pow(12) << 1), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `+` operator.### impl<'a> Add<&'a Integer> for Integer #### fn add(self, other: &'a Integer) -> Integer Adds two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO + &Integer::from(123), 123); assert_eq!(Integer::from(-123) + &Integer::ZERO, -123); assert_eq!(Integer::from(-123) + &Integer::from(456), 333); assert_eq!( -Integer::from(10u32).pow(12) + &(Integer::from(10u32).pow(12) << 1), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `+` operator.### impl<'a> Add<Integer> for &'a Integer #### fn add(self, other: Integer) -> Integer Adds two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(&Integer::ZERO + Integer::from(123), 123); assert_eq!(&Integer::from(-123) + Integer::ZERO, -123); assert_eq!(&Integer::from(-123) + Integer::from(456), 333); assert_eq!( &-Integer::from(10u32).pow(12) + (Integer::from(10u32).pow(12) << 1), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `+` operator.### impl Add<Integer> for Integer #### fn add(self, other: Integer) -> Integer Adds two `Integer`s, taking both by value. 
$$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO + Integer::from(123), 123); assert_eq!(Integer::from(-123) + Integer::ZERO, -123); assert_eq!(Integer::from(-123) + Integer::from(456), 333); assert_eq!( -Integer::from(10u32).pow(12) + (Integer::from(10u32).pow(12) << 1), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `+` operator.### impl<'a> AddAssign<&'a Integer> for Integer #### fn add_assign(&mut self, other: &'a Integer) Adds an `Integer` to an `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x += &(-Integer::from(10u32).pow(12)); x += &(Integer::from(10u32).pow(12) * Integer::from(2u32)); x += &(-Integer::from(10u32).pow(12) * Integer::from(3u32)); x += &(Integer::from(10u32).pow(12) * Integer::from(4u32)); assert_eq!(x, 2000000000000u64); ``` ### impl AddAssign<Integer> for Integer #### fn add_assign(&mut self, other: Integer) Adds an `Integer` to an `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x += -Integer::from(10u32).pow(12); x += Integer::from(10u32).pow(12) * Integer::from(2u32); x += -Integer::from(10u32).pow(12) * Integer::from(3u32); x += Integer::from(10u32).pow(12) * Integer::from(4u32); assert_eq!(x, 2000000000000u64); ``` ### impl<'a> AddMul<&'a Integer, Integer> for Integer #### fn add_mul(self, y: &'a Integer, z: Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking the first and third by value and the second by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).add_mul(&Integer::from(3u32), Integer::from(4u32)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .add_mul(&Integer::from(0x10000), -Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a, 'b, 'c> AddMul<&'a Integer, &'b Integer> for &'c Integer #### fn add_mul(self, y: &'a Integer, z: &'b Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking all three by reference. $f(x, y, z) = x + yz$. 
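The fused operation gives the same result as multiplying and then adding. The sketch below checks that, assuming the by-reference `Mul` implementations mirror the by-reference `Add` implementations shown earlier.

```rust
use malachite_base::num::arithmetic::traits::{AddMul, Pow};
use malachite_nz::integer::Integer;

fn main() {
    let x = Integer::from(10u32).pow(12);
    let y = Integer::from(3u32);
    let z = -Integer::from(10u32).pow(6);

    // add_mul fuses the multiply and the add; the result matches the two-step form.
    let fused = (&x).add_mul(&y, &z);
    let two_step = &x + &y * &z; // assumes `&Integer * &Integer` is available
    assert_eq!(fused, two_step);
}
```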
##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(10u32)).add_mul(&Integer::from(3u32), &Integer::from(4u32)), 22 ); assert_eq!( (&-Integer::from(10u32).pow(12)) .add_mul(&Integer::from(0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a, 'b> AddMul<&'a Integer, &'b Integer> for Integer #### fn add_mul(self, y: &'a Integer, z: &'b Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking the first by value and the second and third by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).add_mul(&Integer::from(3u32), &Integer::from(4u32)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .add_mul(&Integer::from(0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a> AddMul<Integer, &'a Integer> for Integer #### fn add_mul(self, y: Integer, z: &'a Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking the first two by value and the third by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).add_mul(Integer::from(3u32), &Integer::from(4u32)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .add_mul(Integer::from(0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl AddMul<Integer, Integer> for Integer #### fn add_mul(self, y: Integer, z: Integer) -> Integer Adds an `Integer` and the product of two other `Integer`s, taking all three by value. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).add_mul(Integer::from(3u32), Integer::from(4u32)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .add_mul(Integer::from(0x10000), -Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a> AddMulAssign<&'a Integer, Integer> for Integer #### fn add_mul_assign(&mut self, y: &'a Integer, z: Integer) Adds the product of two other `Integer`s to an `Integer` in place, taking the first `Integer` on the right-hand side by reference and the second by value. $x \gets x + yz$. 
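A typical use of the `AddMulAssign` variants is accumulating a sum of products in place, as in the dot-product sketch below, which uses the all-by-reference variant documented a little further on.

```rust
use malachite_base::num::arithmetic::traits::AddMulAssign;
use malachite_nz::integer::Integer;

fn main() {
    let xs = [Integer::from(2), Integer::from(-3), Integer::from(5)];
    let ys = [Integer::from(7), Integer::from(11), Integer::from(13)];

    // Accumulate sum(x_i * y_i) in place, one add_mul_assign per term.
    let mut acc = Integer::from(0);
    for (x, y) in xs.iter().zip(ys.iter()) {
        acc.add_mul_assign(x, y);
    }
    // 2*7 - 3*11 + 5*13 = 14 - 33 + 65 = 46
    assert_eq!(acc, 46);
}
```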
##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.add_mul_assign(&Integer::from(3u32), Integer::from(4u32)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.add_mul_assign(&Integer::from(0x10000), -Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a, 'b> AddMulAssign<&'a Integer, &'b Integer> for Integer #### fn add_mul_assign(&mut self, y: &'a Integer, z: &'b Integer) Adds the product of two other `Integer`s to an `Integer` in place, taking both `Integer`s on the right-hand side by reference. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.add_mul_assign(&Integer::from(3u32), &Integer::from(4u32)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.add_mul_assign(&Integer::from(0x10000), &-Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a> AddMulAssign<Integer, &'a Integer> for Integer #### fn add_mul_assign(&mut self, y: Integer, z: &'a Integer) Adds the product of two other `Integer`s to an `Integer` in place, taking the first `Integer` on the right-hand side by value and the second by reference. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.add_mul_assign(Integer::from(3u32), &Integer::from(4u32)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.add_mul_assign(Integer::from(0x10000), &-Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl AddMulAssign<Integer, Integer> for Integer #### fn add_mul_assign(&mut self, y: Integer, z: Integer) Adds the product of two other `Integer`s to an `Integer` in place, taking both `Integer`s on the right-hand side by value. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.add_mul_assign(Integer::from(3u32), Integer::from(4u32)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.add_mul_assign(Integer::from(0x10000), -Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl Binary for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts an `Integer` to a binary `String`. Using the `#` format flag prepends `"0b"` to the string. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToBinaryString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_binary_string(), "0"); assert_eq!(Integer::from(123).to_binary_string(), "1111011"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_binary_string(), "1110100011010100101001010001000000000000" ); assert_eq!(format!("{:011b}", Integer::from(123)), "00001111011"); assert_eq!(Integer::from(-123).to_binary_string(), "-1111011"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_binary_string(), "-1110100011010100101001010001000000000000" ); assert_eq!(format!("{:011b}", Integer::from(-123)), "-0001111011"); assert_eq!(format!("{:#b}", Integer::ZERO), "0b0"); assert_eq!(format!("{:#b}", Integer::from(123)), "0b1111011"); assert_eq!( format!("{:#b}", Integer::from_str("1000000000000").unwrap()), "0b1110100011010100101001010001000000000000" ); assert_eq!(format!("{:#011b}", Integer::from(123)), "0b001111011"); assert_eq!(format!("{:#b}", Integer::from(-123)), "-0b1111011"); assert_eq!( format!("{:#b}", Integer::from_str("-1000000000000").unwrap()), "-0b1110100011010100101001010001000000000000" ); assert_eq!(format!("{:#011b}", Integer::from(-123)), "-0b01111011"); ``` ### impl<'a> BinomialCoefficient<&'a Integer> for Integer #### fn binomial_coefficient(n: &'a Integer, k: &'a Integer) -> Integer Computes the binomial coefficient of two `Integer`s, taking both by reference. The second argument must be non-negative, but the first may be negative. If it is, the identity $\binom{-n}{k} = (-1)^k \binom{n+k-1}{k}$ is used. $$ f(n, k) = \begin{cases} \binom{n}{k} & \text{if} \quad n \geq 0, \\ (-1)^k \binom{-n+k-1}{k} & \text{if} \quad n < 0. \end{cases} $$ ##### Worst-case complexity TODO ##### Panics Panics if $k$ is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::BinomialCoefficient; use malachite_nz::integer::Integer; assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(0)), 1); assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(1)), 4); assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(2)), 6); assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(3)), 4); assert_eq!(Integer::binomial_coefficient(&Integer::from(4), &Integer::from(4)), 1); assert_eq!(Integer::binomial_coefficient(&Integer::from(10), &Integer::from(5)), 252); assert_eq!( Integer::binomial_coefficient(&Integer::from(100), &Integer::from(50)).to_string(), "100891344545564193334812497256" ); assert_eq!(Integer::binomial_coefficient(&Integer::from(-3), &Integer::from(0)), 1); assert_eq!(Integer::binomial_coefficient(&Integer::from(-3), &Integer::from(1)), -3); assert_eq!(Integer::binomial_coefficient(&Integer::from(-3), &Integer::from(2)), 6); assert_eq!(Integer::binomial_coefficient(&Integer::from(-3), &Integer::from(3)), -10); ``` ### impl BinomialCoefficient<Integer> for Integer #### fn binomial_coefficient(n: Integer, k: Integer) -> Integer Computes the binomial coefficient of two `Integer`s, taking both by value. The second argument must be non-negative, but the first may be negative. If it is, the identity $\binom{-n}{k} = (-1)^k \binom{n+k-1}{k}$ is used. 
$$ f(n, k) = \begin{cases} \binom{n}{k} & \text{if} \quad n \geq 0, \\ (-1)^k \binom{-n+k-1}{k} & \text{if} \quad n < 0. \end{cases} $$ ##### Worst-case complexity TODO ##### Panics Panics if $k$ is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::BinomialCoefficient; use malachite_nz::integer::Integer; assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(0)), 1); assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(1)), 4); assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(2)), 6); assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(3)), 4); assert_eq!(Integer::binomial_coefficient(Integer::from(4), Integer::from(4)), 1); assert_eq!(Integer::binomial_coefficient(Integer::from(10), Integer::from(5)), 252); assert_eq!( Integer::binomial_coefficient(Integer::from(100), Integer::from(50)).to_string(), "100891344545564193334812497256" ); assert_eq!(Integer::binomial_coefficient(Integer::from(-3), Integer::from(0)), 1); assert_eq!(Integer::binomial_coefficient(Integer::from(-3), Integer::from(1)), -3); assert_eq!(Integer::binomial_coefficient(Integer::from(-3), Integer::from(2)), 6); assert_eq!(Integer::binomial_coefficient(Integer::from(-3), Integer::from(3)), -10); ``` ### impl BitAccess for Integer Provides functions for accessing and modifying the $i$th bit of a `Integer`, or the coefficient of $2^i$ in its two’s complement binary expansion. #### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_base::num::basic::traits::{NegativeOne, Zero}; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.assign_bit(2, true); x.assign_bit(5, true); x.assign_bit(6, true); assert_eq!(x, 100); x.assign_bit(2, false); x.assign_bit(5, false); x.assign_bit(6, false); assert_eq!(x, 0); let mut x = Integer::from(-0x100); x.assign_bit(2, true); x.assign_bit(5, true); x.assign_bit(6, true); assert_eq!(x, -156); x.assign_bit(2, false); x.assign_bit(5, false); x.assign_bit(6, false); assert_eq!(x, -256); let mut x = Integer::ZERO; x.flip_bit(10); assert_eq!(x, 1024); x.flip_bit(10); assert_eq!(x, 0); let mut x = Integer::NEGATIVE_ONE; x.flip_bit(10); assert_eq!(x, -1025); x.flip_bit(10); assert_eq!(x, -1); ``` #### fn get_bit(&self, index: u64) -> bool Determines whether the $i$th bit of an `Integer`, or the coefficient of $2^i$ in its two’s complement binary expansion, is 0 or 1. `false` means 0 and `true` means 1. Getting bits beyond the `Integer`’s width is allowed; those bits are `false` if the `Integer` is non-negative and `true` if it is negative. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. $f(n, i) = (b_i = 1)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
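In addition to the examples that follow, `get_bit` agrees with masking against $2^i$, since `&` uses the same two's-complement expansion. The sketch below assumes `Integer::ONE << i` (a `Shl<u64>` implementation) is available.

```rust
use malachite_base::num::basic::traits::One;
use malachite_base::num::logic::traits::BitAccess;
use malachite_nz::integer::Integer;

fn main() {
    // get_bit(i) is true exactly when n & 2^i is nonzero, for negative values too.
    for n in [Integer::from(123), Integer::from(-123)] {
        for i in 0..16u64 {
            let mask = Integer::ONE << i;
            assert_eq!(n.get_bit(i), (&n & &mask) != 0);
        }
    }
}
```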
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::logic::traits::BitAccess; use malachite_nz::integer::Integer; assert_eq!(Integer::from(123).get_bit(2), false); assert_eq!(Integer::from(123).get_bit(3), true); assert_eq!(Integer::from(123).get_bit(100), false); assert_eq!(Integer::from(-123).get_bit(0), true); assert_eq!(Integer::from(-123).get_bit(1), false); assert_eq!(Integer::from(-123).get_bit(100), true); assert_eq!(Integer::from(10u32).pow(12).get_bit(12), true); assert_eq!(Integer::from(10u32).pow(12).get_bit(100), false); assert_eq!((-Integer::from(10u32).pow(12)).get_bit(12), true); assert_eq!((-Integer::from(10u32).pow(12)).get_bit(100), true); ``` #### fn set_bit(&mut self, index: u64) Sets the $i$th bit of an `Integer`, or the coefficient of $2^i$ in its two’s complement binary expansion, to 1. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. $$ n \gets \begin{cases} n + 2^j & \text{if} \quad b_j = 0, \\ n & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.set_bit(2); x.set_bit(5); x.set_bit(6); assert_eq!(x, 100); let mut x = Integer::from(-0x100); x.set_bit(2); x.set_bit(5); x.set_bit(6); assert_eq!(x, -156); ``` #### fn clear_bit(&mut self, index: u64) Sets the $i$th bit of an `Integer`, or the coefficient of $2^i$ in its binary expansion, to 0. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. $$ n \gets \begin{cases} n - 2^j & \text{if} \quad b_j = 1, \\ n & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_nz::integer::Integer; let mut x = Integer::from(0x7f); x.clear_bit(0); x.clear_bit(1); x.clear_bit(3); x.clear_bit(4); assert_eq!(x, 100); let mut x = Integer::from(-156); x.clear_bit(2); x.clear_bit(5); x.clear_bit(6); assert_eq!(x, -256); ``` #### fn assign_bit(&mut self, index: u64, bit: bool) Sets the bit at `index` to whichever value `bit` is. Sets the bit at `index` to the opposite of its original value. #### fn bitand(self, other: &'a Integer) -> Integer Takes the bitwise and of two `Integer`s, taking both by reference. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::from(-123) & &Integer::from(-456), -512); assert_eq!( &-Integer::from(10u32).pow(12) & &-(Integer::from(10u32).pow(12) + Integer::ONE), -1000000004096i64 ); ``` #### type Output = Integer The resulting type after applying the `&` operator.### impl<'a> BitAnd<&'a Integer> for Integer #### fn bitand(self, other: &'a Integer) -> Integer Takes the bitwise and of two `Integer`s, taking the first by value and the second by reference. 
$$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) & &Integer::from(-456), -512); assert_eq!( -Integer::from(10u32).pow(12) & &-(Integer::from(10u32).pow(12) + Integer::ONE), -1000000004096i64 ); ``` #### type Output = Integer The resulting type after applying the `&` operator.### impl<'a> BitAnd<Integer> for &'a Integer #### fn bitand(self, other: Integer) -> Integer Takes the bitwise and of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::from(-123) & Integer::from(-456), -512); assert_eq!( &-Integer::from(10u32).pow(12) & -(Integer::from(10u32).pow(12) + Integer::ONE), -1000000004096i64 ); ``` #### type Output = Integer The resulting type after applying the `&` operator.### impl BitAnd<Integer> for Integer #### fn bitand(self, other: Integer) -> Integer Takes the bitwise and of two `Integer`s, taking both by value. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) & Integer::from(-456), -512); assert_eq!( -Integer::from(10u32).pow(12) & -(Integer::from(10u32).pow(12) + Integer::ONE), -1000000004096i64 ); ``` #### type Output = Integer The resulting type after applying the `&` operator.### impl<'a> BitAndAssign<&'a Integer> for Integer #### fn bitand_assign(&mut self, other: &'a Integer) Bitwise-ands an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; let mut x = Integer::NEGATIVE_ONE; x &= &Integer::from(0x70ffffff); x &= &Integer::from(0x7ff0_ffff); x &= &Integer::from(0x7ffff0ff); x &= &Integer::from(0x7ffffff0); assert_eq!(x, 0x70f0f0f0); ``` ### impl BitAndAssign<Integer> for Integer #### fn bitand_assign(&mut self, other: Integer) Bitwise-ands an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; let mut x = Integer::NEGATIVE_ONE; x &= Integer::from(0x70ffffff); x &= Integer::from(0x7ff0_ffff); x &= Integer::from(0x7ffff0ff); x &= Integer::from(0x7ffffff0); assert_eq!(x, 0x70f0f0f0); ``` ### impl BitBlockAccess for Integer #### fn get_bits(&self, start: u64, end: u64) -> Natural Extracts a block of adjacent two’s complement bits from an `Integer`, taking the `Integer` by reference. The first index is `start` and last index is `end - 1`. Let $n$ be `self`, and let $p$ and $q$ be `start` and `end`, respectively. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. Then $$ f(n, p, q) = \sum_{i=p}^{q-1} 2^{b_{i-p}}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), end)`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (-Natural::from(0xabcdef0112345678u64)).get_bits(16, 48), Natural::from(0x10feedcbu32) ); assert_eq!( Integer::from(0xabcdef0112345678u64).get_bits(4, 16), Natural::from(0x567u32) ); assert_eq!( (-Natural::from(0xabcdef0112345678u64)).get_bits(0, 100), Natural::from_str("1267650600215849587758112418184").unwrap() ); assert_eq!(Integer::from(0xabcdef0112345678u64).get_bits(10, 10), Natural::ZERO); ``` #### fn get_bits_owned(self, start: u64, end: u64) -> Natural Extracts a block of adjacent two’s complement bits from an `Integer`, taking the `Integer` by value. The first index is `start` and last index is `end - 1`. Let $n$ be `self`, and let $p$ and $q$ be `start` and `end`, respectively. If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. Then $$ f(n, p, q) = \sum_{i=p}^{q-1} 2^{b_{i-p}}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), end)`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (-Natural::from(0xabcdef0112345678u64)).get_bits_owned(16, 48), Natural::from(0x10feedcbu32) ); assert_eq!( Integer::from(0xabcdef0112345678u64).get_bits_owned(4, 16), Natural::from(0x567u32) ); assert_eq!( (-Natural::from(0xabcdef0112345678u64)).get_bits_owned(0, 100), Natural::from_str("1267650600215849587758112418184").unwrap() ); assert_eq!(Integer::from(0xabcdef0112345678u64).get_bits_owned(10, 10), Natural::ZERO); ``` #### fn assign_bits(&mut self, start: u64, end: u64, bits: &Natural) Replaces a block of adjacent two’s complement bits in an `Integer` with other bits. The least-significant `end - start` bits of `bits` are assigned to bits `start` through `end - 1`, inclusive, of `self`. Let $n$ be `self` and let $m$ be `bits`, and let $p$ and $q$ be `start` and `end`, respectively. Let $$ m = \sum_{i=0}^k 2^{d_i}, $$ where for all $i$, $d_i\in \{0, 1\}$. 
If $n \geq 0$, let $$ n = \sum_{i=0}^\infty 2^{b_i}; $$ but if $n < 0$, let $$ -n - 1 = \sum_{i=0}^\infty 2^{1 - b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$. Then $$ n \gets \sum_{i=0}^\infty 2^{c_i}, $$ where $$ \{c_0, c_1, c_2, \ldots \} = \{b_0, b_1, b_2, \ldots, b_{p-1}, d_0, d_1, \ldots, d_{p-q-1}, b_q, b_{q+1}, \ldots \}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), end)`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; let mut n = Integer::from(123); n.assign_bits(5, 7, &Natural::from(456u32)); assert_eq!(n.to_string(), "27"); let mut n = Integer::from(-123); n.assign_bits(64, 128, &Natural::from(456u32)); assert_eq!(n.to_string(), "-340282366920938455033212565746503123067"); let mut n = Integer::from(-123); n.assign_bits(80, 100, &Natural::from(456u32)); assert_eq!(n.to_string(), "-1267098121128665515963862483067"); ``` #### type Bits = Natural ### impl BitConvertible for Integer #### fn to_bits_asc(&self) -> Vec<bool, GlobalReturns a `Vec` containing the twos-complement bits of an `Integer` in ascending order: least- to most-significant. The most significant bit indicates the sign; if the bit is `false`, the `Integer` is positive, and if the bit is `true` it is negative. There are no trailing `false` bits if the `Integer` is positive or trailing `true` bits if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no bits. This function is more efficient than `to_bits_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert!(Integer::ZERO.to_bits_asc().is_empty()); // 105 = 01101001b, with a leading false bit to indicate sign assert_eq!( Integer::from(105).to_bits_asc(), &[true, false, false, true, false, true, true, false] ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from(-105).to_bits_asc(), &[true, true, true, false, true, false, false, true] ); ``` #### fn to_bits_desc(&self) -> Vec<bool, GlobalReturns a `Vec` containing the twos-complement bits of an `Integer` in descending order: most- to least-significant. The most significant bit indicates the sign; if the bit is `false`, the `Integer` is positive, and if the bit is `true` it is negative. There are no leading `false` bits if the `Integer` is positive or leading `true` bits if the `Integer` is negative, except as necessary to include the correct sign bit. Zero is a special case: it contains no bits. This is similar to how `BigInteger`s in Java are represented. This function is less efficient than `to_bits_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
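These bit vectors round-trip through the `from_bits_asc` and `from_bits_desc` constructors documented further below; a minimal sketch:

```rust
use malachite_base::num::logic::traits::BitConvertible;
use malachite_nz::integer::Integer;

fn main() {
    // to_bits_asc followed by from_bits_asc reproduces the original value,
    // including the extra sign bit needed for negative numbers.
    for n in [Integer::from(0), Integer::from(105), Integer::from(-105)] {
        let bits = n.to_bits_asc();
        assert_eq!(Integer::from_bits_asc(bits.into_iter()), n);
    }
}
```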
##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert!(Integer::ZERO.to_bits_desc().is_empty()); // 105 = 01101001b, with a leading false bit to indicate sign assert_eq!( Integer::from(105).to_bits_desc(), &[false, true, true, false, true, false, false, true] ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from(-105).to_bits_desc(), &[true, false, false, true, false, true, true, true] ); ``` #### fn from_bits_asc<I>(xs: I) -> Integerwhere I: Iterator<Item = bool>, Converts an iterator of twos-complement bits into an `Integer`. The bits should be in ascending order (least- to most-significant). Let $k$ be `bits.count()`. If $k = 0$ or $b_{k-1}$ is `false`, then $$ f((b_i)_ {i=0}^{k-1}) = \sum_{i=0}^{k-1}2^i [b_i], $$ where braces denote the Iverson bracket, which converts a bit to 0 or 1. If $b_{k-1}$ is `true`, then $$ f((b_i)_ {i=0}^{k-1}) = \left ( \sum_{i=0}^{k-1}2^i [b_i] \right ) - 2^k. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.count()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_nz::integer::Integer; use std::iter::empty; assert_eq!(Integer::from_bits_asc(empty()), 0); // 105 = 1101001b assert_eq!( Integer::from_bits_asc( [true, false, false, true, false, true, true, false].iter().cloned() ), 105 ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from_bits_asc( [true, true, true, false, true, false, false, true].iter().cloned() ), -105 ); ``` #### fn from_bits_desc<I>(xs: I) -> Integerwhere I: Iterator<Item = bool>, Converts an iterator of twos-complement bits into an `Integer`. The bits should be in descending order (most- to least-significant). If `bits` is empty or $b_0$ is `false`, then $$ f((b_i)_ {i=0}^{k-1}) = \sum_{i=0}^{k-1}2^{k-i-1} [b_i], $$ where braces denote the Iverson bracket, which converts a bit to 0 or 1. If $b_0$ is `true`, then $$ f((b_i)_ {i=0}^{k-1}) = \left ( \sum_{i=0}^{k-1}2^{k-i-1} [b_i] \right ) - 2^k. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.count()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_nz::integer::Integer; use std::iter::empty; assert_eq!(Integer::from_bits_desc(empty()), 0); // 105 = 1101001b assert_eq!( Integer::from_bits_desc( [false, true, true, false, true, false, false, true].iter().cloned() ), 105 ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from_bits_desc( [true, false, false, true, false, true, true, true].iter().cloned() ), -105 ); ``` ### impl<'a> BitIterable for &'a Integer #### fn bits(self) -> IntegerBitIterator<'aReturns a double-ended iterator over the bits of an `Integer`. The forward order is ascending, so that less significant bits appear first. There are no trailing false bits going forward, or leading falses going backward, except for possibly a most-significant sign-extension bit. If it’s necessary to get a `Vec` of all the bits, consider using `to_bits_asc` or `to_bits_desc` instead. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use itertools::Itertools; use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitIterable; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.bits().next(), None); // 105 = 01101001b, with a leading false bit to indicate sign assert_eq!( Integer::from(105).bits().collect_vec(), &[true, false, false, true, false, true, true, false] ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from(-105).bits().collect_vec(), &[true, true, true, false, true, false, false, true] ); assert_eq!(Integer::ZERO.bits().next_back(), None); // 105 = 01101001b, with a leading false bit to indicate sign assert_eq!( Integer::from(105).bits().rev().collect_vec(), &[false, true, true, false, true, false, false, true] ); // -105 = 10010111 in two's complement, with a leading true bit to indicate sign assert_eq!( Integer::from(-105).bits().rev().collect_vec(), &[true, false, false, true, false, true, true, true] ); ``` #### type BitIterator = IntegerBitIterator<'a### impl<'a, 'b> BitOr<&'a Integer> for &'b Integer #### fn bitor(self, other: &'a Integer) -> Integer Takes the bitwise or of two `Integer`s, taking both by reference. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(&Integer::from(-123) | &Integer::from(-456), -67); assert_eq!( &-Integer::from(10u32).pow(12) | &-(Integer::from(10u32).pow(12) + Integer::ONE), -999999995905i64 ); ``` #### type Output = Integer The resulting type after applying the `|` operator.### impl<'a> BitOr<&'a Integer> for Integer #### fn bitor(self, other: &'a Integer) -> Integer Takes the bitwise or of two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) | &Integer::from(-456), -67); assert_eq!( -Integer::from(10u32).pow(12) | &-(Integer::from(10u32).pow(12) + Integer::ONE), -999999995905i64 ); ``` #### type Output = Integer The resulting type after applying the `|` operator.### impl<'a> BitOr<Integer> for &'a Integer #### fn bitor(self, other: Integer) -> Integer Takes the bitwise or of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `self.significant_bits()`. 
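The bitwise operators documented here satisfy the usual two's-complement identity $(x \wedge y) + (x \vee y) = x + y$; a small sketch checking it with the by-reference operators:

```rust
use malachite_nz::integer::Integer;

fn main() {
    let pairs = [
        (Integer::from(-123), Integer::from(-456)),
        (Integer::from(123), Integer::from(-456)),
        (Integer::from(0), Integer::from(789)),
    ];
    // (x & y) + (x | y) == x + y for any pair of Integers.
    for (x, y) in &pairs {
        assert_eq!((x & y) + (x | y), x + y);
    }
}
```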
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(&Integer::from(-123) | Integer::from(-456), -67); assert_eq!( &-Integer::from(10u32).pow(12) | -(Integer::from(10u32).pow(12) + Integer::ONE), -999999995905i64 ); ``` #### type Output = Integer The resulting type after applying the `|` operator.### impl BitOr<Integer> for Integer #### fn bitor(self, other: Integer) -> Integer Takes the bitwise or of two `Integer`s, taking both by value. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) | Integer::from(-456), -67); assert_eq!( -Integer::from(10u32).pow(12) | -(Integer::from(10u32).pow(12) + Integer::ONE), -999999995905i64 ); ``` #### type Output = Integer The resulting type after applying the `|` operator.### impl<'a> BitOrAssign<&'a Integer> for Integer #### fn bitor_assign(&mut self, other: &'a Integer) Bitwise-ors an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x |= &Integer::from(0x0000000f); x |= &Integer::from(0x00000f00); x |= &Integer::from(0x000f_0000); x |= &Integer::from(0x0f000000); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl BitOrAssign<Integer> for Integer #### fn bitor_assign(&mut self, other: Integer) Bitwise-ors an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by value. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x |= Integer::from(0x0000000f); x |= Integer::from(0x00000f00); x |= Integer::from(0x000f_0000); x |= Integer::from(0x0f000000); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl<'a> BitScan for &'a Integer #### fn index_of_next_false_bit(self, starting_index: u64) -> Option<u64Given an `Integer` and a starting index, searches the `Integer` for the smallest index of a `false` bit that is greater than or equal to the starting index. If the [`Integer]` is negative, and the starting index is too large and there are no more `false` bits above it, `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
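For nonzero values, the lowest `true` bit found by `index_of_next_true_bit` (documented just below) coincides with the `trailing_zeros` count shown earlier; a sketch:

```rust
use malachite_base::num::logic::traits::BitScan;
use malachite_nz::integer::Integer;

fn main() {
    // Scanning for the first true bit from index 0 finds the lowest set bit,
    // which is exactly the number of trailing zeros.
    for n in [Integer::from(3), Integer::from(-72), Integer::from(100)] {
        assert_eq!((&n).index_of_next_true_bit(0), n.trailing_zeros());
    }
}
```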
##### Examples ``` use malachite_base::num::logic::traits::BitScan; use malachite_nz::integer::Integer; assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(0), Some(0)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(20), Some(20)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(31), Some(31)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(32), Some(34)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(33), Some(34)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(34), Some(34)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(35), None); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_false_bit(100), None); ``` #### fn index_of_next_true_bit(self, starting_index: u64) -> Option<u64> Given an `Integer` and a starting index, searches the `Integer` for the smallest index of a `true` bit that is greater than or equal to the starting index. If the `Integer` is non-negative, and the starting index is too large and there are no more `true` bits above it, `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitScan; use malachite_nz::integer::Integer; assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(0), Some(32)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(20), Some(32)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(31), Some(32)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(32), Some(32)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(33), Some(33)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(34), Some(35)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(35), Some(35)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(36), Some(36)); assert_eq!((-Integer::from(0x500000000u64)).index_of_next_true_bit(100), Some(100)); ``` ### impl<'a, 'b> BitXor<&'a Integer> for &'b Integer #### fn bitxor(self, other: &'a Integer) -> Integer Takes the bitwise xor of two `Integer`s, taking both by reference. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::from(-123) ^ &Integer::from(-456), 445); assert_eq!( &-Integer::from(10u32).pow(12) ^ &-(Integer::from(10u32).pow(12) + Integer::ONE), 8191 ); ``` #### type Output = Integer The resulting type after applying the `^` operator.### impl<'a> BitXor<&'a Integer> for Integer #### fn bitxor(self, other: &'a Integer) -> Integer Takes the bitwise xor of two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) ^ &Integer::from(-456), 445); assert_eq!( -Integer::from(10u32).pow(12) ^ &-(Integer::from(10u32).pow(12) + Integer::ONE), 8191 ); ``` #### type Output = Integer The resulting type after applying the `^` operator.### impl<'a> BitXor<Integer> for &'a Integer #### fn bitxor(self, other: Integer) -> Integer Takes the bitwise xor of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::from(-123) ^ Integer::from(-456), 445); assert_eq!( &-Integer::from(10u32).pow(12) ^ -(Integer::from(10u32).pow(12) + Integer::ONE), 8191 ); ``` #### type Output = Integer The resulting type after applying the `^` operator.### impl BitXor<Integer> for Integer #### fn bitxor(self, other: Integer) -> Integer Takes the bitwise xor of two `Integer`s, taking both by value. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::integer::Integer; assert_eq!(Integer::from(-123) ^ Integer::from(-456), 445); assert_eq!( -Integer::from(10u32).pow(12) ^ -(Integer::from(10u32).pow(12) + Integer::ONE), 8191 ); ``` #### type Output = Integer The resulting type after applying the `^` operator.### impl<'a> BitXorAssign<&'a Integer> for Integer #### fn bitxor_assign(&mut self, other: &'a Integer) Bitwise-xors an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.significant_bits())`, and $m$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; let mut x = Integer::from(u32::MAX); x ^= &Integer::from(0x0000000f); x ^= &Integer::from(0x00000f00); x ^= &Integer::from(0x000f_0000); x ^= &Integer::from(0x0f000000); assert_eq!(x, 0xf0f0_f0f0u32); ``` ### impl BitXorAssign<Integer> for Integer #### fn bitxor_assign(&mut self, other: Integer) Bitwise-xors an `Integer` with another `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
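Because xor is its own inverse, applying `^=` with the same mask twice restores the original value; a quick sketch using the by-reference assignment operator documented above:

```rust
use malachite_nz::integer::Integer;

fn main() {
    let original = Integer::from(-123456789);
    let mask = Integer::from(987654321);

    // Toggling the same bits twice is a no-op.
    let mut x = original.clone();
    x ^= &mask;
    x ^= &mask;
    assert_eq!(x, original);
}
```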
##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; let mut x = Integer::from(u32::MAX); x ^= Integer::from(0x0000000f); x ^= Integer::from(0x00000f00); x ^= Integer::from(0x000f_0000); x ^= Integer::from(0x0f000000); assert_eq!(x, 0xf0f0_f0f0u32); ``` ### impl<'a> CeilingDivAssignMod<&'a Integer> for Integer #### fn ceiling_div_assign_mod(&mut self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference and returning the remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil\frac{x}{y} \right \rceil, $$ $$ x \gets \left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivAssignMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 let mut x = Integer::from(23); assert_eq!(x.ceiling_div_assign_mod(&Integer::from(10)), -7); assert_eq!(x, 3); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.ceiling_div_assign_mod(&Integer::from(-10)), 3); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.ceiling_div_assign_mod(&Integer::from(10)), -3); assert_eq!(x, -2); // 3 * -10 + 7 = -23 let mut x = Integer::from(-23); assert_eq!(x.ceiling_div_assign_mod(&Integer::from(-10)), 7); assert_eq!(x, 3); ``` #### type ModOutput = Integer ### impl CeilingDivAssignMod<Integer> for Integer #### fn ceiling_div_assign_mod(&mut self, other: Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value and returning the remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil\frac{x}{y} \right \rceil, $$ $$ x \gets \left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivAssignMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 let mut x = Integer::from(23); assert_eq!(x.ceiling_div_assign_mod(Integer::from(10)), -7); assert_eq!(x, 3); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.ceiling_div_assign_mod(Integer::from(-10)), 3); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.ceiling_div_assign_mod(Integer::from(10)), -3); assert_eq!(x, -2); // 3 * -10 + 7 = -23 let mut x = Integer::from(-23); assert_eq!(x.ceiling_div_assign_mod(Integer::from(-10)), 7); assert_eq!(x, 3); ``` #### type ModOutput = Integer ### impl<'a, 'b> CeilingDivMod<&'b Integer> for &'a Integer #### fn ceiling_div_mod(self, other: &'b Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by reference and returning the quotient and remainder. 
The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space x - y\left \lceil \frac{x}{y} \right \rceil \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!( (&Integer::from(23)).ceiling_div_mod(&Integer::from(10)).to_debug_string(), "(3, -7)" ); // -2 * -10 + 3 = 23 assert_eq!( (&Integer::from(23)).ceiling_div_mod(&Integer::from(-10)).to_debug_string(), "(-2, 3)" ); // -2 * 10 + -3 = -23 assert_eq!( (&Integer::from(-23)).ceiling_div_mod(&Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 3 * -10 + 7 = -23 assert_eq!( (&Integer::from(-23)).ceiling_div_mod(&Integer::from(-10)).to_debug_string(), "(3, 7)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a> CeilingDivMod<&'a Integer> for Integer #### fn ceiling_div_mod(self, other: &'a Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space x - y\left \lceil \frac{x}{y} \right \rceil \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!( Integer::from(23).ceiling_div_mod(&Integer::from(10)).to_debug_string(), "(3, -7)" ); // -2 * -10 + 3 = 23 assert_eq!( Integer::from(23).ceiling_div_mod(&Integer::from(-10)).to_debug_string(), "(-2, 3)" ); // -2 * 10 + -3 = -23 assert_eq!( Integer::from(-23).ceiling_div_mod(&Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 3 * -10 + 7 = -23 assert_eq!( Integer::from(-23).ceiling_div_mod(&Integer::from(-10)).to_debug_string(), "(3, 7)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a> CeilingDivMod<Integer> for &'a Integer #### fn ceiling_div_mod(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space x - y\left \lceil \frac{x}{y} \right \rceil \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
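Whichever ownership combination is used, the returned pair satisfies the defining identity $x = qy + r$ with $0 \leq |r| < |y|$. The sketch below spot-checks this, assuming the by-reference `Mul` implementation mirrors the by-reference `Add` implementations shown earlier.

```rust
use malachite_base::num::arithmetic::traits::{Abs, CeilingDivMod};
use malachite_nz::integer::Integer;

fn main() {
    let cases = [(23, 10), (23, -10), (-23, 10), (-23, -10), (20, 10)];
    for (a, b) in cases {
        let (x, y) = (Integer::from(a), Integer::from(b));
        // Quotient and remainder from ceiling division.
        let (q, r) = (&x).ceiling_div_mod(&y);
        assert_eq!(&q * &y + &r, x); // x == q*y + r; assumes `&Integer * &Integer`
        assert!((&r).abs() < (&y).abs()); // |r| < |y|
    }
}
```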
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!( (&Integer::from(23)).ceiling_div_mod(Integer::from(10)).to_debug_string(), "(3, -7)" ); // -2 * -10 + 3 = 23 assert_eq!( (&Integer::from(23)).ceiling_div_mod(Integer::from(-10)).to_debug_string(), "(-2, 3)" ); // -2 * 10 + -3 = -23 assert_eq!( (&Integer::from(-23)).ceiling_div_mod(Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 3 * -10 + 7 = -23 assert_eq!( (&Integer::from(-23)).ceiling_div_mod(Integer::from(-10)).to_debug_string(), "(3, 7)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl CeilingDivMod<Integer> for Integer #### fn ceiling_div_mod(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by value and returning the quotient and remainder. The quotient is rounded towards positive infinity and the remainder has the opposite sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space x - y\left \lceil \frac{x}{y} \right \rceil \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!( Integer::from(23).ceiling_div_mod(Integer::from(10)).to_debug_string(), "(3, -7)" ); // -2 * -10 + 3 = 23 assert_eq!( Integer::from(23).ceiling_div_mod(Integer::from(-10)).to_debug_string(), "(-2, 3)" ); // -2 * 10 + -3 = -23 assert_eq!( Integer::from(-23).ceiling_div_mod(Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 3 * -10 + 7 = -23 assert_eq!( Integer::from(-23).ceiling_div_mod(Integer::from(-10)).to_debug_string(), "(3, 7)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a, 'b> CeilingMod<&'b Integer> for &'a Integer #### fn ceiling_mod(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference and returning just the remainder. The remainder has the opposite sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!((&Integer::from(23)).ceiling_mod(&Integer::from(10)), -7); // -2 * -10 + 3 = 23 assert_eq!((&Integer::from(23)).ceiling_mod(&Integer::from(-10)), 3); // -2 * 10 + -3 = -23 assert_eq!((&Integer::from(-23)).ceiling_mod(&Integer::from(10)), -3); // 3 * -10 + 7 = -23 assert_eq!((&Integer::from(-23)).ceiling_mod(&Integer::from(-10)), 7); ``` #### type Output = Integer ### impl<'a> CeilingMod<&'a Integer> for Integer #### fn ceiling_mod(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning just the remainder.
The remainder has the opposite sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!(Integer::from(23).ceiling_mod(&Integer::from(10)), -7); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23).ceiling_mod(&Integer::from(-10)), 3); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23).ceiling_mod(&Integer::from(10)), -3); // 3 * -10 + 7 = -23 assert_eq!(Integer::from(-23).ceiling_mod(&Integer::from(-10)), 7); ``` #### type Output = Integer ### impl<'a> CeilingMod<Integer> for &'a Integer #### fn ceiling_mod(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning just the remainder. The remainder has the opposite sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!((&Integer::from(23)).ceiling_mod(Integer::from(10)), -7); // -2 * -10 + 3 = 23 assert_eq!((&Integer::from(23)).ceiling_mod(Integer::from(-10)), 3); // -2 * 10 + -3 = -23 assert_eq!((&Integer::from(-23)).ceiling_mod(Integer::from(10)), -3); // 3 * -10 + 7 = -23 assert_eq!((&Integer::from(-23)).ceiling_mod(Integer::from(-10)), 7); ``` #### type Output = Integer ### impl CeilingMod<Integer> for Integer #### fn ceiling_mod(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value and returning just the remainder. The remainder has the opposite sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingMod; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 assert_eq!(Integer::from(23).ceiling_mod(Integer::from(10)), -7); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23).ceiling_mod(Integer::from(-10)), 3); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23).ceiling_mod(Integer::from(10)), -3); // 3 * -10 + 7 = -23 assert_eq!(Integer::from(-23).ceiling_mod(Integer::from(-10)), 7); ``` #### type Output = Integer ### impl<'a> CeilingModAssign<&'a Integer> for Integer #### fn ceiling_mod_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer`, taking the `Integer` on the right-hand side by reference and replacing the first number by the remainder. The remainder has the opposite sign as the second number.
If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y\left \lceil\frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModAssign; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 let mut x = Integer::from(23); x.ceiling_mod_assign(&Integer::from(10)); assert_eq!(x, -7); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x.ceiling_mod_assign(&Integer::from(-10)); assert_eq!(x, 3); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x.ceiling_mod_assign(&Integer::from(10)); assert_eq!(x, -3); // 3 * -10 + 7 = -23 let mut x = Integer::from(-23); x.ceiling_mod_assign(&Integer::from(-10)); assert_eq!(x, 7); ``` ### impl CeilingModAssign<Integer> for Integer #### fn ceiling_mod_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer`, taking the `Integer` on the right-hand side by value and replacing the first number by the remainder. The remainder has the opposite sign as the second number. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y\left \lceil\frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModAssign; use malachite_nz::integer::Integer; // 3 * 10 + -7 = 23 let mut x = Integer::from(23); x.ceiling_mod_assign(Integer::from(10)); assert_eq!(x, -7); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x.ceiling_mod_assign(Integer::from(-10)); assert_eq!(x, 3); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x.ceiling_mod_assign(Integer::from(10)); assert_eq!(x, -3); // 3 * -10 + 7 = -23 let mut x = Integer::from(-23); x.ceiling_mod_assign(Integer::from(-10)); assert_eq!(x, 7); ``` ### impl<'a> CeilingModPowerOf2 for &'a Integer #### fn ceiling_mod_power_of_2(self, pow: u64) -> Integer Divides an `Integer` by $2^k$, taking it by reference and returning just the remainder. The remainder is non-positive. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq -r < 2^k$. $$ f(x, y) = x - 2^k\left \lceil \frac{x}{2^k} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModPowerOf2; use malachite_nz::integer::Integer; // 2 * 2^8 + -252 = 260 assert_eq!((&Integer::from(260)).ceiling_mod_power_of_2(8), -252); // -100 * 2^4 + -11 = -1611 assert_eq!((&Integer::from(-1611)).ceiling_mod_power_of_2(4), -11); ``` #### type Output = Integer ### impl CeilingModPowerOf2 for Integer #### fn ceiling_mod_power_of_2(self, pow: u64) -> Integer Divides an `Integer` by $2^k$, taking it by value and returning just the remainder. The remainder is non-positive. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq -r < 2^k$. $$ f(x, y) = x - 2^k\left \lceil \frac{x}{2^k} \right \rceil.
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModPowerOf2; use malachite_nz::integer::Integer; // 2 * 2^8 + -252 = 260 assert_eq!(Integer::from(260).ceiling_mod_power_of_2(8), -252); // -100 * 2^4 + -11 = -1611 assert_eq!(Integer::from(-1611).ceiling_mod_power_of_2(4), -11); ``` #### type Output = Integer ### impl CeilingModPowerOf2Assign for Integer #### fn ceiling_mod_power_of_2_assign(&mut self, pow: u64) Divides an `Integer` by $2^k$, replacing the `Integer` by the remainder. The remainder is non-positive. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq -r < 2^k$. $$ x \gets x - 2^k\left \lceil\frac{x}{2^k} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingModPowerOf2Assign; use malachite_nz::integer::Integer; // 2 * 2^8 + -252 = 260 let mut x = Integer::from(260); x.ceiling_mod_power_of_2_assign(8); assert_eq!(x, -252); // -100 * 2^4 + -11 = -1611 let mut x = Integer::from(-1611); x.ceiling_mod_power_of_2_assign(4); assert_eq!(x, -11); ``` ### impl<'a> CeilingRoot<u64> for &'a Integer #### fn ceiling_root(self, exp: u64) -> Integer Returns the ceiling of the $n$th root of an `Integer`, taking the `Integer` by reference. $f(x, n) = \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRoot; use malachite_nz::integer::Integer; assert_eq!(Integer::from(999).ceiling_root(3), 10); assert_eq!(Integer::from(1000).ceiling_root(3), 10); assert_eq!(Integer::from(1001).ceiling_root(3), 11); assert_eq!(Integer::from(100000000000i64).ceiling_root(5), 159); assert_eq!(Integer::from(-100000000000i64).ceiling_root(5), -158); ``` #### type Output = Integer ### impl CeilingRoot<u64> for Integer #### fn ceiling_root(self, exp: u64) -> Integer Returns the ceiling of the $n$th root of an `Integer`, taking the `Integer` by value. $f(x, n) = \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRoot; use malachite_nz::integer::Integer; assert_eq!(Integer::from(999).ceiling_root(3), 10); assert_eq!(Integer::from(1000).ceiling_root(3), 10); assert_eq!(Integer::from(1001).ceiling_root(3), 11); assert_eq!(Integer::from(100000000000i64).ceiling_root(5), 159); assert_eq!(Integer::from(-100000000000i64).ceiling_root(5), -158); ``` #### type Output = Integer ### impl CeilingRootAssign<u64> for Integer #### fn ceiling_root_assign(&mut self, exp: u64) Replaces an `Integer` with the ceiling of its $n$th root. $x \gets \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. 
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRootAssign; use malachite_nz::integer::Integer; let mut x = Integer::from(999); x.ceiling_root_assign(3); assert_eq!(x, 10); let mut x = Integer::from(1000); x.ceiling_root_assign(3); assert_eq!(x, 10); let mut x = Integer::from(1001); x.ceiling_root_assign(3); assert_eq!(x, 11); let mut x = Integer::from(100000000000i64); x.ceiling_root_assign(5); assert_eq!(x, 159); let mut x = Integer::from(-100000000000i64); x.ceiling_root_assign(5); assert_eq!(x, -158); ``` ### impl<'a> CeilingSqrt for &'a Integer #### fn ceiling_sqrt(self) -> Integer Returns the ceiling of the square root of an `Integer`, taking it by reference. $f(x) = \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrt; use malachite_nz::integer::Integer; assert_eq!(Integer::from(99).ceiling_sqrt(), 10); assert_eq!(Integer::from(100).ceiling_sqrt(), 10); assert_eq!(Integer::from(101).ceiling_sqrt(), 11); assert_eq!(Integer::from(1000000000).ceiling_sqrt(), 31623); assert_eq!(Integer::from(10000000000u64).ceiling_sqrt(), 100000); ``` #### type Output = Integer ### impl CeilingSqrt for Integer #### fn ceiling_sqrt(self) -> Integer Returns the ceiling of the square root of an `Integer`, taking it by value. $f(x) = \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrt; use malachite_nz::integer::Integer; assert_eq!(Integer::from(99).ceiling_sqrt(), 10); assert_eq!(Integer::from(100).ceiling_sqrt(), 10); assert_eq!(Integer::from(101).ceiling_sqrt(), 11); assert_eq!(Integer::from(1000000000).ceiling_sqrt(), 31623); assert_eq!(Integer::from(10000000000u64).ceiling_sqrt(), 100000); ``` #### type Output = Integer ### impl CeilingSqrtAssign for Integer #### fn ceiling_sqrt_assign(&mut self) Replaces an `Integer` with the ceiling of its square root. $x \gets \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrtAssign; use malachite_nz::integer::Integer; let mut x = Integer::from(99u8); x.ceiling_sqrt_assign(); assert_eq!(x, 10); let mut x = Integer::from(100); x.ceiling_sqrt_assign(); assert_eq!(x, 10); let mut x = Integer::from(101); x.ceiling_sqrt_assign(); assert_eq!(x, 11); let mut x = Integer::from(1000000000); x.ceiling_sqrt_assign(); assert_eq!(x, 31623); let mut x = Integer::from(10000000000u64); x.ceiling_sqrt_assign(); assert_eq!(x, 100000); ``` ### impl<'a, 'b> CheckedHammingDistance<&'a Integer> for &'b Integer #### fn checked_hamming_distance(self, other: &Integer) -> Option<u64> Determines the Hamming distance between two `Integer`s. The two `Integer`s have infinitely many leading zeros or infinitely many leading ones, depending on their signs. If they are both non-negative or both negative, the Hamming distance is finite.
If one is non-negative and the other is negative, the Hamming distance is infinite, so `None` is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::logic::traits::CheckedHammingDistance; use malachite_nz::integer::Integer; assert_eq!(Integer::from(123).checked_hamming_distance(&Integer::from(123)), Some(0)); // 105 = 1101001b, 123 = 1111011b assert_eq!(Integer::from(-105).checked_hamming_distance(&Integer::from(-123)), Some(2)); assert_eq!(Integer::from(-105).checked_hamming_distance(&Integer::from(123)), None); ``` ### impl<'a> CheckedRoot<u64> for &'a Integer #### fn checked_root(self, exp: u64) -> Option<Integer> Returns the $n$th root of an `Integer`, or `None` if the `Integer` is not a perfect $n$th power. The `Integer` is taken by reference. $$ f(x, n) = \begin{cases} \operatorname{Some}(\sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(999)).checked_root(3).to_debug_string(), "None"); assert_eq!((&Integer::from(1000)).checked_root(3).to_debug_string(), "Some(10)"); assert_eq!((&Integer::from(1001)).checked_root(3).to_debug_string(), "None"); assert_eq!((&Integer::from(100000000000i64)).checked_root(5).to_debug_string(), "None"); assert_eq!((&Integer::from(-100000000000i64)).checked_root(5).to_debug_string(), "None"); assert_eq!((&Integer::from(10000000000i64)).checked_root(5).to_debug_string(), "Some(100)"); assert_eq!( (&Integer::from(-10000000000i64)).checked_root(5).to_debug_string(), "Some(-100)" ); ``` #### type Output = Integer ### impl CheckedRoot<u64> for Integer #### fn checked_root(self, exp: u64) -> Option<Integer> Returns the $n$th root of an `Integer`, or `None` if the `Integer` is not a perfect $n$th power. The `Integer` is taken by value. $$ f(x, n) = \begin{cases} \operatorname{Some}(\sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative.
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!(Integer::from(999).checked_root(3).to_debug_string(), "None"); assert_eq!(Integer::from(1000).checked_root(3).to_debug_string(), "Some(10)"); assert_eq!(Integer::from(1001).checked_root(3).to_debug_string(), "None"); assert_eq!(Integer::from(100000000000i64).checked_root(5).to_debug_string(), "None"); assert_eq!(Integer::from(-100000000000i64).checked_root(5).to_debug_string(), "None"); assert_eq!(Integer::from(10000000000i64).checked_root(5).to_debug_string(), "Some(100)"); assert_eq!(Integer::from(-10000000000i64).checked_root(5).to_debug_string(), "Some(-100)"); ``` #### type Output = Integer ### impl<'a> CheckedSqrt for &'a Integer #### fn checked_sqrt(self) -> Option<Integer> Returns the square root of an `Integer`, or `None` if it is not a perfect square. The `Integer` is taken by reference. $$ f(x) = \begin{cases} \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedSqrt; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(99u8)).checked_sqrt().to_debug_string(), "None"); assert_eq!((&Integer::from(100u8)).checked_sqrt().to_debug_string(), "Some(10)"); assert_eq!((&Integer::from(101u8)).checked_sqrt().to_debug_string(), "None"); assert_eq!((&Integer::from(1000000000u32)).checked_sqrt().to_debug_string(), "None"); assert_eq!( (&Integer::from(10000000000u64)).checked_sqrt().to_debug_string(), "Some(100000)" ); ``` #### type Output = Integer ### impl CheckedSqrt for Integer #### fn checked_sqrt(self) -> Option<Integer> Returns the square root of an `Integer`, or `None` if it is not a perfect square. The `Integer` is taken by value. $$ f(x) = \begin{cases} \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedSqrt; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!(Integer::from(99u8).checked_sqrt().to_debug_string(), "None"); assert_eq!(Integer::from(100u8).checked_sqrt().to_debug_string(), "Some(10)"); assert_eq!(Integer::from(101u8).checked_sqrt().to_debug_string(), "None"); assert_eq!(Integer::from(1000000000u32).checked_sqrt().to_debug_string(), "None"); assert_eq!(Integer::from(10000000000u64).checked_sqrt().to_debug_string(), "Some(100000)"); ``` #### type Output = Integer ### impl Clone for Integer #### fn clone(&self) -> Integer Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl<'a> ConvertibleFrom<&'a Integer> for Natural #### fn convertible_from(value: &'a Integer) -> bool Determines whether an `Integer` can be converted to a `Natural` (when the `Integer` is non-negative). Takes the `Integer` by reference. ##### Worst-case complexity Constant time and additional memory.
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::convertible_from(&Integer::from(123)), true); assert_eq!(Natural::convertible_from(&Integer::from(-123)), false); assert_eq!(Natural::convertible_from(&Integer::from(10u32).pow(12)), true); assert_eq!(Natural::convertible_from(&-Integer::from(10u32).pow(12)), false); ``` ### impl<'a> ConvertibleFrom<&'a Integer> for f32 #### fn convertible_from(value: &'a Integer) -> bool Determines whether an `Integer` can be exactly converted to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for f64 #### fn convertible_from(value: &'a Integer) -> bool Determines whether an `Integer` can be exactly converted to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i128 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i16 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i32 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i64 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for i8 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for isize #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u128 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u16 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u32 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. 
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u64 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for u8 #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Integer> for usize #### fn convertible_from(value: &Integer) -> bool Determines whether an `Integer` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for Integer #### fn convertible_from(x: &Rational) -> bool Determines whether a `Rational` can be converted to an `Integer`, taking the `Rational` by reference. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Integer::convertible_from(&Rational::from(123)), true); assert_eq!(Integer::convertible_from(&Rational::from(-123)), true); assert_eq!(Integer::convertible_from(&Rational::from_signeds(22, 7)), false); ``` ### impl ConvertibleFrom<Integer> for Natural #### fn convertible_from(value: Integer) -> bool Determines whether an `Integer` can be converted to a `Natural` (when the `Integer` is non-negative). Takes the `Integer` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::convertible_from(Integer::from(123)), true); assert_eq!(Natural::convertible_from(Integer::from(-123)), false); assert_eq!(Natural::convertible_from(Integer::from(10u32).pow(12)), true); assert_eq!(Natural::convertible_from(-Integer::from(10u32).pow(12)), false); ``` ### impl ConvertibleFrom<f32> for Integer #### fn convertible_from(value: f32) -> bool Determines whether a primitive float can be exactly converted to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<f64> for Integer #### fn convertible_from(value: f64) -> bool Determines whether a primitive float can be exactly converted to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl Debug for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts an `Integer` to a `String`. This is the same as the `Display::fmt` implementation. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_debug_string(), "0"); assert_eq!(Integer::from(123).to_debug_string(), "123"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_debug_string(), "1000000000000" ); assert_eq!(format!("{:05?}", Integer::from(123)), "00123"); assert_eq!(Integer::from(-123).to_debug_string(), "-123"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_debug_string(), "-1000000000000" ); assert_eq!(format!("{:05?}", Integer::from(-123)), "-0123"); ``` ### impl Default for Integer #### fn default() -> Integer The default value of an `Integer`, 0. ### impl Display for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts an `Integer` to a `String`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_string(), "0"); assert_eq!(Integer::from(123).to_string(), "123"); assert_eq!( Integer::from_str("1000000000000").unwrap().to_string(), "1000000000000" ); assert_eq!(format!("{:05}", Integer::from(123)), "00123"); assert_eq!(Integer::from(-123).to_string(), "-123"); assert_eq!( Integer::from_str("-1000000000000").unwrap().to_string(), "-1000000000000" ); assert_eq!(format!("{:05}", Integer::from(-123)), "-0123"); ``` ### impl<'a, 'b> Div<&'b Integer> for &'a Integer #### fn div(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(&Integer::from(23) / &Integer::from(10), 2); // -2 * -10 + 3 = 23 assert_eq!(&Integer::from(23) / &Integer::from(-10), -2); // -2 * 10 + -3 = -23 assert_eq!(&Integer::from(-23) / &Integer::from(10), -2); // 2 * -10 + -3 = -23 assert_eq!(&Integer::from(-23) / &Integer::from(-10), 2); ``` #### type Output = Integer The resulting type after applying the `/` operator. ### impl<'a> Div<&'a Integer> for Integer #### fn div(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero.
##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23) / &Integer::from(10), 2); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23) / &Integer::from(-10), -2); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23) / &Integer::from(10), -2); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23) / &Integer::from(-10), 2); ``` #### type Output = Integer The resulting type after applying the `/` operator.### impl<'a> Div<Integer> for &'a Integer #### fn div(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(&Integer::from(23) / Integer::from(10), 2); // -2 * -10 + 3 = 23 assert_eq!(&Integer::from(23) / Integer::from(-10), -2); // -2 * 10 + -3 = -23 assert_eq!(&Integer::from(-23) / Integer::from(10), -2); // 2 * -10 + -3 = -23 assert_eq!(&Integer::from(-23) / Integer::from(-10), 2); ``` #### type Output = Integer The resulting type after applying the `/` operator.### impl Div<Integer> for Integer #### fn div(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23) / Integer::from(10), 2); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23) / Integer::from(-10), -2); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23) / Integer::from(10), -2); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23) / Integer::from(-10), 2); ``` #### type Output = Integer The resulting type after applying the `/` operator.### impl<'a> DivAssign<&'a Integer> for Integer #### fn div_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x /= &Integer::from(10); assert_eq!(x, 2); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x /= &Integer::from(-10); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x /= &Integer::from(10); assert_eq!(x, -2); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x /= &Integer::from(-10); assert_eq!(x, 2); ``` ### impl DivAssign<Integer> for Integer #### fn div_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value. The quotient is rounded towards zero. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x /= Integer::from(10); assert_eq!(x, 2); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x /= Integer::from(-10); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x /= Integer::from(10); assert_eq!(x, -2); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x /= Integer::from(-10); assert_eq!(x, 2); ``` ### impl<'a> DivAssignMod<&'a Integer> for Integer #### fn div_assign_mod(&mut self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference and returning the remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignMod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_mod(&Integer::from(10)), 3); assert_eq!(x, 2); // -3 * -10 + -7 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_mod(&Integer::from(-10)), -7); assert_eq!(x, -3); // -3 * 10 + 7 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_mod(&Integer::from(10)), 7); assert_eq!(x, -3); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_mod(&Integer::from(-10)), -3); assert_eq!(x, 2); ``` #### type ModOutput = Integer ### impl DivAssignMod<Integer> for Integer #### fn div_assign_mod(&mut self, other: Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value and returning the remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignMod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_mod(Integer::from(10)), 3); assert_eq!(x, 2); // -3 * -10 + -7 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_mod(Integer::from(-10)), -7); assert_eq!(x, -3); // -3 * 10 + 7 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_mod(Integer::from(10)), 7); assert_eq!(x, -3); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_mod(Integer::from(-10)), -3); assert_eq!(x, 2); ``` #### type ModOutput = Integer ### impl<'a> DivAssignRem<&'a Integer> for Integer #### fn div_assign_rem(&mut self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference and returning the remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, $$ $$ x \gets \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignRem; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_rem(&Integer::from(10)), 3); assert_eq!(x, 2); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_rem(&Integer::from(-10)), 3); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_rem(&Integer::from(10)), -3); assert_eq!(x, -2); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_rem(&Integer::from(-10)), -3); assert_eq!(x, 2); ``` #### type RemOutput = Integer ### impl DivAssignRem<Integer> for Integer #### fn div_assign_rem(&mut self, other: Integer) -> Integer Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value and returning the remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, $$ $$ x \gets \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_rem(Integer::from(10)), 3); assert_eq!(x, 2); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); assert_eq!(x.div_assign_rem(Integer::from(-10)), 3); assert_eq!(x, -2); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_rem(Integer::from(10)), -3); assert_eq!(x, -2); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); assert_eq!(x.div_assign_rem(Integer::from(-10)), -3); assert_eq!(x, 2); ``` #### type RemOutput = Integer ### impl<'a, 'b> DivExact<&'b Integer> for &'a Integer #### fn div_exact(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `&self / &other` instead. If you’re unsure and you want to know, use `(&self).div_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `(&self).div_round(&other, RoundingMode::Exact)`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 assert_eq!((&Integer::from(-56088)).div_exact(&Integer::from(456)), -123); // -123456789000 * -987654321000 = 121932631112635269000000 assert_eq!( (&Integer::from_str("121932631112635269000000").unwrap()) .div_exact(&Integer::from_str("-987654321000").unwrap()), -123456789000i64 ); ``` #### type Output = Integer ### impl<'a> DivExact<&'a Integer> for Integer #### fn div_exact(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self / &other` instead. If you’re unsure and you want to know, use `self.div_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 assert_eq!(Integer::from(-56088).div_exact(&Integer::from(456)), -123); // -123456789000 * -987654321000 = 121932631112635269000000 assert_eq!( Integer::from_str("121932631112635269000000").unwrap() .div_exact(&Integer::from_str("-987654321000").unwrap()), -123456789000i64 ); ``` #### type Output = Integer ### impl<'a> DivExact<Integer> for &'a Integer #### fn div_exact(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value. The first `Integer` must be exactly divisible by the second. 
If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `&self / other` instead. If you’re unsure and you want to know, use `self.div_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `(&self).div_round(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 assert_eq!((&Integer::from(-56088)).div_exact(Integer::from(456)), -123); // -123456789000 * -987654321000 = 121932631112635269000000 assert_eq!( (&Integer::from_str("121932631112635269000000").unwrap()) .div_exact(Integer::from_str("-987654321000").unwrap()), -123456789000i64 ); ``` #### type Output = Integer ### impl DivExact<Integer> for Integer #### fn div_exact(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self / other` instead. If you’re unsure and you want to know, use `self.div_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 assert_eq!(Integer::from(-56088).div_exact(Integer::from(456)), -123); // -123456789000 * -987654321000 = 121932631112635269000000 assert_eq!( Integer::from_str("121932631112635269000000").unwrap() .div_exact(Integer::from_str("-987654321000").unwrap()), -123456789000i64 ); ``` #### type Output = Integer ### impl<'a> DivExactAssign<&'a Integer> for Integer #### fn div_exact_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ x \gets \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self /= &other` instead. If you’re unsure and you want to know, use `self.div_assign_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round_assign(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivExactAssign; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 let mut x = Integer::from(-56088); x.div_exact_assign(&Integer::from(456)); assert_eq!(x, -123); // -123456789000 * -987654321000 = 121932631112635269000000 let mut x = Integer::from_str("121932631112635269000000").unwrap(); x.div_exact_assign(&Integer::from_str("-987654321000").unwrap()); assert_eq!(x, -123456789000i64); ``` ### impl DivExactAssign<Integer> for Integer #### fn div_exact_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value. The first `Integer` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ x \gets \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self /= other` instead. If you’re unsure and you want to know, use `self.div_assign_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round_assign(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExactAssign; use malachite_nz::integer::Integer; use std::str::FromStr; // -123 * 456 = -56088 let mut x = Integer::from(-56088); x.div_exact_assign(Integer::from(456)); assert_eq!(x, -123); // -123456789000 * -987654321000 = 121932631112635269000000 let mut x = Integer::from_str("121932631112635269000000").unwrap(); x.div_exact_assign(Integer::from_str("-987654321000").unwrap()); assert_eq!(x, -123456789000i64); ``` ### impl<'a, 'b> DivMod<&'b Integer> for &'a Integer #### fn div_mod(self, other: &'b Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by reference and returning the quotient and remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).div_mod(&Integer::from(10)).to_debug_string(), "(2, 3)"); // -3 * -10 + -7 = 23 assert_eq!( (&Integer::from(23)).div_mod(&Integer::from(-10)).to_debug_string(), "(-3, -7)" ); // -3 * 10 + 7 = -23 assert_eq!((&Integer::from(-23)).div_mod(&Integer::from(10)).to_debug_string(), "(-3, 7)"); // 2 * -10 + -3 = -23 assert_eq!( (&Integer::from(-23)).div_mod(&Integer::from(-10)).to_debug_string(), "(2, -3)" ); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a> DivMod<&'a Integer> for Integer #### fn div_mod(self, other: &'a Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).div_mod(&Integer::from(10)).to_debug_string(), "(2, 3)"); // -3 * -10 + -7 = 23 assert_eq!(Integer::from(23).div_mod(&Integer::from(-10)).to_debug_string(), "(-3, -7)"); // -3 * 10 + 7 = -23 assert_eq!(Integer::from(-23).div_mod(&Integer::from(10)).to_debug_string(), "(-3, 7)"); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).div_mod(&Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a> DivMod<Integer> for &'a Integer #### fn div_mod(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).div_mod(Integer::from(10)).to_debug_string(), "(2, 3)"); // -3 * -10 + -7 = 23 assert_eq!((&Integer::from(23)).div_mod(Integer::from(-10)).to_debug_string(), "(-3, -7)"); // -3 * 10 + 7 = -23 assert_eq!((&Integer::from(-23)).div_mod(Integer::from(10)).to_debug_string(), "(-3, 7)"); // 2 * -10 + -3 = -23 assert_eq!((&Integer::from(-23)).div_mod(Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl DivMod<Integer> for Integer #### fn div_mod(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by value and returning the quotient and remainder. The quotient is rounded towards negative infinity, and the remainder has the same sign as the second `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).div_mod(Integer::from(10)).to_debug_string(), "(2, 3)"); // -3 * -10 + -7 = 23 assert_eq!(Integer::from(23).div_mod(Integer::from(-10)).to_debug_string(), "(-3, -7)"); // -3 * 10 + 7 = -23 assert_eq!(Integer::from(-23).div_mod(Integer::from(10)).to_debug_string(), "(-3, 7)"); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).div_mod(Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type ModOutput = Integer ### impl<'a, 'b> DivRem<&'b Integer> for &'a Integer #### fn div_rem(self, other: &'b Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by reference and returning the quotient and remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, \space x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).div_rem(&Integer::from(10)).to_debug_string(), "(2, 3)"); // -2 * -10 + 3 = 23 assert_eq!((&Integer::from(23)).div_rem(&Integer::from(-10)).to_debug_string(), "(-2, 3)"); // -2 * 10 + -3 = -23 assert_eq!( (&Integer::from(-23)).div_rem(&Integer::from(10)).to_debug_string(), "(-2, -3)" ); // 2 * -10 + -3 = -23 assert_eq!( (&Integer::from(-23)).div_rem(&Integer::from(-10)).to_debug_string(), "(2, -3)" ); ``` #### type DivOutput = Integer #### type RemOutput = Integer ### impl<'a> DivRem<&'a Integer> for Integer #### fn div_rem(self, other: &'a Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, \space x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).div_rem(&Integer::from(10)).to_debug_string(), "(2, 3)"); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23).div_rem(&Integer::from(-10)).to_debug_string(), "(-2, 3)"); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23).div_rem(&Integer::from(10)).to_debug_string(), "(-2, -3)"); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).div_rem(&Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type RemOutput = Integer ### impl<'a> DivRem<Integer> for &'a Integer #### fn div_rem(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, \space x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
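To make the difference between the two sign conventions concrete, here is a small side-by-side sketch (values taken from the surrounding examples) contrasting `div_mod` (floor division, remainder follows the divisor) with `div_rem` (truncating division, remainder follows the dividend):

```
use malachite_base::num::arithmetic::traits::{DivMod, DivRem};
use malachite_nz::integer::Integer;

let x = Integer::from(-23);
let y = Integer::from(10);
// Floor division: remainder has the sign of the divisor.
assert_eq!((&x).div_mod(&y), (Integer::from(-3), Integer::from(7)));
// Truncating division: remainder has the sign of the dividend.
assert_eq!((&x).div_rem(&y), (Integer::from(-2), Integer::from(-3)));
```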
##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).div_rem(Integer::from(10)).to_debug_string(), "(2, 3)"); // -2 * -10 + 3 = 23 assert_eq!((&Integer::from(23)).div_rem(Integer::from(-10)).to_debug_string(), "(-2, 3)"); // -2 * 10 + -3 = -23 assert_eq!((&Integer::from(-23)).div_rem(Integer::from(10)).to_debug_string(), "(-2, -3)"); // 2 * -10 + -3 = -23 assert_eq!((&Integer::from(-23)).div_rem(Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type RemOutput = Integer ### impl DivRem<Integer> for Integer #### fn div_rem(self, other: Integer) -> (Integer, Integer) Divides an `Integer` by another `Integer`, taking both by value and returning the quotient and remainder. The quotient is rounded towards zero and the remainder has the same sign as the first `Integer`. The quotient and remainder satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = \left ( \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor, \space x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).div_rem(Integer::from(10)).to_debug_string(), "(2, 3)"); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23).div_rem(Integer::from(-10)).to_debug_string(), "(-2, 3)"); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23).div_rem(Integer::from(10)).to_debug_string(), "(-2, -3)"); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).div_rem(Integer::from(-10)).to_debug_string(), "(2, -3)"); ``` #### type DivOutput = Integer #### type RemOutput = Integer ### impl<'a, 'b> DivRound<&'b Integer> for &'a Integer #### fn div_round(self, other: &'b Integer, rm: RoundingMode) -> (Integer, Ordering) Divides an `Integer` by another `Integer`, taking both by reference and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor. $$ $$ g(x, y, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil. $$ $$ g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. 
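The `Nearest` case above resolves ties (quotients ending in exactly .5) towards the even quotient. A brief illustrative sketch of the tie-breaking rule, using only calls documented on this page:

```
use malachite_base::num::arithmetic::traits::DivRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

// 10 / 4 = 2.5 rounds to the even quotient 2; 14 / 4 = 3.5 rounds to the even quotient 4.
assert_eq!(
    (&Integer::from(10)).div_round(&Integer::from(4), RoundingMode::Nearest),
    (Integer::from(2), Ordering::Less)
);
assert_eq!(
    (&Integer::from(14)).div_round(&Integer::from(4), RoundingMode::Nearest),
    (Integer::from(4), Ordering::Greater)
);
```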
##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(4), RoundingMode::Down), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(&Integer::from(3), RoundingMode::Floor), (Integer::from(-333333333334i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(4), RoundingMode::Up), (Integer::from(-3), Ordering::Less) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(&Integer::from(3), RoundingMode::Ceiling), (Integer::from(-333333333333i64), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(5), RoundingMode::Exact), (Integer::from(-2), Ordering::Equal) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(3), RoundingMode::Nearest), (Integer::from(-3), Ordering::Greater) ); assert_eq!( (&Integer::from(-20)).div_round(&Integer::from(3), RoundingMode::Nearest), (Integer::from(-7), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(4), RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (&Integer::from(-14)).div_round(&Integer::from(4), RoundingMode::Nearest), (Integer::from(-4), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-4), RoundingMode::Down), (Integer::from(2), Ordering::Less) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(&Integer::from(-3), RoundingMode::Floor), (Integer::from(333333333333i64), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-4), RoundingMode::Up), (Integer::from(3), Ordering::Greater) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(&Integer::from(-3), RoundingMode::Ceiling), (Integer::from(333333333334i64), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-5), RoundingMode::Exact), (Integer::from(2), Ordering::Equal) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-3), RoundingMode::Nearest), (Integer::from(3), Ordering::Less) ); assert_eq!( (&Integer::from(-20)).div_round(&Integer::from(-3), RoundingMode::Nearest), (Integer::from(7), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(&Integer::from(-4), RoundingMode::Nearest), (Integer::from(2), Ordering::Less) ); assert_eq!( (&Integer::from(-14)).div_round(&Integer::from(-4), RoundingMode::Nearest), (Integer::from(4), Ordering::Greater) ); ``` #### type Output = Integer ### impl<'a> DivRound<&'a Integer> for Integer #### fn div_round(self, other: &'a Integer, rm: RoundingMode) -> (Integer, Ordering) Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor. $$ $$ g(x, y, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil. 
$$ $$ g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!( Integer::from(-10).div_round(&Integer::from(4), RoundingMode::Down), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(&Integer::from(3), RoundingMode::Floor), (Integer::from(-333333333334i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(4), RoundingMode::Up), (Integer::from(-3), Ordering::Less) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(&Integer::from(3), RoundingMode::Ceiling), (Integer::from(-333333333333i64), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(5), RoundingMode::Exact), (Integer::from(-2), Ordering::Equal) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(3), RoundingMode::Nearest), (Integer::from(-3), Ordering::Greater) ); assert_eq!( Integer::from(-20).div_round(&Integer::from(3), RoundingMode::Nearest), (Integer::from(-7), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(4), RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater) ); assert_eq!( Integer::from(-14).div_round(&Integer::from(4), RoundingMode::Nearest), (Integer::from(-4), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-4), RoundingMode::Down), (Integer::from(2), Ordering::Less) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(&Integer::from(-3), RoundingMode::Floor), (Integer::from(333333333333i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-4), RoundingMode::Up), (Integer::from(3), Ordering::Greater) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(&Integer::from(-3), RoundingMode::Ceiling), (Integer::from(333333333334i64), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-5), RoundingMode::Exact), (Integer::from(2), Ordering::Equal) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-3), RoundingMode::Nearest), (Integer::from(3), Ordering::Less) ); assert_eq!( Integer::from(-20).div_round(&Integer::from(-3), RoundingMode::Nearest), (Integer::from(7), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(&Integer::from(-4), RoundingMode::Nearest), (Integer::from(2), Ordering::Less) ); assert_eq!( Integer::from(-14).div_round(&Integer::from(-4), RoundingMode::Nearest), (Integer::from(4), Ordering::Greater) ); ``` #### type Output = Integer ### impl<'a> DivRound<Integer> for &'a Integer #### fn 
div_round(self, other: Integer, rm: RoundingMode) -> (Integer, Ordering) Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor. $$ $$ g(x, y, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil. $$ $$ g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!( (&Integer::from(-10)).div_round(Integer::from(4), RoundingMode::Down), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(Integer::from(3), RoundingMode::Floor), (Integer::from(-333333333334i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(4), RoundingMode::Up), (Integer::from(-3), Ordering::Less) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(Integer::from(3), RoundingMode::Ceiling), (Integer::from(-333333333333i64), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(5), RoundingMode::Exact), (Integer::from(-2), Ordering::Equal) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(3), RoundingMode::Nearest), (Integer::from(-3), Ordering::Greater) ); assert_eq!( (&Integer::from(-20)).div_round(Integer::from(3), RoundingMode::Nearest), (Integer::from(-7), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(4), RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (&Integer::from(-14)).div_round(Integer::from(4), RoundingMode::Nearest), (Integer::from(-4), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-4), RoundingMode::Down), (Integer::from(2), Ordering::Less) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(Integer::from(-3), RoundingMode::Floor), (Integer::from(333333333333i64), Ordering::Less) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-4), RoundingMode::Up), (Integer::from(3), Ordering::Greater) ); assert_eq!( (&-Integer::from(10u32).pow(12)).div_round(Integer::from(-3), RoundingMode::Ceiling), (Integer::from(333333333334i64), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-5), RoundingMode::Exact), 
(Integer::from(2), Ordering::Equal) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-3), RoundingMode::Nearest), (Integer::from(3), Ordering::Less) ); assert_eq!( (&Integer::from(-20)).div_round(Integer::from(-3), RoundingMode::Nearest), (Integer::from(7), Ordering::Greater) ); assert_eq!( (&Integer::from(-10)).div_round(Integer::from(-4), RoundingMode::Nearest), (Integer::from(2), Ordering::Less) ); assert_eq!( (&Integer::from(-14)).div_round(Integer::from(-4), RoundingMode::Nearest), (Integer::from(4), Ordering::Greater) ); ``` #### type Output = Integer ### impl DivRound<Integer> for Integer #### fn div_round(self, other: Integer, rm: RoundingMode) -> (Integer, Ordering) Divides an `Integer` by another `Integer`, taking both by value and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor. $$ $$ g(x, y, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil. $$ $$ g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
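Before the full example set below, a minimal sketch of what the returned `Ordering` means: it compares the rounded quotient against the exact rational quotient (here $7/2 = 3.5$), so `Floor` reports `Less`, `Ceiling` reports `Greater`, and an exact division reports `Equal`:

```
use malachite_base::num::arithmetic::traits::DivRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

assert_eq!(
    Integer::from(7).div_round(Integer::from(2), RoundingMode::Floor),
    (Integer::from(3), Ordering::Less)
);
assert_eq!(
    Integer::from(7).div_round(Integer::from(2), RoundingMode::Ceiling),
    (Integer::from(4), Ordering::Greater)
);
assert_eq!(
    Integer::from(8).div_round(Integer::from(2), RoundingMode::Exact),
    (Integer::from(4), Ordering::Equal)
);
```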
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!( Integer::from(-10).div_round(Integer::from(4), RoundingMode::Down), (Integer::from(-2), Ordering::Greater) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(Integer::from(3), RoundingMode::Floor), (Integer::from(-333333333334i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(4), RoundingMode::Up), (Integer::from(-3), Ordering::Less) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(Integer::from(3), RoundingMode::Ceiling), (Integer::from(-333333333333i64), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(Integer::from(5), RoundingMode::Exact), (Integer::from(-2), Ordering::Equal) ); assert_eq!( Integer::from(-10).div_round(Integer::from(3), RoundingMode::Nearest), (Integer::from(-3), Ordering::Greater) ); assert_eq!( Integer::from(-20).div_round(Integer::from(3), RoundingMode::Nearest), (Integer::from(-7), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(4), RoundingMode::Nearest), (Integer::from(-2), Ordering::Greater) ); assert_eq!( Integer::from(-14).div_round(Integer::from(4), RoundingMode::Nearest), (Integer::from(-4), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-4), RoundingMode::Down), (Integer::from(2), Ordering::Less) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(Integer::from(-3), RoundingMode::Floor), (Integer::from(333333333333i64), Ordering::Less) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-4), RoundingMode::Up), (Integer::from(3), Ordering::Greater) ); assert_eq!( (-Integer::from(10u32).pow(12)).div_round(Integer::from(-3), RoundingMode::Ceiling), (Integer::from(333333333334i64), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-5), RoundingMode::Exact), (Integer::from(2), Ordering::Equal) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-3), RoundingMode::Nearest), (Integer::from(3), Ordering::Less) ); assert_eq!( Integer::from(-20).div_round(Integer::from(-3), RoundingMode::Nearest), (Integer::from(7), Ordering::Greater) ); assert_eq!( Integer::from(-10).div_round(Integer::from(-4), RoundingMode::Nearest), (Integer::from(2), Ordering::Less) ); assert_eq!( Integer::from(-14).div_round(Integer::from(-4), RoundingMode::Nearest), (Integer::from(4), Ordering::Greater) ); ``` #### type Output = Integer ### impl<'a> DivRoundAssign<&'a Integer> for Integer #### fn div_round_assign(&mut self, other: &'a Integer, rm: RoundingMode) -> Ordering Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference and rounding according to a specified rounding mode. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `DivRound` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
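As a quick illustration (not part of the official examples), `div_round_assign` behaves like `div_round` except that the quotient replaces `self` and only the `Ordering` is returned; a hedged sketch using small operand values:

```
use malachite_base::num::arithmetic::traits::{DivRound, DivRoundAssign};
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;

// In-place form: the quotient is stored in `n`, and only the Ordering is returned.
let mut n = Integer::from(7);
let ord = n.div_round_assign(&Integer::from(2), RoundingMode::Ceiling);
assert_eq!(
    (n, ord),
    Integer::from(7).div_round(&Integer::from(2), RoundingMode::Ceiling)
);
```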
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRoundAssign, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(4), RoundingMode::Down), Ordering::Greater); assert_eq!(n, -2); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(&Integer::from(3), RoundingMode::Floor), Ordering::Less); assert_eq!(n, -333333333334i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(4), RoundingMode::Up), Ordering::Less); assert_eq!(n, -3); let mut n = -Integer::from(10u32).pow(12); assert_eq!( n.div_round_assign(&Integer::from(3), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, -333333333333i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(5), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, -2); let mut n = Integer::from(-10); assert_eq!( n.div_round_assign(&Integer::from(3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, -3); let mut n = Integer::from(-20); assert_eq!(n.div_round_assign(&Integer::from(3), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, -7); let mut n = Integer::from(-10); assert_eq!( n.div_round_assign(&Integer::from(4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, -2); let mut n = Integer::from(-14); assert_eq!(n.div_round_assign(&Integer::from(4), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, -4); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-4), RoundingMode::Down), Ordering::Less); assert_eq!(n, 2); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(&Integer::from(-3), RoundingMode::Floor), Ordering::Less); assert_eq!(n, 333333333333i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-4), RoundingMode::Up), Ordering::Greater); assert_eq!(n, 3); let mut n = -Integer::from(10u32).pow(12); assert_eq!( n.div_round_assign(&Integer::from(-3), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 333333333334i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-5), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, 2); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-3), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 3); let mut n = Integer::from(-20); assert_eq!( n.div_round_assign(&Integer::from(-3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 7); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(&Integer::from(-4), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 2); let mut n = Integer::from(-14); assert_eq!( n.div_round_assign(&Integer::from(-4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 4); ``` ### impl DivRoundAssign<Integer> for Integer #### fn div_round_assign(&mut self, other: Integer, rm: RoundingMode) -> Ordering Divides an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value and rounding according to a specified rounding mode. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `DivRound` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRoundAssign, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(4), RoundingMode::Down), Ordering::Greater); assert_eq!(n, -2); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(Integer::from(3), RoundingMode::Floor), Ordering::Less); assert_eq!(n, -333333333334i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(4), RoundingMode::Up), Ordering::Less); assert_eq!(n, -3); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(Integer::from(3), RoundingMode::Ceiling), Ordering::Greater); assert_eq!(n, -333333333333i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(5), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, -2); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(3), RoundingMode::Nearest), Ordering::Greater); assert_eq!(n, -3); let mut n = Integer::from(-20); assert_eq!(n.div_round_assign(Integer::from(3), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, -7); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(4), RoundingMode::Nearest), Ordering::Greater); assert_eq!(n, -2); let mut n = Integer::from(-14); assert_eq!(n.div_round_assign(Integer::from(4), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, -4); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-4), RoundingMode::Down), Ordering::Less); assert_eq!(n, 2); let mut n = -Integer::from(10u32).pow(12); assert_eq!(n.div_round_assign(Integer::from(-3), RoundingMode::Floor), Ordering::Less); assert_eq!(n, 333333333333i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-4), RoundingMode::Up), Ordering::Greater); assert_eq!(n, 3); let mut n = -Integer::from(10u32).pow(12); assert_eq!( n.div_round_assign(Integer::from(-3), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 333333333334i64); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-5), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, 2); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-3), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 3); let mut n = Integer::from(-20); assert_eq!( n.div_round_assign(Integer::from(-3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 7); let mut n = Integer::from(-10); assert_eq!(n.div_round_assign(Integer::from(-4), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 2); let mut n = Integer::from(-14); assert_eq!( n.div_round_assign(Integer::from(-4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 4); ``` ### impl<'a, 'b> DivisibleBy<&'b Integer> for &'a Integer #### fn divisible_by(self, other: &'b Integer) -> bool Returns whether an `Integer` is divisible by another `Integer`; in other words, whether the first is a multiple of the second. Both `Integer`s are taken by reference. This means that zero is divisible by any `Integer`, including zero; but a nonzero `Integer` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. 
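For intuition only, the claim above can be checked against the `%` operator (assuming the standard `Rem` impls that malachite provides for `Integer` references); `divisible_by` gives the same answer for a nonzero divisor but does not need to produce the full remainder:

```
use malachite_base::num::arithmetic::traits::DivisibleBy;
use malachite_nz::integer::Integer;

let x = Integer::from(102);
let y = Integer::from(-3);
// Same predicate as "the remainder is zero" (for a nonzero divisor),
// but divisible_by avoids materializing the remainder.
assert_eq!((&x).divisible_by(&y), &x % &y == 0);
```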
##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!((&Integer::ZERO).divisible_by(&Integer::ZERO), true); assert_eq!((&Integer::from(-100)).divisible_by(&Integer::from(-3)), false); assert_eq!((&Integer::from(102)).divisible_by(&Integer::from(-3)), true); assert_eq!( (&Integer::from_str("-1000000000000000000000000").unwrap()) .divisible_by(&Integer::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleBy<&'a Integer> for Integer #### fn divisible_by(self, other: &'a Integer) -> bool Returns whether an `Integer` is divisible by another `Integer`; in other words, whether the first is a multiple of the second. The first `Integer` is taken by value and the second by reference. This means that zero is divisible by any `Integer`, including zero; but a nonzero `Integer` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.divisible_by(&Integer::ZERO), true); assert_eq!(Integer::from(-100).divisible_by(&Integer::from(-3)), false); assert_eq!(Integer::from(102).divisible_by(&Integer::from(-3)), true); assert_eq!( Integer::from_str("-1000000000000000000000000").unwrap() .divisible_by(&Integer::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleBy<Integer> for &'a Integer #### fn divisible_by(self, other: Integer) -> bool Returns whether an `Integer` is divisible by another `Integer`; in other words, whether the first is a multiple of the second. The first `Integer` is taken by reference and the second by value. This means that zero is divisible by any `Integer`, including zero; but a nonzero `Integer` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!((&Integer::ZERO).divisible_by(Integer::ZERO), true); assert_eq!((&Integer::from(-100)).divisible_by(Integer::from(-3)), false); assert_eq!((&Integer::from(102)).divisible_by(Integer::from(-3)), true); assert_eq!( (&Integer::from_str("-1000000000000000000000000").unwrap()) .divisible_by(Integer::from_str("1000000000000").unwrap()), true ); ``` ### impl DivisibleBy<Integer> for Integer #### fn divisible_by(self, other: Integer) -> bool Returns whether an `Integer` is divisible by another `Integer`; in other words, whether the first is a multiple of the second. Both `Integer`s are taken by value. This means that zero is divisible by any `Integer`, including zero; but a nonzero `Integer` is never divisible by zero. 
It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.divisible_by(Integer::ZERO), true); assert_eq!(Integer::from(-100).divisible_by(Integer::from(-3)), false); assert_eq!(Integer::from(102).divisible_by(Integer::from(-3)), true); assert_eq!( Integer::from_str("-1000000000000000000000000").unwrap() .divisible_by(Integer::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleByPowerOf2 for &'a Integer #### fn divisible_by_power_of_2(self, pow: u64) -> bool Returns whether an `Integer` is divisible by $2^k$. $f(x, k) = (2^k|x)$. $f(x, k) = (\exists n \in \N : \ x = n2^k)$. If `self` is 0, the result is always true; otherwise, it is equivalent to `self.trailing_zeros().unwrap() >= pow`, but more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(pow, self.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{DivisibleByPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.divisible_by_power_of_2(100), true); assert_eq!(Integer::from(-100).divisible_by_power_of_2(2), true); assert_eq!(Integer::from(100u32).divisible_by_power_of_2(3), false); assert_eq!((-Integer::from(10u32).pow(12)).divisible_by_power_of_2(12), true); assert_eq!((-Integer::from(10u32).pow(12)).divisible_by_power_of_2(13), false); ``` ### impl<'a, 'b, 'c> EqMod<&'b Integer, &'c Natural> for &'a Integer #### fn eq_mod(self, other: &'b Integer, m: &'c Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. All three numbers are taken by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(&Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'a Integer, &'b Natural> for Integer #### fn eq_mod(self, other: &'a Integer, m: &'b Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first number is taken by value and the second and third by reference.
Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(&Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'b Integer, Natural> for &'a Integer #### fn eq_mod(self, other: &'b Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first two numbers are taken by reference and the third by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(&Integer::from(223), Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<&'a Integer, Natural> for Integer #### fn eq_mod(self, other: &'a Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first and third numbers are taken by value and the second by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
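Another way to read the definition above: `eq_mod` is the same predicate as asking whether the modulus divides the difference. A hedged sketch tying it to `divisible_by` and the `From<&Natural>` conversion documented later on this page:

```
use malachite_base::num::arithmetic::traits::{DivisibleBy, EqMod};
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;

let x = Integer::from(123);
let y = Integer::from(223);
let m = Natural::from(100u32);
// x is equivalent to y mod m exactly when m divides x - y.
assert_eq!(
    (&x).eq_mod(&y, &m),
    (&x - &y).divisible_by(Integer::from(&m))
);
```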
##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(&Integer::from(223), Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<Integer, &'b Natural> for &'a Integer #### fn eq_mod(self, other: Integer, m: &'b Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first and third numbers are taken by reference and the second by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Integer, &'a Natural> for Integer #### fn eq_mod(self, other: Integer, m: &'a Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first two numbers are taken by value and the third by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Integer, Natural> for &'a Integer #### fn eq_mod(self, other: Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`.
The first number is taken by reference and the second and third by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(Integer::from(223), Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl EqMod<Integer, Natural> for Integer #### fn eq_mod(self, other: Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. All three numbers are taken by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(Integer::from(223), Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqModPowerOf2<&'b Integer> for &'a Integer #### fn eq_mod_power_of_2(self, other: &'b Integer, pow: u64) -> bool Returns whether one `Integer` is equal to another modulo $2^k$; that is, whether their $k$ least-significant bits (in two’s complement) are equal. $f(x, y, k) = (x \equiv y \mod 2^k)$. $f(x, y, k) = (\exists n \in \Z : x - y = n2^k)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(pow, self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqModPowerOf2; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.eq_mod_power_of_2(&Integer::from(-256), 8), true); assert_eq!(Integer::from(-0b1101).eq_mod_power_of_2(&Integer::from(0b11011), 3), true); assert_eq!(Integer::from(-0b1101).eq_mod_power_of_2(&Integer::from(0b11011), 4), false); ``` ### impl<'a, 'b> ExtendedGcd<&'a Integer> for &'b Integer #### fn extended_gcd(self, other: &'a Integer) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Integer`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. 
Both `Integer`s are taken by reference. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(a, ak) = (-a, -1, 0)$ if $a < 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(bk, b) = (-b, 0, -1)$ if $b < 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(|a|, |b|)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(3)).extended_gcd(&Integer::from(5)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( (&Integer::from(240)).extended_gcd(&Integer::from(46)).to_debug_string(), "(2, -9, 47)" ); assert_eq!( (&Integer::from(-111)).extended_gcd(&Integer::from(300)).to_debug_string(), "(3, 27, 10)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> ExtendedGcd<&'a Integer> for Integer #### fn extended_gcd(self, other: &'a Integer) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Integer`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. The first `Integer` is taken by value and the second by reference. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(a, ak) = (-a, -1, 0)$ if $a < 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(bk, b) = (-b, 0, -1)$ if $b < 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(|a|, |b|)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(3).extended_gcd(&Integer::from(5)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( Integer::from(240).extended_gcd(&Integer::from(46)).to_debug_string(), "(2, -9, 47)" ); assert_eq!( Integer::from(-111).extended_gcd(&Integer::from(300)).to_debug_string(), "(3, 27, 10)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> ExtendedGcd<Integer> for &'a Integer #### fn extended_gcd(self, other: Integer) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Integer`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. The first `Integer` is taken by reference and the second by value. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(a, ak) = (-a, -1, 0)$ if $a < 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(bk, b) = (-b, 0, -1)$ if $b < 0$.
* $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(|a|, |b|)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(3)).extended_gcd(Integer::from(5)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( (&Integer::from(240)).extended_gcd(Integer::from(46)).to_debug_string(), "(2, -9, 47)" ); assert_eq!( (&Integer::from(-111)).extended_gcd(Integer::from(300)).to_debug_string(), "(3, 27, 10)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl ExtendedGcd<Integer> for Integer #### fn extended_gcd(self, other: Integer) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Integer`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. Both `Integer`s are taken by value. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(a, ak) = (-a, -1, 0)$ if $a < 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(bk, b) = (-b, 0, -1)$ if $b < 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(|a|, |b|)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(3).extended_gcd(Integer::from(5)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( Integer::from(240).extended_gcd(Integer::from(46)).to_debug_string(), "(2, -9, 47)" ); assert_eq!( Integer::from(-111).extended_gcd(Integer::from(300)).to_debug_string(), "(3, 27, 10)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> FloorRoot<u64> for &'a Integer #### fn floor_root(self, exp: u64) -> Integer Returns the floor of the $n$th root of an `Integer`, taking the `Integer` by reference. $f(x, n) = \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRoot; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(999)).floor_root(3), 9); assert_eq!((&Integer::from(1000)).floor_root(3), 10); assert_eq!((&Integer::from(1001)).floor_root(3), 10); assert_eq!((&Integer::from(100000000000i64)).floor_root(5), 158); assert_eq!((&Integer::from(-100000000000i64)).floor_root(5), -159); ``` #### type Output = Integer ### impl FloorRoot<u64> for Integer #### fn floor_root(self, exp: u64) -> Integer Returns the floor of the $n$th root of an `Integer`, taking the `Integer` by value.
$f(x, n) = \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRoot; use malachite_nz::integer::Integer; assert_eq!(Integer::from(999).floor_root(3), 9); assert_eq!(Integer::from(1000).floor_root(3), 10); assert_eq!(Integer::from(1001).floor_root(3), 10); assert_eq!(Integer::from(100000000000i64).floor_root(5), 158); assert_eq!(Integer::from(-100000000000i64).floor_root(5), -159); ``` #### type Output = Integer ### impl FloorRootAssign<u64> for Integer #### fn floor_root_assign(&mut self, exp: u64) Replaces an `Integer` with the floor of its $n$th root. $x \gets \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRootAssign; use malachite_nz::integer::Integer; let mut x = Integer::from(999); x.floor_root_assign(3); assert_eq!(x, 9); let mut x = Integer::from(1000); x.floor_root_assign(3); assert_eq!(x, 10); let mut x = Integer::from(1001); x.floor_root_assign(3); assert_eq!(x, 10); let mut x = Integer::from(100000000000i64); x.floor_root_assign(5); assert_eq!(x, 158); let mut x = Integer::from(-100000000000i64); x.floor_root_assign(5); assert_eq!(x, -159); ``` ### impl<'a> FloorSqrt for &'a Integer #### fn floor_sqrt(self) -> Integer Returns the floor of the square root of an `Integer`, taking it by reference. $f(x) = \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrt; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(99)).floor_sqrt(), 9); assert_eq!((&Integer::from(100)).floor_sqrt(), 10); assert_eq!((&Integer::from(101)).floor_sqrt(), 10); assert_eq!((&Integer::from(1000000000)).floor_sqrt(), 31622); assert_eq!((&Integer::from(10000000000u64)).floor_sqrt(), 100000); ``` #### type Output = Integer ### impl FloorSqrt for Integer #### fn floor_sqrt(self) -> Integer Returns the floor of the square root of an `Integer`, taking it by value. $f(x) = \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrt; use malachite_nz::integer::Integer; assert_eq!(Integer::from(99).floor_sqrt(), 9); assert_eq!(Integer::from(100).floor_sqrt(), 10); assert_eq!(Integer::from(101).floor_sqrt(), 10); assert_eq!(Integer::from(1000000000).floor_sqrt(), 31622); assert_eq!(Integer::from(10000000000u64).floor_sqrt(), 100000); ``` #### type Output = Integer ### impl FloorSqrtAssign for Integer #### fn floor_sqrt_assign(&mut self) Replaces an `Integer` with the floor of its square root. $x \gets \lfloor\sqrt{x}\rfloor$. 
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrtAssign; use malachite_nz::integer::Integer; let mut x = Integer::from(99); x.floor_sqrt_assign(); assert_eq!(x, 9); let mut x = Integer::from(100); x.floor_sqrt_assign(); assert_eq!(x, 10); let mut x = Integer::from(101); x.floor_sqrt_assign(); assert_eq!(x, 10); let mut x = Integer::from(1000000000); x.floor_sqrt_assign(); assert_eq!(x, 31622); let mut x = Integer::from(10000000000u64); x.floor_sqrt_assign(); assert_eq!(x, 100000); ``` ### impl<'a> From<&'a Integer> for Rational #### fn from(value: &'a Integer) -> Rational Converts an `Integer` to a `Rational`, taking the `Integer` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Rational::from(&Integer::from(123)), 123); assert_eq!(Rational::from(&Integer::from(-123)), -123); ``` ### impl<'a> From<&'a Natural> for Integer #### fn from(value: &'a Natural) -> Integer Converts a `Natural` to an `Integer`, taking the `Natural` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from(&Natural::from(123u32)), 123); assert_eq!(Integer::from(&Natural::from(10u32).pow(12)), 1000000000000u64); ``` ### impl From<Integer> for Rational #### fn from(value: Integer) -> Rational Converts an `Integer` to a `Rational`, taking the `Integer` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Rational::from(Integer::from(123)), 123); assert_eq!(Rational::from(Integer::from(-123)), -123); ``` ### impl From<Natural> for Integer #### fn from(value: Natural) -> Integer Converts a `Natural` to an `Integer`, taking the `Natural` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from(Natural::from(123u32)), 123); assert_eq!(Integer::from(Natural::from(10u32).pow(12)), 1000000000000u64); ``` ### impl From<bool> for Integer #### fn from(b: bool) -> Integer Converts a `bool` to 0 or 1. This function is known as the Iverson bracket. $$ f(P) = [P] = \begin{cases} 1 & \text{if} \quad P, \\ 0 & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; assert_eq!(Integer::from(false), 0); assert_eq!(Integer::from(true), 1); ``` ### impl From<i128> for Integer #### fn from(i: i128) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i16> for Integer #### fn from(i: i16) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
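The `From` conversions documented above (with the remaining primitive widths listed below) can be combined freely. The following is a hedged sketch of typical usage; it assumes nothing beyond the `From` impls shown in this section and the `PartialEq` impls for primitives documented further below.

```
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;

fn main() {
    // `bool` converts via the Iverson bracket: `false` maps to 0 and `true` to 1.
    assert_eq!(Integer::from(true), 1);
    assert_eq!(Integer::from(false), 0);

    // A `Natural` converts losslessly, whether taken by reference or by value.
    let n = Natural::from(1000000u32);
    assert_eq!(Integer::from(&n), 1000000);
    assert_eq!(Integer::from(n), 1000000);

    // Primitive integers of any width convert in constant time.
    assert_eq!(Integer::from(-123i16), -123);
    assert_eq!(Integer::from(u64::MAX), 18446744073709551615u64);
}
```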
### impl From<i32> for Integer #### fn from(i: i32) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i64> for Integer #### fn from(i: i64) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i8> for Integer #### fn from(i: i8) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<isize> for Integer #### fn from(i: isize) -> Integer Converts a signed primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u128> for Integer #### fn from(u: u128) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u16> for Integer #### fn from(u: u16) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u32> for Integer #### fn from(u: u32) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u64> for Integer #### fn from(u: u64) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u8> for Integer #### fn from(u: u8) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<usize> for Integer #### fn from(u: usize) -> Integer Converts an unsigned primitive integer to an `Integer`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl FromSciString for Integer #### fn from_sci_string_with_options(s: &str, options: FromSciStringOptions) -> Option<Integer> Converts a string, possibly in scientific notation, to an `Integer`. Use `FromSciStringOptions` to specify the base (from 2 to 36, inclusive) and the rounding mode, in case rounding is necessary because the string represents a non-integer. If the base is greater than 10, the higher digits are represented by the letters `'a'` through `'z'` or `'A'` through `'Z'`; the case doesn’t matter and doesn’t need to be consistent. Exponents are allowed, and are indicated using the character `'e'` or `'E'`. If the base is 15 or greater, an ambiguity arises where it may not be clear whether `'e'` is a digit or an exponent indicator. To resolve this ambiguity, always use a `'+'` or `'-'` sign after the exponent indicator when the base is 15 or greater. The exponent itself is always parsed using base 10. Decimal (or other-base) points are allowed. These are most useful in conjunction with exponents, but they may be used on their own. If the string represents a non-integer, the rounding mode specified in `options` is used to round to an integer. If the string is unparseable, `None` is returned. `None` is also returned if the rounding mode in `options` is `Exact`, but rounding is necessary. 
##### Worst-case complexity $T(n, m) = O(m^n n \log m (\log n + \log\log m))$ $M(n, m) = O(m^n n \log m)$ where $T$ is time, $M$ is additional memory, $n$ is `s.len()`, and $m$ is `options.base`. ##### Examples ``` use malachite_base::num::conversion::string::options::FromSciStringOptions; use malachite_base::num::conversion::traits::FromSciString; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; assert_eq!(Integer::from_sci_string("123").unwrap(), 123); assert_eq!(Integer::from_sci_string("123.5").unwrap(), 124); assert_eq!(Integer::from_sci_string("-123.5").unwrap(), -124); assert_eq!(Integer::from_sci_string("1.23e10").unwrap(), 12300000000i64); let mut options = FromSciStringOptions::default(); assert_eq!(Integer::from_sci_string_with_options("123.5", options).unwrap(), 124); options.set_rounding_mode(RoundingMode::Floor); assert_eq!(Integer::from_sci_string_with_options("123.5", options).unwrap(), 123); options = FromSciStringOptions::default(); options.set_base(16); assert_eq!(Integer::from_sci_string_with_options("ff", options).unwrap(), 255); ``` #### fn from_sci_string(s: &str) -> Option<Self> Converts a `&str`, possibly in scientific notation, to a number, using the default `FromSciStringOptions`. ### impl FromStr for Integer #### fn from_str(s: &str) -> Result<Integer, ()> Converts a string to an `Integer`. If the string does not represent a valid `Integer`, an `Err` is returned. To be valid, the string must be nonempty and only contain the `char`s `'0'` through `'9'`, with an optional leading `'-'`. Leading zeros are allowed, as is the string `"-0"`. The string `"-"` is not. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Examples ``` use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::from_str("123456").unwrap(), 123456); assert_eq!(Integer::from_str("00123456").unwrap(), 123456); assert_eq!(Integer::from_str("0").unwrap(), 0); assert_eq!(Integer::from_str("-123456").unwrap(), -123456); assert_eq!(Integer::from_str("-00123456").unwrap(), -123456); assert_eq!(Integer::from_str("-0").unwrap(), 0); assert!(Integer::from_str("").is_err()); assert!(Integer::from_str("a").is_err()); ``` #### type Err = () The associated error which can be returned from parsing. ### impl FromStringBase for Integer #### fn from_string_base(base: u8, s: &str) -> Option<Integer> Converts a string, in a specified base, to an `Integer`. If the string does not represent a valid `Integer`, `None` is returned. To be valid, the string must be nonempty and only contain the `char`s `'0'` through `'9'`, `'a'` through `'z'`, and `'A'` through `'Z'`, with an optional leading `'-'`; and only characters that represent digits smaller than the base are allowed. Leading zeros are allowed, as is the string `"-0"`. The string `"-"` is not. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Panics Panics if `base` is less than 2 or greater than 36. 
##### Examples ``` use malachite_base::num::conversion::traits::{Digits, FromStringBase}; use malachite_nz::integer::Integer; assert_eq!(Integer::from_string_base(10, "123456").unwrap(), 123456); assert_eq!(Integer::from_string_base(10, "00123456").unwrap(), 123456); assert_eq!(Integer::from_string_base(16, "0").unwrap(), 0); assert_eq!( Integer::from_string_base(16, "deadbeef").unwrap(), 3735928559i64 ); assert_eq!( Integer::from_string_base(16, "deAdBeEf").unwrap(), 3735928559i64 ); assert_eq!(Integer::from_string_base(10, "-123456").unwrap(), -123456); assert_eq!(Integer::from_string_base(10, "-00123456").unwrap(), -123456); assert_eq!(Integer::from_string_base(16, "-0").unwrap(), 0); assert_eq!( Integer::from_string_base(16, "-deadbeef").unwrap(), -3735928559i64 ); assert_eq!( Integer::from_string_base(16, "-deAdBeEf").unwrap(), -3735928559i64 ); assert!(Integer::from_string_base(10, "").is_none()); assert!(Integer::from_string_base(10, "a").is_none()); assert!(Integer::from_string_base(2, "2").is_none()); assert!(Integer::from_string_base(2, "-2").is_none()); ``` ### impl Hash for Integer #### fn hash<__H>(&self, state: &mut __H) where __H: Hasher, Feeds this value into the given `Hasher`. #### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. ### impl IsInteger for Integer #### fn is_integer(self) -> bool Determines whether an `Integer` is an integer. It always returns `true`. $f(x) = \textrm{true}$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::{NegativeOne, One, Zero}; use malachite_base::num::conversion::traits::IsInteger; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.is_integer(), true); assert_eq!(Integer::ONE.is_integer(), true); assert_eq!(Integer::from(100).is_integer(), true); assert_eq!(Integer::NEGATIVE_ONE.is_integer(), true); assert_eq!(Integer::from(-100).is_integer(), true); ``` ### impl<'a, 'b> JacobiSymbol<&'a Integer> for &'b Integer #### fn jacobi_symbol(self, other: &'a Integer) -> i8 Computes the Jacobi symbol of two `Integer`s, taking both by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `self` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).jacobi_symbol(&Integer::from(5)), 0); assert_eq!((&Integer::from(7)).jacobi_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(11)).jacobi_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(11)).jacobi_symbol(&Integer::from(9)), 1); assert_eq!((&Integer::from(-7)).jacobi_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).jacobi_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(-11)).jacobi_symbol(&Integer::from(9)), 1); ``` ### impl<'a> JacobiSymbol<&'a Integer> for Integer #### fn jacobi_symbol(self, other: &'a Integer) -> i8 Computes the Jacobi symbol of two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Panics Panics if `self` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).jacobi_symbol(&Integer::from(5)), 0); assert_eq!(Integer::from(7).jacobi_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(11).jacobi_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(11).jacobi_symbol(&Integer::from(9)), 1); assert_eq!(Integer::from(-7).jacobi_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(-11).jacobi_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(-11).jacobi_symbol(&Integer::from(9)), 1); ``` ### impl<'a> JacobiSymbol<Integer> for &'a Integer #### fn jacobi_symbol(self, other: Integer) -> i8 Computes the Jacobi symbol of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `self` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).jacobi_symbol(Integer::from(5)), 0); assert_eq!((&Integer::from(7)).jacobi_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(11)).jacobi_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(11)).jacobi_symbol(Integer::from(9)), 1); assert_eq!((&Integer::from(-7)).jacobi_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).jacobi_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(-11)).jacobi_symbol(Integer::from(9)), 1); ``` ### impl JacobiSymbol<Integer> for Integer #### fn jacobi_symbol(self, other: Integer) -> i8 Computes the Jacobi symbol of two `Integer`s, taking both by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `self` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).jacobi_symbol(Integer::from(5)), 0); assert_eq!(Integer::from(7).jacobi_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(11).jacobi_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(11).jacobi_symbol(Integer::from(9)), 1); assert_eq!(Integer::from(-7).jacobi_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(-11).jacobi_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(-11).jacobi_symbol(Integer::from(9)), 1); ``` ### impl<'a, 'b> KroneckerSymbol<&'a Integer> for &'b Integer #### fn kronecker_symbol(self, other: &'a Integer) -> i8 Computes the Kronecker symbol of two `Integer`s, taking both by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).kronecker_symbol(&Integer::from(5)), 0); assert_eq!((&Integer::from(7)).kronecker_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(11)).kronecker_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(11)).kronecker_symbol(&Integer::from(9)), 1); assert_eq!((&Integer::from(11)).kronecker_symbol(&Integer::from(8)), -1); assert_eq!((&Integer::from(-7)).kronecker_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).kronecker_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(-11)).kronecker_symbol(&Integer::from(9)), 1); assert_eq!((&Integer::from(-11)).kronecker_symbol(&Integer::from(8)), -1); assert_eq!((&Integer::from(-11)).kronecker_symbol(&Integer::from(-8)), 1); ``` ### impl<'a> KroneckerSymbol<&'a Integer> for Integer #### fn kronecker_symbol(self, other: &'a Integer) -> i8 Computes the Kronecker symbol of two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).kronecker_symbol(&Integer::from(5)), 0); assert_eq!(Integer::from(7).kronecker_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(11).kronecker_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(11).kronecker_symbol(&Integer::from(9)), 1); assert_eq!(Integer::from(11).kronecker_symbol(&Integer::from(8)), -1); assert_eq!(Integer::from(-7).kronecker_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(-11).kronecker_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(-11).kronecker_symbol(&Integer::from(9)), 1); assert_eq!(Integer::from(-11).kronecker_symbol(&Integer::from(8)), -1); assert_eq!(Integer::from(-11).kronecker_symbol(&Integer::from(-8)), 1); ``` ### impl<'a> KroneckerSymbol<Integer> for &'a Integer #### fn kronecker_symbol(self, other: Integer) -> i8 Computes the Kronecker symbol of two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).kronecker_symbol(Integer::from(5)), 0); assert_eq!((&Integer::from(7)).kronecker_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(11)).kronecker_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(11)).kronecker_symbol(Integer::from(9)), 1); assert_eq!((&Integer::from(11)).kronecker_symbol(Integer::from(8)), -1); assert_eq!((&Integer::from(-7)).kronecker_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).kronecker_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(-11)).kronecker_symbol(Integer::from(9)), 1); assert_eq!((&Integer::from(-11)).kronecker_symbol(Integer::from(8)), -1); assert_eq!((&Integer::from(-11)).kronecker_symbol(Integer::from(-8)), 1); ``` ### impl KroneckerSymbol<Integer> for Integer #### fn kronecker_symbol(self, other: Integer) -> i8 Computes the Kronecker symbol of two `Integer`s, taking both by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).kronecker_symbol(Integer::from(5)), 0); assert_eq!(Integer::from(7).kronecker_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(11).kronecker_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(11).kronecker_symbol(Integer::from(9)), 1); assert_eq!(Integer::from(11).kronecker_symbol(Integer::from(8)), -1); assert_eq!(Integer::from(-7).kronecker_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(-11).kronecker_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(-11).kronecker_symbol(Integer::from(9)), 1); assert_eq!(Integer::from(-11).kronecker_symbol(Integer::from(8)), -1); assert_eq!(Integer::from(-11).kronecker_symbol(Integer::from(-8)), 1); ``` ### impl<'a, 'b> LegendreSymbol<&'a Integer> for &'b Integer #### fn legendre_symbol(self, other: &'a Integer) -> i8 Computes the Legendre symbol of two `Integer`s, taking both by reference. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `self` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).legendre_symbol(&Integer::from(5)), 0); assert_eq!((&Integer::from(7)).legendre_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(11)).legendre_symbol(&Integer::from(5)), 1); assert_eq!((&Integer::from(-7)).legendre_symbol(&Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).legendre_symbol(&Integer::from(5)), 1); ``` ### impl<'a> LegendreSymbol<&'a Integer> for Integer #### fn legendre_symbol(self, other: &'a Integer) -> i8 Computes the Legendre symbol of two `Integer`s, taking the first by value and the second by reference. 
This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `self` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).legendre_symbol(&Integer::from(5)), 0); assert_eq!(Integer::from(7).legendre_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(11).legendre_symbol(&Integer::from(5)), 1); assert_eq!(Integer::from(-7).legendre_symbol(&Integer::from(5)), -1); assert_eq!(Integer::from(-11).legendre_symbol(&Integer::from(5)), 1); ``` ### impl<'a> LegendreSymbol<Integer> for &'a Integer #### fn legendre_symbol(self, other: Integer) -> i8 Computes the Legendre symbol of two `Integer`s, taking the first by reference and the second by value. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `self` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10)).legendre_symbol(Integer::from(5)), 0); assert_eq!((&Integer::from(7)).legendre_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(11)).legendre_symbol(Integer::from(5)), 1); assert_eq!((&Integer::from(-7)).legendre_symbol(Integer::from(5)), -1); assert_eq!((&Integer::from(-11)).legendre_symbol(Integer::from(5)), 1); ``` ### impl LegendreSymbol<Integer> for Integer #### fn legendre_symbol(self, other: Integer) -> i8 Computes the Legendre symbol of two `Integer`s, taking both by value. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `self` is negative or if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10).legendre_symbol(Integer::from(5)), 0); assert_eq!(Integer::from(7).legendre_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(11).legendre_symbol(Integer::from(5)), 1); assert_eq!(Integer::from(-7).legendre_symbol(Integer::from(5)), -1); assert_eq!(Integer::from(-11).legendre_symbol(Integer::from(5)), 1); ``` ### impl LowMask for Integer #### fn low_mask(bits: u64) -> Integer Returns an `Integer` whose least significant $b$ bits are `true` and whose other bits are `false`. $f(b) = 2^b - 1$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `bits`. 
##### Examples ``` use malachite_base::num::logic::traits::LowMask; use malachite_nz::integer::Integer; assert_eq!(Integer::low_mask(0), 0); assert_eq!(Integer::low_mask(3), 7); assert_eq!(Integer::low_mask(100).to_string(), "1267650600228229401496703205375"); ``` ### impl LowerHex for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts an `Integer` to a hexadecimal `String` using lowercase characters. Using the `#` format flag prepends `"0x"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToLowerHexString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_lower_hex_string(), "0"); assert_eq!(Integer::from(123).to_lower_hex_string(), "7b"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_lower_hex_string(), "e8d4a51000" ); assert_eq!(format!("{:07x}", Integer::from(123)), "000007b"); assert_eq!(Integer::from(-123).to_lower_hex_string(), "-7b"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_lower_hex_string(), "-e8d4a51000" ); assert_eq!(format!("{:07x}", Integer::from(-123)), "-00007b"); assert_eq!(format!("{:#x}", Integer::ZERO), "0x0"); assert_eq!(format!("{:#x}", Integer::from(123)), "0x7b"); assert_eq!( format!("{:#x}", Integer::from_str("1000000000000").unwrap()), "0xe8d4a51000" ); assert_eq!(format!("{:#07x}", Integer::from(123)), "0x0007b"); assert_eq!(format!("{:#x}", Integer::from(-123)), "-0x7b"); assert_eq!( format!("{:#x}", Integer::from_str("-1000000000000").unwrap()), "-0xe8d4a51000" ); assert_eq!(format!("{:#07x}", Integer::from(-123)), "-0x007b"); ``` ### impl<'a, 'b> Mod<&'b Integer> for &'a Integer #### fn mod_op(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference and returning just the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).mod_op(&Integer::from(10)), 3); // -3 * -10 + -7 = 23 assert_eq!((&Integer::from(23)).mod_op(&Integer::from(-10)), -7); // -3 * 10 + 7 = -23 assert_eq!((&Integer::from(-23)).mod_op(&Integer::from(10)), 7); // 2 * -10 + -3 = -23 assert_eq!((&Integer::from(-23)).mod_op(&Integer::from(-10)), -3); ``` #### type Output = Integer ### impl<'a> Mod<&'a Integer> for Integer #### fn mod_op(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning just the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. 
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).mod_op(&Integer::from(10)), 3); // -3 * -10 + -7 = 23 assert_eq!(Integer::from(23).mod_op(&Integer::from(-10)), -7); // -3 * 10 + 7 = -23 assert_eq!(Integer::from(-23).mod_op(&Integer::from(10)), 7); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).mod_op(&Integer::from(-10)), -3); ``` #### type Output = Integer ### impl<'a> Mod<Integer> for &'a Integer #### fn mod_op(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning just the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!((&Integer::from(23)).mod_op(Integer::from(10)), 3); // -3 * -10 + -7 = 23 assert_eq!((&Integer::from(23)).mod_op(Integer::from(-10)), -7); // -3 * 10 + 7 = -23 assert_eq!((&Integer::from(-23)).mod_op(Integer::from(10)), 7); // 2 * -10 + -3 = -23 assert_eq!((&Integer::from(-23)).mod_op(Integer::from(-10)), -3); ``` #### type Output = Integer ### impl Mod<Integer> for Integer #### fn mod_op(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value and returning just the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23).mod_op(Integer::from(10)), 3); // -3 * -10 + -7 = 23 assert_eq!(Integer::from(23).mod_op(Integer::from(-10)), -7); // -3 * 10 + 7 = -23 assert_eq!(Integer::from(-23).mod_op(Integer::from(10)), 7); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23).mod_op(Integer::from(-10)), -3); ``` #### type Output = Integer ### impl<'a> ModAssign<&'a Integer> for Integer #### fn mod_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer`, taking the second `Integer` by reference and replacing the first by the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAssign; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x.mod_assign(&Integer::from(10)); assert_eq!(x, 3); // -3 * -10 + -7 = 23 let mut x = Integer::from(23); x.mod_assign(&Integer::from(-10)); assert_eq!(x, -7); // -3 * 10 + 7 = -23 let mut x = Integer::from(-23); x.mod_assign(&Integer::from(10)); assert_eq!(x, 7); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x.mod_assign(&Integer::from(-10)); assert_eq!(x, -3); ``` ### impl ModAssign<Integer> for Integer #### fn mod_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer`, taking the second `Integer` by value and replacing the first by the remainder. The remainder has the same sign as the second `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAssign; use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x.mod_assign(Integer::from(10)); assert_eq!(x, 3); // -3 * -10 + -7 = 23 let mut x = Integer::from(23); x.mod_assign(Integer::from(-10)); assert_eq!(x, -7); // -3 * 10 + 7 = -23 let mut x = Integer::from(-23); x.mod_assign(Integer::from(10)); assert_eq!(x, 7); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x.mod_assign(Integer::from(-10)); assert_eq!(x, -3); ``` ### impl<'a> ModPowerOf2 for &'a Integer #### fn mod_power_of_2(self, pow: u64) -> Natural Divides an `Integer` by $2^k$, taking it by reference and returning just the remainder. The remainder is non-negative. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ Unlike `rem_power_of_2`, this function always returns a non-negative number. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 assert_eq!((&Integer::from(260)).mod_power_of_2(8), 4); // -101 * 2^4 + 5 = -1611 assert_eq!((&Integer::from(-1611)).mod_power_of_2(4), 5); ``` #### type Output = Natural ### impl ModPowerOf2 for Integer #### fn mod_power_of_2(self, pow: u64) -> Natural Divides an `Integer` by $2^k$, taking it by value and returning just the remainder. The remainder is non-negative. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ Unlike `rem_power_of_2`, this function always returns a non-negative number. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 assert_eq!(Integer::from(260).mod_power_of_2(8), 4); // -101 * 2^4 + 5 = -1611 assert_eq!(Integer::from(-1611).mod_power_of_2(4), 5); ``` #### type Output = Natural ### impl ModPowerOf2Assign for Integer #### fn mod_power_of_2_assign(&mut self, pow: u64) Divides an `Integer` by $2^k$, replacing the `Integer` by the remainder. The remainder is non-negative. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ x \gets x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ Unlike `rem_power_of_2_assign`, this function always assigns a non-negative number. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Assign; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 let mut x = Integer::from(260); x.mod_power_of_2_assign(8); assert_eq!(x, 4); // -101 * 2^4 + 5 = -1611 let mut x = Integer::from(-1611); x.mod_power_of_2_assign(4); assert_eq!(x, 5); ``` ### impl<'a, 'b> Mul<&'a Integer> for &'b Integer #### fn mul(self, other: &'a Integer) -> Integer Multiplies two `Integer`s, taking both by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::ONE * &Integer::from(123), 123); assert_eq!(&Integer::from(123) * &Integer::ZERO, 0); assert_eq!(&Integer::from(123) * &Integer::from(-456), -56088); assert_eq!( (&Integer::from(-123456789000i64) * &Integer::from(-987654321000i64)).to_string(), "121932631112635269000000" ); ``` #### type Output = Integer The resulting type after applying the `*` operator. ### impl<'a> Mul<&'a Integer> for Integer #### fn mul(self, other: &'a Integer) -> Integer Multiplies two `Integer`s, taking the first by value and the second by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ONE * &Integer::from(123), 123); assert_eq!(Integer::from(123) * &Integer::ZERO, 0); assert_eq!(Integer::from(123) * &Integer::from(-456), -56088); assert_eq!( (Integer::from(-123456789000i64) * &Integer::from(-987654321000i64)).to_string(), "121932631112635269000000" ); ``` #### type Output = Integer The resulting type after applying the `*` operator. ### impl<'a> Mul<Integer> for &'a Integer #### fn mul(self, other: Integer) -> Integer Multiplies two `Integer`s, taking the first by reference and the second by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(&Integer::ONE * Integer::from(123), 123); assert_eq!(&Integer::from(123) * Integer::ZERO, 0); assert_eq!(&Integer::from(123) * Integer::from(-456), -56088); assert_eq!( (&Integer::from(-123456789000i64) * Integer::from(-987654321000i64)).to_string(), "121932631112635269000000" ); ``` #### type Output = Integer The resulting type after applying the `*` operator. ### impl Mul<Integer> for Integer #### fn mul(self, other: Integer) -> Integer Multiplies two `Integer`s, taking both by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ONE * Integer::from(123), 123); assert_eq!(Integer::from(123) * Integer::ZERO, 0); assert_eq!(Integer::from(123) * Integer::from(-456), -56088); assert_eq!( (Integer::from(-123456789000i64) * Integer::from(-987654321000i64)).to_string(), "121932631112635269000000" ); ``` #### type Output = Integer The resulting type after applying the `*` operator. ### impl<'a> MulAssign<&'a Integer> for Integer #### fn mul_assign(&mut self, other: &'a Integer) Multiplies an `Integer` by an `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; use std::str::FromStr; let mut x = Integer::NEGATIVE_ONE; x *= &Integer::from(1000); x *= &Integer::from(2000); x *= &Integer::from(3000); x *= &Integer::from(4000); assert_eq!(x, -24000000000000i64); ``` ### impl MulAssign<Integer> for Integer #### fn mul_assign(&mut self, other: Integer) Multiplies an `Integer` by an `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::NegativeOne; use malachite_nz::integer::Integer; use std::str::FromStr; let mut x = Integer::NEGATIVE_ONE; x *= Integer::from(1000); x *= Integer::from(2000); x *= Integer::from(3000); x *= Integer::from(4000); assert_eq!(x, -24000000000000i64); ``` ### impl Named for Integer #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl<'a> Neg for &'a Integer #### fn neg(self) -> Integer Negates an `Integer`, taking it by reference. $$ f(x) = -x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(-&Integer::ZERO, 0); assert_eq!(-&Integer::from(123), -123); assert_eq!(-&Integer::from(-123), 123); ``` #### type Output = Integer The resulting type after applying the `-` operator. ### impl Neg for Integer #### fn neg(self) -> Integer Negates an `Integer`, taking it by value. $$ f(x) = -x. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(-Integer::ZERO, 0); assert_eq!(-Integer::from(123), -123); assert_eq!(-Integer::from(-123), 123); ``` #### type Output = Integer The resulting type after applying the `-` operator. ### impl NegAssign for Integer #### fn neg_assign(&mut self) Negates an `Integer` in place. $$ x \gets -x. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.neg_assign(); assert_eq!(x, 0); let mut x = Integer::from(123); x.neg_assign(); assert_eq!(x, -123); let mut x = Integer::from(-123); x.neg_assign(); assert_eq!(x, 123); ``` ### impl NegativeOne for Integer The constant -1. #### const NEGATIVE_ONE: Integer = _ ### impl<'a> Not for &'a Integer #### fn not(self) -> Integer Returns the bitwise negation of an `Integer`, taking it by reference. $$ f(n) = -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(!&Integer::ZERO, -1); assert_eq!(!&Integer::from(123), -124); assert_eq!(!&Integer::from(-123), 122); ``` #### type Output = Integer The resulting type after applying the `!` operator. ### impl Not for Integer #### fn not(self) -> Integer Returns the bitwise negation of an `Integer`, taking it by value. $$ f(n) = -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(!Integer::ZERO, -1); assert_eq!(!Integer::from(123), -124); assert_eq!(!Integer::from(-123), 122); ``` #### type Output = Integer The resulting type after applying the `!` operator. ### impl NotAssign for Integer #### fn not_assign(&mut self) Replaces an `Integer` with its bitwise negation. $$ n \gets -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::NotAssign; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.not_assign(); assert_eq!(x, -1); let mut x = Integer::from(123); x.not_assign(); assert_eq!(x, -124); let mut x = Integer::from(-123); x.not_assign(); assert_eq!(x, 122); ``` ### impl Octal for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts an `Integer` to an octal `String`. Using the `#` format flag prepends `"0o"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToOctalString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_octal_string(), "0"); assert_eq!(Integer::from(123).to_octal_string(), "173"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_octal_string(), "16432451210000" ); assert_eq!(format!("{:07o}", Integer::from(123)), "0000173"); assert_eq!(Integer::from(-123).to_octal_string(), "-173"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_octal_string(), "-16432451210000" ); assert_eq!(format!("{:07o}", Integer::from(-123)), "-000173"); assert_eq!(format!("{:#o}", Integer::ZERO), "0o0"); assert_eq!(format!("{:#o}", Integer::from(123)), "0o173"); assert_eq!( format!("{:#o}", Integer::from_str("1000000000000").unwrap()), "0o16432451210000" ); assert_eq!(format!("{:#07o}", Integer::from(123)), "0o00173"); assert_eq!(format!("{:#o}", Integer::from(-123)), "-0o173"); assert_eq!( format!("{:#o}", Integer::from_str("-1000000000000").unwrap()), "-0o16432451210000" ); assert_eq!(format!("{:#07o}", Integer::from(-123)), "-0o0173"); ``` ### impl One for Integer The constant 1. #### const ONE: Integer = _ ### impl Ord for Integer #### fn cmp(&self, other: &Integer) -> Ordering Compares two `Integer`s. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; assert!(Integer::from(-123) < Integer::from(-122)); assert!(Integer::from(-123) <= Integer::from(-122)); assert!(Integer::from(-123) > Integer::from(-124)); assert!(Integer::from(-123) >= Integer::from(-124)); ``` #### fn max(self, other: Self) -> Self where Self: Sized, Compares and returns the maximum of two values. #### fn min(self, other: Self) -> Self where Self: Sized, Compares and returns the minimum of two values. #### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>, Restricts a value to a certain interval. ### impl OrdAbs for Integer #### fn cmp_abs(&self, other: &Integer) -> Ordering Compares the absolute values of two `Integer`s. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; assert!(Integer::from(-123).lt_abs(&Integer::from(-124))); assert!(Integer::from(-123).le_abs(&Integer::from(-124))); assert!(Integer::from(-124).gt_abs(&Integer::from(-123))); assert!(Integer::from(-124).ge_abs(&Integer::from(-123))); ``` ### impl<'a> OverflowingFrom<&'a Integer> for i128 #### fn overflowing_from(value: &Integer) -> (i128, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for i16 #### fn overflowing_from(value: &Integer) -> (i16, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. 
##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for i32 #### fn overflowing_from(value: &Integer) -> (i32, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for i64 #### fn overflowing_from(value: &Integer) -> (i64, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for i8 #### fn overflowing_from(value: &Integer) -> (i8, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for isize #### fn overflowing_from(value: &Integer) -> (isize, bool) Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u128 #### fn overflowing_from(value: &Integer) -> (u128, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u16 #### fn overflowing_from(value: &Integer) -> (u16, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u32 #### fn overflowing_from(value: &Integer) -> (u32, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u64 #### fn overflowing_from(value: &Integer) -> (u64, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Integer> for u8 #### fn overflowing_from(value: &Integer) -> (u8, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
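All of the `OverflowingFrom` impls above share the same semantics: wrap modulo $2^W$ and report whether wrapping occurred. The sketch below is hedged; it assumes only that behavior, plus the assumption that `OverflowingFrom` lives alongside the other `malachite_base` conversion traits used in this section.

```
use malachite_base::num::conversion::traits::OverflowingFrom;
use malachite_nz::integer::Integer;

fn main() {
    // A value that fits in the target type is returned unchanged; the flag is `false`.
    assert_eq!(u8::overflowing_from(&Integer::from(100)), (100, false));

    // 260 does not fit in a u8: it wraps modulo 2^8 to 4, and the flag is `true`.
    assert_eq!(u8::overflowing_from(&Integer::from(260)), (4, true));

    // Negative values wrap modulo 2^8 as well when the target is unsigned.
    assert_eq!(u8::overflowing_from(&Integer::from(-1)), (255, true));

    // Signed targets also wrap modulo 2^W: 128 becomes -128 as an i8.
    assert_eq!(i8::overflowing_from(&Integer::from(128)), (-128, true));
}
```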
### impl<'a> OverflowingFrom<&'a Integer> for usize #### fn overflowing_from(value: &Integer) -> (usize, bool) Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> Parity for &'a Integer #### fn even(self) -> bool Tests whether an `Integer` is even. $f(x) = (2|x)$. $f(x) = (\exists k \in \N : x = 2k)$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Parity, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.even(), true); assert_eq!(Integer::from(123).even(), false); assert_eq!(Integer::from(-0x80).even(), true); assert_eq!(Integer::from(10u32).pow(12).even(), true); assert_eq!((-Integer::from(10u32).pow(12) - Integer::ONE).even(), false); ``` #### fn odd(self) -> bool Tests whether an `Integer` is odd. $f(x) = (2\nmid x)$. $f(x) = (\exists k \in \N : x = 2k+1)$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Parity, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.odd(), false); assert_eq!(Integer::from(123).odd(), true); assert_eq!(Integer::from(-0x80).odd(), false); assert_eq!(Integer::from(10u32).pow(12).odd(), false); assert_eq!((-Integer::from(10u32).pow(12) - Integer::ONE).odd(), true); ``` ### impl PartialEq<Integer> for Natural #### fn eq(&self, other: &Integer) -> bool Determines whether a `Natural` is equal to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32) == Integer::from(123)); assert!(Natural::from(123u32) != Integer::from(5)); ``` ### impl PartialEq<Integer> for Rational #### fn eq(&self, other: &Integer) -> bool Determines whether a `Rational` is equal to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Rational::from(-123) == Integer::from(-123)); assert!(Rational::from_signeds(22, 7) != Integer::from(5)); ``` ### impl PartialEq<Natural> for Integer #### fn eq(&self, other: &Natural) -> bool Determines whether an `Integer` is equal to a `Natural`. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())` ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123) == Natural::from(123u32)); assert!(Integer::from(123) != Natural::from(5u32)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Rational> for Integer #### fn eq(&self, other: &Rational) -> bool Determines whether an `Integer` is equal to a `Rational`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Integer::from(-123) == Rational::from(-123)); assert!(Integer::from(5) != Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f32> for Integer #### fn eq(&self, other: &f32) -> bool Determines whether an `Integer` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f64> for Integer #### fn eq(&self, other: &f64) -> bool Determines whether an `Integer` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i128> for Integer #### fn eq(&self, other: &i128) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i16> for Integer #### fn eq(&self, other: &i16) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i32> for Integer #### fn eq(&self, other: &i32) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i64> for Integer #### fn eq(&self, other: &i64) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i8> for Integer #### fn eq(&self, other: &i8) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<isize> for Integer #### fn eq(&self, other: &isize) -> bool Determines whether an `Integer` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u128> for Integer #### fn eq(&self, other: &u128) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u16> for Integer #### fn eq(&self, other: &u16) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u32> for Integer #### fn eq(&self, other: &u32) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u64> for Integer #### fn eq(&self, other: &u64) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u8> for Integer #### fn eq(&self, other: &u8) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<usize> for Integer #### fn eq(&self, other: &usize) -> bool Determines whether an `Integer` is equal to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Integer> for Integer #### fn eq(&self, other: &Integer) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<Integer> for Natural #### fn partial_cmp(&self, other: &Integer) -> Option<OrderingCompares a `Natural` to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32) > Integer::from(122)); assert!(Natural::from(123u32) >= Integer::from(122)); assert!(Natural::from(123u32) < Integer::from(124)); assert!(Natural::from(123u32) <= Integer::from(124)); assert!(Natural::from(123u32) > Integer::from(-123)); assert!(Natural::from(123u32) >= Integer::from(-123)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Integer) -> Option<OrderingCompares a `Rational` to an `Integer`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Rational::from_signeds(22, 7) > Integer::from(3)); assert!(Rational::from_signeds(22, 7) < Integer::from(4)); assert!(Rational::from_signeds(-22, 7) < Integer::from(-3)); assert!(Rational::from_signeds(-22, 7) > Integer::from(-4)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Natural) -> Option<OrderingCompares an `Integer` to a `Natural`. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123) > Natural::from(122u32)); assert!(Integer::from(123) >= Natural::from(122u32)); assert!(Integer::from(123) < Natural::from(124u32)); assert!(Integer::from(123) <= Natural::from(124u32)); assert!(Integer::from(-123) < Natural::from(123u32)); assert!(Integer::from(-123) <= Natural::from(123u32)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Rational) -> Option<OrderingCompares an `Integer` to a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Integer::from(3) < Rational::from_signeds(22, 7)); assert!(Integer::from(4) > Rational::from_signeds(22, 7)); assert!(Integer::from(-3) > Rational::from_signeds(-22, 7)); assert!(Integer::from(-4) < Rational::from_signeds(-22, 7)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f32) -> Option<OrderingCompares an `Integer` to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f64) -> Option<OrderingCompares an `Integer` to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `self.significant_bits()`. ##### Examples See here. 
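The float comparisons above are only linked ("See here"); a small sketch of the `PartialOrd<f64>` implementation for `Integer`, with values picked purely for illustration, might read:

```
use malachite_nz::integer::Integer;

// Exact integers compare against nearby floats as expected.
assert!(Integer::from(123) < 123.5f64);
assert!(Integer::from(124) > 123.5f64);
// Signs are taken into account, unlike in the *_abs comparisons below.
assert!(Integer::from(-124) < -123.5f64);
```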
1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i128) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i16) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i32) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i64) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. 
Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i8) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &isize) -> Option<OrderingCompares an `Integer` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u128) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u16) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. 
Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u32) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u64) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u8) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &usize) -> Option<OrderingCompares an `Integer` to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. 
Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Integer) -> Option<OrderingCompares two `Integer`s. See the documentation for the `Ord` implementation. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a `Natural` and an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32).gt_abs(&Integer::from(122))); assert!(Natural::from(123u32).ge_abs(&Integer::from(122))); assert!(Natural::from(123u32).lt_abs(&Integer::from(124))); assert!(Natural::from(123u32).le_abs(&Integer::from(124))); assert!(Natural::from(123u32).lt_abs(&Integer::from(-124))); assert!(Natural::from(123u32).le_abs(&Integer::from(-124))); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a `Rational` and an `Integer`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Rational::from_signeds(22, 7).partial_cmp_abs(&Integer::from(3)), Some(Ordering::Greater) ); assert_eq!( Rational::from_signeds(-22, 7).partial_cmp_abs(&Integer::from(-3)), Some(Ordering::Greater) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a primitive float and an `Integer`. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a primitive float and an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. 
##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a signed primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. 
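Since the examples for these implementations are only linked, here is a hedged sketch of an unsigned primitive compared by absolute value against an `Integer`, reusing the `PartialOrdAbs` import shown in the earlier examples; the values are illustrative:

```
use malachite_base::num::comparison::traits::PartialOrdAbs;
use malachite_nz::integer::Integer;

// |123| < |-124|, even though 123 > -124 under the ordinary ordering.
assert!(123u32.lt_abs(&Integer::from(-124)));
assert!(200u8.gt_abs(&Integer::from(-100)));
```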
#### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and an `Integer`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute values of an `Integer` and a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123).gt_abs(&Natural::from(122u32))); assert!(Integer::from(123).ge_abs(&Natural::from(122u32))); assert!(Integer::from(123).lt_abs(&Natural::from(124u32))); assert!(Integer::from(123).le_abs(&Natural::from(124u32))); assert!(Integer::from(-124).gt_abs(&Natural::from(123u32))); assert!(Integer::from(-124).ge_abs(&Natural::from(123u32))); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an `Integer` and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Integer::from(3).partial_cmp_abs(&Rational::from_signeds(22, 7)), Some(Ordering::Less) ); assert_eq!( Integer::from(-3).partial_cmp_abs(&Rational::from_signeds(-22, 7)), Some(Ordering::Less) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f32) -> Option<OrderingCompares the absolute values of an `Integer` and a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f64) -> Option<OrderingCompares the absolute values of an `Integer` and a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i128) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. 
Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i16) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i32) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i64) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i8) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &isize) -> Option<OrderingCompares the absolute values of an `Integer` and a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. 
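A brief illustrative sketch of the `Integer`-versus-signed-primitive comparisons documented above (values chosen here, not taken from the linked examples):

```
use malachite_base::num::comparison::traits::PartialOrdAbs;
use malachite_nz::integer::Integer;

// Only absolute values are compared, so -123 is "greater" than 122 here.
assert!(Integer::from(-123).gt_abs(&122i32));
assert!(Integer::from(-123).lt_abs(&-124i64));
```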
#### fn partial_cmp_abs(&self, other: &u128) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u16) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u32) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u64) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u8) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. 
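For the unsigned case, the same trait can be exercised through `partial_cmp_abs` directly; this is an illustrative sketch rather than the linked examples:

```
use malachite_base::num::comparison::traits::PartialOrdAbs;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

assert_eq!(Integer::from(-200).partial_cmp_abs(&100u32), Some(Ordering::Greater));
assert_eq!(Integer::from(-100).partial_cmp_abs(&100u8), Some(Ordering::Equal));
```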
#### fn partial_cmp_abs(&self, other: &usize) -> Option<OrderingCompares the absolute values of an `Integer` and an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of two `Integer`s. See the documentation for the `OrdAbs` implementation. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn pow(self, exp: u64) -> Integer Raises an `Integer` to a power, taking the `Integer` by reference. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!( (&Integer::from(-3)).pow(100).to_string(), "515377520732011331036461129765621272702107522001" ); assert_eq!( (&Integer::from_str("-12345678987654321").unwrap()).pow(3).to_string(), "-1881676411868862234942354805142998028003108518161" ); ``` #### type Output = Integer ### impl Pow<u64> for Integer #### fn pow(self, exp: u64) -> Integer Raises an `Integer` to a power, taking the `Integer` by value. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!( Integer::from(-3).pow(100).to_string(), "515377520732011331036461129765621272702107522001" ); assert_eq!( Integer::from_str("-12345678987654321").unwrap().pow(3).to_string(), "-1881676411868862234942354805142998028003108518161" ); ``` #### type Output = Integer ### impl PowAssign<u64> for Integer #### fn pow_assign(&mut self, exp: u64) Raises an `Integer` to a power in place. $x \gets x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::PowAssign; use malachite_nz::integer::Integer; use std::str::FromStr; let mut x = Integer::from(-3); x.pow_assign(100); assert_eq!(x.to_string(), "515377520732011331036461129765621272702107522001"); let mut x = Integer::from_str("-12345678987654321").unwrap(); x.pow_assign(3); assert_eq!(x.to_string(), "-1881676411868862234942354805142998028003108518161"); ``` ### impl PowerOf2<u64> for Integer #### fn power_of_2(pow: u64) -> Integer Raises 2 to an integer power. $f(k) = 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowerOf2; use malachite_nz::integer::Integer; assert_eq!(Integer::power_of_2(0), 1); assert_eq!(Integer::power_of_2(3), 8); assert_eq!(Integer::power_of_2(100).to_string(), "1267650600228229401496703205376"); ``` ### impl<'a> Product<&'a Integer> for Integer #### fn product<I>(xs: I) -> Integerwhere I: Iterator<Item = &'a Integer>, Multiplies together all the `Integer`s in an iterator of `Integer` references. $$ f((x_i)_ {i=0}^{n-1}) = \prod_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Integer::sum(xs.map(Integer::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use std::iter::Product; assert_eq!( Integer::product(vec_from_str::<Integer>("[2, -3, 5, 7]").unwrap().iter()), -210 ); ``` ### impl Product<Integer> for Integer #### fn product<I>(xs: I) -> Integerwhere I: Iterator<Item = Integer>, Multiplies together all the `Integer`s in an iterator. $$ f((x_i)_ {i=0}^{n-1}) = \prod_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Integer::sum(xs.map(Integer::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use std::iter::Product; assert_eq!( Integer::product(vec_from_str::<Integer>("[2, -3, 5, 7]").unwrap().into_iter()), -210 ); ``` ### impl<'a, 'b> Rem<&'b Integer> for &'a Integer #### fn rem(self, other: &'b Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by reference and returning just the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(&Integer::from(23) % &Integer::from(10), 3); // -2 * -10 + 3 = 23 assert_eq!(&Integer::from(23) % &Integer::from(-10), 3); // -2 * 10 + -3 = -23 assert_eq!(&Integer::from(-23) % &Integer::from(10), -3); // 2 * -10 + -3 = -23 assert_eq!(&Integer::from(-23) % &Integer::from(-10), -3); ``` #### type Output = Integer The resulting type after applying the `%` operator. ### impl<'a> Rem<&'a Integer> for Integer #### fn rem(self, other: &'a Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by value and the second by reference and returning just the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23) % &Integer::from(10), 3); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23) % &Integer::from(-10), 3); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23) % &Integer::from(10), -3); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23) % &Integer::from(-10), -3); ``` #### type Output = Integer The resulting type after applying the `%` operator. ### impl<'a> Rem<Integer> for &'a Integer #### fn rem(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking the first by reference and the second by value and returning just the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(&Integer::from(23) % Integer::from(10), 3); // -2 * -10 + 3 = 23 assert_eq!(&Integer::from(23) % Integer::from(-10), 3); // -2 * 10 + -3 = -23 assert_eq!(&Integer::from(-23) % Integer::from(10), -3); // 2 * -10 + -3 = -23 assert_eq!(&Integer::from(-23) % Integer::from(-10), -3); ``` #### type Output = Integer The resulting type after applying the `%` operator. ### impl Rem<Integer> for Integer #### fn rem(self, other: Integer) -> Integer Divides an `Integer` by another `Integer`, taking both by value and returning just the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ f(x, y) = x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero.
##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 assert_eq!(Integer::from(23) % Integer::from(10), 3); // -2 * -10 + 3 = 23 assert_eq!(Integer::from(23) % Integer::from(-10), 3); // -2 * 10 + -3 = -23 assert_eq!(Integer::from(-23) % Integer::from(10), -3); // 2 * -10 + -3 = -23 assert_eq!(Integer::from(-23) % Integer::from(-10), -3); ``` #### type Output = Integer The resulting type after applying the `%` operator. ### impl<'a> RemAssign<&'a Integer> for Integer #### fn rem_assign(&mut self, other: &'a Integer) Divides an `Integer` by another `Integer`, taking the second `Integer` by reference and replacing the first by the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x %= &Integer::from(10); assert_eq!(x, 3); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x %= &Integer::from(-10); assert_eq!(x, 3); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x %= &Integer::from(10); assert_eq!(x, -3); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x %= &Integer::from(-10); assert_eq!(x, -3); ``` ### impl RemAssign<Integer> for Integer #### fn rem_assign(&mut self, other: Integer) Divides an `Integer` by another `Integer`, taking the second `Integer` by value and replacing the first by the remainder. The remainder has the same sign as the first `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq |r| < |y|$. $$ x \gets x - y \operatorname{sgn}(xy) \left \lfloor \left | \frac{x}{y} \right | \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::integer::Integer; // 2 * 10 + 3 = 23 let mut x = Integer::from(23); x %= Integer::from(10); assert_eq!(x, 3); // -2 * -10 + 3 = 23 let mut x = Integer::from(23); x %= Integer::from(-10); assert_eq!(x, 3); // -2 * 10 + -3 = -23 let mut x = Integer::from(-23); x %= Integer::from(10); assert_eq!(x, -3); // 2 * -10 + -3 = -23 let mut x = Integer::from(-23); x %= Integer::from(-10); assert_eq!(x, -3); ``` ### impl<'a> RemPowerOf2 for &'a Integer #### fn rem_power_of_2(self, pow: u64) -> Integer Divides an `Integer` by $2^k$, taking it by reference and returning just the remainder. The remainder has the same sign as the first number. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq |r| < 2^k$. $$ f(x, k) = x - 2^k\operatorname{sgn}(x)\left \lfloor \frac{|x|}{2^k} \right \rfloor. $$ Unlike `mod_power_of_2`, this function always returns zero or a number with the same sign as `self`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 assert_eq!((&Integer::from(260)).rem_power_of_2(8), 4); // -100 * 2^4 + -11 = -1611 assert_eq!((&Integer::from(-1611)).rem_power_of_2(4), -11); ``` #### type Output = Integer ### impl RemPowerOf2 for Integer #### fn rem_power_of_2(self, pow: u64) -> Integer Divides an `Integer` by $2^k$, taking it by value and returning just the remainder. The remainder has the same sign as the first number. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq |r| < 2^k$. $$ f(x, k) = x - 2^k\operatorname{sgn}(x)\left \lfloor \frac{|x|}{2^k} \right \rfloor. $$ Unlike `mod_power_of_2`, this function always returns zero or a number with the same sign as `self`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 assert_eq!(Integer::from(260).rem_power_of_2(8), 4); // -100 * 2^4 + -11 = -1611 assert_eq!(Integer::from(-1611).rem_power_of_2(4), -11); ``` #### type Output = Integer ### impl RemPowerOf2Assign for Integer #### fn rem_power_of_2_assign(&mut self, pow: u64) Divides an `Integer` by $2^k$, replacing the `Integer` by the remainder. The remainder has the same sign as the `Integer`. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq |r| < 2^k$. $$ x \gets x - 2^k\operatorname{sgn}(x)\left \lfloor \frac{|x|}{2^k} \right \rfloor. $$ Unlike `mod_power_of_2_assign`, this function never changes the sign of `self`, except possibly to set `self` to 0. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2Assign; use malachite_nz::integer::Integer; // 1 * 2^8 + 4 = 260 let mut x = Integer::from(260); x.rem_power_of_2_assign(8); assert_eq!(x, 4); // -100 * 2^4 + -11 = -1611 let mut x = Integer::from(-1611); x.rem_power_of_2_assign(4); assert_eq!(x, -11); ``` ### impl<'a, 'b> RoundToMultiple<&'b Integer> for &'a Integer #### fn round_to_multiple( self, other: &'b Integer, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of another `Integer`, according to a specified rounding mode. Both `Integer`s are taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{|y|}$: $f(x, y, \mathrm{Down}) = \operatorname{sgn}(q) |y| \lfloor |q| \rfloor.$ $f(x, y, \mathrm{Up}) = \operatorname{sgn}(q) |y| \lceil |q| \rceil.$ $f(x, y, \mathrm{Floor}) = |y| \lfloor q \rfloor.$ $f(x, y, \mathrm{Ceiling}) = |y| \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$.
The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(-5)).round_to_multiple(&Integer::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( (&Integer::from(-20)).round_to_multiple(&Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-14)).round_to_multiple(&Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( (&Integer::from(-20)).round_to_multiple(&Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(&Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-14)).round_to_multiple(&Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); ``` #### type Output = Integer ### impl<'a> RoundToMultiple<&'a Integer> for Integer #### fn round_to_multiple( self, other: &'a Integer, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of another `Integer`, according to a specified rounding mode. The first `Integer` is taken by value and the second by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. 
Let $q = \frac{x}{|y|}$: $f(x, y, \mathrm{Down}) = \operatorname{sgn}(q) |y| \lfloor |q| \rfloor.$ $f(x, y, \mathrm{Up}) = \operatorname{sgn}(q) |y| \lceil |q| \rceil.$ $f(x, y, \mathrm{Floor}) = |y| \lfloor q \rfloor.$ $f(x, y, \mathrm{Ceiling}) = |y| \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(-5).round_to_multiple(&Integer::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( Integer::from(-20).round_to_multiple(&Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-14).round_to_multiple(&Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( Integer::from(-20).round_to_multiple(&Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(&Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-14).round_to_multiple(&Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); ``` #### type Output = Integer ### impl<'a> 
RoundToMultiple<Integer> for &'a Integer #### fn round_to_multiple( self, other: Integer, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of another `Integer`, according to a specified rounding mode. The first `Integer` is taken by reference and the second by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{|y|}$: $f(x, y, \mathrm{Down}) = \operatorname{sgn}(q) |y| \lfloor |q| \rfloor.$ $f(x, y, \mathrm{Up}) = \operatorname{sgn}(q) |y| \lceil |q| \rceil.$ $f(x, y, \mathrm{Floor}) = |y| \lfloor q \rfloor.$ $f(x, y, \mathrm{Ceiling}) = |y| \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(-5)).round_to_multiple(Integer::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( (&Integer::from(-20)).round_to_multiple(Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-14)).round_to_multiple(Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" 
); assert_eq!( (&Integer::from(-20)).round_to_multiple(Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple(Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(-14)).round_to_multiple(Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); ``` #### type Output = Integer ### impl RoundToMultiple<Integer> for Integer #### fn round_to_multiple( self, other: Integer, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of another `Integer`, according to a specified rounding mode. Both `Integer`s are taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{|y|}$: $f(x, y, \mathrm{Down}) = \operatorname{sgn}(q) |y| \lfloor |q| \rfloor.$ $f(x, y, \mathrm{Up}) = \operatorname{sgn}(q) |y| \lceil |q| \rceil.$ $f(x, y, \mathrm{Floor}) = |y| \lfloor q \rfloor.$ $f(x, y, \mathrm{Ceiling}) = |y| \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(-5).round_to_multiple(Integer::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( Integer::from(-20).round_to_multiple(Integer::from(3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-14).round_to_multiple(Integer::from(4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-4), RoundingMode::Down) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-4), RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-5), RoundingMode::Exact) .to_debug_string(), "(-10, Equal)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-9, Greater)" ); assert_eq!( Integer::from(-20).round_to_multiple(Integer::from(-3), RoundingMode::Nearest) .to_debug_string(), "(-21, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple(Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(-14).round_to_multiple(Integer::from(-4), RoundingMode::Nearest) .to_debug_string(), "(-16, Less)" ); ``` #### type Output = Integer ### impl<'a> RoundToMultipleAssign<&'a Integer> for Integer #### fn round_to_multiple_assign( &mut self, other: &'a Integer, rm: RoundingMode ) -> Ordering Rounds an `Integer` to a multiple of another `Integer` in place, according to a specified rounding mode. The `Integer` on the right-hand side is taken by reference. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_assign(other, RoundingMode::Exact);` * `assert!(x.divisible_by(other));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut x = Integer::from(-5); assert_eq!( x.round_to_multiple_assign(&Integer::ZERO, RoundingMode::Down), Ordering::Greater ); assert_eq!(x, 0); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(4), RoundingMode::Down), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(4), RoundingMode::Up), Ordering::Less ); assert_eq!(x, -12); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(5), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, -10); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -9); let mut x = Integer::from(-20); assert_eq!( x.round_to_multiple_assign(&Integer::from(3), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -21); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-14); assert_eq!( x.round_to_multiple_assign(&Integer::from(4), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -16); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-4), RoundingMode::Down), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-4), RoundingMode::Up), Ordering::Less ); assert_eq!(x, -12); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-5), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, -10); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -9); let mut x = Integer::from(-20); assert_eq!( x.round_to_multiple_assign(&Integer::from(-3), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -21); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(&Integer::from(-4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-14); assert_eq!( x.round_to_multiple_assign(&Integer::from(-4), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -16); ``` ### impl RoundToMultipleAssign<Integer> for Integer #### fn round_to_multiple_assign( &mut self, other: Integer, rm: RoundingMode ) -> Ordering Rounds an `Integer` to a multiple of another `Integer` in place, according to a specified rounding mode. The `Integer` on the right-hand side is taken by value. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_assign(other, RoundingMode::Exact);` * `assert!(x.divisible_by(other));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut x = Integer::from(-5); assert_eq!( x.round_to_multiple_assign(Integer::ZERO, RoundingMode::Down), Ordering::Greater ); assert_eq!(x, 0); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(4), RoundingMode::Down), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-10); assert_eq!(x.round_to_multiple_assign(Integer::from(4), RoundingMode::Up), Ordering::Less); assert_eq!(x, -12); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(5), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, -10); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -9); let mut x = Integer::from(-20); assert_eq!( x.round_to_multiple_assign(Integer::from(3), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -21); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-14); assert_eq!( x.round_to_multiple_assign(Integer::from(4), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -16); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-4), RoundingMode::Down), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-4), RoundingMode::Up), Ordering::Less ); assert_eq!(x, -12); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-5), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, -10); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-3), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -9); let mut x = Integer::from(-20); assert_eq!( x.round_to_multiple_assign(Integer::from(-3), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -21); let mut x = Integer::from(-10); assert_eq!( x.round_to_multiple_assign(Integer::from(-4), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, -8); let mut x = Integer::from(-14); assert_eq!( x.round_to_multiple_assign(Integer::from(-4), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, -16); ``` ### impl<'a> RoundToMultipleOfPowerOf2<u64> for &'a Integer #### fn round_to_multiple_of_power_of_2( self, pow: u64, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of $2^k$ according to a specified rounding mode. The `Integer` is taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. 
Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = 2^k \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = 2^k \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2(pow, RoundingMode::Exact)` * `{ assert!(x.divisible_by_power_of_2(pow)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( (&Integer::from(10)).round_to_multiple_of_power_of_2(2, RoundingMode::Floor) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple_of_power_of_2(2, RoundingMode::Ceiling) .to_debug_string(), "(-8, Greater)" ); assert_eq!( (&Integer::from(10)).round_to_multiple_of_power_of_2(2, RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Integer::from(-10)).round_to_multiple_of_power_of_2(2, RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( (&Integer::from(10)).round_to_multiple_of_power_of_2(2, RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Integer::from(-12)).round_to_multiple_of_power_of_2(2, RoundingMode::Exact) .to_debug_string(), "(-12, Equal)" ); ``` #### type Output = Integer ### impl RoundToMultipleOfPowerOf2<u64> for Integer #### fn round_to_multiple_of_power_of_2( self, pow: u64, rm: RoundingMode ) -> (Integer, Ordering) Rounds an `Integer` to a multiple of $2^k$ according to a specified rounding mode. The `Integer` is taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = 2^k \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = 2^k \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. 
The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2(pow, RoundingMode::Exact)` * `{ assert!(x.divisible_by_power_of_2(pow)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; assert_eq!( Integer::from(10).round_to_multiple_of_power_of_2(2, RoundingMode::Floor) .to_debug_string(), "(8, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple_of_power_of_2(2, RoundingMode::Ceiling) .to_debug_string(), "(-8, Greater)" ); assert_eq!( Integer::from(10).round_to_multiple_of_power_of_2(2, RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( Integer::from(-10).round_to_multiple_of_power_of_2(2, RoundingMode::Up) .to_debug_string(), "(-12, Less)" ); assert_eq!( Integer::from(10).round_to_multiple_of_power_of_2(2, RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( Integer::from(-12).round_to_multiple_of_power_of_2(2, RoundingMode::Exact) .to_debug_string(), "(-12, Equal)" ); ``` #### type Output = Integer ### impl RoundToMultipleOfPowerOf2Assign<u64> for Integer #### fn round_to_multiple_of_power_of_2_assign( &mut self, pow: u64, rm: RoundingMode ) -> Ordering Rounds an `Integer` to a multiple of $2^k$ in place, according to a specified rounding mode. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultipleOfPowerOf2` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2_assign(pow, RoundingMode::Exact);` * `assert!(x.divisible_by_power_of_2(pow));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2Assign; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; use std::cmp::Ordering; let mut n = Integer::from(10); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Floor), Ordering::Less ); assert_eq!(n, 8); let mut n = Integer::from(-10); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, -8); let mut n = Integer::from(10); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Down), Ordering::Less ); assert_eq!(n, 8); let mut n = Integer::from(-10); assert_eq!(n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Up), Ordering::Less); assert_eq!(n, -12); let mut n = Integer::from(10); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Nearest), Ordering::Less ); assert_eq!(n, 8); let mut n = Integer::from(-12); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Exact), Ordering::Equal ); assert_eq!(n, -12); ``` ### impl<'a> RoundingFrom<&'a Integer> for f32 #### fn rounding_from(value: &'a Integer, rm: RoundingMode) -> (f32, Ordering) Converts an `Integer` to a primitive float according to a specified `RoundingMode`. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. * If the rounding mode is `Floor` the largest float less than or equal to the `Integer` is returned. If the `Integer` is greater than the maximum finite float, then the maximum finite float is returned. If it is smaller than the minimum finite float, then negative infinity is returned. * If the rounding mode is `Ceiling`, the smallest float greater than or equal to the `Integer` is returned. If the `Integer` is greater than the maximum finite float, then positive infinity is returned. If it is smaller than the minimum finite float, then the minimum finite float is returned. * If the rounding mode is `Down`, then the rounding proceeds as with `Floor` if the `Integer` is non-negative and as with `Ceiling` if the `Integer` is negative. * If the rounding mode is `Up`, then the rounding proceeds as with `Ceiling` if the `Integer` is non-negative and as with `Floor` if the `Integer` is negative. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Integer` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Integer` is greater than the maximum finite float, then the maximum finite float is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Integer> for f64 #### fn rounding_from(value: &'a Integer, rm: RoundingMode) -> (f64, Ordering) Converts an `Integer` to a primitive float according to a specified `RoundingMode`. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. * If the rounding mode is `Floor` the largest float less than or equal to the `Integer` is returned. If the `Integer` is greater than the maximum finite float, then the maximum finite float is returned. If it is smaller than the minimum finite float, then negative infinity is returned. 
* If the rounding mode is `Ceiling`, the smallest float greater than or equal to the `Integer` is returned. If the `Integer` is greater than the maximum finite float, then positive infinity is returned. If it is smaller than the minimum finite float, then the minimum finite float is returned. * If the rounding mode is `Down`, then the rounding proceeds as with `Floor` if the `Integer` is non-negative and as with `Ceiling` if the `Integer` is negative. * If the rounding mode is `Up`, then the rounding proceeds as with `Ceiling` if the `Integer` is non-negative and as with `Floor` if the `Integer` is negative. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Integer` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Integer` is greater than the maximum finite float, then the maximum finite float is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Rational> for Integer #### fn rounding_from(x: &Rational, rm: RoundingMode) -> (Integer, Ordering) Converts a `Rational` to an `Integer`, using a specified `RoundingMode` and taking the `Rational` by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`. 
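The two `RoundingFrom<&Integer>` float impls above only point at external examples. A minimal sketch of the `f64` direction, using the signature shown above and assuming standard IEEE 754 double precision; the `Rational` examples resume immediately below:

```
use malachite_base::num::conversion::traits::RoundingFrom;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

// 3 is exactly representable as an f64, so Exact succeeds and the Ordering is Equal.
assert_eq!(
    f64::rounding_from(&Integer::from(3), RoundingMode::Exact),
    (3.0, Ordering::Equal)
);

// 2^53 + 1 is the smallest positive integer with no exact f64 representation;
// Floor returns the next float below it.
let n = (Integer::from(1) << 53u32) + Integer::from(1);
assert_eq!(
    f64::rounding_from(&n, RoundingMode::Floor),
    (9007199254740992.0, Ordering::Less)
);
```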
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!( Integer::rounding_from(&Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Integer::rounding_from(&Rational::from(-123), RoundingMode::Exact).to_debug_string(), "(-123, Equal)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Floor) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Down) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Ceiling) .to_debug_string(), "(-3, Greater)" ); assert_eq!(Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Up) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Nearest) .to_debug_string(), "(-3, Greater)" ); ``` ### impl RoundingFrom<Rational> for Integer #### fn rounding_from(x: Rational, rm: RoundingMode) -> (Integer, Ordering) Converts a `Rational` to an `Integer`, using a specified `RoundingMode` and taking the `Rational` by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`. 
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!( Integer::rounding_from(Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Integer::rounding_from(Rational::from(-123), RoundingMode::Exact).to_debug_string(), "(-123, Equal)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Floor) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Down) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Ceiling) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Up) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Nearest) .to_debug_string(), "(-3, Greater)" ); ``` ### impl RoundingFrom<f32> for Integer #### fn rounding_from(value: f32, rm: RoundingMode) -> (Integer, Ordering) Converts a primitive float to an `Integer`, using the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. The floating-point value cannot be NaN or infinite. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Panics Panics if `value` is NaN or infinite or if the rounding mode is `Exact` and `value` is not an integer. ##### Examples See here. ### impl RoundingFrom<f64> for Integer #### fn rounding_from(value: f64, rm: RoundingMode) -> (Integer, Ordering) Converts a primitive float to an `Integer`, using the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. The floating-point value cannot be NaN or infinite. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Panics Panics if `value` is NaN or infinite or if the rounding mode is `Exact` and `value` is not an integer. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for Natural #### fn saturating_from(value: &'a Integer) -> Natural Converts an `Integer` to a `Natural`, taking the `Integer` by reference. If the `Integer` is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory.
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SaturatingFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::saturating_from(&Integer::from(123)), 123); assert_eq!(Natural::saturating_from(&Integer::from(-123)), 0); assert_eq!(Natural::saturating_from(&Integer::from(10u32).pow(12)), 1000000000000u64); assert_eq!(Natural::saturating_from(&-Integer::from(10u32).pow(12)), 0); ``` ### impl<'a> SaturatingFrom<&'a Integer> for i128 #### fn saturating_from(value: &Integer) -> i128 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for i16 #### fn saturating_from(value: &Integer) -> i16 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for i32 #### fn saturating_from(value: &Integer) -> i32 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for i64 #### fn saturating_from(value: &Integer) -> i64 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for i8 #### fn saturating_from(value: &Integer) -> i8 Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for isize #### fn saturating_from(value: &Integer) -> isize Converts an `Integer` to a signed primitive integer. If the `Integer` cannot be represented by the output type, then either the maximum or the minimum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u128 #### fn saturating_from(value: &Integer) -> u128 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u16 #### fn saturating_from(value: &Integer) -> u16 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. 
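The signed and unsigned primitive conversions in this group describe the clamping behavior but only link to external examples. A minimal sketch of that behavior, using the trait import path shown in the `Natural` example above; the remaining subsections of these impls continue below:

```
use malachite_base::num::conversion::traits::SaturatingFrom;
use malachite_nz::integer::Integer;

// Values that fit are converted unchanged; out-of-range values clamp to the nearest bound.
assert_eq!(i8::saturating_from(&Integer::from(100)), 100);
assert_eq!(i8::saturating_from(&Integer::from(1000)), 127);   // i8::MAX
assert_eq!(i8::saturating_from(&Integer::from(-1000)), -128); // i8::MIN
assert_eq!(u8::saturating_from(&Integer::from(-5)), 0);
assert_eq!(u8::saturating_from(&Integer::from(1000)), 255);   // u8::MAX
```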
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u32 #### fn saturating_from(value: &Integer) -> u32 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u64 #### fn saturating_from(value: &Integer) -> u64 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for u8 #### fn saturating_from(value: &Integer) -> u8 Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for usize #### fn saturating_from(value: &Integer) -> usize Converts an `Integer` to an unsigned primitive integer. If the `Integer` cannot be represented by the output type, then either zero or the maximum representable value is returned, whichever is closer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<Integer> for Natural #### fn saturating_from(value: Integer) -> Natural Converts an `Integer` to a `Natural`, taking the `Integer` by value. If the `Integer` is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SaturatingFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::saturating_from(Integer::from(123)), 123); assert_eq!(Natural::saturating_from(Integer::from(-123)), 0); assert_eq!(Natural::saturating_from(Integer::from(10u32).pow(12)), 1000000000000u64); assert_eq!(Natural::saturating_from(-Integer::from(10u32).pow(12)), 0); ``` ### impl<'a> Shl<i128> for &'a Integer #### fn shl(self, bits: i128) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator. ### impl Shl<i128> for Integer #### fn shl(self, bits: i128) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here.
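The shift impls in this group only link to external examples. A minimal sketch of the two behaviors described above, for an unsigned and a signed shift amount:

```
use malachite_nz::integer::Integer;

// An unsigned (or nonnegative signed) shift multiplies by a power of 2.
assert_eq!(Integer::from(3) << 10u32, 3072);
// A negative signed shift divides by a power of 2 and takes the floor.
assert_eq!(Integer::from(-5) << -1i32, -3); // floor(-5 / 2) = -3
assert_eq!(&Integer::from(7) << -2i64, 1);  // floor(7 / 4) = 1
```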
#### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<i16> for &'a Integer #### fn shl(self, bits: i16) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i16> for Integer #### fn shl(self, bits: i16) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<i32> for &'a Integer #### fn shl(self, bits: i32) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i32> for Integer #### fn shl(self, bits: i32) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<i64> for &'a Integer #### fn shl(self, bits: i64) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i64> for Integer #### fn shl(self, bits: i64) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<i8> for &'a Integer #### fn shl(self, bits: i8) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. 
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<i8> for Integer #### fn shl(self, bits: i8) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<isize> for &'a Integer #### fn shl(self, bits: isize) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<isize> for Integer #### fn shl(self, bits: isize) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u128> for &'a Integer #### fn shl(self, bits: u128) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u128> for Integer #### fn shl(self, bits: u128) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u16> for &'a Integer #### fn shl(self, bits: u16) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u16> for Integer #### fn shl(self, bits: u16) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. 
##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u32> for &'a Integer #### fn shl(self, bits: u32) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u32> for Integer #### fn shl(self, bits: u32) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u64> for &'a Integer #### fn shl(self, bits: u64) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u64> for Integer #### fn shl(self, bits: u64) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<u8> for &'a Integer #### fn shl(self, bits: u8) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<u8> for Integer #### fn shl(self, bits: u8) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl<'a> Shl<usize> for &'a Integer #### fn shl(self, bits: usize) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl Shl<usize> for Integer #### fn shl(self, bits: usize) -> Integer Left-shifts an `Integer` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. 
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `<<` operator.### impl ShlAssign<i128> for Integer #### fn shl_assign(&mut self, bits: i128) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<i16> for Integer #### fn shl_assign(&mut self, bits: i16) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<i32> for Integer #### fn shl_assign(&mut self, bits: i32) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<i64> for Integer #### fn shl_assign(&mut self, bits: i64) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<i8> for Integer #### fn shl_assign(&mut self, bits: i8) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<isize> for Integer #### fn shl_assign(&mut self, bits: isize) Left-shifts an `Integer` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. ### impl ShlAssign<u128> for Integer #### fn shl_assign(&mut self, bits: u128) Left-shifts an `Integer` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u16> for Integer #### fn shl_assign(&mut self, bits: u16) Left-shifts an `Integer` (multiplies it by a power of 2), in place. $$ x \gets x2^k. 
$$
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
### impl ShlAssign<u32> for Integer
#### fn shl_assign(&mut self, bits: u32)
Left-shifts an `Integer` (multiplies it by a power of 2), in place.
$$
x \gets x2^k.
$$
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
### impl ShlAssign<u64> for Integer
#### fn shl_assign(&mut self, bits: u64)
Left-shifts an `Integer` (multiplies it by a power of 2), in place.
$$
x \gets x2^k.
$$
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
### impl ShlAssign<u8> for Integer
#### fn shl_assign(&mut self, bits: u8)
Left-shifts an `Integer` (multiplies it by a power of 2), in place.
$$
x \gets x2^k.
$$
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
### impl ShlAssign<usize> for Integer
#### fn shl_assign(&mut self, bits: usize)
Left-shifts an `Integer` (multiplies it by a power of 2), in place.
$$
x \gets x2^k.
$$
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
### impl<'a> ShlRound<i128> for &'a Integer
#### fn shl_round(self, bits: i128, rm: RoundingMode) -> (Integer, Ordering)
Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`. Passing `RoundingMode::Floor` is equivalent to using `<<`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative.
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:
$g(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$
$g(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$
$g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$
$g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$
$$
g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases}
$$
$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Panics
Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$.
##### Examples
See here.
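The "See here" links above point to examples that are not reproduced in this extract. As a stand-in, here is a minimal hedged sketch of the `Shl`, `ShlAssign`, and reference-taking `ShlRound` behaviour documented above; the `malachite_nz`/`malachite_base` import paths are an assumption of this sketch, not something stated in this extract.

```rust
// Sketch only: the crate layout below is assumed, not taken from this extract.
use malachite_base::num::arithmetic::traits::ShlRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

fn main() {
    // `<<` with an unsigned amount multiplies by a power of 2: 7 * 2^2 = 28.
    assert_eq!(Integer::from(7) << 2u32, Integer::from(28));

    // `<<` with a negative signed amount floor-divides: floor(7 / 2) = 3.
    assert_eq!(Integer::from(7) << -1i32, Integer::from(3));

    // `<<=` is the in-place form: 5 * 2^3 = 40.
    let mut x = Integer::from(5);
    x <<= 3u64;
    assert_eq!(x, Integer::from(40));

    // `shl_round` by reference: -7 * 2^-1 = -3.5, rounded toward zero by Down.
    assert_eq!(
        (&Integer::from(-7)).shl_round(-1i128, RoundingMode::Down),
        (Integer::from(-3), Ordering::Greater)
    );
}
```

The `Ordering` in the last assertion records that rounding toward zero produced a value greater than the exact result $-3.5$, as described by $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$ above.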
#### type Output = Integer
### impl ShlRound<i128> for Integer
#### fn shl_round(self, bits: i128, rm: RoundingMode) -> (Integer, Ordering)
Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`. Passing `RoundingMode::Floor` is equivalent to using `<<`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative.
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:
$g(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$
$g(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$
$g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$
$g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$
$$
g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases}
$$
$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Panics
Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$.
##### Examples
See here.
#### type Output = Integer
### impl<'a> ShlRound<i16> for &'a Integer
#### fn shl_round(self, bits: i16, rm: RoundingMode) -> (Integer, Ordering)
Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`. Passing `RoundingMode::Floor` is equivalent to using `<<`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative.
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:
$g(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$
$g(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$
$g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$
$g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$
$$
g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases}
$$
$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<i16> for Integer #### fn shl_round(self, bits: i16, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShlRound<i32> for &'a Integer #### fn shl_round(self, bits: i32, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. 
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<i32> for Integer #### fn shl_round(self, bits: i32, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShlRound<i64> for &'a Integer #### fn shl_round(self, bits: i64, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. 
If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<i64> for Integer #### fn shl_round(self, bits: i64, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. 
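Continuing the sketch above, the by-value `ShlRound` impls can be exercised with different rounding modes; the import paths are again assumed rather than taken from this extract, as is the exact crate version exposing the `(Integer, Ordering)` return.

```rust
// Sketch only: assumed crate layout; not an authoritative example from the docs.
use malachite_base::num::arithmetic::traits::ShlRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

fn main() {
    // 7 * 2^-1 = 3.5; Floor gives 3, which is less than the exact value.
    assert_eq!(
        Integer::from(7).shl_round(-1i64, RoundingMode::Floor),
        (Integer::from(3), Ordering::Less)
    );

    // Ceiling gives 4, which is greater than the exact value.
    assert_eq!(
        Integer::from(7).shl_round(-1i64, RoundingMode::Ceiling),
        (Integer::from(4), Ordering::Greater)
    );

    // Non-negative shift amounts never need rounding, so Exact succeeds
    // and the Ordering is Equal.
    assert_eq!(
        Integer::from(7).shl_round(10i64, RoundingMode::Exact),
        (Integer::from(7) << 10u32, Ordering::Equal)
    );
}
```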
#### type Output = Integer ### impl<'a> ShlRound<i8> for &'a Integer #### fn shl_round(self, bits: i8, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<i8> for Integer #### fn shl_round(self, bits: i8, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. 
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShlRound<isize> for &'a Integer #### fn shl_round(self, bits: isize, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Integer ### impl ShlRound<isize> for Integer #### fn shl_round(self, bits: isize, rm: RoundingMode) -> (Integer, Ordering) Left-shifts an `Integer` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. 
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:
$g(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$
$g(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$
$g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$
$g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$
$$
g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases}
$$
$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Panics
Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$.
##### Examples
See here.
#### type Output = Integer
### impl ShlRoundAssign<i128> for Integer
#### fn shl_round_assign(&mut self, bits: i128, rm: RoundingMode) -> Ordering
Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `<<=`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Examples
See here.
### impl ShlRoundAssign<i16> for Integer
#### fn shl_round_assign(&mut self, bits: i16, rm: RoundingMode) -> Ordering
Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `<<=`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Examples
See here.
### impl ShlRoundAssign<i32> for Integer
#### fn shl_round_assign(&mut self, bits: i32, rm: RoundingMode) -> Ordering
Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `<<=`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Examples
See here.
### impl ShlRoundAssign<i64> for Integer
#### fn shl_round_assign(&mut self, bits: i64, rm: RoundingMode) -> Ordering
Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `<<=`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Examples
See here.
### impl ShlRoundAssign<i8> for Integer
#### fn shl_round_assign(&mut self, bits: i8, rm: RoundingMode) -> Ordering
Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `<<=`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Examples
See here.
### impl ShlRoundAssign<isize> for Integer
#### fn shl_round_assign(&mut self, bits: isize, rm: RoundingMode) -> Ordering
Left-shifts an `Integer` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `<<=`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(-bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`.
##### Examples
See here.
### impl<'a> Shr<i128> for &'a Integer
#### fn shr(self, bits: i128) -> Integer
Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference.
$$
f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor.
$$
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(1)$
where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`.
##### Examples
See here.
#### type Output = Integer
The resulting type after applying the `>>` operator.
### impl Shr<i128> for Integer
#### fn shr(self, bits: i128) -> Integer
Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value.
$$
f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor.
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<i16> for &'a Integer #### fn shr(self, bits: i16) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i16> for Integer #### fn shr(self, bits: i16) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<i32> for &'a Integer #### fn shr(self, bits: i32) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i32> for Integer #### fn shr(self, bits: i32) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<i64> for &'a Integer #### fn shr(self, bits: i64) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i64> for Integer #### fn shr(self, bits: i64) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
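A similarly hedged sketch of the signed `Shr` impls documented above (the `malachite_nz` import path is an assumption): a negative value is floor-divided, and a negative shift amount multiplies instead.

```rust
// Sketch only: assumes `malachite_nz::integer::Integer`.
use malachite_nz::integer::Integer;

fn main() {
    // Floor division by 2^k: floor(-5 / 2) = -3.
    assert_eq!(Integer::from(-5) >> 1i32, Integer::from(-3));

    // A negative shift amount multiplies instead: -5 / 2^-2 = -20.
    assert_eq!(Integer::from(-5) >> -2i64, Integer::from(-20));
}
```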
#### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<i8> for &'a Integer #### fn shr(self, bits: i8) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<i8> for Integer #### fn shr(self, bits: i8) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<isize> for &'a Integer #### fn shr(self, bits: isize) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<isize> for Integer #### fn shr(self, bits: isize) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u128> for &'a Integer #### fn shr(self, bits: u128) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u128> for Integer #### fn shr(self, bits: u128) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u16> for &'a Integer #### fn shr(self, bits: u16) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. 
##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u16> for Integer #### fn shr(self, bits: u16) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u32> for &'a Integer #### fn shr(self, bits: u32) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u32> for Integer #### fn shr(self, bits: u32) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u64> for &'a Integer #### fn shr(self, bits: u64) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u64> for Integer #### fn shr(self, bits: u64) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<u8> for &'a Integer #### fn shr(self, bits: u8) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<u8> for Integer #### fn shr(self, bits: u8) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
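For the unsigned `Shr` impls, a matching hedged sketch (same assumed import path); the second assertion shows the floor behaviour on a negative `Integer`.

```rust
// Sketch only: assumes `malachite_nz::integer::Integer`.
use malachite_nz::integer::Integer;

fn main() {
    // 1000 / 2^3 = 125 exactly.
    assert_eq!(Integer::from(1000) >> 3u32, Integer::from(125));

    // Negative values round toward negative infinity: floor(-1000 / 16) = -63.
    assert_eq!(Integer::from(-1000) >> 4u64, Integer::from(-63));
}
```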
#### type Output = Integer The resulting type after applying the `>>` operator.### impl<'a> Shr<usize> for &'a Integer #### fn shr(self, bits: usize) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl Shr<usize> for Integer #### fn shr(self, bits: usize) -> Integer Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Integer The resulting type after applying the `>>` operator.### impl ShrAssign<i128> for Integer #### fn shr_assign(&mut self, bits: i128) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i16> for Integer #### fn shr_assign(&mut self, bits: i16) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i32> for Integer #### fn shr_assign(&mut self, bits: i32) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i64> for Integer #### fn shr_assign(&mut self, bits: i64) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i8> for Integer #### fn shr_assign(&mut self, bits: i8) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<isize> for Integer #### fn shr_assign(&mut self, bits: isize) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u128> for Integer #### fn shr_assign(&mut self, bits: u128) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u16> for Integer #### fn shr_assign(&mut self, bits: u16) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u32> for Integer #### fn shr_assign(&mut self, bits: u32) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u64> for Integer #### fn shr_assign(&mut self, bits: u64) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u8> for Integer #### fn shr_assign(&mut self, bits: u8) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<usize> for Integer #### fn shr_assign(&mut self, bits: usize) Right-shifts an `Integer` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl<'a> ShrRound<i128> for &'a Integer #### fn shr_round(self, bits: i128, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. 
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:
$g(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$
$g(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$
$g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$
$g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$
$$
g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases}
$$
$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`.
##### Panics
Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.
##### Examples
See here.
#### type Output = Integer
### impl ShrRound<i128> for Integer
#### fn shr_round(self, bits: i128, rm: RoundingMode) -> (Integer, Ordering)
Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:
$g(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$
$g(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$
$g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$
$g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$
$$
g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases}
$$
$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.
##### Worst-case complexity
$T(n, m) = O(n + m)$
$M(n, m) = O(n + m)$
where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`.
##### Panics
Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.
##### Examples
See here.
#### type Output = Integer
### impl<'a> ShrRound<i16> for &'a Integer
#### fn shr_round(self, bits: i16, rm: RoundingMode) -> (Integer, Ordering)
Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode.
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i16> for Integer #### fn shr_round(self, bits: i16, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. 
Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<i32> for &'a Integer #### fn shr_round(self, bits: i32, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i32> for Integer #### fn shr_round(self, bits: i32, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. 
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<i64> for &'a Integer #### fn shr_round(self, bits: i64, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i64> for Integer #### fn shr_round(self, bits: i64, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<i8> for &'a Integer #### fn shr_round(self, bits: i8, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. 
Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<i8> for Integer #### fn shr_round(self, bits: i8, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<isize> for &'a Integer #### fn shr_round(self, bits: isize, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. 
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<isize> for Integer #### fn shr_round(self, bits: isize, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u128> for &'a Integer #### fn shr_round(self, bits: u128, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u128> for Integer #### fn shr_round(self, bits: u128, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u16> for &'a Integer #### fn shr_round(self, bits: u16, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u16> for Integer #### fn shr_round(self, bits: u16, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u32> for &'a Integer #### fn shr_round(self, bits: u32, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u32> for Integer #### fn shr_round(self, bits: u32, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u64> for &'a Integer #### fn shr_round(self, bits: u64, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u64> for Integer #### fn shr_round(self, bits: u64, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<u8> for &'a Integer #### fn shr_round(self, bits: u8, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<u8> for Integer #### fn shr_round(self, bits: u8, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl<'a> ShrRound<usize> for &'a Integer #### fn shr_round(self, bits: usize, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRound<usize> for Integer #### fn shr_round(self, bits: usize, rm: RoundingMode) -> (Integer, Ordering) Shifts an `Integer` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \Z$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Integer ### impl ShrRoundAssign<i128> for Integer #### fn shr_round_assign(&mut self, bits: i128, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. 
An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i16> for Integer #### fn shr_round_assign(&mut self, bits: i16, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i32> for Integer #### fn shr_round_assign(&mut self, bits: i32, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i64> for Integer #### fn shr_round_assign(&mut self, bits: i64, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details.
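For the in-place `shr_round_assign` variants, whose examples are likewise only linked, a similarly hedged sketch follows; the `ShrRoundAssign` import path is assumed by analogy with the traits above, and plain `>>=` matches `RoundingMode::Floor`.

```
// Hedged sketch: ShrRoundAssign's import path is assumed, mirroring the other trait imports.
use malachite_base::num::arithmetic::traits::ShrRoundAssign;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::integer::Integer;
use std::cmp::Ordering;

// 10 / 2^2 = 2.5: Floor assigns 2 and reports Less (the assigned value is below the exact quotient).
let mut x = Integer::from(10);
assert_eq!(x.shr_round_assign(2u32, RoundingMode::Floor), Ordering::Less);
assert_eq!(x, 2);

// An exact shift reports Equal.
let mut y = Integer::from(8);
assert_eq!(y.shr_round_assign(2u32, RoundingMode::Exact), Ordering::Equal);
assert_eq!(y, 2);

// Plain `>>=` floors, matching RoundingMode::Floor.
let mut z = Integer::from(10);
z >>= 2u32;
assert_eq!(z, 2);
```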
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i8> for Integer #### fn shr_round_assign(&mut self, bits: i8, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<isize> for Integer #### fn shr_round_assign(&mut self, bits: isize, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u128> for Integer #### fn shr_round_assign(&mut self, bits: u128, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u16> for Integer #### fn shr_round_assign(&mut self, bits: u16, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`.
To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u32> for Integer #### fn shr_round_assign(&mut self, bits: u32, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u64> for Integer #### fn shr_round_assign(&mut self, bits: u64, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u8> for Integer #### fn shr_round_assign(&mut self, bits: u8, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<usize> for Integer #### fn shr_round_assign(&mut self, bits: usize, rm: RoundingMode) -> Ordering Shifts an `Integer` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. Passing `RoundingMode::Floor` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value.
See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl Sign for Integer #### fn sign(&self) -> Ordering Compares an `Integer` to zero. Returns `Greater`, `Equal`, or `Less`, depending on whether the `Integer` is positive, zero, or negative, respectively. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Sign; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use std::cmp::Ordering; assert_eq!(Integer::ZERO.sign(), Ordering::Equal); assert_eq!(Integer::from(123).sign(), Ordering::Greater); assert_eq!(Integer::from(-123).sign(), Ordering::Less); ``` ### impl<'a> SignificantBits for &'a Integer #### fn significant_bits(self) -> u64 Returns the number of significant bits of an `Integer`’s absolute value. $$ f(n) = \begin{cases} 0 & \text{if} \quad n = 0, \\ \lfloor \log_2 |n| \rfloor + 1 & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::logic::traits::SignificantBits; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.significant_bits(), 0); assert_eq!(Integer::from(100).significant_bits(), 7); assert_eq!(Integer::from(-100).significant_bits(), 7); ``` ### impl<'a> Square for &'a Integer #### fn square(self) -> Integer Squares an `Integer`, taking it by reference. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!((&Integer::ZERO).square(), 0); assert_eq!((&Integer::from(123)).square(), 15129); assert_eq!((&Integer::from(-123)).square(), 15129); ``` #### type Output = Integer ### impl Square for Integer #### fn square(self) -> Integer Squares an `Integer`, taking it by value. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.square(), 0); assert_eq!(Integer::from(123).square(), 15129); assert_eq!(Integer::from(-123).square(), 15129); ``` #### type Output = Integer ### impl SquareAssign for Integer #### fn square_assign(&mut self) Squares an `Integer` in place. $$ x \gets x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::SquareAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x.square_assign(); assert_eq!(x, 0); let mut x = Integer::from(123); x.square_assign(); assert_eq!(x, 15129); let mut x = Integer::from(-123); x.square_assign(); assert_eq!(x, 15129); ``` ### impl<'a, 'b> Sub<&'a Integer> for &'b Integer #### fn sub(self, other: &'a Integer) -> Integer Subtracts an `Integer` by another `Integer`, taking both by reference. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(&Integer::ZERO - &Integer::from(123), -123); assert_eq!(&Integer::from(123) - &Integer::ZERO, 123); assert_eq!(&Integer::from(456) - &Integer::from(-123), 579); assert_eq!( &-Integer::from(10u32).pow(12) - &(-Integer::from(10u32).pow(12) * Integer::from(2u32)), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl<'a> Sub<&'a Integer> for Integer #### fn sub(self, other: &'a Integer) -> Integer Subtracts an `Integer` by another `Integer`, taking the first by value and the second by reference. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO - &Integer::from(123), -123); assert_eq!(Integer::from(123) - &Integer::ZERO, 123); assert_eq!(Integer::from(456) - &Integer::from(-123), 579); assert_eq!( -Integer::from(10u32).pow(12) - &(-Integer::from(10u32).pow(12) * Integer::from(2u32)), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl<'a> Sub<Integer> for &'a Integer #### fn sub(self, other: Integer) -> Integer Subtracts an `Integer` by another `Integer`, taking the first by reference and the second by value. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(&Integer::ZERO - Integer::from(123), -123); assert_eq!(&Integer::from(123) - Integer::ZERO, 123); assert_eq!(&Integer::from(456) - Integer::from(-123), 579); assert_eq!( &-Integer::from(10u32).pow(12) - -Integer::from(10u32).pow(12) * Integer::from(2u32), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl Sub<Integer> for Integer #### fn sub(self, other: Integer) -> Integer Subtracts an `Integer` by another `Integer`, taking both by value. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO - Integer::from(123), -123); assert_eq!(Integer::from(123) - Integer::ZERO, 123); assert_eq!(Integer::from(456) - Integer::from(-123), 579); assert_eq!( -Integer::from(10u32).pow(12) - -Integer::from(10u32).pow(12) * Integer::from(2u32), 1000000000000u64 ); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl<'a> SubAssign<&'a Integer> for Integer #### fn sub_assign(&mut self, other: &'a Integer) Subtracts an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by reference. $$ x \gets x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x -= &(-Integer::from(10u32).pow(12)); x -= &(Integer::from(10u32).pow(12) * Integer::from(2u32)); x -= &(-Integer::from(10u32).pow(12) * Integer::from(3u32)); x -= &(Integer::from(10u32).pow(12) * Integer::from(4u32)); assert_eq!(x, -2000000000000i64); ``` ### impl SubAssign<Integer> for Integer #### fn sub_assign(&mut self, other: Integer) Subtracts an `Integer` by another `Integer` in place, taking the `Integer` on the right-hand side by value. $$ x \gets x - y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; let mut x = Integer::ZERO; x -= -Integer::from(10u32).pow(12); x -= Integer::from(10u32).pow(12) * Integer::from(2u32); x -= -Integer::from(10u32).pow(12) * Integer::from(3u32); x -= Integer::from(10u32).pow(12) * Integer::from(4u32); assert_eq!(x, -2000000000000i64); ``` ### impl<'a> SubMul<&'a Integer, Integer> for Integer #### fn sub_mul(self, y: &'a Integer, z: Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking the first and third by value and the second by reference. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).sub_mul(&Integer::from(3u32), Integer::from(-4)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .sub_mul(&Integer::from(-0x10000), -Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a, 'b, 'c> SubMul<&'a Integer, &'b Integer> for &'c Integer #### fn sub_mul(self, y: &'a Integer, z: &'b Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking all three by reference. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
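Before the examples, a small sanity check of the identity $f(x, y, z) = x - yz$ stated above. The sketch uses the all-by-reference `SubMul` impl documented here together with the by-reference `Sub` impl above; the by-reference `Mul` impl it also relies on follows the same pattern but is not shown in this excerpt:

```rust
use malachite_base::num::arithmetic::traits::SubMul;
use malachite_nz::integer::Integer;

// The fused call computes x - y * z in one step: 100 - (7 * -6) = 142.
let (x, y, z) = (Integer::from(100), Integer::from(7), Integer::from(-6));
assert_eq!((&x).sub_mul(&y, &z), &x - &y * &z);
```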
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!((&Integer::from(10u32)).sub_mul(&Integer::from(3u32), &Integer::from(-4)), 22); assert_eq!( (&-Integer::from(10u32).pow(12)) .sub_mul(&Integer::from(-0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a, 'b> SubMul<&'a Integer, &'b Integer> for Integer #### fn sub_mul(self, y: &'a Integer, z: &'b Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking the first by value and the second and third by reference. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).sub_mul(&Integer::from(3u32), &Integer::from(-4)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .sub_mul(&Integer::from(-0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a> SubMul<Integer, &'a Integer> for Integer #### fn sub_mul(self, y: Integer, z: &'a Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking the first two by value and the third by reference. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).sub_mul(Integer::from(3u32), &Integer::from(-4)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .sub_mul(Integer::from(-0x10000), &-Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl SubMul<Integer, Integer> for Integer #### fn sub_mul(self, y: Integer, z: Integer) -> Integer Subtracts an `Integer` by the product of two other `Integer`s, taking all three by value. $f(x, y, z) = x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::integer::Integer; assert_eq!(Integer::from(10u32).sub_mul(Integer::from(3u32), Integer::from(-4)), 22); assert_eq!( (-Integer::from(10u32).pow(12)) .sub_mul(Integer::from(-0x10000), -Integer::from(10u32).pow(12)), -65537000000000000i64 ); ``` #### type Output = Integer ### impl<'a> SubMulAssign<&'a Integer, Integer> for Integer #### fn sub_mul_assign(&mut self, y: &'a Integer, z: Integer) Subtracts the product of two other `Integer`s from an `Integer` in place, taking the first `Integer` on the right-hand side by reference and the second by value. $x \gets x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.sub_mul_assign(&Integer::from(3u32), Integer::from(-4)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.sub_mul_assign(&Integer::from(-0x10000), -Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a, 'b> SubMulAssign<&'a Integer, &'b Integer> for Integer #### fn sub_mul_assign(&mut self, y: &'a Integer, z: &'b Integer) Subtracts the product of two other `Integer`s from an `Integer` in place, taking both `Integer`s on the right-hand side by reference. $x \gets x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.sub_mul_assign(&Integer::from(3u32), &Integer::from(-4)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.sub_mul_assign(&Integer::from(-0x10000), &(-Integer::from(10u32).pow(12))); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a> SubMulAssign<Integer, &'a Integer> for Integer #### fn sub_mul_assign(&mut self, y: Integer, z: &'a Integer) Subtracts the product of two other `Integer`s from an `Integer` in place, taking the first `Integer` on the right-hand side by value and the second by reference. $x \gets x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.sub_mul_assign(Integer::from(3u32), &Integer::from(-4)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.sub_mul_assign(Integer::from(-0x10000), &(-Integer::from(10u32).pow(12))); assert_eq!(x, -65537000000000000i64); ``` ### impl SubMulAssign<Integer, Integer> for Integer #### fn sub_mul_assign(&mut self, y: Integer, z: Integer) Subtracts the product of two other `Integer`s from an `Integer` in place, taking both `Integer`s on the right-hand side by value. $x \gets x - yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::integer::Integer; let mut x = Integer::from(10u32); x.sub_mul_assign(Integer::from(3u32), Integer::from(-4)); assert_eq!(x, 22); let mut x = -Integer::from(10u32).pow(12); x.sub_mul_assign(Integer::from(-0x10000), -Integer::from(10u32).pow(12)); assert_eq!(x, -65537000000000000i64); ``` ### impl<'a> Sum<&'a Integer> for Integer #### fn sum<I>(xs: I) -> Integerwhere I: Iterator<Item = &'a Integer>, Adds up all the `Integer`s in an iterator of `Integer` references. $$ f((x_i)_ {i=0}^{n-1}) = \sum_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Integer::sum(xs.map(Integer::significant_bits))`. 
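Before the examples, a brief sketch relating `Sum` to a manual fold; the equivalence follows directly from the definition $\sum_{i=0}^{n-1} x_i$, and the fold relies on the `Integer` `Add` impls documented earlier on this page:

```rust
use malachite_base::num::basic::traits::Zero;
use malachite_nz::integer::Integer;
use std::iter::Sum;

// Summing references leaves the original vector usable afterwards.
let xs = vec![Integer::from(2), Integer::from(-3), Integer::from(5), Integer::from(7)];
let total = Integer::sum(xs.iter());
let folded = xs.iter().fold(Integer::ZERO, |acc, x| acc + x);
assert_eq!(total, folded);
assert_eq!(total, 11);
```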
##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use std::iter::Sum; assert_eq!(Integer::sum(vec_from_str::<Integer>("[2, -3, 5, 7]").unwrap().iter()), 11); ``` ### impl Sum<Integer> for Integer #### fn sum<I>(xs: I) -> Integerwhere I: Iterator<Item = Integer>, Adds up all the `Integer`s in an iterator. $$ f((x_i)_ {i=0}^{n-1}) = \sum_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Integer::sum(xs.map(Integer::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use std::iter::Sum; assert_eq!( Integer::sum(vec_from_str::<Integer>("[2, -3, 5, 7]").unwrap().into_iter()), 11 ); ``` ### impl ToSci for Integer #### fn fmt_sci_valid(&self, options: ToSciOptions) -> bool Determines whether an `Integer` can be converted to a string using `to_sci` and a particular set of options. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; let mut options = ToSciOptions::default(); assert!(Integer::from(123).fmt_sci_valid(options)); assert!(Integer::from(u128::MAX).fmt_sci_valid(options)); // u128::MAX has more than 16 significant digits options.set_rounding_mode(RoundingMode::Exact); assert!(!Integer::from(u128::MAX).fmt_sci_valid(options)); options.set_precision(50); assert!(Integer::from(u128::MAX).fmt_sci_valid(options)); ``` #### fn fmt_sci( &self, f: &mut Formatter<'_>, options: ToSciOptions ) -> Result<(), ErrorConverts an `Integer` to a string using a specified base, possibly formatting the number using scientific notation. See `ToSciOptions` for details on the available options. Note that setting `neg_exp_threshold` has no effect, since there is never a need to use negative exponents when representing an `Integer`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `options.rounding_mode` is `Exact`, but the size options are such that the input must be rounded. 
##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::integer::Integer; assert_eq!(Integer::from(u128::MAX).to_sci().to_string(), "3.402823669209385e38"); assert_eq!(Integer::from(i128::MIN).to_sci().to_string(), "-1.701411834604692e38"); let n = Integer::from(123456u32); let mut options = ToSciOptions::default(); assert_eq!(n.to_sci_with_options(options).to_string(), "123456"); options.set_precision(3); assert_eq!(n.to_sci_with_options(options).to_string(), "1.23e5"); options.set_rounding_mode(RoundingMode::Ceiling); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24e5"); options.set_e_uppercase(); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24E5"); options.set_force_exponent_plus_sign(true); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24E+5"); options = ToSciOptions::default(); options.set_base(36); assert_eq!(n.to_sci_with_options(options).to_string(), "2n9c"); options.set_uppercase(); assert_eq!(n.to_sci_with_options(options).to_string(), "2N9C"); options.set_base(2); options.set_precision(10); assert_eq!(n.to_sci_with_options(options).to_string(), "1.1110001e16"); options.set_include_trailing_zeros(true); assert_eq!(n.to_sci_with_options(options).to_string(), "1.111000100e16"); ``` #### fn to_sci_with_options(&self, options: ToSciOptions) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation.#### fn to_sci(&self) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation, using the default `ToSciOptions`.### impl ToStringBase for Integer #### fn to_string_base(&self, base: u8) -> String Converts an `Integer` to a `String` using a specified base. Digits from 0 to 9 become `char`s from `'0'` to `'9'`. Digits from 10 to 35 become the lowercase `char`s `'a'` to `'z'`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2 or greater than 36. ##### Examples ``` use malachite_base::num::conversion::traits::ToStringBase; use malachite_nz::integer::Integer; assert_eq!(Integer::from(1000).to_string_base(2), "1111101000"); assert_eq!(Integer::from(1000).to_string_base(10), "1000"); assert_eq!(Integer::from(1000).to_string_base(36), "rs"); assert_eq!(Integer::from(-1000).to_string_base(2), "-1111101000"); assert_eq!(Integer::from(-1000).to_string_base(10), "-1000"); assert_eq!(Integer::from(-1000).to_string_base(36), "-rs"); ``` #### fn to_string_base_upper(&self, base: u8) -> String Converts an `Integer` to a `String` using a specified base. Digits from 0 to 9 become `char`s from `'0'` to `'9'`. Digits from 10 to 35 become the uppercase `char`s `'A'` to `'Z'`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2 or greater than 36. 
##### Examples ``` use malachite_base::num::conversion::traits::ToStringBase; use malachite_nz::integer::Integer; assert_eq!(Integer::from(1000).to_string_base_upper(2), "1111101000"); assert_eq!(Integer::from(1000).to_string_base_upper(10), "1000"); assert_eq!(Integer::from(1000).to_string_base_upper(36), "RS"); assert_eq!(Integer::from(-1000).to_string_base_upper(2), "-1111101000"); assert_eq!(Integer::from(-1000).to_string_base_upper(10), "-1000"); assert_eq!(Integer::from(-1000).to_string_base_upper(36), "-RS"); ``` ### impl<'a> TryFrom<&'a Integer> for Natural #### fn try_from( value: &'a Integer ) -> Result<Natural, <Natural as TryFrom<&'a Integer>>::Error> Converts an `Integer` to a `Natural`, taking the `Integer` by reference. If the `Integer` is negative, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::try_from(&Integer::from(123)).to_debug_string(), "Ok(123)"); assert_eq!( Natural::try_from(&Integer::from(-123)).to_debug_string(), "Err(NaturalFromIntegerError)" ); assert_eq!( Natural::try_from(&Integer::from(10u32).pow(12)).to_debug_string(), "Ok(1000000000000)" ); assert_eq!( Natural::try_from(&(-Integer::from(10u32).pow(12))).to_debug_string(), "Err(NaturalFromIntegerError)" ); ``` #### type Error = NaturalFromIntegerError The type returned in the event of a conversion error. ### impl<'a> TryFrom<&'a Rational> for Integer #### fn try_from( x: &Rational ) -> Result<Integer, <Integer as TryFrom<&'a Rational>>::Error> Converts a `Rational` to an `Integer`, taking the `Rational` by reference. If the `Rational` is not an integer, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::conversion::integer_from_rational::IntegerFromRationalError; use malachite_q::Rational; assert_eq!(Integer::try_from(&Rational::from(123)).unwrap(), 123); assert_eq!(Integer::try_from(&Rational::from(-123)).unwrap(), -123); assert_eq!( Integer::try_from(&Rational::from_signeds(22, 7)), Err(IntegerFromRationalError) ); ``` #### type Error = IntegerFromRationalError The type returned in the event of a conversion error. ### impl TryFrom<Integer> for Natural #### fn try_from( value: Integer ) -> Result<Natural, <Natural as TryFrom<Integer>>::Error> Converts an `Integer` to a `Natural`, taking the `Integer` by value. If the `Integer` is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory.
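As an aside before the examples: because the conversion is fallible, callers usually branch on the returned `Result`. A minimal sketch of one common pattern (the helper name `to_natural_or_zero` is made up for illustration):

```rust
use malachite_base::num::basic::traits::Zero;
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;

// Hypothetical helper: fall back to zero when the Integer is negative.
fn to_natural_or_zero(x: Integer) -> Natural {
    Natural::try_from(x).unwrap_or(Natural::ZERO)
}

assert_eq!(to_natural_or_zero(Integer::from(5)), 5);
assert_eq!(to_natural_or_zero(Integer::from(-5)), 0);
```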
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::try_from(Integer::from(123)).to_debug_string(), "Ok(123)"); assert_eq!( Natural::try_from(Integer::from(-123)).to_debug_string(), "Err(NaturalFromIntegerError)" ); assert_eq!( Natural::try_from(Integer::from(10u32).pow(12)).to_debug_string(), "Ok(1000000000000)" ); assert_eq!( Natural::try_from(-Integer::from(10u32).pow(12)).to_debug_string(), "Err(NaturalFromIntegerError)" ); ``` #### type Error = NaturalFromIntegerError The type returned in the event of a conversion error.### impl TryFrom<Rational> for Integer #### fn try_from( x: Rational ) -> Result<Integer, <Integer as TryFrom<Rational>>::ErrorConverts a `Rational` to an `Integer`, taking the `Rational` by value. If the `Rational` is not an integer, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::conversion::integer_from_rational::IntegerFromRationalError; use malachite_q::Rational; assert_eq!(Integer::try_from(Rational::from(123)).unwrap(), 123); assert_eq!(Integer::try_from(Rational::from(-123)).unwrap(), -123); assert_eq!( Integer::try_from(Rational::from_signeds(22, 7)), Err(IntegerFromRationalError) ); ``` #### type Error = IntegerFromRationalError The type returned in the event of a conversion error.### impl TryFrom<SerdeInteger> for Integer #### type Error = String The type returned in the event of a conversion error.#### fn try_from(s: SerdeInteger) -> Result<Integer, StringPerforms the conversion.### impl TryFrom<f32> for Integer #### fn try_from(value: f32) -> Result<Integer, <Integer as TryFrom<f32>>::ErrorConverts a primitive float to an `Integer`. If the input isn’t exactly equal to some `Integer`, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Examples See here. #### type Error = SignedFromFloatError The type returned in the event of a conversion error.### impl TryFrom<f64> for Integer #### fn try_from(value: f64) -> Result<Integer, <Integer as TryFrom<f64>>::ErrorConverts a primitive float to an `Integer`. If the input isn’t exactly equal to some `Integer`, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Examples See here. #### type Error = SignedFromFloatError The type returned in the event of a conversion error.### impl Two for Integer The constant 2. #### const TWO: Integer = _ ### impl<'a> UnsignedAbs for &'a Integer #### fn unsigned_abs(self) -> Natural Takes the absolute value of an `Integer`, taking the `Integer` by reference and converting the result to a `Natural`. $$ f(x) = |x|. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::UnsignedAbs; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!((&Integer::ZERO).unsigned_abs(), 0); assert_eq!((&Integer::from(123)).unsigned_abs(), 123); assert_eq!((&Integer::from(-123)).unsigned_abs(), 123); ``` #### type Output = Natural ### impl UnsignedAbs for Integer #### fn unsigned_abs(self) -> Natural Takes the absolute value of an `Integer`, taking the `Integer` by value and converting the result to a `Natural`. $$ f(x) = |x|. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::UnsignedAbs; use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; assert_eq!(Integer::ZERO.unsigned_abs(), 0); assert_eq!(Integer::from(123).unsigned_abs(), 123); assert_eq!(Integer::from(-123).unsigned_abs(), 123); ``` #### type Output = Natural ### impl UpperHex for Integer #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts an `Integer` to a hexadecimal `String` using uppercase characters. Using the `#` format flag prepends `"0x"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToUpperHexString; use malachite_nz::integer::Integer; use std::str::FromStr; assert_eq!(Integer::ZERO.to_upper_hex_string(), "0"); assert_eq!(Integer::from(123).to_upper_hex_string(), "7B"); assert_eq!( Integer::from_str("1000000000000") .unwrap() .to_upper_hex_string(), "E8D4A51000" ); assert_eq!(format!("{:07X}", Integer::from(123)), "000007B"); assert_eq!(Integer::from(-123).to_upper_hex_string(), "-7B"); assert_eq!( Integer::from_str("-1000000000000") .unwrap() .to_upper_hex_string(), "-E8D4A51000" ); assert_eq!(format!("{:07X}", Integer::from(-123)), "-00007B"); assert_eq!(format!("{:#X}", Integer::ZERO), "0x0"); assert_eq!(format!("{:#X}", Integer::from(123)), "0x7B"); assert_eq!( format!("{:#X}", Integer::from_str("1000000000000").unwrap()), "0xE8D4A51000" ); assert_eq!(format!("{:#07X}", Integer::from(123)), "0x0007B"); assert_eq!(format!("{:#X}", Integer::from(-123)), "-0x7B"); assert_eq!( format!("{:#X}", Integer::from_str("-1000000000000").unwrap()), "-0xE8D4A51000" ); assert_eq!(format!("{:#07X}", Integer::from(-123)), "-0x007B"); ``` ### impl<'a> WrappingFrom<&'a Integer> for i128 #### fn wrapping_from(value: &Integer) -> i128 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for i16 #### fn wrapping_from(value: &Integer) -> i16 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for i32 #### fn wrapping_from(value: &Integer) -> i32 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
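Since the `WrappingFrom` examples on this page are only linked ("See here"), the following is a small illustrative sketch; the concrete values are derived from the wrapping-modulo-$2^W$ semantics described above rather than copied from the library's own examples:

```rust
use malachite_base::num::conversion::traits::WrappingFrom;
use malachite_nz::integer::Integer;

// Wrapping keeps only the low W bits, reinterpreted in the target type.
assert_eq!(u8::wrapping_from(&Integer::from(260)), 4);        // 260 mod 2^8
assert_eq!(u16::wrapping_from(&Integer::from(-1)), u16::MAX); // -1 mod 2^16
assert_eq!(i8::wrapping_from(&Integer::from(-1)), -1);
```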
### impl<'a> WrappingFrom<&'a Integer> for i64 #### fn wrapping_from(value: &Integer) -> i64 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for i8 #### fn wrapping_from(value: &Integer) -> i8 Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for isize #### fn wrapping_from(value: &Integer) -> isize Converts an `Integer` to a signed primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u128 #### fn wrapping_from(value: &Integer) -> u128 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u16 #### fn wrapping_from(value: &Integer) -> u16 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u32 #### fn wrapping_from(value: &Integer) -> u32 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u64 #### fn wrapping_from(value: &Integer) -> u64 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for u8 #### fn wrapping_from(value: &Integer) -> u8 Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Integer> for usize #### fn wrapping_from(value: &Integer) -> usize Converts an `Integer` to an unsigned primitive integer, wrapping modulo $2^W$, where $W$ is the width of the primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl Zero for Integer The constant 0. #### const ZERO: Integer = _ ### impl Eq for Integer ### impl StructuralEq for Integer ### impl StructuralPartialEq for Integer Auto Trait Implementations --- ### impl RefUnwindSafe for Integer ### impl Send for Integer ### impl Sync for Integer ### impl Unpin for Integer ### impl UnwindSafe for Integer Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. 
U: TryFrom<T>, #### fn exact_from(value: T) -> U ### impl<T, U> ExactInto<U> for Twhere U: ExactFrom<T>, #### fn exact_into(self) -> U ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> OverflowingInto<U> for Twhere U: OverflowingFrom<T>, #### fn overflowing_into(self) -> (U, bool) ### impl<T, U> RoundingInto<U> for Twhere U: RoundingFrom<T>, #### fn rounding_into(self, rm: RoundingMode) -> (U, Ordering) ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> SaturatingInto<U> for Twhere U: SaturatingFrom<T>, #### fn saturating_into(self) -> U ### impl<T> ToBinaryString for Twhere T: Binary, #### fn to_binary_string(&self) -> String Returns the `String` produced by `T`s `Binary` implementation. ##### Examples ``` use malachite_base::strings::ToBinaryString; assert_eq!(5u64.to_binary_string(), "101"); assert_eq!((-100i16).to_binary_string(), "1111111110011100"); ``` ### impl<T> ToDebugString for Twhere T: Debug, #### fn to_debug_string(&self) -> String Returns the `String` produced by `T`s `Debug` implementation. ##### Examples ``` use malachite_base::strings::ToDebugString; assert_eq!([1, 2, 3].to_debug_string(), "[1, 2, 3]"); assert_eq!( [vec![2, 3], vec![], vec![4]].to_debug_string(), "[[2, 3], [], [4]]" ); assert_eq!(Some(5).to_debug_string(), "Some(5)"); ``` ### impl<T> ToLowerHexString for Twhere T: LowerHex, #### fn to_lower_hex_string(&self) -> String Returns the `String` produced by `T`s `LowerHex` implementation. ##### Examples ``` use malachite_base::strings::ToLowerHexString; assert_eq!(50u64.to_lower_hex_string(), "32"); assert_eq!((-100i16).to_lower_hex_string(), "ff9c"); ``` ### impl<T> ToOctalString for Twhere T: Octal, #### fn to_octal_string(&self) -> String Returns the `String` produced by `T`s `Octal` implementation. ##### Examples ``` use malachite_base::strings::ToOctalString; assert_eq!(50u64.to_octal_string(), "62"); assert_eq!((-100i16).to_octal_string(), "177634"); ``` ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. T: UpperHex, #### fn to_upper_hex_string(&self) -> String Returns the `String` produced by `T`s `UpperHex` implementation. 
##### Examples ``` use malachite_base::strings::ToUpperHexString; assert_eq!(50u64.to_upper_hex_string(), "32"); assert_eq!((-100i16).to_upper_hex_string(), "FF9C"); ``` ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<V, T> VZip<V> for T where V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T, U> WrappingInto<U> for T where U: WrappingFrom<T>, #### fn wrapping_into(self) -> U

Struct malachite::Natural === ``` pub struct Natural(/* private fields */); ``` A natural (non-negative) integer. Any `Natural` small enough to fit into a `Limb` is represented inline. Only `Natural`s outside this range incur the costs of heap-allocation. Here's a diagram of a slice of `Natural`s (using 32-bit limbs) containing the first 8 values of Sylvester's sequence: (figure omitted: "Natural memory layout")
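To make the inline-versus-heap point above concrete, here is a minimal sketch using `limb_count` (documented further below); the assertions hold for both 32-bit and 64-bit limbs:

```rust
use malachite_base::num::arithmetic::traits::Pow;
use malachite_nz::natural::Natural;

// A value that fits in a single Limb versus one that needs several.
let small = Natural::from(123u32);
let large = Natural::from(10u32).pow(40); // about 133 bits, so more than one limb
assert_eq!(small.limb_count(), 1);
assert!(large.limb_count() > 1);
```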
Implementations --- ### impl Natural #### pub fn approx_log(&self) -> f64 Calculates the approximate natural logarithm of a nonzero `Natural`. $f(x) = (1+\epsilon)(\log x)$, where $|\epsilon| < 2^{-52}.$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::float::NiceFloat; use malachite_nz::natural::Natural; assert_eq!(NiceFloat(Natural::from(10u32).approx_log()), NiceFloat(2.3025850929940455)); assert_eq!( NiceFloat(Natural::from(10u32).pow(10000).approx_log()), NiceFloat(23025.850929940454) ); ``` This is equivalent to `fmpz_dlog` from `fmpz/dlog.c`, FLINT 2.7.1. ### impl Natural #### pub fn cmp_normalized(&self, other: &Natural) -> Ordering Returns a result of a comparison between two `Natural`s as if each had been multiplied by some power of 2 to bring it into the interval $[1, 2)$. That is, the comparison is equivalent to a comparison between $f(x)$ and $f(y)$, where $$ f(n) = n2^{-\lfloor\log_2 n \rfloor}. $$ The multiplication is not actually performed. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if either argument is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::cmp::Ordering; // 1 == 1.0 * 2^0, 4 == 1.0 * 2^2 // 1.0 == 1.0 assert_eq!(Natural::from(1u32).cmp_normalized(&Natural::from(4u32)), Ordering::Equal); // 5 == 1.25 * 2^2, 6 == 1.5 * 2^2 // 1.25 < 1.5 assert_eq!(Natural::from(5u32).cmp_normalized(&Natural::from(6u32)), Ordering::Less); // 3 == 1.5 * 2^1, 17 == 1.0625 * 2^4 // 1.5 > 1.0625 assert_eq!(Natural::from(3u32).cmp_normalized(&Natural::from(17u32)), Ordering::Greater); // 9 == 1.125 * 2^3, 36 == 1.125 * 2^5 // 1.125 == 1.125 assert_eq!(Natural::from(9u32).cmp_normalized(&Natural::from(36u32)), Ordering::Equal); ``` ### impl Natural #### pub fn from_limbs_asc(xs: &[u64]) -> Natural Converts a slice of limbs to a `Natural`. The limbs are in ascending order, so that less-significant limbs have lower indices in the input slice. This function borrows the limbs. If taking ownership of limbs is possible, `from_owned_limbs_asc` is more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. This function is more efficient than `from_limbs_desc`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::from_limbs_asc(&[]), 0); assert_eq!(Natural::from_limbs_asc(&[123]), 123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from_limbs_asc(&[3567587328, 232]), 1000000000000u64); } ``` #### pub fn from_limbs_desc(xs: &[u64]) -> Natural Converts a slice of limbs to a `Natural`. The limbs are in descending order, so that less-significant limbs have higher indices in the input slice. This function borrows the limbs. If taking ownership of the limbs is possible, `from_owned_limbs_desc` is more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. This function is less efficient than `from_limbs_asc`.
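As an aside tying these constructors to the accessors documented further below: limb decomposition and reconstruction round-trip, as this minimal sketch (using `to_limbs_asc` and `from_owned_limbs_asc`) illustrates:

```rust
use malachite_base::num::arithmetic::traits::Pow;
use malachite_nz::natural::Natural;

// Decompose into ascending limbs, then rebuild the same value.
let n = Natural::from(10u32).pow(12);
let limbs = n.to_limbs_asc();
assert_eq!(Natural::from_owned_limbs_asc(limbs), n);
```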
##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::from_limbs_desc(&[]), 0); assert_eq!(Natural::from_limbs_desc(&[123]), 123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from_limbs_desc(&[232, 3567587328]), 1000000000000u64); } ``` #### pub fn from_owned_limbs_asc(xs: Vec<u64, Global>) -> Natural Converts a `Vec` of limbs to a `Natural`. The limbs are in ascending order, so that less-significant limbs have lower indices in the input `Vec`. This function takes ownership of the limbs. If it’s necessary to borrow the limbs instead, use `from_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. This function is more efficient than `from_limbs_desc`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::from_owned_limbs_asc(vec![]), 0); assert_eq!(Natural::from_owned_limbs_asc(vec![123]), 123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from_owned_limbs_asc(vec![3567587328, 232]), 1000000000000u64); } ``` #### pub fn from_owned_limbs_desc(xs: Vec<u64, Global>) -> Natural Converts a `Vec` of limbs to a `Natural`. The limbs are in descending order, so that less-significant limbs have higher indices in the input `Vec`. This function takes ownership of the limbs. If it’s necessary to borrow the limbs instead, use `from_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.len()`. This function is less efficient than `from_limbs_asc`. ##### Examples ``` use malachite_base::num::basic::integers::PrimitiveInt; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::from_owned_limbs_desc(vec![]), 0); assert_eq!(Natural::from_owned_limbs_desc(vec![123]), 123); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from_owned_limbs_desc(vec![232, 3567587328]), 1000000000000u64); } ``` ### impl Natural #### pub const fn const_from(x: u64) -> Natural Converts a `Limb` to a `Natural`. This function is const, so it may be used to define constants. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; const TEN: Natural = Natural::const_from(10); assert_eq!(TEN, 10); ``` ### impl Natural #### pub fn limb_count(&self) -> u64 Returns the number of limbs of a `Natural`. Zero has 0 limbs. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert_eq!(Natural::ZERO.limb_count(), 0); assert_eq!(Natural::from(123u32).limb_count(), 1); assert_eq!(Natural::from(10u32).pow(12).limb_count(), 2); } ``` ### impl Natural #### pub fn sci_mantissa_and_exponent_round<T>( &self, rm: RoundingMode ) -> Option<(T, u64, Ordering)>where T: PrimitiveFloat, Returns a `Natural`’s scientific mantissa and exponent, rounding according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the mantissa and exponent represent a value that is less than, equal to, or greater than the original value. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the provided rounding mode. If the rounding mode is `Exact` but the conversion is not exact, `None` is returned. $$ f(x, r) \approx \left (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor\right ). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SciMantissaAndExponent; use malachite_base::num::float::NiceFloat; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let test = |n: Natural, rm: RoundingMode, out: Option<(f32, u64, Ordering)>| { assert_eq!( n.sci_mantissa_and_exponent_round(rm) .map(|(m, e, o)| (NiceFloat(m), e, o)), out.map(|(m, e, o)| (NiceFloat(m), e, o)) ); }; test(Natural::from(3u32), RoundingMode::Floor, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Down, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Ceiling, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Up, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Nearest, Some((1.5, 1, Ordering::Equal))); test(Natural::from(3u32), RoundingMode::Exact, Some((1.5, 1, Ordering::Equal))); test( Natural::from(123u32), RoundingMode::Floor, Some((1.921875, 6, Ordering::Equal)), ); test( Natural::from(123u32), RoundingMode::Down, Some((1.921875, 6, Ordering::Equal)), ); test( Natural::from(123u32), RoundingMode::Ceiling, Some((1.921875, 6, Ordering::Equal)), ); test(Natural::from(123u32), RoundingMode::Up, Some((1.921875, 6, Ordering::Equal))); test( Natural::from(123u32), RoundingMode::Nearest, Some((1.921875, 6, Ordering::Equal)), ); test( Natural::from(123u32), RoundingMode::Exact, Some((1.921875, 6, Ordering::Equal)), ); test( Natural::from(1000000000u32), RoundingMode::Nearest, Some((1.8626451, 29, Ordering::Equal)), ); test( Natural::from(10u32).pow(52), RoundingMode::Nearest, Some((1.670478, 172, Ordering::Greater)), ); test(Natural::from(10u32).pow(52), RoundingMode::Exact, None); ``` #### pub fn from_sci_mantissa_and_exponent_round<T>( sci_mantissa: T, sci_exponent: u64, rm: RoundingMode ) -> Option<(Natural, Ordering)>where T: PrimitiveFloat, Constructs a `Natural` from its scientific mantissa and exponent, rounding according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value represented by the mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. Some combinations of mantissas and exponents do not specify a `Natural`, in which case the resulting value is rounded to a `Natural` using the specified rounding mode. If the rounding mode is `Exact` but the input does not exactly specify a `Natural`, `None` is returned. 
$$ f(x, r) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. ##### Panics Panics if `sci_mantissa` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::SciMantissaAndExponent; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; use std::str::FromStr; let test = | mantissa: f32, exponent: u64, rm: RoundingMode, out: Option<(Natural, Ordering)> | { assert_eq!( Natural::from_sci_mantissa_and_exponent_round(mantissa, exponent, rm), out ); }; test(1.5, 1, RoundingMode::Floor, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Down, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Ceiling, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Up, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Nearest, Some((Natural::from(3u32), Ordering::Equal))); test(1.5, 1, RoundingMode::Exact, Some((Natural::from(3u32), Ordering::Equal))); test(1.51, 1, RoundingMode::Floor, Some((Natural::from(3u32), Ordering::Less))); test(1.51, 1, RoundingMode::Down, Some((Natural::from(3u32), Ordering::Less))); test(1.51, 1, RoundingMode::Ceiling, Some((Natural::from(4u32), Ordering::Greater))); test(1.51, 1, RoundingMode::Up, Some((Natural::from(4u32), Ordering::Greater))); test(1.51, 1, RoundingMode::Nearest, Some((Natural::from(3u32), Ordering::Less))); test(1.51, 1, RoundingMode::Exact, None); test( 1.670478, 172, RoundingMode::Nearest, Some( ( Natural::from_str("10000000254586612611935772707803116801852191350456320") .unwrap(), Ordering::Equal ) ), ); test(2.0, 1, RoundingMode::Floor, None); test(10.0, 1, RoundingMode::Floor, None); test(0.5, 1, RoundingMode::Floor, None); ``` ### impl Natural #### pub fn to_limbs_asc(&self) -> Vec<u64, GlobalReturns the limbs of a `Natural`, in ascending order, so that less-significant limbs have lower indices in the output vector. There are no trailing zero limbs. This function borrows the `Natural`. If taking ownership is possible instead, `into_limbs_asc` is more efficient. This function is more efficient than `to_limbs_desc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.to_limbs_asc().is_empty()); assert_eq!(Natural::from(123u32).to_limbs_asc(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).to_limbs_asc(), &[3567587328, 232]); } ``` #### pub fn to_limbs_desc(&self) -> Vec<u64, GlobalReturns the limbs of a `Natural` in descending order, so that less-significant limbs have higher indices in the output vector. There are no leading zero limbs. This function borrows the `Natural`. If taking ownership is possible instead, `into_limbs_desc` is more efficient. This function is less efficient than `to_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.to_limbs_desc().is_empty()); assert_eq!(Natural::from(123u32).to_limbs_desc(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).to_limbs_desc(), &[232, 3567587328]); } ``` #### pub fn into_limbs_asc(self) -> Vec<u64, GlobalReturns the limbs of a `Natural`, in ascending order, so that less-significant limbs have lower indices in the output vector. There are no trailing zero limbs. This function takes ownership of the `Natural`. If it’s necessary to borrow instead, use `to_limbs_asc`. This function is more efficient than `into_limbs_desc`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.into_limbs_asc().is_empty()); assert_eq!(Natural::from(123u32).into_limbs_asc(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).into_limbs_asc(), &[3567587328, 232]); } ``` #### pub fn into_limbs_desc(self) -> Vec<u64, GlobalReturns the limbs of a `Natural`, in descending order, so that less-significant limbs have higher indices in the output vector. There are no leading zero limbs. This function takes ownership of the `Natural`. If it’s necessary to borrow instead, use `to_limbs_desc`. This function is less efficient than `into_limbs_asc`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.into_limbs_desc().is_empty()); assert_eq!(Natural::from(123u32).into_limbs_desc(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).into_limbs_desc(), &[232, 3567587328]); } ``` #### pub fn limbs(&self) -> LimbIterator<'_Returns a double-ended iterator over the limbs of a `Natural`. The forward order is ascending, so that less-significant limbs appear first. There are no trailing zero limbs going forward, or leading zeros going backward. If it’s necessary to get a `Vec` of all the limbs, consider using `to_limbs_asc`, `to_limbs_desc`, `into_limbs_asc`, or `into_limbs_desc` instead. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use itertools::Itertools; use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::integers::PrimitiveInt; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_nz::platform::Limb; if Limb::WIDTH == u32::WIDTH { assert!(Natural::ZERO.limbs().next().is_none()); assert_eq!(Natural::from(123u32).limbs().collect_vec(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!(Natural::from(10u32).pow(12).limbs().collect_vec(), &[3567587328, 232]); assert!(Natural::ZERO.limbs().rev().next().is_none()); assert_eq!(Natural::from(123u32).limbs().rev().collect_vec(), &[123]); // 10^12 = 232 * 2^32 + 3567587328 assert_eq!( Natural::from(10u32).pow(12).limbs().rev().collect_vec(), &[232, 3567587328] ); } ``` ### impl Natural #### pub fn trailing_zeros(&self) -> Option<u64> Returns the number of trailing zeros in the binary expansion of a `Natural` (equivalently, the multiplicity of 2 in its prime factorization), or `None` if the `Natural` is 0. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.trailing_zeros(), None); assert_eq!(Natural::from(3u32).trailing_zeros(), Some(0)); assert_eq!(Natural::from(72u32).trailing_zeros(), Some(3)); assert_eq!(Natural::from(100u32).trailing_zeros(), Some(2)); assert_eq!(Natural::from(10u32).pow(12).trailing_zeros(), Some(12)); ``` Trait Implementations --- ### impl<'a, 'b> Add<&'a Natural> for &'b Natural #### fn add(self, other: &'a Natural) -> Natural Adds two `Natural`s, taking both by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(&Natural::ZERO + &Natural::from(123u32), 123); assert_eq!(&Natural::from(123u32) + &Natural::ZERO, 123); assert_eq!(&Natural::from(123u32) + &Natural::from(456u32), 579); assert_eq!( &Natural::from(10u32).pow(12) + &(Natural::from(10u32).pow(12) << 1), 3000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `+` operator. ### impl<'a> Add<&'a Natural> for Natural #### fn add(self, other: &'a Natural) -> Natural Adds two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO + &Natural::from(123u32), 123); assert_eq!(Natural::from(123u32) + &Natural::ZERO, 123); assert_eq!(Natural::from(123u32) + &Natural::from(456u32), 579); assert_eq!( Natural::from(10u32).pow(12) + &(Natural::from(10u32).pow(12) << 1), 3000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `+` operator. ### impl<'a> Add<Natural> for &'a Natural #### fn add(self, other: Natural) -> Natural Adds two `Natural`s, taking the first by reference and the second by value.
$$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(&Natural::ZERO + Natural::from(123u32), 123); assert_eq!(&Natural::from(123u32) + Natural::ZERO, 123); assert_eq!(&Natural::from(123u32) + Natural::from(456u32), 579); assert_eq!( &Natural::from(10u32).pow(12) + (Natural::from(10u32).pow(12) << 1), 3000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `+` operator.### impl Add<Natural> for Natural #### fn add(self, other: Natural) -> Natural Adds two `Natural`s, taking both by value. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO + Natural::from(123u32), 123); assert_eq!(Natural::from(123u32) + Natural::ZERO, 123); assert_eq!(Natural::from(123u32) + Natural::from(456u32), 579); assert_eq!( Natural::from(10u32).pow(12) + (Natural::from(10u32).pow(12) << 1), 3000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `+` operator.### impl<'a> AddAssign<&'a Natural> for Natural #### fn add_assign(&mut self, other: &'a Natural) Adds a `Natural` to a `Natural` in place, taking the `Natural` on the right-hand side by reference. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x += &Natural::from(10u32).pow(12); x += &(Natural::from(10u32).pow(12) * Natural::from(2u32)); x += &(Natural::from(10u32).pow(12) * Natural::from(3u32)); x += &(Natural::from(10u32).pow(12) * Natural::from(4u32)); assert_eq!(x, 10000000000000u64); ``` ### impl AddAssign<Natural> for Natural #### fn add_assign(&mut self, other: Natural) Adds a `Natural` to a `Natural` in place, taking the `Natural` on the right-hand side by value. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x += Natural::from(10u32).pow(12); x += Natural::from(10u32).pow(12) * Natural::from(2u32); x += Natural::from(10u32).pow(12) * Natural::from(3u32); x += Natural::from(10u32).pow(12) * Natural::from(4u32); assert_eq!(x, 10000000000000u64); ``` ### impl<'a> AddMul<&'a Natural, Natural> for Natural #### fn add_mul(self, y: &'a Natural, z: Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking the first and third by value and the second by reference. $f(x, y, z) = x + yz$. 
##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).add_mul(&Natural::from(3u32), Natural::from(4u32)), 22); assert_eq!( Natural::from(10u32).pow(12) .add_mul(&Natural::from(0x10000u32), Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl<'a, 'b, 'c> AddMul<&'a Natural, &'b Natural> for &'c Natural #### fn add_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking all three by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).add_mul(&Natural::from(3u32), &Natural::from(4u32)), 22); assert_eq!( (&Natural::from(10u32).pow(12)) .add_mul(&Natural::from(0x10000u32), &Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl<'a, 'b> AddMul<&'a Natural, &'b Natural> for Natural #### fn add_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking the first by value and the second and third by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).add_mul(&Natural::from(3u32), &Natural::from(4u32)), 22); assert_eq!( Natural::from(10u32).pow(12) .add_mul(&Natural::from(0x10000u32), &Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl<'a> AddMul<Natural, &'a Natural> for Natural #### fn add_mul(self, y: Natural, z: &'a Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking the first two by value and the third by reference. $f(x, y, z) = x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).add_mul(Natural::from(3u32), &Natural::from(4u32)), 22); assert_eq!( Natural::from(10u32).pow(12) .add_mul(Natural::from(0x10000u32), &Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl AddMul<Natural, Natural> for Natural #### fn add_mul(self, y: Natural, z: Natural) -> Natural Adds a `Natural` and the product of two other `Natural`s, taking all three by value. $f(x, y, z) = x + yz$. 
##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMul, Pow}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).add_mul(Natural::from(3u32), Natural::from(4u32)), 22); assert_eq!( Natural::from(10u32).pow(12) .add_mul(Natural::from(0x10000u32), Natural::from(10u32).pow(12)), 65537000000000000u64 ); ``` #### type Output = Natural ### impl<'a> AddMulAssign<&'a Natural, Natural> for Natural #### fn add_mul_assign(&mut self, y: &'a Natural, z: Natural) Adds the product of two other `Natural`s to a `Natural` in place, taking the first `Natural` on the right-hand side by reference and the second by value. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.add_mul_assign(&Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 22); let mut x = Natural::from(10u32).pow(12); x.add_mul_assign(&Natural::from(0x10000u32), Natural::from(10u32).pow(12)); assert_eq!(x, 65537000000000000u64); ``` ### impl<'a, 'b> AddMulAssign<&'a Natural, &'b Natural> for Natural #### fn add_mul_assign(&mut self, y: &'a Natural, z: &'b Natural) Adds the product of two other `Natural`s to a `Natural` in place, taking both `Natural`s on the right-hand side by reference. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.add_mul_assign(&Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 22); let mut x = Natural::from(10u32).pow(12); x.add_mul_assign(&Natural::from(0x10000u32), &Natural::from(10u32).pow(12)); assert_eq!(x, 65537000000000000u64); ``` ### impl<'a> AddMulAssign<Natural, &'a Natural> for Natural #### fn add_mul_assign(&mut self, y: Natural, z: &'a Natural) Adds the product of two other `Natural`s to a `Natural` in place, taking the first `Natural` on the right-hand side by value and the second by reference. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
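Before the documented examples below, here is a small illustrative use of the `AddMulAssign` family: accumulating a sum of products in place, taking both right-hand operands by reference so the input vectors stay intact (the names `xs`, `ys`, and `acc` are mine, not from the crate):

```
use malachite_base::num::arithmetic::traits::AddMulAssign;
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;

// acc <- acc + xs[i] * ys[i], accumulated in place.
let xs = vec![Natural::from(2u32), Natural::from(3u32), Natural::from(5u32)];
let ys = vec![Natural::from(7u32), Natural::from(11u32), Natural::from(13u32)];
let mut acc = Natural::ZERO;
for (x, y) in xs.iter().zip(ys.iter()) {
    acc.add_mul_assign(x, y); // AddMulAssign<&Natural, &Natural>
}
assert_eq!(acc, 112u32); // 2 * 7 + 3 * 11 + 5 * 13
```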
##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.add_mul_assign(Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 22); let mut x = Natural::from(10u32).pow(12); x.add_mul_assign(Natural::from(0x10000u32), &Natural::from(10u32).pow(12)); assert_eq!(x, 65537000000000000u64); ``` ### impl AddMulAssign<Natural, Natural> for Natural #### fn add_mul_assign(&mut self, y: Natural, z: Natural) Adds the product of two other `Natural`s to a `Natural` in place, taking both `Natural`s on the right-hand side by value. $x \gets x + yz$. ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{AddMulAssign, Pow}; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32); x.add_mul_assign(Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 22); let mut x = Natural::from(10u32).pow(12); x.add_mul_assign(Natural::from(0x10000u32), Natural::from(10u32).pow(12)); assert_eq!(x, 65537000000000000u64); ``` ### impl Binary for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Natural` to a binary `String`. Using the `#` format flag prepends `"0b"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToBinaryString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_binary_string(), "0"); assert_eq!(Natural::from(123u32).to_binary_string(), "1111011"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_binary_string(), "1110100011010100101001010001000000000000" ); assert_eq!(format!("{:011b}", Natural::from(123u32)), "00001111011"); assert_eq!(format!("{:#b}", Natural::ZERO), "0b0"); assert_eq!(format!("{:#b}", Natural::from(123u32)), "0b1111011"); assert_eq!( format!("{:#b}", Natural::from_str("1000000000000").unwrap()), "0b1110100011010100101001010001000000000000" ); assert_eq!(format!("{:#011b}", Natural::from(123u32)), "0b001111011"); ``` ### impl<'a> BinomialCoefficient<&'a Natural> for Natural #### fn binomial_coefficient(n: &'a Natural, k: &'a Natural) -> Natural Computes the binomial coefficient of two `Natural`s, taking both by reference. $$ f(n, k) =binom{n}{k} =frac{n!}{k!(n-k)!}. 
$$ ##### Worst-case complexity TODO ##### Examples ``` use malachite_base::num::arithmetic::traits::BinomialCoefficient; use malachite_nz::natural::Natural; assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(0u32)), 1); assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(1u32)), 4); assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(2u32)), 6); assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(3u32)), 4); assert_eq!(Natural::binomial_coefficient(&Natural::from(4u32), &Natural::from(4u32)), 1); assert_eq!( Natural::binomial_coefficient(&Natural::from(10u32), &Natural::from(5u32)), 252 ); assert_eq!( Natural::binomial_coefficient(&Natural::from(100u32), &Natural::from(50u32)) .to_string(), "100891344545564193334812497256" ); ``` ### impl BinomialCoefficient<Natural> for Natural #### fn binomial_coefficient(n: Natural, k: Natural) -> Natural Computes the binomial coefficient of two `Natural`s, taking both by value. $$ f(n, k) =binom{n}{k} =frac{n!}{k!(n-k)!}. $$ ##### Worst-case complexity TODO ##### Examples ``` use malachite_base::num::arithmetic::traits::BinomialCoefficient; use malachite_nz::natural::Natural; assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(0u32)), 1); assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(1u32)), 4); assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(2u32)), 6); assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(3u32)), 4); assert_eq!(Natural::binomial_coefficient(Natural::from(4u32), Natural::from(4u32)), 1); assert_eq!(Natural::binomial_coefficient(Natural::from(10u32), Natural::from(5u32)), 252); assert_eq!( Natural::binomial_coefficient(Natural::from(100u32), Natural::from(50u32)).to_string(), "100891344545564193334812497256" ); ``` ### impl BitAccess for Natural Provides functions for accessing and modifying the $i$th bit of a `Natural`, or the coefficient of $2^i$ in its binary expansion. #### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitAccess; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.assign_bit(2, true); x.assign_bit(5, true); x.assign_bit(6, true); assert_eq!(x, 100); x.assign_bit(2, false); x.assign_bit(5, false); x.assign_bit(6, false); assert_eq!(x, 0); let mut x = Natural::ZERO; x.flip_bit(10); assert_eq!(x, 1024); x.flip_bit(10); assert_eq!(x, 0); ``` #### fn get_bit(&self, index: u64) -> bool Determines whether the $i$th bit of a `Natural`, or the coefficient of $2^i$ in its binary expansion, is 0 or 1. `false` means 0 and `true` means 1. Getting bits beyond the `Natural`’s width is allowed; those bits are `false`. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $f(n, j) = (b_j = 1)$. ##### Worst-case complexity Constant time and additional memory. 
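As a supplement to the `BitAccess` methods described here, one natural application is treating a `Natural` as an unbounded bit set; a brief sketch (the variable name `set` is illustrative):

```
use malachite_base::num::basic::traits::Zero;
use malachite_base::num::logic::traits::BitAccess;
use malachite_nz::natural::Natural;

// Use a Natural as a growable set of small indices: bit i is set iff i is a member.
let mut set = Natural::ZERO;
set.set_bit(3);
set.set_bit(64); // indices beyond one limb are fine; the Natural simply grows
assert!(set.get_bit(3));
assert!(!set.get_bit(10)); // reading past the current width just returns false
set.flip_bit(3); // remove 3 again
assert!(!set.get_bit(3));
assert!(set.get_bit(64));
```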
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::logic::traits::BitAccess; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32).get_bit(2), false); assert_eq!(Natural::from(123u32).get_bit(3), true); assert_eq!(Natural::from(123u32).get_bit(100), false); assert_eq!(Natural::from(10u32).pow(12).get_bit(12), true); assert_eq!(Natural::from(10u32).pow(12).get_bit(100), false); ``` #### fn set_bit(&mut self, index: u64) Sets the $i$th bit of a `Natural`, or the coefficient of $2^i$ in its binary expansion, to 1. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $$ n \gets \begin{cases} n + 2^j & \text{if} \quad b_j = 0, \\ n & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.set_bit(2); x.set_bit(5); x.set_bit(6); assert_eq!(x, 100); ``` #### fn clear_bit(&mut self, index: u64) Sets the $i$th bit of a `Natural`, or the coefficient of $2^i$ in its binary expansion, to 0. Clearing bits beyond the `Natural`’s width is allowed; since those bits are already `false`, clearing them does nothing. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $$ n \gets \begin{cases} n - 2^j & \text{if} \quad b_j = 1, \\ n & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `index`. ##### Examples ``` use malachite_base::num::logic::traits::BitAccess; use malachite_nz::natural::Natural; let mut x = Natural::from(0x7fu32); x.clear_bit(0); x.clear_bit(1); x.clear_bit(3); x.clear_bit(4); assert_eq!(x, 100); ``` #### fn assign_bit(&mut self, index: u64, bit: bool) Sets the bit at `index` to whichever value `bit` is. Sets the bit at `index` to the opposite of its original value. #### fn bitand(self, other: &'a Natural) -> Natural Takes the bitwise and of two `Natural`s, taking both by reference. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) & &Natural::from(456u32), 72); assert_eq!( &Natural::from(10u32).pow(12) & &(Natural::from(10u32).pow(12) - Natural::ONE), 999999995904u64 ); ``` #### type Output = Natural The resulting type after applying the `&` operator.### impl<'a> BitAnd<&'a Natural> for Natural #### fn bitand(self, other: &'a Natural) -> Natural Takes the bitwise and of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) & &Natural::from(456u32), 72); assert_eq!( Natural::from(10u32).pow(12) & &(Natural::from(10u32).pow(12) - Natural::ONE), 999999995904u64 ); ``` #### type Output = Natural The resulting type after applying the `&` operator.### impl<'a> BitAnd<Natural> for &'a Natural #### fn bitand(self, other: Natural) -> Natural Takes the bitwise and of two `Natural`s, taking the first by reference and the seocnd by value. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) & Natural::from(456u32), 72); assert_eq!( &Natural::from(10u32).pow(12) & (Natural::from(10u32).pow(12) - Natural::ONE), 999999995904u64 ); ``` #### type Output = Natural The resulting type after applying the `&` operator.### impl BitAnd<Natural> for Natural #### fn bitand(self, other: Natural) -> Natural Takes the bitwise and of two `Natural`s, taking both by value. $$ f(x, y) = x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) & Natural::from(456u32), 72); assert_eq!( Natural::from(10u32).pow(12) & (Natural::from(10u32).pow(12) - Natural::ONE), 999999995904u64 ); ``` #### type Output = Natural The resulting type after applying the `&` operator.### impl<'a> BitAndAssign<&'a Natural> for Natural #### fn bitand_assign(&mut self, other: &'a Natural) Bitwise-ands a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by reference. $$ x \gets x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_nz::natural::Natural; let mut x = Natural::from(u32::MAX); x &= &Natural::from(0xf0ffffffu32); x &= &Natural::from(0xfff0_ffffu32); x &= &Natural::from(0xfffff0ffu32); x &= &Natural::from(0xfffffff0u32); assert_eq!(x, 0xf0f0_f0f0u32); ``` ### impl BitAndAssign<Natural> for Natural #### fn bitand_assign(&mut self, other: Natural) Bitwise-ands a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by value. $$ x \gets x \wedge y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; let mut x = Natural::from(u32::MAX); x &= Natural::from(0xf0ffffffu32); x &= Natural::from(0xfff0_ffffu32); x &= Natural::from(0xfffff0ffu32); x &= Natural::from(0xfffffff0u32); assert_eq!(x, 0xf0f0_f0f0u32); ``` ### impl BitBlockAccess for Natural #### fn get_bits(&self, start: u64, end: u64) -> Natural Extracts a block of adjacent bits from a `Natural`, taking the `Natural` by reference. The first index is `start` and last index is `end - 1`. Let $n$ be `self`, and let $p$ and $q$ be `start` and `end`, respectively. 
Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $$ f(n, p, q) = \sum_{i=p}^{q-1} 2^{b_{i-p}}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::natural::Natural; assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits(16, 48), 0xef011234u32); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits(4, 16), 0x567u32); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits(0, 100), 0xabcdef0112345678u64); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits(10, 10), 0); ``` #### fn get_bits_owned(self, start: u64, end: u64) -> Natural Extracts a block of adjacent bits from a `Natural`, taking the `Natural` by value. The first index is `start` and last index is `end - 1`. Let $n$ be `self`, and let $p$ and $q$ be `start` and `end`, respectively. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Then $$ f(n, p, q) = \sum_{i=p}^{q-1} 2^{b_{i-p}}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `start > end`. ##### Examples ``` use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::natural::Natural; assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits_owned(16, 48), 0xef011234u32); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits_owned(4, 16), 0x567u32); assert_eq!( Natural::from(0xabcdef0112345678u64).get_bits_owned(0, 100), 0xabcdef0112345678u64 ); assert_eq!(Natural::from(0xabcdef0112345678u64).get_bits_owned(10, 10), 0); ``` #### fn assign_bits(&mut self, start: u64, end: u64, bits: &Natural) Replaces a block of adjacent bits in a `Natural` with other bits. The least-significant `end - start` bits of `bits` are assigned to bits `start` through `end - 1`, inclusive, of `self`. Let $n$ be `self` and let $m$ be `bits`, and let $p$ and $q$ be `start` and `end`, respectively. If `bits` has fewer bits than `end - start`, the high bits are interpreted as 0. Let $$ n = \sum_{i=0}^\infty 2^{b_i}, $$ where for all $i$, $b_i\in \{0, 1\}$; so finitely many of the bits are 1, and the rest are 0. Let $$ m = \sum_{i=0}^k 2^{d_i}, $$ where for all $i$, $d_i\in \{0, 1\}$. Also, let $p, q \in \mathbb{N}$, and let $W$ be `max(self.significant_bits(), end + 1)`. Then $$ n \gets \sum_{i=0}^{W-1} 2^{c_i}, $$ where $$ \{c_0, c_1, c_2, \ldots, c_ {W-1}\} = \{b_0, b_1, b_2, \ldots, b_{p-1}, d_0, d_1, \ldots, d_{p-q-1}, b_q, \ldots, b_ {W-1}\}. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `end`. ##### Panics Panics if `start > end`. 
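Alongside the documented examples that follow, `get_bits` and `assign_bits` can be combined to pack and unpack fixed-width fields; a small sketch under that reading (the field layout and names are illustrative):

```
use malachite_base::num::basic::traits::Zero;
use malachite_base::num::logic::traits::BitBlockAccess;
use malachite_nz::natural::Natural;

// Pack two 16-bit fields into one Natural and read them back.
let lo = Natural::from(0x1234u32);
let hi = Natural::from(0xabcdu32);
let mut packed = Natural::ZERO;
packed.assign_bits(0, 16, &lo);  // bits 0..16 hold `lo`
packed.assign_bits(16, 32, &hi); // bits 16..32 hold `hi`
assert_eq!(packed, 0xabcd_1234u32);
assert_eq!(packed.get_bits(0, 16), 0x1234u32);
assert_eq!(packed.get_bits(16, 32), 0xabcdu32);
```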
##### Examples ``` use malachite_base::num::logic::traits::BitBlockAccess; use malachite_nz::natural::Natural; let mut n = Natural::from(123u32); n.assign_bits(5, 7, &Natural::from(456u32)); assert_eq!(n, 27); let mut n = Natural::from(123u32); n.assign_bits(64, 128, &Natural::from(456u32)); assert_eq!(n.to_string(), "8411715297611555537019"); let mut n = Natural::from(123u32); n.assign_bits(80, 100, &Natural::from(456u32)); assert_eq!(n.to_string(), "551270173744270903666016379"); ``` #### type Bits = Natural ### impl BitConvertible for Natural #### fn to_bits_asc(&self) -> Vec<bool, GlobalReturns a `Vec` containing the bits of a `Natural` in ascending order: least- to most-significant. If the number is 0, the `Vec` is empty; otherwise, it ends with `true`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert!(Natural::ZERO.to_bits_asc().is_empty()); // 105 = 1101001b assert_eq!( Natural::from(105u32).to_bits_asc(), &[true, false, false, true, false, true, true] ); ``` #### fn to_bits_desc(&self) -> Vec<bool, GlobalReturns a `Vec` containing the bits of a `Natural` in descending order: most- to least-significant. If the number is 0, the `Vec` is empty; otherwise, it begins with `true`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert!(Natural::ZERO.to_bits_desc().is_empty()); // 105 = 1101001b assert_eq!( Natural::from(105u32).to_bits_desc(), &[true, true, false, true, false, false, true] ); ``` #### fn from_bits_asc<I>(xs: I) -> Naturalwhere I: Iterator<Item = bool>, Converts an iterator of bits into a `Natural`. The bits should be in ascending order (least- to most-significant). $$ f((b_i)_ {i=0}^{k-1}) = \sum_{i=0}^{k-1}2^i [b_i], $$ where braces denote the Iverson bracket, which converts a bit to 0 or 1. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.count()`. ##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_nz::natural::Natural; use std::iter::empty; assert_eq!(Natural::from_bits_asc(empty()), 0); // 105 = 1101001b assert_eq!( Natural::from_bits_asc([true, false, false, true, false, true, true].iter().cloned()), 105 ); ``` #### fn from_bits_desc<I>(xs: I) -> Naturalwhere I: Iterator<Item = bool>, Converts an iterator of bits into a `Natural`. The bits should be in descending order (most- to least-significant). $$ f((b_i)_ {i=0}^{k-1}) = \sum_{i=0}^{k-1}2^{k-i-1} [b_i], $$ where braces denote the Iverson bracket, which converts a bit to 0 or 1. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `xs.count()`. 
##### Examples ``` use malachite_base::num::logic::traits::BitConvertible; use malachite_nz::natural::Natural; use std::iter::empty; assert_eq!(Natural::from_bits_desc(empty()), 0); // 105 = 1101001b assert_eq!( Natural::from_bits_desc([true, true, false, true, false, false, true].iter().cloned()), 105 ); ``` ### impl<'a> BitIterable for &'a Natural #### fn bits(self) -> NaturalBitIterator<'aReturns a double-ended iterator over the bits of a `Natural`. The forward order is ascending, so that less significant bits appear first. There are no trailing false bits going forward, or leading falses going backward. If it’s necessary to get a `Vec` of all the bits, consider using `to_bits_asc` or `to_bits_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::BitIterable; use malachite_nz::natural::Natural; assert!(Natural::ZERO.bits().next().is_none()); // 105 = 1101001b assert_eq!( Natural::from(105u32).bits().collect::<Vec<bool>>(), &[true, false, false, true, false, true, true] ); assert!(Natural::ZERO.bits().next_back().is_none()); // 105 = 1101001b assert_eq!( Natural::from(105u32).bits().rev().collect::<Vec<bool>>(), &[true, true, false, true, false, false, true] ); ``` #### type BitIterator = NaturalBitIterator<'a### impl<'a, 'b> BitOr<&'a Natural> for &'b Natural #### fn bitor(self, other: &'a Natural) -> Natural Takes the bitwise or of two `Natural`s, taking both by reference. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) | &Natural::from(456u32), 507); assert_eq!( &Natural::from(10u32).pow(12) | &(Natural::from(10u32).pow(12) - Natural::ONE), 1000000004095u64 ); ``` #### type Output = Natural The resulting type after applying the `|` operator.### impl<'a> BitOr<&'a Natural> for Natural #### fn bitor(self, other: &'a Natural) -> Natural Takes the bitwise or of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) | &Natural::from(456u32), 507); assert_eq!( Natural::from(10u32).pow(12) | &(Natural::from(10u32).pow(12) - Natural::ONE), 1000000004095u64 ); ``` #### type Output = Natural The resulting type after applying the `|` operator.### impl<'a> BitOr<Natural> for &'a Natural #### fn bitor(self, other: Natural) -> Natural Takes the bitwise or of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
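Looking back at the `bits()` iterator above: because it is a plain double-ended iterator with no trailing `false`s, standard iterator adaptors apply directly; for instance, counting the set bits of a value (purely an illustration, not a documented example):

```
use malachite_base::num::logic::traits::BitIterable;
use malachite_nz::natural::Natural;

// 105 = 0b1101001: four 1-bits out of seven significant bits.
let n = Natural::from(105u32);
assert_eq!(n.bits().filter(|&b| b).count(), 4);
assert_eq!(n.bits().count(), 7); // no trailing falses, so this is the bit length
```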
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) | Natural::from(456u32), 507); assert_eq!( &Natural::from(10u32).pow(12) | (Natural::from(10u32).pow(12) - Natural::ONE), 1000000004095u64 ); ``` #### type Output = Natural The resulting type after applying the `|` operator.### impl BitOr<Natural> for Natural #### fn bitor(self, other: Natural) -> Natural Takes the bitwise or of two `Natural`s, taking both by value. $$ f(x, y) = x \vee y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) | Natural::from(456u32), 507); assert_eq!( Natural::from(10u32).pow(12) | (Natural::from(10u32).pow(12) - Natural::ONE), 1000000004095u64 ); ``` #### type Output = Natural The resulting type after applying the `|` operator.### impl<'a> BitOrAssign<&'a Natural> for Natural #### fn bitor_assign(&mut self, other: &'a Natural) Bitwise-ors a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x |= &Natural::from(0x0000000fu32); x |= &Natural::from(0x00000f00u32); x |= &Natural::from(0x000f_0000u32); x |= &Natural::from(0x0f000000u32); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl BitOrAssign<Natural> for Natural #### fn bitor_assign(&mut self, other: Natural) Bitwise-ors a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x |= Natural::from(0x0000000fu32); x |= Natural::from(0x00000f00u32); x |= Natural::from(0x000f_0000u32); x |= Natural::from(0x0f000000u32); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl<'a> BitScan for &'a Natural #### fn index_of_next_false_bit(self, start: u64) -> Option<u64Given a `Natural` and a starting index, searches the `Natural` for the smallest index of a `false` bit that is greater than or equal to the starting index. Since every `Natural` has an implicit prefix of infinitely-many zeros, this function always returns a value. Starting beyond the `Natural`’s width is allowed; the result is the starting index. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::logic::traits::BitScan; use malachite_nz::natural::Natural; assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(0), Some(0)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(20), Some(20)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(31), Some(31)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(32), Some(34)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(33), Some(34)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(34), Some(34)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(35), Some(36)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_false_bit(100), Some(100)); ``` #### fn index_of_next_true_bit(self, start: u64) -> Option<u64Given a `Natural` and a starting index, searches the `Natural` for the smallest index of a `true` bit that is greater than or equal to the starting index. If the starting index is greater than or equal to the `Natural`’s width, the result is `None` since there are no `true` bits past that point. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::logic::traits::BitScan; use malachite_nz::natural::Natural; assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(0), Some(32)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(20), Some(32)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(31), Some(32)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(32), Some(32)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(33), Some(33)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(34), Some(35)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(35), Some(35)); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(36), None); assert_eq!(Natural::from(0xb00000000u64).index_of_next_true_bit(100), None); ``` ### impl<'a, 'b> BitXor<&'a Natural> for &'b Natural #### fn bitxor(self, other: &'a Natural) -> Natural Takes the bitwise xor of two `Natural`s, taking both by reference. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) ^ &Natural::from(456u32), 435); assert_eq!( &Natural::from(10u32).pow(12) ^ &(Natural::from(10u32).pow(12) - Natural::ONE), 8191 ); ``` #### type Output = Natural The resulting type after applying the `^` operator.### impl<'a> BitXor<&'a Natural> for Natural #### fn bitxor(self, other: &'a Natural) -> Natural Takes the bitwise xor of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. 
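Returning briefly to the `BitScan` methods above: they make a `Natural` usable as an allocation bitmap, scanning for the first free or occupied slot. A hedged sketch with illustrative names:

```
use malachite_base::num::basic::traits::Zero;
use malachite_base::num::logic::traits::{BitAccess, BitScan};
use malachite_nz::natural::Natural;

// Find the lowest free (zero) bit in a Natural used as an allocation bitmap.
let mut bitmap = Natural::ZERO;
bitmap.set_bit(0);
bitmap.set_bit(1);
bitmap.set_bit(3);
// Bits 0, 1, and 3 are taken, so the first free slot is index 2.
assert_eq!(bitmap.index_of_next_false_bit(0), Some(2));
// Searching for occupied slots from a given position:
assert_eq!(bitmap.index_of_next_true_bit(2), Some(3));
assert_eq!(bitmap.index_of_next_true_bit(4), None);
```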
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) ^ &Natural::from(456u32), 435); assert_eq!( Natural::from(10u32).pow(12) ^ &(Natural::from(10u32).pow(12) - Natural::ONE), 8191 ); ``` #### type Output = Natural The resulting type after applying the `^` operator.### impl<'a> BitXor<Natural> for &'a Natural #### fn bitxor(self, other: Natural) -> Natural Takes the bitwise xor of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) ^ Natural::from(456u32), 435); assert_eq!( &Natural::from(10u32).pow(12) ^ (Natural::from(10u32).pow(12) - Natural::ONE), 8191 ); ``` #### type Output = Natural The resulting type after applying the `^` operator.### impl BitXor<Natural> for Natural #### fn bitxor(self, other: Natural) -> Natural Takes the bitwise xor of two `Natural`s, taking both by value. $$ f(x, y) = x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) ^ Natural::from(456u32), 435); assert_eq!( Natural::from(10u32).pow(12) ^ (Natural::from(10u32).pow(12) - Natural::ONE), 8191 ); ``` #### type Output = Natural The resulting type after applying the `^` operator.### impl<'a> BitXorAssign<&'a Natural> for Natural #### fn bitxor_assign(&mut self, other: &'a Natural) Bitwise-xors a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by reference. $$ x \gets x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x |= Natural::from(0x0000000fu32); x |= Natural::from(0x00000f00u32); x |= Natural::from(0x000f_0000u32); x |= Natural::from(0x0f000000u32); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl BitXorAssign<Natural> for Natural #### fn bitxor_assign(&mut self, other: Natural) Bitwise-xors a `Natural` with another `Natural` in place, taking the `Natural` on the right-hand side by value. $$ x \gets x \oplus y. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. 
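One property of the xor operators above worth spelling out: `^=` toggles exactly the bits of the mask, so applying the same mask twice restores the original value. A minimal sketch:

```
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;

// XOR-assignment toggles bits; the same mask applied twice is a no-op overall.
let mask = Natural::from(0b1010u32);
let mut flags = Natural::ZERO;
flags ^= &mask; // turn the masked bits on
assert_eq!(flags, 0b1010u32);
flags ^= &mask; // toggle them off again
assert_eq!(flags, 0u32);
```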
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x ^= Natural::from(0x0000000fu32); x ^= Natural::from(0x00000f00u32); x ^= Natural::from(0x000f_0000u32); x ^= Natural::from(0x0f000000u32); assert_eq!(x, 0x0f0f_0f0f); ``` ### impl<'a> CeilingDivAssignNegMod<&'a Natural> for Natural #### fn ceiling_div_assign_neg_mod(&mut self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and returning the remainder of the negative of the first number divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x, $$ $$ x \gets \left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivAssignNegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 let mut x = Natural::from(23u32); assert_eq!(x.ceiling_div_assign_neg_mod(&Natural::from(10u32)), 7); assert_eq!(x, 3); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!( x.ceiling_div_assign_neg_mod(&Natural::from_str("1234567890987").unwrap()), 704498996588u64, ); assert_eq!(x, 810000006724u64); ``` #### type ModOutput = Natural ### impl CeilingDivAssignNegMod<Natural> for Natural #### fn ceiling_div_assign_neg_mod(&mut self, other: Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value and returning the remainder of the negative of the first number divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x, $$ $$ x \gets \left \lceil \frac{x}{y} \right \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivAssignNegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 let mut x = Natural::from(23u32); assert_eq!(x.ceiling_div_assign_neg_mod(Natural::from(10u32)), 7); assert_eq!(x, 3); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!( x.ceiling_div_assign_neg_mod(Natural::from_str("1234567890987").unwrap()), 704498996588u64, ); assert_eq!(x, 810000006724u64); ``` #### type ModOutput = Natural ### impl<'a, 'b> CeilingDivNegMod<&'b Natural> for &'a Natural #### fn ceiling_div_neg_mod(self, other: &'b Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by reference and returning the ceiling of the quotient and the remainder of the negative of the first `Natural` divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space y\left \lceil \frac{x}{y} \right \rceil - x \right ). 
$$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivNegMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!( (&Natural::from(23u32)).ceiling_div_neg_mod(&Natural::from(10u32)).to_debug_string(), "(3, 7)" ); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .ceiling_div_neg_mod(&Natural::from_str("1234567890987").unwrap()) .to_debug_string(), "(810000006724, 704498996588)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a> CeilingDivNegMod<&'a Natural> for Natural #### fn ceiling_div_neg_mod(self, other: &'a Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning the ceiling of the quotient and the remainder of the negative of the first `Natural` divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space y\left \lceil \frac{x}{y} \right \rceil - x \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivNegMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!( Natural::from(23u32).ceiling_div_neg_mod(&Natural::from(10u32)).to_debug_string(), "(3, 7)" ); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .ceiling_div_neg_mod(&Natural::from_str("1234567890987").unwrap()) .to_debug_string(), "(810000006724, 704498996588)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a> CeilingDivNegMod<Natural> for &'a Natural #### fn ceiling_div_neg_mod(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning the ceiling of the quotient and the remainder of the negative of the first `Natural` divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space y\left \lceil \frac{x}{y} \right \rceil - x \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
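The defining identity of `ceiling_div_neg_mod`, $x = qy - r$ with $0 \leq r < y$, can be checked directly; a short sketch (it additionally assumes the `Mul` implementations that malachite provides alongside the `Add` shown earlier):

```
use malachite_base::num::arithmetic::traits::CeilingDivNegMod;
use malachite_nz::natural::Natural;

// The ceiling quotient q and "negative" remainder r satisfy q * y == x + r.
let x = Natural::from(23u32);
let y = Natural::from(10u32);
let (q, r) = (&x).ceiling_div_neg_mod(&y);
assert_eq!(q, 3u32);
assert_eq!(r, 7u32);
assert_eq!(&q * &y, &x + &r); // 3 * 10 == 23 + 7
```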
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivNegMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!( (&Natural::from(23u32)).ceiling_div_neg_mod(Natural::from(10u32)).to_debug_string(), "(3, 7)" ); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .ceiling_div_neg_mod(Natural::from_str("1234567890987").unwrap()) .to_debug_string(), "(810000006724, 704498996588)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl CeilingDivNegMod<Natural> for Natural #### fn ceiling_div_neg_mod(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by value and returning the ceiling of the quotient and the remainder of the negative of the first `Natural` divided by the second. The quotient and remainder satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lceil \frac{x}{y} \right \rceil, \space y\left \lceil \frac{x}{y} \right \rceil - x \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingDivNegMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!( Natural::from(23u32).ceiling_div_neg_mod(Natural::from(10u32)).to_debug_string(), "(3, 7)" ); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .ceiling_div_neg_mod(Natural::from_str("1234567890987").unwrap()) .to_debug_string(), "(810000006724, 704498996588)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a, 'b> CeilingLogBase<&'b Natural> for &'a Natural #### fn ceiling_log_base(self, base: &Natural) -> u64 Returns the ceiling of the base-$b$ logarithm of a positive `Natural`. $f(x, b) = \lceil\log_b x\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if `self` is 0 or `base` is less than 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingLogBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(80u32).ceiling_log_base(&Natural::from(3u32)), 4); assert_eq!(Natural::from(81u32).ceiling_log_base(&Natural::from(3u32)), 4); assert_eq!(Natural::from(82u32).ceiling_log_base(&Natural::from(3u32)), 5); assert_eq!(Natural::from(4294967296u64).ceiling_log_base(&Natural::from(10u32)), 10); ``` This is equivalent to `fmpz_clog` from `fmpz/clog.c`, FLINT 2.7.1. #### type Output = u64 ### impl<'a> CeilingLogBase2 for &'a Natural #### fn ceiling_log_base_2(self) -> u64 Returns the ceiling of the base-2 logarithm of a positive `Natural`. $f(x) = \lceil\log_2 x\rceil$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0. 
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingLogBase2; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).ceiling_log_base_2(), 2); assert_eq!(Natural::from(100u32).ceiling_log_base_2(), 7); ``` #### type Output = u64 ### impl<'a> CeilingLogBasePowerOf2<u64> for &'a Natural #### fn ceiling_log_base_power_of_2(self, pow: u64) -> u64 Returns the ceiling of the base-$2^k$ logarithm of a positive `Natural`. $f(x, k) = \lceil\log_{2^k} x\rceil$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0 or `pow` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingLogBasePowerOf2; use malachite_nz::natural::Natural; assert_eq!(Natural::from(100u32).ceiling_log_base_power_of_2(2), 4); assert_eq!(Natural::from(4294967296u64).ceiling_log_base_power_of_2(8), 4); ``` #### type Output = u64 ### impl<'a> CeilingRoot<u64> for &'a Natural #### fn ceiling_root(self, exp: u64) -> Natural Returns the ceiling of the $n$th root of a `Natural`, taking the `Natural` by reference. $f(x, n) = \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRoot; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).ceiling_root(3), 10); assert_eq!(Natural::from(1000u16).ceiling_root(3), 10); assert_eq!(Natural::from(1001u16).ceiling_root(3), 11); assert_eq!(Natural::from(100000000000u64).ceiling_root(5), 159); ``` #### type Output = Natural ### impl CeilingRoot<u64> for Natural #### fn ceiling_root(self, exp: u64) -> Natural Returns the ceiling of the $n$th root of a `Natural`, taking the `Natural` by value. $f(x, n) = \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRoot; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).ceiling_root(3), 10); assert_eq!(Natural::from(1000u16).ceiling_root(3), 10); assert_eq!(Natural::from(1001u16).ceiling_root(3), 11); assert_eq!(Natural::from(100000000000u64).ceiling_root(5), 159); ``` #### type Output = Natural ### impl CeilingRootAssign<u64> for Natural #### fn ceiling_root_assign(&mut self, exp: u64) Replaces a `Natural` with the ceiling of its $n$th root. $x \gets \lceil\sqrt[n]{x}\rceil$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. 
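A common use of `ceiling_log_base_2` above is rounding a capacity up to the next power of two; the sketch below also assumes `Natural` implements `Shl<u64>` (a left shift by a bit count), which is not shown in this excerpt:

```
use malachite_base::num::arithmetic::traits::CeilingLogBase2;
use malachite_base::num::basic::traits::One;
use malachite_nz::natural::Natural;

// Round a positive capacity up to the next power of two via the ceiling log.
let capacity = Natural::from(100u32);
let bits = (&capacity).ceiling_log_base_2(); // 7, since 2^6 < 100 <= 2^7
let rounded = Natural::ONE << bits;
assert_eq!(rounded, 128u32);
```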
##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingRootAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(999u16); x.ceiling_root_assign(3); assert_eq!(x, 10); let mut x = Natural::from(1000u16); x.ceiling_root_assign(3); assert_eq!(x, 10); let mut x = Natural::from(1001u16); x.ceiling_root_assign(3); assert_eq!(x, 11); let mut x = Natural::from(100000000000u64); x.ceiling_root_assign(5); assert_eq!(x, 159); ``` ### impl<'a> CeilingSqrt for &'a Natural #### fn ceiling_sqrt(self) -> Natural Returns the ceiling of the square root of a `Natural`, taking it by value. $f(x) = \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrt; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).ceiling_sqrt(), 10); assert_eq!(Natural::from(100u8).ceiling_sqrt(), 10); assert_eq!(Natural::from(101u8).ceiling_sqrt(), 11); assert_eq!(Natural::from(1000000000u32).ceiling_sqrt(), 31623); assert_eq!(Natural::from(10000000000u64).ceiling_sqrt(), 100000); ``` #### type Output = Natural ### impl CeilingSqrt for Natural #### fn ceiling_sqrt(self) -> Natural Returns the ceiling of the square root of a `Natural`, taking it by value. $f(x) = \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrt; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).ceiling_sqrt(), 10); assert_eq!(Natural::from(100u8).ceiling_sqrt(), 10); assert_eq!(Natural::from(101u8).ceiling_sqrt(), 11); assert_eq!(Natural::from(1000000000u32).ceiling_sqrt(), 31623); assert_eq!(Natural::from(10000000000u64).ceiling_sqrt(), 100000); ``` #### type Output = Natural ### impl CeilingSqrtAssign for Natural #### fn ceiling_sqrt_assign(&mut self) Replaces a `Natural` with the ceiling of its square root. $x \gets \lceil\sqrt{x}\rceil$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingSqrtAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(99u8); x.ceiling_sqrt_assign(); assert_eq!(x, 10); let mut x = Natural::from(100u8); x.ceiling_sqrt_assign(); assert_eq!(x, 10); let mut x = Natural::from(101u8); x.ceiling_sqrt_assign(); assert_eq!(x, 11); let mut x = Natural::from(1000000000u32); x.ceiling_sqrt_assign(); assert_eq!(x, 31623); let mut x = Natural::from(10000000000u64); x.ceiling_sqrt_assign(); assert_eq!(x, 100000); ``` ### impl<'a, 'b> CheckedLogBase<&'b Natural> for &'a Natural #### fn checked_log_base(self, base: &Natural) -> Option<u64Returns the base-$b$ logarithm of a positive `Natural`. If the `Natural` is not a power of $b$, then `None` is returned. $$ f(x, b) = \begin{cases} \operatorname{Some}(\log_b x) & \text{if} \quad \log_b x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if `self` is 0 or `base` is less than 2. 
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(80u32).checked_log_base(&Natural::from(3u32)), None); assert_eq!(Natural::from(81u32).checked_log_base(&Natural::from(3u32)), Some(4)); assert_eq!(Natural::from(82u32).checked_log_base(&Natural::from(3u32)), None); assert_eq!(Natural::from(4294967296u64).checked_log_base(&Natural::from(10u32)), None); ``` #### type Output = u64 ### impl<'a> CheckedLogBase2 for &'a Natural #### fn checked_log_base_2(self) -> Option<u64Returns the base-2 logarithm of a positive `Natural`. If the `Natural` is not a power of 2, then `None` is returned. $$ f(x) = \begin{cases} \operatorname{Some}(\log_2 x) & \text{if} \quad \log_2 x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBase2; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::from(3u32).checked_log_base_2(), None); assert_eq!(Natural::from(4u32).checked_log_base_2(), Some(2)); assert_eq!( Natural::from_str("1267650600228229401496703205376").unwrap().checked_log_base_2(), Some(100) ); ``` #### type Output = u64 ### impl<'a> CheckedLogBasePowerOf2<u64> for &'a Natural #### fn checked_log_base_power_of_2(self, pow: u64) -> Option<u64Returns the base-$2^k$ logarithm of a positive `Natural`. If the `Natural` is not a power of $2^k$, then `None` is returned. $$ f(x, k) = \begin{cases} \operatorname{Some}(\log_{2^k} x) & \text{if} \quad \log_{2^k} x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0 or `pow` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBasePowerOf2; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::from(100u32).checked_log_base_power_of_2(2), None); assert_eq!(Natural::from(4294967296u64).checked_log_base_power_of_2(8), Some(4)); ``` #### type Output = u64 ### impl<'a> CheckedRoot<u64> for &'a Natural #### fn checked_root(self, exp: u64) -> Option<NaturalReturns the the $n$th root of a `Natural`, or `None` if the `Natural` is not a perfect $n$th power. The `Natural` is taken by reference. $$ f(x, n) = \begin{cases} \operatorname{Some}(sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. 
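Because `checked_log_base` above returns `Some` exactly when the logarithm is an integer, it doubles as an exact-power test; an illustrative sketch:

```
use malachite_base::num::arithmetic::traits::{CheckedLogBase, Pow};
use malachite_base::num::basic::traits::One;
use malachite_nz::natural::Natural;

// A Natural is an exact power of `base` iff checked_log_base returns an exponent.
let base = Natural::from(3u32);
let power = Natural::from(3u32).pow(20);
let not_power = &power + Natural::ONE;
assert_eq!((&power).checked_log_base(&base), Some(20));
assert_eq!((&not_power).checked_log_base(&base), None);
```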
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(999u16)).checked_root(3).to_debug_string(), "None"); assert_eq!((&Natural::from(1000u16)).checked_root(3).to_debug_string(), "Some(10)"); assert_eq!((&Natural::from(1001u16)).checked_root(3).to_debug_string(), "None"); assert_eq!((&Natural::from(100000000000u64)).checked_root(5).to_debug_string(), "None"); assert_eq!( (&Natural::from(10000000000u64)).checked_root(5).to_debug_string(), "Some(100)" ); ``` #### type Output = Natural ### impl CheckedRoot<u64> for Natural #### fn checked_root(self, exp: u64) -> Option<Natural> Returns the $n$th root of a `Natural`, or `None` if the `Natural` is not a perfect $n$th power. The `Natural` is taken by value. $$ f(x, n) = \begin{cases} \operatorname{Some}(\sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).checked_root(3).to_debug_string(), "None"); assert_eq!(Natural::from(1000u16).checked_root(3).to_debug_string(), "Some(10)"); assert_eq!(Natural::from(1001u16).checked_root(3).to_debug_string(), "None"); assert_eq!(Natural::from(100000000000u64).checked_root(5).to_debug_string(), "None"); assert_eq!(Natural::from(10000000000u64).checked_root(5).to_debug_string(), "Some(100)"); ``` #### type Output = Natural ### impl<'a> CheckedSqrt for &'a Natural #### fn checked_sqrt(self) -> Option<Natural> Returns the square root of a `Natural`, or `None` if it is not a perfect square. The `Natural` is taken by reference. $$ f(x) = \begin{cases} \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedSqrt; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(99u8)).checked_sqrt().to_debug_string(), "None"); assert_eq!((&Natural::from(100u8)).checked_sqrt().to_debug_string(), "Some(10)"); assert_eq!((&Natural::from(101u8)).checked_sqrt().to_debug_string(), "None"); assert_eq!((&Natural::from(1000000000u32)).checked_sqrt().to_debug_string(), "None"); assert_eq!( (&Natural::from(10000000000u64)).checked_sqrt().to_debug_string(), "Some(100000)" ); ``` #### type Output = Natural ### impl CheckedSqrt for Natural #### fn checked_sqrt(self) -> Option<Natural> Returns the square root of a `Natural`, or `None` if it is not a perfect square. The `Natural` is taken by value. $$ f(x) = \begin{cases} \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedSqrt; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).checked_sqrt().to_debug_string(), "None"); assert_eq!(Natural::from(100u8).checked_sqrt().to_debug_string(), "Some(10)"); assert_eq!(Natural::from(101u8).checked_sqrt().to_debug_string(), "None"); assert_eq!(Natural::from(1000000000u32).checked_sqrt().to_debug_string(), "None"); assert_eq!(Natural::from(10000000000u64).checked_sqrt().to_debug_string(), "Some(100000)"); ``` #### type Output = Natural ### impl<'a, 'b> CheckedSub<&'a Natural> for &'b Natural #### fn checked_sub(self, other: &'a Natural) -> Option<Natural> Subtracts a `Natural` by another `Natural`, taking both by reference and returning `None` if the result is negative. $$ f(x, y) = \begin{cases} \operatorname{Some}(x - y) & \text{if} \quad x \geq y, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSub, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).checked_sub(&Natural::from(123u32)).to_debug_string(), "None"); assert_eq!((&Natural::from(123u32)).checked_sub(&Natural::ZERO).to_debug_string(), "Some(123)"); assert_eq!((&Natural::from(456u32)).checked_sub(&Natural::from(123u32)).to_debug_string(), "Some(333)"); assert_eq!( (&(Natural::from(10u32).pow(12) * Natural::from(3u32))) .checked_sub(&Natural::from(10u32).pow(12)).to_debug_string(), "Some(2000000000000)" ); ``` #### type Output = Natural ### impl<'a> CheckedSub<&'a Natural> for Natural #### fn checked_sub(self, other: &'a Natural) -> Option<Natural> Subtracts a `Natural` by another `Natural`, taking the first by value and the second by reference and returning `None` if the result is negative. $$ f(x, y) = \begin{cases} \operatorname{Some}(x - y) & \text{if} \quad x \geq y, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSub, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.checked_sub(&Natural::from(123u32)).to_debug_string(), "None"); assert_eq!( Natural::from(123u32).checked_sub(&Natural::ZERO).to_debug_string(), "Some(123)" ); assert_eq!(Natural::from(456u32).checked_sub(&Natural::from(123u32)).to_debug_string(), "Some(333)"); assert_eq!( (Natural::from(10u32).pow(12) * Natural::from(3u32)) .checked_sub(&Natural::from(10u32).pow(12)).to_debug_string(), "Some(2000000000000)" ); ``` #### type Output = Natural ### impl<'a> CheckedSub<Natural> for &'a Natural #### fn checked_sub(self, other: Natural) -> Option<Natural> Subtracts a `Natural` by another `Natural`, taking the first by reference and the second by value and returning `None` if the result is negative. $$ f(x, y) = \begin{cases} \operatorname{Some}(x - y) & \text{if} \quad x \geq y, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSub, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).checked_sub(Natural::from(123u32)).to_debug_string(), "None"); assert_eq!((&Natural::from(123u32)).checked_sub(Natural::ZERO).to_debug_string(), "Some(123)"); assert_eq!((&Natural::from(456u32)).checked_sub(Natural::from(123u32)).to_debug_string(), "Some(333)"); assert_eq!( (&(Natural::from(10u32).pow(12) * Natural::from(3u32))) .checked_sub(Natural::from(10u32).pow(12)).to_debug_string(), "Some(2000000000000)" ); ``` #### type Output = Natural ### impl CheckedSub<Natural> for Natural #### fn checked_sub(self, other: Natural) -> Option<Natural> Subtracts a `Natural` by another `Natural`, taking both by value and returning `None` if the result is negative. $$ f(x, y) = \begin{cases} \operatorname{Some}(x - y) & \text{if} \quad x \geq y, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSub, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.checked_sub(Natural::from(123u32)).to_debug_string(), "None"); assert_eq!( Natural::from(123u32).checked_sub(Natural::ZERO).to_debug_string(), "Some(123)" ); assert_eq!( Natural::from(456u32).checked_sub(Natural::from(123u32)).to_debug_string(), "Some(333)" ); assert_eq!( (Natural::from(10u32).pow(12) * Natural::from(3u32)) .checked_sub(Natural::from(10u32).pow(12)).to_debug_string(), "Some(2000000000000)" ); ``` #### type Output = Natural ### impl<'a> CheckedSubMul<&'a Natural, Natural> for Natural #### fn checked_sub_mul(self, y: &'a Natural, z: Natural) -> Option<Natural> Subtracts a `Natural` by the product of two other `Natural`s, taking the first and third by value and the second by reference and returning `None` if the result is negative. $$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).checked_sub_mul(&Natural::from(3u32), Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( Natural::from(10u32).checked_sub_mul(&Natural::from(3u32), Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( Natural::from(10u32).pow(12) .checked_sub_mul(&Natural::from(0x10000u32), Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl<'a, 'b, 'c> CheckedSubMul<&'a Natural, &'b Natural> for &'c Natural #### fn checked_sub_mul(self, y: &'a Natural, z: &'b Natural) -> Option<Natural> Subtracts a `Natural` by the product of two other `Natural`s, taking all three by reference and returning `None` if the result is negative.
$$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(20u32)).checked_sub_mul(&Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( (&Natural::from(10u32)).checked_sub_mul(&Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( (&Natural::from(10u32).pow(12)) .checked_sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl<'a, 'b> CheckedSubMul<&'a Natural, &'b Natural> for Natural #### fn checked_sub_mul(self, y: &'a Natural, z: &'b Natural) -> Option<Natural> Subtracts a `Natural` by the product of two other `Natural`s, taking the first by value and the second and third by reference and returning `None` if the result is negative. $$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).checked_sub_mul(&Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( Natural::from(10u32).checked_sub_mul(&Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( Natural::from(10u32).pow(12) .checked_sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl<'a> CheckedSubMul<Natural, &'a Natural> for Natural #### fn checked_sub_mul(self, y: Natural, z: &'a Natural) -> Option<Natural> Subtracts a `Natural` by the product of two other `Natural`s, taking the first two by value and the third by reference and returning `None` if the result is negative. $$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`.
##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).checked_sub_mul(Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( Natural::from(10u32).checked_sub_mul(Natural::from(3u32), &Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( Natural::from(10u32).pow(12) .checked_sub_mul(Natural::from(0x10000u32), &Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl CheckedSubMul<Natural, Natural> for Natural #### fn checked_sub_mul(self, y: Natural, z: Natural) -> Option<Natural> Subtracts a `Natural` by the product of two other `Natural`s, taking all three by value and returning `None` if the result is negative. $$ f(x, y, z) = \begin{cases} \operatorname{Some}(x - yz) & \text{if} \quad x \geq yz, \\ \operatorname{None} & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{CheckedSubMul, Pow}; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).checked_sub_mul(Natural::from(3u32), Natural::from(4u32)) .to_debug_string(), "Some(8)" ); assert_eq!( Natural::from(10u32).checked_sub_mul(Natural::from(3u32), Natural::from(4u32)) .to_debug_string(), "None" ); assert_eq!( Natural::from(10u32).pow(12) .checked_sub_mul(Natural::from(0x10000u32), Natural::from(0x10000u32)) .to_debug_string(), "Some(995705032704)" ); ``` #### type Output = Natural ### impl Clone for Natural #### fn clone(&self) -> Natural Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl<'a> ConvertibleFrom<&'a Integer> for Natural #### fn convertible_from(value: &'a Integer) -> bool Determines whether an `Integer` can be converted to a `Natural` (when the `Integer` is non-negative). Takes the `Integer` by reference. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::convertible_from(&Integer::from(123)), true); assert_eq!(Natural::convertible_from(&Integer::from(-123)), false); assert_eq!(Natural::convertible_from(&Integer::from(10u32).pow(12)), true); assert_eq!(Natural::convertible_from(&-Integer::from(10u32).pow(12)), false); ``` ### impl<'a> ConvertibleFrom<&'a Natural> for f32 #### fn convertible_from(value: &'a Natural) -> bool Determines whether a `Natural` can be exactly converted to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for f64 #### fn convertible_from(value: &'a Natural) -> bool Determines whether a `Natural` can be exactly converted to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here.
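The two float impls above defer their examples to an external page. As a minimal inline sketch, not taken from the upstream docs but using only the trait path and signatures shown above (the test value $2^{24}+1$ is chosen because it exceeds `f32`'s 24-bit mantissa while fitting within `f64`'s 53-bit mantissa), exact convertibility can be checked like this:

```
use malachite_base::num::conversion::traits::ConvertibleFrom;
use malachite_nz::natural::Natural;

// Illustrative sketch: small values are exactly representable in both float types.
assert_eq!(f32::convertible_from(&Natural::from(123u32)), true);
assert_eq!(f64::convertible_from(&Natural::from(123u32)), true);

// 16777217 = 2^24 + 1 needs 25 significant bits, so it has no exact f32
// representation, but it fits easily within f64's 53-bit mantissa.
assert_eq!(f32::convertible_from(&Natural::from(16777217u32)), false);
assert_eq!(f64::convertible_from(&Natural::from(16777217u32)), true);
```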
### impl<'a> ConvertibleFrom<&'a Natural> for i128 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a signed primitive integer type that’s larger than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for i16 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a signed primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for i32 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a signed primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for i64 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a `SignedLimb` (the signed type whose width is the same as a limb’s). ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for i8 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a signed primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for isize #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to an `isize`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u128 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a primitive unsigned integer type that’s larger than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u16 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a primitive unsigned integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u32 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a primitive unsigned integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u64 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for u8 #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a value of a primitive unsigned integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Natural> for usize #### fn convertible_from(value: &Natural) -> bool Determines whether a `Natural` can be converted to a `usize`. 
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for Natural #### fn convertible_from(x: &Rational) -> bool Determines whether a `Rational` can be converted to a `Natural` (when the `Rational` is non-negative and an integer), taking the `Rational` by reference. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Natural::convertible_from(&Rational::from(123)), true); assert_eq!(Natural::convertible_from(&Rational::from(-123)), false); assert_eq!(Natural::convertible_from(&Rational::from_signeds(22, 7)), false); ``` ### impl ConvertibleFrom<Integer> for Natural #### fn convertible_from(value: Integer) -> bool Determines whether an `Integer` can be converted to a `Natural` (when the `Integer` is non-negative). Takes the `Integer` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::convertible_from(Integer::from(123)), true); assert_eq!(Natural::convertible_from(Integer::from(-123)), false); assert_eq!(Natural::convertible_from(Integer::from(10u32).pow(12)), true); assert_eq!(Natural::convertible_from(-Integer::from(10u32).pow(12)), false); ``` ### impl ConvertibleFrom<f32> for Natural #### fn convertible_from(value: f32) -> bool Determines whether a floating-point value can be exactly converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<f64> for Natural #### fn convertible_from(value: f64) -> bool Determines whether a floating-point value can be exactly converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i128> for Natural #### fn convertible_from(i: i128) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i16> for Natural #### fn convertible_from(i: i16) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i32> for Natural #### fn convertible_from(i: i32) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i64> for Natural #### fn convertible_from(i: i64) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<i8> for Natural #### fn convertible_from(i: i8) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<isize> for Natural #### fn convertible_from(i: isize) -> bool Determines whether a signed primitive integer can be converted to a `Natural`. 
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a, 'b> CoprimeWith<&'b Natural> for &'a Natural #### fn coprime_with(self, other: &'b Natural) -> bool Returns whether two `Natural`s are coprime; that is, whether they have no common factor other than 1. Both `Natural`s are taken by reference. Every `Natural` is coprime with 1. No `Natural` is coprime with 0, except 1. $f(x, y) = (\gcd(x, y) = 1)$. $f(x, y) = ((k,m,n \in \N \land x=km \land y=kn) \implies k=1)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CoprimeWith; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).coprime_with(Natural::from(5u32)), true); assert_eq!((&Natural::from(12u32)).coprime_with(Natural::from(90u32)), false); ``` ### impl<'a> CoprimeWith<&'a Natural> for Natural #### fn coprime_with(self, other: &'a Natural) -> bool Returns whether two `Natural`s are coprime; that is, whether they have no common factor other than 1. The first `Natural` is taken by value and the second by reference. Every `Natural` is coprime with 1. No `Natural` is coprime with 0, except 1. $f(x, y) = (\gcd(x, y) = 1)$. $f(x, y) = ((k,m,n \in \N \land x=km \land y=kn) \implies k=1)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CoprimeWith; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).coprime_with(&Natural::from(5u32)), true); assert_eq!(Natural::from(12u32).coprime_with(&Natural::from(90u32)), false); ``` ### impl<'a> CoprimeWith<Natural> for &'a Natural #### fn coprime_with(self, other: Natural) -> bool Returns whether two `Natural`s are coprime; that is, whether they have no common factor other than 1. The first `Natural` is taken by reference and the second by value. Every `Natural` is coprime with 1. No `Natural` is coprime with 0, except 1. $f(x, y) = (\gcd(x, y) = 1)$. $f(x, y) = ((k,m,n \in \N \land x=km \land y=kn) \implies k=1)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CoprimeWith; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).coprime_with(Natural::from(5u32)), true); assert_eq!((&Natural::from(12u32)).coprime_with(Natural::from(90u32)), false); ``` ### impl CoprimeWith<Natural> for Natural #### fn coprime_with(self, other: Natural) -> bool Returns whether two `Natural`s are coprime; that is, whether they have no common factor other than 1. Both `Natural`s are taken by value. Every `Natural` is coprime with 1. No `Natural` is coprime with 0, except 1. $f(x, y) = (\gcd(x, y) = 1)$. $f(x, y) = ((k,m,n \in \N \land x=km \land y=kn) \implies k=1)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::CoprimeWith; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).coprime_with(Natural::from(5u32)), true); assert_eq!(Natural::from(12u32).coprime_with(Natural::from(90u32)), false); ``` ### impl CountOnes for &Natural #### fn count_ones(self) -> u64 Counts the number of ones in the binary expansion of a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_base::num::logic::traits::CountOnes; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.count_ones(), 0); // 105 = 1101001b assert_eq!(Natural::from(105u32).count_ones(), 4); // 10^12 = 1110100011010100101001010001000000000000b assert_eq!(Natural::from(10u32).pow(12).count_ones(), 13); ``` ### impl Debug for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts a `Natural` to a `String`. This is the same as the `Display::fmt` implementation. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_debug_string(), "0"); assert_eq!(Natural::from(123u32).to_debug_string(), "123"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_debug_string(), "1000000000000" ); assert_eq!(format!("{:05?}", Natural::from(123u32)), "00123"); ``` ### impl Default for Natural #### fn default() -> Natural The default value of a `Natural`, 0. ### impl Digits<Natural> for Natural #### fn to_digits_asc(&self, base: &Natural) -> Vec<Natural, Global> Returns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::num::basic::traits::{Two, Zero}; use malachite_base::num::conversion::traits::Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.to_digits_asc(&Natural::from(6u32)).to_debug_string(), "[]"); assert_eq!(Natural::TWO.to_digits_asc(&Natural::from(6u32)).to_debug_string(), "[2]"); assert_eq!( Natural::from(123456u32).to_digits_asc(&Natural::from(3u32)).to_debug_string(), "[0, 1, 1, 0, 0, 1, 1, 2, 0, 0, 2]" ); ``` #### fn to_digits_desc(&self, base: &Natural) -> Vec<Natural, Global> Returns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x.
$$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::num::basic::traits::{Two, Zero}; use malachite_base::num::conversion::traits::Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.to_digits_desc(&Natural::from(6u32)).to_debug_string(), "[]"); assert_eq!(Natural::TWO.to_digits_desc(&Natural::from(6u32)).to_debug_string(), "[2]"); assert_eq!( Natural::from(123456u32).to_digits_desc(&Natural::from(3u32)).to_debug_string(), "[2, 0, 0, 2, 1, 1, 0, 0, 1, 1, 0]" ); ``` #### fn from_digits_asc<I>(base: &Natural, digits: I) -> Option<Natural>where I: Iterator<Item = Natural>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n, m) = O(nm (\log (nm))^2 \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `digits.count()`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::num::conversion::traits::Digits; use malachite_base::strings::ToDebugString; use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from_digits_asc( &Natural::from(64u32), vec_from_str::<Natural>("[0, 0, 0]").unwrap().into_iter() ).to_debug_string(), "Some(0)" ); assert_eq!( Natural::from_digits_asc( &Natural::from(3u32), vec_from_str::<Natural>("[0, 1, 1, 0, 0, 1, 1, 2, 0, 0, 2]").unwrap().into_iter() ).to_debug_string(), "Some(123456)" ); assert_eq!( Natural::from_digits_asc( &Natural::from(8u32), vec_from_str::<Natural>("[3, 7, 1]").unwrap().into_iter() ).to_debug_string(), "Some(123)" ); assert_eq!( Natural::from_digits_asc( &Natural::from(8u32), vec_from_str::<Natural>("[1, 10, 3]").unwrap().into_iter() ).to_debug_string(), "None" ); ``` #### fn from_digits_desc<I>(base: &Natural, digits: I) -> Option<Natural>where I: Iterator<Item = Natural>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n, m) = O(nm (\log (nm))^2 \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `digits.count()`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `base` is less than 2. 
##### Examples ``` use malachite_base::num::conversion::traits::Digits; use malachite_base::strings::ToDebugString; use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from_digits_desc( &Natural::from(64u32), vec_from_str::<Natural>("[0, 0, 0]").unwrap().into_iter() ).to_debug_string(), "Some(0)" ); assert_eq!( Natural::from_digits_desc( &Natural::from(3u32), vec_from_str::<Natural>("[2, 0, 0, 2, 1, 1, 0, 0, 1, 1, 0]").unwrap().into_iter() ).to_debug_string(), "Some(123456)" ); assert_eq!( Natural::from_digits_desc( &Natural::from(8u32), vec_from_str::<Natural>("[1, 7, 3]").unwrap().into_iter() ).to_debug_string(), "Some(123)" ); assert_eq!( Natural::from_digits_desc( &Natural::from(8u32), vec_from_str::<Natural>("[3, 10, 1]").unwrap().into_iter() ).to_debug_string(), "None" ); ``` ### impl Digits<u128> for Natural #### fn to_digits_asc(&self, base: &u128) -> Vec<u128, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u128) -> Vec<u128, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &u128, digits: I) -> Option<Natural>where I: Iterator<Item = u128>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u128, digits: I) -> Option<Natural>where I: Iterator<Item = u128>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<u16> for Natural #### fn to_digits_asc(&self, base: &u16) -> Vec<u16, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). 
If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u16) -> Vec<u16, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &u16, digits: I) -> Option<Natural>where I: Iterator<Item = u16>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u16, digits: I) -> Option<Natural>where I: Iterator<Item = u16>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<u32> for Natural #### fn to_digits_asc(&self, base: &u32) -> Vec<u32, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u32) -> Vec<u32, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. 
##### Examples See here. #### fn from_digits_asc<I>(base: &u32, digits: I) -> Option<Natural>where I: Iterator<Item = u32>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u32, digits: I) -> Option<Natural>where I: Iterator<Item = u32>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<u64> for Natural #### fn to_digits_asc(&self, base: &u64) -> Vec<u64, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u64) -> Vec<u64, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &u64, digits: I) -> Option<Natural>where I: Iterator<Item = u64>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u64, digits: I) -> Option<Natural>where I: Iterator<Item = u64>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. 
$$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<u8> for Natural #### fn to_digits_asc(&self, base: &u8) -> Vec<u8, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &u8) -> Vec<u8, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &u8, digits: I) -> Option<Natural>where I: Iterator<Item = u8>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &u8, digits: I) -> Option<Natural>where I: Iterator<Item = u8>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Digits<usize> for Natural #### fn to_digits_asc(&self, base: &usize) -> Vec<usize, GlobalReturns a `Vec` containing the digits of a `Natural` in ascending order (least- to most-significant). If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_i = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn to_digits_desc(&self, base: &usize) -> Vec<usize, GlobalReturns a `Vec` containing the digits of a `Natural` in descending order (most- to least-significant). 
If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, b) = (d_i)_ {i=0}^{k-1}$, where $0 \leq d_i < b$ for all $i$, $k=0$ or $d_{k-1} \neq 0$, and $$ \sum_{i=0}^{k-1}b^i d_{k-i-1} = x. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_asc<I>(base: &usize, digits: I) -> Option<Natural>where I: Iterator<Item = usize>, Converts an iterator of digits into a `Natural`. The input digits are in ascending order (least- to most-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^id_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. #### fn from_digits_desc<I>(base: &usize, digits: I) -> Option<Natural>where I: Iterator<Item = usize>, Converts an iterator of digits into a `Natural`. The input digits are in descending order (most- to least-significant). The function returns `None` if any of the digits are greater than or equal to the base. $$ f((d_i)_ {i=0}^{k-1}, b) = \sum_{i=0}^{k-1}b^{k-i-1}d_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `base` is less than 2. ##### Examples See here. ### impl Display for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Natural` to a `String`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_string(), "0"); assert_eq!(Natural::from(123u32).to_string(), "123"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_string(), "1000000000000" ); assert_eq!(format!("{:05}", Natural::from(123u32)), "00123"); ``` ### impl<'a, 'b> Div<&'b Natural> for &'a Natural #### fn div(self, other: &'b Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by reference. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(&Natural::from(23u32) / &Natural::from(10u32), 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( &Natural::from_str("1000000000000000000000000").unwrap() / &Natural::from_str("1234567890987").unwrap(), 810000006723u64 ); ``` #### type Output = Natural The resulting type after applying the `/` operator.### impl<'a> Div<&'a Natural> for Natural #### fn div(self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by value and the second by reference. 
The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32) / &Natural::from(10u32), 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() / &Natural::from_str("1234567890987").unwrap(), 810000006723u64 ); ``` #### type Output = Natural The resulting type after applying the `/` operator.### impl<'a> Div<Natural> for &'a Natural #### fn div(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by reference and the second by value. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(&Natural::from(23u32) / Natural::from(10u32), 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( &Natural::from_str("1000000000000000000000000").unwrap() / Natural::from_str("1234567890987").unwrap(), 810000006723u64 ); ``` #### type Output = Natural The resulting type after applying the `/` operator.### impl Div<Natural> for Natural #### fn div(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by value. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32) / Natural::from(10u32), 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() / Natural::from_str("1234567890987").unwrap(), 810000006723u64 ); ``` #### type Output = Natural The resulting type after applying the `/` operator.### impl<'a> DivAssign<&'a Natural> for Natural #### fn div_assign(&mut self, other: &'a Natural) Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); x /= &Natural::from(10u32); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x /= &Natural::from_str("1234567890987").unwrap(); assert_eq!(x, 810000006723u64); ``` ### impl DivAssign<Natural> for Natural #### fn div_assign(&mut self, other: Natural) Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value. The quotient is rounded towards negative infinity. The quotient and remainder (which is not computed) satisfy $x = qy + r$ and $0 \leq r < y$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); x /= Natural::from(10u32); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x /= Natural::from_str("1234567890987").unwrap(); assert_eq!(x, 810000006723u64); ``` ### impl<'a> DivAssignMod<&'a Natural> for Natural #### fn div_assign_mod(&mut self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and returning the remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero.
##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); assert_eq!(x.div_assign_mod(Natural::from(10u32)), 3); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!(x.div_assign_mod(Natural::from_str("1234567890987").unwrap()), 530068894399u64); assert_eq!(x, 810000006723u64); ``` #### type ModOutput = Natural ### impl<'a> DivAssignRem<&'a Natural> for Natural #### fn div_assign_rem(&mut self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and returning the remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. $$ For `Natural`s, `div_assign_rem` is equivalent to `div_assign_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignRem; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); assert_eq!(x.div_assign_rem(&Natural::from(10u32)), 3); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!( x.div_assign_rem(&Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); assert_eq!(x, 810000006723u64); ``` #### type RemOutput = Natural ### impl DivAssignRem<Natural> for Natural #### fn div_assign_rem(&mut self, other: Natural) -> Natural Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value and returning the remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor, $$ $$ x \gets \left \lfloor \frac{x}{y} \right \rfloor. $$ For `Natural`s, `div_assign_rem` is equivalent to `div_assign_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivAssignRem; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); assert_eq!(x.div_assign_rem(Natural::from(10u32)), 3); assert_eq!(x, 2); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); assert_eq!(x.div_assign_rem(Natural::from_str("1234567890987").unwrap()), 530068894399u64); assert_eq!(x, 810000006723u64); ``` #### type RemOutput = Natural ### impl<'a, 'b> DivExact<&'b Natural> for &'a Natural #### fn div_exact(self, other: &'b Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by reference. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. 
$$ If you are unsure whether the division will be exact, use `&self / &other` instead. If you’re unsure and you want to know, use `(&self).div_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `(&self).div_round(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 assert_eq!((&Natural::from(56088u32)).div_exact(&Natural::from(456u32)), 123); // 123456789000 * 987654321000 = 121932631112635269000000 assert_eq!( (&Natural::from_str("121932631112635269000000").unwrap()) .div_exact(&Natural::from_str("987654321000").unwrap()), 123456789000u64 ); ``` #### type Output = Natural ### impl<'a> DivExact<&'a Natural> for Natural #### fn div_exact(self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by value and the second by reference. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self / &other` instead. If you’re unsure and you want to know, use `self.div_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 assert_eq!(Natural::from(56088u32).div_exact(&Natural::from(456u32)), 123); // 123456789000 * 987654321000 = 121932631112635269000000 assert_eq!( Natural::from_str("121932631112635269000000").unwrap() .div_exact(&Natural::from_str("987654321000").unwrap()), 123456789000u64 ); ``` #### type Output = Natural ### impl<'a> DivExact<Natural> for &'a Natural #### fn div_exact(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by reference and the second by value. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `&self / other` instead. If you’re unsure and you want to know, use `self.div_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `(&self).div_round(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. 
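The paragraphs above recommend checking divisibility (or a remainder) before calling `div_exact` when exactness is uncertain. A minimal hedged sketch of that guard, using the by-reference variants; the helper name `quotient_prefer_exact` is ours, not part of the crate:

```
use malachite_base::num::arithmetic::traits::{DivExact, DivisibleBy};
use malachite_nz::natural::Natural;

// Hedged guard: only take the exact path once divisibility is confirmed,
// otherwise fall back to ordinary floor division.
fn quotient_prefer_exact(x: &Natural, y: &Natural) -> Natural {
    if x.divisible_by(y) {
        x.div_exact(y) // safe: no remainder exists
    } else {
        x / y          // floor division via the Div impl for references
    }
}

assert_eq!(
    quotient_prefer_exact(&Natural::from(56088u32), &Natural::from(456u32)),
    123
);
assert_eq!(quotient_prefer_exact(&Natural::from(10u32), &Natural::from(4u32)), 2);
```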
##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 assert_eq!((&Natural::from(56088u32)).div_exact(Natural::from(456u32)), 123); // 123456789000 * 987654321000 = 121932631112635269000000 assert_eq!( (&Natural::from_str("121932631112635269000000").unwrap()) .div_exact(Natural::from_str("987654321000").unwrap()), 123456789000u64 ); ``` #### type Output = Natural ### impl DivExact<Natural> for Natural #### fn div_exact(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by value. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ f(x, y) = \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self / other` instead. If you’re unsure and you want to know, use `self.div_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExact; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 assert_eq!(Natural::from(56088u32).div_exact(Natural::from(456u32)), 123); // 123456789000 * 987654321000 = 121932631112635269000000 assert_eq!( Natural::from_str("121932631112635269000000").unwrap() .div_exact(Natural::from_str("987654321000").unwrap()), 123456789000u64 ); ``` #### type Output = Natural ### impl<'a> DivExactAssign<&'a Natural> for Natural #### fn div_exact_assign(&mut self, other: &'a Natural) Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference. The first `Natural` must be exactly divisible by the second. If it isn’t, this function may panic or return a meaningless result. $$ x \gets \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self /= &other` instead. If you’re unsure and you want to know, use `self.div_assign_mod(&other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round_assign(&other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExactAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 let mut x = Natural::from(56088u32); x.div_exact_assign(&Natural::from(456u32)); assert_eq!(x, 123); // 123456789000 * 987654321000 = 121932631112635269000000 let mut x = Natural::from_str("121932631112635269000000").unwrap(); x.div_exact_assign(&Natural::from_str("987654321000").unwrap()); assert_eq!(x, 123456789000u64); ``` ### impl DivExactAssign<Natural> for Natural #### fn div_exact_assign(&mut self, other: Natural) Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value. The first `Natural` must be exactly divisible by the second. 
If it isn’t, this function may panic or return a meaningless result. $$ x \gets \frac{x}{y}. $$ If you are unsure whether the division will be exact, use `self /= other` instead. If you’re unsure and you want to know, use `self.div_assign_mod(other)` and check whether the remainder is zero. If you want a function that panics if the division is not exact, use `self.div_round_assign(other, RoundingMode::Exact)`. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. May panic if `self` is not divisible by `other`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivExactAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 123 * 456 = 56088 let mut x = Natural::from(56088u32); x.div_exact_assign(Natural::from(456u32)); assert_eq!(x, 123); // 123456789000 * 987654321000 = 121932631112635269000000 let mut x = Natural::from_str("121932631112635269000000").unwrap(); x.div_exact_assign(Natural::from_str("987654321000").unwrap()); assert_eq!(x, 123456789000u64); ``` ### impl<'a, 'b> DivMod<&'b Natural> for &'a Natural #### fn div_mod(self, other: &'b Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by reference and returning the quotient and remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( (&Natural::from(23u32)).div_mod(&Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .div_mod(&Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a> DivMod<&'a Natural> for Natural #### fn div_mod(self, other: &'a Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
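As a hedged complement to the examples that follow, the stated invariant $x = qy + r$ with $0 \leq r < y$ can be checked directly (the values are ours):

```
use malachite_base::num::arithmetic::traits::DivMod;
use malachite_nz::natural::Natural;

// Hedged check of the invariant x = q * y + r with 0 <= r < y.
let x = Natural::from(23u32);
let y = Natural::from(10u32);
let (q, r) = (&x).div_mod(&y);
assert!(r < y);
assert_eq!(&q * &y + &r, x);
```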
##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( Natural::from(23u32).div_mod(&Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .div_mod(&Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a> DivMod<Natural> for &'a Natural #### fn div_mod(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( (&Natural::from(23u32)).div_mod(Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .div_mod(Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl DivMod<Natural> for Natural #### fn div_mod(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by value and returning the quotient and remainder. The quotient is rounded towards negative infinity. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivMod; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32).div_mod(Natural::from(10u32)).to_debug_string(), "(2, 3)"); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .div_mod(Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type ModOutput = Natural ### impl<'a, 'b> DivRem<&'b Natural> for &'a Natural #### fn div_rem(self, other: &'b Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by reference and returning the quotient and remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. 
$$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ For `Natural`s, `div_rem` is equivalent to `div_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( (&Natural::from(23u32)).div_rem(&Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .div_rem(&Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type RemOutput = Natural ### impl<'a> DivRem<&'a Natural> for Natural #### fn div_rem(self, other: &'a Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning the quotient and remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ For `Natural`s, `div_rem` is equivalent to `div_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( Natural::from(23u32).div_rem(&Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .div_rem(&Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type RemOutput = Natural ### impl<'a> DivRem<Natural> for &'a Natural #### fn div_rem(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning the quotient and remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ For `Natural`s, `div_rem` is equivalent to `div_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
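The equivalence of `div_rem` and `div_mod` for `Natural`s noted above can be spot-checked with a short hedged sketch (using the by-reference variants; the values are ours):

```
use malachite_base::num::arithmetic::traits::{DivMod, DivRem};
use malachite_nz::natural::Natural;

// For Naturals there are no negative values, so truncating (div_rem) and
// flooring (div_mod) division coincide.
let x = Natural::from(23u32);
let y = Natural::from(10u32);
assert_eq!((&x).div_rem(&y), (&x).div_mod(&y));
```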
##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!( (&Natural::from(23u32)).div_rem(Natural::from(10u32)).to_debug_string(), "(2, 3)" ); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .div_rem(Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type RemOutput = Natural ### impl DivRem<Natural> for Natural #### fn div_rem(self, other: Natural) -> (Natural, Natural) Divides a `Natural` by another `Natural`, taking both by value and returning the quotient and remainder. The quotient is rounded towards zero. The quotient and remainder satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = \left ( \left \lfloor \frac{x}{y} \right \rfloor, \space x - y\left \lfloor \frac{x}{y} \right \rfloor \right ). $$ For `Natural`s, `div_rem` is equivalent to `div_mod`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32).div_rem(Natural::from(10u32)).to_debug_string(), "(2, 3)"); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .div_rem(Natural::from_str("1234567890987").unwrap()).to_debug_string(), "(810000006723, 530068894399)" ); ``` #### type DivOutput = Natural #### type RemOutput = Natural ### impl<'a, 'b> DivRound<&'b Natural> for &'a Natural #### fn div_round(self, other: &'b Natural, rm: RoundingMode) -> (Natural, Ordering) Divides a `Natural` by another `Natural`, taking both by reference and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
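One practical use of the returned `Ordering`, beyond the examples that follow, is detecting whether any rounding happened at all; a hedged sketch (the helper name is ours):

```
use malachite_base::num::arithmetic::traits::DivRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// Hedged sketch: Ordering::Equal means the floor quotient was already exact,
// so no separate remainder computation is needed.
fn floor_div_is_exact(x: &Natural, y: &Natural) -> (Natural, bool) {
    let (q, o) = x.div_round(y, RoundingMode::Floor);
    (q, o == Ordering::Equal)
}

assert_eq!(floor_div_is_exact(&Natural::from(10u32), &Natural::from(5u32)).1, true);
assert_eq!(floor_div_is_exact(&Natural::from(10u32), &Natural::from(4u32)).1, false);
```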
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(4u32), RoundingMode::Down), (Natural::from(2u32), Ordering::Less) ); assert_eq!( (&Natural::from(10u32).pow(12)).div_round(&Natural::from(3u32), RoundingMode::Floor), (Natural::from(333333333333u64), Ordering::Less) ); assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(4u32), RoundingMode::Up), (Natural::from(3u32), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32).pow(12)).div_round(&Natural::from(3u32), RoundingMode::Ceiling), (Natural::from(333333333334u64), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(5u32), RoundingMode::Exact), (Natural::from(2u32), Ordering::Equal) ); assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(3u32), RoundingMode::Nearest), (Natural::from(3u32), Ordering::Less) ); assert_eq!( (&Natural::from(20u32)).div_round(&Natural::from(3u32), RoundingMode::Nearest), (Natural::from(7u32), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32)).div_round(&Natural::from(4u32), RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less) ); assert_eq!( (&Natural::from(14u32)).div_round(&Natural::from(4u32), RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater) ); ``` #### type Output = Natural ### impl<'a> DivRound<&'a Natural> for Natural #### fn div_round(self, other: &'a Natural, rm: RoundingMode) -> (Natural, Ordering) Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!( Natural::from(10u32).div_round(&Natural::from(4u32), RoundingMode::Down), (Natural::from(2u32), Ordering::Less) ); assert_eq!( Natural::from(10u32).pow(12).div_round(&Natural::from(3u32), RoundingMode::Floor), (Natural::from(333333333333u64), Ordering::Less) ); assert_eq!( Natural::from(10u32).div_round(&Natural::from(4u32), RoundingMode::Up), (Natural::from(3u32), Ordering::Greater) ); assert_eq!( Natural::from(10u32).pow(12).div_round(&Natural::from(3u32), RoundingMode::Ceiling), (Natural::from(333333333334u64), Ordering::Greater) ); assert_eq!( Natural::from(10u32).div_round(&Natural::from(5u32), RoundingMode::Exact), (Natural::from(2u32), Ordering::Equal) ); assert_eq!( Natural::from(10u32).div_round(&Natural::from(3u32), RoundingMode::Nearest), (Natural::from(3u32), Ordering::Less) ); assert_eq!( Natural::from(20u32).div_round(&Natural::from(3u32), RoundingMode::Nearest), (Natural::from(7u32), Ordering::Greater) ); assert_eq!( Natural::from(10u32).div_round(&Natural::from(4u32), RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less) ); assert_eq!( Natural::from(14u32).div_round(&Natural::from(4u32), RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater) ); ``` #### type Output = Natural ### impl<'a> DivRound<Natural> for &'a Natural #### fn div_round(self, other: Natural, rm: RoundingMode) -> (Natural, Ordering) Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(4u32), RoundingMode::Down), (Natural::from(2u32), Ordering::Less) ); assert_eq!( (&Natural::from(10u32).pow(12)).div_round(Natural::from(3u32), RoundingMode::Floor), (Natural::from(333333333333u64), Ordering::Less) ); assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(4u32), RoundingMode::Up), (Natural::from(3u32), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32).pow(12)).div_round(Natural::from(3u32), RoundingMode::Ceiling), (Natural::from(333333333334u64), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(5u32), RoundingMode::Exact), (Natural::from(2u32), Ordering::Equal) ); assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(3u32), RoundingMode::Nearest), (Natural::from(3u32), Ordering::Less) ); assert_eq!( (&Natural::from(20u32)).div_round(Natural::from(3u32), RoundingMode::Nearest), (Natural::from(7u32), Ordering::Greater) ); assert_eq!( (&Natural::from(10u32)).div_round(Natural::from(4u32), RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less) ); assert_eq!( (&Natural::from(14u32)).div_round(Natural::from(4u32), RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater) ); ``` #### type Output = Natural ### impl DivRound<Natural> for Natural #### fn div_round(self, other: Natural, rm: RoundingMode) -> (Natural, Ordering) Divides a `Natural` by another `Natural`, taking both by value and rounding according to a specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. Let $q = \frac{x}{y}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $$ g(x, y, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor. $$ $$ g(x, y, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil. $$ $$ g(x, y, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $g(x, y, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, y, r) = (g(x, y, r), \operatorname{cmp}(g(x, y, r), q))$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRound, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!( Natural::from(10u32).div_round(Natural::from(4u32), RoundingMode::Down), (Natural::from(2u32), Ordering::Less) ); assert_eq!( Natural::from(10u32).pow(12).div_round(Natural::from(3u32), RoundingMode::Floor), (Natural::from(333333333333u64), Ordering::Less) ); assert_eq!( Natural::from(10u32).div_round(Natural::from(4u32), RoundingMode::Up), (Natural::from(3u32), Ordering::Greater) ); assert_eq!( Natural::from(10u32).pow(12).div_round(Natural::from(3u32), RoundingMode::Ceiling), (Natural::from(333333333334u64), Ordering::Greater) ); assert_eq!( Natural::from(10u32).div_round(Natural::from(5u32), RoundingMode::Exact), (Natural::from(2u32), Ordering::Equal) ); assert_eq!( Natural::from(10u32).div_round(Natural::from(3u32), RoundingMode::Nearest), (Natural::from(3u32), Ordering::Less) ); assert_eq!( Natural::from(20u32).div_round(Natural::from(3u32), RoundingMode::Nearest), (Natural::from(7u32), Ordering::Greater) ); assert_eq!( Natural::from(10u32).div_round(Natural::from(4u32), RoundingMode::Nearest), (Natural::from(2u32), Ordering::Less) ); assert_eq!( Natural::from(14u32).div_round(Natural::from(4u32), RoundingMode::Nearest), (Natural::from(4u32), Ordering::Greater) ); ``` #### type Output = Natural ### impl<'a> DivRoundAssign<&'a Natural> for Natural #### fn div_round_assign(&mut self, other: &'a Natural, rm: RoundingMode) -> Ordering Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and rounding according to a specified rounding mode. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `DivRound` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRoundAssign, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(&Natural::from(4u32), RoundingMode::Down), Ordering::Less); assert_eq!(n, 2); let mut n = Natural::from(10u32).pow(12); assert_eq!(n.div_round_assign(&Natural::from(3u32), RoundingMode::Floor), Ordering::Less); assert_eq!(n, 333333333333u64); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(&Natural::from(4u32), RoundingMode::Up), Ordering::Greater); assert_eq!(n, 3); let mut n = Natural::from(10u32).pow(12); assert_eq!( n.div_round_assign(&Natural::from(3u32), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 333333333334u64); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(&Natural::from(5u32), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, 2); let mut n = Natural::from(10u32); assert_eq!( n.div_round_assign(&Natural::from(3u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(n, 3); let mut n = Natural::from(20u32); assert_eq!( n.div_round_assign(&Natural::from(3u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 7); let mut n = Natural::from(10u32); assert_eq!( n.div_round_assign(&Natural::from(4u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(n, 2); let mut n = Natural::from(14u32); assert_eq!( n.div_round_assign(&Natural::from(4u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 4); ``` ### impl DivRoundAssign<Natural> for Natural #### fn div_round_assign(&mut self, other: Natural, rm: RoundingMode) -> Ordering Divides a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value and rounding according to a specified rounding mode. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. See the `DivRound` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero, or if `rm` is `Exact` but `self` is not divisible by `other`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivRoundAssign, Pow}; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(4u32), RoundingMode::Down), Ordering::Less); assert_eq!(n, 2); let mut n = Natural::from(10u32).pow(12); assert_eq!(n.div_round_assign(Natural::from(3u32), RoundingMode::Floor), Ordering::Less); assert_eq!(n, 333333333333u64); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(4u32), RoundingMode::Up), Ordering::Greater); assert_eq!(n, 3); let mut n = Natural::from(10u32).pow(12); assert_eq!( n.div_round_assign(Natural::from(3u32), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 333333333334u64); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(5u32), RoundingMode::Exact), Ordering::Equal); assert_eq!(n, 2); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(3u32), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 3); let mut n = Natural::from(20u32); assert_eq!( n.div_round_assign(Natural::from(3u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 7); let mut n = Natural::from(10u32); assert_eq!(n.div_round_assign(Natural::from(4u32), RoundingMode::Nearest), Ordering::Less); assert_eq!(n, 2); let mut n = Natural::from(14u32); assert_eq!( n.div_round_assign(Natural::from(4u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(n, 4); ``` ### impl<'a, 'b> DivisibleBy<&'b Natural> for &'a Natural #### fn divisible_by(self, other: &'b Natural) -> bool Returns whether a `Natural` is divisible by another `Natural`; in other words, whether the first is a multiple of the second. Both `Natural`s are taken by reference. This means that zero is divisible by any `Natural`, including zero; but a nonzero `Natural` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!((&Natural::ZERO).divisible_by(&Natural::ZERO), true); assert_eq!((&Natural::from(100u32)).divisible_by(&Natural::from(3u32)), false); assert_eq!((&Natural::from(102u32)).divisible_by(&Natural::from(3u32)), true); assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .divisible_by(&Natural::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleBy<&'a Natural> for Natural #### fn divisible_by(self, other: &'a Natural) -> bool Returns whether a `Natural` is divisible by another `Natural`; in other words, whether the first is a multiple of the second. The first `Natural` is taken by value and the second by reference. This means that zero is divisible by any `Natural`, including zero; but a nonzero `Natural` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
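As the paragraphs above suggest, `divisible_by` is the idiomatic way to test for multiples rather than comparing a remainder with zero; a hedged sketch (the values and variable names are ours):

```
use malachite_base::num::arithmetic::traits::DivisibleBy;
use malachite_nz::natural::Natural;

// Keep only the multiples of 3, asking divisible_by directly instead of
// computing `&n % &d == 0`.
let d = Natural::from(3u32);
let values = vec![Natural::from(99u32), Natural::from(100u32), Natural::from(102u32)];
let multiples: Vec<Natural> = values.iter().filter(|n| n.divisible_by(&d)).cloned().collect();
assert_eq!(multiples, vec![Natural::from(99u32), Natural::from(102u32)]);
```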
##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.divisible_by(&Natural::ZERO), true); assert_eq!(Natural::from(100u32).divisible_by(&Natural::from(3u32)), false); assert_eq!(Natural::from(102u32).divisible_by(&Natural::from(3u32)), true); assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .divisible_by(&Natural::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleBy<Natural> for &'a Natural #### fn divisible_by(self, other: Natural) -> bool Returns whether a `Natural` is divisible by another `Natural`; in other words, whether the first is a multiple of the second. The first `Natural` is taken by reference and the second by value. This means that zero is divisible by any `Natural`, including zero; but a nonzero `Natural` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!((&Natural::ZERO).divisible_by(Natural::ZERO), true); assert_eq!((&Natural::from(100u32)).divisible_by(Natural::from(3u32)), false); assert_eq!((&Natural::from(102u32)).divisible_by(Natural::from(3u32)), true); assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .divisible_by(Natural::from_str("1000000000000").unwrap()), true ); ``` ### impl DivisibleBy<Natural> for Natural #### fn divisible_by(self, other: Natural) -> bool Returns whether a `Natural` is divisible by another `Natural`; in other words, whether the first is a multiple of the second. Both `Natural`s are taken by value. This means that zero is divisible by any `Natural`, including zero; but a nonzero `Natural` is never divisible by zero. It’s more efficient to use this function than to compute the remainder and check whether it’s zero. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::DivisibleBy; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.divisible_by(Natural::ZERO), true); assert_eq!(Natural::from(100u32).divisible_by(Natural::from(3u32)), false); assert_eq!(Natural::from(102u32).divisible_by(Natural::from(3u32)), true); assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .divisible_by(Natural::from_str("1000000000000").unwrap()), true ); ``` ### impl<'a> DivisibleByPowerOf2 for &'a Natural #### fn divisible_by_power_of_2(self, pow: u64) -> bool Returns whether a `Natural` is divisible by $2^k$. $f(x, k) = (2^k|x)$. $f(x, k) = (\exists n \in \N : \ x = n2^k)$. If `self` is 0, the result is always true; otherwise, it is equivalent to `self.trailing_zeros().unwrap() >= pow`, but more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(pow, self.significant_bits())`.
##### Examples ``` use malachite_base::num::arithmetic::traits::{DivisibleByPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.divisible_by_power_of_2(100), true); assert_eq!(Natural::from(100u32).divisible_by_power_of_2(2), true); assert_eq!(Natural::from(100u32).divisible_by_power_of_2(3), false); assert_eq!(Natural::from(10u32).pow(12).divisible_by_power_of_2(12), true); assert_eq!(Natural::from(10u32).pow(12).divisible_by_power_of_2(13), false); ``` ### impl DoubleFactorial for Natural #### fn double_factorial(n: u64) -> Natural Computes the double factorial of a number. $$ f(n) = n!! = n \times (n - 2) \times (n - 4) \times \cdots \times i, $$ where $i$ is 1 if $n$ is odd and $2$ if $n$ is even. $n!! = O(\sqrt{n}(n/e)^{n/2})$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ ##### Examples ``` use malachite_base::num::arithmetic::traits::DoubleFactorial; use malachite_nz::natural::Natural; assert_eq!(Natural::double_factorial(0), 1); assert_eq!(Natural::double_factorial(1), 1); assert_eq!(Natural::double_factorial(2), 2); assert_eq!(Natural::double_factorial(3), 3); assert_eq!(Natural::double_factorial(4), 8); assert_eq!(Natural::double_factorial(5), 15); assert_eq!(Natural::double_factorial(6), 48); assert_eq!(Natural::double_factorial(7), 105); assert_eq!( Natural::double_factorial(99).to_string(), "2725392139750729502980713245400918633290796330545803413734328823443106201171875" ); assert_eq!( Natural::double_factorial(100).to_string(), "34243224702511976248246432895208185975118675053719198827915654463488000000000000" ); ``` This is equivalent to `mpz_2fac_ui` from `mpz/2fac_ui.c`, GMP 6.2.1. ### impl<'a, 'b, 'c> EqMod<&'b Integer, &'c Natural> for &'a Integer #### fn eq_mod(self, other: &'b Integer, m: &'c Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. All three numbers are taken by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(&Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'a Integer, &'b Natural> for Integer #### fn eq_mod(self, other: &'a Integer, m: &'b Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first number is taken by value and the second and third by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. 
$f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(&Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'b Integer, Natural> for &'a Integer #### fn eq_mod(self, other: &'b Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first two numbers are taken by reference and the third by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(&Integer::from(223), Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( &Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<&'a Integer, Natural> for Integer #### fn eq_mod(self, other: &'a Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first and third numbers are taken by value and the second by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
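The "difference is a multiple of the modulus" reading above can be checked directly. A hedged sketch using the all-by-reference variant; it assumes the crate's `UnsignedAbs` trait to bring the `Integer` difference back into `Natural`, and the values are ours:

```
use malachite_base::num::arithmetic::traits::{DivisibleBy, EqMod, UnsignedAbs};
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;

// x is equivalent to y mod m exactly when |x - y| is a multiple of m.
let x = Integer::from(123);
let y = Integer::from(223);
let m = Natural::from(100u32);
assert_eq!((&x).eq_mod(&y, &m), (&x - &y).unsigned_abs().divisible_by(&m));
```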
##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(&Integer::from(223), Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( &Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<&'a Natural, Natural> for Natural #### fn eq_mod(self, other: &'a Natural, m: Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first and third are taken by value and the second by reference. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(123u32).eq_mod(&Natural::from(223u32), Natural::from(100u32)), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( &Natural::from_str("2000000987654").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( &Natural::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b, 'c> EqMod<&'b Natural, &'c Natural> for &'a Natural #### fn eq_mod(self, other: &'b Natural, m: &'c Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. All three are taken by reference. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(123u32)).eq_mod(&Natural::from(223u32), &Natural::from(100u32)), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( &Natural::from_str("2000000987654").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( &Natural::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'a Natural, &'b Natural> for Natural #### fn eq_mod(self, other: &'a Natural, m: &'b Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first is taken by value and the second and third by reference. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. 
$f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(123u32).eq_mod(&Natural::from(223u32), &Natural::from(100u32)), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( &Natural::from_str("2000000987654").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( &Natural::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<&'b Natural, Natural> for &'a Natural #### fn eq_mod(self, other: &'b Natural, m: Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first and second are taken by reference and the third by value. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(123u32)).eq_mod(&Natural::from(223u32), Natural::from(100u32)), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( &Natural::from_str("2000000987654").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( &Natural::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<Integer, &'b Natural> for &'a Integer #### fn eq_mod(self, other: Integer, m: &'b Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first and third numbers are taken by reference and the second by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Integer, &'a Natural> for Integer #### fn eq_mod(self, other: Integer, m: &'a Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first two numbers are taken by value and the third by reference. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(Integer::from(223), &Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("-999999012346").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Integer, Natural> for &'a Integer #### fn eq_mod(self, other: Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. The first number is taken by reference and the second and third by value. Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Integer::from(123)).eq_mod(Integer::from(223), Natural::from(100u32)), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Integer::from_str("1000000987654").unwrap()).eq_mod( Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl EqMod<Integer, Natural> for Integer #### fn eq_mod(self, other: Integer, m: Natural) -> bool Returns whether an `Integer` is equivalent to another `Integer` modulo a `Natural`; that is, whether the difference between the two `Integer`s is a multiple of the `Natural`. All three numbers are taken by value. 
Two `Integer`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Integer::from(123).eq_mod(Integer::from(223), Natural::from(100u32)), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("-999999012346").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Integer::from_str("1000000987654").unwrap().eq_mod( Integer::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqMod<Natural, &'b Natural> for &'a Natural #### fn eq_mod(self, other: Natural, m: &'b Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first and third are taken by reference and the second by value. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(123u32)).eq_mod(Natural::from(223u32), &Natural::from(100u32)), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( Natural::from_str("2000000987654").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( Natural::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Natural, &'a Natural> for Natural #### fn eq_mod(self, other: Natural, m: &'a Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first two are taken by value and the third by reference. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
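For a nonzero modulus, `eq_mod` agrees with comparing the two remainders directly, it just avoids materializing them. A hedged sketch using the all-by-reference variant documented earlier (the values are ours):

```
use malachite_base::num::arithmetic::traits::EqMod;
use malachite_nz::natural::Natural;

// With m != 0, x is equivalent to y mod m iff x % m == y % m. (With m == 0,
// eq_mod means plain equality, and % would panic, so that case is excluded.)
let x = Natural::from(123u32);
let y = Natural::from(223u32);
let m = Natural::from(100u32);
assert_eq!((&x).eq_mod(&y, &m), &x % &m == &y % &m);
```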
##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(123u32).eq_mod(Natural::from(223u32), &Natural::from(100u32)), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( Natural::from_str("2000000987654").unwrap(), &Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( Natural::from_str("2000000987655").unwrap(), &Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a> EqMod<Natural, Natural> for &'a Natural #### fn eq_mod(self, other: Natural, m: Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. The first is taken by reference and the second and third by value. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(123u32)).eq_mod(Natural::from(223u32), Natural::from(100u32)), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( Natural::from_str("2000000987654").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( (&Natural::from_str("1000000987654").unwrap()).eq_mod( Natural::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl EqMod<Natural, Natural> for Natural #### fn eq_mod(self, other: Natural, m: Natural) -> bool Returns whether a `Natural` is equivalent to another `Natural` modulo a third; that is, whether the difference between the first two is a multiple of the third. All three are taken by value. Two `Natural`s are equal to each other modulo 0 iff they are equal. $f(x, y, m) = (x \equiv y \mod m)$. $f(x, y, m) = (\exists k \in \Z : x - y = km)$. ##### Worst-case complexity $T(n) = O(n \log n \log \log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::EqMod; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(123u32).eq_mod(Natural::from(223u32), Natural::from(100u32)), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( Natural::from_str("2000000987654").unwrap(), Natural::from_str("1000000000000").unwrap() ), true ); assert_eq!( Natural::from_str("1000000987654").unwrap().eq_mod( Natural::from_str("2000000987655").unwrap(), Natural::from_str("1000000000000").unwrap() ), false ); ``` ### impl<'a, 'b> EqModPowerOf2<&'b Natural> for &'a Natural #### fn eq_mod_power_of_2(self, other: &'b Natural, pow: u64) -> bool Returns whether one `Natural` is equal to another modulo $2^k$; that is, whether their $k$ least-significant bits are equal. $f(x, y, k) = (x \equiv y \mod 2^k)$. $f(x, y, k) = (\exists n \in \Z : x - y = n2^k)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(pow, self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::EqModPowerOf2; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).eq_mod_power_of_2(&Natural::from(256u32), 8), true); assert_eq!( (&Natural::from(0b1101u32)).eq_mod_power_of_2(&Natural::from(0b10101u32), 3), true ); assert_eq!( (&Natural::from(0b1101u32)).eq_mod_power_of_2(&Natural::from(0b10101u32), 4), false ); ``` ### impl<'a, 'b> ExtendedGcd<&'a Natural> for &'b Natural #### fn extended_gcd(self, other: &'a Natural) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Natural`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. Both `Natural`s are taken by reference. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(a, b)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).extended_gcd(&Natural::from(5u32)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( (&Natural::from(240u32)).extended_gcd(&Natural::from(46u32)).to_debug_string(), "(2, -9, 47)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> ExtendedGcd<&'a Natural> for Natural #### fn extended_gcd(self, other: &'a Natural) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Natural`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. The first `Natural` is taken by value and the second by reference. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(a, b)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(3u32).extended_gcd(&Natural::from(5u32)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( Natural::from(240u32).extended_gcd(&Natural::from(46u32)).to_debug_string(), "(2, -9, 47)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl<'a> ExtendedGcd<Natural> for &'a Natural #### fn extended_gcd(self, other: Natural) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Natural`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$.
The first `Natural` is taken by reference and the second by value. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(a, b)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).extended_gcd(Natural::from(5u32)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( (&Natural::from(240u32)).extended_gcd(Natural::from(46u32)).to_debug_string(), "(2, -9, 47)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl ExtendedGcd<Natural> for Natural #### fn extended_gcd(self, other: Natural) -> (Natural, Integer, Integer) Computes the GCD (greatest common divisor) of two `Natural`s $a$ and $b$, and also the coefficients $x$ and $y$ in Bézout’s identity $ax+by=\gcd(a,b)$. Both `Natural`s are taken by value. There are infinitely many $x$, $y$ that satisfy the identity for any $a$, $b$, so the full specification is more detailed: * $f(0, 0) = (0, 0, 0)$. * $f(a, ak) = (a, 1, 0)$ if $a > 0$ and $k \neq 1$. * $f(bk, b) = (b, 0, 1)$ if $b > 0$. * $f(a, b) = (g, x, y)$ if $a \neq 0$ and $b \neq 0$ and $\gcd(a, b) \neq \min(a, b)$, where $g = \gcd(a, b) \geq 0$, $ax + by = g$, $x \leq \lfloor b/g \rfloor$, and $y \leq \lfloor a/g \rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ExtendedGcd; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(3u32).extended_gcd(Natural::from(5u32)).to_debug_string(), "(1, 2, -1)" ); assert_eq!( Natural::from(240u32).extended_gcd(Natural::from(46u32)).to_debug_string(), "(2, -9, 47)" ); ``` #### type Gcd = Natural #### type Cofactor = Integer ### impl Factorial for Natural #### fn factorial(n: u64) -> Natural Computes the factorial of a number. $$ f(n) = n! = 1 \times 2 \times 3 \times \cdots \times n. $$ $n! = O(\sqrt{n}(n/e)^n)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `n`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Factorial; use malachite_nz::natural::Natural; assert_eq!(Natural::factorial(0), 1); assert_eq!(Natural::factorial(1), 1); assert_eq!(Natural::factorial(2), 2); assert_eq!(Natural::factorial(3), 6); assert_eq!(Natural::factorial(4), 24); assert_eq!(Natural::factorial(5), 120); assert_eq!( Natural::factorial(100).to_string(), "9332621544394415268169923885626670049071596826438162146859296389521759999322991560894\ 1463976156518286253697920827223758251185210916864000000000000000000000000" ); ``` This is equivalent to `mpz_fac_ui` from `mpz/fac_ui.c`, GMP 6.2.1.
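Both `extended_gcd` and `factorial` above can be sanity-checked with ordinary `Natural` and `Integer` arithmetic. The following is a minimal sketch of such a check; it assumes the usual `*` and `+` operators on `Integer` and `Natural` from `malachite_nz`, which are not listed in this section, and reuses the `From<&Natural> for Integer` conversion documented further below.

```
use malachite_base::num::arithmetic::traits::{ExtendedGcd, Factorial};
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;

// Bézout's identity for the (240, 46) example above: a*x + b*y == gcd(a, b).
let a = Natural::from(240u32);
let b = Natural::from(46u32);
let (g, x, y) = (&a).extended_gcd(&b);
// x and y are Integers, so lift a, b, and g into Integer before combining them.
assert_eq!(Integer::from(&a) * x + Integer::from(&b) * y, Integer::from(&g));

// The factorial satisfies the recurrence n! = n * (n - 1)!.
assert_eq!(Natural::factorial(5), Natural::from(5u32) * Natural::factorial(4));
```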
### impl<'a, 'b> FloorLogBase<&'b Natural> for &'a Natural #### fn floor_log_base(self, base: &Natural) -> u64 Returns the floor of the base-$b$ logarithm of a positive `Natural`. $f(x, b) = \lfloor\log_b x\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if `self` is 0 or `base` is less than 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorLogBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(80u32).floor_log_base(&Natural::from(3u32)), 3); assert_eq!(Natural::from(81u32).floor_log_base(&Natural::from(3u32)), 4); assert_eq!(Natural::from(82u32).floor_log_base(&Natural::from(3u32)), 4); assert_eq!(Natural::from(4294967296u64).floor_log_base(&Natural::from(10u32)), 9); ``` This is equivalent to `fmpz_flog` from `fmpz/flog.c`, FLINT 2.7.1. #### type Output = u64 ### impl<'a> FloorLogBase2 for &'a Natural #### fn floor_log_base_2(self) -> u64 Returns the floor of the base-2 logarithm of a positive `Natural`. $f(x) = \lfloor\log_2 x\rfloor$. ##### Worst-case complexity Constant time and additional memory. ##### Panics Panics if `self` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorLogBase2; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).floor_log_base_2(), 1); assert_eq!(Natural::from(100u32).floor_log_base_2(), 6); ``` #### type Output = u64 ### impl<'a> FloorLogBasePowerOf2<u64> for &'a Natural #### fn floor_log_base_power_of_2(self, pow: u64) -> u64 Returns the floor of the base-$2^k$ logarithm of a positive `Natural`. $f(x, k) = \lfloor\log_{2^k} x\rfloor$. ##### Worst-case complexity Constant time and additional memory. ##### Panics Panics if `self` is 0 or `pow` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorLogBasePowerOf2; use malachite_nz::natural::Natural; assert_eq!(Natural::from(100u32).floor_log_base_power_of_2(2), 3); assert_eq!(Natural::from(4294967296u64).floor_log_base_power_of_2(8), 4); ``` #### type Output = u64 ### impl<'a> FloorRoot<u64> for &'a Natural #### fn floor_root(self, exp: u64) -> Natural Returns the floor of the $n$th root of a `Natural`, taking the `Natural` by reference. $f(x, n) = \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRoot; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(999u16)).floor_root(3), 9); assert_eq!((&Natural::from(1000u16)).floor_root(3), 10); assert_eq!((&Natural::from(1001u16)).floor_root(3), 10); assert_eq!((&Natural::from(100000000000u64)).floor_root(5), 158); ``` #### type Output = Natural ### impl FloorRoot<u64> for Natural #### fn floor_root(self, exp: u64) -> Natural Returns the floor of the $n$th root of a `Natural`, taking the `Natural` by value. $f(x, n) = \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRoot; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).floor_root(3), 9); assert_eq!(Natural::from(1000u16).floor_root(3), 10); assert_eq!(Natural::from(1001u16).floor_root(3), 10); assert_eq!(Natural::from(100000000000u64).floor_root(5), 158); ``` #### type Output = Natural ### impl FloorRootAssign<u64> for Natural #### fn floor_root_assign(&mut self, exp: u64) Replaces a `Natural` with the floor of its $n$th root. $x \gets \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorRootAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(999u16); x.floor_root_assign(3); assert_eq!(x, 9); let mut x = Natural::from(1000u16); x.floor_root_assign(3); assert_eq!(x, 10); let mut x = Natural::from(1001u16); x.floor_root_assign(3); assert_eq!(x, 10); let mut x = Natural::from(100000000000u64); x.floor_root_assign(5); assert_eq!(x, 158); ``` ### impl<'a> FloorSqrt for &'a Natural #### fn floor_sqrt(self) -> Natural Returns the floor of the square root of a `Natural`, taking it by reference. $f(x) = \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrt; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(99u8)).floor_sqrt(), 9); assert_eq!((&Natural::from(100u8)).floor_sqrt(), 10); assert_eq!((&Natural::from(101u8)).floor_sqrt(), 10); assert_eq!((&Natural::from(1000000000u32)).floor_sqrt(), 31622); assert_eq!((&Natural::from(10000000000u64)).floor_sqrt(), 100000); ``` #### type Output = Natural ### impl FloorSqrt for Natural #### fn floor_sqrt(self) -> Natural Returns the floor of the square root of a `Natural`, taking it by value. $f(x) = \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrt; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).floor_sqrt(), 9); assert_eq!(Natural::from(100u8).floor_sqrt(), 10); assert_eq!(Natural::from(101u8).floor_sqrt(), 10); assert_eq!(Natural::from(1000000000u32).floor_sqrt(), 31622); assert_eq!(Natural::from(10000000000u64).floor_sqrt(), 100000); ``` #### type Output = Natural ### impl FloorSqrtAssign for Natural #### fn floor_sqrt_assign(&mut self) Replaces a `Natural` with the floor of its square root. $x \gets \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::arithmetic::traits::FloorSqrtAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(99u8); x.floor_sqrt_assign(); assert_eq!(x, 9); let mut x = Natural::from(100u8); x.floor_sqrt_assign(); assert_eq!(x, 10); let mut x = Natural::from(101u8); x.floor_sqrt_assign(); assert_eq!(x, 10); let mut x = Natural::from(1000000000u32); x.floor_sqrt_assign(); assert_eq!(x, 31622); let mut x = Natural::from(10000000000u64); x.floor_sqrt_assign(); assert_eq!(x, 100000); ``` ### impl<'a> From<&'a Natural> for Integer #### fn from(value: &'a Natural) -> Integer Converts a `Natural` to an `Integer`, taking the `Natural` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from(&Natural::from(123u32)), 123); assert_eq!(Integer::from(&Natural::from(10u32).pow(12)), 1000000000000u64); ``` ### impl<'a> From<&'a Natural> for Rational #### fn from(value: &'a Natural) -> Rational Converts a `Natural` to a `Rational`, taking the `Natural` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Rational::from(&Natural::from(123u32)), 123); ``` ### impl From<Natural> for Integer #### fn from(value: Natural) -> Integer Converts a `Natural` to an `Integer`, taking the `Natural` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Integer::from(Natural::from(123u32)), 123); assert_eq!(Integer::from(Natural::from(10u32).pow(12)), 1000000000000u64); ``` ### impl From<Natural> for Rational #### fn from(value: Natural) -> Rational Converts a `Natural` to a `Rational`, taking the `Natural` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Rational::from(Natural::from(123u32)), 123); ``` ### impl From<bool> for Natural #### fn from(b: bool) -> Natural Converts a `bool` to 0 or 1. This function is known as the Iverson bracket. $$ f(P) = [P] = \begin{cases} 1 & \text{if} \quad P, \\ 0 & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; assert_eq!(Natural::from(false), 0); assert_eq!(Natural::from(true), 1); ``` ### impl From<u128> for Natural #### fn from(u: u128) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is larger than a `Limb`’s. This implementation is general enough to also work for `usize`, regardless of whether it is equal in width to `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u16> for Natural #### fn from(u: u16) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is smaller than a `Limb`’s. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
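The `From` conversions in this section compose naturally. The sketch below combines a few of them; it assumes only the impls documented here plus `Clone` on `Natural`, which is not listed in this section.

```
use malachite_base::num::basic::traits::One;
use malachite_nz::integer::Integer;
use malachite_nz::natural::Natural;
use malachite_q::Rational;

// Primitive -> Natural, then Natural -> Integer (by reference) and Natural -> Rational (by value).
let n = Natural::from(123u32);
let i = Integer::from(&n);
let q = Rational::from(n.clone());
assert_eq!(i, 123);
assert_eq!(q, 123);

// From<bool> is the Iverson bracket: true maps to 1 and false to 0.
assert_eq!(Natural::from(true), Natural::ONE);
```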
### impl From<u32> for Natural #### fn from(u: u32) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is smaller than a `Limb`’s. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u64> for Natural #### fn from(u: u64) -> Natural Converts a `Limb` to a `Natural`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u8> for Natural #### fn from(u: u8) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is smaller than a `Limb`’s. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<usize> for Natural #### fn from(u: usize) -> Natural Converts an unsigned primitive integer to a `Natural`, where the integer’s width is larger than a `Limb`’s. This implementation is general enough to also work for `usize`, regardless of whether it is equal in width to `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl FromSciString for Natural #### fn from_sci_string_with_options( s: &str, options: FromSciStringOptions ) -> Option<Natural> Converts a string, possibly in scientific notation, to a `Natural`. Use `FromSciStringOptions` to specify the base (from 2 to 36, inclusive) and the rounding mode, in case rounding is necessary because the string represents a non-integer. If the base is greater than 10, the higher digits are represented by the letters `'a'` through `'z'` or `'A'` through `'Z'`; the case doesn’t matter and doesn’t need to be consistent. Exponents are allowed, and are indicated using the character `'e'` or `'E'`. If the base is 15 or greater, an ambiguity arises where it may not be clear whether `'e'` is a digit or an exponent indicator. To resolve this ambiguity, always use a `'+'` or `'-'` sign after the exponent indicator when the base is 15 or greater. The exponent itself is always parsed using base 10. Decimal (or other-base) points are allowed. These are most useful in conjunction with exponents, but they may be used on their own. If the string represents a non-integer, the rounding mode specified in `options` is used to round to an integer. If the string is unparseable, `None` is returned. `None` is also returned if the rounding mode in options is `Exact`, but rounding is necessary. ##### Worst-case complexity $T(n, m) = O(m^n n \log m (\log n + \log\log m))$ $M(n, m) = O(m^n n \log m)$ where $T$ is time, $M$ is additional memory, $n$ is `s.len()`, and $m$ is `options.base`.
##### Examples ``` use malachite_base::num::conversion::string::options::FromSciStringOptions; use malachite_base::num::conversion::traits::FromSciString; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; assert_eq!(Natural::from_sci_string("123").unwrap(), 123); assert_eq!(Natural::from_sci_string("123.5").unwrap(), 124); assert_eq!(Natural::from_sci_string("-123.5"), None); assert_eq!(Natural::from_sci_string("1.23e10").unwrap(), 12300000000u64); let mut options = FromSciStringOptions::default(); assert_eq!(Natural::from_sci_string_with_options("123.5", options).unwrap(), 124); options.set_rounding_mode(RoundingMode::Floor); assert_eq!(Natural::from_sci_string_with_options("123.5", options).unwrap(), 123); options = FromSciStringOptions::default(); options.set_base(16); assert_eq!(Natural::from_sci_string_with_options("ff", options).unwrap(), 255); options = FromSciStringOptions::default(); options.set_base(36); assert_eq!(Natural::from_sci_string_with_options("1e5", options).unwrap(), 1805); assert_eq!(Natural::from_sci_string_with_options("1e+5", options).unwrap(), 60466176); assert_eq!(Natural::from_sci_string_with_options("1e-5", options).unwrap(), 0); ``` #### fn from_sci_string(s: &str) -> Option<Self> Converts a `&str`, possibly in scientific notation, to a number, using the default `FromSciStringOptions`. ### impl FromStr for Natural #### fn from_str(s: &str) -> Result<Natural, ()> Converts a string to a `Natural`. If the string does not represent a valid `Natural`, an `Err` is returned. To be valid, the string must be nonempty and only contain the `char`s `'0'` through `'9'`. Leading zeros are allowed. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::from_str("123456").unwrap(), 123456); assert_eq!(Natural::from_str("00123456").unwrap(), 123456); assert_eq!(Natural::from_str("0").unwrap(), 0); assert!(Natural::from_str("").is_err()); assert!(Natural::from_str("a").is_err()); assert!(Natural::from_str("-5").is_err()); ``` #### type Err = () The associated error which can be returned from parsing. ### impl FromStringBase for Natural #### fn from_string_base(base: u8, s: &str) -> Option<Natural> Converts a string, in a specified base, to a `Natural`. If the string does not represent a valid `Natural`, `None` is returned. To be valid, the string must be nonempty and only contain the `char`s `'0'` through `'9'`, `'a'` through `'z'`, and `'A'` through `'Z'`; and only characters that represent digits smaller than the base are allowed. Leading zeros are always allowed. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Panics Panics if `base` is less than 2 or greater than 36.
##### Examples ``` use malachite_base::num::conversion::traits::{Digits, FromStringBase}; use malachite_nz::natural::Natural; assert_eq!(Natural::from_string_base(10, "123456").unwrap(), 123456); assert_eq!(Natural::from_string_base(10, "00123456").unwrap(), 123456); assert_eq!(Natural::from_string_base(16, "0").unwrap(), 0); assert_eq!(Natural::from_string_base(16, "deadbeef").unwrap(), 3735928559u32); assert_eq!(Natural::from_string_base(16, "deAdBeEf").unwrap(), 3735928559u32); assert!(Natural::from_string_base(10, "").is_none()); assert!(Natural::from_string_base(10, "a").is_none()); assert!(Natural::from_string_base(10, "-5").is_none()); assert!(Natural::from_string_base(2, "2").is_none()); ``` ### impl<'a, 'b> Gcd<&'a Natural> for &'b Natural #### fn gcd(self, other: &'a Natural) -> Natural Computes the GCD (greatest common divisor) of two `Natural`s, taking both by reference. The GCD of 0 and $n$, for any $n$, is $n$. In particular, $\gcd(0, 0) = 0$, which makes sense if we interpret “greatest” to mean “greatest by the divisibility order”. $$ f(x, y) = \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Gcd; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).gcd(&Natural::from(5u32)), 1); assert_eq!((&Natural::from(12u32)).gcd(&Natural::from(90u32)), 6); ``` #### type Output = Natural ### impl<'a> Gcd<&'a Natural> for Natural #### fn gcd(self, other: &'a Natural) -> Natural Computes the GCD (greatest common divisor) of two `Natural`s, taking the first by value and the second by reference. The GCD of 0 and $n$, for any $n$, is $n$. In particular, $\gcd(0, 0) = 0$, which makes sense if we interpret “greatest” to mean “greatest by the divisibility order”. $$ f(x, y) = \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Gcd; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).gcd(&Natural::from(5u32)), 1); assert_eq!(Natural::from(12u32).gcd(&Natural::from(90u32)), 6); ``` #### type Output = Natural ### impl<'a> Gcd<Natural> for &'a Natural #### fn gcd(self, other: Natural) -> Natural Computes the GCD (greatest common divisor) of two `Natural`s, taking the first by reference and the second by value. The GCD of 0 and $n$, for any $n$, is $n$. In particular, $\gcd(0, 0) = 0$, which makes sense if we interpret “greatest” to mean “greatest by the divisibility order”. $$ f(x, y) = \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Gcd; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).gcd(Natural::from(5u32)), 1); assert_eq!((&Natural::from(12u32)).gcd(Natural::from(90u32)), 6); ``` #### type Output = Natural ### impl Gcd<Natural> for Natural #### fn gcd(self, other: Natural) -> Natural Computes the GCD (greatest common divisor) of two `Natural`s, taking both by value. The GCD of 0 and $n$, for any $n$, is $n$.
In particular, $\gcd(0, 0) = 0$, which makes sense if we interpret “greatest” to mean “greatest by the divisibility order”. $$ f(x, y) = \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Gcd; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).gcd(Natural::from(5u32)), 1); assert_eq!(Natural::from(12u32).gcd(Natural::from(90u32)), 6); ``` #### type Output = Natural ### impl<'a> GcdAssign<&'a Natural> for Natural #### fn gcd_assign(&mut self, other: &'a Natural) Replaces a `Natural` by its GCD (greatest common divisor) with another `Natural`, taking the `Natural` on the right-hand side by reference. $$ x \gets \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::GcdAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.gcd_assign(&Natural::from(5u32)); assert_eq!(x, 1); let mut x = Natural::from(12u32); x.gcd_assign(&Natural::from(90u32)); assert_eq!(x, 6); ``` ### impl GcdAssign<Natural> for Natural #### fn gcd_assign(&mut self, other: Natural) Replaces a `Natural` by its GCD (greatest common divisor) with another `Natural`, taking the `Natural` on the right-hand side by value. $$ x \gets \gcd(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::GcdAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.gcd_assign(Natural::from(5u32)); assert_eq!(x, 1); let mut x = Natural::from(12u32); x.gcd_assign(Natural::from(90u32)); assert_eq!(x, 6); ``` ### impl<'a, 'b> HammingDistance<&'a Natural> for &'b Natural #### fn hamming_distance(self, other: &'a Natural) -> u64 Determines the Hamming distance between two `Natural`s. Both `Natural`s have infinitely many implicit leading zeros. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_base::num::logic::traits::HammingDistance; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32).hamming_distance(&Natural::from(123u32)), 0); // 105 = 1101001b, 123 = 1111011b assert_eq!(Natural::from(105u32).hamming_distance(&Natural::from(123u32)), 2); let n = Natural::ONE << 100u32; assert_eq!(n.hamming_distance(&(&n - Natural::ONE)), 101); ``` ### impl Hash for Natural #### fn hash<__H>(&self, state: &mut __H) where __H: Hasher, Feeds this value into the given `Hasher`. #### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. ### impl<'a> IntegerMantissaAndExponent<Natural, u64, Natural> for &'a Natural #### fn integer_mantissa_and_exponent(self) -> (Natural, u64) Returns a `Natural`’s integer mantissa and exponent. When $x$ is nonzero, we can write $x = 2^{e_i}m_i$, where $e_i$ is an integer and $m_i$ is an odd integer.
$$ f(x) = (\frac{|x|}{2^{e_i}}, e_i), $$ where $e_i$ is the unique integer such that $x/2^{e_i}$ is an odd integer. The inverse operation is `from_integer_mantissa_and_exponent`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::IntegerMantissaAndExponent; use malachite_nz::natural::Natural; assert_eq!( Natural::from(123u32).integer_mantissa_and_exponent(), (Natural::from(123u32), 0) ); assert_eq!( Natural::from(100u32).integer_mantissa_and_exponent(), (Natural::from(25u32), 2) ); ``` #### fn integer_mantissa(self) -> Natural Returns a `Natural`’s integer mantissa. When $x$ is nonzero, we can write $x = 2^{e_i}m_i$, where $e_i$ is an integer and $m_i$ is an odd integer. $$ f(x) = \frac{|x|}{2^{e_i}}, $$ where $e_i$ is the unique integer such that $x/2^{e_i}$ is an odd integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::IntegerMantissaAndExponent; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32).integer_mantissa(), 123); assert_eq!(Natural::from(100u32).integer_mantissa(), 25); ``` #### fn integer_exponent(self) -> u64 Returns a `Natural`’s integer exponent. When $x$ is nonzero, we can write $x = 2^{e_i}m_i$, where $e_i$ is an integer and $m_i$ is an odd integer. $$ f(x) = e_i, $$ where $e_i$ is the unique integer such that $x/2^{e_i}$ is an odd integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::IntegerMantissaAndExponent; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32).integer_exponent(), 0); assert_eq!(Natural::from(100u32).integer_exponent(), 2); ``` #### fn from_integer_mantissa_and_exponent( integer_mantissa: Natural, integer_exponent: u64 ) -> Option<Natural> Constructs a `Natural` from its integer mantissa and exponent. When $x$ is nonzero, we can write $x = 2^{e_i}m_i$, where $e_i$ is an integer and $m_i$ is an odd integer. $$ f(m_i, e_i) = 2^{e_i}m_i. $$ The input does not have to be reduced; that is, the mantissa does not have to be odd. The result is an `Option`, but for this trait implementation the result is always `Some`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `integer_mantissa.significant_bits() + integer_exponent`. ##### Examples ``` use malachite_base::num::conversion::traits::IntegerMantissaAndExponent; use malachite_nz::natural::Natural; let n = <&Natural as IntegerMantissaAndExponent<_, _, _>> ::from_integer_mantissa_and_exponent(Natural::from(123u32), 0).unwrap(); assert_eq!(n, 123); let n = <&Natural as IntegerMantissaAndExponent<_, _, _>> ::from_integer_mantissa_and_exponent(Natural::from(25u32), 2).unwrap(); assert_eq!(n, 100); ``` ### impl<'a> IsInteger for &'a Natural #### fn is_integer(self) -> bool Determines whether a `Natural` is an integer. It always returns `true`. $f(x) = \textrm{true}$. ##### Worst-case complexity Constant time and additional memory.
##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_base::num::conversion::traits::IsInteger; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.is_integer(), true); assert_eq!(Natural::ONE.is_integer(), true); assert_eq!(Natural::from(100u32).is_integer(), true); ``` ### impl IsPowerOf2 for Natural #### fn is_power_of_2(&self) -> bool Determines whether a `Natural` is an integer power of 2. $f(x) = (\exists n \in \Z : 2^n = x)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{IsPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.is_power_of_2(), false); assert_eq!(Natural::from(123u32).is_power_of_2(), false); assert_eq!(Natural::from(0x80u32).is_power_of_2(), true); assert_eq!(Natural::from(10u32).pow(12).is_power_of_2(), false); assert_eq!(Natural::from_str("1099511627776").unwrap().is_power_of_2(), true); ``` ### impl<'a, 'b> JacobiSymbol<&'a Natural> for &'b Natural #### fn jacobi_symbol(self, other: &'a Natural) -> i8 Computes the Jacobi symbol of two `Natural`s, taking both by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).jacobi_symbol(&Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).jacobi_symbol(&Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).jacobi_symbol(&Natural::from(5u32)), 1); assert_eq!((&Natural::from(11u32)).jacobi_symbol(&Natural::from(9u32)), 1); ``` ### impl<'a> JacobiSymbol<&'a Natural> for Natural #### fn jacobi_symbol(self, other: &'a Natural) -> i8 Computes the Jacobi symbol of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).jacobi_symbol(&Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).jacobi_symbol(&Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).jacobi_symbol(&Natural::from(5u32)), 1); assert_eq!(Natural::from(11u32).jacobi_symbol(&Natural::from(9u32)), 1); ``` ### impl<'a> JacobiSymbol<Natural> for &'a Natural #### fn jacobi_symbol(self, other: Natural) -> i8 Computes the Jacobi symbol of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. 
##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).jacobi_symbol(Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).jacobi_symbol(Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).jacobi_symbol(Natural::from(5u32)), 1); assert_eq!((&Natural::from(11u32)).jacobi_symbol(Natural::from(9u32)), 1); ``` ### impl JacobiSymbol<Natural> for Natural #### fn jacobi_symbol(self, other: Natural) -> i8 Computes the Jacobi symbol of two `Natural`s, taking both by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::JacobiSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).jacobi_symbol(Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).jacobi_symbol(Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).jacobi_symbol(Natural::from(5u32)), 1); assert_eq!(Natural::from(11u32).jacobi_symbol(Natural::from(9u32)), 1); ``` ### impl<'a, 'b> KroneckerSymbol<&'a Natural> for &'b Natural #### fn kronecker_symbol(self, other: &'a Natural) -> i8 Computes the Kronecker symbol of two `Natural`s, taking both by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).kronecker_symbol(&Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).kronecker_symbol(&Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(&Natural::from(5u32)), 1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(&Natural::from(9u32)), 1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(&Natural::from(8u32)), -1); ``` ### impl<'a> KroneckerSymbol<&'a Natural> for Natural #### fn kronecker_symbol(self, other: &'a Natural) -> i8 Computes the Kronecker symbol of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).kronecker_symbol(&Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).kronecker_symbol(&Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).kronecker_symbol(&Natural::from(5u32)), 1); assert_eq!(Natural::from(11u32).kronecker_symbol(&Natural::from(9u32)), 1); assert_eq!(Natural::from(11u32).kronecker_symbol(&Natural::from(8u32)), -1); ``` ### impl<'a> KroneckerSymbol<Natural> for &'a Natural #### fn kronecker_symbol(self, other: Natural) -> i8 Computes the Kronecker symbol of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = \left ( \frac{x}{y} \right ).
$$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).kronecker_symbol(Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).kronecker_symbol(Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(Natural::from(5u32)), 1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(Natural::from(9u32)), 1); assert_eq!((&Natural::from(11u32)).kronecker_symbol(Natural::from(8u32)), -1); ``` ### impl KroneckerSymbol<Natural> for Natural #### fn kronecker_symbol(self, other: Natural) -> i8 Computes the Kronecker symbol of two `Natural`s, taking both by value. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::KroneckerSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).kronecker_symbol(Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).kronecker_symbol(Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).kronecker_symbol(Natural::from(5u32)), 1); assert_eq!(Natural::from(11u32).kronecker_symbol(Natural::from(9u32)), 1); assert_eq!(Natural::from(11u32).kronecker_symbol(Natural::from(8u32)), -1); ``` ### impl<'a, 'b> Lcm<&'a Natural> for &'b Natural #### fn lcm(self, other: &'a Natural) -> Natural Computes the LCM (least common multiple) of two `Natural`s, taking both by reference. $$ f(x, y) = \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Lcm; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).lcm(&Natural::from(5u32)), 15); assert_eq!((&Natural::from(12u32)).lcm(&Natural::from(90u32)), 180); ``` #### type Output = Natural ### impl<'a> Lcm<&'a Natural> for Natural #### fn lcm(self, other: &'a Natural) -> Natural Computes the LCM (least common multiple) of two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Lcm; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).lcm(&Natural::from(5u32)), 15); assert_eq!(Natural::from(12u32).lcm(&Natural::from(90u32)), 180); ``` #### type Output = Natural ### impl<'a> Lcm<Natural> for &'a Natural #### fn lcm(self, other: Natural) -> Natural Computes the LCM (least common multiple) of two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Lcm; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).lcm(Natural::from(5u32)), 15); assert_eq!((&Natural::from(12u32)).lcm(Natural::from(90u32)), 180); ``` #### type Output = Natural ### impl Lcm<Natural> for Natural #### fn lcm(self, other: Natural) -> Natural Computes the LCM (least common multiple) of two `Natural`s, taking both by value. $$ f(x, y) = \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Lcm; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).lcm(Natural::from(5u32)), 15); assert_eq!(Natural::from(12u32).lcm(Natural::from(90u32)), 180); ``` #### type Output = Natural ### impl<'a> LcmAssign<&'a Natural> for Natural #### fn lcm_assign(&mut self, other: &'a Natural) Replaces a `Natural` by its LCM (least common multiple) with another `Natural`, taking the `Natural` on the right-hand side by reference. $$ x \gets \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::LcmAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.lcm_assign(&Natural::from(5u32)); assert_eq!(x, 15); let mut x = Natural::from(12u32); x.lcm_assign(&Natural::from(90u32)); assert_eq!(x, 180); ``` ### impl LcmAssign<Natural> for Natural #### fn lcm_assign(&mut self, other: Natural) Replaces a `Natural` by its LCM (least common multiple) with another `Natural`, taking the `Natural` on the right-hand side by value. $$ x \gets \operatorname{lcm}(x, y). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::LcmAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.lcm_assign(Natural::from(5u32)); assert_eq!(x, 15); let mut x = Natural::from(12u32); x.lcm_assign(Natural::from(90u32)); assert_eq!(x, 180); ``` ### impl<'a, 'b> LegendreSymbol<&'a Natural> for &'b Natural #### fn legendre_symbol(self, other: &'a Natural) -> i8 Computes the Legendre symbol of two `Natural`s, taking both by reference. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. 
##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).legendre_symbol(&Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).legendre_symbol(&Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).legendre_symbol(&Natural::from(5u32)), 1); ``` ### impl<'a> LegendreSymbol<&'a Natural> for Natural #### fn legendre_symbol(self, other: &'a Natural) -> i8 Computes the Legendre symbol of two `Natural`s, taking the first by value and the second by reference. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).legendre_symbol(&Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).legendre_symbol(&Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).legendre_symbol(&Natural::from(5u32)), 1); ``` ### impl<'a> LegendreSymbol<Natural> for &'a Natural #### fn legendre_symbol(self, other: Natural) -> i8 Computes the Legendre symbol of two `Natural`s, taking the first by reference and the second by value. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(10u32)).legendre_symbol(Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).legendre_symbol(Natural::from(5u32)), -1); assert_eq!((&Natural::from(11u32)).legendre_symbol(Natural::from(5u32)), 1); ``` ### impl LegendreSymbol<Natural> for Natural #### fn legendre_symbol(self, other: Natural) -> i8 Computes the Legendre symbol of two `Natural`s, taking both by value. This implementation is identical to that of `JacobiSymbol`, since there is no computational benefit to requiring that the denominator be prime. $$ f(x, y) = \left ( \frac{x}{y} \right ). $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if `other` is even. ##### Examples ``` use malachite_base::num::arithmetic::traits::LegendreSymbol; use malachite_nz::natural::Natural; assert_eq!(Natural::from(10u32).legendre_symbol(Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).legendre_symbol(Natural::from(5u32)), -1); assert_eq!(Natural::from(11u32).legendre_symbol(Natural::from(5u32)), 1); ``` ### impl LowMask for Natural #### fn low_mask(bits: u64) -> Natural Returns a `Natural` whose least significant $b$ bits are `true` and whose other bits are `false`. $f(b) = 2^b - 1$.
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `bits`. ##### Examples ``` use malachite_base::num::logic::traits::LowMask; use malachite_nz::natural::Natural; assert_eq!(Natural::low_mask(0), 0); assert_eq!(Natural::low_mask(3), 7); assert_eq!(Natural::low_mask(100).to_string(), "1267650600228229401496703205375"); ``` ### impl LowerHex for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Natural` to a hexadecimal `String` using lowercase characters. Using the `#` format flag prepends `"0x"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToLowerHexString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_lower_hex_string(), "0"); assert_eq!(Natural::from(123u32).to_lower_hex_string(), "7b"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_lower_hex_string(), "e8d4a51000" ); assert_eq!(format!("{:07x}", Natural::from(123u32)), "000007b"); assert_eq!(format!("{:#x}", Natural::ZERO), "0x0"); assert_eq!(format!("{:#x}", Natural::from(123u32)), "0x7b"); assert_eq!( format!("{:#x}", Natural::from_str("1000000000000").unwrap()), "0xe8d4a51000" ); assert_eq!(format!("{:#07x}", Natural::from(123u32)), "0x0007b"); ``` ### impl Min for Natural The minimum value of a `Natural`, 0. #### const MIN: Natural = Natural::ZERO The minimum value of `Self`.### impl<'a, 'b> Mod<&'b Natural> for &'a Natural #### fn mod_op(self, other: &'b Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!((&Natural::from(23u32)).mod_op(&Natural::from(10u32)), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .mod_op(&Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); ``` #### type Output = Natural ### impl<'a> Mod<&'a Natural> for Natural #### fn mod_op(self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32).mod_op(&Natural::from(10u32)), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .mod_op(&Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); ``` #### type Output = Natural ### impl<'a> Mod<Natural> for &'a Natural #### fn mod_op(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!((&Natural::from(23u32)).mod_op(Natural::from(10u32)), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .mod_op(Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); ``` #### type Output = Natural ### impl Mod<Natural> for Natural #### fn mod_op(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ This function is called `mod_op` rather than `mod` because `mod` is a Rust keyword. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::Mod; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32).mod_op(Natural::from(10u32)), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .mod_op(Natural::from_str("1234567890987").unwrap()), 530068894399u64 ); ``` #### type Output = Natural ### impl<'a> ModAdd<&'a Natural, Natural> for Natural #### fn mod_add(self, other: &'a Natural, m: Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by value and the second by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_add(&Natural::from(3u32), Natural::from(5u32)), 3); assert_eq!(Natural::from(7u32).mod_add(&Natural::from(5u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `m` are taken by value and `c` is taken by reference. #### type Output = Natural ### impl<'a, 'b, 'c> ModAdd<&'b Natural, &'c Natural> for &'a Natural #### fn mod_add(self, other: &'b Natural, m: &'c Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_add(&Natural::from(3u32), &Natural::from(5u32)), 3); assert_eq!((&Natural::from(7u32)).mod_add(&Natural::from(5u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModAdd<&'a Natural, &'b Natural> for Natural #### fn mod_add(self, other: &'a Natural, m: &'b Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by value and the second and third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_add(&Natural::from(3u32), &Natural::from(5u32)), 3); assert_eq!(Natural::from(7u32).mod_add(&Natural::from(5u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` is taken by value and `c` and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModAdd<&'b Natural, Natural> for &'a Natural #### fn mod_add(self, other: &'b Natural, m: Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by reference and the third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_add(&Natural::from(3u32), Natural::from(5u32)), 3); assert_eq!((&Natural::from(7u32)).mod_add(&Natural::from(5u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `c` are taken by reference and `m` is taken by value. 
#### type Output = Natural ### impl<'a, 'b> ModAdd<Natural, &'b Natural> for &'a Natural #### fn mod_add(self, other: Natural, m: &'b Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by reference and the second by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_add(Natural::from(3u32), &Natural::from(5u32)), 3); assert_eq!((&Natural::from(7u32)).mod_add(Natural::from(5u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `m` are taken by reference and `c` is taken by value. #### type Output = Natural ### impl<'a> ModAdd<Natural, &'a Natural> for Natural #### fn mod_add(self, other: Natural, m: &'a Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by value and the third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_add(Natural::from(3u32), &Natural::from(5u32)), 3); assert_eq!(Natural::from(7u32).mod_add(Natural::from(5u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `c` are taken by value and `m` is taken by reference. #### type Output = Natural ### impl<'a> ModAdd<Natural, Natural> for &'a Natural #### fn mod_add(self, other: Natural, m: Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by reference and the second and third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_add(Natural::from(3u32), Natural::from(5u32)), 3); assert_eq!((&Natural::from(7u32)).mod_add(Natural::from(5u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` is taken by reference and `c` and `m` are taken by value. #### type Output = Natural ### impl ModAdd<Natural, Natural> for Natural #### fn mod_add(self, other: Natural, m: Natural) -> Natural Adds two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`.
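Because the inputs are assumed to be reduced, `mod_add` agrees with ordinary addition followed by reduction. A minimal cross-check sketch; it assumes the plain `Add` and `Rem` operator impls that `Natural` provides elsewhere.

```
use malachite_base::num::arithmetic::traits::ModAdd;
use malachite_nz::natural::Natural;

// mod_add(x, y, m) should equal (x + y) % m when x, y < m.
let x = Natural::from(7u32);
let y = Natural::from(5u32);
let m = Natural::from(10u32);
let expected = (&x + &y) % &m;              // plain addition, then remainder (assumed impls)
assert_eq!((&x).mod_add(&y, &m), expected); // all-by-reference ModAdd impl
```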
##### Examples ``` use malachite_base::num::arithmetic::traits::ModAdd; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_add(Natural::from(3u32), Natural::from(5u32)), 3); assert_eq!(Natural::from(7u32).mod_add(Natural::from(5u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value. #### type Output = Natural ### impl<'a> ModAddAssign<&'a Natural, Natural> for Natural #### fn mod_add_assign(&mut self, other: &'a Natural, m: Natural) Adds two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. $x \gets z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_add_assign(&Natural::from(3u32), Natural::from(5u32)); assert_eq!(x, 3); let mut x = Natural::from(7u32); x.mod_add_assign(&Natural::from(5u32), Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `m` are taken by value, `c` is taken by reference, and `a == b`. ### impl<'a, 'b> ModAddAssign<&'a Natural, &'b Natural> for Natural #### fn mod_add_assign(&mut self, other: &'a Natural, m: &'b Natural) Adds two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by reference. $x \gets z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_add_assign(&Natural::from(3u32), &Natural::from(5u32)); assert_eq!(x, 3); let mut x = Natural::from(7u32); x.mod_add_assign(&Natural::from(5u32), &Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` is taken by value, `c` and `m` are taken by reference, and `a == b`. ### impl<'a> ModAddAssign<Natural, &'a Natural> for Natural #### fn mod_add_assign(&mut self, other: Natural, m: &'a Natural) Adds two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. $x \gets z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModAddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_add_assign(Natural::from(3u32), &Natural::from(5u32)); assert_eq!(x, 3); let mut x = Natural::from(7u32); x.mod_add_assign(Natural::from(5u32), &Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b` and `c` are taken by value, `m` is taken by reference, and `a == b`. ### impl ModAddAssign<Natural, Natural> for Natural #### fn mod_add_assign(&mut self, other: Natural, m: Natural) Adds two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by value. $x \gets z$, where $x, y, z < m$ and $x + y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_add_assign(Natural::from(3u32), Natural::from(5u32)); assert_eq!(x, 3); let mut x = Natural::from(7u32); x.mod_add_assign(Natural::from(5u32), Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_addN` from `fmpz_mod/add.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value and `a == b`. ### impl<'a> ModAssign<&'a Natural> for Natural #### fn mod_assign(&mut self, other: &'a Natural) Divides a `Natural` by another `Natural`, taking the second `Natural` by reference and replacing the first by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); x.mod_assign(&Natural::from(10u32)); assert_eq!(x, 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x.mod_assign(&Natural::from_str("1234567890987").unwrap()); assert_eq!(x, 530068894399u64); ``` ### impl ModAssign<Natural> for Natural #### fn mod_assign(&mut self, other: Natural) Divides a `Natural` by another `Natural`, taking the second `Natural` by value and replacing the first by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero.
##### Examples ``` use malachite_base::num::arithmetic::traits::ModAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 let mut x = Natural::from(23u32); x.mod_assign(Natural::from(10u32)); assert_eq!(x, 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x.mod_assign(Natural::from_str("1234567890987").unwrap()); assert_eq!(x, 530068894399u64); ``` ### impl<'a, 'b> ModInverse<&'a Natural> for &'b Natural #### fn mod_inverse(self, m: &'a Natural) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo another `Natural` $m$. Assumes the first `Natural` is already reduced modulo $m$. Both `Natural`s are taken by reference. Returns `None` if $x$ and $m$ are not coprime. $f(x, m) = y$, where $x, y < m$, $\gcd(x, m) = 1$, and $xy \equiv 1 \mod m$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), m.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModInverse; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).mod_inverse(&Natural::from(10u32)), Some(Natural::from(7u32)) ); assert_eq!((&Natural::from(4u32)).mod_inverse(&Natural::from(10u32)), None); ``` #### type Output = Natural ### impl<'a> ModInverse<&'a Natural> for Natural #### fn mod_inverse(self, m: &'a Natural) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo another `Natural` $m$. Assumes the first `Natural` is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. Returns `None` if $x$ and $m$ are not coprime. $f(x, m) = y$, where $x, y < m$, $\gcd(x, m) = 1$, and $xy \equiv 1 \mod m$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), m.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModInverse; use malachite_nz::natural::Natural; assert_eq!( Natural::from(3u32).mod_inverse(&Natural::from(10u32)), Some(Natural::from(7u32)) ); assert_eq!(Natural::from(4u32).mod_inverse(&Natural::from(10u32)), None); ``` #### type Output = Natural ### impl<'a> ModInverse<Natural> for &'a Natural #### fn mod_inverse(self, m: Natural) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo another `Natural` $m$. Assumes the first `Natural` is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. Returns `None` if $x$ and $m$ are not coprime. $f(x, m) = y$, where $x, y < m$, $\gcd(x, m) = 1$, and $xy \equiv 1 \mod m$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), m.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModInverse; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).mod_inverse(Natural::from(10u32)), Some(Natural::from(7u32)) ); assert_eq!((&Natural::from(4u32)).mod_inverse(Natural::from(10u32)), None); ``` #### type Output = Natural ### impl ModInverse<Natural> for Natural #### fn mod_inverse(self, m: Natural) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo another `Natural` $m$. Assumes the first `Natural` is already reduced modulo $m$.
Both `Natural`s are taken by value. Returns `None` if $x$ and $m$ are not coprime. $f(x, m) = y$, where $x, y < m$, $\gcd(x, m) = 1$, and $xy \equiv 1 \mod m$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), m.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModInverse; use malachite_nz::natural::Natural; assert_eq!( Natural::from(3u32).mod_inverse(Natural::from(10u32)), Some(Natural::from(7u32)) ); assert_eq!(Natural::from(4u32).mod_inverse(Natural::from(10u32)), None); ``` #### type Output = Natural ### impl ModIsReduced<Natural> for Natural #### fn mod_is_reduced(&self, m: &Natural) -> bool Returns whether a `Natural` is reduced modulo another `Natural` $m$; in other words, whether it is less than $m$. $m$ cannot be zero. $f(x, m) = (x < m)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `m` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModIsReduced, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_is_reduced(&Natural::from(5u32)), true); assert_eq!( Natural::from(10u32).pow(12).mod_is_reduced(&Natural::from(10u32).pow(12)), false ); assert_eq!( Natural::from(10u32).pow(12) .mod_is_reduced(&(Natural::from(10u32).pow(12) + Natural::ONE)), true ); ``` ### impl<'a> ModMul<&'a Natural, Natural> for Natural #### fn mod_mul(self, other: &'a Natural, m: Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by value and the second by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_mul(&Natural::from(4u32), Natural::from(15u32)), 12); assert_eq!(Natural::from(7u32).mod_mul(&Natural::from(6u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by value and `c` is taken by reference. #### type Output = Natural ### impl<'a, 'b, 'c> ModMul<&'b Natural, &'c Natural> for &'a Natural #### fn mod_mul(self, other: &'b Natural, m: &'c Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(3u32)).mod_mul(&Natural::from(4u32), &Natural::from(15u32)), 12 ); assert_eq!((&Natural::from(7u32)).mod_mul(&Natural::from(6u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by reference.
#### type Output = Natural ### impl<'a, 'b> ModMul<&'a Natural, &'b Natural> for Natural #### fn mod_mul(self, other: &'a Natural, m: &'b Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by value and the second and third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_mul(&Natural::from(4u32), &Natural::from(15u32)), 12); assert_eq!(Natural::from(7u32).mod_mul(&Natural::from(6u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by value and `c` and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModMul<&'b Natural, Natural> for &'a Natural #### fn mod_mul(self, other: &'b Natural, m: Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by reference and the third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_mul(&Natural::from(4u32), Natural::from(15u32)), 12); assert_eq!((&Natural::from(7u32)).mod_mul(&Natural::from(6u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by reference and `m` is taken by value. #### type Output = Natural ### impl<'a, 'b> ModMul<Natural, &'b Natural> for &'a Natural #### fn mod_mul(self, other: Natural, m: &'b Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by reference and the second by value. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_mul(Natural::from(4u32), &Natural::from(15u32)), 12); assert_eq!((&Natural::from(7u32)).mod_mul(Natural::from(6u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by reference and `c` is taken by value. #### type Output = Natural ### impl<'a> ModMul<Natural, &'a Natural> for Natural #### fn mod_mul(self, other: Natural, m: &'a Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by value and the third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_mul(Natural::from(4u32), &Natural::from(15u32)), 12); assert_eq!(Natural::from(7u32).mod_mul(Natural::from(6u32), &Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by value and `m` is taken by reference. #### type Output = Natural ### impl<'a> ModMul<Natural, Natural> for &'a Natural #### fn mod_mul(self, other: Natural, m: Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by reference and the second and third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_mul(Natural::from(4u32), Natural::from(15u32)), 12); assert_eq!((&Natural::from(7u32)).mod_mul(Natural::from(6u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by reference and `c` and `m` are taken by value. #### type Output = Natural ### impl ModMul<Natural, Natural> for Natural #### fn mod_mul(self, other: Natural, m: Natural) -> Natural Multiplies two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by value. $f(x, y, m) = z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_mul(Natural::from(4u32), Natural::from(15u32)), 12); assert_eq!(Natural::from(7u32).mod_mul(Natural::from(6u32), Natural::from(10u32)), 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value. #### type Output = Natural ### impl<'a> ModMulAssign<&'a Natural, Natural> for Natural #### fn mod_mul_assign(&mut self, other: &'a Natural, m: Natural) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_mul_assign(&Natural::from(4u32), Natural::from(15u32)); assert_eq!(x, 12); let mut x = Natural::from(7u32); x.mod_mul_assign(&Natural::from(6u32), Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by value, `c` is taken by reference, and `a == b`. 
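A common way to combine these operations is to check a modular inverse with a modular multiplication. The sketch below only uses the by-reference `ModInverse` and `ModMul` impls documented in this section.

```
use malachite_base::num::arithmetic::traits::{ModInverse, ModMul};
use malachite_base::num::basic::traits::One;
use malachite_nz::natural::Natural;

// A unit times its modular inverse is congruent to 1 modulo m.
let x = Natural::from(3u32);
let m = Natural::from(10u32);
let inv = (&x).mod_inverse(&m).unwrap();          // 7, since 3 * 7 = 21 ≡ 1 (mod 10)
assert_eq!((&x).mod_mul(&inv, &m), Natural::ONE);
```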
### impl<'a, 'b> ModMulAssign<&'a Natural, &'b Natural> for Natural #### fn mod_mul_assign(&mut self, other: &'a Natural, m: &'b Natural) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by reference. $x \gets z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_mul_assign(&Natural::from(4u32), &Natural::from(15u32)); assert_eq!(x, 12); let mut x = Natural::from(7u32); x.mod_mul_assign(&Natural::from(6u32), &Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by value, `c` and `m` are taken by reference, and `a == b`. ### impl<'a> ModMulAssign<Natural, &'a Natural> for Natural #### fn mod_mul_assign(&mut self, other: Natural, m: &'a Natural) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. $x \gets z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_mul_assign(Natural::from(4u32), &Natural::from(15u32)); assert_eq!(x, 12); let mut x = Natural::from(7u32); x.mod_mul_assign(Natural::from(6u32), &Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by value, `m` is taken by reference, and `a == b`. ### impl ModMulAssign<Natural, Natural> for Natural #### fn mod_mul_assign(&mut self, other: Natural, m: Natural) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by value. $x \gets z$, where $x, y, z < m$ and $xy \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_mul_assign(Natural::from(4u32), Natural::from(15u32)); assert_eq!(x, 12); let mut x = Natural::from(7u32); x.mod_mul_assign(Natural::from(6u32), Natural::from(10u32)); assert_eq!(x, 2); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value and `a == b`. ### impl<'a> ModMulPrecomputed<&'a Natural, Natural> for Natural #### fn precompute_mod_mul_data(m: &Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. 
#### fn mod_mul_precomputed( self, other: &'a Natural, m: Natural, data: &ModMulData ) -> Natural Multiplies two `Natural` modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by value and the second by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( Natural::from(6u8).mod_mul_precomputed( &Natural::from(7u32), Natural::from(10u32), &data ), 2 ); assert_eq!( Natural::from(9u8).mod_mul_precomputed( &Natural::from(9u32), Natural::from(10u32), &data ), 1 ); assert_eq!( Natural::from(4u8).mod_mul_precomputed( &Natural::from(7u32), Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by value and `c` is taken by reference. #### type Output = Natural #### type Data = ModMulData ### impl<'a, 'b, 'c> ModMulPrecomputed<&'b Natural, &'c Natural> for &'a Natural #### fn precompute_mod_mul_data(m: &&Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: &'b Natural, m: &'c Natural, data: &ModMulData ) -> Natural Multiplies two `Natural` modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( (&Natural::from(6u8)).mod_mul_precomputed( &Natural::from(7u32), &Natural::from(10u32), &data ), 2 ); assert_eq!( (&Natural::from(9u8)).mod_mul_precomputed( &Natural::from(9u32), &Natural::from(10u32), &data ), 1 ); assert_eq!( (&Natural::from(4u8)).mod_mul_precomputed( &Natural::from(7u32), &Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from fmpz_mod/mul.c, FLINT 2.7.1, where `b`, `c`, and `m` are taken by reference. #### type Output = Natural #### type Data = ModMulData ### impl<'a, 'b> ModMulPrecomputed<&'a Natural, &'b Natural> for Natural #### fn precompute_mod_mul_data(m: &&Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. 
#### fn mod_mul_precomputed( self, other: &'a Natural, m: &'b Natural, data: &ModMulData ) -> Natural Multiplies two `Natural` modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by value and the second and third by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( Natural::from(6u8).mod_mul_precomputed( &Natural::from(7u32), &Natural::from(10u32), &data ), 2 ); assert_eq!( Natural::from(9u8).mod_mul_precomputed( &Natural::from(9u32), &Natural::from(10u32), &data ), 1 ); assert_eq!( Natural::from(4u8).mod_mul_precomputed( &Natural::from(7u32), &Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by value and `c` and `m` are taken by reference. #### type Output = Natural #### type Data = ModMulData ### impl<'a, 'b> ModMulPrecomputed<&'b Natural, Natural> for &'a Natural #### fn precompute_mod_mul_data(m: &Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: &'b Natural, m: Natural, data: &ModMulData ) -> Natural Multiplies two `Natural` modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by reference and the third by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( (&Natural::from(6u8)).mod_mul_precomputed( &Natural::from(7u32), Natural::from(10u32), &data ), 2 ); assert_eq!( (&Natural::from(9u8)).mod_mul_precomputed( &Natural::from(9u32), Natural::from(10u32), &data ), 1 ); assert_eq!( (&Natural::from(4u8)).mod_mul_precomputed( &Natural::from(7u32), Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by reference and `m` is taken by value. #### type Output = Natural #### type Data = ModMulData ### impl<'a, 'b> ModMulPrecomputed<Natural, &'b Natural> for &'a Natural #### fn precompute_mod_mul_data(m: &&Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. 
This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: Natural, m: &'b Natural, data: &ModMulData ) -> Natural Multiplies two `Natural` modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by reference and the second by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( (&Natural::from(6u8)).mod_mul_precomputed( Natural::from(7u32), &Natural::from(10u32), &data ), 2 ); assert_eq!( (&Natural::from(9u8)).mod_mul_precomputed( Natural::from(9u32), &Natural::from(10u32), &data ), 1 ); assert_eq!( (&Natural::from(4u8)).mod_mul_precomputed( Natural::from(7u32), &Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by reference and `c` is taken by value. #### type Output = Natural #### type Data = ModMulData ### impl<'a> ModMulPrecomputed<Natural, &'a Natural> for Natural #### fn precompute_mod_mul_data(m: &&Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: Natural, m: &'a Natural, data: &ModMulData ) -> Natural Multiplies two `Natural` modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by value and the third by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( Natural::from(6u8).mod_mul_precomputed( Natural::from(7u32), &Natural::from(10u32), &data ), 2 ); assert_eq!( Natural::from(9u8).mod_mul_precomputed( Natural::from(9u32), &Natural::from(10u32), &data ), 1 ); assert_eq!( Natural::from(4u8).mod_mul_precomputed( Natural::from(7u32), &Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by value and `m` is taken by reference. #### type Output = Natural #### type Data = ModMulData ### impl<'a> ModMulPrecomputed<Natural, Natural> for &'a Natural #### fn precompute_mod_mul_data(m: &Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. 
This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: Natural, m: Natural, data: &ModMulData ) -> Natural Multiplies two `Natural` modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by reference and the second and third by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( (&Natural::from(6u8)).mod_mul_precomputed( Natural::from(7u32), Natural::from(10u32), &data ), 2 ); assert_eq!( (&Natural::from(9u8)).mod_mul_precomputed( Natural::from(9u32), Natural::from(10u32), &data ), 1 ); assert_eq!( (&Natural::from(4u8)).mod_mul_precomputed( Natural::from(7u32), Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by reference and `c` and `m` are taken by value. #### type Output = Natural #### type Data = ModMulData ### impl ModMulPrecomputed<Natural, Natural> for Natural #### fn precompute_mod_mul_data(m: &Natural) -> ModMulData Precomputes data for modular multiplication. See `mod_mul_precomputed` and `mod_mul_precomputed_assign`. ##### Worst-case complexity Constant time and additional memory. This is equivalent to part of `fmpz_mod_ctx_init` from `fmpz_mod/ctx_init.c`, FLINT 2.7.1. #### fn mod_mul_precomputed( self, other: Natural, m: Natural, data: &ModMulData ) -> Natural Multiplies two `Natural` modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModMulPrecomputed; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); assert_eq!( Natural::from(6u8).mod_mul_precomputed( Natural::from(7u32), Natural::from(10u32), &data ), 2 ); assert_eq!( Natural::from(9u8).mod_mul_precomputed( Natural::from(9u32), Natural::from(10u32), &data ), 1 ); assert_eq!( Natural::from(4u8).mod_mul_precomputed( Natural::from(7u32), Natural::from(10u32), &data ), 8 ); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value. #### type Output = Natural #### type Data = ModMulData ### impl<'a> ModMulPrecomputedAssign<&'a Natural, Natural> for Natural #### fn mod_mul_precomputed_assign( &mut self, other: &'a Natural, m: Natural, data: &ModMulData ) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. 
Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign}; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); let mut x = Natural::from(6u8); x.mod_mul_precomputed_assign(&Natural::from(7u32), Natural::from(10u32), &data); assert_eq!(x, 2); let mut x = Natural::from(9u8); x.mod_mul_precomputed_assign(&Natural::from(9u32), Natural::from(10u32), &data); assert_eq!(x, 1); let mut x = Natural::from(4u8); x.mod_mul_precomputed_assign(&Natural::from(7u32), Natural::from(10u32), &data); assert_eq!(x, 8); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `m` are taken by value, `c` is taken by reference, and `a == b`. ### impl<'a, 'b> ModMulPrecomputedAssign<&'a Natural, &'b Natural> for Natural #### fn mod_mul_precomputed_assign( &mut self, other: &'a Natural, m: &'b Natural, data: &ModMulData ) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign}; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); let mut x = Natural::from(6u8); x.mod_mul_precomputed_assign(&Natural::from(7u32), &Natural::from(10u32), &data); assert_eq!(x, 2); let mut x = Natural::from(9u8); x.mod_mul_precomputed_assign(&Natural::from(9u32), &Natural::from(10u32), &data); assert_eq!(x, 1); let mut x = Natural::from(4u8); x.mod_mul_precomputed_assign(&Natural::from(7u32), &Natural::from(10u32), &data); assert_eq!(x, 8); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` is taken by value, `c` and `m` are taken by reference, and `a == b`. ### impl<'a> ModMulPrecomputedAssign<Natural, &'a Natural> for Natural #### fn mod_mul_precomputed_assign( &mut self, other: Natural, m: &'a Natural, data: &ModMulData ) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
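The point of the precomputed data is reuse: when several multiplications share one modulus, the data is created once and then passed to every call. A minimal sketch using the by-reference assignment impl documented above; the accumulator and factors are arbitrary small values chosen for illustration.

```
use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign};
use malachite_nz::natural::Natural;

// Precompute the modulus-dependent data once, then reuse it in a loop.
let m = Natural::from(10u32);
let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&m);
let mut acc = Natural::from(6u32);
for y in [7u32, 9, 3] {
    acc.mod_mul_precomputed_assign(&Natural::from(y), &m, &data);
}
// 6 * 7 * 9 * 3 = 1134, and 1134 ≡ 4 (mod 10)
assert_eq!(acc, Natural::from(4u32));
```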
##### Examples ``` use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign}; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); let mut x = Natural::from(6u8); x.mod_mul_precomputed_assign(Natural::from(7u32), &Natural::from(10u32), &data); assert_eq!(x, 2); let mut x = Natural::from(9u8); x.mod_mul_precomputed_assign(Natural::from(9u32), &Natural::from(10u32), &data); assert_eq!(x, 1); let mut x = Natural::from(4u8); x.mod_mul_precomputed_assign(Natural::from(7u32), &Natural::from(10u32), &data); assert_eq!(x, 8); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b` and `c` are taken by value, `m` is taken by reference, and `a == b`. ### impl ModMulPrecomputedAssign<Natural, Natural> for Natural #### fn mod_mul_precomputed_assign( &mut self, other: Natural, m: Natural, data: &ModMulData ) Multiplies two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by value. Some precomputed data is provided; this speeds up computations involving several modular multiplications with the same modulus. The precomputed data should be obtained using `precompute_mod_mul_data`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModMulPrecomputed, ModMulPrecomputedAssign}; use malachite_nz::natural::Natural; let data = ModMulPrecomputed::<Natural>::precompute_mod_mul_data(&Natural::from(10u32)); let mut x = Natural::from(6u8); x.mod_mul_precomputed_assign(Natural::from(7u32), Natural::from(10u32), &data); assert_eq!(x, 2); let mut x = Natural::from(9u8); x.mod_mul_precomputed_assign(Natural::from(9u32), Natural::from(10u32), &data); assert_eq!(x, 1); let mut x = Natural::from(4u8); x.mod_mul_precomputed_assign(Natural::from(7u32), Natural::from(10u32), &data); assert_eq!(x, 8); ``` This is equivalent to `_fmpz_mod_mulN` from `fmpz_mod/mul.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value and `a == b`. ### impl<'a, 'b> ModNeg<&'b Natural> for &'a Natural #### fn mod_neg(self, m: &'b Natural) -> Natural Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNeg, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_neg(&Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).mod_neg(&Natural::from(10u32)), 3); assert_eq!( (&Natural::from(7u32)).mod_neg(&Natural::from(10u32).pow(12)), 999999999993u64 ); ``` #### type Output = Natural ### impl<'a> ModNeg<&'a Natural> for Natural #### fn mod_neg(self, m: &'a Natural) -> Natural Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNeg, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_neg(&Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).mod_neg(&Natural::from(10u32)), 3); assert_eq!(Natural::from(7u32).mod_neg(&Natural::from(10u32).pow(12)), 999999999993u64); ``` #### type Output = Natural ### impl<'a> ModNeg<Natural> for &'a Natural #### fn mod_neg(self, m: Natural) -> Natural Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNeg, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_neg(Natural::from(5u32)), 0); assert_eq!((&Natural::from(7u32)).mod_neg(Natural::from(10u32)), 3); assert_eq!((&Natural::from(7u32)).mod_neg(Natural::from(10u32).pow(12)), 999999999993u64); ``` #### type Output = Natural ### impl ModNeg<Natural> for Natural #### fn mod_neg(self, m: Natural) -> Natural Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, m) = y$, where $x, y < m$ and $-x \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNeg, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_neg(Natural::from(5u32)), 0); assert_eq!(Natural::from(7u32).mod_neg(Natural::from(10u32)), 3); assert_eq!(Natural::from(7u32).mod_neg(Natural::from(10u32).pow(12)), 999999999993u64); ``` #### type Output = Natural ### impl<'a> ModNegAssign<&'a Natural> for Natural #### fn mod_neg_assign(&mut self, m: &'a Natural) Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $-x \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNegAssign, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut n = Natural::ZERO; n.mod_neg_assign(&Natural::from(5u32)); assert_eq!(n, 0); let mut n = Natural::from(7u32); n.mod_neg_assign(&Natural::from(10u32)); assert_eq!(n, 3); let mut n = Natural::from(7u32); n.mod_neg_assign(&Natural::from(10u32).pow(12)); assert_eq!(n, 999999999993u64); ``` ### impl ModNegAssign<Natural> for Natural #### fn mod_neg_assign(&mut self, m: Natural) Negates a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $-x \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
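As a quick sanity check, negating a value modulo $m$ and then adding the original value back gives zero modulo $m$. The sketch below combines `ModNegAssign` and `ModAdd` from this section.

```
use malachite_base::num::arithmetic::traits::{ModAdd, ModNegAssign};
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;

// x + (-x mod m) ≡ 0 (mod m)
let m = Natural::from(10u32);
let x = Natural::from(7u32);
let mut neg = x.clone();
neg.mod_neg_assign(&m);                        // neg is now 3, since 7 + 3 ≡ 0 (mod 10)
assert_eq!(x.mod_add(neg, m), Natural::ZERO);
```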
##### Examples ``` use malachite_base::num::arithmetic::traits::{ModNegAssign, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut n = Natural::ZERO; n.mod_neg_assign(Natural::from(5u32)); assert_eq!(n, 0); let mut n = Natural::from(7u32); n.mod_neg_assign(Natural::from(10u32)); assert_eq!(n, 3); let mut n = Natural::from(7u32); n.mod_neg_assign(Natural::from(10u32).pow(12)); assert_eq!(n, 999999999993u64); ``` ### impl<'a> ModPow<&'a Natural, Natural> for Natural #### fn mod_pow(self, exp: &'a Natural, m: Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first and third `Natural`s are taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(4u32).mod_pow(&Natural::from(13u32), Natural::from(497u32)), 445); assert_eq!(Natural::from(10u32).mod_pow(&Natural::from(1000u32), Natural::from(30u32)), 10); ``` #### type Output = Natural ### impl<'a, 'b, 'c> ModPow<&'b Natural, &'c Natural> for &'a Natural #### fn mod_pow(self, exp: &'b Natural, m: &'c Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. All three `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_pow(&Natural::from(13u32), &Natural::from(497u32)), 445 ); assert_eq!( (&Natural::from(10u32)).mod_pow(&Natural::from(1000u32), &Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl<'a, 'b> ModPow<&'a Natural, &'b Natural> for Natural #### fn mod_pow(self, exp: &'a Natural, m: &'b Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first `Natural` is taken by value and the second and third by reference. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(4u32).mod_pow(&Natural::from(13u32), &Natural::from(497u32)), 445); assert_eq!( Natural::from(10u32).mod_pow(&Natural::from(1000u32), &Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl<'a, 'b> ModPow<&'b Natural, Natural> for &'a Natural #### fn mod_pow(self, exp: &'b Natural, m: Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first two `Natural`s are taken by reference and the third by value. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. 
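For a prime modulus $p$, this congruence specializes to Fermat's little theorem: $x^{p-1} \equiv 1 \mod p$ whenever $p$ does not divide $x$. A short illustrative sketch (the prime 101 is an arbitrary choice):

```
use malachite_base::num::arithmetic::traits::ModPow;
use malachite_base::num::basic::traits::One;
use malachite_nz::natural::Natural;

// Fermat's little theorem: 2^(101 - 1) ≡ 1 (mod 101), since 101 is prime.
let p = Natural::from(101u32);
let exp = Natural::from(100u32);
assert_eq!(Natural::from(2u32).mod_pow(&exp, &p), Natural::ONE);
```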
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_pow(&Natural::from(13u32), Natural::from(497u32)), 445 ); assert_eq!( (&Natural::from(10u32)).mod_pow(&Natural::from(1000u32), Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl<'a, 'b> ModPow<Natural, &'b Natural> for &'a Natural #### fn mod_pow(self, exp: Natural, m: &'b Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first and third `Natural`s are taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_pow(Natural::from(13u32), &Natural::from(497u32)), 445 ); assert_eq!( (&Natural::from(10u32)).mod_pow(Natural::from(1000u32), &Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl<'a> ModPow<Natural, &'a Natural> for Natural #### fn mod_pow(self, exp: Natural, m: &'a Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first two `Natural`s are taken by value and the third by reference. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(4u32).mod_pow(Natural::from(13u32), &Natural::from(497u32)), 445); assert_eq!(Natural::from(10u32).mod_pow(Natural::from(1000u32), &Natural::from(30u32)), 10); ``` #### type Output = Natural ### impl<'a> ModPow<Natural, Natural> for &'a Natural #### fn mod_pow(self, exp: Natural, m: Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. The first `Natural` is taken by reference and the second and third by value. $f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_pow(Natural::from(13u32), Natural::from(497u32)), 445 ); assert_eq!( (&Natural::from(10u32)).mod_pow(Natural::from(1000u32), Natural::from(30u32)), 10 ); ``` #### type Output = Natural ### impl ModPow<Natural, Natural> for Natural #### fn mod_pow(self, exp: Natural, m: Natural) -> Natural Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$. Assumes the input is already reduced mod $m$. All three `Natural`s are taken by value.
$f(x, n, m) = y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(4u32).mod_pow(Natural::from(13u32), Natural::from(497u32)), 445); assert_eq!(Natural::from(10u32).mod_pow(Natural::from(1000u32), Natural::from(30u32)), 10); ``` #### type Output = Natural ### impl<'a> ModPowAssign<&'a Natural, Natural> for Natural #### fn mod_pow_assign(&mut self, exp: &'a Natural, m: Natural) Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$, in place. Assumes the input is already reduced mod $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. $x \gets y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_pow_assign(&Natural::from(13u32), Natural::from(497u32)); assert_eq!(x, 445); let mut x = Natural::from(10u32); x.mod_pow_assign(&Natural::from(1000u32), Natural::from(30u32)); assert_eq!(x, 10); ``` ### impl<'a, 'b> ModPowAssign<&'a Natural, &'b Natural> for Natural #### fn mod_pow_assign(&mut self, exp: &'a Natural, m: &'b Natural) Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$, in place. Assumes the input is already reduced mod $m$. Both `Natural`s on the right-hand side are taken by reference. $x \gets y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_pow_assign(&Natural::from(13u32), &Natural::from(497u32)); assert_eq!(x, 445); let mut x = Natural::from(10u32); x.mod_pow_assign(&Natural::from(1000u32), &Natural::from(30u32)); assert_eq!(x, 10); ``` ### impl<'a> ModPowAssign<Natural, &'a Natural> for Natural #### fn mod_pow_assign(&mut self, exp: Natural, m: &'a Natural) Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$, in place. Assumes the input is already reduced mod $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. $x \gets y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_pow_assign(Natural::from(13u32), &Natural::from(497u32)); assert_eq!(x, 445); let mut x = Natural::from(10u32); x.mod_pow_assign(Natural::from(1000u32), &Natural::from(30u32)); assert_eq!(x, 10); ``` ### impl ModPowAssign<Natural, Natural> for Natural #### fn mod_pow_assign(&mut self, exp: Natural, m: Natural) Raises a `Natural` to a `Natural` power modulo a third `Natural` $m$, in place. Assumes the input is already reduced mod $m$. Both `Natural`s on the right-hand side are taken by value. $x \gets y$, where $x, y < m$ and $x^n \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_pow_assign(Natural::from(13u32), Natural::from(497u32)); assert_eq!(x, 445); let mut x = Natural::from(10u32); x.mod_pow_assign(Natural::from(1000u32), Natural::from(30u32)); assert_eq!(x, 10); ``` ### impl<'a> ModPowerOf2 for &'a Natural #### fn mod_power_of_2(self, pow: u64) -> Natural Divides a `Natural` by $2^k$, returning just the remainder. The `Natural` is taken by reference. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 assert_eq!((&Natural::from(260u32)).mod_power_of_2(8), 4); // 100 * 2^4 + 11 = 1611 assert_eq!((&Natural::from(1611u32)).mod_power_of_2(4), 11); ``` #### type Output = Natural ### impl ModPowerOf2 for Natural #### fn mod_power_of_2(self, pow: u64) -> Natural Divides a `Natural` by $2^k$, returning just the remainder. The `Natural` is taken by value. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 assert_eq!(Natural::from(260u32).mod_power_of_2(8), 4); // 100 * 2^4 + 11 = 1611 assert_eq!(Natural::from(1611u32).mod_power_of_2(4), 11); ``` #### type Output = Natural ### impl<'a, 'b> ModPowerOf2Add<&'a Natural> for &'b Natural #### fn mod_power_of_2_add(self, other: &'a Natural, pow: u64) -> Natural Adds two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by reference. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
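Because the modulus here is a power of two, `mod_power_of_2_add` agrees with adding normally and then reducing with `mod_power_of_2`. A hedged sketch of that cross-check (it assumes ordinary `Natural` addition via `+`, which is not documented in this excerpt):
```
use malachite_base::num::arithmetic::traits::{ModPowerOf2, ModPowerOf2Add};
use malachite_nz::natural::Natural;

// (30 + 5) mod 2^5 = 35 mod 32 = 3.
assert_eq!((&Natural::from(30u32)).mod_power_of_2_add(&Natural::from(5u32), 5), 3);
// The same result via ordinary addition followed by reduction modulo 2^5 (assumes `+` on `Natural`).
assert_eq!((Natural::from(30u32) + Natural::from(5u32)).mod_power_of_2(5), 3);
```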
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Add; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_power_of_2_add(&Natural::from(2u32), 5), 2); assert_eq!((&Natural::from(10u32)).mod_power_of_2_add(&Natural::from(14u32), 4), 8); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Add<&'a Natural> for Natural #### fn mod_power_of_2_add(self, other: &'a Natural, pow: u64) -> Natural Adds two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by value and the second by reference. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Add; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_power_of_2_add(&Natural::from(2u32), 5), 2); assert_eq!(Natural::from(10u32).mod_power_of_2_add(&Natural::from(14u32), 4), 8); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Add<Natural> for &'a Natural #### fn mod_power_of_2_add(self, other: Natural, pow: u64) -> Natural Adds two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by reference and the second by value. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Add; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_power_of_2_add(Natural::from(2u32), 5), 2); assert_eq!((&Natural::from(10u32)).mod_power_of_2_add(Natural::from(14u32), 4), 8); ``` #### type Output = Natural ### impl ModPowerOf2Add<Natural> for Natural #### fn mod_power_of_2_add(self, other: Natural, pow: u64) -> Natural Adds two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by value. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Add; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_power_of_2_add(Natural::from(2u32), 5), 2); assert_eq!(Natural::from(10u32).mod_power_of_2_add(Natural::from(14u32), 4), 8); ``` #### type Output = Natural ### impl<'a> ModPowerOf2AddAssign<&'a Natural> for Natural #### fn mod_power_of_2_add_assign(&mut self, other: &'a Natural, pow: u64) Adds two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by reference. $x \gets z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2AddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_power_of_2_add_assign(&Natural::from(2u32), 5); assert_eq!(x, 2); let mut x = Natural::from(10u32); x.mod_power_of_2_add_assign(&Natural::from(14u32), 4); assert_eq!(x, 8); ``` ### impl ModPowerOf2AddAssign<Natural> for Natural #### fn mod_power_of_2_add_assign(&mut self, other: Natural, pow: u64) Adds two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by value. $x \gets z$, where $x, y, z < 2^k$ and $x + y \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2AddAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.mod_power_of_2_add_assign(Natural::from(2u32), 5); assert_eq!(x, 2); let mut x = Natural::from(10u32); x.mod_power_of_2_add_assign(Natural::from(14u32), 4); assert_eq!(x, 8); ``` ### impl ModPowerOf2Assign for Natural #### fn mod_power_of_2_assign(&mut self, pow: u64) Divides a `Natural` by $2^k$, replacing the `Natural` by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Assign; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 let mut x = Natural::from(260u32); x.mod_power_of_2_assign(8); assert_eq!(x, 4); // 100 * 2^4 + 11 = 1611 let mut x = Natural::from(1611u32); x.mod_power_of_2_assign(4); assert_eq!(x, 11); ``` ### impl<'a> ModPowerOf2Inverse for &'a Natural #### fn mod_power_of_2_inverse(self, pow: u64) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo $2^k$. Assumes the `Natural` is already reduced modulo $2^k$. The `Natural` is taken by reference. Returns `None` if $x$ is even. $f(x, k) = y$, where $x, y < 2^k$, $x$ is odd, and $xy \equiv 1 \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Inverse; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_inverse(8), Some(Natural::from(171u32))); assert_eq!((&Natural::from(4u32)).mod_power_of_2_inverse(8), None); ``` #### type Output = Natural ### impl ModPowerOf2Inverse for Natural #### fn mod_power_of_2_inverse(self, pow: u64) -> Option<Natural> Computes the multiplicative inverse of a `Natural` modulo $2^k$. Assumes the `Natural` is already reduced modulo $2^k$. The `Natural` is taken by value. Returns `None` if $x$ is even. $f(x, k) = y$, where $x, y < 2^k$, $x$ is odd, and $xy \equiv 1 \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
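The defining property of the inverse, $x \cdot x^{-1} \equiv 1 \mod 2^k$, can be checked directly with `mod_power_of_2_mul`, which is documented further below. A hedged sketch:
```
use malachite_base::num::arithmetic::traits::{ModPowerOf2Inverse, ModPowerOf2Mul};
use malachite_nz::natural::Natural;

// 171 is the inverse of 3 modulo 2^8, because 3 * 171 = 513 = 2 * 256 + 1.
let inv = Natural::from(3u32).mod_power_of_2_inverse(8).unwrap();
assert_eq!(inv, 171);
assert_eq!(Natural::from(3u32).mod_power_of_2_mul(inv, 8), 1);
```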
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Inverse; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_inverse(8), Some(Natural::from(171u32))); assert_eq!(Natural::from(4u32).mod_power_of_2_inverse(8), None); ``` #### type Output = Natural ### impl ModPowerOf2IsReduced for Natural #### fn mod_power_of_2_is_reduced(&self, pow: u64) -> bool Returns whether a `Natural` is reduced modulo $2^k$; in other words, whether it has no more than $k$ significant bits. $f(x, k) = (x < 2^k)$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::{ModPowerOf2IsReduced, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_power_of_2_is_reduced(5), true); assert_eq!(Natural::from(10u32).pow(12).mod_power_of_2_is_reduced(39), false); assert_eq!(Natural::from(10u32).pow(12).mod_power_of_2_is_reduced(40), true); ``` ### impl<'a, 'b> ModPowerOf2Mul<&'b Natural> for &'a Natural #### fn mod_power_of_2_mul(self, other: &'b Natural, pow: u64) -> Natural Multiplies two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by reference. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Mul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_mul(&Natural::from(2u32), 5), 6); assert_eq!((&Natural::from(10u32)).mod_power_of_2_mul(&Natural::from(14u32), 4), 12); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Mul<&'a Natural> for Natural #### fn mod_power_of_2_mul(self, other: &'a Natural, pow: u64) -> Natural Multiplies two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by value and the second by reference. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Mul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_mul(&Natural::from(2u32), 5), 6); assert_eq!(Natural::from(10u32).mod_power_of_2_mul(&Natural::from(14u32), 4), 12); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Mul<Natural> for &'a Natural #### fn mod_power_of_2_mul(self, other: Natural, pow: u64) -> Natural Multiplies two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by reference and the second by value. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Mul; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_mul(Natural::from(2u32), 5), 6); assert_eq!((&Natural::from(10u32)).mod_power_of_2_mul(Natural::from(14u32), 4), 12); ``` #### type Output = Natural ### impl ModPowerOf2Mul<Natural> for Natural #### fn mod_power_of_2_mul(self, other: Natural, pow: u64) -> Natural Multiplies two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by value. $f(x, y, k) = z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Mul; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_mul(Natural::from(2u32), 5), 6); assert_eq!(Natural::from(10u32).mod_power_of_2_mul(Natural::from(14u32), 4), 12); ``` #### type Output = Natural ### impl<'a> ModPowerOf2MulAssign<&'a Natural> for Natural #### fn mod_power_of_2_mul_assign(&mut self, other: &'a Natural, pow: u64) Multiplies two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by reference. $x \gets z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2MulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_power_of_2_mul_assign(&Natural::from(2u32), 5); assert_eq!(x, 6); let mut x = Natural::from(10u32); x.mod_power_of_2_mul_assign(&Natural::from(14u32), 4); assert_eq!(x, 12); ``` ### impl ModPowerOf2MulAssign<Natural> for Natural #### fn mod_power_of_2_mul_assign(&mut self, other: Natural, pow: u64) Multiplies two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by value. $x \gets z$, where $x, y, z < 2^k$ and $xy \equiv z \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2MulAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_power_of_2_mul_assign(Natural::from(2u32), 5); assert_eq!(x, 6); let mut x = Natural::from(10u32); x.mod_power_of_2_mul_assign(Natural::from(14u32), 4); assert_eq!(x, 12); ``` ### impl<'a> ModPowerOf2Neg for &'a Natural #### fn mod_power_of_2_neg(self, pow: u64) -> Natural Negates a `Natural` modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, k) = y$, where $x, y < 2^k$ and $-x \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Neg; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).mod_power_of_2_neg(5), 0); assert_eq!((&Natural::ZERO).mod_power_of_2_neg(100), 0); assert_eq!((&Natural::from(100u32)).mod_power_of_2_neg(8), 156); assert_eq!( (&Natural::from(100u32)).mod_power_of_2_neg(100).to_string(), "1267650600228229401496703205276" ); ``` #### type Output = Natural ### impl ModPowerOf2Neg for Natural #### fn mod_power_of_2_neg(self, pow: u64) -> Natural Negates a `Natural` modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, k) = y$, where $x, y < 2^k$ and $-x \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Neg; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.mod_power_of_2_neg(5), 0); assert_eq!(Natural::ZERO.mod_power_of_2_neg(100), 0); assert_eq!(Natural::from(100u32).mod_power_of_2_neg(8), 156); assert_eq!( Natural::from(100u32).mod_power_of_2_neg(100).to_string(), "1267650600228229401496703205276" ); ``` #### type Output = Natural ### impl ModPowerOf2NegAssign for Natural #### fn mod_power_of_2_neg_assign(&mut self, pow: u64) Negates a `Natural` modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^p$ and $-x \equiv y \mod 2^p$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2NegAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut n = Natural::ZERO; n.mod_power_of_2_neg_assign(5); assert_eq!(n, 0); let mut n = Natural::ZERO; n.mod_power_of_2_neg_assign(100); assert_eq!(n, 0); let mut n = Natural::from(100u32); n.mod_power_of_2_neg_assign(8); assert_eq!(n, 156); let mut n = Natural::from(100u32); n.mod_power_of_2_neg_assign(100); assert_eq!(n.to_string(), "1267650600228229401496703205276"); ``` ### impl<'a, 'b> ModPowerOf2Pow<&'b Natural> for &'a Natural #### fn mod_power_of_2_pow(self, exp: &Natural, pow: u64) -> Natural Raises a `Natural` to a `Natural` power modulo $2^k$. Assumes the input is already reduced mod $2^k$. Both `Natural`s are taken by reference. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Pow; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_pow(&Natural::from(10u32), 8), 169); assert_eq!( (&Natural::from(11u32)).mod_power_of_2_pow(&Natural::from(1000u32), 30), 289109473 ); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Pow<&'a Natural> for Natural #### fn mod_power_of_2_pow(self, exp: &Natural, pow: u64) -> Natural Raises a `Natural` to a `Natural` power modulo $2^k$. Assumes the input is already reduced mod $2^k$. The first `Natural` is taken by value and the second by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. 
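When the modulus happens to be a power of two, this specialized method should agree with the general `ModPow` documented earlier. A hedged sketch with a small hand-checkable case:
```
use malachite_base::num::arithmetic::traits::{ModPow, ModPowerOf2Pow};
use malachite_nz::natural::Natural;

// 3^4 = 81, and 81 mod 2^5 = 81 mod 32 = 17.
assert_eq!(Natural::from(3u32).mod_power_of_2_pow(&Natural::from(4u32), 5), 17);
// The general-modulus form with m = 32 = 2^5 gives the same result.
assert_eq!(
    Natural::from(3u32).mod_pow(Natural::from(4u32), Natural::from(32u32)),
    17
);
```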
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Pow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_pow(&Natural::from(10u32), 8), 169); assert_eq!( Natural::from(11u32).mod_power_of_2_pow(&Natural::from(1000u32), 30), 289109473 ); ``` #### type Output = Natural ### impl<'a> ModPowerOf2Pow<Natural> for &'a Natural #### fn mod_power_of_2_pow(self, exp: Natural, pow: u64) -> Natural Raises a `Natural` to a `Natural` power modulo $2^k$. Assumes the input is already reduced mod $2^k$. The first `Natural` is taken by reference and the second by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Pow; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(3u32)).mod_power_of_2_pow(Natural::from(10u32), 8), 169); assert_eq!( (&Natural::from(11u32)).mod_power_of_2_pow(Natural::from(1000u32), 30), 289109473 ); ``` #### type Output = Natural ### impl ModPowerOf2Pow<Natural> for Natural #### fn mod_power_of_2_pow(self, exp: Natural, pow: u64) -> Natural Raises a `Natural` to a `Natural` power modulo $2^k$. Assumes the input is already reduced mod $2^k$. Both `Natural`s are taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2Pow; use malachite_nz::natural::Natural; assert_eq!(Natural::from(3u32).mod_power_of_2_pow(Natural::from(10u32), 8), 169); assert_eq!( Natural::from(11u32).mod_power_of_2_pow(Natural::from(1000u32), 30), 289109473 ); ``` #### type Output = Natural ### impl<'a> ModPowerOf2PowAssign<&'a Natural> for Natural #### fn mod_power_of_2_pow_assign(&mut self, exp: &Natural, pow: u64) Raises a `Natural` to a `Natural` power modulo $2^k$, in place. Assumes the input is already reduced mod $2^k$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2PowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_power_of_2_pow_assign(&Natural::from(10u32), 8); assert_eq!(x, 169); let mut x = Natural::from(11u32); x.mod_power_of_2_pow_assign(&Natural::from(1000u32), 30); assert_eq!(x, 289109473); ``` ### impl ModPowerOf2PowAssign<Natural> for Natural #### fn mod_power_of_2_pow_assign(&mut self, exp: Natural, pow: u64) Raises a `Natural` to a `Natural` power modulo $2^k$, in place. Assumes the input is already reduced mod $2^k$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < 2^k$ and $x^n \equiv y \mod 2^k$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `pow`, and $m$ is `exp.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModPowerOf2PowAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(3u32); x.mod_power_of_2_pow_assign(Natural::from(10u32), 8); assert_eq!(x, 169); let mut x = Natural::from(11u32); x.mod_power_of_2_pow_assign(Natural::from(1000u32), 30); assert_eq!(x, 289109473); ``` ### impl<'a> ModPowerOf2Shl<i128> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i128, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i128> for Natural #### fn mod_power_of_2_shl(self, bits: i128, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<i16> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i16, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i16> for Natural #### fn mod_power_of_2_shl(self, bits: i16, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<i32> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i32, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i32> for Natural #### fn mod_power_of_2_shl(self, bits: i32, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. 
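Since the `Examples` sections for these shift implementations only point elsewhere ("See here."), a hedged sketch of the signed behaviour may help: a positive shift multiplies and reduces, while a negative shift divides and takes the floor. The values are small hand-checkable cases, not taken from the crate's own examples:
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Shl;
use malachite_nz::natural::Natural;

// 12 << 2 = 48, and 48 mod 2^5 = 48 mod 32 = 16.
assert_eq!(Natural::from(12u32).mod_power_of_2_shl(2i32, 5), 16);
// A negative shift divides: floor(12 / 2^2) = 3, and 3 mod 32 = 3.
assert_eq!(Natural::from(12u32).mod_power_of_2_shl(-2i32, 5), 3);
```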
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<i64> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i64, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i64> for Natural #### fn mod_power_of_2_shl(self, bits: i64, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<i8> for &'a Natural #### fn mod_power_of_2_shl(self, bits: i8, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<i8> for Natural #### fn mod_power_of_2_shl(self, bits: i8, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<isize> for &'a Natural #### fn mod_power_of_2_shl(self, bits: isize, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<isize> for Natural #### fn mod_power_of_2_shl(self, bits: isize, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u128> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u128, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. 
The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u128> for Natural #### fn mod_power_of_2_shl(self, bits: u128, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u16> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u16, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u16> for Natural #### fn mod_power_of_2_shl(self, bits: u16, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u32> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u32, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u32> for Natural #### fn mod_power_of_2_shl(self, bits: u32, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u64> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u64, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u64> for Natural #### fn mod_power_of_2_shl(self, bits: u64, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. 
The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<u8> for &'a Natural #### fn mod_power_of_2_shl(self, bits: u8, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<u8> for Natural #### fn mod_power_of_2_shl(self, bits: u8, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shl<usize> for &'a Natural #### fn mod_power_of_2_shl(self, bits: usize, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shl<usize> for Natural #### fn mod_power_of_2_shl(self, bits: usize, pow: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2ShlAssign<i128> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i128, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<i16> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i16, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<i32> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i32, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. 
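A hedged sketch of the in-place form, mirroring the shift example above:
```
use malachite_base::num::arithmetic::traits::ModPowerOf2ShlAssign;
use malachite_nz::natural::Natural;

// 12 << 2 = 48, and 48 mod 2^5 = 16, written back in place.
let mut x = Natural::from(12u32);
x.mod_power_of_2_shl_assign(2i32, 5);
assert_eq!(x, 16);
```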
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<i64> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i64, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<i8> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: i8, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<isize> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: isize, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^nx \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u128> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u128, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u16> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u16, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u32> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u32, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u64> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u64, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<u8> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: u8, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. 
Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShlAssign<usize> for Natural #### fn mod_power_of_2_shl_assign(&mut self, bits: usize, pow: u64) Left-shifts a `Natural` (multiplies it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $2^nx \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl<'a> ModPowerOf2Shr<i128> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i128, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i128> for Natural #### fn mod_power_of_2_shr(self, bits: i128, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<i16> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i16, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i16> for Natural #### fn mod_power_of_2_shr(self, bits: i16, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<i32> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i32, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i32> for Natural #### fn mod_power_of_2_shr(self, bits: i32, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. 
The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<i64> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i64, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i64> for Natural #### fn mod_power_of_2_shr(self, bits: i64, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<i8> for &'a Natural #### fn mod_power_of_2_shr(self, bits: i8, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<i8> for Natural #### fn mod_power_of_2_shr(self, bits: i8, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModPowerOf2Shr<isize> for &'a Natural #### fn mod_power_of_2_shr(self, bits: isize, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. #### type Output = Natural ### impl ModPowerOf2Shr<isize> for Natural #### fn mod_power_of_2_shr(self, bits: isize, pow: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value. $f(x, n, k) = y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. 
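In place of the external example referenced above, a hedged sketch of the right-shift behaviour using the `isize` implementation documented just above (small hand-checkable values):
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Shr;
use malachite_nz::natural::Natural;

// floor(10 / 2^2) = 2, and 2 mod 2^5 = 2.
assert_eq!(Natural::from(10u32).mod_power_of_2_shr(2isize, 5), 2);
// A negative shift multiplies instead: 10 * 2 = 20, and 20 mod 32 = 20.
assert_eq!(Natural::from(10u32).mod_power_of_2_shr(-1isize, 5), 20);
```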
#### type Output = Natural ### impl ModPowerOf2ShrAssign<i128> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i128, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<i16> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i16, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<i32> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i32, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<i64> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i64, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<i8> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: i8, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl ModPowerOf2ShrAssign<isize> for Natural #### fn mod_power_of_2_shr_assign(&mut self, bits: isize, pow: u64) Right-shifts a `Natural` (divides it by a power of 2) modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$. $x \gets y$, where $x, y < 2^k$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples See here. ### impl<'a> ModPowerOf2Square for &'a Natural #### fn mod_power_of_2_square(self, pow: u64) -> Natural Squares a `Natural` modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by reference. $f(x, k) = y$, where $x, y < 2^k$ and $x^2 \equiv y \mod 2^k$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. 
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Square;
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;
use std::str::FromStr;

assert_eq!((&Natural::ZERO).mod_power_of_2_square(2), 0);
assert_eq!((&Natural::from(5u32)).mod_power_of_2_square(3), 1);
assert_eq!(
    (&Natural::from_str("12345678987654321").unwrap())
        .mod_power_of_2_square(64).to_string(),
    "16556040056090124897"
);
```
#### type Output = Natural
### impl ModPowerOf2Square for Natural
#### fn mod_power_of_2_square(self, pow: u64) -> Natural
Squares a `Natural` modulo $2^k$. Assumes the input is already reduced modulo $2^k$. The `Natural` is taken by value.
$f(x, k) = y$, where $x, y < 2^k$ and $x^2 \equiv y \mod 2^k$.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Square;
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;
use std::str::FromStr;

assert_eq!(Natural::ZERO.mod_power_of_2_square(2), 0);
assert_eq!(Natural::from(5u32).mod_power_of_2_square(3), 1);
assert_eq!(
    Natural::from_str("12345678987654321").unwrap().mod_power_of_2_square(64).to_string(),
    "16556040056090124897"
);
```
#### type Output = Natural
### impl ModPowerOf2SquareAssign for Natural
#### fn mod_power_of_2_square_assign(&mut self, pow: u64)
Squares a `Natural` modulo $2^k$, in place. Assumes the input is already reduced modulo $2^k$.
$x \gets y$, where $x, y < 2^k$ and $x^2 \equiv y \mod 2^k$.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2SquareAssign;
use malachite_base::num::basic::traits::Zero;
use malachite_nz::natural::Natural;
use std::str::FromStr;

let mut n = Natural::ZERO;
n.mod_power_of_2_square_assign(2);
assert_eq!(n, 0);

let mut n = Natural::from(5u32);
n.mod_power_of_2_square_assign(3);
assert_eq!(n, 1);

let mut n = Natural::from_str("12345678987654321").unwrap();
n.mod_power_of_2_square_assign(64);
assert_eq!(n.to_string(), "16556040056090124897");
```
### impl<'a, 'b> ModPowerOf2Sub<&'a Natural> for &'b Natural
#### fn mod_power_of_2_sub(self, other: &'a Natural, pow: u64) -> Natural
Subtracts two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by reference.
$f(x, y, k) = z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Sub;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

assert_eq!((&Natural::from(10u32)).mod_power_of_2_sub(&Natural::TWO, 4), 8);
assert_eq!((&Natural::from(56u32)).mod_power_of_2_sub(&Natural::from(123u32), 9), 445);
```
#### type Output = Natural
### impl<'a> ModPowerOf2Sub<&'a Natural> for Natural
#### fn mod_power_of_2_sub(self, other: &'a Natural, pow: u64) -> Natural
Subtracts two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by value and the second by reference.
$f(x, y, k) = z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Sub;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

assert_eq!(Natural::from(10u32).mod_power_of_2_sub(&Natural::TWO, 4), 8);
assert_eq!(Natural::from(56u32).mod_power_of_2_sub(&Natural::from(123u32), 9), 445);
```
#### type Output = Natural
### impl<'a> ModPowerOf2Sub<Natural> for &'a Natural
#### fn mod_power_of_2_sub(self, other: Natural, pow: u64) -> Natural
Subtracts two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. The first `Natural` is taken by reference and the second by value.
$f(x, y, k) = z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Sub;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

assert_eq!((&Natural::from(10u32)).mod_power_of_2_sub(Natural::TWO, 4), 8);
assert_eq!((&Natural::from(56u32)).mod_power_of_2_sub(Natural::from(123u32), 9), 445);
```
#### type Output = Natural
### impl ModPowerOf2Sub<Natural> for Natural
#### fn mod_power_of_2_sub(self, other: Natural, pow: u64) -> Natural
Subtracts two `Natural`s modulo $2^k$. Assumes the inputs are already reduced modulo $2^k$. Both `Natural`s are taken by value.
$f(x, y, k) = z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2Sub;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

assert_eq!(Natural::from(10u32).mod_power_of_2_sub(Natural::TWO, 4), 8);
assert_eq!(Natural::from(56u32).mod_power_of_2_sub(Natural::from(123u32), 9), 445);
```
#### type Output = Natural
### impl<'a> ModPowerOf2SubAssign<&'a Natural> for Natural
#### fn mod_power_of_2_sub_assign(&mut self, other: &'a Natural, pow: u64)
Subtracts two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by reference.
$x \gets z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2SubAssign;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

let mut x = Natural::from(10u32);
x.mod_power_of_2_sub_assign(&Natural::TWO, 4);
assert_eq!(x, 8);

let mut x = Natural::from(56u32);
x.mod_power_of_2_sub_assign(&Natural::from(123u32), 9);
assert_eq!(x, 445);
```
### impl ModPowerOf2SubAssign<Natural> for Natural
#### fn mod_power_of_2_sub_assign(&mut self, other: Natural, pow: u64)
Subtracts two `Natural`s modulo $2^k$, in place. Assumes the inputs are already reduced modulo $2^k$. The `Natural` on the right-hand side is taken by value.
$x \gets z$, where $x, y, z < 2^k$ and $x - y \equiv z \mod 2^k$.
##### Worst-case complexity
$T(n) = O(n)$
$M(n) = O(n)$
where $T$ is time, $M$ is additional memory, and $n$ is `pow`.
##### Examples
```
use malachite_base::num::arithmetic::traits::ModPowerOf2SubAssign;
use malachite_base::num::basic::traits::Two;
use malachite_nz::natural::Natural;

let mut x = Natural::from(10u32);
x.mod_power_of_2_sub_assign(Natural::TWO, 4);
assert_eq!(x, 8);

let mut x = Natural::from(56u32);
x.mod_power_of_2_sub_assign(Natural::from(123u32), 9);
assert_eq!(x, 445);
```
### impl ModShl<i128, Natural> for Natural
#### fn mod_shl(self, bits: i128, m: Natural) -> Natural
Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value.
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$.
##### Worst-case complexity
$T(n, m) = O(mn \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
#### type Output = Natural
### impl<'a, 'b> ModShl<i128, &'b Natural> for &'a Natural
#### fn mod_shl(self, bits: i128, m: &'b Natural) -> Natural
Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference.
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$.
##### Worst-case complexity
$T(n, m) = O(mn \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
#### type Output = Natural
### impl<'a> ModShl<i128, &'a Natural> for Natural
#### fn mod_shl(self, bits: i128, m: &'a Natural) -> Natural
Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference.
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$.
##### Worst-case complexity
$T(n, m) = O(mn \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
#### type Output = Natural
### impl<'a> ModShl<i128, Natural> for &'a Natural
#### fn mod_shl(self, bits: i128, m: Natural) -> Natural
Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value.
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$.
##### Worst-case complexity
$T(n, m) = O(mn \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
#### type Output = Natural
### impl ModShl<i16, Natural> for Natural
#### fn mod_shl(self, bits: i16, m: Natural) -> Natural
Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value.
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$.
##### Worst-case complexity
$T(n, m) = O(mn \log n \log\log n)$
$M(n) = O(n \log n)$
where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`.
##### Examples
See here.
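Since the signed `mod_shl` impls only link to their examples, here is a minimal usage sketch based on the signatures and formula above; the operand values are illustrative, and the negative-shift case simply follows the formula $\lfloor 2^nx \rfloor \equiv y \mod m$ with $n < 0$.

```
use malachite_base::num::arithmetic::traits::ModShl;
use malachite_nz::natural::Natural;

// 8 < 10, so 8 is already reduced modulo 10.
// 8 << 2 = 32, and 32 mod 10 = 2.
assert_eq!(Natural::from(8u32).mod_shl(2i16, Natural::from(10u32)), 2);

// Per the formula, a negative shift amount shifts right instead:
// floor(8 / 2) = 4, and 4 mod 10 = 4.
assert_eq!(Natural::from(8u32).mod_shl(-1i16, Natural::from(10u32)), 4);
```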
#### type Output = Natural ### impl<'a, 'b> ModShl<i16, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i16, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i16, &'a Natural> for Natural #### fn mod_shl(self, bits: i16, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i16, Natural> for &'a Natural #### fn mod_shl(self, bits: i16, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<i32, Natural> for Natural #### fn mod_shl(self, bits: i32, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<i32, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i32, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i32, &'a Natural> for Natural #### fn mod_shl(self, bits: i32, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i32, Natural> for &'a Natural #### fn mod_shl(self, bits: i32, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<i64, Natural> for Natural #### fn mod_shl(self, bits: i64, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<i64, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i64, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i64, &'a Natural> for Natural #### fn mod_shl(self, bits: i64, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i64, Natural> for &'a Natural #### fn mod_shl(self, bits: i64, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<i8, Natural> for Natural #### fn mod_shl(self, bits: i8, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. 
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<i8, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: i8, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i8, &'a Natural> for Natural #### fn mod_shl(self, bits: i8, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<i8, Natural> for &'a Natural #### fn mod_shl(self, bits: i8, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<isize, Natural> for Natural #### fn mod_shl(self, bits: isize, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<isize, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: isize, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<isize, &'a Natural> for Natural #### fn mod_shl(self, bits: isize, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. 
Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<isize, Natural> for &'a Natural #### fn mod_shl(self, bits: isize, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<u128, Natural> for Natural #### fn mod_shl(self, bits: u128, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u128, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u128, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u128, &'a Natural> for Natural #### fn mod_shl(self, bits: u128, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u128, Natural> for &'a Natural #### fn mod_shl(self, bits: u128, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
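The unsigned `mod_shl` impls likewise only link to their examples; the following sketch uses the signatures documented above, with illustrative operand values.

```
use malachite_base::num::arithmetic::traits::ModShl;
use malachite_nz::natural::Natural;

// 123 < 200, so 123 is already reduced modulo 200.
// 123 << 5 = 3936, and 3936 mod 200 = 136.
assert_eq!(Natural::from(123u32).mod_shl(5u128, Natural::from(200u32)), 136);

// The same shift with both `Natural`s taken by reference.
assert_eq!((&Natural::from(123u32)).mod_shl(5u128, &Natural::from(200u32)), 136);
```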
#### type Output = Natural ### impl ModShl<u16, Natural> for Natural #### fn mod_shl(self, bits: u16, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u16, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u16, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u16, &'a Natural> for Natural #### fn mod_shl(self, bits: u16, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u16, Natural> for &'a Natural #### fn mod_shl(self, bits: u16, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<u32, Natural> for Natural #### fn mod_shl(self, bits: u32, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u32, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u32, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
#### type Output = Natural ### impl<'a> ModShl<u32, &'a Natural> for Natural #### fn mod_shl(self, bits: u32, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u32, Natural> for &'a Natural #### fn mod_shl(self, bits: u32, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<u64, Natural> for Natural #### fn mod_shl(self, bits: u64, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u64, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u64, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u64, &'a Natural> for Natural #### fn mod_shl(self, bits: u64, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u64, Natural> for &'a Natural #### fn mod_shl(self, bits: u64, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. 
##### Examples See here. #### type Output = Natural ### impl ModShl<u8, Natural> for Natural #### fn mod_shl(self, bits: u8, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<u8, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: u8, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u8, &'a Natural> for Natural #### fn mod_shl(self, bits: u8, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<u8, Natural> for &'a Natural #### fn mod_shl(self, bits: u8, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShl<usize, Natural> for Natural #### fn mod_shl(self, bits: usize, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShl<usize, &'b Natural> for &'a Natural #### fn mod_shl(self, bits: usize, m: &'b Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
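As a further sketch (illustrative values only), the by-reference `mod_shl` call documented just above and the in-place `mod_shl_assign` variants documented in the impls that follow can be used like this:

```
use malachite_base::num::arithmetic::traits::ModShl;
use malachite_base::num::arithmetic::traits::ModShlAssign;
use malachite_nz::natural::Natural;

// 7 < 100, so 7 is already reduced modulo 100.
// 7 << 4 = 112, and 112 mod 100 = 12.
assert_eq!((&Natural::from(7u32)).mod_shl(4usize, &Natural::from(100u32)), 12);

// The in-place variant mutates the left-hand `Natural`.
let mut x = Natural::from(7u32);
x.mod_shl_assign(4usize, Natural::from(100u32));
assert_eq!(x, 12);
```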
#### type Output = Natural ### impl<'a> ModShl<usize, &'a Natural> for Natural #### fn mod_shl(self, bits: usize, m: &'a Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShl<usize, Natural> for &'a Natural #### fn mod_shl(self, bits: usize, m: Natural) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShlAssign<i128, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i128, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i128, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i128, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<i16, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i16, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i16, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i16, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<i32, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i32, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i32, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i32, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<i64, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i64, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i64, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i64, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<i8, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i8, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<i8, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: i8, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<isize, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: isize, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<isize, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: isize, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^nx \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u128, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u128, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u128, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u128, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u16, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u16, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u16, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u16, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u32, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u32, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u32, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u32, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u64, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u64, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u64, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u64, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<u8, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u8, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<u8, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: u8, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShlAssign<usize, Natural> for Natural #### fn mod_shl_assign(&mut self, bits: usize, m: Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShlAssign<usize, &'a Natural> for Natural #### fn mod_shl_assign(&mut self, bits: usize, m: &'a Natural) Left-shifts a `Natural` (multiplies it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $2^nx \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShr<i128, Natural> for Natural #### fn mod_shr(self, bits: i128, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i128, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i128, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i128, &'a Natural> for Natural #### fn mod_shr(self, bits: i128, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i128, Natural> for &'a Natural #### fn mod_shr(self, bits: i128, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. 
$f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<i16, Natural> for Natural #### fn mod_shr(self, bits: i16, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i16, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i16, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i16, &'a Natural> for Natural #### fn mod_shr(self, bits: i16, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i16, Natural> for &'a Natural #### fn mod_shr(self, bits: i16, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<i32, Natural> for Natural #### fn mod_shr(self, bits: i32, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i32, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i32, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. 
Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i32, &'a Natural> for Natural #### fn mod_shr(self, bits: i32, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i32, Natural> for &'a Natural #### fn mod_shr(self, bits: i32, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<i64, Natural> for Natural #### fn mod_shr(self, bits: i64, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i64, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i64, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i64, &'a Natural> for Natural #### fn mod_shr(self, bits: i64, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. 
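The `mod_shr` impls also only link to their examples; a minimal sketch based on the signatures and formula above follows, with illustrative operand values. The negative-shift case again follows the formula $\lfloor 2^{-n}x \rfloor \equiv y \mod m$ with $n < 0$.

```
use malachite_base::num::arithmetic::traits::ModShr;
use malachite_nz::natural::Natural;

// 100 < 123, so 100 is already reduced modulo 123.
// floor(100 / 2^3) = 12, and 12 mod 123 = 12.
assert_eq!(Natural::from(100u32).mod_shr(3i64, Natural::from(123u32)), 12);

// Per the formula, a negative shift amount shifts left instead:
// 100 * 2 = 200, and 200 mod 123 = 77.
assert_eq!(Natural::from(100u32).mod_shr(-1i64, Natural::from(123u32)), 77);
```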
#### type Output = Natural ### impl<'a> ModShr<i64, Natural> for &'a Natural #### fn mod_shr(self, bits: i64, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<i8, Natural> for Natural #### fn mod_shr(self, bits: i8, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<i8, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: i8, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i8, &'a Natural> for Natural #### fn mod_shr(self, bits: i8, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<i8, Natural> for &'a Natural #### fn mod_shr(self, bits: i8, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShr<isize, Natural> for Natural #### fn mod_shr(self, bits: isize, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. 
##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a, 'b> ModShr<isize, &'b Natural> for &'a Natural #### fn mod_shr(self, bits: isize, m: &'b Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<isize, &'a Natural> for Natural #### fn mod_shr(self, bits: isize, m: &'a Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl<'a> ModShr<isize, Natural> for &'a Natural #### fn mod_shr(self, bits: isize, m: Natural) -> Natural Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, n, m) = y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural ### impl ModShrAssign<i128, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i128, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i128, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i128, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<i16, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i16, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. 
The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i16, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i16, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<i32, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i32, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i32, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i32, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<i64, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i64, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i64, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i64, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<i8, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i8, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. 
Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<i8, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: i8, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ModShrAssign<isize, Natural> for Natural #### fn mod_shr_assign(&mut self, bits: isize, m: Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ModShrAssign<isize, &'a Natural> for Natural #### fn mod_shr_assign(&mut self, bits: isize, m: &'a Natural) Right-shifts a `Natural` (divides it by a power of 2) modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $\lfloor 2^{-n}x \rfloor \equiv y \mod m$. ##### Worst-case complexity $T(n, m) = O(mn \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `m.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a, 'b> ModSquare<&'b Natural> for &'a Natural #### fn mod_square(self, m: &'b Natural) -> Natural Squares a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by reference. $f(x, m) = y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquare; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(2u32)).mod_square(&Natural::from(10u32)), 4); assert_eq!((&Natural::from(100u32)).mod_square(&Natural::from(497u32)), 60); ``` #### type Output = Natural ### impl<'a> ModSquare<&'a Natural> for Natural #### fn mod_square(self, m: &'a Natural) -> Natural Squares a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by value and the second by reference. $f(x, m) = y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquare; use malachite_nz::natural::Natural; assert_eq!(Natural::from(2u32).mod_square(&Natural::from(10u32)), 4); assert_eq!(Natural::from(100u32).mod_square(&Natural::from(497u32)), 60); ``` #### type Output = Natural ### impl<'a> ModSquare<Natural> for &'a Natural #### fn mod_square(self, m: Natural) -> Natural Squares a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. The first `Natural` is taken by reference and the second by value. $f(x, m) = y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquare; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(2u32)).mod_square(Natural::from(10u32)), 4); assert_eq!((&Natural::from(100u32)).mod_square(Natural::from(497u32)), 60); ``` #### type Output = Natural ### impl ModSquare<Natural> for Natural #### fn mod_square(self, m: Natural) -> Natural Squares a `Natural` modulo another `Natural` $m$. Assumes the input is already reduced modulo $m$. Both `Natural`s are taken by value. $f(x, m) = y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquare; use malachite_nz::natural::Natural; assert_eq!(Natural::from(2u32).mod_square(Natural::from(10u32)), 4); assert_eq!(Natural::from(100u32).mod_square(Natural::from(497u32)), 60); ``` #### type Output = Natural ### impl<'a> ModSquareAssign<&'a Natural> for Natural #### fn mod_square_assign(&mut self, m: &'a Natural) Squares a `Natural` modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by reference. $x \gets y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquareAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(2u32); x.mod_square_assign(&Natural::from(10u32)); assert_eq!(x, 4); let mut x = Natural::from(100u32); x.mod_square_assign(&Natural::from(497u32)); assert_eq!(x, 60); ``` ### impl ModSquareAssign<Natural> for Natural #### fn mod_square_assign(&mut self, m: Natural) Squares a `Natural` modulo another `Natural` $m$, in place. Assumes the input is already reduced modulo $m$. The `Natural` on the right-hand side is taken by value. $x \gets y$, where $x, y < m$ and $x^2 \equiv y \mod m$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModSquareAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(2u32); x.mod_square_assign(Natural::from(10u32)); assert_eq!(x, 4); let mut x = Natural::from(100u32); x.mod_square_assign(Natural::from(497u32)); assert_eq!(x, 60); ``` ### impl<'a> ModSub<&'a Natural, Natural> for Natural #### fn mod_sub(self, other: &'a Natural, m: Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by value and the second by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( Natural::from(4u32).mod_sub(&Natural::from(3u32), Natural::from(5u32)).to_string(), "1" ); assert_eq!( Natural::from(7u32).mod_sub(&Natural::from(9u32), Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `m` are taken by value and `c` is taken by reference. #### type Output = Natural ### impl<'a, 'b, 'c> ModSub<&'b Natural, &'c Natural> for &'a Natural #### fn mod_sub(self, other: &'b Natural, m: &'c Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_sub(&Natural::from(3u32), &Natural::from(5u32)).to_string(), "1" ); assert_eq!( (&Natural::from(7u32)).mod_sub(&Natural::from(9u32), &Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModSub<&'a Natural, &'b Natural> for Natural #### fn mod_sub(self, other: &'a Natural, m: &'b Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by value and the second and third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( Natural::from(4u32).mod_sub(&Natural::from(3u32), &Natural::from(5u32)).to_string(), "1" ); assert_eq!( Natural::from(7u32).mod_sub(&Natural::from(9u32), &Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` is taken by value and `c` and `m` are taken by reference. #### type Output = Natural ### impl<'a, 'b> ModSub<&'b Natural, Natural> for &'a Natural #### fn mod_sub(self, other: &'b Natural, m: Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$.
The first two `Natural`s are taken by reference and the third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_sub(&Natural::from(3u32), Natural::from(5u32)).to_string(), "1" ); assert_eq!( (&Natural::from(7u32)).mod_sub(&Natural::from(9u32), Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `c` are taken by reference and `m` is taken by value. #### type Output = Natural ### impl<'a, 'b> ModSub<Natural, &'b Natural> for &'a Natural #### fn mod_sub(self, other: Natural, m: &'b Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first and third `Natural`s are taken by reference and the second by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_sub(Natural::from(3u32), &Natural::from(5u32)).to_string(), "1" ); assert_eq!( (&Natural::from(7u32)).mod_sub(Natural::from(9u32), &Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `m` are taken by reference and `c` is taken by value. #### type Output = Natural ### impl<'a> ModSub<Natural, &'a Natural> for Natural #### fn mod_sub(self, other: Natural, m: &'a Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first two `Natural`s are taken by value and the third by reference. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( Natural::from(4u32).mod_sub(Natural::from(3u32), &Natural::from(5u32)).to_string(), "1" ); assert_eq!( Natural::from(7u32).mod_sub(Natural::from(9u32), &Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `c` are taken by value and `m` is taken by reference. #### type Output = Natural ### impl<'a> ModSub<Natural, Natural> for &'a Natural #### fn mod_sub(self, other: Natural, m: Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. The first `Natural` is taken by reference and the second and third by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(4u32)).mod_sub(Natural::from(3u32), Natural::from(5u32)).to_string(), "1" ); assert_eq!( (&Natural::from(7u32)).mod_sub(Natural::from(9u32), Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` is taken by reference and `c` and `m` are taken by value. #### type Output = Natural ### impl ModSub<Natural, Natural> for Natural #### fn mod_sub(self, other: Natural, m: Natural) -> Natural Subtracts two `Natural`s modulo a third `Natural` $m$. Assumes the inputs are already reduced modulo $m$. All three `Natural`s are taken by value. $f(x, y, m) = z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSub; use malachite_nz::natural::Natural; assert_eq!( Natural::from(4u32).mod_sub(Natural::from(3u32), Natural::from(5u32)).to_string(), "1" ); assert_eq!( Natural::from(7u32).mod_sub(Natural::from(9u32), Natural::from(10u32)).to_string(), "8" ); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value. #### type Output = Natural ### impl<'a> ModSubAssign<&'a Natural, Natural> for Natural #### fn mod_sub_assign(&mut self, other: &'a Natural, m: Natural) Subtracts two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by reference and the second by value. $x \gets z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSubAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_sub_assign(&Natural::from(3u32), Natural::from(5u32)); assert_eq!(x.to_string(), "1"); let mut x = Natural::from(7u32); x.mod_sub_assign(&Natural::from(9u32), Natural::from(10u32)); assert_eq!(x.to_string(), "8"); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `m` are taken by value, `c` is taken by reference, and `a == b`. ### impl<'a, 'b> ModSubAssign<&'a Natural, &'b Natural> for Natural #### fn mod_sub_assign(&mut self, other: &'a Natural, m: &'b Natural) Subtracts two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by reference. $x \gets z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSubAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_sub_assign(&Natural::from(3u32), &Natural::from(5u32)); assert_eq!(x.to_string(), "1"); let mut x = Natural::from(7u32); x.mod_sub_assign(&Natural::from(9u32), &Natural::from(10u32)); assert_eq!(x.to_string(), "8"); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` is taken by value, `c` and `m` are taken by reference, and `a == b`. 
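The `ModSub` and `ModSubAssign` implementations above all encode the same relation: $z < m$ and $x - y \equiv z \mod m$. The following sketch is not taken from the crate documentation; it is a small consistency check of that relation, assuming the ordinary `Add`, `Sub`, and `Rem` operators for `Natural` in addition to the `ModSub` trait imported as in the examples above.
```
use malachite_base::num::arithmetic::traits::ModSub;
use malachite_nz::natural::Natural;

fn main() {
    let m = Natural::from(10u32);
    for x in 0u32..10 {
        for y in 0u32..10 {
            // Modular subtraction, staying within the naturals.
            let z = Natural::from(x).mod_sub(Natural::from(y), m.clone());
            // The defining identity: z should equal (x + m - y) mod m.
            let expected = (Natural::from(x) + &m - Natural::from(y)) % &m;
            assert_eq!(z, expected);
        }
    }
}
```
The subtraction inside the parentheses never underflows, because $x + m \ge m > y$ whenever the inputs are reduced modulo $m$, which is why the cross-check can stay entirely within `Natural`.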
### impl<'a> ModSubAssign<Natural, &'a Natural> for Natural #### fn mod_sub_assign(&mut self, other: Natural, m: &'a Natural) Subtracts two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. The first `Natural` on the right-hand side is taken by value and the second by reference. $x \gets z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSubAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_sub_assign(Natural::from(3u32), &Natural::from(5u32)); assert_eq!(x.to_string(), "1"); let mut x = Natural::from(7u32); x.mod_sub_assign(Natural::from(9u32), &Natural::from(10u32)); assert_eq!(x.to_string(), "8"); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b` and `c` are taken by value, `m` is taken by reference, and `a == b`. ### impl ModSubAssign<Natural, Natural> for Natural #### fn mod_sub_assign(&mut self, other: Natural, m: Natural) Subtracts two `Natural`s modulo a third `Natural` $m$, in place. Assumes the inputs are already reduced modulo $m$. Both `Natural`s on the right-hand side are taken by value. $x \gets z$, where $x, y, z < m$ and $x - y \equiv z \mod m$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `m.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::ModSubAssign; use malachite_nz::natural::Natural; let mut x = Natural::from(4u32); x.mod_sub_assign(Natural::from(3u32), Natural::from(5u32)); assert_eq!(x.to_string(), "1"); let mut x = Natural::from(7u32); x.mod_sub_assign(Natural::from(9u32), Natural::from(10u32)); assert_eq!(x.to_string(), "8"); ``` This is equivalent to `_fmpz_mod_subN` from `fmpz_mod/sub.c`, FLINT 2.7.1, where `b`, `c`, and `m` are taken by value and `a == b`. ### impl<'a, 'b> Mul<&'a Natural> for &'b Natural #### fn mul(self, other: &'a Natural) -> Natural Multiplies two `Natural`s, taking both by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(&Natural::ONE * &Natural::from(123u32), 123); assert_eq!(&Natural::from(123u32) * &Natural::ZERO, 0); assert_eq!(&Natural::from(123u32) * &Natural::from(456u32), 56088); assert_eq!( (&Natural::from_str("123456789000").unwrap() * &Natural::from_str("987654321000") .unwrap()).to_string(), "121932631112635269000000" ); ``` #### type Output = Natural The resulting type after applying the `*` operator.### impl<'a> Mul<&'a Natural> for Natural #### fn mul(self, other: &'a Natural) -> Natural Multiplies two `Natural`s, taking the first by value and the second by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ONE * &Natural::from(123u32), 123); assert_eq!(Natural::from(123u32) * &Natural::ZERO, 0); assert_eq!(Natural::from(123u32) * &Natural::from(456u32), 56088); assert_eq!( (Natural::from_str("123456789000").unwrap() * &Natural::from_str("987654321000") .unwrap()).to_string(), "121932631112635269000000" ); ``` #### type Output = Natural The resulting type after applying the `*` operator.### impl<'a> Mul<Natural> for &'a Natural #### fn mul(self, other: Natural) -> Natural Multiplies two `Natural`s, taking the first by reference and the second by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(&Natural::ONE * Natural::from(123u32), 123); assert_eq!(&Natural::from(123u32) * Natural::ZERO, 0); assert_eq!(&Natural::from(123u32) * Natural::from(456u32), 56088); assert_eq!( (&Natural::from_str("123456789000").unwrap() * Natural::from_str("987654321000") .unwrap()).to_string(), "121932631112635269000000" ); ``` #### type Output = Natural The resulting type after applying the `*` operator.### impl Mul<Natural> for Natural #### fn mul(self, other: Natural) -> Natural Multiplies two `Natural`s, taking both by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ONE * Natural::from(123u32), 123); assert_eq!(Natural::from(123u32) * Natural::ZERO, 0); assert_eq!(Natural::from(123u32) * Natural::from(456u32), 56088); assert_eq!( (Natural::from_str("123456789000").unwrap() * Natural::from_str("987654321000") .unwrap()).to_string(), "121932631112635269000000" ); ``` #### type Output = Natural The resulting type after applying the `*` operator.### impl<'a> MulAssign<&'a Natural> for Natural #### fn mul_assign(&mut self, other: &'a Natural) Multiplies a `Natural` by a `Natural` in place, taking the `Natural` on the right-hand side by reference. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; use std::str::FromStr; let mut x = Natural::ONE; x *= &Natural::from_str("1000").unwrap(); x *= &Natural::from_str("2000").unwrap(); x *= &Natural::from_str("3000").unwrap(); x *= &Natural::from_str("4000").unwrap(); assert_eq!(x.to_string(), "24000000000000"); ``` ### impl MulAssign<Natural> for Natural #### fn mul_assign(&mut self, other: Natural) Multiplies a `Natural` by a `Natural` in place, taking the `Natural` on the right-hand side by value. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; use std::str::FromStr; let mut x = Natural::ONE; x *= Natural::from_str("1000").unwrap(); x *= Natural::from_str("2000").unwrap(); x *= Natural::from_str("3000").unwrap(); x *= Natural::from_str("4000").unwrap(); assert_eq!(x.to_string(), "24000000000000"); ``` ### impl Multifactorial for Natural #### fn multifactorial(n: u64, m: u64) -> Natural Computes a multifactorial of a number. $$ f(n, m) = n!^{(m)} = n \times (n - m) \times (n - 2m) \times \cdots \times i. $$ If $n$ is divisible by $m$, then $i$ is $m$; otherwise, $i$ is the remainder when $n$ is divided by $m$. $n!^{(m)} = O(\sqrt{n}(n/e)^{n/m})$. ##### Worst-case complexity $T(n, m) = O(n (\log n)^2 \log\log n)$ $M(n, m) = O(n \log n)$ ##### Examples ``` use malachite_base::num::arithmetic::traits::Multifactorial; use malachite_nz::natural::Natural; assert_eq!(Natural::multifactorial(0, 1), 1); assert_eq!(Natural::multifactorial(1, 1), 1); assert_eq!(Natural::multifactorial(2, 1), 2); assert_eq!(Natural::multifactorial(3, 1), 6); assert_eq!(Natural::multifactorial(4, 1), 24); assert_eq!(Natural::multifactorial(5, 1), 120); assert_eq!(Natural::multifactorial(0, 2), 1); assert_eq!(Natural::multifactorial(1, 2), 1); assert_eq!(Natural::multifactorial(2, 2), 2); assert_eq!(Natural::multifactorial(3, 2), 3); assert_eq!(Natural::multifactorial(4, 2), 8); assert_eq!(Natural::multifactorial(5, 2), 15); assert_eq!(Natural::multifactorial(6, 2), 48); assert_eq!(Natural::multifactorial(7, 2), 105); assert_eq!(Natural::multifactorial(0, 3), 1); assert_eq!(Natural::multifactorial(1, 3), 1); assert_eq!(Natural::multifactorial(2, 3), 2); assert_eq!(Natural::multifactorial(3, 3), 3); assert_eq!(Natural::multifactorial(4, 3), 4); assert_eq!(Natural::multifactorial(5, 3), 10); assert_eq!(Natural::multifactorial(6, 3), 18); assert_eq!(Natural::multifactorial(7, 3), 28); assert_eq!(Natural::multifactorial(8, 3), 80); assert_eq!(Natural::multifactorial(9, 3), 162); assert_eq!( Natural::multifactorial(100, 3).to_string(), "174548867015437739741494347897360069928419328000000000" ); ``` ### impl Named for Natural #### const NAME: &'static str = _ The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details. ### impl<'a> Neg for &'a Natural #### fn neg(self) -> Integer Negates a `Natural`, taking it by reference and returning an `Integer`. $$ f(x) = -x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(-&Natural::ZERO, 0); assert_eq!(-&Natural::from(123u32), -123); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl Neg for Natural #### fn neg(self) -> Integer Negates a `Natural`, taking it by value and returning an `Integer`. $$ f(x) = -x. $$ ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(-Natural::ZERO, 0); assert_eq!(-Natural::from(123u32), -123); ``` #### type Output = Integer The resulting type after applying the `-` operator.### impl<'a, 'b> NegMod<&'b Natural> for &'a Natural #### fn neg_mod(self, other: &'b Natural) -> Natural Divides the negative of a `Natural` by another `Natural`, taking both by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!((&Natural::from(23u32)).neg_mod(&Natural::from(10u32)), 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .neg_mod(&Natural::from_str("1234567890987").unwrap()), 704498996588u64 ); ``` #### type Output = Natural ### impl<'a> NegMod<&'a Natural> for Natural #### fn neg_mod(self, other: &'a Natural) -> Natural Divides the negative of a `Natural` by another `Natural`, taking the first by value and the second by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!(Natural::from(23u32).neg_mod(&Natural::from(10u32)), 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .neg_mod(&Natural::from_str("1234567890987").unwrap()), 704498996588u64 ); ``` #### type Output = Natural ### impl<'a> NegMod<Natural> for &'a Natural #### fn neg_mod(self, other: Natural) -> Natural Divides the negative of a `Natural` by another `Natural`, taking the first by reference and the second by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::NegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!((&Natural::from(23u32)).neg_mod(Natural::from(10u32)), 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( (&Natural::from_str("1000000000000000000000000").unwrap()) .neg_mod(Natural::from_str("1234567890987").unwrap()), 704498996588u64 ); ``` #### type Output = Natural ### impl NegMod<Natural> for Natural #### fn neg_mod(self, other: Natural) -> Natural Divides the negative of a `Natural` by another `Natural`, taking both by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ f(x, y) = y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegMod; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 assert_eq!(Natural::from(23u32).neg_mod(Natural::from(10u32)), 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() .neg_mod(Natural::from_str("1234567890987").unwrap()), 704498996588u64 ); ``` #### type Output = Natural ### impl<'a> NegModAssign<&'a Natural> for Natural #### fn neg_mod_assign(&mut self, other: &'a Natural) Divides the negative of a `Natural` by another `Natural`, taking the second `Natural`s by reference and replacing the first by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ x \gets y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegModAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 let mut x = Natural::from(23u32); x.neg_mod_assign(&Natural::from(10u32)); assert_eq!(x, 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x.neg_mod_assign(&Natural::from_str("1234567890987").unwrap()); assert_eq!(x, 704498996588u64); ``` ### impl NegModAssign<Natural> for Natural #### fn neg_mod_assign(&mut self, other: Natural) Divides the negative of a `Natural` by another `Natural`, taking the second `Natural`s by value and replacing the first by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy - r$ and $0 \leq r < y$. $$ x \gets y\left \lceil \frac{x}{y} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::NegModAssign; use malachite_nz::natural::Natural; use std::str::FromStr; // 3 * 10 - 7 = 23 let mut x = Natural::from(23u32); x.neg_mod_assign(Natural::from(10u32)); assert_eq!(x, 7); // 810000006724 * 1234567890987 - 704498996588 = 1000000000000000000000000 let mut x = Natural::from_str("1000000000000000000000000").unwrap(); x.neg_mod_assign(Natural::from_str("1234567890987").unwrap()); assert_eq!(x, 704498996588u64); ``` ### impl<'a> NegModPowerOf2 for &'a Natural #### fn neg_mod_power_of_2(self, pow: u64) -> Natural Divides the negative of a `Natural` by a $2^k$, returning just the remainder. The `Natural` is taken by reference. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k - r$ and $0 \leq r < 2^k$. $$ f(x, k) = 2^k\left \lceil \frac{x}{2^k} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegModPowerOf2; use malachite_nz::natural::Natural; // 2 * 2^8 - 252 = 260 assert_eq!((&Natural::from(260u32)).neg_mod_power_of_2(8), 252); // 101 * 2^4 - 5 = 1611 assert_eq!((&Natural::from(1611u32)).neg_mod_power_of_2(4), 5); ``` #### type Output = Natural ### impl NegModPowerOf2 for Natural #### fn neg_mod_power_of_2(self, pow: u64) -> Natural Divides the negative of a `Natural` by a $2^k$, returning just the remainder. The `Natural` is taken by value. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k - r$ and $0 \leq r < 2^k$. $$ f(x, k) = 2^k\left \lceil \frac{x}{2^k} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegModPowerOf2; use malachite_nz::natural::Natural; // 2 * 2^8 - 252 = 260 assert_eq!(Natural::from(260u32).neg_mod_power_of_2(8), 252); // 101 * 2^4 - 5 = 1611 assert_eq!(Natural::from(1611u32).neg_mod_power_of_2(4), 5); ``` #### type Output = Natural ### impl NegModPowerOf2Assign for Natural #### fn neg_mod_power_of_2_assign(&mut self, pow: u64) Divides the negative of a `Natural` by $2^k$, returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k - r$ and $0 \leq r < 2^k$. $$ x \gets 2^k\left \lceil \frac{x}{2^k} \right \rceil - x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::NegModPowerOf2Assign; use malachite_nz::natural::Natural; // 2 * 2^8 - 252 = 260 let mut x = Natural::from(260u32); x.neg_mod_power_of_2_assign(8); assert_eq!(x, 252); // 101 * 2^4 - 5 = 1611 let mut x = Natural::from(1611u32); x.neg_mod_power_of_2_assign(4); assert_eq!(x, 5); ``` ### impl<'a> NextPowerOf2 for &'a Natural #### fn next_power_of_2(self) -> Natural Finds the smallest power of 2 greater than or equal to a `Natural`. The `Natural` is taken by reference. $f(x) = 2^{\lceil \log_2 x \rceil}$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{NextPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).next_power_of_2(), 1); assert_eq!((&Natural::from(123u32)).next_power_of_2(), 128); assert_eq!((&Natural::from(10u32).pow(12)).next_power_of_2(), 1099511627776u64); ``` #### type Output = Natural ### impl NextPowerOf2 for Natural #### fn next_power_of_2(self) -> Natural Finds the smallest power of 2 greater than or equal to a `Natural`. The `Natural` is taken by value. $f(x) = 2^{\lceil \log_2 x \rceil}$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{NextPowerOf2, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.next_power_of_2(), 1); assert_eq!(Natural::from(123u32).next_power_of_2(), 128); assert_eq!(Natural::from(10u32).pow(12).next_power_of_2(), 1099511627776u64); ``` #### type Output = Natural ### impl NextPowerOf2Assign for Natural #### fn next_power_of_2_assign(&mut self) Replaces a `Natural` with the smallest power of 2 greater than or equal to it. $x \gets 2^{\lceil \log_2 x \rceil}$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ (only if the underlying `Vec` needs to reallocate) where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{NextPowerOf2Assign, Pow}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.next_power_of_2_assign(); assert_eq!(x, 1); let mut x = Natural::from(123u32); x.next_power_of_2_assign(); assert_eq!(x, 128); let mut x = Natural::from(10u32).pow(12); x.next_power_of_2_assign(); assert_eq!(x, 1099511627776u64); ``` ### impl<'a> Not for &'a Natural #### fn not(self) -> Integer Returns the bitwise negation of a `Natural`, taking it by reference and returning an `Integer`. The `Natural` is bitwise-negated as if it were represented in two’s complement. $$ f(n) = -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(!&Natural::ZERO, -1); assert_eq!(!&Natural::from(123u32), -124); ``` #### type Output = Integer The resulting type after applying the `!` operator.### impl Not for Natural #### fn not(self) -> Integer Returns the bitwise negation of a `Natural`, taking it by value and returning an `Integer`. The `Natural` is bitwise-negated as if it were represented in two’s complement. $$ f(n) = -n - 1. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(!Natural::ZERO, -1); assert_eq!(!Natural::from(123u32), -124); ``` #### type Output = Integer The resulting type after applying the `!` operator.### impl Octal for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Converts a `Natural` to an octal `String`. Using the `#` format flag prepends `"0o"` to the string.
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToOctalString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_octal_string(), "0"); assert_eq!(Natural::from(123u32).to_octal_string(), "173"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_octal_string(), "16432451210000" ); assert_eq!(format!("{:07o}", Natural::from(123u32)), "0000173"); assert_eq!(format!("{:#o}", Natural::ZERO), "0o0"); assert_eq!(format!("{:#o}", Natural::from(123u32)), "0o173"); assert_eq!( format!("{:#o}", Natural::from_str("1000000000000").unwrap()), "0o16432451210000" ); assert_eq!(format!("{:#07o}", Natural::from(123u32)), "0o00173"); ``` ### impl One for Natural The constant 1. #### const ONE: Natural = _ ### impl Ord for Natural #### fn cmp(&self, other: &Natural) -> Ordering Compares two `Natural`s. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; assert!(Natural::from(123u32) > Natural::from(122u32)); assert!(Natural::from(123u32) >= Natural::from(122u32)); assert!(Natural::from(123u32) < Natural::from(124u32)); assert!(Natural::from(123u32) <= Natural::from(124u32)); ``` #### fn max(self, other: Self) -> Self where Self: Sized, Compares and returns the maximum of two values. #### fn min(self, other: Self) -> Self where Self: Sized, Compares and returns the minimum of two values. #### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>, Restricts a value to a certain interval. ### impl<'a> OverflowingFrom<&'a Natural> for i128 #### fn overflowing_from(value: &Natural) -> (i128, bool) Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for i16 #### fn overflowing_from(value: &Natural) -> (i16, bool) Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for i32 #### fn overflowing_from(value: &Natural) -> (i32, bool) Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for i64 #### fn overflowing_from(value: &Natural) -> (i64, bool) Converts a `Natural` to a `SignedLimb` (the signed type whose width is the same as a limb’s), wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here.
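The `OverflowingFrom` implementations in this section only point to external examples. As a hedged illustration (not from the original docs; it assumes `OverflowingFrom` is the trait exported from `malachite_base::num::conversion::traits` and that `PowerOf2` is available for `Natural`), the conversion to `i64` can be exercised like this:
```
use malachite_base::num::arithmetic::traits::PowerOf2;
use malachite_base::num::conversion::traits::OverflowingFrom;
use malachite_nz::natural::Natural;

fn main() {
    // 123 fits in an i64, so the value comes back unchanged and the
    // overflow flag is false.
    assert_eq!(i64::overflowing_from(&Natural::from(123u32)), (123, false));
    // 2^63 exceeds i64::MAX: if the wrapping works as described above
    // (modulo 2^W with W the limb width, then reinterpreted in two's
    // complement), the result is i64::MIN with the flag set.
    assert_eq!(
        i64::overflowing_from(&Natural::power_of_2(63)),
        (i64::MIN, true)
    );
}
```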
### impl<'a> OverflowingFrom<&'a Natural> for i8 #### fn overflowing_from(value: &Natural) -> (i8, bool) Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for isize #### fn overflowing_from(value: &Natural) -> (isize, bool) Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u128 #### fn overflowing_from(value: &Natural) -> (u128, bool) Converts a `Natural` to a value of an unsigned primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u16 #### fn overflowing_from(value: &Natural) -> (u16, bool) Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u32 #### fn overflowing_from(value: &Natural) -> (u32, bool) Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u64 #### fn overflowing_from(value: &Natural) -> (u64, bool) Converts a `Natural` to a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for u8 #### fn overflowing_from(value: &Natural) -> (u8, bool) Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> OverflowingFrom<&'a Natural> for usize #### fn overflowing_from(value: &Natural) -> (usize, bool) Converts a `Natural` to a `usize`, wrapping modulo $2^W$, where $W$ is the width of a limb. The returned boolean value indicates whether wrapping occurred. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> Parity for &'a Natural #### fn even(self) -> bool Tests whether a `Natural` is even. $f(x) = (2|x)$. $f(x) = (\exists k \in \N : x = 2k)$. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Parity, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.even(), true); assert_eq!(Natural::from(123u32).even(), false); assert_eq!(Natural::from(0x80u32).even(), true); assert_eq!(Natural::from(10u32).pow(12).even(), true); assert_eq!((Natural::from(10u32).pow(12) + Natural::ONE).even(), false); ``` #### fn odd(self) -> bool Tests whether a `Natural` is odd. $f(x) = (2\nmid x)$. $f(x) = (\exists k \in \N : x = 2k+1)$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Parity, Pow}; use malachite_base::num::basic::traits::{One, Zero}; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.odd(), false); assert_eq!(Natural::from(123u32).odd(), true); assert_eq!(Natural::from(0x80u32).odd(), false); assert_eq!(Natural::from(10u32).pow(12).odd(), false); assert_eq!((Natural::from(10u32).pow(12) + Natural::ONE).odd(), true); ``` ### impl PartialEq<Integer> for Natural #### fn eq(&self, other: &Integer) -> bool Determines whether a `Natural` is equal to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())` ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32) == Integer::from(123)); assert!(Natural::from(123u32) != Integer::from(5)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Natural> for Integer #### fn eq(&self, other: &Natural) -> bool Determines whether an `Integer` is equal to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())` ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123) == Natural::from(123u32)); assert!(Integer::from(123) != Natural::from(5u32)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Natural> for Rational #### fn eq(&self, other: &Natural) -> bool Determines whether a `Rational` is equal to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Rational::from(123) == Natural::from(123u32)); assert!(Rational::from_signeds(22, 7) != Natural::from(5u32)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Rational> for Natural #### fn eq(&self, other: &Rational) -> bool Determines whether a `Natural` is equal to a `Rational`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Natural::from(123u32) == Rational::from(123)); assert!(Natural::from(5u32) != Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f32> for Natural #### fn eq(&self, other: &f32) -> bool Determines whether a `Natural` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f64> for Natural #### fn eq(&self, other: &f64) -> bool Determines whether a `Natural` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i128> for Natural #### fn eq(&self, other: &i128) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i16> for Natural #### fn eq(&self, other: &i16) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i32> for Natural #### fn eq(&self, other: &i32) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i64> for Natural #### fn eq(&self, other: &i64) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i8> for Natural #### fn eq(&self, other: &i8) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<isize> for Natural #### fn eq(&self, other: &isize) -> bool Determines whether a `Natural` is equal to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u128> for Natural #### fn eq(&self, other: &u128) -> bool Determines whether a `Natural` is equal to a value of an unsigned primitive integer type that’s larger than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u16> for Natural #### fn eq(&self, other: &u16) -> bool Determines whether a `Natural` is equal to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u32> for Natural #### fn eq(&self, other: &u32) -> bool Determines whether a `Natural` is equal to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u64> for Natural #### fn eq(&self, other: &u64) -> bool Determines whether a `Natural` is equal to a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u8> for Natural #### fn eq(&self, other: &u8) -> bool Determines whether a `Natural` is equal to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<usize> for Natural #### fn eq(&self, other: &usize) -> bool Determines whether a `Natural` is equal to a `usize`. ##### Worst-case complexity Constant time and additional memory. See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Natural> for Natural #### fn eq(&self, other: &Natural) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<Integer> for Natural #### fn partial_cmp(&self, other: &Integer) -> Option<OrderingCompares a `Natural` to an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32) > Integer::from(122)); assert!(Natural::from(123u32) >= Integer::from(122)); assert!(Natural::from(123u32) < Integer::from(124)); assert!(Natural::from(123u32) <= Integer::from(124)); assert!(Natural::from(123u32) > Integer::from(-123)); assert!(Natural::from(123u32) >= Integer::from(-123)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Natural) -> Option<OrderingCompares an `Integer` to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where n = `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123) > Natural::from(122u32)); assert!(Integer::from(123) >= Natural::from(122u32)); assert!(Integer::from(123) < Natural::from(124u32)); assert!(Integer::from(123) <= Natural::from(124u32)); assert!(Integer::from(-123) < Natural::from(123u32)); assert!(Integer::from(-123) <= Natural::from(123u32)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Natural) -> Option<OrderingCompares a `Rational` to a `Natural`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Rational::from_signeds(22, 7) > Natural::from(3u32)); assert!(Rational::from_signeds(22, 7) < Natural::from(4u32)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. 
Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Rational) -> Option<OrderingCompares a `Natural` to a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Natural::from(3u32) < Rational::from_signeds(22, 7)); assert!(Natural::from(4u32) > Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f32) -> Option<OrderingCompares a `Natural` to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f64) -> Option<OrderingCompares a `Natural` to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i128) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. 
Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i16) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i32) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i64) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i8) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. 
Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &isize) -> Option<OrderingCompares a `Natural` to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u128) -> Option<OrderingCompares a `Natural` to a value of an unsigned primitive integer type that’s larger than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u16) -> Option<OrderingCompares a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u32) -> Option<OrderingCompares a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. 
#### fn gt(&self, other: &Rhs) -> bool

This method tests greater than (for `self` and `other`) and is used by the `>` operator.

#### fn ge(&self, other: &Rhs) -> bool

This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.

### impl PartialOrd<u64> for Natural

#### fn partial_cmp(&self, other: &u64) -> Option<Ordering>

Compares a `Natural` to a `Limb`.

##### Worst-case complexity
Constant time and additional memory.

##### Examples
See here.

#### fn lt(&self, other: &Rhs) -> bool

This method tests less than (for `self` and `other`) and is used by the `<` operator.

#### fn le(&self, other: &Rhs) -> bool

This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.

#### fn gt(&self, other: &Rhs) -> bool

This method tests greater than (for `self` and `other`) and is used by the `>` operator.

#### fn ge(&self, other: &Rhs) -> bool

This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.

### impl PartialOrd<u8> for Natural

#### fn partial_cmp(&self, other: &u8) -> Option<Ordering>

Compares a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`.

##### Worst-case complexity
Constant time and additional memory.

##### Examples
See here.

#### fn lt(&self, other: &Rhs) -> bool

This method tests less than (for `self` and `other`) and is used by the `<` operator.

#### fn le(&self, other: &Rhs) -> bool

This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.

#### fn gt(&self, other: &Rhs) -> bool

This method tests greater than (for `self` and `other`) and is used by the `>` operator.

#### fn ge(&self, other: &Rhs) -> bool

This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.

### impl PartialOrd<usize> for Natural

#### fn partial_cmp(&self, other: &usize) -> Option<Ordering>

Compares a `Natural` to a `usize`.

##### Worst-case complexity
Constant time and additional memory.

##### Examples
See here.

#### fn lt(&self, other: &Rhs) -> bool

This method tests less than (for `self` and `other`) and is used by the `<` operator.

#### fn le(&self, other: &Rhs) -> bool

This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.

#### fn gt(&self, other: &Rhs) -> bool

This method tests greater than (for `self` and `other`) and is used by the `>` operator.

#### fn ge(&self, other: &Rhs) -> bool

This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.

### impl PartialOrd<Natural> for Natural

#### fn partial_cmp(&self, other: &Natural) -> Option<Ordering>

Compares two `Natural`s. See the documentation for the `Ord` implementation.

#### fn lt(&self, other: &Rhs) -> bool

This method tests less than (for `self` and `other`) and is used by the `<` operator.

#### fn le(&self, other: &Rhs) -> bool

This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.

#### fn gt(&self, other: &Rhs) -> bool

This method tests greater than (for `self` and `other`) and is used by the `>` operator.

#### fn ge(&self, other: &Rhs) -> bool

This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.
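The mixed-type `PartialOrd` implementations above make a `Natural` directly comparable with primitive integers and floats. A minimal sketch, not taken from the linked examples, that simply exercises the impls listed in this section:

```
use malachite_nz::natural::Natural;

assert!(Natural::from(123u32) > 122u8);
assert!(Natural::from(123u32) <= 123u64);
assert!(Natural::from(123u32) > 122.5f64);
assert!(Natural::from(123u32) < 123.5f32);
```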
#### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a `Natural` and an `Integer`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Natural::from(123u32).gt_abs(&Integer::from(122))); assert!(Natural::from(123u32).ge_abs(&Integer::from(122))); assert!(Natural::from(123u32).lt_abs(&Integer::from(124))); assert!(Natural::from(123u32).le_abs(&Integer::from(124))); assert!(Natural::from(123u32).lt_abs(&Integer::from(-124))); assert!(Natural::from(123u32).le_abs(&Integer::from(-124))); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute values of an `Integer` and a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert!(Integer::from(123).gt_abs(&Natural::from(122u32))); assert!(Integer::from(123).ge_abs(&Natural::from(122u32))); assert!(Integer::from(123).lt_abs(&Natural::from(124u32))); assert!(Integer::from(123).le_abs(&Natural::from(124u32))); assert!(Integer::from(-124).gt_abs(&Natural::from(123u32))); assert!(Integer::from(-124).ge_abs(&Natural::from(123u32))); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute values of a `Rational` and a `Natural`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::natural::Natural; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Rational::from_signeds(22, 7).partial_cmp_abs(&Natural::from(3u32)), Some(Ordering::Greater) ); assert_eq!( Rational::from_signeds(-22, 7).partial_cmp_abs(&Natural::from(3u32)), Some(Ordering::Greater) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a primitive float to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a primitive float to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute value of a signed primitive integer to a `Natural`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares a value of unsigned primitive integer type to a `Natural`. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a `Natural` and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::natural::Natural; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Natural::from(3u32).partial_cmp_abs(&Rational::from_signeds(22, 7)), Some(Ordering::Less) ); assert_eq!( Natural::from(3u32).partial_cmp_abs(&Rational::from_signeds(-22, 7)), Some(Ordering::Less) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f32) -> Option<OrderingCompares a `Natural` to the absolute value of a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f64) -> Option<OrderingCompares a `Natural` to the absolute value of a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i128) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i16) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i32) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i64) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i8) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &isize) -> Option<OrderingCompares a `Natural` to the absolute value of a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u128) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. 
Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u16) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u32) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u64) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u8) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &usize) -> Option<OrderingCompares a `Natural` to an unsigned primitive integer. Since both values are non-negative, this is the same as ordinary `partial_cmp`. ##### Worst-case complexity Constant time and additional memory. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. 
Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn pow(self, exp: u64) -> Natural Raises a `Natural` to a power, taking the `Natural` by reference. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( (&Natural::from(3u32)).pow(100).to_string(), "515377520732011331036461129765621272702107522001" ); assert_eq!( (&Natural::from_str("12345678987654321").unwrap()).pow(3).to_string(), "1881676411868862234942354805142998028003108518161" ); ``` #### type Output = Natural ### impl Pow<u64> for Natural #### fn pow(self, exp: u64) -> Natural Raises a `Natural` to a power, taking the `Natural` by value. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!( Natural::from(3u32).pow(100).to_string(), "515377520732011331036461129765621272702107522001" ); assert_eq!( Natural::from_str("12345678987654321").unwrap().pow(3).to_string(), "1881676411868862234942354805142998028003108518161" ); ``` #### type Output = Natural ### impl PowAssign<u64> for Natural #### fn pow_assign(&mut self, exp: u64) Raises a `Natural` to a power in place. $x \gets x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowAssign; use malachite_nz::natural::Natural; use std::str::FromStr; let mut x = Natural::from(3u32); x.pow_assign(100); assert_eq!(x.to_string(), "515377520732011331036461129765621272702107522001"); let mut x = Natural::from_str("12345678987654321").unwrap(); x.pow_assign(3); assert_eq!(x.to_string(), "1881676411868862234942354805142998028003108518161"); ``` ### impl PowerOf2<u64> for Natural #### fn power_of_2(pow: u64) -> Natural Raises 2 to an integer power. $f(k) = 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowerOf2; use malachite_nz::natural::Natural; assert_eq!(Natural::power_of_2(0), 1); assert_eq!(Natural::power_of_2(3), 8); assert_eq!(Natural::power_of_2(100).to_string(), "1267650600228229401496703205376"); ``` ### impl<'a> PowerOf2DigitIterable<Natural> for &'a Natural #### fn power_of_2_digits(self, log_base: u64) -> NaturalPowerOf2DigitIterator<'aReturns a double-ended iterator over the base-$2^k$ digits of a `Natural`. The base-2 logarithm of the base is specified. The type of each digit is `Natural`. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward. If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead. 
##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use itertools::Itertools; use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::PowerOf2DigitIterable; use malachite_nz::natural::Natural; let n = Natural::ZERO; assert!(PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2).next().is_none()); // 107 = 1223_4 let n = Natural::from(107u32); assert_eq!( PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2).collect_vec(), vec![ Natural::from(3u32), Natural::from(2u32), Natural::from(2u32), Natural::from(1u32) ] ); let n = Natural::ZERO; assert!(PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2) .next_back() .is_none()); // 107 = 1223_4 let n = Natural::from(107u32); assert_eq!( PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2) .rev() .collect_vec(), vec![ Natural::from(1u32), Natural::from(2u32), Natural::from(2u32), Natural::from(3u32) ] ); ``` #### type PowerOf2DigitIterator = NaturalPowerOf2DigitIterator<'a### impl<'a> PowerOf2DigitIterable<u128> for &'a Natural #### fn power_of_2_digits( self, log_base: u64 ) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u128Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward. If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u128### impl<'a> PowerOf2DigitIterable<u16> for &'a Natural #### fn power_of_2_digits( self, log_base: u64 ) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u16Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward. If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u16### impl<'a> PowerOf2DigitIterable<u32> for &'a Natural #### fn power_of_2_digits( self, log_base: u64 ) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u32Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward. If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
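The primitive-digit variants behave like the `Natural`-digit iterator shown earlier, only with machine-sized digits. A minimal sketch for the `u32` case, reusing the 107 = 1223_4 example above and assuming the call pattern mirrors the `Natural`-digit one:

```
use malachite_base::num::conversion::traits::PowerOf2DigitIterable;
use malachite_nz::natural::Natural;

// 107 = 1223_4, so its base-4 digits in ascending order are 3, 2, 2, 1.
let n = Natural::from(107u32);
let digits: Vec<u32> = PowerOf2DigitIterable::<u32>::power_of_2_digits(&n, 2).collect();
assert_eq!(digits, vec![3, 2, 2, 1]);
```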
#### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u32### impl<'a> PowerOf2DigitIterable<u64> for &'a Natural #### fn power_of_2_digits( self, log_base: u64 ) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u64Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward. If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u64### impl<'a> PowerOf2DigitIterable<u8> for &'a Natural #### fn power_of_2_digits( self, log_base: u64 ) -> NaturalPowerOf2DigitPrimitiveIterator<'a, u8Returns a double-ended iterator over the base-$2^k$ digits of a `Natural`. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward. If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, u8### impl<'a> PowerOf2DigitIterable<usize> for &'a Natural #### fn power_of_2_digits( self, log_base: u64 ) -> NaturalPowerOf2DigitPrimitiveIterator<'a, usizeReturns a double-ended iterator over the base-$2^k$ digits of a `Natural`. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. The forward order is ascending, so that less significant digits appear first. There are no trailing zero digits going forward, or leading zero digits going backward. If it’s necessary to get a `Vec` of all the digits, consider using `to_power_of_2_digits_asc` or `to_power_of_2_digits_desc` instead. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type PowerOf2DigitIterator = NaturalPowerOf2DigitPrimitiveIterator<'a, usize### impl<'a> PowerOf2DigitIterator<Natural> for NaturalPowerOf2DigitIterator<'a#### fn get(&self, index: u64) -> Natural Retrieves the base-$2^k$ digits of a `Natural` by index. $f(x, k, i) = d_i$, where $0 \leq d_i < 2^k$ for all $i$ and $$ \sum_{i=0}^\infty2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `log_base`. 
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::{PowerOf2DigitIterable, PowerOf2DigitIterator}; use malachite_nz::natural::Natural; let n = Natural::ZERO; assert_eq!(PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2).get(0), 0); // 107 = 1223_4 let n = Natural::from(107u32); let digits = PowerOf2DigitIterable::<Natural>::power_of_2_digits(&n, 2); assert_eq!(digits.get(0), 3); assert_eq!(digits.get(1), 2); assert_eq!(digits.get(2), 2); assert_eq!(digits.get(3), 1); assert_eq!(digits.get(4), 0); assert_eq!(digits.get(100), 0); ``` ### impl PowerOf2Digits<Natural> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<Natural, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. The type of each digit is `Natural`. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::num::basic::traits::{Two, Zero}; use malachite_base::num::conversion::traits::PowerOf2Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_asc(&Natural::ZERO, 6) .to_debug_string(), "[]" ); assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_asc(&Natural::TWO, 6) .to_debug_string(), "[2]" ); // 123_10 = 173_8 assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_asc(&Natural::from(123u32), 3) .to_debug_string(), "[3, 7, 1]" ); ``` #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<Natural, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. The type of each digit is `Natural`. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::num::basic::traits::{Two, Zero}; use malachite_base::num::conversion::traits::PowerOf2Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_desc(&Natural::ZERO, 6) .to_debug_string(), "[]" ); assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_desc(&Natural::TWO, 6) .to_debug_string(), "[2]" ); // 123_10 = 173_8 assert_eq!( PowerOf2Digits::<Natural>::to_power_of_2_digits_desc(&Natural::from(123u32), 3) .to_debug_string(), "[1, 7, 3]" ); ``` #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = Natural>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. The type of each digit is `Natural`. 
If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n, m) = O(nm)$ $M(n, m) = O(nm)$ where $T$ is time, $M$ is additional memory, $n$ is `digits.count()`, and $m$ is `log_base`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::num::basic::traits::{One, Two, Zero}; use malachite_base::num::conversion::traits::PowerOf2Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; let digits = &[Natural::ZERO, Natural::ZERO, Natural::ZERO]; assert_eq!( Natural::from_power_of_2_digits_asc(6, digits.iter().cloned()).to_debug_string(), "Some(0)" ); let digits = &[Natural::TWO, Natural::ZERO]; assert_eq!( Natural::from_power_of_2_digits_asc(6, digits.iter().cloned()).to_debug_string(), "Some(2)" ); let digits = &[Natural::from(3u32), Natural::from(7u32), Natural::ONE]; assert_eq!( Natural::from_power_of_2_digits_asc(3, digits.iter().cloned()).to_debug_string(), "Some(123)" ); let digits = &[Natural::from(100u32)]; assert_eq!( Natural::from_power_of_2_digits_asc(3, digits.iter().cloned()).to_debug_string(), "None" ); ``` #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = Natural>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. The type of each digit is `Natural`. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n, m) = O(nm)$ $M(n, m) = O(nm)$ where $T$ is time, $M$ is additional memory, $n$ is `digits.count()`, and $m$ is `log_base`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::num::basic::traits::{One, Two, Zero}; use malachite_base::num::conversion::traits::PowerOf2Digits; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; let digits = &[Natural::ZERO, Natural::ZERO, Natural::ZERO]; assert_eq!( Natural::from_power_of_2_digits_desc(6, digits.iter().cloned()).to_debug_string(), "Some(0)" ); let digits = &[Natural::ZERO, Natural::TWO]; assert_eq!( Natural::from_power_of_2_digits_desc(6, digits.iter().cloned()).to_debug_string(), "Some(2)" ); let digits = &[Natural::ONE, Natural::from(7u32), Natural::from(3u32)]; assert_eq!( Natural::from_power_of_2_digits_desc(3, digits.iter().cloned()).to_debug_string(), "Some(123)" ); let digits = &[Natural::from(100u32)]; assert_eq!( Natural::from_power_of_2_digits_desc(3, digits.iter().cloned()).to_debug_string(), "None" ); ``` ### impl PowerOf2Digits<u128> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u128, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. 
##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u128, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u128>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u128>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<u16> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u16, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u16, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. 
Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u16>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u16>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<u32> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u32, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u32, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u32>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u32>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<u64> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u64, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u64, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. 
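As with the other primitive digit types, the linked examples are omitted from this excerpt. Below is a minimal sketch, under the assumption that the `u64` implementation mirrors the `Natural`-digit examples earlier in this section (123 in base 10 is 173 in base 8).

```
use malachite_base::num::conversion::traits::PowerOf2Digits;
use malachite_nz::natural::Natural;

// 123_10 = 173_8, so with log_base = 3 the digits are u64s in [0, 8).
let n = Natural::from(123u32);

// Ascending order: least-significant digit first.
assert_eq!(PowerOf2Digits::<u64>::to_power_of_2_digits_asc(&n, 3), vec![3, 7, 1]);

// Descending order: most-significant digit first.
assert_eq!(PowerOf2Digits::<u64>::to_power_of_2_digits_desc(&n, 3), vec![1, 7, 3]);
```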
#### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u64>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u64>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<u8> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<u8, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<u8, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u8>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. 
Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. #### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = u8>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`. ##### Panics Panics if `log_base` is zero or greater than the width of the digit type. ##### Examples See here. ### impl PowerOf2Digits<usize> for Natural #### fn to_power_of_2_digits_asc(&self, log_base: u64) -> Vec<usize, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in ascending order: least- to most-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it ends with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_{n-1} \neq 0$, and $$ \sum_{i=0}^{n-1}2^{ki}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn to_power_of_2_digits_desc(&self, log_base: u64) -> Vec<usize, GlobalReturns a `Vec` containing the base-$2^k$ digits of a `Natural` in descending order: most- to least-significant. The base-2 logarithm of the base is specified. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If the `Natural` is 0, the `Vec` is empty; otherwise, it begins with a nonzero digit. $f(x, k) = (d_i)_ {i=0}^{n-1}$, where $0 \leq d_i < 2^k$ for all $i$, $n=0$ or $d_0 \neq 0$, and $$ \sum_{i=0}^{n-1}2^{k (n-i-1)}d_i = x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `log_base` is greater than the width of the digit type, or if `log_base` is zero. ##### Examples See here. #### fn from_power_of_2_digits_asc<I>(log_base: u64, digits: I) -> Option<Natural>where I: Iterator<Item = usize>, Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in ascending order: least- to most-significant. Each digit has primitive integer type, and `log_base` must be no larger than the width of that type. If some digit is greater than $2^k$, `None` is returned. $$ f((d_i)_ {i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{ki}d_i. 
$$

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`.

##### Panics

Panics if `log_base` is zero or greater than the width of the digit type.

##### Examples

See here.

#### fn from_power_of_2_digits_desc<I>(log_base: u64, digits: I) -> Option<Natural> where I: Iterator<Item = usize>,

Converts an iterator of base-$2^k$ digits into a `Natural`. The base-2 logarithm of the base is specified. The input digits are in descending order: most- to least-significant.

Each digit has primitive integer type, and `log_base` must be no larger than the width of that type.

If some digit is greater than $2^k$, `None` is returned.

$$
f((d_i)_{i=0}^{n-1}, k) = \sum_{i=0}^{n-1}2^{k(n-i-1)}d_i.
$$

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `digits.count()`.

##### Panics

Panics if `log_base` is zero or greater than the width of the digit type.

##### Examples

See here.

### impl Primes for Natural

#### fn primes_less_than(n: &Natural) -> NaturalPrimesLessThanIterator

Returns an iterator that generates all primes less than a given value.

The iterator produced by `primes_less_than(n)` generates the same primes as the iterator produced by `primes().take_while(|&p| p < n)`, but the latter would be slower because it doesn’t know in advance how large its prime sieve should be, and might have to create larger and larger prime sieves.

##### Worst-case complexity (amortized)

$T(i) = O(\log \log i)$

$M(i) = O(1)$

where $T$ is time, $M$ is additional memory, and $i$ is the iteration index.

##### Examples

See here.

#### fn primes_less_than_or_equal_to(n: &Natural) -> NaturalPrimesLessThanIterator

Returns an iterator that generates all primes less than or equal to a given value.

The iterator produced by `primes_less_than_or_equal_to(n)` generates the same primes as the iterator produced by `primes().take_while(|&p| p <= n)`, but the latter would be slower because it doesn’t know in advance how large its prime sieve should be, and might have to create larger and larger prime sieves.

##### Worst-case complexity (amortized)

$T(i) = O(\log \log i)$

$M(i) = O(1)$

where $T$ is time, $M$ is additional memory, and $i$ is the iteration index.

##### Examples

See here.

#### fn primes() -> NaturalPrimesIterator

Returns all `Natural` primes.

##### Worst-case complexity (amortized)

$T(i) = O(\log \log i)$

$M(i) = O(1)$

where $T$ is time, $M$ is additional memory, and $i$ is the iteration index.

##### Examples

See here.

#### type I = NaturalPrimesIterator

#### type LI = NaturalPrimesLessThanIterator

### impl Primorial for Natural

#### fn primorial(n: u64) -> Natural

Computes the primorial of a `Natural`: the product of all primes less than or equal to it.

The `product_of_first_n_primes` function is similar; it computes the primorial of the $n$th prime.

$$
f(n) = n\# = \prod_{\substack{p \le n \\ p\ \text{prime}}} p.
$$

$n\# = O(e^{(1+o(1))n})$.

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

##### Examples

```
use malachite_base::num::arithmetic::traits::Primorial;
use malachite_nz::natural::Natural;

assert_eq!(Natural::primorial(0), 1);
assert_eq!(Natural::primorial(1), 1);
assert_eq!(Natural::primorial(2), 2);
assert_eq!(Natural::primorial(3), 6);
assert_eq!(Natural::primorial(4), 6);
assert_eq!(Natural::primorial(5), 30);
assert_eq!(Natural::primorial(100).to_string(), "2305567963945518424753102147331756070");
```

This is equivalent to `mpz_primorial_ui` from `mpz/primorial_ui.c`, GMP 6.2.1.

#### fn product_of_first_n_primes(n: u64) -> Natural

Computes the product of the first $n$ primes.

The `primorial` function is similar; it computes the product of all primes less than or equal to $n$.

$$
f(n) = p_n\# = \prod_{k=1}^n p_k,
$$

where $p_n$ is the $n$th prime number.

$p_n\# = O\left(\left(\frac{1}{e}k\log k\left(\frac{\log k}{e^2}k\right)^{1/\log k}\right)^k\omega(1)\right)$. This asymptotic approximation is due to <NAME>ichels.

##### Worst-case complexity

$T(n) = O(n (\log n)^2 \log\log n)$

$M(n) = O(n \log n)$

##### Examples

```
use malachite_base::num::arithmetic::traits::Primorial;
use malachite_nz::natural::Natural;

assert_eq!(Natural::product_of_first_n_primes(0), 1);
assert_eq!(Natural::product_of_first_n_primes(1), 2);
assert_eq!(Natural::product_of_first_n_primes(2), 6);
assert_eq!(Natural::product_of_first_n_primes(3), 30);
assert_eq!(Natural::product_of_first_n_primes(4), 210);
assert_eq!(Natural::product_of_first_n_primes(5), 2310);
assert_eq!(
    Natural::product_of_first_n_primes(100).to_string(),
    "4711930799906184953162487834760260422020574773409675520188634839616415335845034221205\
    28925670554468197243910409777715799180438028421831503871944494399049257903072063599053\
    8452312528339864352999310398481791730017201031090"
);
```

### impl<'a> Product<&'a Natural> for Natural

#### fn product<I>(xs: I) -> Natural where I: Iterator<Item = &'a Natural>,

Multiplies together all the `Natural`s in an iterator of `Natural` references.

$$
f((x_i)_{i=0}^{n-1}) = \prod_{i=0}^{n-1} x_i.
$$

##### Worst-case complexity

$T(n) = O(n (\log n)^2 \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `Natural::sum(xs.map(Natural::significant_bits))`.

##### Examples

```
use malachite_base::vecs::vec_from_str;
use malachite_nz::natural::Natural;
use std::iter::Product;

assert_eq!(Natural::product(vec_from_str::<Natural>("[2, 3, 5, 7]").unwrap().iter()), 210);
```

### impl Product<Natural> for Natural

#### fn product<I>(xs: I) -> Natural where I: Iterator<Item = Natural>,

Multiplies together all the `Natural`s in an iterator.

$$
f((x_i)_{i=0}^{n-1}) = \prod_{i=0}^{n-1} x_i.
$$

##### Worst-case complexity

$T(n) = O(n (\log n)^2 \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `Natural::sum(xs.map(Natural::significant_bits))`.

##### Examples

```
use malachite_base::vecs::vec_from_str;
use malachite_nz::natural::Natural;
use std::iter::Product;

assert_eq!(
    Natural::product(vec_from_str::<Natural>("[2, 3, 5, 7]").unwrap().into_iter()),
    210
);
```

### impl<'a, 'b> Rem<&'b Natural> for &'a Natural

#### fn rem(self, other: &'b Natural) -> Natural

Divides a `Natural` by another `Natural`, taking both by reference and returning just the remainder.

If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.

$$
f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$ For `Natural`s, `rem` is equivalent to `mod_op`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(&Natural::from(23u32) % &Natural::from(10u32), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( &Natural::from_str("1000000000000000000000000").unwrap() % &Natural::from_str("1234567890987").unwrap(), 530068894399u64 ); ``` #### type Output = Natural The resulting type after applying the `%` operator.### impl<'a> Rem<&'a Natural> for Natural #### fn rem(self, other: &'a Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by value and the second by reference and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ For `Natural`s, `rem` is equivalent to `mod_op`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(Natural::from(23u32) % &Natural::from(10u32), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( Natural::from_str("1000000000000000000000000").unwrap() % &Natural::from_str("1234567890987").unwrap(), 530068894399u64 ); ``` #### type Output = Natural The resulting type after applying the `%` operator.### impl<'a> Rem<Natural> for &'a Natural #### fn rem(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking the first by reference and the second by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ For `Natural`s, `rem` is equivalent to `mod_op`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. ##### Examples ``` use malachite_nz::natural::Natural; use std::str::FromStr; // 2 * 10 + 3 = 23 assert_eq!(&Natural::from(23u32) % Natural::from(10u32), 3); // 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000 assert_eq!( &Natural::from_str("1000000000000000000000000").unwrap() % Natural::from_str("1234567890987").unwrap(), 530068894399u64 ); ``` #### type Output = Natural The resulting type after applying the `%` operator.### impl Rem<Natural> for Natural #### fn rem(self, other: Natural) -> Natural Divides a `Natural` by another `Natural`, taking both by value and returning just the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$. $$ f(x, y) = x - y\left \lfloor \frac{x}{y} \right \rfloor. $$ For `Natural`s, `rem` is equivalent to `mod_op`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is zero. 
##### Examples

```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
assert_eq!(Natural::from(23u32) % Natural::from(10u32), 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
assert_eq!(
    Natural::from_str("1000000000000000000000000").unwrap() %
        Natural::from_str("1234567890987").unwrap(),
    530068894399u64
);
```

#### type Output = Natural

The resulting type after applying the `%` operator.

### impl<'a> RemAssign<&'a Natural> for Natural

#### fn rem_assign(&mut self, other: &'a Natural)

Divides a `Natural` by another `Natural`, taking the second `Natural` by reference and replacing the first by the remainder.

If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.

$$
x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$

For `Natural`s, `rem_assign` is equivalent to `mod_assign`.

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Panics

Panics if `other` is zero.

##### Examples

```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
let mut x = Natural::from(23u32);
x %= &Natural::from(10u32);
assert_eq!(x, 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
let mut x = Natural::from_str("1000000000000000000000000").unwrap();
x %= &Natural::from_str("1234567890987").unwrap();
assert_eq!(x, 530068894399u64);
```

### impl RemAssign<Natural> for Natural

#### fn rem_assign(&mut self, other: Natural)

Divides a `Natural` by another `Natural`, taking the second `Natural` by value and replacing the first by the remainder.

If the quotient were computed, the quotient and remainder would satisfy $x = qy + r$ and $0 \leq r < y$.

$$
x \gets x - y\left \lfloor \frac{x}{y} \right \rfloor.
$$

For `Natural`s, `rem_assign` is equivalent to `mod_assign`.

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Panics

Panics if `other` is zero.

##### Examples

```
use malachite_nz::natural::Natural;
use std::str::FromStr;

// 2 * 10 + 3 = 23
let mut x = Natural::from(23u32);
x %= Natural::from(10u32);
assert_eq!(x, 3);

// 810000006723 * 1234567890987 + 530068894399 = 1000000000000000000000000
let mut x = Natural::from_str("1000000000000000000000000").unwrap();
x %= Natural::from_str("1234567890987").unwrap();
assert_eq!(x, 530068894399u64);
```

### impl<'a> RemPowerOf2 for &'a Natural

#### fn rem_power_of_2(self, pow: u64) -> Natural

Divides a `Natural` by $2^k$, returning just the remainder. The `Natural` is taken by reference.

If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$.

$$
f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor.
$$

For `Natural`s, `rem_power_of_2` is equivalent to `mod_power_of_2`.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `pow`.

##### Examples

```
use malachite_base::num::arithmetic::traits::RemPowerOf2;
use malachite_nz::natural::Natural;

// 1 * 2^8 + 4 = 260
assert_eq!((&Natural::from(260u32)).rem_power_of_2(8), 4);

// 100 * 2^4 + 11 = 1611
assert_eq!((&Natural::from(1611u32)).rem_power_of_2(4), 11);
```

#### type Output = Natural

### impl RemPowerOf2 for Natural

#### fn rem_power_of_2(self, pow: u64) -> Natural

Divides a `Natural` by $2^k$, returning just the remainder.
The `Natural` is taken by value. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ f(x, k) = x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ For `Natural`s, `rem_power_of_2` is equivalent to `mod_power_of_2`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 assert_eq!(Natural::from(260u32).rem_power_of_2(8), 4); // 100 * 2^4 + 11 = 1611 assert_eq!(Natural::from(1611u32).rem_power_of_2(4), 11); ``` #### type Output = Natural ### impl RemPowerOf2Assign for Natural #### fn rem_power_of_2_assign(&mut self, pow: u64) Divides a `Natural` by $2^k$, replacing the first `Natural` by the remainder. If the quotient were computed, the quotient and remainder would satisfy $x = q2^k + r$ and $0 \leq r < 2^k$. $$ x \gets x - 2^k\left \lfloor \frac{x}{2^k} \right \rfloor. $$ For `Natural`s, `rem_power_of_2_assign` is equivalent to `mod_power_of_2_assign`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::RemPowerOf2Assign; use malachite_nz::natural::Natural; // 1 * 2^8 + 4 = 260 let mut x = Natural::from(260u32); x.rem_power_of_2_assign(8); assert_eq!(x, 4); // 100 * 2^4 + 11 = 1611 let mut x = Natural::from(1611u32); x.rem_power_of_2_assign(4); assert_eq!(x, 11); ``` ### impl RootAssignRem<u64> for Natural #### fn root_assign_rem(&mut self, exp: u64) -> Natural Replaces a `Natural` with the floor of its $n$th root, and returns the remainder (the difference between the original `Natural` and the $n$th power of the floor). $f(x, n) = x - \lfloor\sqrt[n]{x}\rfloor^n$, $x \gets \lfloor\sqrt[n]{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::RootAssignRem; use malachite_nz::natural::Natural; let mut x = Natural::from(999u16); assert_eq!(x.root_assign_rem(3), 270); assert_eq!(x, 9); let mut x = Natural::from(1000u16); assert_eq!(x.root_assign_rem(3), 0); assert_eq!(x, 10); let mut x = Natural::from(1001u16); assert_eq!(x.root_assign_rem(3), 1); assert_eq!(x, 10); let mut x = Natural::from(100000000000u64); assert_eq!(x.root_assign_rem(5), 1534195232); assert_eq!(x, 158); ``` #### type RemOutput = Natural ### impl<'a> RootRem<u64> for &'a Natural #### fn root_rem(self, exp: u64) -> (Natural, Natural) Returns the floor of the $n$th root of a `Natural`, and the remainder (the difference between the `Natural` and the $n$th power of the floor). The `Natural` is taken by reference. $f(x, n) = (\lfloor\sqrt[n]{x}\rfloor, x - \lfloor\sqrt[n]{x}\rfloor^n)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RootRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(999u16)).root_rem(3).to_debug_string(), "(9, 270)"); assert_eq!((&Natural::from(1000u16)).root_rem(3).to_debug_string(), "(10, 0)"); assert_eq!((&Natural::from(1001u16)).root_rem(3).to_debug_string(), "(10, 1)"); assert_eq!( (&Natural::from(100000000000u64)).root_rem(5).to_debug_string(), "(158, 1534195232)" ); ``` #### type RootOutput = Natural #### type RemOutput = Natural ### impl RootRem<u64> for Natural #### fn root_rem(self, exp: u64) -> (Natural, Natural) Returns the floor of the $n$th root of a `Natural`, and the remainder (the difference between the `Natural` and the $n$th power of the floor). The `Natural` is taken by value. $f(x, n) = (\lfloor\sqrt[n]{x}\rfloor, x - \lfloor\sqrt[n]{x}\rfloor^n)$. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::RootRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::from(999u16).root_rem(3).to_debug_string(), "(9, 270)"); assert_eq!(Natural::from(1000u16).root_rem(3).to_debug_string(), "(10, 0)"); assert_eq!(Natural::from(1001u16).root_rem(3).to_debug_string(), "(10, 1)"); assert_eq!( Natural::from(100000000000u64).root_rem(5).to_debug_string(), "(158, 1534195232)" ); ``` #### type RootOutput = Natural #### type RemOutput = Natural ### impl<'a, 'b> RoundToMultiple<&'b Natural> for &'a Natural #### fn round_to_multiple( self, other: &'b Natural, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of another `Natural`, according to a specified rounding mode. Both `Natural`s are taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \N$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(5u32)).round_to_multiple(&Natural::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(4u32), RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(4u32), RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(5u32), RoundingMode::Exact) .to_debug_string(), "(10, Equal)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(9, Less)" ); assert_eq!( (&Natural::from(20u32)).round_to_multiple(&Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(21, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(&Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(14u32)).round_to_multiple(&Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(16, Greater)" ); ``` #### type Output = Natural ### impl<'a> RoundToMultiple<&'a Natural> for Natural #### fn round_to_multiple( self, other: &'a Natural, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of another `Natural`, according to a specified rounding mode. The first `Natural` is taken by value and the second by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \N$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(5u32).round_to_multiple(&Natural::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(4u32), RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(4u32), RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(5u32), RoundingMode::Exact) .to_debug_string(), "(10, Equal)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(9, Less)" ); assert_eq!( Natural::from(20u32).round_to_multiple(&Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(21, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple(&Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(14u32).round_to_multiple(&Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(16, Greater)" ); ``` #### type Output = Natural ### impl<'a> RoundToMultiple<Natural> for &'a Natural #### fn round_to_multiple( self, other: Natural, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of another `Natural`, according to a specified rounding mode. The first `Natural` is taken by reference and the second by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \N$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(5u32)).round_to_multiple(Natural::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(4u32), RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(4u32), RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(5u32), RoundingMode::Exact) .to_debug_string(), "(10, Equal)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(9, Less)" ); assert_eq!( (&Natural::from(20u32)).round_to_multiple(Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(21, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple(Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(14u32)).round_to_multiple(Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(16, Greater)" ); ``` #### type Output = Natural ### impl RoundToMultiple<Natural> for Natural #### fn round_to_multiple( self, other: Natural, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of another `Natural`, according to a specified rounding mode. Both `Natural`s are taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \N$. The following two expressions are equivalent: * `x.round_to_multiple(other, RoundingMode::Exact)` * `{ assert!(x.divisible_by(other)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(5u32).round_to_multiple(Natural::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(4u32), RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(4u32), RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(5u32), RoundingMode::Exact) .to_debug_string(), "(10, Equal)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(9, Less)" ); assert_eq!( Natural::from(20u32).round_to_multiple(Natural::from(3u32), RoundingMode::Nearest) .to_debug_string(), "(21, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple(Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(14u32).round_to_multiple(Natural::from(4u32), RoundingMode::Nearest) .to_debug_string(), "(16, Greater)" ); ``` #### type Output = Natural ### impl<'a> RoundToMultipleAssign<&'a Natural> for Natural #### fn round_to_multiple_assign( &mut self, other: &'a Natural, rm: RoundingMode ) -> Ordering Rounds a `Natural` to a multiple of another `Natural` in place, according to a specified rounding mode. The `Natural` on the right-hand side is taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_assign(other, RoundingMode::Exact);` * `assert!(x.divisible_by(other));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut x = Natural::from(5u32); assert_eq!(x.round_to_multiple_assign(&Natural::ZERO, RoundingMode::Down), Ordering::Less); assert_eq!(x, 0); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(4u32), RoundingMode::Down), Ordering::Less ); assert_eq!(x, 8); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(4u32), RoundingMode::Up), Ordering::Greater ); assert_eq!(x, 12); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(5u32), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, 10); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(3u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, 9); let mut x = Natural::from(20u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(3u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, 21); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(4u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, 8); let mut x = Natural::from(14u32); assert_eq!( x.round_to_multiple_assign(&Natural::from(4u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, 16); ``` ### impl RoundToMultipleAssign<Natural> for Natural #### fn round_to_multiple_assign( &mut self, other: Natural, rm: RoundingMode ) -> Ordering Rounds a `Natural` to a multiple of another `Natural` in place, according to a specified rounding mode. The `Natural` on the right-hand side is taken by value. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_assign(other, RoundingMode::Exact);` * `assert!(x.divisible_by(other));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
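Before the library's own examples below, a brief hedged sketch of using the returned `Ordering` to detect whether the in-place rounding actually changed the value (values are illustrative):

```rust
use malachite_base::num::arithmetic::traits::RoundToMultipleAssign;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

let mut x = Natural::from(10u32);
// Ordering::Equal would have meant x was already a multiple of 4.
let o = x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Down);
assert_eq!(o, Ordering::Less);
assert_eq!(x, 8u32);
```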
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut x = Natural::from(5u32); assert_eq!(x.round_to_multiple_assign(Natural::ZERO, RoundingMode::Down), Ordering::Less); assert_eq!(x, 0); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Down), Ordering::Less ); assert_eq!(x, 8); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Up), Ordering::Greater ); assert_eq!(x, 12); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(5u32), RoundingMode::Exact), Ordering::Equal ); assert_eq!(x, 10); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(3u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, 9); let mut x = Natural::from(20u32); assert_eq!( x.round_to_multiple_assign(Natural::from(3u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, 21); let mut x = Natural::from(10u32); assert_eq!( x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x, 8); let mut x = Natural::from(14u32); assert_eq!( x.round_to_multiple_assign(Natural::from(4u32), RoundingMode::Nearest), Ordering::Greater ); assert_eq!(x, 16); ``` ### impl<'a> RoundToMultipleOfPowerOf2<u64> for &'a Natural #### fn round_to_multiple_of_power_of_2( self, pow: u64, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of $2^k$ according to a specified rounding mode. The `Natural` is taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2(pow, RoundingMode::Exact)` * `{ assert!(x.divisible_by_power_of_2(pow)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Floor) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Ceiling) .to_debug_string(), "(12, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( (&Natural::from(10u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( (&Natural::from(12u32)).round_to_multiple_of_power_of_2(2, RoundingMode::Exact) .to_debug_string(), "(12, Equal)" ); ``` #### type Output = Natural ### impl RoundToMultipleOfPowerOf2<u64> for Natural #### fn round_to_multiple_of_power_of_2( self, pow: u64, rm: RoundingMode ) -> (Natural, Ordering) Rounds a `Natural` to a multiple of $2^k$ according to a specified rounding mode. The `Natural` is taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2(pow, RoundingMode::Exact)` * `{ assert!(x.divisible_by_power_of_2(pow)); x }` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. 
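Before the library's own examples below, a hedged sketch of how `Floor` rounding to a multiple of $2^k$ relates to bit shifts; it assumes the `Shr`/`Shl` operators on `Natural`, which the crate also provides:

```rust
use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;

let n = Natural::from(1000u32);
let (rounded, _) = (&n).round_to_multiple_of_power_of_2(4, RoundingMode::Floor);
// Flooring to a multiple of 2^4 clears the low 4 bits, i.e. shift down and back up.
assert_eq!(rounded, (&n >> 4u64) << 4u64);
assert_eq!(rounded, 992u32);
```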
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Floor) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Ceiling) .to_debug_string(), "(12, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Down) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Up) .to_debug_string(), "(12, Greater)" ); assert_eq!( Natural::from(10u32).round_to_multiple_of_power_of_2(2, RoundingMode::Nearest) .to_debug_string(), "(8, Less)" ); assert_eq!( Natural::from(12u32).round_to_multiple_of_power_of_2(2, RoundingMode::Exact) .to_debug_string(), "(12, Equal)" ); ``` #### type Output = Natural ### impl RoundToMultipleOfPowerOf2Assign<u64> for Natural #### fn round_to_multiple_of_power_of_2_assign( &mut self, pow: u64, rm: RoundingMode ) -> Ordering Rounds a `Natural` to a multiple of $2^k$ in place, according to a specified rounding mode. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultipleOfPowerOf2` documentation for details. The following two expressions are equivalent: * `x.round_to_multiple_of_power_of_2_assign(pow, RoundingMode::Exact);` * `assert!(x.divisible_by_power_of_2(pow));` but the latter should be used as it is clearer and more efficient. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2Assign; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; use std::cmp::Ordering; let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Floor), Ordering::Less ); assert_eq!(n, 8); let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(n, 12); let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Down), Ordering::Less ); assert_eq!(n, 8); let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Up), Ordering::Greater ); assert_eq!(n, 12); let mut n = Natural::from(10u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Nearest), Ordering::Less ); assert_eq!(n, 8); let mut n = Natural::from(12u32); assert_eq!( n.round_to_multiple_of_power_of_2_assign(2, RoundingMode::Exact), Ordering::Equal ); assert_eq!(n, 12); ``` ### impl<'a> RoundingFrom<&'a Natural> for f32 #### fn rounding_from(value: &'a Natural, rm: RoundingMode) -> (f32, Ordering) Converts a `Natural` to a primitive float according to a specified `RoundingMode`. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. * If the rounding mode is `Floor` or `Down`, the largest float less than or equal to the `Natural` is returned. 
If the `Natural` is greater than the maximum finite float, then the maximum finite float is returned. * If the rounding mode is `Ceiling` or `Up`, the smallest float greater than or equal to the `Natural` is returned. If the `Natural` is greater than the maximum finite float, then positive infinity is returned. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Natural` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Natural` is greater than the maximum finite float, then the maximum finite float is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Natural> for f64 #### fn rounding_from(value: &'a Natural, rm: RoundingMode) -> (f64, Ordering) Converts a `Natural` to a primitive float according to a specified `RoundingMode`. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. * If the rounding mode is `Floor` or `Down`, the largest float less than or equal to the `Natural` is returned. If the `Natural` is greater than the maximum finite float, then the maximum finite float is returned. * If the rounding mode is `Ceiling` or `Up`, the smallest float greater than or equal to the `Natural` is returned. If the `Natural` is greater than the maximum finite float, then positive infinity is returned. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Natural` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Natural` is greater than the maximum finite float, then the maximum finite float is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Rational> for Natural #### fn rounding_from(x: &Rational, rm: RoundingMode) -> (Natural, Ordering) Converts a `Rational` to a `Natural`, using a specified `RoundingMode` and taking the `Rational` by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. If the `Rational` is negative, then it will be rounded to zero when the `RoundingMode` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, or if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`. 
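The `Natural`-to-float conversions described above only link to external examples; the following is a minimal hedged sketch of that behavior (the values are chosen so the assertions are unambiguous):

```rust
use malachite_base::num::conversion::traits::RoundingFrom;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// 1000 is exactly representable as an f32, so Exact succeeds.
assert_eq!(
    f32::rounding_from(&Natural::from(1000u32), RoundingMode::Exact),
    (1000.0, Ordering::Equal)
);

// 2^32 - 1 needs more precision than an f32 mantissa has; Floor picks the
// largest float below it and reports Less.
let n = Natural::from(u32::MAX);
let (f, o) = f32::rounding_from(&n, RoundingMode::Floor);
assert_eq!(o, Ordering::Less);
assert!(f < 4294967296.0);
```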
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Natural::rounding_from(&Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Down).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Ceiling).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Nearest).to_debug_string(), "(0, Greater)" ); ``` ### impl RoundingFrom<Rational> for Natural #### fn rounding_from(x: Rational, rm: RoundingMode) -> (Natural, Ordering) Converts a `Rational` to a `Natural`, using a specified `RoundingMode` and taking the `Rational` by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. If the `Rational` is negative, then it will be rounded to zero when the `RoundingMode` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, or if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`. 
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Natural::rounding_from(Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Down).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Ceiling).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Nearest).to_debug_string(), "(0, Greater)" ); ``` ### impl RoundingFrom<f32> for Natural #### fn rounding_from(value: f32, rm: RoundingMode) -> (Natural, Ordering) Converts a floating-point value to a `Natural`, using the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. The floating-point value cannot be NaN or infinite, and it cannot round to a negative integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Panics Panics if `value` is NaN or infinite, if it would round to a negative integer, or if the rounding mode is `Exact` and `value` is not an integer. ##### Examples See here. ### impl RoundingFrom<f64> for Natural #### fn rounding_from(value: f64, rm: RoundingMode) -> (Natural, Ordering) Converts a floating-point value to a `Natural`, using the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. The floating-point value cannot be NaN or infinite, and it cannot round to a negative integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Panics Panics if `value` is NaN or infinite, if it would round to a negative integer, or if the rounding mode is `Exact` and `value` is not an integer. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Integer> for Natural #### fn saturating_from(value: &'a Integer) -> Natural Converts an `Integer` to a `Natural`, taking the `Integer` by reference. If the `Integer` is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory.
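The float-to-`Natural` conversions documented just above likewise only link to external examples; here is a minimal hedged sketch, with rounding modes chosen to avoid tie-breaking questions:

```rust
use malachite_base::num::conversion::traits::RoundingFrom;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// 2.5 lies between 2 and 3: Floor keeps 2, Ceiling bumps to 3.
assert_eq!(
    Natural::rounding_from(2.5f32, RoundingMode::Floor),
    (Natural::from(2u32), Ordering::Less)
);
assert_eq!(
    Natural::rounding_from(2.5f32, RoundingMode::Ceiling),
    (Natural::from(3u32), Ordering::Greater)
);
// An integral float converts exactly.
assert_eq!(
    Natural::rounding_from(4.0f64, RoundingMode::Exact),
    (Natural::from(4u32), Ordering::Equal)
);
```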
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SaturatingFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::saturating_from(&Integer::from(123)), 123); assert_eq!(Natural::saturating_from(&Integer::from(-123)), 0); assert_eq!(Natural::saturating_from(&Integer::from(10u32).pow(12)), 1000000000000u64); assert_eq!(Natural::saturating_from(&-Integer::from(10u32).pow(12)), 0); ``` ### impl<'a> SaturatingFrom<&'a Natural> for i128 #### fn saturating_from(value: &Natural) -> i128 Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for i16 #### fn saturating_from(value: &Natural) -> i16 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for i32 #### fn saturating_from(value: &Natural) -> i32 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for i64 #### fn saturating_from(value: &Natural) -> i64 Converts a `Natural` to a `SignedLimb` (the signed type whose width is the same as a limb’s). If the `Natural` is too large to fit in a `SignedLimb`, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for i8 #### fn saturating_from(value: &Natural) -> i8 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for isize #### fn saturating_from(value: &Natural) -> isize Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u128 #### fn saturating_from(value: &Natural) -> u128 Converts a `Natural` to a value of an unsigned primitive integer type that’s larger than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u16 #### fn saturating_from(value: &Natural) -> u16 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned.
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u32 #### fn saturating_from(value: &Natural) -> u32 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u64 #### fn saturating_from(value: &Natural) -> u64 Converts a `Natural` to a `Limb`. If the `Natural` is too large to fit in a `Limb`, the maximum representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for u8 #### fn saturating_from(value: &Natural) -> u8 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`. If the `Natural` is too large to fit in the output type, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> SaturatingFrom<&'a Natural> for usize #### fn saturating_from(value: &Natural) -> usize Converts a `Natural` to a `usize`. If the `Natural` is too large to fit in a `usize`, the largest representable value is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<Integer> for Natural #### fn saturating_from(value: Integer) -> Natural Converts an `Integer` to a `Natural`, taking the `Integer` by value. If the `Integer` is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SaturatingFrom; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::saturating_from(Integer::from(123)), 123); assert_eq!(Natural::saturating_from(Integer::from(-123)), 0); assert_eq!(Natural::saturating_from(Integer::from(10u32).pow(12)), 1000000000000u64); assert_eq!(Natural::saturating_from(-Integer::from(10u32).pow(12)), 0); ``` ### impl SaturatingFrom<i128> for Natural #### fn saturating_from(i: i128) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<i16> for Natural #### fn saturating_from(i: i16) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<i32> for Natural #### fn saturating_from(i: i32) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<i64> for Natural #### fn saturating_from(i: i64) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<i8> for Natural #### fn saturating_from(i: i8) -> Natural Converts a signed primitive integer to a `Natural`.
If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl SaturatingFrom<isize> for Natural #### fn saturating_from(i: isize) -> Natural Converts a signed primitive integer to a `Natural`. If the integer is negative, 0 is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a, 'b> SaturatingSub<&'a Natural> for &'b Natural #### fn saturating_sub(self, other: &'a Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking both by reference and returning 0 if the result is negative. $$ f(x, y) = \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSub}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).saturating_sub(&Natural::from(123u32)), 0); assert_eq!((&Natural::from(123u32)).saturating_sub(&Natural::ZERO), 123); assert_eq!((&Natural::from(456u32)).saturating_sub(&Natural::from(123u32)), 333); assert_eq!( (&(Natural::from(10u32).pow(12) * Natural::from(3u32))) .saturating_sub(&Natural::from(10u32).pow(12)), 2000000000000u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSub<&'a Natural> for Natural #### fn saturating_sub(self, other: &'a Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking the first by value and the second by reference and returning 0 if the result is negative. $$ f(x, y) = \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSub}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.saturating_sub(&Natural::from(123u32)), 0); assert_eq!(Natural::from(123u32).saturating_sub(&Natural::ZERO), 123); assert_eq!(Natural::from(456u32).saturating_sub(&Natural::from(123u32)), 333); assert_eq!( (Natural::from(10u32).pow(12) * Natural::from(3u32)) .saturating_sub(&Natural::from(10u32).pow(12)), 2000000000000u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSub<Natural> for &'a Natural #### fn saturating_sub(self, other: Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking the first by reference and the second by value and returning 0 if the result is negative. $$ f(x, y) = \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSub}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).saturating_sub(Natural::from(123u32)), 0); assert_eq!((&Natural::from(123u32)).saturating_sub(Natural::ZERO), 123); assert_eq!((&Natural::from(456u32)).saturating_sub(Natural::from(123u32)), 333); assert_eq!( (&(Natural::from(10u32).pow(12) * Natural::from(3u32))) .saturating_sub(Natural::from(10u32).pow(12)), 2000000000000u64 ); ``` #### type Output = Natural ### impl SaturatingSub<Natural> for Natural #### fn saturating_sub(self, other: Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking both by value and returning 0 if the result is negative.
$$ f(x, y) = \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSub}; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.saturating_sub(Natural::from(123u32)), 0); assert_eq!(Natural::from(123u32).saturating_sub(Natural::ZERO), 123); assert_eq!(Natural::from(456u32).saturating_sub(Natural::from(123u32)), 333); assert_eq!( (Natural::from(10u32).pow(12) * Natural::from(3u32)) .saturating_sub(Natural::from(10u32).pow(12)), 2000000000000u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSubAssign<&'a Natural> for Natural #### fn saturating_sub_assign(&mut self, other: &'a Natural) Subtracts a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference and setting the left-hand side to 0 if the result is negative. $$ x \gets \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SaturatingSubAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::from(123u32); x.saturating_sub_assign(&Natural::from(123u32)); assert_eq!(x, 0); let mut x = Natural::from(123u32); x.saturating_sub_assign(&Natural::ZERO); assert_eq!(x, 123); let mut x = Natural::from(456u32); x.saturating_sub_assign(&Natural::from(123u32)); assert_eq!(x, 333); let mut x = Natural::from(123u32); x.saturating_sub_assign(&Natural::from(456u32)); assert_eq!(x, 0); ``` ### impl SaturatingSubAssign<Natural> for Natural #### fn saturating_sub_assign(&mut self, other: Natural) Subtracts a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value and setting the left-hand side to 0 if the result is negative. $$ x \gets \max(x - y, 0). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SaturatingSubAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::from(123u32); x.saturating_sub_assign(Natural::from(123u32)); assert_eq!(x, 0); let mut x = Natural::from(123u32); x.saturating_sub_assign(Natural::ZERO); assert_eq!(x, 123); let mut x = Natural::from(456u32); x.saturating_sub_assign(Natural::from(123u32)); assert_eq!(x, 333); let mut x = Natural::from(123u32); x.saturating_sub_assign(Natural::from(456u32)); assert_eq!(x, 0); ``` ### impl<'a> SaturatingSubMul<&'a Natural, Natural> for Natural #### fn saturating_sub_mul(self, y: &'a Natural, z: Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first and third by value and the second by reference and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`.
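Before the library's own examples below, a hedged sketch of how `saturating_sub_mul` relates to `saturating_sub`; it assumes `Mul` on `Natural` references, which the crate provides:

```rust
use malachite_base::num::arithmetic::traits::{SaturatingSub, SaturatingSubMul};
use malachite_nz::natural::Natural;

let x = Natural::from(10u32);
let y = Natural::from(3u32);
let z = Natural::from(4u32);
// The fused form subtracts the product y * z in one call, saturating at zero.
assert_eq!(
    (&x).saturating_sub_mul(&y, &z),
    (&x).saturating_sub(&(&y * &z))
);
assert_eq!((&x).saturating_sub_mul(&y, &z), 0u32);
```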
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).saturating_sub_mul(&Natural::from(3u32), Natural::from(4u32)), 8 ); assert_eq!( Natural::from(10u32).saturating_sub_mul(&Natural::from(3u32), Natural::from(4u32)), 0 ); assert_eq!( Natural::from(10u32).pow(12) .saturating_sub_mul(&Natural::from(0x10000u32), Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a, 'b, 'c> SaturatingSubMul<&'b Natural, &'c Natural> for &'a Natural #### fn saturating_sub_mul(self, y: &'b Natural, z: &'c Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking all three by reference and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( (&Natural::from(20u32)).saturating_sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 8 ); assert_eq!( (&Natural::from(10u32)).saturating_sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 0 ); assert_eq!( (&Natural::from(10u32).pow(12)) .saturating_sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a, 'b> SaturatingSubMul<&'a Natural, &'b Natural> for Natural #### fn saturating_sub_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first by value and the second and third by reference and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).saturating_sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 8 ); assert_eq!( Natural::from(10u32).saturating_sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 0 ); assert_eq!( Natural::from(10u32).pow(12) .saturating_sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSubMul<Natural, &'a Natural> for Natural #### fn saturating_sub_mul(self, y: Natural, z: &'a Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first two by value and the third by reference and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).saturating_sub_mul(Natural::from(3u32), &Natural::from(4u32)), 8 ); assert_eq!( Natural::from(10u32).saturating_sub_mul(Natural::from(3u32), &Natural::from(4u32)), 0 ); assert_eq!( Natural::from(10u32).pow(12) .saturating_sub_mul(Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl SaturatingSubMul<Natural, Natural> for Natural #### fn saturating_sub_mul(self, y: Natural, z: Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking all three by value and returning 0 if the result is negative. $$ f(x, y, z) = \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMul}; use malachite_nz::natural::Natural; assert_eq!( Natural::from(20u32).saturating_sub_mul(Natural::from(3u32), Natural::from(4u32)), 8 ); assert_eq!( Natural::from(10u32).saturating_sub_mul(Natural::from(3u32), Natural::from(4u32)), 0 ); assert_eq!( Natural::from(10u32).pow(12) .saturating_sub_mul(Natural::from(0x10000u32), Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a> SaturatingSubMulAssign<&'a Natural, Natural> for Natural #### fn saturating_sub_mul_assign(&mut self, y: &'a Natural, z: Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking the first `Natural` on the right-hand side by reference and the second by value and replacing the left-hand side `Natural` with 0 if the result is negative. $$ x \gets \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.saturating_sub_mul_assign(&Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32); x.saturating_sub_mul_assign(&Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 0); let mut x = Natural::from(10u32).pow(12); x.saturating_sub_mul_assign(&Natural::from(0x10000u32), Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a, 'b> SaturatingSubMulAssign<&'a Natural, &'b Natural> for Natural #### fn saturating_sub_mul_assign(&mut self, y: &'a Natural, z: &'b Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking both `Natural`s on the right-hand side by reference and replacing the left-hand side `Natural` with 0 if the result is negative. $$ x \gets \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.saturating_sub_mul_assign(&Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32); x.saturating_sub_mul_assign(&Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 0); let mut x = Natural::from(10u32).pow(12); x.saturating_sub_mul_assign(&Natural::from(0x10000u32), &Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a> SaturatingSubMulAssign<Natural, &'a Natural> for Natural #### fn saturating_sub_mul_assign(&mut self, y: Natural, z: &'a Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking the first `Natural` on the right-hand side by value and the second by reference and replacing the left-hand side `Natural` with 0 if the result is negative. $$ x \gets \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.saturating_sub_mul_assign(Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32); x.saturating_sub_mul_assign(Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 0); let mut x = Natural::from(10u32).pow(12); x.saturating_sub_mul_assign(Natural::from(0x10000u32), &Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl SaturatingSubMulAssign<Natural, Natural> for Natural #### fn saturating_sub_mul_assign(&mut self, y: Natural, z: Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking both `Natural`s on the right-hand side by value and replacing the left-hand side `Natural` with 0 if the result is negative. $$ x \gets \max(x - yz, 0). $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SaturatingSubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.saturating_sub_mul_assign(Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32); x.saturating_sub_mul_assign(Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 0); let mut x = Natural::from(10u32).pow(12); x.saturating_sub_mul_assign(Natural::from(0x10000u32), Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a> SciMantissaAndExponent<f32, u64, Natural> for &'a Natural #### fn sci_mantissa_and_exponent(self) -> (f32, u64) Returns a `Natural`’s scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor). 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn from_sci_mantissa_and_exponent( sci_mantissa: f32, sci_exponent: u64 ) -> Option<Natural> Constructs a `Natural` from its scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. Some combinations of mantissas and exponents do not specify a `Natural`, in which case the resulting value is rounded to a `Natural` using the `Nearest` rounding mode. To specify other rounding modes, use `from_sci_mantissa_and_exponent_round`. $$ f(x) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. ##### Examples See here. #### fn sci_mantissa(self) -> M Extracts the scientific mantissa from a number. #### fn sci_exponent(self) -> E Extracts the scientific exponent from a number. ### impl<'a> SciMantissaAndExponent<f64, u64, Natural> for &'a Natural #### fn sci_mantissa_and_exponent(self) -> (f64, u64) Returns a `Natural`’s scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor). $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn from_sci_mantissa_and_exponent( sci_mantissa: f64, sci_exponent: u64 ) -> Option<Natural> Constructs a `Natural` from its scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. Some combinations of mantissas and exponents do not specify a `Natural`, in which case the resulting value is rounded to a `Natural` using the `Nearest` rounding mode. To specify other rounding modes, use `from_sci_mantissa_and_exponent_round`. $$ f(x) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. ##### Examples See here. #### fn sci_mantissa(self) -> M Extracts the scientific mantissa from a number. #### fn sci_exponent(self) -> E Extracts the scientific exponent from a number. ### impl<'a> Shl<i128> for &'a Natural #### fn shl(self, bits: i128) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here.
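Since the examples for the shift impls here are only external links, a minimal hedged sketch of shifting a `Natural` by a signed (`i128`) amount:

```rust
use malachite_nz::natural::Natural;

// A non-negative shift multiplies by a power of 2.
assert_eq!(&Natural::from(5u32) << 3i128, 40u32);
// A negative shift divides by a power of 2 and takes the floor.
assert_eq!(&Natural::from(5u32) << -1i128, 2u32);
assert_eq!(Natural::from(5u32) << 0i128, 5u32);
```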
#### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i128> for Natural #### fn shl(self, bits: i128) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<i16> for &'a Natural #### fn shl(self, bits: i16) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i16> for Natural #### fn shl(self, bits: i16) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<i32> for &'a Natural #### fn shl(self, bits: i32) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i32> for Natural #### fn shl(self, bits: i32) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<i64> for &'a Natural #### fn shl(self, bits: i64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i64> for Natural #### fn shl(self, bits: i64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. 
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<i8> for &'a Natural #### fn shl(self, bits: i8) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<i8> for Natural #### fn shl(self, bits: i8) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<isize> for &'a Natural #### fn shl(self, bits: isize) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<isize> for Natural #### fn shl(self, bits: isize) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u128> for &'a Natural #### fn shl(self, bits: u128) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u128> for Natural #### fn shl(self, bits: u128) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u16> for &'a Natural #### fn shl(self, bits: u16) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. 
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u16> for Natural #### fn shl(self, bits: u16) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u32> for &'a Natural #### fn shl(self, bits: u32) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u32> for Natural #### fn shl(self, bits: u32) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u64> for &'a Natural #### fn shl(self, bits: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u64> for Natural #### fn shl(self, bits: u64) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<u8> for &'a Natural #### fn shl(self, bits: u8) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<u8> for Natural #### fn shl(self, bits: u8) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl<'a> Shl<usize> for &'a Natural #### fn shl(self, bits: usize) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by reference. $f(x, k) = x2^k$. 
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl Shl<usize> for Natural #### fn shl(self, bits: usize) -> Natural Left-shifts a `Natural` (multiplies it by a power of 2), taking it by value. $f(x, k) = x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `<<` operator.### impl ShlAssign<i128> for Natural #### fn shl_assign(&mut self, bits: i128) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<i16> for Natural #### fn shl_assign(&mut self, bits: i16) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<i32> for Natural #### fn shl_assign(&mut self, bits: i32) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<i64> for Natural #### fn shl_assign(&mut self, bits: i64) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<i8> for Natural #### fn shl_assign(&mut self, bits: i8) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<isize> for Natural #### fn shl_assign(&mut self, bits: isize) Left-shifts a `Natural` (multiplies it by a power of 2 or divides it by a power of 2 and takes the floor), in place. $$ x \gets \lfloor x2^k \rfloor. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShlAssign<u128> for Natural #### fn shl_assign(&mut self, bits: u128) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. 
##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u16> for Natural #### fn shl_assign(&mut self, bits: u16) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u32> for Natural #### fn shl_assign(&mut self, bits: u32) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u64> for Natural #### fn shl_assign(&mut self, bits: u64) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u8> for Natural #### fn shl_assign(&mut self, bits: u8) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<usize> for Natural #### fn shl_assign(&mut self, bits: usize) Left-shifts a `Natural` (multiplies it by a power of 2), in place. $x \gets x2^k$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> ShlRound<i128> for &'a Natural #### fn shl_round(self, bits: i128, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `<<`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i128> for Natural #### fn shl_round(self, bits: i128, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShlRound<i16> for &'a Natural #### fn shl_round(self, bits: i16, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. 
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i16> for Natural #### fn shl_round(self, bits: i16, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShlRound<i32> for &'a Natural #### fn shl_round(self, bits: i32, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. 
If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i32> for Natural #### fn shl_round(self, bits: i32, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. 
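The examples linked above are not reproduced on this page, so here is a small illustrative sketch of the `ShlRound` behavior just described. The import paths (`malachite_nz::natural::Natural`, `malachite_base::num::arithmetic::traits::ShlRound`, `malachite_base::rounding_modes::RoundingMode`) are assumptions about the usual malachite crate layout, not taken from this page:

```rust
use std::cmp::Ordering;

use malachite_base::num::arithmetic::traits::ShlRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;

fn main() {
    let x = Natural::from(7u32);

    // Non-negative shift: the result is exact, so the Ordering is Equal.
    let (shifted, o) = (&x).shl_round(3i32, RoundingMode::Exact);
    assert_eq!(shifted, Natural::from(56u32)); // 7 * 2^3
    assert_eq!(o, Ordering::Equal);

    // Negative shift: 7 / 2^2 = 1.75, so rounding is required.
    let (floor, o_floor) = (&x).shl_round(-2i32, RoundingMode::Floor);
    assert_eq!(floor, Natural::from(1u32)); // floor(1.75)
    assert_eq!(o_floor, Ordering::Less);    // 1 < 1.75

    let (ceil, o_ceil) = (&x).shl_round(-2i32, RoundingMode::Ceiling);
    assert_eq!(ceil, Natural::from(2u32));  // ceil(1.75)
    assert_eq!(o_ceil, Ordering::Greater);  // 2 > 1.75

    // Nearest rounds 1.75 up to 2; the by-value form consumes `x`.
    let (nearest, _) = x.shl_round(-2i32, RoundingMode::Nearest);
    assert_eq!(nearest, Natural::from(2u32));
}
```

With `RoundingMode::Exact` and a negative shift, the call panics unless `self` is divisible by the corresponding power of 2, as stated in the Panics sections.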
#### type Output = Natural ### impl<'a> ShlRound<i64> for &'a Natural #### fn shl_round(self, bits: i64, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i64> for Natural #### fn shl_round(self, bits: i64, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. 
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShlRound<i8> for &'a Natural #### fn shl_round(self, bits: i8, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<i8> for Natural #### fn shl_round(self, bits: i8, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. 
Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShlRound<isize> for &'a Natural #### fn shl_round(self, bits: isize, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRound<isize> for Natural #### fn shl_round(self, bits: isize, rm: RoundingMode) -> (Natural, Ordering) Left-shifts a `Natural` (multiplies or divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is non-negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. Let $q = x2^k$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. #### type Output = Natural ### impl ShlRoundAssign<i128> for Natural #### fn shl_round_assign(&mut self, bits: i128, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<i16> for Natural #### fn shl_round_assign(&mut self, bits: i16, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. 
##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<i32> for Natural #### fn shl_round_assign(&mut self, bits: i32, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<i64> for Natural #### fn shl_round_assign(&mut self, bits: i64, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<i8> for Natural #### fn shl_round_assign(&mut self, bits: i8, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl ShlRoundAssign<isize> for Natural #### fn shl_round_assign(&mut self, bits: isize, rm: RoundingMode) -> Ordering Left-shifts a `Natural` (multiplies or divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. 
Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `bits > 0 || self.divisible_by_power_of_2(bits)`. Rounding might only be necessary if `bits` is negative. See the `ShlRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is negative and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^{-k}$. ##### Examples See here. ### impl<'a> Shr<i128> for &'a Natural #### fn shr(self, bits: i128) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i128> for Natural #### fn shr(self, bits: i128) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<i16> for &'a Natural #### fn shr(self, bits: i16) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i16> for Natural #### fn shr(self, bits: i16) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<i32> for &'a Natural #### fn shr(self, bits: i32) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i32> for Natural #### fn shr(self, bits: i32) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<i64> for &'a Natural #### fn shr(self, bits: i64) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i64> for Natural #### fn shr(self, bits: i64) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<i8> for &'a Natural #### fn shr(self, bits: i8) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<i8> for Natural #### fn shr(self, bits: i8) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<isize> for &'a Natural #### fn shr(self, bits: isize) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<isize> for Natural #### fn shr(self, bits: isize) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
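To complement the linked examples, the following sketch exercises the signed right-shift operators described above; the `malachite_nz::natural::Natural` import path is an assumption about the crate layout rather than something stated here:

```rust
use malachite_nz::natural::Natural;

fn main() {
    let x = Natural::from(100u32);

    // Positive shift amount: floor division by 2^k.
    assert_eq!(&x >> 3i64, Natural::from(12u32)); // floor(100 / 8)

    // Negative shift amount: multiplication by 2^k.
    assert_eq!(&x >> -3i64, Natural::from(800u32)); // 100 * 8

    // Shifting by value consumes the operand.
    assert_eq!(x >> 2i8, Natural::from(25u32)); // 100 / 4
}
```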
#### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u128> for &'a Natural #### fn shr(self, bits: u128) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u128> for Natural #### fn shr(self, bits: u128) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u16> for &'a Natural #### fn shr(self, bits: u16) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u16> for Natural #### fn shr(self, bits: u16) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u32> for &'a Natural #### fn shr(self, bits: u32) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u32> for Natural #### fn shr(self, bits: u32) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u64> for &'a Natural #### fn shr(self, bits: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
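A minimal sketch contrasting the by-reference and by-value forms of the unsigned right shift documented above (crate path assumed, as before): the by-reference operator leaves the operand usable and produces a new `Natural`, while the by-value operator consumes it, consistent with the $M(n) = O(1)$ additional-memory bound given for the by-value impls:

```rust
use malachite_nz::natural::Natural;

fn main() {
    let x = Natural::from(1000u32);

    // By reference: `x` is still available afterwards.
    let quarter = &x >> 2u32;
    assert_eq!(quarter, Natural::from(250u32));
    assert_eq!(x, Natural::from(1000u32));

    // By value: `x` is consumed and its storage can be reused.
    let eighth = x >> 3u64;
    assert_eq!(eighth, Natural::from(125u32));
}
```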
#### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u64> for Natural #### fn shr(self, bits: u64) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<u8> for &'a Natural #### fn shr(self, bits: u8) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<u8> for Natural #### fn shr(self, bits: u8) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl<'a> Shr<usize> for &'a Natural #### fn shr(self, bits: usize) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by reference. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl Shr<usize> for Natural #### fn shr(self, bits: usize) -> Natural Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), taking it by value. $$ f(x, k) = \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. #### type Output = Natural The resulting type after applying the `>>` operator.### impl ShrAssign<i128> for Natural #### fn shr_assign(&mut self, bits: i128) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i16> for Natural #### fn shr_assign(&mut self, bits: i16) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. 
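The in-place operator behaves the same way; a short usage sketch of `>>=` with both unsigned and signed shift amounts (again assuming the usual malachite import path):

```rust
use malachite_nz::natural::Natural;

fn main() {
    let mut x = Natural::from(96u32);

    x >>= 5u8; // floor(96 / 32)
    assert_eq!(x, Natural::from(3u32));

    x >>= -4i16; // a negative amount multiplies: 3 * 2^4
    assert_eq!(x, Natural::from(48u32));
}
```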
### impl ShrAssign<i32> for Natural #### fn shr_assign(&mut self, bits: i32) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i64> for Natural #### fn shr_assign(&mut self, bits: i64) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<i8> for Natural #### fn shr_assign(&mut self, bits: i8) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<isize> for Natural #### fn shr_assign(&mut self, bits: isize) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor or multiplies it by a power of 2), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u128> for Natural #### fn shr_assign(&mut self, bits: u128) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u16> for Natural #### fn shr_assign(&mut self, bits: u16) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u32> for Natural #### fn shr_assign(&mut self, bits: u32) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u64> for Natural #### fn shr_assign(&mut self, bits: u64) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<u8> for Natural #### fn shr_assign(&mut self, bits: u8) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. 
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl ShrAssign<usize> for Natural #### fn shr_assign(&mut self, bits: usize) Right-shifts a `Natural` (divides it by a power of 2 and takes the floor), in place. $$ x \gets \left \lfloor \frac{x}{2^k} \right \rfloor. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory and $n$ is `max(1, self.significant_bits() - bits)`. ##### Examples See here. ### impl<'a> ShrRound<i128> for &'a Natural #### fn shr_round(self, bits: i128, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<i128> for Natural #### fn shr_round(self, bits: i128, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<i16> for &'a Natural #### fn shr_round(self, bits: i16, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$ $g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ g(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<i16> for Natural #### fn shr_round(self, bits: i16, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode.
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<i32> for &'a Natural #### fn shr_round(self, bits: i32, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. 
##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<i32> for Natural #### fn shr_round(self, bits: i32, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<i64> for &'a Natural #### fn shr_round(self, bits: i64, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. 
Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<i64> for Natural #### fn shr_round(self, bits: i64, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<i8> for &'a Natural #### fn shr_round(self, bits: i8, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. 
Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<i8> for Natural #### fn shr_round(self, bits: i8, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl<'a> ShrRound<isize> for &'a Natural #### fn shr_round(self, bits: isize, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. 
An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. #### type Output = Natural ### impl ShrRound<isize> for Natural #### fn shr_round(self, bits: isize, rm: RoundingMode) -> (Natural, Ordering) Shifts a `Natural` right (divides or multiplies it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`: $f(x, k, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = \lfloor q \rfloor.$ $f(x, k, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\ \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\ \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}. \end{cases} $$ $f(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$. Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. 
##### Panics

Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl<'a> ShrRound<u128> for &'a Natural

#### fn shr_round(self, bits: u128, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(m) = O(m)$

where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl ShrRound<u128> for Natural

#### fn shr_round(self, bits: u128, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl<'a> ShrRound<u16> for &'a Natural

#### fn shr_round(self, bits: u16, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(m) = O(m)$

where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl ShrRound<u16> for Natural

#### fn shr_round(self, bits: u16, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl<'a> ShrRound<u32> for &'a Natural

#### fn shr_round(self, bits: u32, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(m) = O(m)$

where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl ShrRound<u32> for Natural

#### fn shr_round(self, bits: u32, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.
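In addition to the linked examples, here is a minimal sketch of the documented `(Natural, Ordering)` return shape. The `use` paths for `ShrRound` and `RoundingMode` are assumed from the wider malachite_base API; they are not shown on this page.

```
use malachite_base::num::arithmetic::traits::ShrRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// 257 / 2^9 is just over one half: Floor rounds down, so the result is less
// than the exact value; Ceiling rounds up, so it is greater.
assert_eq!(
    Natural::from(257u32).shr_round(9u32, RoundingMode::Floor),
    (Natural::from(0u32), Ordering::Less)
);
assert_eq!(
    Natural::from(257u32).shr_round(9u32, RoundingMode::Ceiling),
    (Natural::from(1u32), Ordering::Greater)
);
// 256 / 2^8 is exact, so RoundingMode::Exact is allowed and Equal is returned.
assert_eq!(
    Natural::from(256u32).shr_round(8u32, RoundingMode::Exact),
    (Natural::from(1u32), Ordering::Equal)
);
```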
#### type Output = Natural

### impl<'a> ShrRound<u64> for &'a Natural

#### fn shr_round(self, bits: u64, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(m) = O(m)$

where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl ShrRound<u64> for Natural

#### fn shr_round(self, bits: u64, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.
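As a further sketch (same assumed imports as the snippet above), the `RoundingMode::Nearest` tie-breaking rule from the formula, shown with the by-reference impl:

```
use malachite_base::num::arithmetic::traits::ShrRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

// 10 / 2^2 = 2.5 is a tie, so Nearest picks the even neighbor, 2.
assert_eq!(
    (&Natural::from(10u32)).shr_round(2u64, RoundingMode::Nearest),
    (Natural::from(2u32), Ordering::Less)
);
// 14 / 2^2 = 3.5 is also a tie, but here the even neighbor is 4.
assert_eq!(
    (&Natural::from(14u32)).shr_round(2u64, RoundingMode::Nearest),
    (Natural::from(4u32), Ordering::Greater)
);
```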
#### type Output = Natural

### impl<'a> ShrRound<u8> for &'a Natural

#### fn shr_round(self, bits: u8, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(m) = O(m)$

where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl ShrRound<u8> for Natural

#### fn shr_round(self, bits: u8, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.
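A brief sketch, under the same assumptions as the snippets above, of the statement that `RoundingMode::Down` and `RoundingMode::Floor` agree with the plain `>>` operator for a `Natural`:

```
use malachite_base::num::arithmetic::traits::ShrRound;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;

let x = Natural::from(1000u32);
// A Natural is never negative, so Down and Floor give the same result as `>>`.
let shifted = x.clone() >> 3u8;
assert_eq!((&x).shr_round(3u8, RoundingMode::Down).0, shifted);
assert_eq!((&x).shr_round(3u8, RoundingMode::Floor).0, shifted);
assert_eq!(shifted, 125u32);
```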
#### type Output = Natural

### impl<'a> ShrRound<usize> for &'a Natural

#### fn shr_round(self, bits: usize, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by reference, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(m) = O(m)$

where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(1, self.significant_bits() - bits)`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.

#### type Output = Natural

### impl ShrRound<usize> for Natural

#### fn shr_round(self, bits: usize, rm: RoundingMode) -> (Natural, Ordering)

Shifts a `Natural` right (divides it by a power of 2), taking it by value, and rounds according to the specified rounding mode. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the exact value.

Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`.

Let $q = \frac{x}{2^k}$, and let $g$ be the function that just returns the first element of the pair, without the `Ordering`:

$g(x, k, \mathrm{Down}) = g(x, k, \mathrm{Floor}) = \lfloor q \rfloor.$

$g(x, k, \mathrm{Up}) = g(x, k, \mathrm{Ceiling}) = \lceil q \rceil.$

$$
g(x, k, \mathrm{Nearest}) = \begin{cases}
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2}, \\
    \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even}, \\
    \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd}.
\end{cases}
$$

$g(x, k, \mathrm{Exact}) = q$, but panics if $q \notin \N$.

Then $f(x, k, r) = (g(x, k, r), \operatorname{cmp}(g(x, k, r), q))$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory and $n$ is `self.significant_bits()`.

##### Panics

Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$.

##### Examples

See here.
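One more hedged sketch: guarding `RoundingMode::Exact` with the divisibility test suggested above. The `DivisibleByPowerOf2` trait path and its `u64` exponent argument are assumptions about the wider malachite_base API, not something stated on this page:

```
use malachite_base::num::arithmetic::traits::{DivisibleByPowerOf2, ShrRound};
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;

let x = Natural::from(96u32);
let bits: usize = 5;
// 96 = 3 * 2^5, so shifting right by 5 loses no bits and Exact is safe.
// The clone keeps `x` available for the shift afterwards.
let rm = if x.clone().divisible_by_power_of_2(bits as u64) {
    RoundingMode::Exact
} else {
    RoundingMode::Nearest
};
let (quotient, _) = x.shr_round(bits, rm);
assert_eq!(quotient, 3u32);
```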
#### type Output = Natural ### impl ShrRoundAssign<i128> for Natural #### fn shr_round_assign(&mut self, bits: i128, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i16> for Natural #### fn shr_round_assign(&mut self, bits: i16, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i32> for Natural #### fn shr_round_assign(&mut self, bits: i32, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i64> for Natural #### fn shr_round_assign(&mut self, bits: i64, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. 
If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<i8> for Natural #### fn shr_round_assign(&mut self, bits: i8, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<isize> for Natural #### fn shr_round_assign(&mut self, bits: isize, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides or multiplies it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. If `bits` is negative, then the returned `Ordering` is always `Equal`, even if the higher bits of the result are lost. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(-bits, 0)`. ##### Panics Let $k$ be `bits`. Panics if $k$ is positive and `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u128> for Natural #### fn shr_round_assign(&mut self, bits: u128, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. 
Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u16> for Natural #### fn shr_round_assign(&mut self, bits: u16, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u32> for Natural #### fn shr_round_assign(&mut self, bits: u32, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u64> for Natural #### fn shr_round_assign(&mut self, bits: u64, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl ShrRoundAssign<u8> for Natural #### fn shr_round_assign(&mut self, bits: u8, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. 
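The in-place variant can be sketched the same way, again assuming the import paths used in the sketches above and the returned `Ordering` documented here:

```
use malachite_base::num::arithmetic::traits::ShrRoundAssign;
use malachite_base::rounding_modes::RoundingMode;
use malachite_nz::natural::Natural;
use std::cmp::Ordering;

let mut x = Natural::from(1000u32);
// 1000 / 2^3 = 125 exactly, so Exact is allowed and Equal is returned.
assert_eq!(x.shr_round_assign(3u8, RoundingMode::Exact), Ordering::Equal);
assert_eq!(x, 125u32);

// 125 / 2^4 = 7.8125; Ceiling rounds up, so the stored value exceeds the exact one.
assert_eq!(x.shr_round_assign(4u8, RoundingMode::Ceiling), Ordering::Greater);
assert_eq!(x, 8u32);
```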
### impl ShrRoundAssign<usize> for Natural #### fn shr_round_assign(&mut self, bits: usize, rm: RoundingMode) -> Ordering Shifts a `Natural` right (divides it by a power of 2) and rounds according to the specified rounding mode, in place. An `Ordering` is returned, indicating whether the assigned value is less than, equal to, or greater than the exact value. Passing `RoundingMode::Floor` or `RoundingMode::Down` is equivalent to using `>>=`. To test whether `RoundingMode::Exact` can be passed, use `self.divisible_by_power_of_2(bits)`. See the `ShrRound` documentation for details. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Let $k$ be `bits`. Panics if `rm` is `RoundingMode::Exact` but `self` is not divisible by $2^k$. ##### Examples See here. ### impl Sign for Natural #### fn sign(&self) -> Ordering Compares a `Natural` to zero. Returns `Greater` or `Equal` depending on whether the `Natural` is positive or zero, respectively. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Sign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use std::cmp::Ordering; assert_eq!(Natural::ZERO.sign(), Ordering::Equal); assert_eq!(Natural::from(123u32).sign(), Ordering::Greater); ``` ### impl<'a> SignificantBits for &'a Natural #### fn significant_bits(self) -> u64 Returns the number of significant bits of a `Natural`. $$ f(n) = \begin{cases} 0 & \text{if} \quad n = 0, \\ \lfloor \log_2 n \rfloor + 1 & \text{if} \quad n > 0. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::logic::traits::SignificantBits; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.significant_bits(), 0); assert_eq!(Natural::from(100u32).significant_bits(), 7); ``` ### impl SqrtAssignRem for Natural #### fn sqrt_assign_rem(&mut self) -> Natural Replaces a `Natural` with the floor of its square root and returns the remainder (the difference between the original `Natural` and the square of the floor). $f(x) = x - \lfloor\sqrt{x}\rfloor^2$, $x \gets \lfloor\sqrt{x}\rfloor$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SqrtAssignRem; use malachite_nz::natural::Natural; let mut x = Natural::from(99u8); assert_eq!(x.sqrt_assign_rem(), 18); assert_eq!(x, 9); let mut x = Natural::from(100u8); assert_eq!(x.sqrt_assign_rem(), 0); assert_eq!(x, 10); let mut x = Natural::from(101u8); assert_eq!(x.sqrt_assign_rem(), 1); assert_eq!(x, 10); let mut x = Natural::from(1000000000u32); assert_eq!(x.sqrt_assign_rem(), 49116); assert_eq!(x, 31622); let mut x = Natural::from(10000000000u64); assert_eq!(x.sqrt_assign_rem(), 0); assert_eq!(x, 100000); ``` #### type RemOutput = Natural ### impl<'a> SqrtRem for &'a Natural #### fn sqrt_rem(self) -> (Natural, Natural) Returns the floor of the square root of a `Natural` and the remainder (the difference between the `Natural` and the square of the floor). The `Natural` is taken by reference. $f(x) = (\lfloor\sqrt{x}\rfloor, x - \lfloor\sqrt{x}\rfloor^2)$. 
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SqrtRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(99u8)).sqrt_rem().to_debug_string(), "(9, 18)"); assert_eq!((&Natural::from(100u8)).sqrt_rem().to_debug_string(), "(10, 0)"); assert_eq!((&Natural::from(101u8)).sqrt_rem().to_debug_string(), "(10, 1)"); assert_eq!((&Natural::from(1000000000u32)).sqrt_rem().to_debug_string(), "(31622, 49116)"); assert_eq!((&Natural::from(10000000000u64)).sqrt_rem().to_debug_string(), "(100000, 0)"); ``` #### type SqrtOutput = Natural #### type RemOutput = Natural ### impl SqrtRem for Natural #### fn sqrt_rem(self) -> (Natural, Natural) Returns the floor of the square root of a `Natural` and the remainder (the difference between the `Natural` and the square of the floor). The `Natural` is taken by value. $f(x) = (\lfloor\sqrt{x}\rfloor, x - \lfloor\sqrt{x}\rfloor^2)$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SqrtRem; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; assert_eq!(Natural::from(99u8).sqrt_rem().to_debug_string(), "(9, 18)"); assert_eq!(Natural::from(100u8).sqrt_rem().to_debug_string(), "(10, 0)"); assert_eq!(Natural::from(101u8).sqrt_rem().to_debug_string(), "(10, 1)"); assert_eq!(Natural::from(1000000000u32).sqrt_rem().to_debug_string(), "(31622, 49116)"); assert_eq!(Natural::from(10000000000u64).sqrt_rem().to_debug_string(), "(100000, 0)"); ``` #### type SqrtOutput = Natural #### type RemOutput = Natural ### impl<'a> Square for &'a Natural #### fn square(self) -> Natural Squares a `Natural`, taking it by reference. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!((&Natural::ZERO).square(), 0); assert_eq!((&Natural::from(123u32)).square(), 15129); ``` #### type Output = Natural ### impl Square for Natural #### fn square(self) -> Natural Squares a `Natural`, taking it by value. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::ZERO.square(), 0); assert_eq!(Natural::from(123u32).square(), 15129); ``` #### type Output = Natural ### impl SquareAssign for Natural #### fn square_assign(&mut self) Squares a `Natural` in place. $$ x \gets x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::SquareAssign; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; let mut x = Natural::ZERO; x.square_assign(); assert_eq!(x, 0); let mut x = Natural::from(123u32); x.square_assign(); assert_eq!(x, 15129); ``` ### impl<'a, 'b> Sub<&'a Natural> for &'b Natural #### fn sub(self, other: &'a Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking both by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) - &Natural::ZERO, 123); assert_eq!(&Natural::from(456u32) - &Natural::from(123u32), 333); assert_eq!( &(Natural::from(10u32).pow(12) * Natural::from(3u32)) - &Natural::from(10u32).pow(12), 2000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `-` operator.### impl<'a> Sub<&'a Natural> for Natural #### fn sub(self, other: &'a Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking the first by value and the second by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) - &Natural::ZERO, 123); assert_eq!(Natural::from(456u32) - &Natural::from(123u32), 333); assert_eq!( Natural::from(10u32).pow(12) * Natural::from(3u32) - &Natural::from(10u32).pow(12), 2000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `-` operator.### impl<'a> Sub<Natural> for &'a Natural #### fn sub(self, other: Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking the first by reference and the second by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(&Natural::from(123u32) - Natural::ZERO, 123); assert_eq!(&Natural::from(456u32) - Natural::from(123u32), 333); assert_eq!( &(Natural::from(10u32).pow(12) * Natural::from(3u32)) - Natural::from(10u32).pow(12), 2000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `-` operator.### impl Sub<Natural> for Natural #### fn sub(self, other: Natural) -> Natural Subtracts a `Natural` by another `Natural`, taking both by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; assert_eq!(Natural::from(123u32) - Natural::ZERO, 123); assert_eq!(Natural::from(456u32) - Natural::from(123u32), 333); assert_eq!( Natural::from(10u32).pow(12) * Natural::from(3u32) - Natural::from(10u32).pow(12), 2000000000000u64 ); ``` #### type Output = Natural The resulting type after applying the `-` operator.### impl<'a> SubAssign<&'a Natural> for Natural #### fn sub_assign(&mut self, other: &'a Natural) Subtracts a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32).pow(12) * Natural::from(10u32); x -= &Natural::from(10u32).pow(12); x -= &(Natural::from(10u32).pow(12) * Natural::from(2u32)); x -= &(Natural::from(10u32).pow(12) * Natural::from(3u32)); x -= &(Natural::from(10u32).pow(12) * Natural::from(4u32)); assert_eq!(x, 0); ``` ### impl SubAssign<Natural> for Natural #### fn sub_assign(&mut self, other: Natural) Subtracts a `Natural` by another `Natural` in place, taking the `Natural` on the right-hand side by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `other` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_nz::natural::Natural; let mut x = Natural::from(10u32).pow(12) * Natural::from(10u32); x -= Natural::from(10u32).pow(12); x -= Natural::from(10u32).pow(12) * Natural::from(2u32); x -= Natural::from(10u32).pow(12) * Natural::from(3u32); x -= Natural::from(10u32).pow(12) * Natural::from(4u32); assert_eq!(x, 0); ``` ### impl<'a> SubMul<&'a Natural, Natural> for Natural #### fn sub_mul(self, y: &'a Natural, z: Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first and third by value and the second by reference. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(20u32).sub_mul(&Natural::from(3u32), Natural::from(4u32)), 8); assert_eq!( Natural::from(10u32).pow(12) .sub_mul(&Natural::from(0x10000u32), Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a, 'b, 'c> SubMul<&'a Natural, &'b Natural> for &'c Natural #### fn sub_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking all three by reference. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n, m) = O(m + n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!((&Natural::from(20u32)).sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 8); assert_eq!( (&Natural::from(10u32).pow(12)) .sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a, 'b> SubMul<&'a Natural, &'b Natural> for Natural #### fn sub_mul(self, y: &'a Natural, z: &'b Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first by value and the second and third by reference. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(20u32).sub_mul(&Natural::from(3u32), &Natural::from(4u32)), 8); assert_eq!( Natural::from(10u32).pow(12) .sub_mul(&Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a> SubMul<Natural, &'a Natural> for Natural #### fn sub_mul(self, y: Natural, z: &'a Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking the first two by value and the third by reference. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(20u32).sub_mul(Natural::from(3u32), &Natural::from(4u32)), 8); assert_eq!( Natural::from(10u32).pow(12) .sub_mul(Natural::from(0x10000u32), &Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl SubMul<Natural, Natural> for Natural #### fn sub_mul(self, y: Natural, z: Natural) -> Natural Subtracts a `Natural` by the product of two other `Natural`s, taking all three by value. $$ f(x, y, z) = x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMul}; use malachite_nz::natural::Natural; assert_eq!(Natural::from(20u32).sub_mul(Natural::from(3u32), Natural::from(4u32)), 8); assert_eq!( Natural::from(10u32).pow(12) .sub_mul(Natural::from(0x10000u32), Natural::from(0x10000u32)), 995705032704u64 ); ``` #### type Output = Natural ### impl<'a> SubMulAssign<&'a Natural, Natural> for Natural #### fn sub_mul_assign(&mut self, y: &'a Natural, z: Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking the first `Natural` on the right-hand side by reference and the second by value. $$ x \gets x - yz. 
$$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.sub_mul_assign(&Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32).pow(12); x.sub_mul_assign(&Natural::from(0x10000u32), Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a, 'b> SubMulAssign<&'a Natural, &'b Natural> for Natural #### fn sub_mul_assign(&mut self, y: &'a Natural, z: &'b Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking both `Natural`s on the right-hand side by reference. $$ x \gets x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.sub_mul_assign(&Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32).pow(12); x.sub_mul_assign(&Natural::from(0x10000u32), &Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl<'a> SubMulAssign<Natural, &'a Natural> for Natural #### fn sub_mul_assign(&mut self, y: Natural, z: &'a Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking the first `Natural` on the right-hand side by value and the second by reference. $$ x \gets x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.sub_mul_assign(Natural::from(3u32), &Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32).pow(12); x.sub_mul_assign(Natural::from(0x10000u32), &Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl SubMulAssign<Natural, Natural> for Natural #### fn sub_mul_assign(&mut self, y: Natural, z: Natural) Subtracts a `Natural` by the product of two other `Natural`s in place, taking both `Natural`s on the right-hand side by value. $$ x \gets x - yz. $$ ##### Worst-case complexity $T(n, m) = O(m + n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, $n$ is `max(y.significant_bits(), z.significant_bits())`, and $m$ is `x.significant_bits()`. ##### Panics Panics if `y * z` is greater than `self`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, SubMulAssign}; use malachite_nz::natural::Natural; let mut x = Natural::from(20u32); x.sub_mul_assign(Natural::from(3u32), Natural::from(4u32)); assert_eq!(x, 8); let mut x = Natural::from(10u32).pow(12); x.sub_mul_assign(Natural::from(0x10000u32), Natural::from(0x10000u32)); assert_eq!(x, 995705032704u64); ``` ### impl Subfactorial for Natural #### fn subfactorial(n: u64) -> Natural Computes the subfactorial of a number. The subfactorial of $n$ counts the number of derangements of a set of size $n$; a derangement is a permutation with no fixed points. $$ f(n) = \ !n = n! \sum_{k=0}^{n} \frac{(-1)^k}{k!}, $$ which for $n \geq 1$ equals $\left\lfloor (n! + 1)/e \right\rfloor$. $!n = O(n!) = O(\sqrt{n}(n/e)^n)$. ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `n`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Subfactorial; use malachite_nz::natural::Natural; assert_eq!(Natural::subfactorial(0), 1); assert_eq!(Natural::subfactorial(1), 0); assert_eq!(Natural::subfactorial(2), 1); assert_eq!(Natural::subfactorial(3), 2); assert_eq!(Natural::subfactorial(4), 9); assert_eq!(Natural::subfactorial(5), 44); assert_eq!( Natural::subfactorial(100).to_string(), "3433279598416380476519597752677614203236578380537578498354340028268518079332763243279\ 1396429850988990237345920155783984828001486412574060553756854137069878601" ); ``` ### impl<'a> Sum<&'a Natural> for Natural #### fn sum<I>(xs: I) -> Natural where I: Iterator<Item = &'a Natural>, Adds up all the `Natural`s in an iterator of `Natural` references. $$ f((x_i)_{i=0}^{n-1}) = \sum_{i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Natural::sum(xs.map(Natural::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use std::iter::Sum; assert_eq!(Natural::sum(vec_from_str::<Natural>("[2, 3, 5, 7]").unwrap().iter()), 17); ``` ### impl Sum<Natural> for Natural #### fn sum<I>(xs: I) -> Natural where I: Iterator<Item = Natural>, Adds up all the `Natural`s in an iterator. $$ f((x_i)_{i=0}^{n-1}) = \sum_{i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Natural::sum(xs.map(Natural::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use std::iter::Sum; assert_eq!(Natural::sum(vec_from_str::<Natural>("[2, 3, 5, 7]").unwrap().into_iter()), 17); ``` ### impl ToSci for Natural #### fn fmt_sci_valid(&self, options: ToSciOptions) -> bool Determines whether a `Natural` can be converted to a string using `to_sci` and a particular set of options. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; let mut options = ToSciOptions::default(); assert!(Natural::from(123u8).fmt_sci_valid(options)); assert!(Natural::from(u128::MAX).fmt_sci_valid(options)); // u128::MAX has more than 16 significant digits options.set_rounding_mode(RoundingMode::Exact); assert!(!Natural::from(u128::MAX).fmt_sci_valid(options)); options.set_precision(50); assert!(Natural::from(u128::MAX).fmt_sci_valid(options)); ``` #### fn fmt_sci( &self, f: &mut Formatter<'_>, options: ToSciOptions ) -> Result<(), ErrorConverts a `Natural` to a string using a specified base, possibly formatting the number using scientific notation. See `ToSciOptions` for details on the available options. Note that setting `neg_exp_threshold` has no effect, since there is never a need to use negative exponents when representing a `Natural`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `options.rounding_mode` is `Exact`, but the size options are such that the input must be rounded. ##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_nz::natural::Natural; assert_eq!(format!("{}", Natural::from(u128::MAX).to_sci()), "3.402823669209385e38"); assert_eq!(Natural::from(u128::MAX).to_sci().to_string(), "3.402823669209385e38"); let n = Natural::from(123456u32); let mut options = ToSciOptions::default(); assert_eq!(format!("{}", n.to_sci_with_options(options)), "123456"); assert_eq!(n.to_sci_with_options(options).to_string(), "123456"); options.set_precision(3); assert_eq!(n.to_sci_with_options(options).to_string(), "1.23e5"); options.set_rounding_mode(RoundingMode::Ceiling); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24e5"); options.set_e_uppercase(); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24E5"); options.set_force_exponent_plus_sign(true); assert_eq!(n.to_sci_with_options(options).to_string(), "1.24E+5"); options = ToSciOptions::default(); options.set_base(36); assert_eq!(n.to_sci_with_options(options).to_string(), "2n9c"); options.set_uppercase(); assert_eq!(n.to_sci_with_options(options).to_string(), "2N9C"); options.set_base(2); options.set_precision(10); assert_eq!(n.to_sci_with_options(options).to_string(), "1.1110001e16"); options.set_include_trailing_zeros(true); assert_eq!(n.to_sci_with_options(options).to_string(), "1.111000100e16"); ``` #### fn to_sci_with_options(&self, options: ToSciOptions) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation.#### fn to_sci(&self) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation, using the default `ToSciOptions`.### impl ToStringBase for Natural #### fn to_string_base(&self, base: u8) -> String Converts a `Natural` to a `String` using a specified base. Digits from 0 to 9 become `char`s from `'0'` to `'9'`. Digits from 10 to 35 become the lowercase `char`s `'a'` to `'z'`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Panics Panics if `base` is less than 2 or greater than 36. ##### Examples ``` use malachite_base::num::conversion::traits::ToStringBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(1000u32).to_string_base(2), "1111101000"); assert_eq!(Natural::from(1000u32).to_string_base(10), "1000"); assert_eq!(Natural::from(1000u32).to_string_base(36), "rs"); ``` #### fn to_string_base_upper(&self, base: u8) -> String Converts a `Natural` to a `String` using a specified base. Digits from 0 to 9 become `char`s from `'0'` to `'9'`. Digits from 10 to 35 become the uppercase `char`s `'A'` to `'Z'`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2 or greater than 36. ##### Examples ``` use malachite_base::num::conversion::traits::ToStringBase; use malachite_nz::natural::Natural; assert_eq!(Natural::from(1000u32).to_string_base_upper(2), "1111101000"); assert_eq!(Natural::from(1000u32).to_string_base_upper(10), "1000"); assert_eq!(Natural::from(1000u32).to_string_base_upper(36), "RS"); ``` ### impl<'a> TryFrom<&'a Integer> for Natural #### fn try_from( value: &'a Integer ) -> Result<Natural, <Natural as TryFrom<&'a Integer>>::Error> Converts an `Integer` to a `Natural`, taking the `Integer` by reference. If the `Integer` is negative, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::try_from(&Integer::from(123)).to_debug_string(), "Ok(123)"); assert_eq!( Natural::try_from(&Integer::from(-123)).to_debug_string(), "Err(NaturalFromIntegerError)" ); assert_eq!( Natural::try_from(&Integer::from(10u32).pow(12)).to_debug_string(), "Ok(1000000000000)" ); assert_eq!( Natural::try_from(&(-Integer::from(10u32).pow(12))).to_debug_string(), "Err(NaturalFromIntegerError)" ); ``` #### type Error = NaturalFromIntegerError The type returned in the event of a conversion error. ### impl<'a> TryFrom<&'a Rational> for Natural #### fn try_from( x: &Rational ) -> Result<Natural, <Natural as TryFrom<&'a Rational>>::Error> Converts a `Rational` to a `Natural`, taking the `Rational` by reference. If the `Rational` is negative or not an integer, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::conversion::natural_from_rational::NaturalFromRationalError; use malachite_q::Rational; assert_eq!(Natural::try_from(&Rational::from(123)).unwrap(), 123); assert_eq!(Natural::try_from(&Rational::from(-123)), Err(NaturalFromRationalError)); assert_eq!( Natural::try_from(&Rational::from_signeds(22, 7)), Err(NaturalFromRationalError) ); ``` #### type Error = NaturalFromRationalError The type returned in the event of a conversion error. ### impl TryFrom<Integer> for Natural #### fn try_from( value: Integer ) -> Result<Natural, <Natural as TryFrom<Integer>>::Error> Converts an `Integer` to a `Natural`, taking the `Integer` by value. If the `Integer` is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory.
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_nz::natural::Natural; assert_eq!(Natural::try_from(Integer::from(123)).to_debug_string(), "Ok(123)"); assert_eq!( Natural::try_from(Integer::from(-123)).to_debug_string(), "Err(NaturalFromIntegerError)" ); assert_eq!( Natural::try_from(Integer::from(10u32).pow(12)).to_debug_string(), "Ok(1000000000000)" ); assert_eq!( Natural::try_from(-Integer::from(10u32).pow(12)).to_debug_string(), "Err(NaturalFromIntegerError)" ); ``` #### type Error = NaturalFromIntegerError The type returned in the event of a conversion error.### impl TryFrom<Rational> for Natural #### fn try_from( x: Rational ) -> Result<Natural, <Natural as TryFrom<Rational>>::ErrorConverts a `Rational` to a `Natural`, taking the `Rational` by value. If the `Rational` is negative or not an integer, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::conversion::natural_from_rational::NaturalFromRationalError; use malachite_q::Rational; assert_eq!(Natural::try_from(Rational::from(123)).unwrap(), 123); assert_eq!(Natural::try_from(Rational::from(-123)), Err(NaturalFromRationalError)); assert_eq!( Natural::try_from(Rational::from_signeds(22, 7)), Err(NaturalFromRationalError) ); ``` #### type Error = NaturalFromRationalError The type returned in the event of a conversion error.### impl TryFrom<SerdeNatural> for Natural #### type Error = String The type returned in the event of a conversion error.#### fn try_from(s: SerdeNatural) -> Result<Natural, StringPerforms the conversion.### impl TryFrom<f32> for Natural #### fn try_from(value: f32) -> Result<Natural, <Natural as TryFrom<f32>>::ErrorConverts a floating-point value to a `Natural`. If the input isn’t exactly equal to some `Natural`, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Examples See here. #### type Error = UnsignedFromFloatError The type returned in the event of a conversion error.### impl TryFrom<f64> for Natural #### fn try_from(value: f64) -> Result<Natural, <Natural as TryFrom<f64>>::ErrorConverts a floating-point value to a `Natural`. If the input isn’t exactly equal to some `Natural`, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent()`. ##### Examples See here. #### type Error = UnsignedFromFloatError The type returned in the event of a conversion error.### impl TryFrom<i128> for Natural #### fn try_from(i: i128) -> Result<Natural, <Natural as TryFrom<i128>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<i16> for Natural #### fn try_from(i: i16) -> Result<Natural, <Natural as TryFrom<i16>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
#### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<i32> for Natural #### fn try_from(i: i32) -> Result<Natural, <Natural as TryFrom<i32>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<i64> for Natural #### fn try_from(i: i64) -> Result<Natural, <Natural as TryFrom<i64>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<i8> for Natural #### fn try_from(i: i8) -> Result<Natural, <Natural as TryFrom<i8>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl TryFrom<isize> for Natural #### fn try_from(i: isize) -> Result<Natural, <Natural as TryFrom<isize>>::ErrorConverts a signed primitive integer to a `Natural`. If the integer is negative, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. #### type Error = NaturalFromSignedError The type returned in the event of a conversion error.### impl Two for Natural The constant 2. #### const TWO: Natural = _ ### impl UpperHex for Natural #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Natural` to a hexadecimal `String` using uppercase characters. Using the `#` format flag prepends `"0x"` to the string. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToUpperHexString; use malachite_nz::natural::Natural; use std::str::FromStr; assert_eq!(Natural::ZERO.to_upper_hex_string(), "0"); assert_eq!(Natural::from(123u32).to_upper_hex_string(), "7B"); assert_eq!( Natural::from_str("1000000000000").unwrap().to_upper_hex_string(), "E8D4A51000" ); assert_eq!(format!("{:07X}", Natural::from(123u32)), "000007B"); assert_eq!(format!("{:#X}", Natural::ZERO), "0x0"); assert_eq!(format!("{:#X}", Natural::from(123u32)), "0x7B"); assert_eq!( format!("{:#X}", Natural::from_str("1000000000000").unwrap()), "0xE8D4A51000" ); assert_eq!(format!("{:#07X}", Natural::from(123u32)), "0x0007B"); ``` ### impl<'a> WrappingFrom<&'a Natural> for i128 #### fn wrapping_from(value: &Natural) -> i128 Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for i16 #### fn wrapping_from(value: &Natural) -> i16 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. 
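Many of the conversion impls above defer their examples to external pages ("See here"). As a small, hedged sketch, not part of the crate's official examples and assuming 64-bit limbs, the following illustrates the documented behavior: `TryFrom` rejects negative signed inputs, while `WrappingFrom` reduces modulo $2^W$:

```
use malachite_base::num::conversion::traits::WrappingFrom;
use malachite_nz::natural::Natural;

// TryFrom: non-negative signed values convert exactly; negative ones produce an error.
assert_eq!(Natural::try_from(123i64).unwrap(), 123u32);
assert!(Natural::try_from(-123i64).is_err());

// WrappingFrom: a Natural equal to u64::MAX wraps to -1 when converted to the signed
// type with the same width as a limb (assuming 64-bit limbs).
assert_eq!(i64::wrapping_from(&Natural::from(u64::MAX)), -1);
```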
### impl<'a> WrappingFrom<&'a Natural> for i32 #### fn wrapping_from(value: &Natural) -> i32 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for i64 #### fn wrapping_from(value: &Natural) -> i64 Converts a `Natural` to a `SignedLimb` (the signed type whose width is the same as a limb’s), wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for i8 #### fn wrapping_from(value: &Natural) -> i8 Converts a `Natural` to a value of a signed primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for isize #### fn wrapping_from(value: &Natural) -> isize Converts a `Natural` to an `isize` or a value of a signed primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u128 #### fn wrapping_from(value: &Natural) -> u128 Converts a `Natural` to a `usize` or a value of an unsigned primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u16 #### fn wrapping_from(value: &Natural) -> u16 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u32 #### fn wrapping_from(value: &Natural) -> u32 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u64 #### fn wrapping_from(value: &Natural) -> u64 Converts a `Natural` to a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for u8 #### fn wrapping_from(value: &Natural) -> u8 Converts a `Natural` to a value of an unsigned primitive integer type that’s smaller than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> WrappingFrom<&'a Natural> for usize #### fn wrapping_from(value: &Natural) -> usize Converts a `Natural` to a `usize` or a value of an unsigned primitive integer type that’s larger than a `Limb`, wrapping modulo $2^W$, where $W$ is the width of a limb. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl Zero for Natural The constant 0. 
#### const ZERO: Natural = _ ### impl Eq for Natural ### impl StructuralEq for Natural ### impl StructuralPartialEq for Natural Auto Trait Implementations --- ### impl RefUnwindSafe for Natural ### impl Send for Natural ### impl Sync for Natural ### impl Unpin for Natural ### impl UnwindSafe for Natural Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. U: TryFrom<T>, #### fn exact_from(value: T) -> U ### impl<T, U> ExactInto<U> for Twhere U: ExactFrom<T>, #### fn exact_into(self) -> U ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> OverflowingInto<U> for Twhere U: OverflowingFrom<T>, #### fn overflowing_into(self) -> (U, bool) ### impl<T, U> RoundingInto<U> for Twhere U: RoundingFrom<T>, #### fn rounding_into(self, rm: RoundingMode) -> (U, Ordering) ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> SaturatingInto<U> for Twhere U: SaturatingFrom<T>, #### fn saturating_into(self) -> U ### impl<T> ToBinaryString for Twhere T: Binary, #### fn to_binary_string(&self) -> String Returns the `String` produced by `T`s `Binary` implementation. ##### Examples ``` use malachite_base::strings::ToBinaryString; assert_eq!(5u64.to_binary_string(), "101"); assert_eq!((-100i16).to_binary_string(), "1111111110011100"); ``` ### impl<T> ToDebugString for Twhere T: Debug, #### fn to_debug_string(&self) -> String Returns the `String` produced by `T`s `Debug` implementation. ##### Examples ``` use malachite_base::strings::ToDebugString; assert_eq!([1, 2, 3].to_debug_string(), "[1, 2, 3]"); assert_eq!( [vec![2, 3], vec![], vec![4]].to_debug_string(), "[[2, 3], [], [4]]" ); assert_eq!(Some(5).to_debug_string(), "Some(5)"); ``` ### impl<T> ToLowerHexString for Twhere T: LowerHex, #### fn to_lower_hex_string(&self) -> String Returns the `String` produced by `T`s `LowerHex` implementation. ##### Examples ``` use malachite_base::strings::ToLowerHexString; assert_eq!(50u64.to_lower_hex_string(), "32"); assert_eq!((-100i16).to_lower_hex_string(), "ff9c"); ``` ### impl<T> ToOctalString for Twhere T: Octal, #### fn to_octal_string(&self) -> String Returns the `String` produced by `T`s `Octal` implementation. ##### Examples ``` use malachite_base::strings::ToOctalString; assert_eq!(50u64.to_octal_string(), "62"); assert_eq!((-100i16).to_octal_string(), "177634"); ``` ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. T: UpperHex, #### fn to_upper_hex_string(&self) -> String Returns the `String` produced by `T`s `UpperHex` implementation. 
##### Examples ``` use malachite_base::strings::ToUpperHexString; assert_eq!(50u64.to_upper_hex_string(), "32"); assert_eq!((-100i16).to_upper_hex_string(), "FF9C"); ``` ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T, U> WrappingInto<U> for Twhere U: WrappingFrom<T>, #### fn wrapping_into(self) -> U
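The blanket string-conversion helpers above (`ToBinaryString`, `ToLowerHexString`, `ToOctalString`, `ToUpperHexString`) are illustrated only with primitive integers. As a small, hedged sketch, not part of the official documentation and assuming `Natural` implements `Binary`, `Octal`, and `LowerHex` (its `UpperHex` impl is shown above), they apply to `Natural` directly:

```
use malachite_base::strings::{
    ToBinaryString, ToLowerHexString, ToOctalString, ToUpperHexString,
};
use malachite_nz::natural::Natural;

// 1000 rendered through each blanket helper; the binary form matches the
// `to_string_base(2)` example shown earlier ("1111101000").
let n = Natural::from(1000u32);
assert_eq!(n.to_binary_string(), "1111101000");
assert_eq!(n.to_octal_string(), "1750");
assert_eq!(n.to_lower_hex_string(), "3e8");
assert_eq!(n.to_upper_hex_string(), "3E8");
```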
href=\"https://doc.rust-lang.org/nightly/core/iter/traits/iterator/trait.Iterator.html#associatedtype.Item\" class=\"associatedtype\">Item</a> = <a class=\"struct\" href=\"natural/struct.Natural.html\" title=\"struct malachite::natural::Natural\">Natural</a>;</span>"} Struct malachite::Rational === ``` pub struct Rational { /* private fields */ } ``` A rational number. `Rational`s whose numerator and denominator have 64 significant bits or fewer can be represented without any memory allocation. (Unless Malachite is compiled with `32_bit_limbs`, in which case the limit is 32). Implementations --- ### impl Rational #### pub fn approx_log(&self) -> f64 Calculates the approximate natural logarithm of a positive `Rational`. $f(x) = (1+\epsilon)(\log x)$, where $|\epsilon| < 2^{-52}.$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::{Pow, PowerOf2}; use malachite_base::num::float::NiceFloat; use malachite_q::Rational; assert_eq!(NiceFloat(Rational::from(10i32).approx_log()), NiceFloat(2.3025850929940455)); assert_eq!( NiceFloat(Rational::from(10i32).pow(100u64).approx_log()), NiceFloat(230.25850929940455) ); assert_eq!( NiceFloat(Rational::power_of_2(1000000u64).approx_log()), NiceFloat(693147.1805599453) ); assert_eq!( NiceFloat(Rational::power_of_2(-1000000i64).approx_log()), NiceFloat(-693147.1805599453) ); ``` This is equivalent to `fmpz_dlog` from `fmpz/dlog.c`, FLINT 2.7.1. ### impl Rational #### pub fn floor_log_base_2_abs(&self) -> i64 Returns the floor of the base-2 logarithm of the absolute value of a nonzero `Rational`. $f(x) = \lfloor\log_2 |x|\rfloor$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is zero. ##### Examples ``` use malachite_q::Rational; assert_eq!(Rational::from(3u32).floor_log_base_2_abs(), 1); assert_eq!(Rational::from_signeds(1, 3).floor_log_base_2_abs(), -2); assert_eq!(Rational::from_signeds(1, 4).floor_log_base_2_abs(), -2); assert_eq!(Rational::from_signeds(1, 5).floor_log_base_2_abs(), -3); assert_eq!(Rational::from(-3).floor_log_base_2_abs(), 1); assert_eq!(Rational::from_signeds(-1, 3).floor_log_base_2_abs(), -2); assert_eq!(Rational::from_signeds(-1, 4).floor_log_base_2_abs(), -2); assert_eq!(Rational::from_signeds(-1, 5).floor_log_base_2_abs(), -3); ``` #### pub fn ceiling_log_base_2_abs(&self) -> i64 Returns the ceiling of the base-2 logarithm of the absolute value of a nonzero `Rational`. $f(x) = \lfloor\log_2 |x|\rfloor$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` less than or equal to zero. 
##### Examples ``` use malachite_q::Rational; assert_eq!(Rational::from(3u32).ceiling_log_base_2_abs(), 2); assert_eq!(Rational::from_signeds(1, 3).ceiling_log_base_2_abs(), -1); assert_eq!(Rational::from_signeds(1, 4).ceiling_log_base_2_abs(), -2); assert_eq!(Rational::from_signeds(1, 5).ceiling_log_base_2_abs(), -2); assert_eq!(Rational::from(-3).ceiling_log_base_2_abs(), 2); assert_eq!(Rational::from_signeds(-1, 3).ceiling_log_base_2_abs(), -1); assert_eq!(Rational::from_signeds(-1, 4).ceiling_log_base_2_abs(), -2); assert_eq!(Rational::from_signeds(-1, 5).ceiling_log_base_2_abs(), -2); ``` ### impl Rational #### pub fn cmp_complexity(&self, other: &Rational) -> Ordering Compares two `Rational`s according to their complexity. Complexity is defined as follows: If two `Rational`s have different denominators, then the one with the larger denominator is more complex. If they have the same denominator, then the one whose numerator is further from zero is more complex. Finally, if $q > 0$, then $q$ is simpler than $-q$. The `Rational`s ordered by complexity look like this: $$ 0, 1, -1, 2, -2, \ldots, 1/2, -1/2, 3/2, -3/2, \ldots, 1/3, -1/3, 2/3, -2/3, \ldots, \ldots. $$ This order is a well-order, and the order type of the `Rational`s under this order is $\omega^2$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Rational::from_signeds(1, 2).cmp_complexity(&Rational::from_signeds(1, 3)), Ordering::Less ); assert_eq!( Rational::from_signeds(1, 2).cmp_complexity(&Rational::from_signeds(3, 2)), Ordering::Less ); assert_eq!( Rational::from_signeds(1, 2).cmp_complexity(&Rational::from_signeds(-1, 2)), Ordering::Less ); ``` ### impl Rational #### pub fn from_continued_fraction<I>(floor: Integer, xs: I) -> Rationalwhere I: Iterator<Item = Natural>, Converts a finite continued fraction to a `Rational`, taking the inputs by value. The input has two components. The first is the first value of the continued fraction, which may be any `Integer` and is equal to the floor of the `Rational`. The second is an iterator of the remaining values, which must all be positive. Using the standard notation for continued fractions, the first value is the number before the semicolon, and the second value contains the remaining numbers. Each rational number has two continued fraction representations. Either one is a valid input. $f(a_0, (a_1, a_2, a_3, \ldots)) = [a_0; a_1, a_2, a_3, \ldots]$. ##### Worst-case complexity $T(n, m) = O((nm)^2 \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `max(floor.significant_bits(), xs.map(Natural::significant_bits).max())`, and $m$ is `xs.count()`. ##### Panics Panics if any `Natural` in `xs` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use malachite_q::Rational; let xs = vec_from_str("[1, 2]").unwrap().into_iter(); assert_eq!(Rational::from_continued_fraction(Integer::ZERO, xs).to_string(), "2/3"); let xs = vec_from_str("[7, 16]").unwrap().into_iter(); assert_eq!(Rational::from_continued_fraction(Integer::from(3), xs).to_string(), "355/113"); ``` #### pub fn from_continued_fraction_ref<'a, I>(floor: &Integer, xs: I) -> Rationalwhere I: Iterator<Item = &'a Natural>, Converts a finite continued fraction to a `Rational`, taking the inputs by reference. 
The input has two components. The first is the first value of the continued fraction, which may be any `Integer` and is equal to the floor of the `Rational`. The second is an iterator of the remaining values, which must all be positive. Using the standard notation for continued fractions, the first value is the number before the semicolon, and the second value contains the remaining numbers. Each rational number has two continued fraction representations. Either one is a valid input. $f(a_0, (a_1, a_2, a_3, \ldots)) = [a_0; a_1, a_2, a_3, \ldots]$. ##### Worst-case complexity $T(n, m) = O((nm)^2 \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `max(floor.significant_bits(), xs.map(Natural::significant_bits).max())`, and $m$ is `xs.count()`. ##### Panics Panics if any `Natural` in `xs` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::vecs::vec_from_str; use malachite_nz::integer::Integer; use malachite_q::Rational; let xs = vec_from_str("[1, 2]").unwrap(); assert_eq!( Rational::from_continued_fraction_ref(&Integer::ZERO, xs.iter()).to_string(), "2/3" ); let xs = vec_from_str("[7, 16]").unwrap(); assert_eq!( Rational::from_continued_fraction_ref(&Integer::from(3), xs.iter()).to_string(), "355/113" ); ``` ### impl Rational #### pub fn digits(&self, base: &Natural) -> (Vec<Natural, Global>, RationalDigits) Returns the base-$b$ digits of a `Rational`. The output has two components. The first is a `Vec` of the digits of the integer portion of the `Rational`, least- to most-significant. The second is an iterator of the digits of the fractional portion. The output is in its simplest form: the integer-portion digits do not end with a zero, and the fractional-portion digits do not end with infinitely many zeros or $(b-1)$s. If the `Rational` has a small denominator, it may be more efficient to use `to_digits` or `into_digits` instead. These functions compute the entire repeating portion of the repeating digits. For example, consider these two expressions: * `Rational::from_signeds(1, 7).digits(Natural::from(10u32)).1.nth(1000)` * `Rational::from_signeds(1, 7).into_digits(Natural::from(10u32)).1[1000]` Both get the thousandth digit after the decimal point of `1/7`. The first way explicitly calculates each digit after the decimal point, whereas the second way determines that `1/7` is `0.(142857)`, with the `142857` repeating, and takes `1000 % 6 == 4` to determine that the thousandth digit is 5. But when the `Rational` has a large denominator, the second way is less efficient. ##### Worst-case complexity per iteration $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), base.significant_bits())`. ##### Panics Panics if `base` is less than 2. 
##### Examples ``` use malachite_base::iterators::prefix_to_string; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; let (before_point, after_point) = Rational::from(3u32).digits(&Natural::from(10u32)); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!(prefix_to_string(after_point, 10), "[]"); let (before_point, after_point) = Rational::from_signeds(22, 7).digits(&Natural::from(10u32)); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!(prefix_to_string(after_point, 10), "[1, 4, 2, 8, 5, 7, 1, 4, 2, 8, ...]"); ``` ### impl Rational #### pub fn from_digits( base: &Natural, before_point: Vec<Natural, Global>, after_point: RationalSequence<Natural> ) -> Rational Converts base-$b$ digits to a `Rational`. The inputs are taken by value. The input consists of the digits of the integer portion of the `Rational` and the digits of the fractional portion. The integer-portion digits are ordered from least- to most-significant, and the fractional-portion digits from most- to least. The fractional-portion digits may end in infinitely many zeros or $(b-1)$s; these are handled correctly. ##### Worst-case complexity $T(n, m) = O(nm \log (nm)^2 \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `max(before_point.len(), after_point.component_len())`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use malachite_q::Rational; let before_point = vec_from_str("[3]").unwrap(); let after_point = RationalSequence::from_vecs( Vec::new(), vec_from_str("[1, 4, 2, 8, 5, 7]").unwrap(), ); assert_eq!( Rational::from_digits(&Natural::from(10u32), before_point, after_point).to_string(), "22/7" ); // 21.34565656... let before_point = vec_from_str("[1, 2]").unwrap(); let after_point = RationalSequence::from_vecs( vec_from_str("[3, 4]").unwrap(), vec_from_str("[5, 6]").unwrap(), ); assert_eq!( Rational::from_digits(&Natural::from(10u32), before_point, after_point).to_string(), "105661/4950" ); ``` #### pub fn from_digits_ref( base: &Natural, before_point: &[Natural], after_point: &RationalSequence<Natural> ) -> Rational Converts base-$b$ digits to a `Rational`. The inputs are taken by reference. The input consists of the digits of the integer portion of the `Rational` and the digits of the fractional portion. The integer-portion digits are ordered from least- to most-significant, and the fractional-portion digits from most- to least. The fractional-portion digits may end in infinitely many zeros or $(b-1)$s; these are handled correctly. ##### Worst-case complexity $T(n, m) = O(nm \log (nm)^2 \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `max(before_point.len(), after_point.component_len())`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; use malachite_base::vecs::vec_from_str; use malachite_nz::natural::Natural; use malachite_q::Rational; let before_point = vec_from_str("[3]").unwrap(); let after_point = RationalSequence::from_vecs( Vec::new(), vec_from_str("[1, 4, 2, 8, 5, 7]").unwrap(), ); assert_eq!( Rational::from_digits_ref(&Natural::from(10u32), &before_point, &after_point) .to_string(), "22/7" ); // 21.34565656... 
let before_point = vec_from_str("[1, 2]").unwrap(); let after_point = RationalSequence::from_vecs( vec_from_str("[3, 4]").unwrap(), vec_from_str("[5, 6]").unwrap(), ); assert_eq!( Rational::from_digits_ref(&Natural::from(10u32), &before_point, &after_point) .to_string(), "105661/4950" ); ``` ### impl Rational #### pub fn from_power_of_2_digits( log_base: u64, before_point: Vec<Natural, Global>, after_point: RationalSequence<Natural> ) -> Rational Converts base-$2^k$ digits to a `Rational`. The inputs are taken by value. The input consists of the digits of the integer portion of the `Rational` and the digits of the fractional portion. The integer-portion digits are ordered from least- to most-significant, and the fractional-portion digits from most- to least. The fractional-portion digits may end in infinitely many zeros or $(2^k-1)$s; these are handled correctly. ##### Worst-case complexity $T(n, m) = O(nm)$ $M(n, m) = O(nm)$ where $T$ is time, $M$ is additional memory, $n$ is `max(before_point.len(), after_point.component_len())`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; use malachite_base::vecs::vec_from_str; use malachite_q::Rational; let before_point = vec_from_str("[1, 1]").unwrap(); let after_point = RationalSequence::from_vecs( vec_from_str("[0]").unwrap(), vec_from_str("[0, 0, 1]").unwrap(), ); assert_eq!( Rational::from_power_of_2_digits(1, before_point, after_point).to_string(), "43/14" ); // 21.34565656..._32 let before_point = vec_from_str("[1, 2]").unwrap(); let after_point = RationalSequence::from_vecs( vec_from_str("[3, 4]").unwrap(), vec_from_str("[5, 6]").unwrap(), ); assert_eq!( Rational::from_power_of_2_digits(5, before_point, after_point).to_string(), "34096673/523776" ); ``` #### pub fn from_power_of_2_digits_ref( log_base: u64, before_point: &[Natural], after_point: &RationalSequence<Natural> ) -> Rational Converts base-$2^k$ digits to a `Rational`. The inputs are taken by reference. The input consists of the digits of the integer portion of the `Rational` and the digits of the fractional portion. The integer-portion digits are ordered from least- to most-significant, and the fractional-portion digits from most- to least. The fractional-portion digits may end in infinitely many zeros or $(2^k-1)$s; these are handled correctly. ##### Worst-case complexity $T(n, m) = O(nm)$ $M(n, m) = O(nm)$ where $T$ is time, $M$ is additional memory, $n$ is `max(before_point.len(), after_point.component_len())`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::rational_sequences::RationalSequence; use malachite_base::vecs::vec_from_str; use malachite_q::Rational; let before_point = vec_from_str("[1, 1]").unwrap(); let after_point = RationalSequence::from_vecs( vec_from_str("[0]").unwrap(), vec_from_str("[0, 0, 1]").unwrap(), ); assert_eq!( Rational::from_power_of_2_digits_ref(1, &before_point, &after_point).to_string(), "43/14" ); // 21.34565656..._32 let before_point = vec_from_str("[1, 2]").unwrap(); let after_point = RationalSequence::from_vecs( vec_from_str("[3, 4]").unwrap(), vec_from_str("[5, 6]").unwrap(), ); assert_eq!( Rational::from_power_of_2_digits_ref(5, &before_point, &after_point).to_string(), "34096673/523776" ); ``` ### impl Rational #### pub fn power_of_2_digits( &self, log_base: u64 ) -> (Vec<Natural, Global>, RationalPowerOf2Digits) Returns the base-$2^k$ digits of a `Rational`. 
The output has two components. The first is a `Vec` of the digits of the integer portion of the `Rational`, least- to most-significant. The second is an iterator of the digits of the fractional portion. The output is in its simplest form: the integer-portion digits do not end with a zero, and the fractional-portion digits do not end with infinitely many zeros or $(2^k-1)$s. If the `Rational` has a small denominator, it may be more efficient to use `to_power_of_2_digits` or `into_power_of_2_digits` instead. These functions compute the entire repeating portion of the repeating digits. For example, consider these two expressions: * `Rational::from_signeds(1, 7).power_of_2_digits(1).1.nth(1000)` * `Rational::from_signeds(1, 7).into_power_of_2_digits(1).1[1000]` Both get the thousandth digit after the binary point of `1/7`. The first way explicitly calculates each bit after the binary point, whereas the second way determines that `1/7` is `0.(001)`, with the `001` repeating, and takes `1000 % 3 == 1` to determine that the thousandth bit is 0. But when the `Rational` has a large denominator, the second way is less efficient. ##### Worst-case complexity per iteration $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), base)`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::iterators::prefix_to_string; use malachite_base::strings::ToDebugString; use malachite_q::Rational; let (before_point, after_point) = Rational::from(3u32).power_of_2_digits(1); assert_eq!(before_point.to_debug_string(), "[1, 1]"); assert_eq!(prefix_to_string(after_point, 10), "[]"); let (before_point, after_point) = Rational::from_signeds(22, 7).power_of_2_digits(1); assert_eq!(before_point.to_debug_string(), "[1, 1]"); assert_eq!(prefix_to_string(after_point, 10), "[0, 0, 1, 0, 0, 1, 0, 0, 1, 0, ...]"); let (before_point, after_point) = Rational::from_signeds(22, 7).power_of_2_digits(10); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!( prefix_to_string(after_point, 10), "[146, 292, 585, 146, 292, 585, 146, 292, 585, 146, ...]" ); ``` ### impl Rational #### pub fn into_digits( self, base: &Natural ) -> (Vec<Natural, Global>, RationalSequence<Natural>) Returns the base-$b$ digits of a `Rational`, taking the `Rational` by value. The output has two components. The first is a `Vec` of the digits of the integer portion of the `Rational`, least- to most-significant. The second is a `RationalSequence` of the digits of the fractional portion. The output is in its simplest form: the integer-portion digits do not end with a zero, and the fractional-portion digits do not end with infinitely many zeros or $(b-1)$s. The fractional portion may be very large; the length of the repeating part may be almost as large as the denominator. If the `Rational` has a large denominator, consider using `digits` instead, which returns an iterator. That function computes the fractional digits lazily and doesn’t need to compute the entire repeating part. ##### Worst-case complexity $T(n, m) = O(m2^n)$ $M(n, m) = O(m2^n)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `base` is less than 2. 
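As a rough illustration of how long that repeating part can get, here is a short sketch (it relies only on `into_digits` and the `component_len` method that the `from_digits` complexity bounds above refer to; 1/97 is a full-reptend fraction, so its decimal period has 96 digits):

```
use malachite_nz::natural::Natural;
use malachite_q::Rational;

// The decimal expansion of 1/97 repeats with period 96, so the eager form
// materializes a block nearly as long as the denominator.
let (_, after_point) = Rational::from_signeds(1, 97).into_digits(&Natural::from(10u32));
assert_eq!(after_point.component_len(), 96);
```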
##### Examples ``` use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; let (before_point, after_point) = Rational::from(3u32).into_digits(&Natural::from(10u32)); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!(after_point.to_string(), "[]"); let (before_point, after_point) = Rational::from_signeds(22, 7).into_digits(&Natural::from(10u32)); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!(after_point.to_string(), "[[1, 4, 2, 8, 5, 7]]"); ``` #### pub fn to_digits( &self, base: &Natural ) -> (Vec<Natural, Global>, RationalSequence<Natural>) Returns the base-$b$ digits of a `Rational`, taking the `Rational` by reference. The output has two components. The first is a `Vec` of the digits of the integer portion of the `Rational`, least- to most-significant. The second is a `RationalSequence` of the digits of the fractional portion. The output is in its simplest form: the integer-portion digits do not end with a zero, and the fractional-portion digits do not end with infinitely many zeros or $(b-1)$s. The fractional portion may be very large; the length of the repeating part may be almost as large as the denominator. If the `Rational` has a large denominator, consider using `digits` instead, which returns an iterator. That function computes the fractional digits lazily and doesn’t need to compute the entire repeating part. ##### Worst-case complexity $T(n, m) = O(m2^n)$ $M(n, m) = O(m2^n)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `base.significant_bits()`. ##### Panics Panics if `base` is less than 2. ##### Examples ``` use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; let (before_point, after_point) = Rational::from(3u32).to_digits(&Natural::from(10u32)); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!(after_point.to_string(), "[]"); let (before_point, after_point) = Rational::from_signeds(22, 7).to_digits(&Natural::from(10u32)); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!(after_point.to_string(), "[[1, 4, 2, 8, 5, 7]]"); ``` ### impl Rational #### pub fn into_power_of_2_digits( self, log_base: u64 ) -> (Vec<Natural, Global>, RationalSequence<Natural>) Returns the base-$2^k$ digits of a `Rational`, taking the `Rational` by value. The output has two components. The first is a `Vec` of the digits of the integer portion of the `Rational`, least- to most-significant. The second is a `RationalSequence` of the digits of the fractional portion. The output is in its simplest form: the integer-portion digits do not end with a zero, and the fractional-portion digits do not end with infinitely many zeros or $(2^k-1)$s. The fractional portion may be very large; the length of the repeating part may be almost as large as the denominator. If the `Rational` has a large denominator, consider using `power_of_2_digits` instead, which returns an iterator. That function computes the fractional digits lazily and doesn’t need to compute the entire repeating part. ##### Worst-case complexity $T(n, m) = O(m2^n)$ $M(n, m) = O(m2^n)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `log_base`. ##### Panics Panics if `log_base` is zero. 
##### Examples ``` use malachite_base::strings::ToDebugString; use malachite_q::Rational; let (before_point, after_point) = Rational::from(3u32).into_power_of_2_digits(1); assert_eq!(before_point.to_debug_string(), "[1, 1]"); assert_eq!(after_point.to_string(), "[]"); let (before_point, after_point) = Rational::from_signeds(22, 7).into_power_of_2_digits(1); assert_eq!(before_point.to_debug_string(), "[1, 1]"); assert_eq!(after_point.to_string(), "[[0, 0, 1]]"); let (before_point, after_point) = Rational::from_signeds(22, 7).into_power_of_2_digits(10); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!(after_point.to_string(), "[[146, 292, 585]]"); ``` #### pub fn to_power_of_2_digits( &self, log_base: u64 ) -> (Vec<Natural, Global>, RationalSequence<Natural>) Returns the base-$2^k$ digits of a `Rational`, taking the `Rational` by reference. The output has two components. The first is a `Vec` of the digits of the integer portion of the `Rational`, least- to most-significant. The second is a `RationalSequence` of the digits of the fractional portion. The output is in its simplest form: the integer-portion digits do not end with a zero, and the fractional-portion digits do not end with infinitely many zeros or $(2^k-1)$s. The fractional portion may be very large; the length of the repeating part may be almost as large as the denominator. If the `Rational` has a large denominator, consider using `power_of_2_digits` instead, which returns an iterator. That function computes the fractional digits lazily and doesn’t need to compute the entire repeating part. ##### Worst-case complexity $T(n, m) = O(m2^n)$ $M(n, m) = O(m2^n)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `log_base`. ##### Panics Panics if `log_base` is zero. ##### Examples ``` use malachite_base::strings::ToDebugString; use malachite_q::Rational; let (before_point, after_point) = Rational::from(3u32).to_power_of_2_digits(1); assert_eq!(before_point.to_debug_string(), "[1, 1]"); assert_eq!(after_point.to_string(), "[]"); let (before_point, after_point) = Rational::from_signeds(22, 7).to_power_of_2_digits(1); assert_eq!(before_point.to_debug_string(), "[1, 1]"); assert_eq!(after_point.to_string(), "[[0, 0, 1]]"); let (before_point, after_point) = Rational::from_signeds(22, 7).to_power_of_2_digits(10); assert_eq!(before_point.to_debug_string(), "[3]"); assert_eq!(after_point.to_string(), "[[146, 292, 585]]"); ``` ### impl Rational #### pub fn try_from_float_simplest<T>( x: T ) -> Result<Rational, RationalFromPrimitiveFloatError>where T: PrimitiveFloat, Rational: TryFrom<T, Error = RationalFromPrimitiveFloatError>, Converts a primitive float to the simplest `Rational` that rounds to that value. To be more specific: Suppose the floating-point input is $x$. If $x$ is an integer, its `Rational` equivalent is returned. Otherwise, this function finds $a$ and $b$, which are the floating point predecessor and successor of $x$, and finds the simplest `Rational` in the open interval $(\frac{x + a}{2}, \frac{x + b}{2})$. “Simplicity” refers to low complexity. See `Rational::cmp_complexity` for a definition of complexity. For example, `0.1f32` is converted to $1/10$ rather than to the exact value of the float, which is $13421773/134217728$. If you want the exact value, use `Rational::try_from` instead. If the floating-point value is NaN or infinite, an error is returned.
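For instance, the two conversions can be contrasted directly (a brief sketch; `try_from` here is the `TryFrom` impl named in the signature above, and the exact value is the one quoted in the preceding paragraph):

```
use malachite_q::Rational;

// Simplest Rational that rounds back to the float 0.1:
assert_eq!(Rational::try_from_float_simplest(0.1f32).unwrap().to_string(), "1/10");
// Exact value of the float 0.1, for comparison:
assert_eq!(Rational::try_from(0.1f32).unwrap().to_string(), "13421773/134217728");
```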
##### Worst-case complexity $T(n) = O(n^2 \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.sci_exponent()`. ##### Examples ``` use malachite_base::strings::ToDebugString; use malachite_q::conversion::from_primitive_float::RationalFromPrimitiveFloatError; use malachite_q::Rational; assert_eq!(Rational::try_from_float_simplest(0.0).to_debug_string(), "Ok(0)"); assert_eq!(Rational::try_from_float_simplest(1.5).to_debug_string(), "Ok(3/2)"); assert_eq!(Rational::try_from_float_simplest(-1.5).to_debug_string(), "Ok(-3/2)"); assert_eq!(Rational::try_from_float_simplest(0.1f32).to_debug_string(), "Ok(1/10)"); assert_eq!(Rational::try_from_float_simplest(0.33333334f32).to_debug_string(), "Ok(1/3)"); assert_eq!( Rational::try_from_float_simplest(f32::NAN), Err(RationalFromPrimitiveFloatError) ); ``` ### impl Rational #### pub const fn const_from_unsigneds( numerator: u64, denominator: u64 ) -> Option<RationalConverts two`Limb`s, representing a numerator and a denominator, to a `Rational`. If `denominator` is zero, `None` is returned. This function is const, so it may be used to define constants. When called at runtime, it is slower than `Rational::from_unsigneds`. ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Examples ``` use malachite_q::Rational; const TWO_THIRDS: Option<Rational> = Rational::const_from_unsigneds(2, 3); assert_eq!(TWO_THIRDS, Some(Rational::from_unsigneds(2u32, 3))); const TWO_THIRDS_ALT: Option<Rational> = Rational::const_from_unsigneds(22, 33); assert_eq!(TWO_THIRDS, Some(Rational::from_unsigneds(2u32, 3))); const ZERO_DENOMINATOR: Option<Rational> = Rational::const_from_unsigneds(22, 0); assert_eq!(ZERO_DENOMINATOR, None); ``` #### pub const fn const_from_signeds( numerator: i64, denominator: i64 ) -> Option<RationalConverts two`SignedLimb`s, representing a numerator and a denominator, to a `Rational`. If `denominator` is zero, `None` is returned. This function is const, so it may be used to define constants. When called at runtime, it is slower than `Rational::from_signeds`. ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Examples ``` use malachite_q::Rational; const NEGATIVE_TWO_THIRDS: Option<Rational> = Rational::const_from_signeds(-2, 3); assert_eq!(NEGATIVE_TWO_THIRDS, Some(Rational::from_signeds(-2, 3))); const NEGATIVE_TWO_THIRDS_ALT: Option<Rational> = Rational::const_from_signeds(-22, 33); assert_eq!(NEGATIVE_TWO_THIRDS, Some(Rational::from_signeds(-2, 3))); const ZERO_DENOMINATOR: Option<Rational> = Rational::const_from_signeds(-22, 0); assert_eq!(ZERO_DENOMINATOR, None); ``` #### pub fn from_naturals(numerator: Natural, denominator: Natural) -> Rational Converts two `Natural`s to a `Rational`, taking the `Natural`s by value. The `Natural`s become the `Rational`’s numerator and denominator. Only non-negative `Rational`s can be produced with this function. The denominator may not be zero. The input `Natural`s may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. 
##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Rational::from_naturals(Natural::from(4u32), Natural::from(6u32)).to_string(), "2/3" ); assert_eq!(Rational::from_naturals(Natural::ZERO, Natural::from(6u32)), 0); ``` #### pub fn from_naturals_ref(numerator: &Natural, denominator: &Natural) -> Rational Converts two `Natural`s to a `Rational`, taking the `Natural`s by reference. The `Natural`s become the `Rational`’s numerator and denominator. Only non-negative `Rational`s can be produced with this function. The denominator may not be zero. The input `Natural`s may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Rational::from_naturals_ref(&Natural::from(4u32), &Natural::from(6u32)).to_string(), "2/3" ); assert_eq!(Rational::from_naturals_ref(&Natural::ZERO, &Natural::from(6u32)), 0); ``` #### pub fn from_unsigneds<T>(numerator: T, denominator: T) -> Rationalwhere T: PrimitiveUnsigned, Natural: From<T>, Converts two unsigned primitive integers to a `Rational`. The integers become the `Rational`’s numerator and denominator. Only non-negative `Rational`s can be produced with this function. The denominator may not be zero. The input integers may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!(Rational::from_unsigneds(4u32, 6).to_string(), "2/3"); assert_eq!(Rational::from_unsigneds(0u32, 6), 0); ``` #### pub fn from_integers(numerator: Integer, denominator: Integer) -> Rational Converts two `Integer`s to a `Rational`, taking the `Integer`s by value. The absolute values of the `Integer`s become the `Rational`’s numerator and denominator. The sign of the `Rational` is the sign of the `Integer`s’ quotient. The denominator may not be zero. The input `Integer`s may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!( Rational::from_integers(Integer::from(4), Integer::from(6)).to_string(), "2/3" ); assert_eq!( Rational::from_integers(Integer::from(4), Integer::from(-6)).to_string(), "-2/3" ); assert_eq!(Rational::from_integers(Integer::ZERO, Integer::from(6)), 0); assert_eq!(Rational::from_integers(Integer::ZERO, Integer::from(-6)), 0); ``` #### pub fn from_integers_ref(numerator: &Integer, denominator: &Integer) -> Rational Converts two `Integer`s to a `Rational`, taking the `Integer`s by reference. The absolute values of the `Integer`s become the `Rational`’s numerator and denominator. 
The sign of the `Rational` is the sign of the `Integer`s’ quotient. The denominator may not be zero. The input `Integer`s may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!( Rational::from_integers_ref(&Integer::from(4), &Integer::from(6)).to_string(), "2/3" ); assert_eq!( Rational::from_integers_ref(&Integer::from(4), &Integer::from(-6)).to_string(), "-2/3" ); assert_eq!(Rational::from_integers_ref(&Integer::ZERO, &Integer::from(6)), 0); assert_eq!(Rational::from_integers_ref(&Integer::ZERO, &Integer::from(-6)), 0); ``` #### pub fn from_signeds<T>(numerator: T, denominator: T) -> Rationalwhere T: PrimitiveSigned, Integer: From<T>, Converts two signed primitive integers to a [`Rational]`. The absolute values of the integers become the `Rational`’s numerator and denominator. The sign of the `Rational` is the sign of the integers’ quotient. The denominator may not be zero. The input integers may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!(Rational::from_signeds(4i8, 6).to_string(), "2/3"); assert_eq!(Rational::from_signeds(4i8, -6).to_string(), "-2/3"); assert_eq!(Rational::from_signeds(0i8, 6), 0); assert_eq!(Rational::from_signeds(0i8, -6), 0); ``` #### pub fn from_sign_and_naturals( sign: bool, numerator: Natural, denominator: Natural ) -> Rational Converts a sign and two `Natural`s to a `Rational`, taking the `Natural`s by value. The `Natural`s become the `Rational`’s numerator and denominator, and the sign indicates whether the `Rational` should be non-negative. If the numerator is zero, then the `Rational` will be non-negative regardless of the sign. The denominator may not be zero. The input `Natural`s may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Rational::from_sign_and_naturals( true, Natural::from(4u32), Natural::from(6u32) ).to_string(), "2/3" ); assert_eq!( Rational::from_sign_and_naturals( false, Natural::from(4u32), Natural::from(6u32) ).to_string(), "-2/3" ); ``` #### pub fn from_sign_and_naturals_ref( sign: bool, numerator: &Natural, denominator: &Natural ) -> Rational Converts a sign and two `Natural`s to a `Rational`, taking the `Natural`s by reference. The `Natural`s become the `Rational`’s numerator and denominator, and the sign indicates whether the `Rational` should be non-negative. If the numerator is zero, then the `Rational` will be non-negative regardless of the sign. The denominator may not be zero. 
The input `Natural`s may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Rational::from_sign_and_naturals_ref( true, &Natural::from(4u32), &Natural::from(6u32) ).to_string(), "2/3" ); assert_eq!( Rational::from_sign_and_naturals_ref( false, &Natural::from(4u32), &Natural::from(6u32) ).to_string(), "-2/3" ); ``` #### pub fn from_sign_and_unsigneds<T>( sign: bool, numerator: T, denominator: T ) -> Rationalwhere T: PrimitiveUnsigned, Natural: From<T>, Converts a sign and two primitive unsigned integers to a `Rational`. The integers become the `Rational`’s numerator and denominator, and the sign indicates whether the `Rational` should be non-negative. If the numerator is zero, then the `Rational` will be non-negative regardless of the sign. The denominator may not be zero. The input integers may have common factors; this function reduces them. ##### Worst-case complexity $T(n) = O(n^2)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(numerator.significant_bits(), denominator.significant_bits())`. ##### Panics Panics if `denominator` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!(Rational::from_sign_and_unsigneds(true, 4u32, 6).to_string(), "2/3"); assert_eq!(Rational::from_sign_and_unsigneds(false, 4u32, 6).to_string(), "-2/3"); ``` ### impl Rational #### pub const fn const_from_unsigned(x: u64) -> Rational Converts a `Limb` to a `Rational`. This function is const, so it may be used to define constants. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_q::Rational; const TEN: Rational = Rational::const_from_unsigned(10); assert_eq!(TEN, 10); ``` #### pub const fn const_from_signed(x: i64) -> Rational Converts a `SignedLimb` to a `Rational`. This function is const, so it may be used to define constants. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_q::Rational; const TEN: Rational = Rational::const_from_signed(10); assert_eq!(TEN, 10); const NEGATIVE_TEN: Rational = Rational::const_from_signed(-10); assert_eq!(NEGATIVE_TEN, -10); ``` ### impl Rational #### pub fn sci_mantissa_and_exponent_round<T>( self, rm: RoundingMode ) -> Option<(T, i64, Ordering)>where T: PrimitiveFloat, Returns a `Rational`’s scientific mantissa and exponent, taking the `Rational` by value. An `Ordering` is also returned, indicating whether the returned mantissa and exponent represent a value that is less than, equal to, or greater than the absolute value of the `Rational`. The `Rational`’s sign is ignored. This means that, for example, that rounding using `Floor` is equivalent to rounding using `Down`, even if the `Rational is negative. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the provided rounding mode. If the rounding mode is `Exact` but the conversion is not exact, `None` is returned. 
$$ f(x, r) \approx \left (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor\right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SciMantissaAndExponent; use malachite_base::num::float::NiceFloat; use malachite_base::rounding_modes::RoundingMode; use malachite_q::Rational; use std::cmp::Ordering; let test = |n: Rational, rm: RoundingMode, out: Option<(f32, i64, Ordering)>| { assert_eq!( n.sci_mantissa_and_exponent_round(rm) .map(|(m, e, o)| (NiceFloat(m), e, o)), out.map(|(m, e, o)| (NiceFloat(m), e, o)) ); }; test(Rational::from(3u32), RoundingMode::Down, Some((1.5, 1, Ordering::Equal))); test(Rational::from(3u32), RoundingMode::Ceiling, Some((1.5, 1, Ordering::Equal))); test(Rational::from(3u32), RoundingMode::Up, Some((1.5, 1, Ordering::Equal))); test(Rational::from(3u32), RoundingMode::Nearest, Some((1.5, 1, Ordering::Equal))); test(Rational::from(3u32), RoundingMode::Exact, Some((1.5, 1, Ordering::Equal))); test( Rational::from_signeds(1, 3), RoundingMode::Floor, Some((1.3333333, -2, Ordering::Less)) ); test( Rational::from_signeds(1, 3), RoundingMode::Down, Some((1.3333333, -2, Ordering::Less)) ); test( Rational::from_signeds(1, 3), RoundingMode::Ceiling, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(1, 3), RoundingMode::Up, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(1, 3), RoundingMode::Nearest, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(1, 3), RoundingMode::Exact, None ); test( Rational::from_signeds(-1, 3), RoundingMode::Floor, Some((1.3333333, -2, Ordering::Less)) ); test( Rational::from_signeds(-1, 3), RoundingMode::Down, Some((1.3333333, -2, Ordering::Less)) ); test( Rational::from_signeds(-1, 3), RoundingMode::Ceiling, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(-1, 3), RoundingMode::Up, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(-1, 3), RoundingMode::Nearest, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(-1, 3), RoundingMode::Exact, None ); ``` #### pub fn sci_mantissa_and_exponent_round_ref<T>( &self, rm: RoundingMode ) -> Option<(T, i64, Ordering)>where T: PrimitiveFloat, Returns a `Rational`’s scientific mantissa and exponent, taking the `Rational` by reference. An `Ordering` is also returned, indicating whether the returned mantissa and exponent represent a value that is less than, equal to, or greater than the original value. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the provided rounding mode. If the rounding mode is `Exact` but the conversion is not exact, `None` is returned. $$ f(x, r) \approx \left (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor\right ). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::conversion::traits::SciMantissaAndExponent; use malachite_base::num::float::NiceFloat; use malachite_base::rounding_modes::RoundingMode; use malachite_q::Rational; use std::cmp::Ordering; let test = |n: Rational, rm: RoundingMode, out: Option<(f32, i64, Ordering)>| { assert_eq!( n.sci_mantissa_and_exponent_round_ref(rm) .map(|(m, e, o)| (NiceFloat(m), e, o)), out.map(|(m, e, o)| (NiceFloat(m), e, o)) ); }; test(Rational::from(3u32), RoundingMode::Down, Some((1.5, 1, Ordering::Equal))); test(Rational::from(3u32), RoundingMode::Ceiling, Some((1.5, 1, Ordering::Equal))); test(Rational::from(3u32), RoundingMode::Up, Some((1.5, 1, Ordering::Equal))); test(Rational::from(3u32), RoundingMode::Nearest, Some((1.5, 1, Ordering::Equal))); test(Rational::from(3u32), RoundingMode::Exact, Some((1.5, 1, Ordering::Equal))); test( Rational::from_signeds(1, 3), RoundingMode::Floor, Some((1.3333333, -2, Ordering::Less)) ); test( Rational::from_signeds(1, 3), RoundingMode::Down, Some((1.3333333, -2, Ordering::Less)) ); test( Rational::from_signeds(1, 3), RoundingMode::Ceiling, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(1, 3), RoundingMode::Up, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(1, 3), RoundingMode::Nearest, Some((1.3333334, -2, Ordering::Greater)) ); test( Rational::from_signeds(1, 3), RoundingMode::Exact, None ); ``` ### impl Rational #### pub fn mutate_numerator<F, T>(&mut self, f: F) -> Twhere F: FnOnce(&mut Natural) -> T, Mutates the numerator of a `Rational` using a provided closure, and then returns whatever the closure returns. After the closure executes, this function reduces the `Rational`. ##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; use malachite_q::Rational; let mut q = Rational::from_signeds(22, 7); let ret = q.mutate_numerator(|x| { *x -= Natural::ONE; true }); assert_eq!(q, 3); assert_eq!(ret, true); ``` #### pub fn mutate_denominator<F, T>(&mut self, f: F) -> Twhere F: FnOnce(&mut Natural) -> T, Mutates the denominator of a `Rational` using a provided closure. After the closure executes, this function reduces the `Rational`. ##### Panics Panics if the closure sets the denominator to zero. ##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; use malachite_q::Rational; let mut q = Rational::from_signeds(22, 7); let ret = q.mutate_denominator(|x| { *x -= Natural::ONE; true }); assert_eq!(q.to_string(), "11/3"); assert_eq!(ret, true); ``` #### pub fn mutate_numerator_and_denominator<F, T>(&mut self, f: F) -> Twhere F: FnOnce(&mut Natural, &mut Natural) -> T, Mutates the numerator and denominator of a `Rational` using a provided closure. After the closure executes, this function reduces the `Rational`. ##### Panics Panics if the closure sets the denominator to zero. ##### Examples ``` use malachite_base::num::basic::traits::One; use malachite_nz::natural::Natural; use malachite_q::Rational; let mut q = Rational::from_signeds(22, 7); let ret = q.mutate_numerator_and_denominator(|x, y| { *x -= Natural::ONE; *y -= Natural::ONE; true }); assert_eq!(q.to_string(), "7/2"); assert_eq!(ret, true); ``` ### impl Rational #### pub fn from_sci_string_simplest_with_options( s: &str, options: FromSciStringOptions ) -> Option<RationalConverts a string, possibly in scientfic notation, to a `Rational`. 
This function finds the simplest `Rational` which rounds to the target string according to the precision implied by the string. Use `FromSciStringOptions` to specify the base (from 2 to 36, inclusive). The rounding mode option is ignored. If the base is greater than 10, the higher digits are represented by the letters `'a'` through `'z'` or `'A'` through `'Z'`; the case doesn’t matter and doesn’t need to be consistent. Exponents are allowed, and are indicated using the character `'e'` or `'E'`. If the base is 15 or greater, an ambiguity arises where it may not be clear whether `'e'` is a digit or an exponent indicator. To resolve this ambiguity, always use a `'+'` or `'-'` sign after the exponent indicator when the base is 15 or greater. The exponent itself is always parsed using base 10. Decimal (or other-base) points are allowed. If the string is unparseable, `None` is returned. Here’s a more precise description of the function’s behavior. Suppose we are using base $b$, and the literal value of the string (as parsed by `from_sci_string`) is $q$, and the implied scale is $s$ (meaning $s$ digits are provided after the point; if the string is `"123.456"`, then $s$ is 3). Then this function computes $\epsilon = b^{-s}/2$ and finds the simplest `Rational` in the closed interval $[q - \epsilon, q + \epsilon]$. The simplest `Rational` is the one with minimal denominator; if there are multiple such `Rational`s, the one with the smallest absolute numerator is chosen. The following discussion assumes base 10. This method allows the function to convert `"0.333"` to $1/3$, since $1/3$ is the simplest `Rational` in the interval $[0.3325, 0.3335]$. But note that if the scale of the input is low, some unexpected behavior may occur. For example, `"0.1"` will be converted into $1/7$ rather than $1/10$, since $1/7$ is the simplest `Rational` in $[0.05, 0.15]$. If you’d prefer that result be $1/10$, you have a few options: * Use `from_sci_string_with_options` instead. This function interprets its input literally; it converts `"0.333"` to $333/1000$. * Increase the scale of the input; `"0.10"` is converted to $1/10$. * Use `from_sci_string_with_options`, and round the result manually using functions like `round_to_multiple` and `simplest_rational_in_closed_interval`. ##### Worst-case complexity $T(n, m) = O(m^n n \log m (\log n + \log\log m))$ $M(n, m) = O(m^n n \log m)$ where $T$ is time, $M$ is additional memory, $n$ is `s.len()`, and $m$ is `options.base`. ##### Examples ``` use malachite_base::num::conversion::string::options::FromSciStringOptions; use malachite_base::rounding_modes::RoundingMode; use malachite_q::Rational; let mut options = FromSciStringOptions::default(); options.set_base(16); assert_eq!(Rational::from_sci_string_simplest_with_options("ff", options).unwrap(), 255); assert_eq!( Rational::from_sci_string_simplest_with_options("ffE+5", options).unwrap(), 267386880 ); // 1/4105 is 0.000ff705..._16 assert_eq!( Rational::from_sci_string_simplest_with_options("ffE-5", options).unwrap().to_string(), "1/4105" ); ``` #### pub fn from_sci_string_simplest(s: &str) -> Option<RationalConverts a string, possibly in scientfic notation, to a `Rational`. This function finds the simplest `Rational` which rounds to the target string according to the precision implied by the string. The string is parsed using base 10. To use other bases, try `from_sci_string_simplest_with_options` instead. Exponents are allowed, and are indicated using the character `'e'` or `'E'`. 
The exponent itself is also parsed using base 10. Decimal points are allowed. If the string is unparseable, `None` is returned. Here’s a more precise description of the function’s behavior. Suppose that the literal value of the string (as parsed by `from_sci_string`) is $q$, and the implied scale is $s$ (meaning $s$ digits are provided after the point; if the string is `"123.456"`, then $s$ is 3). Then this function computes $\epsilon = 10^{-s}/2$ and finds the simplest `Rational` in the closed interval $[q - \epsilon, q + \epsilon]$. The simplest `Rational` is the one with minimal denominator; if there are multiple such `Rational`s, the one with the smallest absolute numerator is chosen. This method allows the function to convert `"0.333"` to $1/3$, since $1/3$ is the simplest `Rational` in the interval $[0.3325, 0.3335]$. But note that if the scale of the input is low, some unexpected behavior may occur. For example, `"0.1"` will be converted into $1/7$ rather than $1/10$, since $1/7$ is the simplest `Rational` in $[0.05, 0.15]$. If you’d prefer that result be $1/10$, you have a few options: * Use `from_sci_string` instead. This function interprets its input literally; it converts `"0.333"` to $333/1000$. * Increase the scale of the input; `"0.10"` is converted to $1/10$. * Use `from_sci_string`, and round the result manually using functions like `round_to_multiple` and `simplest_rational_in_closed_interval`. ##### Worst-case complexity $T(n) = O(10^n n \log n)$ $M(n) = O(10^n n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. ##### Examples ``` use malachite_q::Rational; assert_eq!(Rational::from_sci_string_simplest("123").unwrap(), 123); assert_eq!(Rational::from_sci_string_simplest("0.1").unwrap().to_string(), "1/7"); assert_eq!(Rational::from_sci_string_simplest("0.10").unwrap().to_string(), "1/10"); assert_eq!(Rational::from_sci_string_simplest("0.333").unwrap().to_string(), "1/3"); assert_eq!(Rational::from_sci_string_simplest("1.2e5").unwrap(), 120000); assert_eq!(Rational::from_sci_string_simplest("1.2e-5").unwrap().to_string(), "1/80000"); ``` ### impl Rational #### pub fn length_after_point_in_small_base(&self, base: u8) -> Option<u64When expanding a `Rational` in a small base $b$, determines how many digits after the decimal (or other-base) point are in the base-$b$ expansion. If the expansion is non-terminating, this method returns `None`. This happens iff the `Rational`’s denominator has prime factors not present in $b$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `base` is less than 2 or greater than 36. ##### Examples ``` use malachite_q::Rational; // 3/8 is 0.375 in base 10. assert_eq!(Rational::from_signeds(3, 8).length_after_point_in_small_base(10), Some(3)); // 1/20 is 0.05 in base 10. assert_eq!(Rational::from_signeds(1, 20).length_after_point_in_small_base(10), Some(2)); // 1/7 is non-terminating in base 10. assert_eq!(Rational::from_signeds(1, 7).length_after_point_in_small_base(10), None); // 1/7 is 0.3 in base 21. assert_eq!(Rational::from_signeds(1, 7).length_after_point_in_small_base(21), Some(1)); ``` ### impl Rational #### pub fn to_numerator(&self) -> Natural Extracts the numerator of a `Rational`, taking the `Rational` by reference and cloning. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples ``` use malachite_q::Rational; use std::str::FromStr; assert_eq!(Rational::from_str("2/3").unwrap().to_numerator(), 2); assert_eq!(Rational::from_str("0").unwrap().to_numerator(), 0); ``` #### pub fn to_denominator(&self) -> Natural Extracts the denominator of a `Rational`, taking the `Rational` by reference and cloning. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_q::Rational; use std::str::FromStr; assert_eq!(Rational::from_str("2/3").unwrap().to_denominator(), 3); assert_eq!(Rational::from_str("0").unwrap().to_denominator(), 1); ``` #### pub fn to_numerator_and_denominator(&self) -> (Natural, Natural) Extracts the numerator and denominator of a `Rational`, taking the `Rational` by reference and cloning. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::strings::ToDebugString; use malachite_q::Rational; use std::str::FromStr; assert_eq!( Rational::from_str("2/3").unwrap().to_numerator_and_denominator().to_debug_string(), "(2, 3)" ); assert_eq!( Rational::from_str("0").unwrap().to_numerator_and_denominator().to_debug_string(), "(0, 1)" ); ``` #### pub fn into_numerator(self) -> Natural Extracts the numerator of a `Rational`, taking the `Rational` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_q::Rational; use std::str::FromStr; assert_eq!(Rational::from_str("2/3").unwrap().into_numerator(), 2); assert_eq!(Rational::from_str("0").unwrap().into_numerator(), 0); ``` #### pub fn into_denominator(self) -> Natural Extracts the denominator of a `Rational`, taking the `Rational` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_q::Rational; use std::str::FromStr; assert_eq!(Rational::from_str("2/3").unwrap().into_denominator(), 3); assert_eq!(Rational::from_str("0").unwrap().into_denominator(), 1); ``` #### pub fn into_numerator_and_denominator(self) -> (Natural, Natural) Extracts the numerator and denominator of a `Rational`, taking the `Rational` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::strings::ToDebugString; use malachite_q::Rational; use std::str::FromStr; assert_eq!( Rational::from_str("2/3").unwrap().into_numerator_and_denominator().to_debug_string(), "(2, 3)" ); assert_eq!( Rational::from_str("0").unwrap().into_numerator_and_denominator().to_debug_string(), "(0, 1)" ); ``` #### pub const fn numerator_ref(&self) -> &Natural Returns a reference to the numerator of a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_q::Rational; use std::str::FromStr; assert_eq!(*Rational::from_str("2/3").unwrap().numerator_ref(), 2); assert_eq!(*Rational::from_str("0").unwrap().numerator_ref(), 0); ``` #### pub const fn denominator_ref(&self) -> &Natural Returns a reference to the denominator of a `Rational`. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_q::Rational; use std::str::FromStr; assert_eq!(*Rational::from_str("2/3").unwrap().denominator_ref(), 3); assert_eq!(*Rational::from_str("0").unwrap().denominator_ref(), 1); ``` #### pub const fn numerator_and_denominator_ref(&self) -> (&Natural, &Natural) Returns references to the numerator and denominator of a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::strings::ToDebugString; use malachite_q::Rational; use std::str::FromStr; assert_eq!( Rational::from_str("2/3").unwrap().numerator_and_denominator_ref().to_debug_string(), "(2, 3)" ); assert_eq!( Rational::from_str("0").unwrap().numerator_and_denominator_ref().to_debug_string(), "(0, 1)" ); ``` Trait Implementations --- ### impl<'a> Abs for &'a Rational #### fn abs(self) -> Rational Takes the absolute value of a `Rational`, taking the `Rational` by reference. $$ f(x) = |x|. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Abs; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!((&Rational::ZERO).abs(), 0); assert_eq!((&Rational::from_signeds(22, 7)).abs().to_string(), "22/7"); assert_eq!((&Rational::from_signeds(-22, 7)).abs().to_string(), "22/7"); ``` #### type Output = Rational ### impl Abs for Rational #### fn abs(self) -> Rational Takes the absolute value of a `Rational`, taking the `Rational` by value. $$ f(x) = |x|. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Abs; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!(Rational::ZERO.abs(), 0); assert_eq!(Rational::from_signeds(22, 7).abs().to_string(), "22/7"); assert_eq!(Rational::from_signeds(-22, 7).abs().to_string(), "22/7"); ``` #### type Output = Rational ### impl AbsAssign for Rational #### fn abs_assign(&mut self) Replaces a `Rational` with its absolute value. $$ x \gets |x|. $$ ##### Examples ``` use malachite_base::num::arithmetic::traits::AbsAssign; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; let mut x = Rational::ZERO; x.abs_assign(); assert_eq!(x, 0); let mut x = Rational::from_signeds(22, 7); x.abs_assign(); assert_eq!(x.to_string(), "22/7"); let mut x = Rational::from_signeds(-22, 7); x.abs_assign(); assert_eq!(x.to_string(), "22/7"); ``` ### impl<'a, 'b> Add<&'a Rational> for &'b Rational #### fn add(self, other: &'a Rational) -> Rational Adds two `Rational`s, taking both by reference. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; assert_eq!(&Rational::ONE_HALF + &Rational::ONE_HALF, 1); assert_eq!( (&Rational::from_signeds(22, 7) + &Rational::from_signeds(99, 100)).to_string(), "2893/700" ); ``` #### type Output = Rational The resulting type after applying the `+` operator. ### impl<'a> Add<&'a Rational> for Rational #### fn add(self, other: &'a Rational) -> Rational Adds two `Rational`s, taking the first by value and the second by reference. $$ f(x, y) = x + y.
$$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; assert_eq!(Rational::ONE_HALF + &Rational::ONE_HALF, 1); assert_eq!( (Rational::from_signeds(22, 7) + &Rational::from_signeds(99, 100)).to_string(), "2893/700" ); ``` #### type Output = Rational The resulting type after applying the `+` operator.### impl<'a> Add<Rational> for &'a Rational #### fn add(self, other: Rational) -> Rational Adds two `Rational`s, taking the first by reference and the second by value $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; assert_eq!(&Rational::ONE_HALF + Rational::ONE_HALF, 1); assert_eq!( (&Rational::from_signeds(22, 7) + Rational::from_signeds(99, 100)).to_string(), "2893/700" ); ``` #### type Output = Rational The resulting type after applying the `+` operator.### impl Add<Rational> for Rational #### fn add(self, other: Rational) -> Rational Adds two `Rational`s, taking both by value. $$ f(x, y) = x + y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; assert_eq!(Rational::ONE_HALF + Rational::ONE_HALF, 1); assert_eq!( (Rational::from_signeds(22, 7) + Rational::from_signeds(99, 100)).to_string(), "2893/700" ); ``` #### type Output = Rational The resulting type after applying the `+` operator.### impl<'a> AddAssign<&'a Rational> for Rational #### fn add_assign(&mut self, other: &'a Rational) Adds a `Rational` to a `Rational` in place, taking the `Rational` on the right-hand side by reference. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; let mut x = Rational::ONE_HALF; x += &Rational::ONE_HALF; assert_eq!(x, 1); let mut x = Rational::from_signeds(22, 7); x += &Rational::from_signeds(99, 100); assert_eq!(x.to_string(), "2893/700"); ``` ### impl AddAssign<Rational> for Rational #### fn add_assign(&mut self, other: Rational) Adds a `Rational` to a `Rational` in place, taking the `Rational` on the right-hand side by value. $$ x \gets x + y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; let mut x = Rational::ONE_HALF; x += Rational::ONE_HALF; assert_eq!(x, 1); let mut x = Rational::from_signeds(22, 7); x += Rational::from_signeds(99, 100); assert_eq!(x.to_string(), "2893/700"); ``` ### impl<'a> Approximate for &'a Rational #### fn approximate(self, max_denominator: &Natural) -> Rational Finds the best approximation of a `Rational` using a denominator no greater than a specified maximum, taking the `Rational` by reference. Let $f(x, d) = p/q$, with $p$ and $q$ relatively prime. Then the following properties hold: * $q \leq d$ * For all $n \in \Z$ and all $m \in \Z$ with $0 < m \leq d$, $|x - p/q| \leq |x - n/m|$. * If $|x - n/m| = |x - p/q|$, then $q \leq m$. * If $|x - n/q| = |x - p/q|$, then $p$ is even and $n$ is either equal to $p$ or odd. ##### Worst-case complexity $T(n) = O(n^2 \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics * If `max_denominator` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::ExactFrom; use malachite_nz::natural::Natural; use malachite_q::arithmetic::traits::Approximate; use malachite_q::Rational; assert_eq!( (&Rational::exact_from(std::f64::consts::PI)).approximate(&Natural::from(1000u32)) .to_string(), "355/113" ); assert_eq!( (&Rational::from_signeds(333i32, 1000)).approximate(&Natural::from(100u32)) .to_string(), "1/3" ); ``` ##### Implementation notes This algorithm follows the description in https://en.wikipedia.org/wiki/Continued_fraction#Best_rational_approximations. One part of the algorithm not mentioned in that article is that if the last term $n$ in the continued fraction needs to be reduced, the optimal replacement term $m$ may be found using division. ### impl Approximate for Rational #### fn approximate(self, max_denominator: &Natural) -> Rational Finds the best approximation of a `Rational` using a denominator no greater than a specified maximum, taking the `Rational` by value. Let $f(x, d) = p/q$, with $p$ and $q$ relatively prime. Then the following properties hold: * $q \leq d$ * For all $n \in \Z$ and all $m \in \Z$ with $0 < m \leq d$, $|x - p/q| \leq |x - n/m|$. * If $|x - n/m| = |x - p/q|$, then $q \leq m$. * If $|x - n/q| = |x - p/q|$, then $p$ is even and $n$ is either equal to $p$ or odd. ##### Worst-case complexity $T(n) = O(n^2 \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics * If `max_denominator` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::ExactFrom; use malachite_nz::natural::Natural; use malachite_q::arithmetic::traits::Approximate; use malachite_q::Rational; assert_eq!( Rational::exact_from(std::f64::consts::PI).approximate(&Natural::from(1000u32)) .to_string(), "355/113" ); assert_eq!( Rational::from_signeds(333i32, 1000).approximate(&Natural::from(100u32)).to_string(), "1/3" ); ``` ##### Implementation notes This algorithm follows the description in https://en.wikipedia.org/wiki/Continued_fraction#Best_rational_approximations. One part of the algorithm not mentioned in that article is that if the last term $n$ in the continued fraction needs to be reduced, the optimal replacement term $m$ may be found using division. 
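Putting the two examples above side by side, here is a brief sketch (reusing only calls already shown; 22/7 is the classical best approximation of π with denominator at most 10) of how tightening the denominator bound refines the result:

```
use malachite_base::num::conversion::traits::ExactFrom;
use malachite_nz::natural::Natural;
use malachite_q::arithmetic::traits::Approximate;
use malachite_q::Rational;

let pi = Rational::exact_from(std::f64::consts::PI);
// A loose bound admits only the classical 22/7; a tighter one reaches 355/113.
assert_eq!((&pi).approximate(&Natural::from(10u32)).to_string(), "22/7");
assert_eq!((&pi).approximate(&Natural::from(1000u32)).to_string(), "355/113");
```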
### impl ApproximateAssign for Rational #### fn approximate_assign(&mut self, max_denominator: &Natural) Finds the best approximation of a `Rational` using a denominator no greater than a specified maximum, mutating the `Rational` in place. See `Rational::approximate` for more information. ##### Worst-case complexity $T(n) = O(n^2 \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics * If `max_denominator` is zero. ##### Examples ``` use malachite_base::num::conversion::traits::ExactFrom; use malachite_nz::natural::Natural; use malachite_q::arithmetic::traits::ApproximateAssign; use malachite_q::Rational; let mut x = Rational::exact_from(std::f64::consts::PI); x.approximate_assign(&Natural::from(1000u32)); assert_eq!(x.to_string(), "355/113"); let mut x = Rational::from_signeds(333i32, 1000); x.approximate_assign(&Natural::from(100u32)); assert_eq!(x.to_string(), "1/3"); ``` ### impl<'a> Ceiling for &'a Rational #### fn ceiling(self) -> Integer Finds the ceiling of a `Rational`, taking the `Rational` by reference. $$ f(x) = \lceil x \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Ceiling; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!((&Rational::ZERO).ceiling(), 0); assert_eq!((&Rational::from_signeds(22, 7)).ceiling(), 4); assert_eq!((&Rational::from_signeds(-22, 7)).ceiling(), -3); ``` #### type Output = Integer ### impl Ceiling for Rational #### fn ceiling(self) -> Integer Finds the ceiling of a `Rational`, taking the `Rational` by value. $$ f(x) = \lceil x \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Ceiling; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!(Rational::ZERO.ceiling(), 0); assert_eq!(Rational::from_signeds(22, 7).ceiling(), 4); assert_eq!(Rational::from_signeds(-22, 7).ceiling(), -3); ``` #### type Output = Integer ### impl CeilingAssign for Rational #### fn ceiling_assign(&mut self) Replaces a `Rational` with its ceiling. $$ x \gets \lceil x \rceil. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CeilingAssign; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; let mut x = Rational::ZERO; x.ceiling_assign(); assert_eq!(x, 0); let mut x = Rational::from_signeds(22, 7); x.ceiling_assign(); assert_eq!(x, 4); let mut x = Rational::from_signeds(-22, 7); x.ceiling_assign(); assert_eq!(x, -3); ``` ### impl<'a, 'b> CeilingLogBase<&'b Rational> for &'a Rational #### fn ceiling_log_base(self, base: &Rational) -> i64 Returns the ceiling of the base-$b$ logarithm of a positive `Rational`. Note that this function may be slow if the base is very close to 1. $f(x, b) = \lceil\log_b x\rceil$. 
##### Worst-case complexity

$T(n, m) = O(nm \log (nm) \log\log (nm))$

$M(n, m) = O(nm \log (nm))$

where $T$ is time, $M$ is additional memory, $n$ is `base.significant_bits()`, and $m$ is $|\log_b x|$, where $b$ is `base` and $x$ is `self`.

##### Panics

Panics if `self` is less than or equal to zero or `base` is 1.

##### Examples

```
use malachite_base::num::arithmetic::traits::CeilingLogBase;
use malachite_q::Rational;

assert_eq!(Rational::from(80u32).ceiling_log_base(&Rational::from(3u32)), 4);
assert_eq!(Rational::from(81u32).ceiling_log_base(&Rational::from(3u32)), 4);
assert_eq!(Rational::from(82u32).ceiling_log_base(&Rational::from(3u32)), 5);
assert_eq!(Rational::from(4294967296u64).ceiling_log_base(&Rational::from(10u32)), 10);
assert_eq!(
    Rational::from_signeds(936851431250i64, 1397).ceiling_log_base(&Rational::from(10u32)),
    9
);
assert_eq!(
    Rational::from_signeds(5153632, 16807)
        .ceiling_log_base(&Rational::from_signeds(22, 7)),
    5
);
```

#### type Output = i64

### impl<'a> CeilingLogBase2 for &'a Rational

#### fn ceiling_log_base_2(self) -> i64

Returns the ceiling of the base-2 logarithm of a positive `Rational`.

$f(x) = \lceil\log_2 x\rceil$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Panics

Panics if `self` is less than or equal to zero.

##### Examples

```
use malachite_base::num::arithmetic::traits::CeilingLogBase2;
use malachite_q::Rational;

assert_eq!(Rational::from(3u32).ceiling_log_base_2(), 2);
assert_eq!(Rational::from_signeds(1, 3).ceiling_log_base_2(), -1);
assert_eq!(Rational::from_signeds(1, 4).ceiling_log_base_2(), -2);
assert_eq!(Rational::from_signeds(1, 5).ceiling_log_base_2(), -2);
```

#### type Output = i64

### impl<'a> CeilingLogBasePowerOf2<i64> for &'a Rational

#### fn ceiling_log_base_power_of_2(self, pow: i64) -> i64

Returns the ceiling of the base-$2^k$ logarithm of a positive `Rational`. $k$ may be negative.

$f(x, p) = \lceil\log_{2^p} x\rceil$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Panics

Panics if `self` is less than or equal to 0 or `pow` is 0.

##### Examples

```
use malachite_base::num::arithmetic::traits::CeilingLogBasePowerOf2;
use malachite_q::Rational;

assert_eq!(Rational::from(100).ceiling_log_base_power_of_2(2), 4);
assert_eq!(Rational::from(4294967296u64).ceiling_log_base_power_of_2(8), 4);

// 4^(-2) < 1/10 < 4^(-1)
assert_eq!(Rational::from_signeds(1, 10).ceiling_log_base_power_of_2(2), -1);
// (1/4)^2 < 1/10 < (1/4)^1
assert_eq!(Rational::from_signeds(1, 10).ceiling_log_base_power_of_2(-2), 2);
```

#### type Output = i64

### impl<'a, 'b> CheckedDiv<&'a Rational> for &'b Rational

#### fn checked_div(self, other: &'a Rational) -> Option<Rational>

Divides a `Rational` by another `Rational`, taking both by reference. Returns `None` when the second `Rational` is zero, `Some` otherwise.

$$
f(x, y) = \frac{x}{y}.
$$

##### Worst-case complexity

$T(n) = O(n (\log n)^2 \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.
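Because `checked_div` reports division by zero as `None` instead of panicking, it composes with ordinary `Option` handling. A minimal sketch, in which the helper `ratio_or_zero` is a hypothetical name rather than part of the crate:

```
use malachite_base::num::arithmetic::traits::CheckedDiv;
use malachite_base::num::basic::traits::Zero;
use malachite_q::Rational;

// Hypothetical helper: divide, mapping division by zero to zero instead of panicking.
fn ratio_or_zero(x: &Rational, y: &Rational) -> Rational {
    x.checked_div(y).unwrap_or(Rational::ZERO)
}

assert_eq!(
    ratio_or_zero(&Rational::from_signeds(22, 7), &Rational::from_signeds(99, 100)).to_string(),
    "200/63"
);
assert_eq!(ratio_or_zero(&Rational::from(5), &Rational::ZERO), 0);
```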
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedDiv; use malachite_base::num::basic::traits::{Two, Zero}; use malachite_q::Rational; assert_eq!((&Rational::TWO).checked_div(&Rational::TWO).unwrap(), 1); assert_eq!((&Rational::TWO).checked_div(&Rational::ZERO), None); assert_eq!( (&Rational::from_signeds(22, 7)).checked_div(&Rational::from_signeds(99, 100)).unwrap() .to_string(), "200/63" ); ``` #### type Output = Rational ### impl<'a> CheckedDiv<&'a Rational> for Rational #### fn checked_div(self, other: &'a Rational) -> Option<RationalDivides a `Rational` by another `Rational`, taking the first by value and the second by reference. Returns `None` when the second `Rational` is zero, `Some` otherwise. $$ f(x, y) = \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedDiv; use malachite_base::num::basic::traits::{Two, Zero}; use malachite_q::Rational; assert_eq!(Rational::TWO.checked_div(&Rational::TWO).unwrap(), 1); assert_eq!(Rational::TWO.checked_div(&Rational::ZERO), None); assert_eq!( (Rational::from_signeds(22, 7).checked_div(&Rational::from_signeds(99, 100))).unwrap() .to_string(), "200/63" ); ``` #### type Output = Rational ### impl<'a> CheckedDiv<Rational> for &'a Rational #### fn checked_div(self, other: Rational) -> Option<RationalDivides a `Rational` by another `Rational`, taking the first by reference and the second by value. Returns `None` when the second `Rational` is zero, `Some` otherwise. $$ f(x, y) = \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedDiv; use malachite_base::num::basic::traits::{Two, Zero}; use malachite_q::Rational; assert_eq!((&Rational::TWO).checked_div(Rational::TWO).unwrap(), 1); assert_eq!((&Rational::TWO).checked_div(Rational::ZERO), None); assert_eq!( (&Rational::from_signeds(22, 7)).checked_div(Rational::from_signeds(99, 100)).unwrap() .to_string(), "200/63" ); ``` #### type Output = Rational ### impl CheckedDiv<Rational> for Rational #### fn checked_div(self, other: Rational) -> Option<RationalDivides a `Rational` by another `Rational`, taking both by value. Returns `None` when the second `Rational` is zero, `Some` otherwise. $$ f(x, y) = \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedDiv; use malachite_base::num::basic::traits::{Two, Zero}; use malachite_q::Rational; assert_eq!(Rational::TWO.checked_div(Rational::TWO).unwrap(), 1); assert_eq!(Rational::TWO.checked_div(Rational::ZERO), None); assert_eq!( (Rational::from_signeds(22, 7).checked_div(Rational::from_signeds(99, 100))).unwrap() .to_string(), "200/63" ); ``` #### type Output = Rational ### impl<'a, 'b> CheckedLogBase<&'b Rational> for &'a Rational #### fn checked_log_base(self, base: &Rational) -> Option<i64Returns the base-$b$ logarithm of a positive `Rational`. If the `Rational` is not a power of $b$, then `None` is returned. 
Note that this function may be slow if the base is very close to 1. $$ f(x, b) = \begin{cases} \operatorname{Some}(\log_b x) & \text{if} \quad \log_b x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `base.significant_bits()`, and $m$ is $|\log_b x|$, where $b$ is `base` and $x$ is `x`. ##### Panics Panics if `self` less than or equal to zero or `base` is 1. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBase; use malachite_q::Rational; assert_eq!(Rational::from(80u32).checked_log_base(&Rational::from(3u32)), None); assert_eq!(Rational::from(81u32).checked_log_base(&Rational::from(3u32)), Some(4)); assert_eq!(Rational::from(82u32).checked_log_base(&Rational::from(3u32)), None); assert_eq!(Rational::from(4294967296u64).checked_log_base(&Rational::from(10u32)), None); assert_eq!( Rational::from_signeds(936851431250i64, 1397).checked_log_base(&Rational::from(10u32)), None ); assert_eq!( Rational::from_signeds(5153632, 16807) .checked_log_base(&Rational::from_signeds(22, 7)), Some(5) ); ``` #### type Output = i64 ### impl<'a> CheckedLogBase2 for &'a Rational #### fn checked_log_base_2(self) -> Option<i64Returns the base-2 logarithm of a positive `Rational`. If the `Rational` is not a power of 2, then `None` is returned. $$ f(x) = \begin{cases} \operatorname{Some}(\log_2 x) & \text{if} \quad \log_2 x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is less than or equal to zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBase2; use malachite_q::Rational; assert_eq!(Rational::from(3u32).checked_log_base_2(), None); assert_eq!(Rational::from_signeds(1, 3).checked_log_base_2(), None); assert_eq!(Rational::from_signeds(1, 4).checked_log_base_2(), Some(-2)); assert_eq!(Rational::from_signeds(1, 5).checked_log_base_2(), None); ``` #### type Output = i64 ### impl<'a> CheckedLogBasePowerOf2<i64> for &'a Rational #### fn checked_log_base_power_of_2(self, pow: i64) -> Option<i64Returns the base-$2^k$ logarithm of a positive `Rational`. If the `Rational` is not a power of $2^k$, then `None` is returned. $k$ may be negative. $$ f(x, p) = \begin{cases} \operatorname{Some}(\log_{2^p} x) & \text{if} \quad \log_{2^p} x \in \Z, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is 0 or `pow` is 0. 
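For a base that is itself a power of two, such as $16 = 2^4$, the same exactness test applies. A small sketch, assuming the behavior documented above:

```
use malachite_base::num::arithmetic::traits::CheckedLogBasePowerOf2;
use malachite_q::Rational;

// Exact base-16 exponents (k = 4): 65536 = 16^4 and 1/256 = 16^(-2).
assert_eq!(Rational::from(65536u32).checked_log_base_power_of_2(4), Some(4));
assert_eq!(Rational::from_signeds(1, 256).checked_log_base_power_of_2(4), Some(-2));
// 100 is not an exact power of 16.
assert_eq!(Rational::from(100u32).checked_log_base_power_of_2(4), None);
```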
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedLogBasePowerOf2; use malachite_q::Rational; assert_eq!(Rational::from(100).checked_log_base_power_of_2(2), None); assert_eq!(Rational::from(4294967296u64).checked_log_base_power_of_2(8), Some(4)); // 4^(-2) < 1/10 < 4^(-1) assert_eq!(Rational::from_signeds(1, 10).checked_log_base_power_of_2(2), None); assert_eq!(Rational::from_signeds(1, 16).checked_log_base_power_of_2(2), Some(-2)); // (1/4)^2 < 1/10 < (1/4)^1 assert_eq!(Rational::from_signeds(1, 10).checked_log_base_power_of_2(-2), None); assert_eq!(Rational::from_signeds(1, 16).checked_log_base_power_of_2(-2), Some(2)); ``` #### type Output = i64 ### impl<'a> CheckedRoot<i64> for &'a Rational #### fn checked_root(self, pow: i64) -> Option<RationalReturns the the $n$th root of a `Rational`, or `None` if the `Rational` is not a perfect $n$th power. The `Rational` is taken by reference. $$ f(x, n) = \begin{cases} \operatorname{Some}(sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \mathbb{Q}, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, if `exp` is even and `self` is negative, or if `self` is zero and `exp` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_q::Rational; assert_eq!((&Rational::from(999i32)).checked_root(3i64).to_debug_string(), "None"); assert_eq!((&Rational::from(1000i32)).checked_root(3i64).to_debug_string(), "Some(10)"); assert_eq!((&Rational::from(1001i32)).checked_root(3i64).to_debug_string(), "None"); assert_eq!((&Rational::from(-1000i32)).checked_root(3i64).to_debug_string(), "Some(-10)"); assert_eq!((&Rational::from_signeds(22, 7)).checked_root(3i64).to_debug_string(), "None"); assert_eq!( (&Rational::from_signeds(27, 8)).checked_root(3i64).to_debug_string(), "Some(3/2)" ); assert_eq!( (&Rational::from_signeds(-27, 8)).checked_root(3i64).to_debug_string(), "Some(-3/2)" ); assert_eq!((&Rational::from(1000i32)).checked_root(-3i64).to_debug_string(), "Some(1/10)"); assert_eq!( (&Rational::from_signeds(-27, 8)).checked_root(-3i64).to_debug_string(), "Some(-2/3)" ); ``` #### type Output = Rational ### impl CheckedRoot<i64> for Rational #### fn checked_root(self, pow: i64) -> Option<RationalReturns the the $n$th root of a `Rational`, or `None` if the `Rational` is not a perfect $n$th power. The `Rational` is taken by value. $$ f(x, n) = \begin{cases} \operatorname{Some}(sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \mathbb{Q}, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, if `exp` is even and `self` is negative, or if `self` is zero and `exp` is negative. 
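Because `None` means "not an exact $n$th power", `checked_root` doubles as an exactness test. A brief sketch, assuming the documented semantics:

```
use malachite_base::num::arithmetic::traits::CheckedRoot;
use malachite_q::Rational;

// 8/27 is a perfect cube, so its cube root is recovered exactly; 2/3 is not a perfect cube.
assert_eq!(
    Rational::from_signeds(8, 27).checked_root(3i64),
    Some(Rational::from_signeds(2, 3))
);
assert_eq!(Rational::from_signeds(2, 3).checked_root(3i64), None);
// A negative exponent inverts the result: the -3rd root of 8/27 is 3/2.
assert_eq!(
    Rational::from_signeds(8, 27).checked_root(-3i64),
    Some(Rational::from_signeds(3, 2))
);
```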
##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_q::Rational; assert_eq!(Rational::from(999i32).checked_root(3i64).to_debug_string(), "None"); assert_eq!(Rational::from(1000i32).checked_root(3i64).to_debug_string(), "Some(10)"); assert_eq!(Rational::from(1001i32).checked_root(3i64).to_debug_string(), "None"); assert_eq!(Rational::from(-1000i32).checked_root(3i64).to_debug_string(), "Some(-10)"); assert_eq!(Rational::from_signeds(22, 7).checked_root(3i64).to_debug_string(), "None"); assert_eq!( Rational::from_signeds(27, 8).checked_root(3i64).to_debug_string(), "Some(3/2)" ); assert_eq!( Rational::from_signeds(-27, 8).checked_root(3i64).to_debug_string(), "Some(-3/2)" ); assert_eq!(Rational::from(1000i32).checked_root(-3i64).to_debug_string(), "Some(1/10)"); assert_eq!( Rational::from_signeds(-27, 8).checked_root(-3i64).to_debug_string(), "Some(-2/3)" ); ``` #### type Output = Rational ### impl<'a> CheckedRoot<u64> for &'a Rational #### fn checked_root(self, pow: u64) -> Option<RationalReturns the the $n$th root of a `Rational`, or `None` if the `Rational` is not a perfect $n$th power. The `Rational` is taken by reference. $$ f(x, n) = \begin{cases} \operatorname{Some}(sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \mathbb{Q}, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::CheckedRoot; use malachite_base::strings::ToDebugString; use malachite_q::Rational; assert_eq!(Rational::from(999i32).checked_root(3u64).to_debug_string(), "None"); assert_eq!(Rational::from(1000i32).checked_root(3u64).to_debug_string(), "Some(10)"); assert_eq!(Rational::from(1001i32).checked_root(3u64).to_debug_string(), "None"); assert_eq!(Rational::from(-1000i32).checked_root(3u64).to_debug_string(), "Some(-10)"); assert_eq!(Rational::from_signeds(22, 7).checked_root(3u64).to_debug_string(), "None"); assert_eq!( Rational::from_signeds(27, 8).checked_root(3u64).to_debug_string(), "Some(3/2)" ); assert_eq!( Rational::from_signeds(-27, 8).checked_root(3u64).to_debug_string(), "Some(-3/2)" ); ``` #### type Output = Rational ### impl CheckedRoot<u64> for Rational #### fn checked_root(self, pow: u64) -> Option<RationalReturns the the $n$th root of a `Rational`, or `None` if the `Rational` is not a perfect $n$th power. The `Rational` is taken by value. $$ f(x, n) = \begin{cases} \operatorname{Some}(sqrt[n]{x}) & \text{if} \quad \sqrt[n]{x} \in \mathbb{Q}, \\ \operatorname{None} & \textrm{otherwise}. \end{cases} $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `exp` is zero, or if `exp` is even and `self` is negative. 
##### Examples

```
use malachite_base::num::arithmetic::traits::CheckedRoot;
use malachite_base::strings::ToDebugString;
use malachite_q::Rational;

assert_eq!(Rational::from(999i32).checked_root(3u64).to_debug_string(), "None");
assert_eq!(Rational::from(1000i32).checked_root(3u64).to_debug_string(), "Some(10)");
assert_eq!(Rational::from(1001i32).checked_root(3u64).to_debug_string(), "None");
assert_eq!(Rational::from(-1000i32).checked_root(3u64).to_debug_string(), "Some(-10)");
assert_eq!(Rational::from_signeds(22, 7).checked_root(3u64).to_debug_string(), "None");
assert_eq!(
    Rational::from_signeds(27, 8).checked_root(3u64).to_debug_string(),
    "Some(3/2)"
);
assert_eq!(
    Rational::from_signeds(-27, 8).checked_root(3u64).to_debug_string(),
    "Some(-3/2)"
);
```

#### type Output = Rational

### impl<'a> CheckedSqrt for &'a Rational

#### fn checked_sqrt(self) -> Option<Rational>

Returns the square root of a `Rational`, or `None` if it is not a perfect square. The `Rational` is taken by reference.

$$
f(x) = \begin{cases}
    \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \mathbb{Q}, \\
    \operatorname{None} & \textrm{otherwise}.
\end{cases}
$$

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Panics

Panics if `self` is negative.

##### Examples

```
use malachite_base::num::arithmetic::traits::CheckedSqrt;
use malachite_base::strings::ToDebugString;
use malachite_q::Rational;

assert_eq!((&Rational::from(99u8)).checked_sqrt().to_debug_string(), "None");
assert_eq!((&Rational::from(100u8)).checked_sqrt().to_debug_string(), "Some(10)");
assert_eq!((&Rational::from(101u8)).checked_sqrt().to_debug_string(), "None");
assert_eq!((&Rational::from_signeds(22, 7)).checked_sqrt().to_debug_string(), "None");
assert_eq!((&Rational::from_signeds(25, 9)).checked_sqrt().to_debug_string(), "Some(5/3)");
```

#### type Output = Rational

### impl CheckedSqrt for Rational

#### fn checked_sqrt(self) -> Option<Rational>

Returns the square root of a `Rational`, or `None` if it is not a perfect square. The `Rational` is taken by value.

$$
f(x) = \begin{cases}
    \operatorname{Some}(\sqrt{x}) & \text{if} \quad \sqrt{x} \in \mathbb{Q}, \\
    \operatorname{None} & \textrm{otherwise}.
\end{cases}
$$

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Panics

Panics if `self` is negative.

##### Examples

```
use malachite_base::num::arithmetic::traits::CheckedSqrt;
use malachite_base::strings::ToDebugString;
use malachite_q::Rational;

assert_eq!(Rational::from(99u8).checked_sqrt().to_debug_string(), "None");
assert_eq!(Rational::from(100u8).checked_sqrt().to_debug_string(), "Some(10)");
assert_eq!(Rational::from(101u8).checked_sqrt().to_debug_string(), "None");
assert_eq!(Rational::from_signeds(22, 7).checked_sqrt().to_debug_string(), "None");
assert_eq!(Rational::from_signeds(25, 9).checked_sqrt().to_debug_string(), "Some(5/3)");
```

#### type Output = Rational

### impl Clone for Rational

#### fn clone(&self) -> Rational

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl<'a> ContinuedFraction for &'a Rational

#### fn continued_fraction(self) -> (Integer, RationalContinuedFraction)

Returns the continued fraction of a `Rational`, taking the `Rational` by reference.

The output has two components.
The first is the first value of the continued fraction, which may be any `Integer` and is equal to the floor of the `Rational`.

The second is an iterator that produces the remaining values, which are all positive. Using the standard notation for continued fractions, the first value is the number before the semicolon, and the second value produces the remaining numbers.

Each rational number has two continued fraction representations. The shorter of the two representations (the one that does not end in 1) is returned.

$f(x) = (a_0, (a_1, a_2, \ldots, a_n))$, where $x = [a_0; a_1, a_2, \ldots, a_n]$ and $a_n \neq 1$.

The output length is $O(n)$, where $n$ is `self.significant_bits()`.

##### Worst-case complexity per iteration

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Examples

```
use itertools::Itertools;
use malachite_base::strings::ToDebugString;
use malachite_q::conversion::traits::ContinuedFraction;
use malachite_q::Rational;

let (head, tail) = (&Rational::from_signeds(2, 3)).continued_fraction();
let tail = tail.collect_vec();
assert_eq!(head, 0);
assert_eq!(tail.to_debug_string(), "[1, 2]");

let (head, tail) = (&Rational::from_signeds(355, 113)).continued_fraction();
let tail = tail.collect_vec();
assert_eq!(head, 3);
assert_eq!(tail.to_debug_string(), "[7, 16]");
```

#### type CF = RationalContinuedFraction

### impl ContinuedFraction for Rational

#### fn continued_fraction(self) -> (Integer, RationalContinuedFraction)

Returns the continued fraction of a `Rational`, taking the `Rational` by value.

The output has two components.

The first is the first value of the continued fraction, which may be any `Integer` and is equal to the floor of the `Rational`.

The second is an iterator that produces the remaining values, which are all positive. Using the standard notation for continued fractions, the first value is the number before the semicolon, and the second value produces the remaining numbers.

Each rational number has two continued fraction representations. The shorter of the two representations (the one that does not end in 1) is returned.

$f(x) = (a_0, (a_1, a_2, \ldots, a_n))$, where $x = [a_0; a_1, a_2, \ldots, a_n]$ and $a_n \neq 1$.

The output length is $O(n)$, where $n$ is `self.significant_bits()`.

##### Worst-case complexity per iteration

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Examples

```
use itertools::Itertools;
use malachite_base::strings::ToDebugString;
use malachite_q::conversion::traits::ContinuedFraction;
use malachite_q::Rational;

let (head, tail) = Rational::from_signeds(2, 3).continued_fraction();
let tail = tail.collect_vec();
assert_eq!(head, 0);
assert_eq!(tail.to_debug_string(), "[1, 2]");

let (head, tail) = Rational::from_signeds(355, 113).continued_fraction();
let tail = tail.collect_vec();
assert_eq!(head, 3);
assert_eq!(tail.to_debug_string(), "[7, 16]");
```

#### type CF = RationalContinuedFraction

### impl<'a> Convergents for &'a Rational

#### fn convergents(self) -> RationalConvergents

Returns the convergents of a `Rational`, taking the `Rational` by reference.

The convergents of a number are the sequence of rational numbers whose continued fractions are the prefixes of the number’s continued fraction. The first convergent is the floor of the number.
The sequence of convergents is finite iff the number is rational, in which case the last convergent is the number itself. Each convergent is closer to the number than the previous convergent is. The even-indexed convergents are less than or equal to the number, and the odd-indexed ones are greater than or equal to it. $f(x) = ([a_0; a_1, a_2, \ldots, a_i])_{i=0}^{n}$, where $x = [a_0; a_1, a_2, \ldots, a_n]$ and $a_n \neq 1$. The output length is $O(n)$, where $n$ is `self.significant_bits()`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use itertools::Itertools; use malachite_base::strings::ToDebugString; use malachite_q::conversion::traits::Convergents; use malachite_q::Rational; assert_eq!( (&Rational::from_signeds(2, 3)).convergents().collect_vec().to_debug_string(), "[0, 1, 2/3]" ); assert_eq!( (&Rational::from_signeds(355, 113)).convergents().collect_vec().to_debug_string(), "[3, 22/7, 355/113]", ); ``` #### type C = RationalConvergents ### impl Convergents for Rational #### fn convergents(self) -> RationalConvergents Returns the convergents of a `Rational`, taking the `Rational` by value. The convergents of a number are the sequence of rational numbers whose continued fractions are the prefixes of the number’s continued fraction. The first convergent is the floor of the number. The sequence of convergents is finite iff the number is rational, in which case the last convergent is the number itself. Each convergent is closer to the number than the previous convergent is. The even-indexed convergents are less than or equal to the number, and the odd-indexed ones are greater than or equal to it. $f(x) = ([a_0; a_1, a_2, \ldots, a_i])_{i=0}^{n}$, where $x = [a_0; a_1, a_2, \ldots, a_n]$ and $a_n \neq 1$. The output length is $O(n)$, where $n$ is `self.significant_bits()`. ##### Worst-case complexity per iteration $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use itertools::Itertools; use malachite_base::strings::ToDebugString; use malachite_q::conversion::traits::Convergents; use malachite_q::Rational; assert_eq!( Rational::from_signeds(2, 3).convergents().collect_vec().to_debug_string(), "[0, 1, 2/3]" ); assert_eq!( Rational::from_signeds(355, 113).convergents().collect_vec().to_debug_string(), "[3, 22/7, 355/113]", ); ``` #### type C = RationalConvergents ### impl<'a> ConvertibleFrom<&'a Rational> for Integer #### fn convertible_from(x: &Rational) -> bool Determines whether a `Rational` can be converted to an `Integer`, taking the `Rational` by reference. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Integer::convertible_from(&Rational::from(123)), true); assert_eq!(Integer::convertible_from(&Rational::from(-123)), true); assert_eq!(Integer::convertible_from(&Rational::from_signeds(22, 7)), false); ``` ### impl<'a> ConvertibleFrom<&'a Rational> for Natural #### fn convertible_from(x: &Rational) -> bool Determines whether a `Rational` can be converted to a `Natural` (when the `Rational` is non-negative and an integer), taking the `Rational` by reference. ##### Worst-case complexity Constant time and additional memory. 
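Referring back to `ContinuedFraction` above, an expansion can be folded back into its value using only the arithmetic documented in this reference (`+=` and `/`). A minimal sketch:

```
use malachite_base::num::basic::traits::One;
use malachite_q::Rational;

// Rebuild 355/113 from its continued fraction [3; 7, 16], working from the back.
let mut acc = Rational::ONE / Rational::from(16); // 1/16
acc += Rational::from(7);                         // 7 + 1/16 = 113/16
acc = Rational::ONE / acc;                        // 16/113
acc += Rational::from(3);                         // 3 + 16/113 = 355/113
assert_eq!(acc.to_string(), "355/113");
```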
##### Examples ``` use malachite_base::num::conversion::traits::ConvertibleFrom; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Natural::convertible_from(&Rational::from(123)), true); assert_eq!(Natural::convertible_from(&Rational::from(-123)), false); assert_eq!(Natural::convertible_from(&Rational::from_signeds(22, 7)), false); ``` ### impl<'a> ConvertibleFrom<&'a Rational> for f32 #### fn convertible_from(value: &'a Rational) -> bool Determines whether a `Rational` can be exactly converted to a primitive float, taking the `Rational` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for f64 #### fn convertible_from(value: &'a Rational) -> bool Determines whether a `Rational` can be exactly converted to a primitive float, taking the `Rational` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for i128 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for i16 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for i32 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for i64 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for i8 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for isize #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to a signed primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for u128 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for u16 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for u32 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to an unsigned primitive integer. 
##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for u64 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for u8 #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl<'a> ConvertibleFrom<&'a Rational> for usize #### fn convertible_from(value: &Rational) -> bool Determines whether a `Rational` can be converted to an unsigned primitive integer. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<Rational> for f32 #### fn convertible_from(value: Rational) -> bool Determines whether a `Rational` can be exactly converted to a primitive float, taking the `Rational` by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl ConvertibleFrom<Rational> for f64 #### fn convertible_from(value: Rational) -> bool Determines whether a `Rational` can be exactly converted to a primitive float, taking the `Rational` by value. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples See here. ### impl ConvertibleFrom<f32> for Rational #### fn convertible_from(x: f32) -> bool Determines whether a primitive float can be converted to a `Rational`. (It can if it is finite.) ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl ConvertibleFrom<f64> for Rational #### fn convertible_from(x: f64) -> bool Determines whether a primitive float can be converted to a `Rational`. (It can if it is finite.) ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl Debug for Rational #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Rational` to a `String`. This is the same implementation as for `Display`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_base::strings::ToDebugString; use malachite_q::Rational; assert_eq!(Rational::ZERO.to_debug_string(), "0"); assert_eq!(Rational::from(123).to_debug_string(), "123"); assert_eq!(Rational::from_signeds(22, 7).to_debug_string(), "22/7"); ``` ### impl Default for Rational #### fn default() -> Rational The default value of a `Rational`, 0. ### impl<'a, 'b> DenominatorsInClosedInterval<'a, 'b> for Rational #### fn denominators_in_closed_interval( a: &'a Rational, b: &'b Rational ) -> DenominatorsInClosedRationalInterval<'a, 'bReturns an iterator of all denominators that appear in the `Rational`s contained in a closed interval. For example, consider the interval $[1/3, 1/2]$. It contains no integers, so no `Rational`s with denominator 1. It does contain `Rational`s with denominators 2 and 3 (the endpoints). It contains none with denominator 4, but it does contain $2/5$. 
It contains none with denominator 6 (though $1/3$ and $1/2$ are $2/6$ and $3/6$, those representations are not reduced). It contains $3/7$, $3/8$, and $4/9$ but none with denominator 10 ($0.4$ does not count because it is $2/5$). It contains all denominators greater than 10, so the complete list is $2, 3, 5, 7, 8, 9, 11, 12, 13, \ldots$. ##### Worst-case complexity per iteration $T(n, i) = O(n + \log i)$ $M(n, i) = O(n + \log i)$ where $T$ is time, $M$ is additional memory, $i$ is the iteration number, and $n$ is `max(a.significant_bits(), b.significant_bits())`. ##### Panics Panics if $a \geq b$. ``` use malachite_base::iterators::prefix_to_string; use malachite_base::num::basic::traits::{One, Two}; use malachite_q::arithmetic::traits::DenominatorsInClosedInterval; use malachite_q::Rational; assert_eq!( prefix_to_string( Rational::denominators_in_closed_interval( &Rational::ONE, &Rational::TWO ), 20 ), "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...]" ); assert_eq!( prefix_to_string( Rational::denominators_in_closed_interval( &Rational::from_signeds(1, 3), &Rational::from_signeds(1, 2) ), 20 ), "[2, 3, 5, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, ...]" ); assert_eq!( prefix_to_string( Rational::denominators_in_closed_interval( &Rational::from_signeds(1, 1000001), &Rational::from_signeds(1, 1000000) ), 20 ), "[1000000, 1000001, 3000001, 3000002, 4000001, 4000003, 5000001, 5000002, 5000003, \ 5000004, 6000001, 6000005, 7000001, 7000002, 7000003, 7000004, 7000005, 7000006, \ 8000001, 8000003, ...]" ); ``` #### type Denominators = DenominatorsInClosedRationalInterval<'a, 'b### impl Display for Rational #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorConverts a `Rational` to a `String`. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; use std::str::FromStr; assert_eq!(Rational::ZERO.to_string(), "0"); assert_eq!(Rational::from(123).to_string(), "123"); assert_eq!(Rational::from_str("22/7").unwrap().to_string(), "22/7"); ``` ### impl<'a, 'b> Div<&'a Rational> for &'b Rational #### fn div(self, other: &'a Rational) -> Rational Divides a `Rational` by another `Rational`, taking both by reference. $$ f(x, y) = \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if the second `Rational` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Two; use malachite_q::Rational; assert_eq!(&Rational::TWO / &Rational::TWO, 1); assert_eq!( (&Rational::from_signeds(22, 7) / &Rational::from_signeds(99, 100)).to_string(), "200/63" ); ``` #### type Output = Rational The resulting type after applying the `/` operator.### impl<'a> Div<&'a Rational> for Rational #### fn div(self, other: &'a Rational) -> Rational Divides a `Rational` by another `Rational`, taking the first by value and the second by reference. $$ f(x, y) = \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if the second `Rational` is zero. 
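Referring back to `DenominatorsInClosedInterval` above, the prose walkthrough of the interval $[1/3, 1/2]$ can be checked directly. A short sketch:

```
use malachite_base::iterators::prefix_to_string;
use malachite_q::arithmetic::traits::DenominatorsInClosedInterval;
use malachite_q::Rational;

// Denominators 4, 6, and 10 are skipped, exactly as described in the walkthrough.
assert_eq!(
    prefix_to_string(
        Rational::denominators_in_closed_interval(
            &Rational::from_signeds(1, 3),
            &Rational::from_signeds(1, 2)
        ),
        10
    ),
    "[2, 3, 5, 7, 8, 9, 11, 12, 13, 14, ...]"
);
```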
##### Examples ``` use malachite_base::num::basic::traits::Two; use malachite_q::Rational; assert_eq!(Rational::TWO / &Rational::TWO, 1); assert_eq!( (Rational::from_signeds(22, 7) / &Rational::from_signeds(99, 100)).to_string(), "200/63" ); ``` #### type Output = Rational The resulting type after applying the `/` operator.### impl<'a> Div<Rational> for &'a Rational #### fn div(self, other: Rational) -> Rational Divides a `Rational` by another `Rational`, taking the first by reference and the second by value. $$ f(x, y) = \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if the second `Rational` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Two; use malachite_q::Rational; assert_eq!(&Rational::TWO / Rational::TWO, 1); assert_eq!( (&Rational::from_signeds(22, 7) / Rational::from_signeds(99, 100)).to_string(), "200/63" ); ``` #### type Output = Rational The resulting type after applying the `/` operator.### impl Div<Rational> for Rational #### fn div(self, other: Rational) -> Rational Divides a `Rational` by another `Rational`, taking both by value. $$ f(x, y) = \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if the second `Rational` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Two; use malachite_q::Rational; assert_eq!(Rational::TWO / Rational::TWO, 1); assert_eq!( (Rational::from_signeds(22, 7) / Rational::from_signeds(99, 100)).to_string(), "200/63" ); ``` #### type Output = Rational The resulting type after applying the `/` operator.### impl<'a> DivAssign<&'a Rational> for Rational #### fn div_assign(&mut self, other: &'a Rational) Divides a `Rational` by a `Rational` in place, taking the `Rational` on the right-hand side by reference. $$ x \gets \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if the second `Rational` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Two; use malachite_q::Rational; let mut x = Rational::TWO; x /= &Rational::TWO; assert_eq!(x, 1); let mut x = Rational::from_signeds(22, 7); x /= &Rational::from_signeds(99, 100); assert_eq!(x.to_string(), "200/63"); ``` ### impl DivAssign<Rational> for Rational #### fn div_assign(&mut self, other: Rational) Divides a `Rational` by a `Rational` in place, taking the `Rational` on the right-hand side by value. $$ x \gets \frac{x}{y}. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Panics Panics if the second `Rational` is zero. ##### Examples ``` use malachite_base::num::basic::traits::Two; use malachite_q::Rational; let mut x = Rational::TWO; x /= Rational::TWO; assert_eq!(x, 1); let mut x = Rational::from_signeds(22, 7); x /= Rational::from_signeds(99, 100); assert_eq!(x.to_string(), "200/63"); ``` ### impl<'a> Floor for &'a Rational #### fn floor(self) -> Integer Finds the floor of a `Rational`, taking the `Rational` by reference. 
$$
f(x) = \lfloor x \rfloor.
$$

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Examples

```
use malachite_base::num::arithmetic::traits::Floor;
use malachite_base::num::basic::traits::Zero;
use malachite_q::Rational;

assert_eq!((&Rational::ZERO).floor(), 0);
assert_eq!((&Rational::from_signeds(22, 7)).floor(), 3);
assert_eq!((&Rational::from_signeds(-22, 7)).floor(), -4);
```

#### type Output = Integer

### impl Floor for Rational

#### fn floor(self) -> Integer

Finds the floor of a `Rational`, taking the `Rational` by value.

$$
f(x) = \lfloor x \rfloor.
$$

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Examples

```
use malachite_base::num::arithmetic::traits::Floor;
use malachite_base::num::basic::traits::Zero;
use malachite_q::Rational;

assert_eq!(Rational::ZERO.floor(), 0);
assert_eq!(Rational::from_signeds(22, 7).floor(), 3);
assert_eq!(Rational::from_signeds(-22, 7).floor(), -4);
```

#### type Output = Integer

### impl FloorAssign for Rational

#### fn floor_assign(&mut self)

Replaces a `Rational` with its floor.

$$
x \gets \lfloor x \rfloor.
$$

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Examples

```
use malachite_base::num::arithmetic::traits::FloorAssign;
use malachite_base::num::basic::traits::Zero;
use malachite_q::Rational;

let mut x = Rational::ZERO;
x.floor_assign();
assert_eq!(x, 0);

let mut x = Rational::from_signeds(22, 7);
x.floor_assign();
assert_eq!(x, 3);

let mut x = Rational::from_signeds(-22, 7);
x.floor_assign();
assert_eq!(x, -4);
```

### impl<'a, 'b> FloorLogBase<&'b Rational> for &'a Rational

#### fn floor_log_base(self, base: &Rational) -> i64

Returns the floor of the base-$b$ logarithm of a positive `Rational`.

Note that this function may be slow if the base is very close to 1.

$f(x, b) = \lfloor\log_b x\rfloor$.

##### Worst-case complexity

$T(n, m) = O(nm \log (nm) \log\log (nm))$

$M(n, m) = O(nm \log (nm))$

where $T$ is time, $M$ is additional memory, $n$ is `base.significant_bits()`, and $m$ is $|\log_b x|$, where $b$ is `base` and $x$ is `self`.

##### Panics

Panics if `self` is less than or equal to zero or `base` is 1.

##### Examples

```
use malachite_base::num::arithmetic::traits::FloorLogBase;
use malachite_q::Rational;

assert_eq!(Rational::from(80u32).floor_log_base(&Rational::from(3u32)), 3);
assert_eq!(Rational::from(81u32).floor_log_base(&Rational::from(3u32)), 4);
assert_eq!(Rational::from(82u32).floor_log_base(&Rational::from(3u32)), 4);
assert_eq!(Rational::from(4294967296u64).floor_log_base(&Rational::from(10u32)), 9);
assert_eq!(
    Rational::from_signeds(936851431250i64, 1397).floor_log_base(&Rational::from(10u32)),
    8
);
assert_eq!(
    Rational::from_signeds(5153632, 16807).floor_log_base(&Rational::from_signeds(22, 7)),
    5
);
```

#### type Output = i64

### impl<'a> FloorLogBase2 for &'a Rational

#### fn floor_log_base_2(self) -> i64

Returns the floor of the base-2 logarithm of a positive `Rational`.

$f(x) = \lfloor\log_2 x\rfloor$.
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` less than or equal to zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorLogBase2; use malachite_q::Rational; assert_eq!(Rational::from(3u32).floor_log_base_2(), 1); assert_eq!(Rational::from_signeds(1, 3).floor_log_base_2(), -2); assert_eq!(Rational::from_signeds(1, 4).floor_log_base_2(), -2); assert_eq!(Rational::from_signeds(1, 5).floor_log_base_2(), -3); ``` #### type Output = i64 ### impl<'a> FloorLogBasePowerOf2<i64> for &'a Rational #### fn floor_log_base_power_of_2(self, pow: i64) -> i64 Returns the floor of the base-$2^k$ logarithm of a positive `Rational`. $k$ may be negative. $f(x, k) = \lfloor\log_{2^k} x\rfloor$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics Panics if `self` is less than or equal to 0 or `pow` is 0. ##### Examples ``` use malachite_base::num::arithmetic::traits::FloorLogBasePowerOf2; use malachite_q::Rational; assert_eq!(Rational::from(100).floor_log_base_power_of_2(2), 3); assert_eq!(Rational::from(4294967296u64).floor_log_base_power_of_2(8), 4); // 4^(-2) < 1/10 < 4^(-1) assert_eq!(Rational::from_signeds(1, 10).floor_log_base_power_of_2(2), -2); // (1/4)^2 < 1/10 < (1/4)^1 assert_eq!(Rational::from_signeds(1, 10).floor_log_base_power_of_2(-2), 1); ``` #### type Output = i64 ### impl<'a> From<&'a Integer> for Rational #### fn from(value: &'a Integer) -> Rational Converts an `Integer` to a `Rational`, taking the `Integer` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Rational::from(&Integer::from(123)), 123); assert_eq!(Rational::from(&Integer::from(-123)), -123); ``` ### impl<'a> From<&'a Natural> for Rational #### fn from(value: &'a Natural) -> Rational Converts a `Natural` to a `Rational`, taking the `Natural` by reference. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Rational::from(&Natural::from(123u32)), 123); ``` ### impl From<Integer> for Rational #### fn from(value: Integer) -> Rational Converts an `Integer` to a `Rational`, taking the `Integer` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!(Rational::from(Integer::from(123)), 123); assert_eq!(Rational::from(Integer::from(-123)), -123); ``` ### impl From<Natural> for Rational #### fn from(value: Natural) -> Rational Converts a `Natural` to a `Rational`, taking the `Natural` by value. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!(Rational::from(Natural::from(123u32)), 123); ``` ### impl From<bool> for Rational #### fn from(b: bool) -> Rational Converts a `bool` to 0 or 1. This function is known as the Iverson bracket. $$ f(P) = [P] = \begin{cases} 1 & \text{if} \quad P, \\ 0 & \text{otherwise}. \end{cases} $$ ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_q::Rational; assert_eq!(Rational::from(false), 0); assert_eq!(Rational::from(true), 1); ``` ### impl From<i128> for Rational #### fn from(i: i128) -> Rational Converts a signed primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i16> for Rational #### fn from(i: i16) -> Rational Converts a signed primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i32> for Rational #### fn from(i: i32) -> Rational Converts a signed primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i64> for Rational #### fn from(i: i64) -> Rational Converts a signed primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<i8> for Rational #### fn from(i: i8) -> Rational Converts a signed primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<isize> for Rational #### fn from(i: isize) -> Rational Converts a signed primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u128> for Rational #### fn from(u: u128) -> Rational Converts an unsigned primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u16> for Rational #### fn from(u: u16) -> Rational Converts an unsigned primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u32> for Rational #### fn from(u: u32) -> Rational Converts an unsigned primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u64> for Rational #### fn from(u: u64) -> Rational Converts an unsigned primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<u8> for Rational #### fn from(u: u8) -> Rational Converts an unsigned primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl From<usize> for Rational #### fn from(u: usize) -> Rational Converts an unsigned primitive integer to a `Rational`. ##### Worst-case complexity Constant time and additional memory. ##### Examples See here. ### impl FromSciString for Rational #### fn from_sci_string_with_options( s: &str, options: FromSciStringOptions ) -> Option<RationalConverts a string, possibly in scientfic notation, to a `Rational`. Use `FromSciStringOptions` to specify the base (from 2 to 36, inclusive). The rounding mode option is ignored. If the base is greater than 10, the higher digits are represented by the letters `'a'` through `'z'` or `'A'` through `'Z'`; the case doesn’t matter and doesn’t need to be consistent. Exponents are allowed, and are indicated using the character `'e'` or `'E'`. If the base is 15 or greater, an ambiguity arises where it may not be clear whether `'e'` is a digit or an exponent indicator. To resolve this ambiguity, always use a `'+'` or `'-'` sign after the exponent indicator when the base is 15 or greater. The exponent itself is always parsed using base 10. 
Decimal (or other-base) points are allowed. If the string is unparseable, `None` is returned. This function is very literal; given `"0.333"`, it will return $333/1000$ rather than $1/3$. If you’d prefer that it return $1/3$, consider using `from_sci_string_simplest` instead. However, that function has its quirks too: given `"0.1"`, it will not return $1/10$ (see its documentation for an explanation of this behavior). This function *does* return $1/10$. ##### Worst-case complexity $T(n, m) = O(m^n n \log m (\log n + \log\log m))$ $M(n, m) = O(m^n n \log m)$ where $T$ is time, $M$ is additional memory, $n$ is `s.len()`, and $m$ is `options.base`. ##### Examples ``` use malachite_base::num::conversion::string::options::FromSciStringOptions; use malachite_base::num::conversion::traits::FromSciString; use malachite_base::rounding_modes::RoundingMode; use malachite_q::Rational; assert_eq!(Rational::from_sci_string("123").unwrap(), 123); assert_eq!(Rational::from_sci_string("0.1").unwrap().to_string(), "1/10"); assert_eq!(Rational::from_sci_string("0.10").unwrap().to_string(), "1/10"); assert_eq!(Rational::from_sci_string("0.333").unwrap().to_string(), "333/1000"); assert_eq!(Rational::from_sci_string("1.2e5").unwrap(), 120000); assert_eq!(Rational::from_sci_string("1.2e-5").unwrap().to_string(), "3/250000"); let mut options = FromSciStringOptions::default(); options.set_base(16); assert_eq!(Rational::from_sci_string_with_options("ff", options).unwrap(), 255); assert_eq!(Rational::from_sci_string_with_options("ffE+5", options).unwrap(), 267386880); assert_eq!( Rational::from_sci_string_with_options("ffE-5", options).unwrap().to_string(), "255/1048576" ); ``` #### fn from_sci_string(s: &str) -> Option<SelfConverts a `&str`, possibly in scientific notation, to a number, using the default `FromSciStringOptions`.### impl FromStr for Rational #### fn from_str(s: &str) -> Result<Rational, ()Converts an string to a `Rational`. If the string does not represent a valid `Rational`, an `Err` is returned. The numerator and denominator do not need to be in lowest terms, but the denominator must be nonzero. A negative sign is only allowed at the 0th position of the string. ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `s.len()`. 
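Since `from_str` accepts only `p/q`-style input while `from_sci_string` accepts decimal and scientific notation, the two parsers can be layered when input may arrive in either form. A minimal sketch, in which `parse_ratio` is a hypothetical helper name:

```
use malachite_base::num::conversion::traits::FromSciString;
use malachite_q::Rational;
use std::str::FromStr;

// Hypothetical lenient parser: try p/q form first, then decimal/scientific form.
fn parse_ratio(s: &str) -> Option<Rational> {
    Rational::from_str(s).ok().or_else(|| Rational::from_sci_string(s))
}

assert_eq!(parse_ratio("22/7").unwrap().to_string(), "22/7");
assert_eq!(parse_ratio("0.1").unwrap().to_string(), "1/10");
assert_eq!(parse_ratio("1.2e5").unwrap(), 120000);
assert_eq!(parse_ratio("not a number"), None);
```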
##### Examples ``` use malachite_q::Rational; use std::str::FromStr; assert_eq!(Rational::from_str("123456").unwrap(), 123456); assert_eq!(Rational::from_str("00123456").unwrap(), 123456); assert_eq!(Rational::from_str("0").unwrap(), 0); assert_eq!(Rational::from_str("-123456").unwrap(), -123456); assert_eq!(Rational::from_str("-00123456").unwrap(), -123456); assert_eq!(Rational::from_str("-0").unwrap(), 0); assert_eq!(Rational::from_str("22/7").unwrap().to_string(), "22/7"); assert_eq!(Rational::from_str("01/02").unwrap().to_string(), "1/2"); assert_eq!(Rational::from_str("3/21").unwrap().to_string(), "1/7"); assert_eq!(Rational::from_str("-22/7").unwrap().to_string(), "-22/7"); assert_eq!(Rational::from_str("-01/02").unwrap().to_string(), "-1/2"); assert_eq!(Rational::from_str("-3/21").unwrap().to_string(), "-1/7"); assert!(Rational::from_str("").is_err()); assert!(Rational::from_str("a").is_err()); assert!(Rational::from_str("1/0").is_err()); assert!(Rational::from_str("/1").is_err()); assert!(Rational::from_str("1/").is_err()); assert!(Rational::from_str("--1").is_err()); assert!(Rational::from_str("1/-2").is_err()); ``` #### type Err = () The associated error which can be returned from parsing.### impl Hash for Rational #### fn hash<__H>(&self, state: &mut __H)where __H: Hasher, Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. #### fn is_integer(self) -> bool Determines whether a `Rational` is an integer. $f(x) = x \in \Z$. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::basic::traits::{One, Zero}; use malachite_base::num::conversion::traits::IsInteger; use malachite_q::Rational; assert_eq!(Rational::ZERO.is_integer(), true); assert_eq!(Rational::ONE.is_integer(), true); assert_eq!(Rational::from(100).is_integer(), true); assert_eq!(Rational::from(-100).is_integer(), true); assert_eq!(Rational::from_signeds(22, 7).is_integer(), false); assert_eq!(Rational::from_signeds(-22, 7).is_integer(), false); ``` ### impl IsPowerOf2 for Rational #### fn is_power_of_2(&self) -> bool Determines whether a `Rational` is an integer power of 2. $f(x) = (\exists n \in \Z : 2^n = x)$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::IsPowerOf2; use malachite_q::Rational; assert_eq!(Rational::from(0x80).is_power_of_2(), true); assert_eq!(Rational::from_signeds(1, 8).is_power_of_2(), true); assert_eq!(Rational::from_signeds(-1, 8).is_power_of_2(), false); assert_eq!(Rational::from_signeds(22, 7).is_power_of_2(), false); ``` ### impl<'a, 'b> Mul<&'a Rational> for &'b Rational #### fn mul(self, other: &'a Rational) -> Rational Multiplies two `Rational`s, taking both by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
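Because `Mul` is provided for every ownership combination, products of many `Rational`s can be folded without cloning. A minimal sketch:

```
use malachite_base::num::basic::traits::One;
use malachite_q::Rational;

// (1/2) * (2/3) * (3/4) telescopes to 1/4.
let factors = vec![
    Rational::from_signeds(1, 2),
    Rational::from_signeds(2, 3),
    Rational::from_signeds(3, 4),
];
let product = factors.iter().fold(Rational::ONE, |acc, f| acc * f);
assert_eq!(product.to_string(), "1/4");
```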
##### Examples ``` use malachite_base::num::basic::traits::{OneHalf, Two}; use malachite_q::Rational; assert_eq!(&Rational::ONE_HALF * &Rational::TWO, 1); assert_eq!( (&Rational::from_signeds(22, 7) * &Rational::from_signeds(99, 100)).to_string(), "1089/350" ); ``` #### type Output = Rational The resulting type after applying the `*` operator.### impl<'a> Mul<&'a Rational> for Rational #### fn mul(self, other: &'a Rational) -> Rational Multiplies two `Rational`s, taking the first by value and the second by reference. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{OneHalf, Two}; use malachite_q::Rational; assert_eq!(Rational::ONE_HALF * &Rational::TWO, 1); assert_eq!( (Rational::from_signeds(22, 7) * &Rational::from_signeds(99, 100)).to_string(), "1089/350" ); ``` #### type Output = Rational The resulting type after applying the `*` operator.### impl<'a> Mul<Rational> for &'a Rational #### fn mul(self, other: Rational) -> Rational Multiplies two `Rational`s, taking the first by reference and the second by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{OneHalf, Two}; use malachite_q::Rational; assert_eq!(&Rational::ONE_HALF * Rational::TWO, 1); assert_eq!( (&Rational::from_signeds(22, 7) * Rational::from_signeds(99, 100)).to_string(), "1089/350" ); ``` #### type Output = Rational The resulting type after applying the `*` operator.### impl Mul<Rational> for Rational #### fn mul(self, other: Rational) -> Rational Multiplies two `Rational`s, taking both by value. $$ f(x, y) = xy. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{OneHalf, Two}; use malachite_q::Rational; assert_eq!(Rational::ONE_HALF * Rational::TWO, 1); assert_eq!( (Rational::from_signeds(22, 7) * Rational::from_signeds(99, 100)).to_string(), "1089/350" ); ``` #### type Output = Rational The resulting type after applying the `*` operator.### impl<'a> MulAssign<&'a Rational> for Rational #### fn mul_assign(&mut self, other: &'a Rational) Multiplies a `Rational` by a `Rational` in place, taking the `Rational` on the right-hand side by reference. $$ x \gets xy. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^3 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::{OneHalf, Two}; use malachite_q::Rational; let mut x = Rational::ONE_HALF; x *= &Rational::TWO; assert_eq!(x, 1); let mut x = Rational::from_signeds(22, 7); x *= &Rational::from_signeds(99, 100); assert_eq!(x.to_string(), "1089/350"); ``` ### impl MulAssign<Rational> for Rational #### fn mul_assign(&mut self, other: Rational) Multiplies a `Rational` by a `Rational` in place, taking the `Rational` on the right-hand side by value. $$ x \gets xy. 
$$

##### Worst-case complexity

$T(n) = O(n (\log n)^2 \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.

##### Examples

```
use malachite_base::num::basic::traits::{OneHalf, Two};
use malachite_q::Rational;

let mut x = Rational::ONE_HALF;
x *= Rational::TWO;
assert_eq!(x, 1);

let mut x = Rational::from_signeds(22, 7);
x *= Rational::from_signeds(99, 100);
assert_eq!(x.to_string(), "1089/350");
```

### impl Named for Rational

#### const NAME: &'static str = _

The name of this type, as given by the `stringify` macro. See the documentation for `impl_named` for more details.

### impl<'a> Neg for &'a Rational

#### fn neg(self) -> Rational

Negates a `Rational`, taking it by reference.

$$
f(x) = -x.
$$

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Examples

```
use malachite_base::num::basic::traits::Zero;
use malachite_q::Rational;

assert_eq!(-&Rational::ZERO, 0);
assert_eq!((-&Rational::from_signeds(22, 7)).to_string(), "-22/7");
assert_eq!((-&Rational::from_signeds(-22, 7)).to_string(), "22/7");
```

#### type Output = Rational

The resulting type after applying the `-` operator.

### impl Neg for Rational

#### fn neg(self) -> Rational

Negates a `Rational`, taking it by value.

$$
f(x) = -x.
$$

##### Worst-case complexity

Constant time and additional memory.

##### Examples

```
use malachite_base::num::basic::traits::Zero;
use malachite_q::Rational;

assert_eq!(-Rational::ZERO, 0);
assert_eq!((-Rational::from_signeds(22, 7)).to_string(), "-22/7");
assert_eq!((-Rational::from_signeds(-22, 7)).to_string(), "22/7");
```

#### type Output = Rational

The resulting type after applying the `-` operator.

### impl NegAssign for Rational

#### fn neg_assign(&mut self)

Negates a `Rational` in place.

$$
x \gets -x.
$$

##### Worst-case complexity

Constant time and additional memory.

##### Examples

```
use malachite_base::num::arithmetic::traits::NegAssign;
use malachite_base::num::basic::traits::Zero;
use malachite_q::Rational;

let mut x = Rational::ZERO;
x.neg_assign();
assert_eq!(x, 0);

let mut x = Rational::from_signeds(22, 7);
x.neg_assign();
assert_eq!(x.to_string(), "-22/7");

let mut x = Rational::from_signeds(-22, 7);
x.neg_assign();
assert_eq!(x.to_string(), "22/7");
```

### impl NegativeOne for Rational

The constant -1.

#### const NEGATIVE_ONE: Rational = _

### impl<'a> NextPowerOf2 for &'a Rational

#### fn next_power_of_2(self) -> Rational

Finds the smallest power of 2 greater than or equal to a `Rational`. The `Rational` is taken by reference.

$f(x) = 2^{\lceil \log_2 x \rceil}$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Panics

Panics if `self` is less than or equal to zero.

##### Examples

```
use malachite_base::num::arithmetic::traits::NextPowerOf2;
use malachite_q::Rational;

assert_eq!((&Rational::from(123)).next_power_of_2(), 128);
assert_eq!((&Rational::from_signeds(1, 10)).next_power_of_2().to_string(), "1/8");
```

#### type Output = Rational

### impl NextPowerOf2 for Rational

#### fn next_power_of_2(self) -> Rational

Finds the smallest power of 2 greater than or equal to a `Rational`. The `Rational` is taken by value.

$f(x) = 2^{\lceil \log_2 x \rceil}$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Panics

Panics if `self` is less than or equal to zero.

##### Examples

```
use malachite_base::num::arithmetic::traits::NextPowerOf2;
use malachite_q::Rational;

assert_eq!(Rational::from(123).next_power_of_2(), 128);
assert_eq!(Rational::from_signeds(1, 10).next_power_of_2().to_string(), "1/8");
```

#### type Output = Rational

### impl NextPowerOf2Assign for Rational

#### fn next_power_of_2_assign(&mut self)

Finds the smallest power of 2 greater than or equal to a `Rational`, replacing the `Rational` with the result in place.

$f(x) = 2^{\lceil \log_2 x \rceil}$.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(n)$

where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.

##### Panics

Panics if `self` is less than or equal to zero.

##### Examples

```
use malachite_base::num::arithmetic::traits::NextPowerOf2Assign;
use malachite_q::Rational;

let mut x = Rational::from(123);
x.next_power_of_2_assign();
assert_eq!(x, 128);

let mut x = Rational::from_signeds(1, 10);
x.next_power_of_2_assign();
assert_eq!(x.to_string(), "1/8");
```

### impl One for Rational

The constant 1.

#### const ONE: Rational = _

### impl OneHalf for Rational

The constant 1/2.

#### const ONE_HALF: Rational = _

### impl Ord for Rational

#### fn cmp(&self, other: &Rational) -> Ordering

Compares two `Rational`s.

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.

##### Examples

```
use malachite_base::num::basic::traits::OneHalf;
use malachite_q::Rational;
use std::str::FromStr;

assert!(Rational::from_str("2/3").unwrap() > Rational::ONE_HALF);
assert!(Rational::from_str("-2/3").unwrap() < Rational::ONE_HALF);
```

#### fn max(self, other: Self) -> Self where Self: Sized

Compares and returns the maximum of two values.

#### fn min(self, other: Self) -> Self where Self: Sized

Compares and returns the minimum of two values.

#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>

Restricts a value to a certain interval.

### impl OrdAbs for Rational

#### fn cmp_abs(&self, other: &Rational) -> Ordering

Compares the absolute values of two `Rational`s.

##### Worst-case complexity

$T(n) = O(n \log n \log\log n)$

$M(n) = O(n \log n)$

where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`.

##### Examples

```
use malachite_base::num::basic::traits::OneHalf;
use malachite_base::num::comparison::traits::OrdAbs;
use malachite_q::Rational;
use std::cmp::Ordering;
use std::str::FromStr;

assert_eq!(
    Rational::from_str("2/3").unwrap().cmp_abs(&Rational::ONE_HALF),
    Ordering::Greater
);
assert_eq!(
    Rational::from_str("-2/3").unwrap().cmp_abs(&Rational::ONE_HALF),
    Ordering::Greater
);
```

### impl PartialEq<Integer> for Rational

#### fn eq(&self, other: &Integer) -> bool

Determines whether a `Rational` is equal to an `Integer`.

##### Worst-case complexity

$T(n) = O(n)$

$M(n) = O(1)$

where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`.

##### Examples

```
use malachite_nz::integer::Integer;
use malachite_q::Rational;

assert!(Rational::from(-123) == Integer::from(-123));
assert!(Rational::from_signeds(22, 7) != Integer::from(5));
```

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`.
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Natural> for Rational #### fn eq(&self, other: &Natural) -> bool Determines whether a `Rational` is equal to a `Natural`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Rational::from(123) == Natural::from(123u32)); assert!(Rational::from_signeds(22, 7) != Natural::from(5u32)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Rational> for Integer #### fn eq(&self, other: &Rational) -> bool Determines whether an `Integer` is equal to a `Rational`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Integer::from(-123) == Rational::from(-123)); assert!(Integer::from(5) != Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Rational> for Natural #### fn eq(&self, other: &Rational) -> bool Determines whether a `Natural` is equal to a `Rational`. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `min(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Natural::from(123u32) == Rational::from(123)); assert!(Natural::from(5u32) != Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f32> for Rational #### fn eq(&self, other: &f32) -> bool Determines whether a `Rational` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.sci_exponent().abs())`, and $m$ is `other.sci_exponent().abs()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<f64> for Rational #### fn eq(&self, other: &f64) -> bool Determines whether a `Rational` is equal to a primitive float. ##### Worst-case complexity $T(n) = O(n)$ $M(m) = O(m)$ where $T$ is time, $M$ is additional memory, $n$ is `max(self.significant_bits(), other.sci_exponent().abs())`, and $m$ is `other.sci_exponent().abs()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i128> for Rational #### fn eq(&self, other: &i128) -> bool Determines whether a `Rational` is equal to a signed primitive integer. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i16> for Rational #### fn eq(&self, other: &i16) -> bool Determines whether a `Rational` is equal to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i32> for Rational #### fn eq(&self, other: &i32) -> bool Determines whether a `Rational` is equal to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i64> for Rational #### fn eq(&self, other: &i64) -> bool Determines whether a `Rational` is equal to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<i8> for Rational #### fn eq(&self, other: &i8) -> bool Determines whether a `Rational` is equal to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<isize> for Rational #### fn eq(&self, other: &isize) -> bool Determines whether a `Rational` is equal to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u128> for Rational #### fn eq(&self, other: &u128) -> bool Determines whether a `Rational` is equal to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u16> for Rational #### fn eq(&self, other: &u16) -> bool Determines whether a `Rational` is equal to an unsigned primitive integer. 
##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u32> for Rational #### fn eq(&self, other: &u32) -> bool Determines whether a `Rational` is equal to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u64> for Rational #### fn eq(&self, other: &u64) -> bool Determines whether a `Rational` is equal to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<u8> for Rational #### fn eq(&self, other: &u8) -> bool Determines whether a `Rational` is equal to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<usize> for Rational #### fn eq(&self, other: &usize) -> bool Determines whether a `Rational` is equal to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(1)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialEq<Rational> for Rational #### fn eq(&self, other: &Rational) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<Integer> for Rational #### fn partial_cmp(&self, other: &Integer) -> Option<OrderingCompares a `Rational` to an `Integer`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Rational::from_signeds(22, 7) > Integer::from(3)); assert!(Rational::from_signeds(22, 7) < Integer::from(4)); assert!(Rational::from_signeds(-22, 7) < Integer::from(-3)); assert!(Rational::from_signeds(-22, 7) > Integer::from(-4)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. 
Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Natural) -> Option<OrderingCompares a `Rational` to a `Natural`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Rational::from_signeds(22, 7) > Natural::from(3u32)); assert!(Rational::from_signeds(22, 7) < Natural::from(4u32)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Rational) -> Option<OrderingCompares an `Integer` to a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::Rational; assert!(Integer::from(3) < Rational::from_signeds(22, 7)); assert!(Integer::from(4) > Rational::from_signeds(22, 7)); assert!(Integer::from(-3) > Rational::from_signeds(-22, 7)); assert!(Integer::from(-4) < Rational::from_signeds(-22, 7)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Rational) -> Option<OrderingCompares a `Natural` to a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::Rational; assert!(Natural::from(3u32) < Rational::from_signeds(22, 7)); assert!(Natural::from(4u32) > Rational::from_signeds(22, 7)); ``` 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. 
Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f32) -> Option<OrderingCompares a `Rational` to a primitive float. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.sci_exponent().abs())`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &f64) -> Option<OrderingCompares a `Rational` to a primitive float. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.sci_exponent().abs())`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i128) -> Option<OrderingCompares a `Rational` to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i16) -> Option<OrderingCompares a `Rational` to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. 
##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i32) -> Option<OrderingCompares a `Rational` to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i64) -> Option<OrderingCompares a `Rational` to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &i8) -> Option<OrderingCompares a `Rational` to a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &isize) -> Option<OrderingCompares a `Rational` to a signed primitive integer. 
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u128) -> Option<OrderingCompares a `Rational` to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u16) -> Option<OrderingCompares a `Rational` to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u32) -> Option<OrderingCompares a `Rational` to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. 
Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u64) -> Option<OrderingCompares a `Rational` to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &u8) -> Option<OrderingCompares a `Rational` to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &usize) -> Option<OrderingCompares a `Rational` to an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp(&self, other: &Rational) -> Option<OrderingCompares two `Rational`s. See the documentation for the `Ord` implementation. 1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. 
Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn partial_cmp_abs(&self, other: &Integer) -> Option<OrderingCompares the absolute values of a `Rational` and an `Integer`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Rational::from_signeds(22, 7).partial_cmp_abs(&Integer::from(3)), Some(Ordering::Greater) ); assert_eq!( Rational::from_signeds(-22, 7).partial_cmp_abs(&Integer::from(-3)), Some(Ordering::Greater) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Natural) -> Option<OrderingCompares the absolute values of a `Rational` and a `Natural`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::natural::Natural; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Rational::from_signeds(22, 7).partial_cmp_abs(&Natural::from(3u32)), Some(Ordering::Greater) ); assert_eq!( Rational::from_signeds(-22, 7).partial_cmp_abs(&Natural::from(3u32)), Some(Ordering::Greater) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an `Integer` and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::integer::Integer; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Integer::from(3).partial_cmp_abs(&Rational::from_signeds(22, 7)), Some(Ordering::Less) ); assert_eq!( Integer::from(-3).partial_cmp_abs(&Rational::from_signeds(-22, 7)), Some(Ordering::Less) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. 
Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a `Natural` and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::comparison::traits::PartialOrdAbs; use malachite_nz::natural::Natural; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!( Natural::from(3u32).partial_cmp_abs(&Rational::from_signeds(22, 7)), Some(Ordering::Less) ); assert_eq!( Natural::from(3u32).partial_cmp_abs(&Rational::from_signeds(-22, 7)), Some(Ordering::Less) ); ``` #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a primitive float and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.sci_exponent().abs(), other.significant_bits())`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a primitive float and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.sci_exponent().abs(), other.significant_bits())`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a signed primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. 
Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a signed primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a signed primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a signed primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a signed primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of a signed primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. 
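Since the examples for these signed-integer comparisons are only referenced above ("See here"), the following is a minimal illustrative sketch of the expected behaviour; the concrete calls are an assumption based on the `partial_cmp_abs(&self, other: &Rational)` signatures documented above, not taken from the crate's own examples.

```
// Illustrative sketch; assumes the `PartialOrdAbs<Rational>` impls for signed
// primitive integers described above.
use malachite_base::num::comparison::traits::PartialOrdAbs;
use malachite_q::Rational;
use std::cmp::Ordering;

// |3| < |22/7| (about 3.14), so the comparison by absolute value is Less.
assert_eq!(3i32.partial_cmp_abs(&Rational::from_signeds(22, 7)), Some(Ordering::Less));
// |-4| = 4 > |-22/7|, so the comparison by absolute value is Greater.
assert_eq!((-4i32).partial_cmp_abs(&Rational::from_signeds(-22, 7)), Some(Ordering::Greater));
```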
#### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. 
Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &Rational) -> Option<OrderingCompares the absolute values of an unsigned primitive integer and a `Rational`. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `other.significant_bits()`. See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f32) -> Option<OrderingCompares the absolute values of a `Rational` and a primitive float. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.sci_exponent().abs())`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &f64) -> Option<OrderingCompares the absolute values of a `Rational` and a primitive float. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.sci_exponent().abs())`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i128) -> Option<OrderingCompares the absolute values of a `Rational` and a signed primitive integer. 
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i16) -> Option<OrderingCompares the absolute values of a `Rational` and a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i32) -> Option<OrderingCompares the absolute values of a `Rational` and a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i64) -> Option<OrderingCompares the absolute values of a `Rational` and a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &i8) -> Option<OrderingCompares the absolute values of a `Rational` and a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. 
Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &isize) -> Option<OrderingCompares the absolute values of a `Rational` and a signed primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u128) -> Option<OrderingCompares the absolute values of a `Rational` and an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u16) -> Option<OrderingCompares the absolute values of a `Rational` and an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. #### fn partial_cmp_abs(&self, other: &u32) -> Option<OrderingCompares the absolute values of a `Rational` and an unsigned primitive integer. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn lt_abs(&self, other: &Rhs) -> bool Determines whether the absolute value of one number is less than the absolute value of another. Determines whether the absolute value of one number is less than or equal to the absolute value of another. Determines whether the absolute value of one number is greater than the absolute value of another. Determines whether the absolute value of one number is greater than or equal to the absolute value of another. 
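Likewise, for the `Rational`-versus-unsigned-integer comparisons documented above, here is a small sketch of the expected results; it is an assumption following the pattern of the `Integer` and `Natural` examples rather than the crate's own linked examples.

```
// Illustrative sketch; assumes the `PartialOrdAbs<u32> for Rational` impl
// described above.
use malachite_base::num::comparison::traits::PartialOrdAbs;
use malachite_q::Rational;
use std::cmp::Ordering;

// |-22/7| (about 3.14) lies between 3 and 4 in absolute value.
assert_eq!(Rational::from_signeds(-22, 7).partial_cmp_abs(&3u32), Some(Ordering::Greater));
assert_eq!(Rational::from_signeds(-22, 7).partial_cmp_abs(&4u32), Some(Ordering::Less));
```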
#### fn partial_cmp_abs(&self, other: &u64) -> Option<Ordering>
Compares the absolute values of a `Rational` and an unsigned primitive integer.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples
See here.
#### fn lt_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is less than the absolute value of another.
#### fn le_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is less than or equal to the absolute value of another.
#### fn gt_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is greater than the absolute value of another.
#### fn ge_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is greater than or equal to the absolute value of another.
#### fn partial_cmp_abs(&self, other: &u8) -> Option<Ordering>
Compares the absolute values of a `Rational` and an unsigned primitive integer.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples
See here.
#### fn lt_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is less than the absolute value of another.
#### fn le_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is less than or equal to the absolute value of another.
#### fn gt_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is greater than the absolute value of another.
#### fn ge_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is greater than or equal to the absolute value of another.
#### fn partial_cmp_abs(&self, other: &usize) -> Option<Ordering>
Compares the absolute values of a `Rational` and an unsigned primitive integer.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`.
##### Examples
See here.
#### fn lt_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is less than the absolute value of another.
#### fn le_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is less than or equal to the absolute value of another.
#### fn gt_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is greater than the absolute value of another.
#### fn ge_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is greater than or equal to the absolute value of another.
#### fn partial_cmp_abs(&self, other: &Rational) -> Option<Ordering>
Compares the absolute values of two `Rational`s. See the documentation for the `OrdAbs` implementation.
#### fn lt_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is less than the absolute value of another.
#### fn le_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is less than or equal to the absolute value of another.
#### fn gt_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is greater than the absolute value of another.
#### fn ge_abs(&self, other: &Rhs) -> bool
Determines whether the absolute value of one number is greater than or equal to the absolute value of another.
### impl<'a> Pow<i64> for &'a Rational
#### fn pow(self, exp: i64) -> Rational
Raises a `Rational` to a power, taking the `Rational` by reference.
$f(x, n) = x^n$.
##### Worst-case complexity
$T(n, m) = O(nm \log (nm) \log\log (nm))$, $M(n, m) = O(nm \log (nm))$, where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp.abs()`.
##### Panics
Panics if `self` is zero and `exp` is negative.
##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!((&Rational::from_signeds(22, 7)).pow(3i64).to_string(), "10648/343"); assert_eq!((&Rational::from_signeds(-22, 7)).pow(3i64).to_string(), "-10648/343"); assert_eq!((&Rational::from_signeds(-22, 7)).pow(4i64).to_string(), "234256/2401"); assert_eq!((&Rational::from_signeds(22, 7)).pow(-3i64).to_string(), "343/10648"); assert_eq!((&Rational::from_signeds(-22, 7)).pow(-3i64).to_string(), "-343/10648"); assert_eq!((&Rational::from_signeds(-22, 7)).pow(-4i64).to_string(), "2401/234256"); ``` #### type Output = Rational ### impl Pow<i64> for Rational #### fn pow(self, exp: i64) -> Rational Raises a `Rational` to a power, taking the `Rational` by value. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp.abs()`. ##### Panics Panics if `self` is zero and `exp` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!(Rational::from_signeds(22, 7).pow(3i64).to_string(), "10648/343"); assert_eq!(Rational::from_signeds(-22, 7).pow(3i64).to_string(), "-10648/343"); assert_eq!(Rational::from_signeds(-22, 7).pow(4i64).to_string(), "234256/2401"); assert_eq!(Rational::from_signeds(22, 7).pow(-3i64).to_string(), "343/10648"); assert_eq!(Rational::from_signeds(-22, 7).pow(-3i64).to_string(), "-343/10648"); assert_eq!(Rational::from_signeds(-22, 7).pow(-4i64).to_string(), "2401/234256"); ``` #### type Output = Rational ### impl<'a> Pow<u64> for &'a Rational #### fn pow(self, exp: u64) -> Rational Raises a `Rational` to a power, taking the `Rational` by reference. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!((&Rational::from_signeds(22, 7)).pow(3u64).to_string(), "10648/343"); assert_eq!((&Rational::from_signeds(-22, 7)).pow(3u64).to_string(), "-10648/343"); assert_eq!((&Rational::from_signeds(-22, 7)).pow(4u64).to_string(), "234256/2401"); ``` #### type Output = Rational ### impl Pow<u64> for Rational #### fn pow(self, exp: u64) -> Rational Raises a `Rational` to a power, taking the `Rational` by value. $f(x, n) = x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Pow; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!(Rational::from_signeds(22, 7).pow(3u64).to_string(), "10648/343"); assert_eq!(Rational::from_signeds(-22, 7).pow(3u64).to_string(), "-10648/343"); assert_eq!(Rational::from_signeds(-22, 7).pow(4u64).to_string(), "234256/2401"); ``` #### type Output = Rational ### impl PowAssign<i64> for Rational #### fn pow_assign(&mut self, exp: i64) Raises a `Rational` to a power in place. $x \gets x^n$. 
##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp.abs()`. ##### Panics Panics if `self` is zero and `exp` is negative. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowAssign; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; let mut x = Rational::from_signeds(22, 7); x.pow_assign(3i64); assert_eq!(x.to_string(), "10648/343"); let mut x = Rational::from_signeds(-22, 7); x.pow_assign(3i64); assert_eq!(x.to_string(), "-10648/343"); let mut x = Rational::from_signeds(22, 7); x.pow_assign(4i64); assert_eq!(x.to_string(), "234256/2401"); let mut x = Rational::from_signeds(22, 7); x.pow_assign(-3i64); assert_eq!(x.to_string(), "343/10648"); let mut x = Rational::from_signeds(-22, 7); x.pow_assign(-3i64); assert_eq!(x.to_string(), "-343/10648"); let mut x = Rational::from_signeds(22, 7); x.pow_assign(-4i64); assert_eq!(x.to_string(), "2401/234256"); ``` ### impl PowAssign<u64> for Rational #### fn pow_assign(&mut self, exp: u64) Raises a `Rational` to a power in place. $x \gets x^n$. ##### Worst-case complexity $T(n, m) = O(nm \log (nm) \log\log (nm))$ $M(n, m) = O(nm \log (nm))$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `exp`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowAssign; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; let mut x = Rational::from_signeds(22, 7); x.pow_assign(3u64); assert_eq!(x.to_string(), "10648/343"); let mut x = Rational::from_signeds(-22, 7); x.pow_assign(3u64); assert_eq!(x.to_string(), "-10648/343"); let mut x = Rational::from_signeds(22, 7); x.pow_assign(4u64); assert_eq!(x.to_string(), "234256/2401"); ``` ### impl PowerOf2<i64> for Rational #### fn power_of_2(pow: i64) -> Rational Raises 2 to an integer power. $f(k) = 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow.abs()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowerOf2; use malachite_q::Rational; assert_eq!(Rational::power_of_2(0i64), 1); assert_eq!(Rational::power_of_2(3i64), 8); assert_eq!(Rational::power_of_2(100i64).to_string(), "1267650600228229401496703205376"); assert_eq!(Rational::power_of_2(-3i64).to_string(), "1/8"); assert_eq!(Rational::power_of_2(-100i64).to_string(), "1/1267650600228229401496703205376"); ``` ### impl PowerOf2<u64> for Rational #### fn power_of_2(pow: u64) -> Rational Raises 2 to an integer power. $f(k) = 2^k$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `pow`. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowerOf2; use malachite_q::Rational; assert_eq!(Rational::power_of_2(0u64), 1); assert_eq!(Rational::power_of_2(3u64), 8); assert_eq!(Rational::power_of_2(100u64).to_string(), "1267650600228229401496703205376"); ``` ### impl<'a> Product<&'a Rational> for Rational #### fn product<I>(xs: I) -> Rationalwhere I: Iterator<Item = &'a Rational>, Multiplies together all the `Rational`s in an iterator of `Rational` references. $$ f((x_i)_ {i=0}^{n-1}) = \prod_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Rational::sum(xs.map(Rational::significant_bits))`. 
##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_q::Rational; use std::iter::Product; assert_eq!( Rational::product( vec_from_str::<Rational>("[1, 2/3, 3/4, 4/5, 5/6, 6/7, 7/8, 8/9, 9/10]") .unwrap().iter() ).to_string(), "1/5" ); ``` ### impl Product<Rational> for Rational #### fn product<I>(xs: I) -> Rationalwhere I: Iterator<Item = Rational>, Multiplies together all the `Rational`s in an iterator. $$ f((x_i)_ {i=0}^{n-1}) = \prod_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^3 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Rational::sum(xs.map(Rational::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_q::Rational; use std::iter::Product; assert_eq!( Rational::product( vec_from_str::<Rational>("[1, 2/3, 3/4, 4/5, 5/6, 6/7, 7/8, 8/9, 9/10]") .unwrap().into_iter() ).to_string(), "1/5" ); ``` ### impl<'a> Reciprocal for &'a Rational #### fn reciprocal(self) -> Rational Reciprocates a `Rational`, taking it by reference. $$ f(x) = 1/x. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Reciprocal; use malachite_q::Rational; assert_eq!((&Rational::from_signeds(22, 7)).reciprocal().to_string(), "7/22"); assert_eq!((&Rational::from_signeds(7, 22)).reciprocal().to_string(), "22/7"); ``` #### type Output = Rational ### impl Reciprocal for Rational #### fn reciprocal(self) -> Rational Reciprocates a `Rational`, taking it by value. $$ f(x) = 1/x. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Reciprocal; use malachite_q::Rational; assert_eq!(Rational::from_signeds(22, 7).reciprocal().to_string(), "7/22"); assert_eq!(Rational::from_signeds(7, 22).reciprocal().to_string(), "22/7"); ``` #### type Output = Rational ### impl ReciprocalAssign for Rational #### fn reciprocal_assign(&mut self) Reciprocates a `Rational` in place. $$ x \gets 1/x. $$ ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::ReciprocalAssign; use malachite_q::Rational; let mut x = Rational::from_signeds(22, 7); x.reciprocal_assign(); assert_eq!(x.to_string(), "7/22"); let mut x = Rational::from_signeds(7, 22); x.reciprocal_assign(); assert_eq!(x.to_string(), "22/7"); ``` ### impl<'a, 'b> RoundToMultiple<&'b Rational> for &'a Rational #### fn round_to_multiple( self, other: &'b Rational, rm: RoundingMode ) -> (Rational, Ordering) Rounds a `Rational` to an integer multiple of another `Rational`, according to a specified rounding mode. Both `Rational`s are taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. 
Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \Z$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::ExactFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_q::Rational; assert_eq!( (&Rational::from(-5)).round_to_multiple(&Rational::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); let q = Rational::exact_from(std::f64::consts::PI); let hundredth = Rational::from_signeds(1, 100); assert_eq!( (&q).round_to_multiple(&hundredth, RoundingMode::Down).to_debug_string(), "(157/50, Less)" ); assert_eq!( (&q).round_to_multiple(&hundredth, RoundingMode::Floor).to_debug_string(), "(157/50, Less)" ); assert_eq!( (&q).round_to_multiple(&hundredth, RoundingMode::Up).to_debug_string(), "(63/20, Greater)" ); assert_eq!( (&q).round_to_multiple(&hundredth, RoundingMode::Ceiling).to_debug_string(), "(63/20, Greater)" ); assert_eq!( (&q).round_to_multiple(&hundredth, RoundingMode::Nearest).to_debug_string(), "(157/50, Less)" ); ``` #### type Output = Rational ### impl<'a> RoundToMultiple<&'a Rational> for Rational #### fn round_to_multiple( self, other: &'a Rational, rm: RoundingMode ) -> (Rational, Ordering) Rounds a `Rational` to an integer multiple of another `Rational`, according to a specified rounding mode. The first `Rational` is taken by value and the second by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \Z$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. 
* If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::ExactFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_q::Rational; assert_eq!( Rational::from(-5).round_to_multiple(&Rational::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); let q = Rational::exact_from(std::f64::consts::PI); let hundredth = Rational::from_signeds(1, 100); assert_eq!( q.clone().round_to_multiple(&hundredth, RoundingMode::Down).to_debug_string(), "(157/50, Less)" ); assert_eq!( q.clone().round_to_multiple(&hundredth, RoundingMode::Floor).to_debug_string(), "(157/50, Less)" ); assert_eq!( q.clone().round_to_multiple(&hundredth, RoundingMode::Up).to_debug_string(), "(63/20, Greater)" ); assert_eq!( q.clone().round_to_multiple(&hundredth, RoundingMode::Ceiling).to_debug_string(), "(63/20, Greater)" ); assert_eq!( q.clone().round_to_multiple(&hundredth, RoundingMode::Nearest).to_debug_string(), "(157/50, Less)" ); ``` #### type Output = Rational ### impl<'a> RoundToMultiple<Rational> for &'a Rational #### fn round_to_multiple( self, other: Rational, rm: RoundingMode ) -> (Rational, Ordering) Rounds a `Rational` to an integer multiple of another `Rational`, according to a specified rounding mode. The first `Rational` is taken by reference and the second by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \Z$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::ExactFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_q::Rational; assert_eq!( (&Rational::from(-5)).round_to_multiple(Rational::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); let q = Rational::exact_from(std::f64::consts::PI); let hundredth = Rational::from_signeds(1, 100); assert_eq!( (&q).round_to_multiple(hundredth.clone(), RoundingMode::Down).to_debug_string(), "(157/50, Less)" ); assert_eq!( (&q).round_to_multiple(hundredth.clone(), RoundingMode::Floor).to_debug_string(), "(157/50, Less)" ); assert_eq!( (&q).round_to_multiple(hundredth.clone(), RoundingMode::Up).to_debug_string(), "(63/20, Greater)" ); assert_eq!( (&q).round_to_multiple(hundredth.clone(), RoundingMode::Ceiling).to_debug_string(), "(63/20, Greater)" ); assert_eq!( (&q).round_to_multiple(hundredth.clone(), RoundingMode::Nearest).to_debug_string(), "(157/50, Less)" ); ``` #### type Output = Rational ### impl RoundToMultiple<Rational> for Rational #### fn round_to_multiple( self, other: Rational, rm: RoundingMode ) -> (Rational, Ordering) Rounds a `Rational` to an integer multiple of another `Rational`, according to a specified rounding mode. Both `Rational`s are taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{y}$: $f(x, y, \mathrm{Down}) = f(x, y, \mathrm{Floor}) = y \lfloor q \rfloor.$ $f(x, y, \mathrm{Up}) = f(x, y, \mathrm{Ceiling}) = y \lceil q \rceil.$ $$ f(x, y, \mathrm{Nearest}) = \begin{cases} y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ y \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ y \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, y, \mathrm{Exact}) = x$, but panics if $q \notin \Z$. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultiple; use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::ExactFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_q::Rational; assert_eq!( Rational::from(-5).round_to_multiple(Rational::ZERO, RoundingMode::Down) .to_debug_string(), "(0, Greater)" ); let q = Rational::exact_from(std::f64::consts::PI); let hundredth = Rational::from_signeds(1, 100); assert_eq!( q.clone().round_to_multiple(hundredth.clone(), RoundingMode::Down).to_debug_string(), "(157/50, Less)" ); assert_eq!( q.clone().round_to_multiple(hundredth.clone(), RoundingMode::Floor).to_debug_string(), "(157/50, Less)" ); assert_eq!( q.clone().round_to_multiple(hundredth.clone(), RoundingMode::Up).to_debug_string(), "(63/20, Greater)" ); assert_eq!( q.clone().round_to_multiple(hundredth.clone(), RoundingMode::Ceiling) .to_debug_string(), "(63/20, Greater)" ); assert_eq!( q.clone().round_to_multiple(hundredth.clone(), RoundingMode::Nearest) .to_debug_string(), "(157/50, Less)" ); ``` #### type Output = Rational ### impl<'a> RoundToMultipleAssign<&'a Rational> for Rational #### fn round_to_multiple_assign( &mut self, other: &'a Rational, rm: RoundingMode ) -> Ordering Rounds a `Rational` to an integer multiple of another `Rational` in place, according to a specified rounding mode. The `Rational` on the right-hand side is taken by reference. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::ExactFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_q::Rational; use std::cmp::Ordering; let mut x = Rational::from(-5); assert_eq!( x.round_to_multiple_assign(&Rational::ZERO, RoundingMode::Down), Ordering::Greater ); assert_eq!(x, 0); let q = Rational::exact_from(std::f64::consts::PI); let hundredth = Rational::from_signeds(1, 100); let mut x = q.clone(); assert_eq!(x.round_to_multiple_assign(&hundredth, RoundingMode::Down), Ordering::Less); assert_eq!(x.to_string(), "157/50"); let mut x = q.clone(); assert_eq!(x.round_to_multiple_assign(&hundredth, RoundingMode::Floor), Ordering::Less); assert_eq!(x.to_string(), "157/50"); let mut x = q.clone(); assert_eq!(x.round_to_multiple_assign(&hundredth, RoundingMode::Up), Ordering::Greater); assert_eq!(x.to_string(), "63/20"); let mut x = q.clone(); assert_eq!( x.round_to_multiple_assign(&hundredth, RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(x.to_string(), "63/20"); let mut x = q.clone(); assert_eq!(x.round_to_multiple_assign(&hundredth, RoundingMode::Nearest), Ordering::Less); assert_eq!(x.to_string(), "157/50"); ``` ### impl RoundToMultipleAssign<Rational> for Rational #### fn round_to_multiple_assign( &mut self, other: Rational, rm: RoundingMode ) -> Ordering Rounds a `Rational` to an integer multiple of another `Rational` in place, according to a specified rounding mode. 
The `Rational` on the right-hand side is taken by value. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value. See the `RoundToMultiple` documentation for details. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Panics * If `rm` is `Exact`, but `self` is not a multiple of `other`. * If `self` is nonzero, `other` is zero, and `rm` is trying to round away from zero. ##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleAssign; use malachite_base::num::basic::traits::Zero; use malachite_base::num::conversion::traits::ExactFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_q::Rational; use std::cmp::Ordering; let mut x = Rational::from(-5); assert_eq!( x.round_to_multiple_assign(Rational::ZERO, RoundingMode::Down), Ordering::Greater) ; assert_eq!(x, 0); let q = Rational::exact_from(std::f64::consts::PI); let hundredth = Rational::from_signeds(1, 100); let mut x = q.clone(); assert_eq!( x.round_to_multiple_assign(hundredth.clone(), RoundingMode::Down), Ordering::Less ); assert_eq!(x.to_string(), "157/50"); let mut x = q.clone(); assert_eq!( x.round_to_multiple_assign(hundredth.clone(), RoundingMode::Floor), Ordering::Less ); assert_eq!(x.to_string(), "157/50"); let mut x = q.clone(); assert_eq!( x.round_to_multiple_assign(hundredth.clone(), RoundingMode::Up), Ordering::Greater ); assert_eq!(x.to_string(), "63/20"); let mut x = q.clone(); assert_eq!( x.round_to_multiple_assign(hundredth.clone(), RoundingMode::Ceiling), Ordering::Greater ); assert_eq!(x.to_string(), "63/20"); let mut x = q.clone(); assert_eq!( x.round_to_multiple_assign(hundredth.clone(), RoundingMode::Nearest), Ordering::Less ); assert_eq!(x.to_string(), "157/50"); ``` ### impl<'a> RoundToMultipleOfPowerOf2<i64> for &'a Rational #### fn round_to_multiple_of_power_of_2( self, pow: i64, rm: RoundingMode ) -> (Rational, Ordering) Rounds a `Rational` to an integer multiple of $2^k$ according to a specified rounding mode. The `Rational` is taken by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = 2^k \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = 2^k \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. 
##### Examples ``` use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2; use malachite_base::num::conversion::traits::ExactFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_q::Rational; let q = Rational::exact_from(std::f64::consts::PI); assert_eq!( (&q).round_to_multiple_of_power_of_2(-3, RoundingMode::Floor).to_debug_string(), "(25/8, Less)" ); assert_eq!( (&q).round_to_multiple_of_power_of_2(-3, RoundingMode::Down).to_debug_string(), "(25/8, Less)" ); assert_eq!( (&q).round_to_multiple_of_power_of_2(-3, RoundingMode::Ceiling).to_debug_string(), "(13/4, Greater)" ); assert_eq!( (&q).round_to_multiple_of_power_of_2(-3, RoundingMode::Up).to_debug_string(), "(13/4, Greater)" ); assert_eq!( (&q).round_to_multiple_of_power_of_2(-3, RoundingMode::Nearest).to_debug_string(), "(25/8, Less)" ); ``` #### type Output = Rational ### impl RoundToMultipleOfPowerOf2<i64> for Rational #### fn round_to_multiple_of_power_of_2( self, pow: i64, rm: RoundingMode ) -> (Rational, Ordering) Rounds a `Rational` to an integer multiple of $2^k$ according to a specified rounding mode. The `Rational` is taken by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. Let $q = \frac{x}{2^k}$: $f(x, k, \mathrm{Down}) = 2^k \operatorname{sgn}(q) \lfloor |q| \rfloor.$ $f(x, k, \mathrm{Up}) = 2^k \operatorname{sgn}(q) \lceil |q| \rceil.$ $f(x, k, \mathrm{Floor}) = 2^k \lfloor q \rfloor.$ $f(x, k, \mathrm{Ceiling}) = 2^k \lceil q \rceil.$ $$ f(x, k, \mathrm{Nearest}) = \begin{cases} 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor < \frac{1}{2} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor > \frac{1}{2} \\ 2^k \lfloor q \rfloor & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is even} \\ 2^k \lceil q \rceil & \text{if} \quad q - \lfloor q \rfloor = \frac{1}{2} \ \text{and} \ \lfloor q \rfloor \ \text{is odd.} \end{cases} $$ $f(x, k, \mathrm{Exact}) = 2^k q$, but panics if $q \notin \Z$. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`. ##### Panics Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2. 
##### Examples
```
use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2;
use malachite_base::num::conversion::traits::ExactFrom;
use malachite_base::rounding_modes::RoundingMode;
use malachite_base::strings::ToDebugString;
use malachite_q::Rational;

let q = Rational::exact_from(std::f64::consts::PI);
assert_eq!(
    q.clone().round_to_multiple_of_power_of_2(-3, RoundingMode::Floor).to_debug_string(),
    "(25/8, Less)"
);
assert_eq!(
    q.clone().round_to_multiple_of_power_of_2(-3, RoundingMode::Down).to_debug_string(),
    "(25/8, Less)"
);
assert_eq!(
    q.clone().round_to_multiple_of_power_of_2(-3, RoundingMode::Ceiling).to_debug_string(),
    "(13/4, Greater)"
);
assert_eq!(
    q.clone().round_to_multiple_of_power_of_2(-3, RoundingMode::Up).to_debug_string(),
    "(13/4, Greater)"
);
assert_eq!(
    q.clone().round_to_multiple_of_power_of_2(-3, RoundingMode::Nearest).to_debug_string(),
    "(25/8, Less)"
);
```
#### type Output = Rational
### impl RoundToMultipleOfPowerOf2Assign<i64> for Rational
#### fn round_to_multiple_of_power_of_2_assign(&mut self, pow: i64, rm: RoundingMode) -> Ordering
Rounds a `Rational` to a multiple of $2^k$ in place, according to a specified rounding mode. An `Ordering` is returned, indicating whether the returned value is less than, equal to, or greater than the original value.
See the `RoundToMultipleOfPowerOf2` documentation for details.
##### Worst-case complexity
$T(n) = O(n)$, $M(n) = O(n)$, where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), pow / Limb::WIDTH)`.
##### Panics
Panics if `rm` is `Exact`, but `self` is not a multiple of the power of 2.
##### Examples
```
use malachite_base::num::arithmetic::traits::RoundToMultipleOfPowerOf2Assign;
use malachite_base::num::conversion::traits::ExactFrom;
use malachite_base::rounding_modes::RoundingMode;
use malachite_q::Rational;
use std::cmp::Ordering;

let q = Rational::exact_from(std::f64::consts::PI);

let mut x = q.clone();
assert_eq!(x.round_to_multiple_of_power_of_2_assign(-3, RoundingMode::Floor), Ordering::Less);
assert_eq!(x.to_string(), "25/8");

let mut x = q.clone();
assert_eq!(x.round_to_multiple_of_power_of_2_assign(-3, RoundingMode::Down), Ordering::Less);
assert_eq!(x.to_string(), "25/8");

let mut x = q.clone();
assert_eq!(x.round_to_multiple_of_power_of_2_assign(-3, RoundingMode::Ceiling), Ordering::Greater);
assert_eq!(x.to_string(), "13/4");

let mut x = q.clone();
assert_eq!(x.round_to_multiple_of_power_of_2_assign(-3, RoundingMode::Up), Ordering::Greater);
assert_eq!(x.to_string(), "13/4");

let mut x = q.clone();
assert_eq!(x.round_to_multiple_of_power_of_2_assign(-3, RoundingMode::Nearest), Ordering::Less);
assert_eq!(x.to_string(), "25/8");
```
### impl<'a> RoundingFrom<&'a Rational> for Integer
#### fn rounding_from(x: &Rational, rm: RoundingMode) -> (Integer, Ordering)
Converts a `Rational` to an `Integer`, using a specified `RoundingMode` and taking the `Rational` by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`.
##### Panics
Panics if the `Rational` is not an integer and `rm` is `Exact`.
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!( Integer::rounding_from(&Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Integer::rounding_from(&Rational::from(-123), RoundingMode::Exact).to_debug_string(), "(-123, Equal)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Floor) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Down) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Ceiling) .to_debug_string(), "(-3, Greater)" ); assert_eq!(Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Up) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Nearest) .to_debug_string(), "(-3, Greater)" ); ``` ### impl<'a> RoundingFrom<&'a Rational> for Natural #### fn rounding_from(x: &Rational, rm: RoundingMode) -> (Natural, Ordering) Converts a `Rational` to a `Natural`, using a specified `RoundingMode` and taking the `Rational` by reference. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. If the `Rational` is negative, then it will be rounded to zero when the `RoundingMode` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, or if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`. 
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Natural::rounding_from(&Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Down).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Ceiling).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(&Rational::from(-123), RoundingMode::Nearest).to_debug_string(), "(0, Greater)" ); ``` ### impl<'a> RoundingFrom<&'a Rational> for f32 #### fn rounding_from(value: &'a Rational, rm: RoundingMode) -> (f32, Ordering) Converts a `Rational` to a value of a primitive float according to a specified `RoundingMode`, taking the `Rational` by reference. * If the rounding mode is `Floor`, the largest float less than or equal to the `Rational` is returned. If the `Rational` is greater than the maximum finite float, then the maximum finite float is returned. If it is smaller than the minimum finite float, then negative infinity is returned. If it is between zero and the minimum positive float, then positive zero is returned. * If the rounding mode is `Ceiling`, the smallest float greater than or equal to the `Rational` is returned. If the `Rational` is greater than the maximum finite float, then positive infinity is returned. If it is smaller than the minimum finite float, then the minimum finite float is returned. If it is between zero and the maximum negative float, then negative zero is returned. * If the rounding mode is `Down`, then the rounding proceeds as with `Floor` if the `Rational` is non-negative and as with `Ceiling` if the `Rational` is negative. If the `Rational` is between the maximum negative float and the minimum positive float, then positive zero is returned when the `Rational` is non-negative and negative zero otherwise. * If the rounding mode is `Up`, then the rounding proceeds as with `Ceiling` if the `Rational` is non-negative and as with `Floor` if the `Rational` is negative. Positive zero is only returned when the `Rational` is zero, and negative zero is never returned. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Rational` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Rational` is greater than the maximum finite float, then the maximum finite float is returned. If the `Rational` is closer to zero than to any float (or if there is a tie between zero and another float), then positive or negative zero is returned, depending on the `Rational`’s sign. 
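As an unofficial illustration of the saturation rules just listed (not one of the linked examples; it assumes `Rational` implements `ExactFrom<f32>` in the same way it implements `ExactFrom<f64>` in the examples elsewhere on this page):
```
// Unofficial sketch, not taken from the malachite documentation.
use malachite_base::num::conversion::traits::{ExactFrom, RoundingFrom};
use malachite_base::rounding_modes::RoundingMode;
use malachite_q::Rational;
use std::cmp::Ordering;

// 1/3 is not exactly representable as an f32, so Floor and Ceiling bracket it.
let third = Rational::from_signeds(1, 3);
let (lo, o_lo) = f32::rounding_from(&third, RoundingMode::Floor);
let (hi, o_hi) = f32::rounding_from(&third, RoundingMode::Ceiling);
assert!(lo < hi);
assert_eq!((o_lo, o_hi), (Ordering::Less, Ordering::Greater));

// Anything above the largest finite f32 saturates to f32::MAX under Floor.
let huge = Rational::exact_from(f32::MAX) * Rational::from(2);
assert_eq!(f32::rounding_from(&huge, RoundingMode::Floor), (f32::MAX, Ordering::Less));
```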
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the rounding mode is `Exact` and `value` cannot be represented exactly.
##### Examples
See here.
### impl<'a> RoundingFrom<&'a Rational> for f64
#### fn rounding_from(value: &'a Rational, rm: RoundingMode) -> (f64, Ordering)
Converts a `Rational` to a value of a primitive float according to a specified `RoundingMode`, taking the `Rational` by reference.
* If the rounding mode is `Floor`, the largest float less than or equal to the `Rational` is returned. If the `Rational` is greater than the maximum finite float, then the maximum finite float is returned. If it is smaller than the minimum finite float, then negative infinity is returned. If it is between zero and the minimum positive float, then positive zero is returned.
* If the rounding mode is `Ceiling`, the smallest float greater than or equal to the `Rational` is returned. If the `Rational` is greater than the maximum finite float, then positive infinity is returned. If it is smaller than the minimum finite float, then the minimum finite float is returned. If it is between zero and the maximum negative float, then negative zero is returned.
* If the rounding mode is `Down`, then the rounding proceeds as with `Floor` if the `Rational` is non-negative and as with `Ceiling` if the `Rational` is negative. If the `Rational` is between the maximum negative float and the minimum positive float, then positive zero is returned when the `Rational` is non-negative and negative zero otherwise.
* If the rounding mode is `Up`, then the rounding proceeds as with `Ceiling` if the `Rational` is non-negative and as with `Floor` if the `Rational` is negative. Positive zero is only returned when the `Rational` is zero, and negative zero is never returned.
* If the rounding mode is `Nearest`, then the nearest float is returned. If the `Rational` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Rational` is greater than the maximum finite float, then the maximum finite float is returned. If the `Rational` is closer to zero than to any float (or if there is a tie between zero and another float), then positive or negative zero is returned, depending on the `Rational`'s sign.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the rounding mode is `Exact` and `value` cannot be represented exactly.
##### Examples
See here.
### impl<'a> RoundingFrom<&'a Rational> for i128
#### fn rounding_from(value: &Rational, rm: RoundingMode) -> (i128, Ordering)
Converts a `Rational` to a signed integer, using a specified `RoundingMode`.
If the `Rational` is smaller than the minimum value of the signed type, then it will be rounded to the minimum value when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic.
If the `Rational` is larger than the maximum value of the signed type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than `T::MIN` and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`.
##### Examples
See here.
### impl<'a> RoundingFrom<&'a Rational> for i16
#### fn rounding_from(value: &Rational, rm: RoundingMode) -> (i16, Ordering)
Converts a `Rational` to a signed integer, using a specified `RoundingMode`.
If the `Rational` is smaller than the minimum value of the signed type, then it will be rounded to the minimum value when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic.
If the `Rational` is larger than the maximum value of the signed type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than `T::MIN` and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`.
##### Examples
See here.
### impl<'a> RoundingFrom<&'a Rational> for i32
#### fn rounding_from(value: &Rational, rm: RoundingMode) -> (i32, Ordering)
Converts a `Rational` to a signed integer, using a specified `RoundingMode`.
If the `Rational` is smaller than the minimum value of the signed type, then it will be rounded to the minimum value when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic.
If the `Rational` is larger than the maximum value of the signed type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than `T::MIN` and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`.
##### Examples
See here.
### impl<'a> RoundingFrom<&'a Rational> for i64
#### fn rounding_from(value: &Rational, rm: RoundingMode) -> (i64, Ordering)
Converts a `Rational` to a signed integer, using a specified `RoundingMode`.
If the `Rational` is smaller than the minimum value of the signed type, then it will be rounded to the minimum value when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic.
If the `Rational` is larger than the maximum value of the signed type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than `T::MIN` and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`.
##### Examples
See here.
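The examples for these signed conversions are only linked. As an unofficial sketch of the documented rounding and clamping behaviour (using the same `RoundingFrom` import as the `Integer` example earlier on this page):
```
// Unofficial sketch, not taken from the malachite documentation.
use malachite_base::num::conversion::traits::RoundingFrom;
use malachite_base::rounding_modes::RoundingMode;
use malachite_q::Rational;
use std::cmp::Ordering;

// 22/7 rounds down to 3 and up to 4.
assert_eq!(
    i32::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Floor),
    (3, Ordering::Less)
);
assert_eq!(
    i32::rounding_from(&Rational::from_signeds(22, 7), RoundingMode::Ceiling),
    (4, Ordering::Greater)
);
// A value below the type's minimum is clamped to it when rounding toward it is allowed.
assert_eq!(
    i8::rounding_from(&Rational::from(-1000), RoundingMode::Down),
    (-128, Ordering::Greater)
);
```
With `RoundingMode::Floor`, the same out-of-range value would panic instead, as described in the panic conditions above.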
### impl<'a> RoundingFrom<&'a Rational> for i8
#### fn rounding_from(value: &Rational, rm: RoundingMode) -> (i8, Ordering)
Converts a `Rational` to a signed integer, using a specified `RoundingMode`.
If the `Rational` is smaller than the minimum value of the signed type, then it will be rounded to the minimum value when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic.
If the `Rational` is larger than the maximum value of the signed type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than `T::MIN` and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`.
##### Examples
See here.
### impl<'a> RoundingFrom<&'a Rational> for isize
#### fn rounding_from(value: &Rational, rm: RoundingMode) -> (isize, Ordering)
Converts a `Rational` to a signed integer, using a specified `RoundingMode`.
If the `Rational` is smaller than the minimum value of the signed type, then it will be rounded to the minimum value when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic.
If the `Rational` is larger than the maximum value of the signed type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than `T::MIN` and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`.
##### Examples
See here.
### impl<'a> RoundingFrom<&'a Rational> for u128
#### fn rounding_from(value: &Rational, rm: RoundingMode) -> (u128, Ordering)
Converts a `Rational` to an unsigned integer, using a specified `RoundingMode`.
If the `Rational` is negative, then it will be rounded to zero when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic.
If the `Rational` is larger than the maximum value of the unsigned type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic.
##### Worst-case complexity
$T(n) = O(n \log n \log\log n)$, $M(n) = O(n \log n)$, where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`.
##### Panics
Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`.
##### Examples
See here.
### impl<'a> RoundingFrom<&'a Rational> for u16
#### fn rounding_from(value: &Rational, rm: RoundingMode) -> (u16, Ordering)
Converts a `Rational` to an unsigned integer, using a specified `RoundingMode`.
If the `Rational` is negative, then it will be rounded to zero when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic.
If the `Rational` is larger than the maximum value of the unsigned type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Rational> for u32 #### fn rounding_from(value: &Rational, rm: RoundingMode) -> (u32, Ordering) Converts a `Rational` to an unsigned integer, using a specified `RoundingMode`. If the `Rational` is negative, then it will be rounded to zero when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. If the `Rational` is larger than the maximum value of the unsigned type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Rational> for u64 #### fn rounding_from(value: &Rational, rm: RoundingMode) -> (u64, Ordering) Converts a `Rational` to an unsigned integer, using a specified `RoundingMode`. If the `Rational` is negative, then it will be rounded to zero when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. If the `Rational` is larger than the maximum value of the unsigned type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Rational> for u8 #### fn rounding_from(value: &Rational, rm: RoundingMode) -> (u8, Ordering) Converts a `Rational` to an unsigned integer, using a specified `RoundingMode`. If the `Rational` is negative, then it will be rounded to zero when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. If the `Rational` is larger than the maximum value of the unsigned type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. 
##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`. ##### Examples See here. ### impl<'a> RoundingFrom<&'a Rational> for usize #### fn rounding_from(value: &Rational, rm: RoundingMode) -> (usize, Ordering) Converts a `Rational` to an unsigned integer, using a specified `RoundingMode`. If the `Rational` is negative, then it will be rounded to zero when `rm` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. If the `Rational` is larger than the maximum value of the unsigned type, then it will be rounded to the maximum value when `rm` is `Floor`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`, or if the `Rational` is greater than `T::MAX` and `rm` is not `Down`, `Floor`, or `Nearest`. ##### Examples See here. ### impl RoundingFrom<Rational> for Integer #### fn rounding_from(x: Rational, rm: RoundingMode) -> (Integer, Ordering) Converts a `Rational` to an `Integer`, using a specified `RoundingMode` and taking the `Rational` by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`. 
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::integer::Integer; use malachite_q::Rational; assert_eq!( Integer::rounding_from(Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Integer::rounding_from(Rational::from(-123), RoundingMode::Exact).to_debug_string(), "(-123, Equal)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Floor) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Down) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Ceiling) .to_debug_string(), "(-3, Greater)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Up) .to_debug_string(), "(-4, Less)" ); assert_eq!( Integer::rounding_from(Rational::from_signeds(-22, 7), RoundingMode::Nearest) .to_debug_string(), "(-3, Greater)" ); ``` ### impl RoundingFrom<Rational> for Natural #### fn rounding_from(x: Rational, rm: RoundingMode) -> (Natural, Ordering) Converts a `Rational` to a `Natural`, using a specified `RoundingMode` and taking the `Rational` by value. An `Ordering` is also returned, indicating whether the returned value is less than, equal to, or greater than the original value. If the `Rational` is negative, then it will be rounded to zero when the `RoundingMode` is `Ceiling`, `Down`, or `Nearest`. Otherwise, this function will panic. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Panics Panics if the `Rational` is not an integer and `rm` is `Exact`, or if the `Rational` is less than zero and `rm` is not `Down`, `Ceiling`, or `Nearest`. 
##### Examples ``` use malachite_base::num::conversion::traits::RoundingFrom; use malachite_base::rounding_modes::RoundingMode; use malachite_base::strings::ToDebugString; use malachite_nz::natural::Natural; use malachite_q::Rational; assert_eq!( Natural::rounding_from(Rational::from(123), RoundingMode::Exact).to_debug_string(), "(123, Equal)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Floor) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Down) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Ceiling) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Up) .to_debug_string(), "(4, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from_signeds(22, 7), RoundingMode::Nearest) .to_debug_string(), "(3, Less)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Down).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Ceiling).to_debug_string(), "(0, Greater)" ); assert_eq!( Natural::rounding_from(Rational::from(-123), RoundingMode::Nearest).to_debug_string(), "(0, Greater)" ); ``` ### impl RoundingFrom<Rational> for f32 #### fn rounding_from(value: Rational, rm: RoundingMode) -> (f32, Ordering) Converts a `Rational` to a value of a primitive float according to a specified `RoundingMode`, taking the `Rational` by value. * If the rounding mode is `Floor`, the largest float less than or equal to the `Rational` is returned. If the `Rational` is greater than the maximum finite float, then the maximum finite float is returned. If it is smaller than the minimum finite float, then negative infinity is returned. If it is between zero and the minimum positive float, then positive zero is returned. * If the rounding mode is `Ceiling`, the smallest float greater than or equal to the `Rational` is returned. If the `Rational` is greater than the maximum finite float, then positive infinity is returned. If it is smaller than the minimum finite float, then the minimum finite float is returned. If it is between zero and the maximum negative float, then negative zero is returned. * If the rounding mode is `Down`, then the rounding proceeds as with `Floor` if the `Rational` is non-negative and as with `Ceiling` if the `Rational` is negative. If the `Rational` is between the maximum negative float and the minimum positive float, then positive zero is returned when the `Rational` is non-negative and negative zero otherwise. * If the rounding mode is `Up`, then the rounding proceeds as with `Ceiling` if the `Rational` is non-negative and as with `Floor` if the `Rational` is negative. Positive zero is only returned when the `Rational` is zero, and negative zero is never returned. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Rational` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Rational` is greater than the maximum finite float, then the maximum finite float is returned. If the `Rational` is closer to zero than to any float (or if there is a tie between zero and another float), then positive or negative zero is returned, depending on the `Rational`’s sign. 
##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl RoundingFrom<Rational> for f64 #### fn rounding_from(value: Rational, rm: RoundingMode) -> (f64, Ordering) Converts a `Rational` to a value of a primitive float according to a specified `RoundingMode`, taking the `Rational` by value. * If the rounding mode is `Floor`, the largest float less than or equal to the `Rational` is returned. If the `Rational` is greater than the maximum finite float, then the maximum finite float is returned. If it is smaller than the minimum finite float, then negative infinity is returned. If it is between zero and the minimum positive float, then positive zero is returned. * If the rounding mode is `Ceiling`, the smallest float greater than or equal to the `Rational` is returned. If the `Rational` is greater than the maximum finite float, then positive infinity is returned. If it is smaller than the minimum finite float, then the minimum finite float is returned. If it is between zero and the maximum negative float, then negative zero is returned. * If the rounding mode is `Down`, then the rounding proceeds as with `Floor` if the `Rational` is non-negative and as with `Ceiling` if the `Rational` is negative. If the `Rational` is between the maximum negative float and the minimum positive float, then positive zero is returned when the `Rational` is non-negative and negative zero otherwise. * If the rounding mode is `Up`, then the rounding proceeds as with `Ceiling` if the `Rational` is non-negative and as with `Floor` if the `Rational` is negative. Positive zero is only returned when the `Rational` is zero, and negative zero is never returned. * If the rounding mode is `Nearest`, then the nearest float is returned. If the `Rational` is exactly between two floats, the float with the zero least-significant bit in its representation is selected. If the `Rational` is greater than the maximum finite float, then the maximum finite float is returned. If the `Rational` is closer to zero than to any float (or if there is a tie between zero and another float), then positive or negative zero is returned, depending on the `Rational`’s sign. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.significant_bits()`. ##### Panics Panics if the rounding mode is `Exact` and `value` cannot be represented exactly. ##### Examples See here. ### impl SciMantissaAndExponent<f32, i64, Rational> for Rational #### fn sci_mantissa_and_exponent(self) -> (f32, i64) Returns a `Rational`’s scientific mantissa and exponent, taking the `Rational` by value. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. 
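A minimal sketch (not part of the upstream docs) of calling `sci_mantissa_and_exponent`; it assumes the `(f32, i64)` annotation is enough for the compiler to pick the `f32` implementation over the `f64` one, and the expected values are worked out by hand:

```
use malachite_base::num::conversion::traits::SciMantissaAndExponent;
use malachite_q::Rational;

let q = Rational::from_signeds(22, 7);
// Annotating the result selects the (f32, i64) implementation.
let (mantissa, exponent): (f32, i64) = q.sci_mantissa_and_exponent();
// 2^1 <= 22/7 < 2^2, so the exponent is 1 and the mantissa is (22/7)/2 ≈ 1.5714286.
assert_eq!(exponent, 1);
assert!(mantissa >= 1.0 && mantissa < 2.0);
```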
#### fn sci_exponent(self) -> i64 Returns a `Rational`’s scientific exponent, taking the `Rational` by value. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx \lfloor \log_2 x \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn from_sci_mantissa_and_exponent( sci_mantissa: f32, sci_exponent: i64 ) -> Option<Rational> Constructs a `Rational` from its scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. All finite floats can be represented using `Rational`s, so no rounding is needed. $$ f(x) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. #### fn sci_mantissa(self) -> M Extracts the scientific mantissa from a number.### impl<'a> SciMantissaAndExponent<f32, i64, Rational> for &'a Rational #### fn sci_mantissa_and_exponent(self) -> (f32, i64) Returns a `Rational`’s scientific mantissa and exponent, taking the `Rational` by reference. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn sci_exponent(self) -> i64 Returns a `Rational`’s scientific exponent, taking the `Rational` by reference. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx \lfloor \log_2 x \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. #### fn from_sci_mantissa_and_exponent( sci_mantissa: f32, sci_exponent: i64 ) -> Option<Rational> Constructs a `Rational` from its scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. All finite floats can be represented using `Rational`s, so no rounding is needed. $$ f(x) \approx 2^{e_s}m_s.
$$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. See here. #### fn sci_mantissa(self) -> M Extracts the scientific mantissa from a number.### impl SciMantissaAndExponent<f64, i64, Rational> for Rational #### fn sci_mantissa_and_exponent(self) -> (f64, i64) Returns a `Rational`’s scientific mantissa and exponent, taking the `Rational` by value. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn sci_exponent(self) -> i64 Returns a `Rational`’s scientific exponent, taking the `Rational` by value. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx \lfloor \log_2 x \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn from_sci_mantissa_and_exponent( sci_mantissa: f64, sci_exponent: i64 ) -> Option<Rational> Constructs a `Rational` from its scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. All finite floats can be represented using `Rational`s, so no rounding is needed. $$ f(x) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. #### fn sci_mantissa(self) -> M Extracts the scientific mantissa from a number.### impl<'a> SciMantissaAndExponent<f64, i64, Rational> for &'a Rational #### fn sci_mantissa_and_exponent(self) -> (f64, i64) Returns a `Rational`’s scientific mantissa and exponent, taking the `Rational` by reference. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx (\frac{x}{2^{\lfloor \log_2 x \rfloor}}, \lfloor \log_2 x \rfloor). $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples See here. #### fn sci_exponent(self) -> i64 Returns a `Rational`’s scientific exponent, taking the `Rational` by reference.
When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. We represent the rational mantissa as a float. The conversion might not be exact, so we round to the nearest float using the `Nearest` rounding mode. To use other rounding modes, use `sci_mantissa_and_exponent_round`. $$ f(x) \approx \lfloor \log_2 x \rfloor. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. #### fn from_sci_mantissa_and_exponent( sci_mantissa: f64, sci_exponent: i64 ) -> Option<Rational> Constructs a `Rational` from its scientific mantissa and exponent. When $x$ is positive, we can write $x = 2^{e_s}m_s$, where $e_s$ is an integer and $m_s$ is a rational number with $1 \leq m_s < 2$. Here, the rational mantissa is provided as a float. If the mantissa is outside the range $[1, 2)$, `None` is returned. All finite floats can be represented using `Rational`s, so no rounding is needed. $$ f(x) \approx 2^{e_s}m_s. $$ ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `sci_exponent`. See here. #### fn sci_mantissa(self) -> M Extracts the scientific mantissa from a number.### impl<'a> Shl<i128> for &'a Rational #### fn shl(self, bits: i128) -> Rational Left-shifts a `Rational` (multiplies or divides it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<i128> for Rational #### fn shl(self, bits: i128) -> Rational Left-shifts a `Rational` (multiplies it or divides it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<i16> for &'a Rational #### fn shl(self, bits: i16) -> Rational Left-shifts a `Rational` (multiplies or divides it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<i16> for Rational #### fn shl(self, bits: i16) -> Rational Left-shifts a `Rational` (multiplies it or divides it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<i32> for &'a Rational #### fn shl(self, bits: i32) -> Rational Left-shifts a `Rational` (multiplies or divides it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here.
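An illustrative sketch (mine, not taken from the upstream docs) of shifting through a reference by a signed amount; a positive amount multiplies by a power of 2 and a negative amount divides, and the original `Rational` is left untouched (expected strings worked out by hand):

```
use malachite_q::Rational;

let q = Rational::from_signeds(3, 4);
// A positive shift multiplies by 2^k; a negative shift divides by 2^|k|.
assert_eq!((&q << 3i32).to_string(), "6");
assert_eq!((&q << -3i32).to_string(), "3/32");
// Shifting a reference does not consume the original value.
assert_eq!(q.to_string(), "3/4");
```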
#### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<i32> for Rational #### fn shl(self, bits: i32) -> Rational Left-shifts a `Rational` (multiplies it or divides it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<i64> for &'a Rational #### fn shl(self, bits: i64) -> Rational Left-shifts a `Rational` (multiplies or divides it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<i64> for Rational #### fn shl(self, bits: i64) -> Rational Left-shifts a `Rational` (multiplies it or divides it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<i8> for &'a Rational #### fn shl(self, bits: i8) -> Rational Left-shifts a `Rational` (multiplies or divides it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<i8> for Rational #### fn shl(self, bits: i8) -> Rational Left-shifts a `Rational` (multiplies it or divides it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<isize> for &'a Rational #### fn shl(self, bits: isize) -> Rational Left-shifts a `Rational` (multiplies or divides it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<isize> for Rational #### fn shl(self, bits: isize) -> Rational Left-shifts a `Rational` (multiplies it or divides it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<u128> for &'a Rational #### fn shl(self, bits: u128) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k.
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<u128> for Rational #### fn shl(self, bits: u128) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<u16> for &'a Rational #### fn shl(self, bits: u16) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<u16> for Rational #### fn shl(self, bits: u16) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<u32> for &'a Rational #### fn shl(self, bits: u32) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<u32> for Rational #### fn shl(self, bits: u32) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<u64> for &'a Rational #### fn shl(self, bits: u64) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<u64> for Rational #### fn shl(self, bits: u64) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here.
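A small sketch (not from the original documentation) of an unsigned left shift taking the `Rational` by value; an unsigned amount always multiplies, and the operand is consumed (expected strings computed by hand):

```
use malachite_q::Rational;

// 7/2 * 2^5 = 112, and the sign is preserved.
assert_eq!((Rational::from_signeds(7, 2) << 5u64).to_string(), "112");
assert_eq!((Rational::from_signeds(-7, 2) << 5u64).to_string(), "-112");
```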
#### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<u8> for &'a Rational #### fn shl(self, bits: u8) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<u8> for Rational #### fn shl(self, bits: u8) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl<'a> Shl<usize> for &'a Rational #### fn shl(self, bits: usize) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by reference. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl Shl<usize> for Rational #### fn shl(self, bits: usize) -> Rational Left-shifts a `Rational` (multiplies it by a power of 2), taking it by value. $$ f(x, k) = x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. #### type Output = Rational The resulting type after applying the `<<` operator.### impl ShlAssign<i128> for Rational #### fn shl_assign(&mut self, bits: i128) Left-shifts a `Rational` (multiplies or divides it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. See here. ### impl ShlAssign<i16> for Rational #### fn shl_assign(&mut self, bits: i16) Left-shifts a `Rational` (multiplies or divides it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. See here. ### impl ShlAssign<i32> for Rational #### fn shl_assign(&mut self, bits: i32) Left-shifts a `Rational` (multiplies or divides it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. See here. ### impl ShlAssign<i64> for Rational #### fn shl_assign(&mut self, bits: i64) Left-shifts a `Rational` (multiplies or divides it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. See here. ### impl ShlAssign<i8> for Rational #### fn shl_assign(&mut self, bits: i8) Left-shifts a `Rational` (multiplies or divides it by a power of 2), in place. $$ x \gets x2^k.
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. See here. ### impl ShlAssign<isize> for Rational #### fn shl_assign(&mut self, bits: isize) Left-shifts a `Rational` (multiplies or divides it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. See here. ### impl ShlAssign<u128> for Rational #### fn shl_assign(&mut self, bits: u128) Left-shifts a `Rational` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u16> for Rational #### fn shl_assign(&mut self, bits: u16) Left-shifts a `Rational` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u32> for Rational #### fn shl_assign(&mut self, bits: u32) Left-shifts a `Rational` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u64> for Rational #### fn shl_assign(&mut self, bits: u64) Left-shifts a `Rational` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<u8> for Rational #### fn shl_assign(&mut self, bits: u8) Left-shifts a `Rational` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl ShlAssign<usize> for Rational #### fn shl_assign(&mut self, bits: usize) Left-shifts a `Rational` (multiplies it by a power of 2), in place. $$ x \gets x2^k. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `bits`. ##### Examples See here. ### impl<'a> Shr<i128> for &'a Rational #### fn shr(self, bits: i128) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<i128> for Rational #### fn shr(self, bits: i128) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. 
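A hedged sketch (not from the upstream docs) of a signed right shift, following the formula $f(x, k) = x/2^k$ above: a positive amount divides by a power of 2, and a negative amount multiplies instead (expected strings computed by hand):

```
use malachite_q::Rational;

// 22/7 / 2^2 = 11/14; a negative shift amount multiplies: 22/7 * 2^2 = 88/7.
assert_eq!((Rational::from_signeds(22, 7) >> 2i128).to_string(), "11/14");
assert_eq!((Rational::from_signeds(22, 7) >> -2i128).to_string(), "88/7");
```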
#### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<i16> for &'a Rational #### fn shr(self, bits: i16) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<i16> for Rational #### fn shr(self, bits: i16) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<i32> for &'a Rational #### fn shr(self, bits: i32) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<i32> for Rational #### fn shr(self, bits: i32) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<i64> for &'a Rational #### fn shr(self, bits: i64) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<i64> for Rational #### fn shr(self, bits: i64) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<i8> for &'a Rational #### fn shr(self, bits: i8) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<i8> for Rational #### fn shr(self, bits: i8) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. 
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<isize> for &'a Rational #### fn shr(self, bits: isize) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<isize> for Rational #### fn shr(self, bits: isize) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<u128> for &'a Rational #### fn shr(self, bits: u128) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<u128> for Rational #### fn shr(self, bits: u128) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<u16> for &'a Rational #### fn shr(self, bits: u16) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<u16> for Rational #### fn shr(self, bits: u16) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<u32> for &'a Rational #### fn shr(self, bits: u32) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. 
#### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<u32> for Rational #### fn shr(self, bits: u32) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<u64> for &'a Rational #### fn shr(self, bits: u64) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<u64> for Rational #### fn shr(self, bits: u64) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<u8> for &'a Rational #### fn shr(self, bits: u8) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<u8> for Rational #### fn shr(self, bits: u8) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl<'a> Shr<usize> for &'a Rational #### fn shr(self, bits: usize) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by reference. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl Shr<usize> for Rational #### fn shr(self, bits: usize) -> Rational Right-shifts a `Rational` (divides it by a power of 2), taking it by value. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. #### type Output = Rational The resulting type after applying the `>>` operator.### impl ShrAssign<i128> for Rational #### fn shr_assign(&mut self, bits: i128) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}.
$$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<i16> for Rational #### fn shr_assign(&mut self, bits: i16) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<i32> for Rational #### fn shr_assign(&mut self, bits: i32) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<i64> for Rational #### fn shr_assign(&mut self, bits: i64) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<i8> for Rational #### fn shr_assign(&mut self, bits: i8) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<isize> for Rational #### fn shr_assign(&mut self, bits: isize) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<u128> for Rational #### fn shr_assign(&mut self, bits: u128) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<u16> for Rational #### fn shr_assign(&mut self, bits: u16) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<u32> for Rational #### fn shr_assign(&mut self, bits: u32) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<u64> for Rational #### fn shr_assign(&mut self, bits: u64) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here.
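The assigning forms modify the `Rational` in place; a brief sketch (mine, not from the original docs, with the expected value computed by hand):

```
use malachite_q::Rational;

let mut x = Rational::from_signeds(22, 7);
// Divide by 2^3 in place: 22/7 / 8 = 11/28.
x >>= 3u64;
assert_eq!(x.to_string(), "11/28");
```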
### impl ShrAssign<u8> for Rational #### fn shr_assign(&mut self, bits: u8) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl ShrAssign<usize> for Rational #### fn shr_assign(&mut self, bits: usize) Right-shifts a `Rational` (divides it by a power of 2), in place. $$ f(x, k) = \frac{x}{2^k}. $$ ##### Worst-case complexity $T(n, m) = O(n + m)$ $M(n, m) = O(n + m)$ where $T$ is time, $M$ is additional memory, $n$ is `self.significant_bits()`, and $m$ is `max(bits, 0)`. See here. ### impl Sign for Rational #### fn sign(&self) -> Ordering Compares a `Rational` to zero. Returns `Greater`, `Equal`, or `Less`, depending on whether the `Rational` is positive, zero, or negative, respectively. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::arithmetic::traits::Sign; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; use std::cmp::Ordering; assert_eq!(Rational::ZERO.sign(), Ordering::Equal); assert_eq!(Rational::from_signeds(22, 7).sign(), Ordering::Greater); assert_eq!(Rational::from_signeds(-22, 7).sign(), Ordering::Less); ``` ### impl<'a> SignificantBits for &'a Rational #### fn significant_bits(self) -> u64 Returns the sum of the bits needed to represent the numerator and denominator. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_base::num::logic::traits::SignificantBits; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; use std::str::FromStr; assert_eq!(Rational::ZERO.significant_bits(), 1); assert_eq!(Rational::from_str("-100/101").unwrap().significant_bits(), 14); ``` ### impl SimplestRationalInInterval for Rational #### fn simplest_rational_in_open_interval(x: &Rational, y: &Rational) -> Rational Finds the simplest `Rational` contained in an open interval. Let $f(x, y) = p/q$, with $p$ and $q$ relatively prime. Then the following properties hold: * $x < p/q < y$ * If $x < m/n < y$, then $n \geq q$ * If $x < m/q < y$, then $|p| \leq |m|$ ##### Worst-case complexity $T(n) = O(n^2 \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(x.significant_bits(), y.significant_bits())`. ##### Panics Panics if $x \geq y$. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_q::arithmetic::traits::SimplestRationalInInterval; use malachite_q::Rational; assert_eq!( Rational::simplest_rational_in_open_interval( &Rational::from_signeds(1, 3), &Rational::from_signeds(1, 2) ), Rational::from_signeds(2, 5) ); assert_eq!( Rational::simplest_rational_in_open_interval( &Rational::from_signeds(-1, 3), &Rational::from_signeds(1, 3) ), Rational::ZERO ); assert_eq!( Rational::simplest_rational_in_open_interval( &Rational::from_signeds(314, 100), &Rational::from_signeds(315, 100) ), Rational::from_signeds(22, 7) ); ``` #### fn simplest_rational_in_closed_interval(x: &Rational, y: &Rational) -> Rational Finds the simplest `Rational` contained in a closed interval. Let $f(x, y) = p/q$, with $p$ and $q$ relatively prime. 
Then the following properties hold: * $x \leq p/q \leq y$ * If $x \leq m/n \leq y$, then $n \geq q$ * If $x \leq m/q \leq y$, then $|p| \leq |m|$ ##### Worst-case complexity $T(n) = O(n^2 \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(x.significant_bits(), y.significant_bits())`. ##### Panics Panics if $x > y$. ##### Examples ``` use malachite_base::num::basic::traits::Zero; use malachite_q::arithmetic::traits::SimplestRationalInInterval; use malachite_q::Rational; assert_eq!( Rational::simplest_rational_in_closed_interval( &Rational::from_signeds(1, 3), &Rational::from_signeds(1, 2) ), Rational::from_signeds(1, 2) ); assert_eq!( Rational::simplest_rational_in_closed_interval( &Rational::from_signeds(-1, 3), &Rational::from_signeds(1, 3) ), Rational::ZERO ); assert_eq!( Rational::simplest_rational_in_closed_interval( &Rational::from_signeds(314, 100), &Rational::from_signeds(315, 100) ), Rational::from_signeds(22, 7) ); ``` ### impl<'a> Square for &'a Rational #### fn square(self) -> Rational Squares a `Rational`, taking it by reference. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!((&Rational::ZERO).square(), 0); assert_eq!((&Rational::from_signeds(22, 7)).square().to_string(), "484/49"); assert_eq!((&Rational::from_signeds(-22, 7)).square().to_string(), "484/49"); ``` #### type Output = Rational ### impl Square for Rational #### fn square(self) -> Rational Squares a `Rational`, taking it by value. $$ f(x) = x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::Square; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; assert_eq!(Rational::ZERO.square(), 0); assert_eq!(Rational::from_signeds(22, 7).square().to_string(), "484/49"); assert_eq!(Rational::from_signeds(-22, 7).square().to_string(), "484/49"); ``` #### type Output = Rational ### impl SquareAssign for Rational #### fn square_assign(&mut self) Squares a `Rational` in place. $$ x \gets x^2. $$ ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `self.significant_bits()`. ##### Examples ``` use malachite_base::num::arithmetic::traits::SquareAssign; use malachite_base::num::basic::traits::Zero; use malachite_q::Rational; let mut x = Rational::ZERO; x.square_assign(); assert_eq!(x, 0); let mut x = Rational::from_signeds(22, 7); x.square_assign(); assert_eq!(x.to_string(), "484/49"); let mut x = Rational::from_signeds(-22, 7); x.square_assign(); assert_eq!(x.to_string(), "484/49"); ``` ### impl<'a, 'b> Sub<&'a Rational> for &'b Rational #### fn sub(self, other: &'a Rational) -> Rational Subtracts a `Rational` by another `Rational`, taking both by reference. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. 
##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; assert_eq!(&Rational::ONE_HALF - &Rational::ONE_HALF, 0); assert_eq!( (&Rational::from_signeds(22, 7) - &Rational::from_signeds(99, 100)).to_string(), "1507/700" ); ``` #### type Output = Rational The resulting type after applying the `-` operator.### impl<'a> Sub<&'a Rational> for Rational #### fn sub(self, other: &'a Rational) -> Rational Subtracts a `Rational` by another `Rational`, taking the first by value and the second by reference. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; assert_eq!(Rational::ONE_HALF - &Rational::ONE_HALF, 0); assert_eq!( (Rational::from_signeds(22, 7) - &Rational::from_signeds(99, 100)).to_string(), "1507/700" ); ``` #### type Output = Rational The resulting type after applying the `-` operator.### impl<'a> Sub<Rational> for &'a Rational #### fn sub(self, other: Rational) -> Rational Subtracts a `Rational` by another `Rational`, taking the first by reference and the second by value. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; assert_eq!(&Rational::ONE_HALF - Rational::ONE_HALF, 0); assert_eq!( (&Rational::from_signeds(22, 7) - Rational::from_signeds(99, 100)).to_string(), "1507/700" ); ``` #### type Output = Rational The resulting type after applying the `-` operator.### impl Sub<Rational> for Rational #### fn sub(self, other: Rational) -> Rational Subtracts a `Rational` by another `Rational`, taking both by value. $$ f(x, y) = x - y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; assert_eq!(Rational::ONE_HALF - Rational::ONE_HALF, 0); assert_eq!( (Rational::from_signeds(22, 7) - Rational::from_signeds(99, 100)).to_string(), "1507/700" ); ``` #### type Output = Rational The resulting type after applying the `-` operator.### impl<'a> SubAssign<&'a Rational> for Rational #### fn sub_assign(&mut self, other: &'a Rational) Subtracts a `Rational` by another `Rational` in place, taking the `Rational` on the right-hand side by reference. $$ x \gets x - y. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; let mut x = Rational::ONE_HALF; x -= &Rational::ONE_HALF; assert_eq!(x, 0); let mut x = Rational::from_signeds(22, 7); x -= &Rational::from_signeds(99, 100); assert_eq!(x.to_string(), "1507/700"); ``` ### impl SubAssign<Rational> for Rational #### fn sub_assign(&mut self, other: Rational) Subtracts a `Rational` by another `Rational` in place, taking the `Rational` on the right-hand side by value. $$ x \gets x - y. 
$$ ##### Worst-case complexity $T(n) = O(n (\log n)^2 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), other.significant_bits())`. ##### Examples ``` use malachite_base::num::basic::traits::OneHalf; use malachite_q::Rational; let mut x = Rational::ONE_HALF; x -= Rational::ONE_HALF; assert_eq!(x, 0); let mut x = Rational::from_signeds(22, 7); x -= Rational::from_signeds(99, 100); assert_eq!(x.to_string(), "1507/700"); ``` ### impl<'a> Sum<&'a Rational> for Rational #### fn sum<I>(xs: I) -> Rationalwhere I: Iterator<Item = &'a Rational>, Adds up all the `Rational`s in an iterator of `Rational` references. $$ f((x_i)_ {i=0}^{n-1}) = \sum_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^3 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Rational::sum(xs.map(Rational::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_q::Rational; use std::iter::Sum; assert_eq!( Rational::sum( vec_from_str::<Rational>("[0, 1, 2/3, 3/4, 4/5, 5/6, 6/7, 7/8, 8/9, 9/10]") .unwrap().iter() ).to_string(), "19079/2520" ); ``` ### impl Sum<Rational> for Rational #### fn sum<I>(xs: I) -> Rationalwhere I: Iterator<Item = Rational>, Adds up all the `Rational`s in an iterator. $$ f((x_i)_ {i=0}^{n-1}) = \sum_ {i=0}^{n-1} x_i. $$ ##### Worst-case complexity $T(n) = O(n (\log n)^3 \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `Rational::sum(xs.map(Rational::significant_bits))`. ##### Examples ``` use malachite_base::vecs::vec_from_str; use malachite_q::Rational; use std::iter::Sum; assert_eq!( Rational::sum(vec_from_str::<Rational>("[2, -3, 5, 7]").unwrap().into_iter()), 11 ); ``` ### impl ToSci for Rational #### fn fmt_sci_valid(&self, options: ToSciOptions) -> bool Determines whether a `Rational` can be converted to a string using `to_sci` and a particular set of options. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), s)`, where `s` depends on the size type specified in `options`. * If `options` has `scale` specified, then `s` is `options.scale`. * If `options` has `precision` specified, then `s` is `options.precision`. * If `options` has `size_complete` specified, then `s` is `self.denominator` (not the log of the denominator!). This reflects the fact that setting `size_complete` might result in a very long string. ##### Examples ``` use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_q::Rational; let mut options = ToSciOptions::default(); assert!(Rational::from(123u8).fmt_sci_valid(options)); assert!(Rational::from(u128::MAX).fmt_sci_valid(options)); // u128::MAX has more than 16 significant digits options.set_rounding_mode(RoundingMode::Exact); assert!(!Rational::from(u128::MAX).fmt_sci_valid(options)); options.set_precision(50); assert!(Rational::from(u128::MAX).fmt_sci_valid(options)); let mut options = ToSciOptions::default(); options.set_size_complete(); // 1/3 is non-terminating in base 10... 
assert!(!Rational::from_signeds(1, 3).fmt_sci_valid(options)); options.set_size_complete(); // ...but is terminating in base 36 options.set_base(36); assert!(Rational::from_signeds(1, 3).fmt_sci_valid(options)); ``` #### fn fmt_sci( &self, f: &mut Formatter<'_>, options: ToSciOptions ) -> Result<(), ErrorConverts a `Rational`to a string using a specified base, possibly formatting the number using scientific notation. See `ToSciOptions` for details on the available options. ##### Worst-case complexity $T(n) = O(n \log n \log\log n)$ $M(n) = O(n \log n)$ where $T$ is time, $M$ is additional memory, and $n$ is `max(self.significant_bits(), s)`, where `s` depends on the size type specified in `options`. * If `options` has `scale` specified, then `s` is `options.scale`. * If `options` has `precision` specified, then `s` is `options.precision`. * If `options` has `size_complete` specified, then `s` is `self.denominator` (not the log of the denominator!). This reflects the fact that setting `size_complete` might result in a very long string. ##### Panics Panics if `options.rounding_mode` is `Exact`, but the size options are such that the input must be rounded. ##### Examples ``` use malachite_base::num::arithmetic::traits::PowerOf2; use malachite_base::num::conversion::string::options::ToSciOptions; use malachite_base::num::conversion::traits::ToSci; use malachite_base::rounding_modes::RoundingMode; use malachite_q::Rational; let q = Rational::from_signeds(22, 7); let mut options = ToSciOptions::default(); assert_eq!(q.to_sci_with_options(options).to_string(), "3.142857142857143"); options.set_precision(3); assert_eq!(q.to_sci_with_options(options).to_string(), "3.14"); options.set_rounding_mode(RoundingMode::Ceiling); assert_eq!(q.to_sci_with_options(options).to_string(), "3.15"); options = ToSciOptions::default(); options.set_base(20); assert_eq!(q.to_sci_with_options(options).to_string(), "3.2h2h2h2h2h2h2h3"); options.set_uppercase(); assert_eq!(q.to_sci_with_options(options).to_string(), "3.2H2H2H2H2H2H2H3"); options.set_base(2); options.set_rounding_mode(RoundingMode::Floor); options.set_precision(19); assert_eq!(q.to_sci_with_options(options).to_string(), "11.001001001001001"); options.set_include_trailing_zeros(true); assert_eq!(q.to_sci_with_options(options).to_string(), "11.00100100100100100"); let q = Rational::from_unsigneds(936851431250u64, 1397u64); let mut options = ToSciOptions::default(); options.set_precision(6); assert_eq!(q.to_sci_with_options(options).to_string(), "6.70617e8"); options.set_e_uppercase(); assert_eq!(q.to_sci_with_options(options).to_string(), "6.70617E8"); options.set_force_exponent_plus_sign(true); assert_eq!(q.to_sci_with_options(options).to_string(), "6.70617E+8"); let q = Rational::from_signeds(123i64, 45678909876i64); let mut options = ToSciOptions::default(); assert_eq!(q.to_sci_with_options(options).to_string(), "2.692708743135418e-9"); options.set_neg_exp_threshold(-10); assert_eq!(q.to_sci_with_options(options).to_string(), "0.000000002692708743135418"); let q = Rational::power_of_2(-30i64); let mut options = ToSciOptions::default(); assert_eq!(q.to_sci_with_options(options).to_string(), "9.313225746154785e-10"); options.set_size_complete(); assert_eq!(q.to_sci_with_options(options).to_string(), "9.31322574615478515625e-10"); ``` #### fn to_sci_with_options(&self, options: ToSciOptions) -> SciWrapper<'_, SelfConverts a number to a string, possibly in scientific notation.#### fn to_sci(&self) -> SciWrapper<'_, SelfConverts a number to a string, possibly 
in scientific notation, using the default `ToSciOptions`.### impl<'a> TryFrom<&'a Rational> for Integer #### fn try_from( x: &Rational ) -> Result<Integer, <Integer as TryFrom<&'a Rational>>::ErrorConverts a `Rational` to an `Integer`, taking the `Rational` by reference. If the `Rational` is not an integer, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::conversion::integer_from_rational::IntegerFromRationalError; use malachite_q::Rational; assert_eq!(Integer::try_from(&Rational::from(123)).unwrap(), 123); assert_eq!(Integer::try_from(&Rational::from(-123)).unwrap(), -123); assert_eq!( Integer::try_from(&Rational::from_signeds(22, 7)), Err(IntegerFromRationalError) ); ``` #### type Error = IntegerFromRationalError The type returned in the event of a conversion error.### impl<'a> TryFrom<&'a Rational> for Natural #### fn try_from( x: &Rational ) -> Result<Natural, <Natural as TryFrom<&'a Rational>>::ErrorConverts a `Rational` to a `Natural`, taking the `Rational` by reference. If the `Rational` is negative or not an integer, an error is returned. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `x.significant_bits()`. ##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::conversion::natural_from_rational::NaturalFromRationalError; use malachite_q::Rational; assert_eq!(Natural::try_from(&Rational::from(123)).unwrap(), 123); assert_eq!(Natural::try_from(&Rational::from(-123)), Err(NaturalFromRationalError)); assert_eq!( Natural::try_from(&Rational::from_signeds(22, 7)), Err(NaturalFromRationalError) ); ``` #### type Error = NaturalFromRationalError The type returned in the event of a conversion error.### impl TryFrom<Rational> for Integer #### fn try_from( x: Rational ) -> Result<Integer, <Integer as TryFrom<Rational>>::ErrorConverts a `Rational` to an `Integer`, taking the `Rational` by value. If the `Rational` is not an integer, an error is returned. ##### Worst-case complexity Constant time and additional memory. ##### Examples ``` use malachite_nz::integer::Integer; use malachite_q::conversion::integer_from_rational::IntegerFromRationalError; use malachite_q::Rational; assert_eq!(Integer::try_from(Rational::from(123)).unwrap(), 123); assert_eq!(Integer::try_from(Rational::from(-123)).unwrap(), -123); assert_eq!( Integer::try_from(Rational::from_signeds(22, 7)), Err(IntegerFromRationalError) ); ``` #### type Error = IntegerFromRationalError The type returned in the event of a conversion error.### impl TryFrom<Rational> for Natural #### fn try_from( x: Rational ) -> Result<Natural, <Natural as TryFrom<Rational>>::ErrorConverts a `Rational` to a `Natural`, taking the `Rational` by value. If the `Rational` is negative or not an integer, an error is returned. ##### Worst-case complexity Constant time and additional memory. 
##### Examples ``` use malachite_nz::natural::Natural; use malachite_q::conversion::natural_from_rational::NaturalFromRationalError; use malachite_q::Rational; assert_eq!(Natural::try_from(Rational::from(123)).unwrap(), 123); assert_eq!(Natural::try_from(Rational::from(-123)), Err(NaturalFromRationalError)); assert_eq!( Natural::try_from(Rational::from_signeds(22, 7)), Err(NaturalFromRationalError) ); ``` #### type Error = NaturalFromRationalError The type returned in the event of a conversion error.### impl TryFrom<f32> for Rational #### fn try_from(value: f32) -> Result<Rational, <Rational as TryFrom<f32>>::Error> Converts a primitive float to the equivalent `Rational`. If the floating point value is `NaN` or infinite, an error is returned. This conversion is literal. For example, `Rational::try_from(0.1f32)` evaluates to Ok($13421773/134217728$). If you want $1/10$ instead, use `try_from_float_simplest`; that function returns the simplest `Rational` that rounds to the specified float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent().abs()`. ##### Examples See here. #### type Error = RationalFromPrimitiveFloatError The type returned in the event of a conversion error.### impl TryFrom<f64> for Rational #### fn try_from(value: f64) -> Result<Rational, <Rational as TryFrom<f64>>::Error> Converts a primitive float to the equivalent `Rational`. If the floating point value is `NaN` or infinite, an error is returned. This conversion is literal. For example, `Rational::try_from(0.1f32)` evaluates to Ok($13421773/134217728$). If you want $1/10$ instead, use `try_from_float_simplest`; that function returns the simplest `Rational` that rounds to the specified float. ##### Worst-case complexity $T(n) = O(n)$ $M(n) = O(n)$ where $T$ is time, $M$ is additional memory, and $n$ is `value.sci_exponent().abs()`. ##### Examples See here. #### type Error = RationalFromPrimitiveFloatError The type returned in the event of a conversion error.### impl Two for Rational The constant 2. #### const TWO: Rational = _ ### impl Zero for Rational The constant 0. #### const ZERO: Rational = _ ### impl Eq for Rational ### impl StructuralEq for Rational ### impl StructuralPartialEq for Rational Auto Trait Implementations --- ### impl RefUnwindSafe for Rational ### impl Send for Rational ### impl Sync for Rational ### impl Unpin for Rational ### impl UnwindSafe for Rational Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. U: TryFrom<T>, #### fn exact_from(value: T) -> U ### impl<T, U> ExactInto<U> for T where U: ExactFrom<T>, #### fn exact_into(self) -> U ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> OverflowingInto<U> for Twhere U: OverflowingFrom<T>, #### fn overflowing_into(self) -> (U, bool) ### impl<T, U> RoundingInto<U> for Twhere U: RoundingFrom<T>, #### fn rounding_into(self, rm: RoundingMode) -> (U, Ordering) ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> SaturatingInto<U> for Twhere U: SaturatingFrom<T>, #### fn saturating_into(self) -> U ### impl<T> ToDebugString for Twhere T: Debug, #### fn to_debug_string(&self) -> String Returns the `String` produced by `T`s `Debug` implementation. ##### Examples ``` use malachite_base::strings::ToDebugString; assert_eq!([1, 2, 3].to_debug_string(), "[1, 2, 3]"); assert_eq!( [vec![2, 3], vec![], vec![4]].to_debug_string(), "[[2, 3], [], [4]]" ); assert_eq!(Some(5).to_debug_string(), "Some(5)"); ``` ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T, U> WrappingInto<U> for Twhere U: WrappingFrom<T>, #### fn wrapping_into(self) -> U
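As a quick illustration of the trait implementations listed above (a minimal sketch, not taken from the malachite documentation; the concrete series and values are chosen only for illustration), the following exercises the by-value `Sub` and `Sum` impls with a telescoping series, and the literal nature of the float conversions:

```rust
use malachite_q::Rational;

fn main() {
    // The telescoping series 1/k - 1/(k + 1) exercises the by-value `Sub` impl
    // and the `Sum<Rational>` impl: the partial sum for k = 1..=9 collapses to
    // 1 - 1/10 = 9/10.
    let total: Rational = (1i32..=9)
        .map(|k| Rational::from_signeds(1, k) - Rational::from_signeds(1, k + 1))
        .sum();
    assert_eq!(total, Rational::from_signeds(9, 10));

    // `TryFrom` on a primitive float is literal: it recovers the float's exact
    // binary value, which for 0.1f64 is not the fraction 1/10.
    let literal = Rational::try_from(0.1f64).unwrap();
    assert_ne!(literal, Rational::from_signeds(1, 10));

    // NaN and infinities have no rational equivalent, so conversion fails.
    assert!(Rational::try_from(f64::NAN).is_err());
}
```

As the documentation above notes, `try_from_float_simplest` is the conversion to reach for when the decimal fraction ($1/10$) rather than the exact binary value is wanted.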
github.com/vmihailenco/taskq/memqueue/v4
go
Go
Documentation --- ### Index * func NewFactory() taskq.Factory * type Queue * + func NewQueue(opt *taskq.QueueConfig) *Queue * + func (q *Queue) AddJob(ctx context.Context, msg *taskq.Job) error + func (q *Queue) Close() error + func (q *Queue) CloseTimeout(timeout time.Duration) error + func (q *Queue) Consumer() taskq.QueueConsumer + func (q *Queue) Delete(ctx context.Context, msg *taskq.Job) error + func (q *Queue) DeleteBatch(ctx context.Context, msgs []*taskq.Job) error + func (q *Queue) Len(ctx context.Context) (int, error) + func (q *Queue) Name() string + func (q *Queue) Options() *taskq.QueueConfig + func (q *Queue) Purge(ctx context.Context) error + func (q *Queue) Release(ctx context.Context, msg *taskq.Job) error + func (q *Queue) ReserveN(ctx context.Context, _ int, _ time.Duration) ([]taskq.Job, error) + func (q *Queue) SetNoDelay(noDelay bool) + func (q *Queue) SetSync(sync bool) + func (q *Queue) String() string + func (q *Queue) WaitTimeout(timeout time.Duration) error ### Constants This section is empty. ### Variables This section is empty. ### Functions #### func [NewFactory](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/factory.go#L16) ``` func NewFactory() taskq.Factory ``` ### Types #### type [Queue](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L74) ``` type Queue struct { // contains filtered or unexported fields } ``` #### func [NewQueue](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L90) ``` func NewQueue(opt *taskq.QueueConfig) *Queue ``` #### func (*Queue) [AddJob](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L172) ``` func (q *Queue) AddJob(ctx context.Context, msg *taskq.Job) error ``` AddJob adds a message to the queue. #### func (*Queue) [Close](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L130) ``` func (q *Queue) Close() error ``` Close is like CloseTimeout with a 30-second timeout. #### func (*Queue) [CloseTimeout](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L135) ``` func (q *Queue) CloseTimeout(timeout time.Duration) error ``` CloseTimeout closes the queue, waiting for pending messages to be processed.
#### func (*Queue) [Consumer](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L117) ``` func (q *Queue) Consumer() taskq.QueueConsumer ``` #### func (*Queue) [Delete](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L223) ``` func (q *Queue) Delete(ctx context.Context, msg *taskq.Job) error ``` #### func (*Queue) [DeleteBatch](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L229) ``` func (q *Queue) DeleteBatch(ctx context.Context, msgs []*taskq.Job) error ``` #### func (*Queue) [Len](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L167) ``` func (q *Queue) Len(ctx context.Context) (int, error) ``` #### func (*Queue) [Name](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L105) ``` func (q *Queue) Name() string ``` #### func (*Queue) [Options](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L113) ``` func (q *Queue) Options() *taskq.QueueConfig ``` #### func (*Queue) [Purge](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L241) ``` func (q *Queue) Purge(ctx context.Context) error ``` #### func (*Queue) [Release](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L216) ``` func (q *Queue) Release(ctx context.Context, msg *taskq.Job) error ``` #### func (*Queue) [ReserveN](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L212) ``` func (q *Queue) ReserveN(ctx context.Context, _ int, _ time.Duration) ([]taskq.Job, error) ``` #### func (*Queue) [SetNoDelay](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L125) ``` func (q *Queue) SetNoDelay(noDelay bool) ``` #### func (*Queue) [SetSync](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L121) ``` func (q *Queue) SetSync(sync bool) ``` #### func (*Queue) [String](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L109) ``` func (q *Queue) String() string ``` #### func (*Queue) [WaitTimeout](https://github.com/vmihailenco/taskq/blob/memqueue/v4.0.0-beta.4/memqueue/queue.go#L151) ``` func (q *Queue) WaitTimeout(timeout time.Duration) error ```
minitrace
rust
Rust
Crate minitrace === `minitrace` is a high-performance, ergonomic, library-level timeline tracing library for Rust. Unlike most tracing libraries which are primarily designed for instrumenting executables, `minitrace` also accommodates the need for library instrumentation. It stands out due to its extreme lightweight and fast performance compared to other tracing libraries. Moreover, it has zero overhead when not enabled in the executable, making it a worry-free choice for libraries concerned about unnecessary performance loss. Getting Started --- ### Libraries Libraries should include `minitrace` as a dependency without enabling any extra features. ``` [dependencies] minitrace = "0.5" ``` Libraries can attach their spans to the caller’s span (if available) via the API boundary. ``` use minitrace::prelude::*; struct Connection { // ... } impl Connection { #[trace] pub fn query(sql: &str) -> Result<QueryResult, Error> { // ... } } ``` Libraries can also create a new trace individually to record their work. ``` use minitrace::prelude::*; pub fn send_request(req: HttpRequest) -> Result<(), Error> { let root = Span::root("send_request", SpanContext::random()); let _guard = root.set_local_parent(); // ... } ``` ### Executables Executables should include `minitrace` as a dependency with the `enable` feature set. To disable `minitrace` statically, simply don’t set the `enable` feature. ``` [dependencies] minitrace = { version = "0.5", features = ["enable"] } ``` Executables should initialize a reporter implementation early in the program’s runtime. Span records generated before the implementation is initialized will be ignored. Before terminating, the reporter should be flushed to ensure all span records are reported. ``` use minitrace::collector::Config; use minitrace::collector::ConsoleReporter; fn main() { minitrace::set_reporter(ConsoleReporter, Config::default()); // ... minitrace::flush(); } ``` Key Concepts --- `minitrace` operates through three types: `Span`, `LocalSpan`, and `Event`, each representing a different type of tracing record. The macro `trace` is available to manage these types automatically. For `Future` instrumentation, necessary utilities are provided by `FutureExt`. ### Span A `Span` represents an individual unit of work. It contains: * A name * A start timestamp and duration * A set of key-value properties * A reference to a parent `Span` A new `Span` can be started through `Span::root()`, requiring the trace id and the parent span id from a remote source. If there’s no remote parent span, the parent span id is typically set to its default value of zero. `Span::enter_with_parent()` starts a child span given a parent span. `Span` is thread-safe and can be sent across threads. ``` use minitrace::collector::Config; use minitrace::collector::ConsoleReporter; use minitrace::prelude::*; minitrace::set_reporter(ConsoleReporter, Config::default()); { let root_span = Span::root("root", SpanContext::random()); { let child_span = Span::enter_with_parent("a child span", &root_span); // ... // child_span ends here. } // root_span ends here. } minitrace::flush(); ``` ### Local Span A `Span` can be efficiently replaced with a `LocalSpan`, reducing overhead significantly, provided it is not intended for sending to other threads. Before starting a `LocalSpan`, a scope of parent span should be set using `Span::set_local_parent()`. Use `LocalSpan::enter_with_local_parent()` to start a `LocalSpan`, which then becomes the new local parent. 
If no local parent is set, `enter_with_local_parent()` will do nothing. ``` use minitrace::collector::Config; use minitrace::collector::ConsoleReporter; use minitrace::prelude::*; minitrace::set_reporter(ConsoleReporter, Config::default()); { let root = Span::root("root", SpanContext::random()); { let _guard = root.set_local_parent(); // The parent of this span is `root`. let _span1 = LocalSpan::enter_with_local_parent("a child span"); foo(); } } fn foo() { // The parent of this span is `span1`. let _span2 = LocalSpan::enter_with_local_parent("a child span of child span"); } minitrace::flush(); ``` ### Event `Event` represents a single point in time where something occurred during the execution of a program. An `Event` can be seen as a log record attached to a span. ``` use minitrace::collector::Config; use minitrace::collector::ConsoleReporter; use minitrace::prelude::*; minitrace::set_reporter(ConsoleReporter, Config::default()); { let root = Span::root("root", SpanContext::random()); Event::add_to_parent("event in root", &root, || []); { let _guard = root.set_local_parent(); let _span1 = LocalSpan::enter_with_local_parent("a child span"); Event::add_to_local_parent("event in span1", || [("key".into(), "value".into())]); } } minitrace::flush(); ``` ### Macro The attribute macro `trace` helps to reduce boilerplate. However, a function annotated with `trace` always requires a local parent in the context; otherwise, no span will be recorded. ``` use futures::executor::block_on; use minitrace::collector::Config; use minitrace::collector::ConsoleReporter; use minitrace::prelude::*; #[trace] fn do_something(i: u64) { std::thread::sleep(std::time::Duration::from_millis(i)); } #[trace] async fn do_something_async(i: u64) { futures_timer::Delay::new(std::time::Duration::from_millis(i)).await; } minitrace::set_reporter(ConsoleReporter, Config::default()); { let root = Span::root("root", SpanContext::random()); let _guard = root.set_local_parent(); do_something(100); block_on( async { do_something_async(100).await; } .in_span(Span::enter_with_local_parent("async_job")), ); } minitrace::flush(); ``` ### Reporter `Reporter` is responsible for reporting the span records to a remote agent, such as Jaeger. Executables should initialize a reporter implementation early in the program’s runtime. Span records generated before the implementation is initialized will be ignored. For an easy start, `minitrace` offers a `ConsoleReporter` that prints span records to stderr. For more advanced use, crates like `minitrace-jaeger`, `minitrace-datadog`, and `minitrace-opentelemetry` are available. By default, the reporter is triggered every 500 milliseconds. The reporter can also be triggered manually by calling `flush()`. See `Config` for customizing the reporting behavior. ``` use std::time::Duration; use minitrace::collector::Config; use minitrace::collector::ConsoleReporter; minitrace::set_reporter( ConsoleReporter, Config::default().batch_report_interval(Duration::from_secs(1)), ); minitrace::flush(); ``` Performance --- `minitrace` is designed to be fast and lightweight, considering four scenarios: * **No Tracing**: `minitrace` is not included as a dependency in the executable, while the libraries have been instrumented. In this case, it will be completely removed from the libraries, causing zero overhead. * **Sample Tracing**: `minitrace` is enabled in the executable, but only a small portion of the traces are enabled via `Span::root()`, while the other portion starts with placeholders created by `Span::noop()`.
The overhead in this case is very small - merely an integer load, comparison, and jump. * **Full Tracing with Tail Sampling**: `minitrace` is enabled in the executable, and all traces are enabled. However, only a select few abnormal tracing records (e.g., P99) are reported. Normal traces can be dismissed by using `Span::cancel()` to avoid reporting. This could be useful when you are interested in examining program’s tail latency. * **Full Tracing**: `minitrace` is enabled in the executable, and all traces are enabled. All tracing records are reported. `minitrace` performs 10x to 100x faster than other tracing libraries in this case. Modules --- * collectorCollector and the collected spans. * futureThis module provides tools to trace a `Future`. * localNon thread-safe span with low overhead. * preludeA “prelude” for crates using `minitrace`. Structs --- * EventAn event that represents a single point in time during the execution of a span. * SpanA thread-safe span. Functions --- * flushFlushes all pending span records to the reporter immediately. * set_reporterSets the reporter and its configuration for the current application. Attribute Macros --- * traceAn attribute macro designed to eliminate boilerplate code.
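To make the tail-sampling scenario above concrete, here is a minimal sketch (not taken from the crate documentation; the `handle_request` function, the 100 ms threshold, and the `Instant`-based timing are illustrative assumptions) that keeps only slow traces by cancelling the root span of fast ones:

```rust
use std::time::{Duration, Instant};

use minitrace::prelude::*;

fn handle_request() {
    // Full tracing: every request gets a real root span.
    let mut root = Span::root("handle_request", SpanContext::random());
    let started = Instant::now();
    {
        let _guard = root.set_local_parent();
        // ... do the actual work, creating LocalSpans and child Spans as usual ...
    }
    // Tail sampling: report only requests slower than the threshold.
    // Cancelling the root span dismisses the entire trace, as documented
    // for `Span::cancel()`.
    if started.elapsed() < Duration::from_millis(100) {
        root.cancel();
    }
}
```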
Struct minitrace::Span === ``` pub struct Span {} ``` A thread-safe span. Implementations --- ### impl Span #### pub fn noop() -> Self Create a place-holder span that never starts recording. ##### Examples ``` use minitrace::prelude::*; let mut root = Span::noop(); ``` #### pub fn root(name: &'static str, parent: SpanContext) -> Self Create a new trace and return its root span. Once dropped, the root span automatically submits all associated child spans to the reporter. ##### Examples ``` use minitrace::prelude::*; let mut root = Span::root("root", SpanContext::random()); ``` #### pub fn enter_with_parent(name: &'static str, parent: &Span) -> Self Create a new child span associated with the specified parent span. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("root", SpanContext::random()); let child = Span::enter_with_parent("child", &root); ``` #### pub fn enter_with_parents<'a>( name: &'static str, parents: impl IntoIterator<Item = &'a Span> ) -> Self Create a new child span associated with multiple parent spans.
This function is particularly useful when a single operation amalgamates multiple requests. It enables the creation of a unique child span that is interconnected with all the parent spans related to the requests, thereby obviating the need to generate individual child spans for each parent span. The newly created child span, and its children, will have a replica for each trace of parent spans. ##### Examples ``` use minitrace::prelude::*; let parent1 = Span::root("parent1", SpanContext::random()); let parent2 = Span::root("parent2", SpanContext::random()); let child = Span::enter_with_parents("child", [&parent1, &parent2]); ``` #### pub fn enter_with_local_parent(name: &'static str) -> Self Create a new child span associated with the current local span in the current thread. If no local span is active, this function returns a no-op span. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("root", SpanContext::random()); let _g = root.set_local_parent(); let child = Span::enter_with_local_parent("child"); ``` #### pub fn set_local_parent(&self) -> LocalParentGuard Sets the current `Span` as the local parent for the current thread. This method is used to establish a `Span` as the local parent within the current scope. A local parent is necessary for creating a `LocalSpan` using `LocalSpan::enter_with_local_parent()`. If no local parent is set, `enter_with_local_parent()` will not perform any action. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("root", SpanContext::random()); let _guard = root.set_local_parent(); // root is now the local parent // Now we can create a LocalSpan with root as the local parent. let _span = LocalSpan::enter_with_local_parent("a child span"); ``` #### pub fn with_property<F>(self, property: F) -> Selfwhere F: FnOnce() -> (Cow<'static, str>, Cow<'static, str>), Add a single property to the `Span` and return the modified `Span`. A property is an arbitrary key-value pair associated with a span. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("root", SpanContext::random()).with_property(|| ("key".into(), "value".into())); ``` #### pub fn with_properties<I, F>(self, properties: F) -> Selfwhere I: IntoIterator<Item = (Cow<'static, str>, Cow<'static, str>)>, F: FnOnce() -> I, Add multiple properties to the `Span` and return the modified `Span`. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("root", SpanContext::random()).with_properties(|| { vec![ ("key1".into(), "value1".into()), ("key2".into(), "value2".into()), ] }); ``` #### pub fn push_child_spans(&self, local_spans: LocalSpans) Attach a collection of `LocalSpan` instances as child spans to the current span. This method allows you to associate previously collected `LocalSpan` instances with the current span. This is particularly useful when the `LocalSpan` instances were initiated before their parent span, and were collected manually using a `LocalCollector`. ##### Examples ``` use minitrace::local::LocalCollector; use minitrace::prelude::*; // Collect local spans manually without a parent let collector = LocalCollector::start(); let span = LocalSpan::enter_with_local_parent("a child span"); drop(span); let local_spans = collector.collect(); // Attach the local spans to a parent let root = Span::root("root", SpanContext::random()); root.push_child_spans(local_spans); ``` #### pub fn cancel(&mut self) Dismisses the trace, preventing the reporting of any span records associated with it. 
This is particularly useful when focusing on the tail latency of a program. For instance, you can dismiss all traces that finish within the 99th percentile. ##### Note This method only dismisses the entire trace when called on the root span. If called on a non-root span, it will only cancel the reporting of that specific span. ##### Examples ``` use minitrace::prelude::*; let mut root = Span::root("root", SpanContext::random()); // .. ``` Trait Implementations --- ### impl Default for Span #### fn default() -> Span Returns the “default value” for a type. #### fn drop(&mut self) Executes the destructor for this type. Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for Span ### impl Send for Span ### impl Sync for Span ### impl Unpin for Span ### impl UnwindSafe for Span Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for T where V: MultiLane<T>, #### fn vzip(self) -> V Struct minitrace::local::LocalSpan === ``` pub struct LocalSpan {} ``` An optimized `Span` for tracing operations within a single thread. Implementations --- ### impl LocalSpan #### pub fn enter_with_local_parent(name: &'static str) -> Self Create a new child span associated with the current local span in the current thread, and then it will become the new local parent. If no local span is active, this function is a no-op. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("root", SpanContext::random()); let _g = root.set_local_parent(); let child = Span::enter_with_local_parent("child"); ``` #### pub fn with_property<F>(self, property: F) -> Self where F: FnOnce() -> (Cow<'static, str>, Cow<'static, str>), Add a single property to the `LocalSpan` and return the modified `LocalSpan`. A property is an arbitrary key-value pair associated with a span. ##### Examples ``` use minitrace::prelude::*; let span = LocalSpan::enter_with_local_parent("a child span") .with_property(|| ("key".into(), "value".into())); ``` #### pub fn with_properties<I, F>(self, properties: F) -> Self where I: IntoIterator<Item = (Cow<'static, str>, Cow<'static, str>)>, F: FnOnce() -> I, Add multiple properties to the `LocalSpan` and return the modified `LocalSpan`. ##### Examples ``` use minitrace::prelude::*; let span = LocalSpan::enter_with_local_parent("a child span").with_properties(|| { vec![ ("key1".into(), "value1".into()), ("key2".into(), "value2".into()), ] }); ``` Trait Implementations --- ### impl Default for LocalSpan #### fn default() -> LocalSpan Returns the “default value” for a type. #### fn drop(&mut self) Executes the destructor for this type.
Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for LocalSpan ### impl Send for LocalSpan ### impl Sync for LocalSpan ### impl Unpin for LocalSpan ### impl UnwindSafe for LocalSpan Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V Struct minitrace::Event === ``` pub struct Event; ``` An event that represents a single point in time during the execution of a span. Implementations --- ### impl Event #### pub fn add_to_parent<I, F>(name: &'static str, parent: &Span, properties: F)where I: IntoIterator<Item = (Cow<'static, str>, Cow<'static, str>)>, F: FnOnce() -> I, Adds an event to the parent span with the given name and properties. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("root", SpanContext::random()); Event::add_to_parent("event in root", &root, || [("key".into(), "value".into())]); ``` #### pub fn add_to_local_parent<I, F>(name: &'static str, properties: F)where I: IntoIterator<Item = (Cow<'static, str>, Cow<'static, str>)>, F: FnOnce() -> I, Adds an event to the current local parent span with the given name and properties. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("root", SpanContext::random()); let _guard = root.set_local_parent(); Event::add_to_local_parent("event in root", || [("key".into(), "value".into())]); ``` Auto Trait Implementations --- ### impl RefUnwindSafe for Event ### impl Send for Event ### impl Sync for Event ### impl Unpin for Event ### impl UnwindSafe for Event Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V Attribute Macro minitrace::trace === ``` #[trace] ``` An attribute macro designed to eliminate boilerplate code. This macro automatically creates a span for the annotated function. The span name defaults to the function name but can be customized by passing a string literal as an argument using the `name` parameter. The `#[trace]` attribute requires a local parent context to function correctly. Ensure that the function annotated with `#[trace]` is called within the scope of `Span::set_local_parent()`. Examples --- ``` use minitrace::prelude::*; #[trace] fn foo() { // ... } #[trace] async fn bar() { // ... } #[trace(name = "qux", enter_on_poll = true)] async fn baz() { // ... } ``` The code snippets above are equivalent to: ``` fn foo() { let __guard__ = LocalSpan::enter_with_local_parent("foo"); // ... } async fn bar() { async { // ... } .in_span(Span::enter_with_local_parent("bar")) .await } async fn baz() { async { // ... } .enter_on_poll("qux") .await } ``` Trait minitrace::future::FutureExt === ``` pub trait FutureExt: Future + Sized { // Provided methods fn in_span(self, span: Span) -> InSpan<Self> { ... } fn enter_on_poll(self, name: &'static str) -> EnterOnPoll<Self> { ... } } ``` An extension trait for `Futures` that provides tracing instrument adapters. Provided Methods --- #### fn in_span(self, span: Span) -> InSpan<SelfBinds a `Span` to the `Future` that continues to record until the future is dropped. In addition, it sets the span as the local parent at every poll so that `LocalSpan` becomes available within the future. Internally, it calls `Span::set_local_parent` when the executor `poll` it. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("Root", SpanContext::random()); let task = async { // ... } .in_span(Span::enter_with_parent("Task", &root)); tokio::spawn(task); ``` `Span::set_local_parent` #### fn enter_on_poll(self, name: &'static str) -> EnterOnPoll<SelfStarts a `LocalSpan` at every `Future::poll()`. If the future gets polled multiple times, it will create multiple *short* spans. ##### Examples ``` use minitrace::prelude::*; let root = Span::root("Root", SpanContext::random()); let task = async { async { // ... 
} .enter_on_poll("Sub Task") .await } .in_span(Span::enter_with_parent("Task", &root)); tokio::spawn(task); ``` Implementors --- ### impl<T: Future> FutureExt for T {"EnterOnPoll<Self>":"<h3>Notable traits for <code><a class=\"struct\" href=\"struct.EnterOnPoll.html\" title=\"struct minitrace::future::EnterOnPoll\">EnterOnPoll</a>&lt;T&gt;</code></h3><pre><code><span class=\"where fmt-newline\">impl&lt;T: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/future/future/trait.Future.html\" title=\"trait core::future::future::Future\">Future</a>&gt; <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/future/future/trait.Future.html\" title=\"trait core::future::future::Future\">Future</a> for <a class=\"struct\" href=\"struct.EnterOnPoll.html\" title=\"struct minitrace::future::EnterOnPoll\">EnterOnPoll</a>&lt;T&gt;</span><span class=\"where fmt-newline\"> type <a href=\"https://doc.rust-lang.org/nightly/core/future/future/trait.Future.html#associatedtype.Output\" class=\"associatedtype\">Output</a> = T::<a class=\"associatedtype\" href=\"https://doc.rust-lang.org/nightly/core/future/future/trait.Future.html#associatedtype.Output\" title=\"type core::future::future::Future::Output\">Output</a>;</span>","InSpan<Self>":"<h3>Notable traits for <code><a class=\"struct\" href=\"struct.InSpan.html\" title=\"struct minitrace::future::InSpan\">InSpan</a>&lt;T&gt;</code></h3><pre><code><span class=\"where fmt-newline\">impl&lt;T: <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/future/future/trait.Future.html\" title=\"trait core::future::future::Future\">Future</a>&gt; <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/core/future/future/trait.Future.html\" title=\"trait core::future::future::Future\">Future</a> for <a class=\"struct\" href=\"struct.InSpan.html\" title=\"struct minitrace::future::InSpan\">InSpan</a>&lt;T&gt;</span><span class=\"where fmt-newline\"> type <a href=\"https://doc.rust-lang.org/nightly/core/future/future/trait.Future.html#associatedtype.Output\" class=\"associatedtype\">Output</a> = T::<a class=\"associatedtype\" href=\"https://doc.rust-lang.org/nightly/core/future/future/trait.Future.html#associatedtype.Output\" title=\"type core::future::future::Future::Output\">Output</a>;</span>"} Trait minitrace::collector::Reporter === ``` pub trait Reporter: Send + 'static { // Required method fn report(&mut self, spans: &[SpanRecord]); } ``` A trait defining the behavior of a reporter. A reporter is responsible for handling span records, typically by sending them to a remote service for further processing and analysis. Required Methods --- #### fn report(&mut self, spans: &[SpanRecord]) Reports a batch of spans to a remote service. Implementors --- ### impl Reporter for ConsoleReporter Struct minitrace::collector::ConsoleReporter === ``` pub struct ConsoleReporter; ``` A console reporter that prints span records to the stderr. Trait Implementations --- ### impl Reporter for ConsoleReporter #### fn report(&mut self, spans: &[SpanRecord]) Reports a batch of spans to a remote service.Auto Trait Implementations --- ### impl RefUnwindSafe for ConsoleReporter ### impl Send for ConsoleReporter ### impl Sync for ConsoleReporter ### impl Unpin for ConsoleReporter ### impl UnwindSafe for ConsoleReporter Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. 
Function minitrace::flush
===
```
pub fn flush()
```
Flushes all pending span records to the reporter immediately.

Struct minitrace::collector::Config
===
```
pub struct Config { /* private fields */ }
```
Configuration of the behavior of the global collector.

Implementations
---
### impl Config

#### pub fn max_spans_per_trace(self, max_spans_per_trace: Option<usize>) -> Self
A soft limit for the total number of spans and events for a trace, usually used to avoid out-of-memory.
##### Note
The root span will always be collected. The eventually collected spans may exceed the limit.
##### Examples
```
use minitrace::collector::Config;

let config = Config::default().max_spans_per_trace(Some(100));
minitrace::set_reporter(minitrace::collector::ConsoleReporter, config);
```

#### pub fn batch_report_interval(self, batch_report_interval: Duration) -> Self
The time duration between two batch reports. The default value is 500 milliseconds.
A batch report will be initiated by the earliest of these events:
* When the specified time duration between two batch reports is met.
* When the number of spans in a batch hits its limit.
##### Examples
```
use std::time::Duration;
use minitrace::collector::Config;

let config = Config::default().batch_report_interval(Duration::from_secs(1));
minitrace::set_reporter(minitrace::collector::ConsoleReporter, config);
```

#### pub fn batch_report_max_spans(self, batch_report_max_spans: Option<usize>) -> Self
The soft limit for the maximum number of spans in a batch report.
A batch report will be initiated by the earliest of these events:
* When the specified time duration between two batch reports is met.
* When the number of spans in a batch hits its limit.
##### Note
The eventually reported spans may exceed the limit.
##### Examples
```
use std::time::Duration;
use minitrace::collector::Config;

let config = Config::default().batch_report_max_spans(Some(200));
minitrace::set_reporter(minitrace::collector::ConsoleReporter, config);
```

Trait Implementations
---
`Config` implements `Clone`, `Copy`, `Debug`, `Default`, `Eq`, and `PartialEq`. It is also `Send`, `Sync`, `Unpin`, `RefUnwindSafe`, and `UnwindSafe`.

Module minitrace::collector
===
Collector and the collected spans.
Structs
---
* `Config`: Configuration of the behavior of the global collector.
* `ConsoleReporter`: A console reporter that prints span records to the stderr.
* `EventRecord`: A record of an event that occurred during the execution of a span.
* `SpanContext`: A struct representing the context of a span, including its `TraceId` and `SpanId`.
* `SpanId`: An identifier for a span within a trace.
* `SpanRecord`: A record of a span that includes all the information about the span, such as its identifiers, timing information, name, and associated properties.
* `TraceId`: An identifier for a trace, which groups a set of related spans together.
Traits
---
* `Reporter`: A trait defining the behavior of a reporter. A reporter is responsible for handling span records, typically by sending them to a remote service for further processing and analysis.

Module minitrace::future
===
This module provides tools to trace a `Future`. The `FutureExt` trait extends `Future` with two methods: `in_span()` and `enter_on_poll()`. It is crucial that the outermost future uses `in_span()`; otherwise, the traces inside the `Future` will be lost.
Example
---
```
use minitrace::prelude::*;

let root = Span::root("root", SpanContext::random());

// Instrument the task
let task = async {
    async {
        // ...
    }
    .enter_on_poll("future is polled")
    .await;
}
.in_span(Span::enter_with_parent("task", &root));

runtime.spawn(task);
```
Structs
---
* `EnterOnPoll`: Adapter for `FutureExt::enter_on_poll()`.
* `InSpan`: Adapter for `FutureExt::in_span()`.
Traits
---
* `FutureExt`: An extension trait for `Futures` that provides tracing instrument adapters.

Module minitrace::local
===
Non-thread-safe span with low overhead.
Structs
---
* `LocalCollector`: A collector to collect `LocalSpan`.
* `LocalParentGuard`: A guard created by `Span::set_local_parent()`.
* `LocalSpan`: An optimized `Span` for tracing operations within a single thread.
* `LocalSpans`: A collection of `LocalSpan` instances.

Module minitrace::prelude
===
A “prelude” for crates using `minitrace`.
Re-exports
---
* `pub use crate::collector::SpanContext;`
* `pub use crate::collector::SpanId;`
* `pub use crate::collector::SpanRecord;`
* `pub use crate::collector::TraceId;`
* `pub use crate::event::Event;`
* `pub use crate::future::FutureExt as _;`
* `pub use crate::local::LocalSpan;`
* `pub use crate::span::Span;`
* `pub use crate::trace;`

Function minitrace::set_reporter
===
```
pub fn set_reporter(reporter: impl Reporter, config: Config)
```
Sets the reporter and its configuration for the current application.
Examples
---
```
use minitrace::collector::Config;
use minitrace::collector::ConsoleReporter;

minitrace::set_reporter(ConsoleReporter, Config::default());
```
pybatfish
readthedoc
SQL
Batfish builds a model of your network behavior based on configuration and other data you provide. You can query the network model as well as parsed configuration settings using several categories of Batfish questions listed below. See here for instructions on running questions.

Batfish supports configurations for a large and growing set of (physical and virtual) devices, from vendors including:

* A10 Networks
* Arista
* Amazon Web Services (AWS) constructs
  * Internet Gateways
  * NAT Gateways
  * Network ACLs
  * Security Groups
  * Virtual Private Clouds (VPCs)
  * VPN Gateways
  * …
* Check Point
* Cisco
  * ASA
  * IOS
  * IOS-XE
  * IOS-XR
  * NX-OS
* Cumulus
* F5 BIG-IP
* Fortinet
* Free-Range Routing (FRR)
* iptables (on hosts)
* Juniper (All JunOS platforms)
  * EX
  * MX
  * PTX
  * QFX
  * SRX
  * T-series
* Palo Alto Networks
* SONiC

Batfish has limited support for the following platforms:

* Aruba
* Dell Force10
* Foundry

If you'd like support for additional vendors or currently-unsupported configuration features, let us know via Slack or GitHub issue. We'll try to add support. Or, you can contribute the support yourself; we welcome pull requests! :)

This category of questions enables you to retrieve and process the contents of device configurations in a vendor-agnostic manner (except where the question itself is vendor-specific). Batfish organizes configuration content into several sub-categories.

## Node Properties¶

Returns configuration settings of nodes. Lists global settings of devices in the network. Settings that are specific to interfaces, routing protocols, etc. are available via other questions.

```
result = bf.q.nodeProperties().answer().frame()
```

Node | AS_Path_Access_Lists | Authentication_Key_Chains | Community_Match_Exprs | Community_Set_Exprs | Community_Set_Match_Exprs | Community_Sets | Configuration_Format | DNS_Servers | DNS_Source_Interface | Default_Cross_Zone_Action | Default_Inbound_Action | Domain_Name | Hostname | IKE_Phase1_Keys | ... | Interfaces | Logging_Servers | Logging_Source_Interface | NTP_Servers | NTP_Source_Interface | PBR_Policies | Route6_Filter_Lists | Route_Filter_Lists | Routing_Policies | SNMP_Source_Interface | SNMP_Trap_Servers | TACACS_Servers | TACACS_Source_Interface | VRFs | Zones | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 0 | as2border2 | [] | [] | ['as1_community', 'as2_community', 'as3_community'] | [] | ['as1_community', 'as2_community', 'as3_community'] | [] | CISCO_IOS | [] | None | PERMIT | PERMIT | lab.local | as2border2 | [] | ... | ['Ethernet0/0', 'GigabitEthernet0/0', 'GigabitEthernet1/0', 'GigabitEthernet2/0', 'Loopback0'] | [] | None | ['18.18.18.18'] | None | [] | [] | ['101', '103', 'inbound_route_filter', 'outbound_routes', '~MATCH_SUPPRESSED_SUMMARY_ONLY:default~'] | ['as1_to_as2', 'as2_to_as1', 'as2_to_as3', 'as3_to_as2', '~BGP_COMMON_EXPORT_POLICY:default~', '~BGP_PEER_EXPORT_POLICY:default:10.23.21.3~', '~BGP_PEER_EXPORT_POLICY:default:2.1.2.1~', '~BGP_PEER_EXPORT_POLICY:default:2.1.2.2~', '~BGP_REDISTRIBUTION_POLICY:default~', '~OSPF_EXPORT_POLICY:default:1~', '~RESOLUTION_POLICY~', '~suppress~rp~summary-only~'] | None | [] | [] | None | ['default'] | [] | 1 | as1border1 | [] | [] | ['as1_community', 'as2_community', 'as3_community'] | [] | ['as1_community', 'as2_community', 'as3_community'] | [] | CISCO_IOS | [] | None | PERMIT | PERMIT | lab.local | as1border1 | [] | ...
| ['Ethernet0/0', 'GigabitEthernet0/0', 'GigabitEthernet1/0', 'Loopback0'] | [] | None | [] | None | [] | [] | ['101', '102', '103', 'default_list', 'inbound_route_filter'] | ['as1_to_as2', 'as1_to_as3', 'as2_to_as1', 'as3_to_as1', '~BGP_COMMON_EXPORT_POLICY:default~', '~BGP_PEER_EXPORT_POLICY:default:1.10.1.1~', '~BGP_PEER_EXPORT_POLICY:default:10.12.11.2~', '~BGP_PEER_EXPORT_POLICY:default:3.2.2.2~', '~BGP_PEER_EXPORT_POLICY:default:5.6.7.8~', '~BGP_REDISTRIBUTION_POLICY:default~', '~OSPF_EXPORT_POLICY:default:1~', '~RESOLUTION_POLICY~'] | None | [] | [] | None | ['default'] | [] | 2 | as3border2 | [] | [] | ['as1_community', 'as2_community', 'as3_community'] | [] | ['as1_community', 'as2_community', 'as3_community'] | [] | CISCO_IOS | [] | None | PERMIT | PERMIT | lab.local | as3border2 | [] | ... | ['Ethernet0/0', 'GigabitEthernet0/0', 'GigabitEthernet1/0', 'Loopback0'] | [] | None | ['18.18.18.18', '23.23.23.23'] | None | [] | [] | ['101', '102', '103', 'inbound_route_filter'] | ['as1_to_as3', 'as2_to_as3', 'as3_to_as1', 'as3_to_as2', '~BGP_COMMON_EXPORT_POLICY:default~', '~BGP_PEER_EXPORT_POLICY:default:10.13.22.1~', '~BGP_PEER_EXPORT_POLICY:default:3.10.1.1~', '~BGP_REDISTRIBUTION_POLICY:default~', '~OSPF_EXPORT_POLICY:default:1~', '~RESOLUTION_POLICY~'] | None | [] | [] | None | ['default'] | [] | 3 | as1border2 | [] | [] | ['as1_community', 'as2_community', 'as3_community', 'as4_community'] | [] | ['as1_community', 'as2_community', 'as3_community', 'as4_community'] | [] | CISCO_IOS | [] | None | PERMIT | PERMIT | lab.local | as1border2 | [] | ... | ['Ethernet0/0', 'GigabitEthernet0/0', 'GigabitEthernet1/0', 'GigabitEthernet2/0', 'Loopback0'] | [] | None | ['18.18.18.18', '23.23.23.23'] | None | [] | [] | ['101', '102', '103', 'as4-prefixes', 'inbound_route_filter'] | ['as1_to_as2', 'as1_to_as3', 'as1_to_as4', 'as2_to_as1', 'as3_to_as1', 'as4_to_as1', '~BGP_COMMON_EXPORT_POLICY:default~', '~BGP_PEER_EXPORT_POLICY:default:1.10.1.1~', '~BGP_PEER_EXPORT_POLICY:default:10.13.22.3~', '~BGP_PEER_EXPORT_POLICY:default:10.14.22.4~', '~BGP_REDISTRIBUTION_POLICY:default~', '~OSPF_EXPORT_POLICY:default:1~', '~RESOLUTION_POLICY~'] | None | [] | [] | None | ['default'] | [] | 4 | as2dept1 | [] | [] | ['as2_community'] | [] | ['as2_community'] | [] | CISCO_IOS | [] | None | PERMIT | PERMIT | lab.local | as2dept1 | [] | ... 
| ['Ethernet0/0', 'GigabitEthernet0/0', 'GigabitEthernet1/0', 'GigabitEthernet2/0', 'GigabitEthernet3/0', 'Loopback0'] | [] | None | [] | None | [] | [] | ['102'] | ['as2_to_dept', 'dept_to_as2', '~BGP_COMMON_EXPORT_POLICY:default~', '~BGP_PEER_EXPORT_POLICY:default:2.34.101.3~', '~BGP_PEER_EXPORT_POLICY:default:2.34.201.3~', '~BGP_REDISTRIBUTION_POLICY:default~', '~RESOLUTION_POLICY~'] | None | [] | [] | None | ['default'] | [] | ``` Node as2border2 AS_Path_Access_Lists [] Authentication_Key_Chains [] Community_Match_Exprs ['as1_community', 'as2_community', 'as3_community'] Community_Set_Exprs [] Community_Set_Match_Exprs ['as1_community', 'as2_community', 'as3_community'] Community_Sets [] Configuration_Format CISCO_IOS DNS_Servers [] DNS_Source_Interface None Default_Cross_Zone_Action PERMIT Default_Inbound_Action PERMIT Domain_Name lab.local Hostname as2border2 IKE_Phase1_Keys [] IKE_Phase1_Policies [] IKE_Phase1_Proposals [] IP6_Access_Lists [] IP_Access_Lists ['101', '103', 'INSIDE_TO_AS3', 'OUTSIDE_TO_INSIDE'] IPsec_Peer_Configs [] IPsec_Phase2_Policies [] IPsec_Phase2_Proposals [] Interfaces ['Ethernet0/0', 'GigabitEthernet0/0', 'GigabitEthernet1/0', 'GigabitEthernet2/0', 'Loopback0'] Logging_Servers [] Logging_Source_Interface None NTP_Servers ['18.18.18.18'] NTP_Source_Interface None PBR_Policies [] Route6_Filter_Lists [] Route_Filter_Lists ['101', '103', 'inbound_route_filter', 'outbound_routes', '~MATCH_SUPPRESSED_SUMMARY_ONLY:default~'] Routing_Policies ['as1_to_as2', 'as2_to_as1', 'as2_to_as3', 'as3_to_as2', '~BGP_COMMON_EXPORT_POLICY:default~', '~BGP_PEER_EXPORT_POLICY:default:10.23.21.3~', '~BGP_PEER_EXPORT_POLICY:default:2.1.2.1~', '~BGP_PEER_EXPORT_POLICY:default:2.1.2.2~', '~BGP_REDISTRIBUTION_POLICY:default~', '~OSPF_EXPORT_POLICY:default:1~', '~RESOLUTION_POLICY~', '~suppress~rp~summary-only~'] SNMP_Source_Interface None SNMP_Trap_Servers [] TACACS_Servers [] TACACS_Source_Interface None VRFs ['default'] Zones [] Name: 0, dtype: object ``` ## Interface Properties¶ Returns configuration settings of interfaces. Lists interface-level settings of interfaces. Settings for routing protocols, VRFs, and zones etc. that are attached to interfaces are available via other questions. ``` result = bf.q.interfaceProperties().answer().frame() ``` Interface | Access_VLAN | Active | Admin_Up | All_Prefixes | Allowed_VLANs | Auto_State_VLAN | Bandwidth | Blacklisted | Channel_Group | Channel_Group_Members | DHCP_Relay_Addresses | Declared_Names | Description | Encapsulation_VLAN | ... | Outgoing_Filter_Name | PBR_Policy_Name | Primary_Address | Primary_Network | Proxy_ARP | Rip_Enabled | Rip_Passive | Spanning_Tree_Portfast | Speed | Switchport | Switchport_Mode | Switchport_Trunk_Encapsulation | VRF | VRRP_Groups | Zone_Name | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 0 | as1border1[Ethernet0/0] | None | False | False | [] | True | 1e+07 | False | None | [] | [] | ['Ethernet0/0'] | None | None | ... | None | None | None | None | True | False | False | False | 1e+07 | False | NONE | DOT1Q | default | [] | None | 1 | as1border1[GigabitEthernet0/0] | None | True | True | ['1.0.1.1/24'] | True | 1e+09 | False | None | [] | [] | ['GigabitEthernet0/0'] | None | None | ... 
| None | None | 1.0.1.1/24 | 1.0.1.0/24 | True | False | False | False | 1e+09 | False | NONE | DOT1Q | default | [123] | None | 2 | as1border1[GigabitEthernet1/0] | None | True | True | ['10.12.11.1/24'] | True | 1e+09 | False | None | [] | [] | ['GigabitEthernet1/0'] | None | None | ... | None | None | 10.12.11.1/24 | 10.12.11.0/24 | True | False | False | False | 1e+09 | False | NONE | DOT1Q | default | [] | None | 3 | as1border1[Loopback0] | None | True | True | ['1.1.1.1/32'] | True | 8e+09 | None | None | [] | [] | ['Loopback0'] | None | None | ... | None | None | 1.1.1.1/32 | 1.1.1.1/32 | True | False | False | False | None | False | NONE | DOT1Q | default | [] | None | 4 | as1border2[Ethernet0/0] | None | False | False | [] | True | 1e+07 | False | None | [] | [] | ['Ethernet0/0'] | None | None | ... | None | None | None | None | True | False | False | False | 1e+07 | False | NONE | DOT1Q | default | [] | None | ``` Interface as1border1[Ethernet0/0] Access_VLAN None Active False Admin_Up False All_Prefixes [] Allowed_VLANs Auto_State_VLAN True Bandwidth 1e+07 Blacklisted False Channel_Group None Channel_Group_Members [] DHCP_Relay_Addresses [] Declared_Names ['Ethernet0/0'] Description None Encapsulation_VLAN None HSRP_Groups [] HSRP_Version None Inactive_Reason Administratively down Incoming_Filter_Name None MLAG_ID None MTU 1500 Native_VLAN None Outgoing_Filter_Name None PBR_Policy_Name None Primary_Address None Primary_Network None Proxy_ARP True Rip_Enabled False Rip_Passive False Spanning_Tree_Portfast False Speed 1e+07 Switchport False Switchport_Mode NONE Switchport_Trunk_Encapsulation DOT1Q VRF default VRRP_Groups [] Zone_Name None Name: 0, dtype: object ``` ## BGP Process Configuration¶ Returns configuration settings of BGP processes. Reports configuration settings for each BGP process on each node and VRF in the network. This question reports only process-wide settings. Peer-specific settings are reported by the bgpPeerConfiguration question. ``` result = bf.q.bgpProcessConfiguration().answer().frame() ``` Node | VRF | Router_ID | Confederation_ID | Confederation_Members | Multipath_EBGP | Multipath_IBGP | Multipath_Match_Mode | Neighbors | Route_Reflector | Tie_Breaker | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 0 | as2border2 | default | 2.1.1.2 | None | None | True | True | EXACT_PATH | ['2.1.2.1', '2.1.2.2', '10.23.21.3'] | False | ARRIVAL_ORDER | 1 | as2dist1 | default | 2.1.3.1 | None | None | True | True | EXACT_PATH | ['2.1.2.1', '2.1.2.2', '2.34.101.4'] | False | ARRIVAL_ORDER | 2 | as3core1 | default | 3.10.1.1 | None | None | True | True | EXACT_PATH | ['3.1.1.1', '3.2.2.2'] | True | ARRIVAL_ORDER | 3 | as2border1 | default | 2.1.1.1 | None | None | True | True | EXACT_PATH | ['2.1.2.1', '2.1.2.2', '10.12.11.1'] | False | ARRIVAL_ORDER | 4 | as1core1 | default | 1.10.1.1 | None | None | True | True | EXACT_PATH | ['1.1.1.1', '1.2.2.2'] | True | ARRIVAL_ORDER | ``` Node as2border2 VRF default Router_ID 2.1.1.2 Confederation_ID None Confederation_Members None Multipath_EBGP True Multipath_IBGP True Multipath_Match_Mode EXACT_PATH Neighbors ['2.1.2.1', '2.1.2.2', '10.23.21.3'] Route_Reflector False Tie_Breaker ARRIVAL_ORDER Name: 0, dtype: object ``` ## BGP Peer Configuration¶ Returns configuration settings for BGP peerings. Reports configuration settings for each configured BGP peering on each node in the network. This question reports peer-specific settings. 
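As with the other configuration questions in this guide, the results can be pulled into a dataframe. A minimal sketch of the invocation, assuming `bf` is an initialized pybatfish session as in the examples above:

```
result = bf.q.bgpPeerConfiguration().answer().frame()
```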
Settings that are process-wide are reported by the bgpProcessConfiguration question. Node | VRF | Local_AS | Local_IP | Local_Interface | Confederation | Remote_AS | Remote_IP | Description | Route_Reflector_Client | Cluster_ID | Peer_Group | Import_Policy | Export_Policy | Send_Community | Is_Passive | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 0 | as3core1 | default | 3 | 3.10.1.1 | None | None | 3 | 3.1.1.1 | None | True | 3.10.1.1 | as3 | [] | [] | True | False | 1 | as2border1 | default | 2 | 10.12.11.2 | None | None | 1 | 10.12.11.1 | None | False | None | as1 | ['as1_to_as2'] | ['as2_to_as1'] | True | False | 2 | as2core2 | default | 2 | 2.1.2.2 | None | None | 2 | 2.1.3.2 | None | True | 2.1.2.2 | as2 | [] | [] | True | False | 3 | as3border1 | default | 3 | 3.1.1.1 | None | None | 3 | 3.10.1.1 | None | False | None | as3 | [] | [] | True | False | 4 | as2core1 | default | 2 | 2.1.2.1 | None | None | 2 | 2.1.3.2 | None | True | 2.1.2.1 | as2 | [] | [] | True | False | ``` Node as3core1 VRF default Local_AS 3 Local_IP 3.10.1.1 Local_Interface None Confederation None Remote_AS 3 Remote_IP 3.1.1.1 Description None Route_Reflector_Client True Cluster_ID 3.10.1.1 Peer_Group as3 Import_Policy [] Export_Policy [] Send_Community True Is_Passive False Name: 0, dtype: object ``` ## HSRP Properties¶ Returns configuration settings of HSRP groups. Lists information about HSRP groups on interfaces. ``` result = bf.q.hsrpProperties().answer().frame() ``` ``` Interface br2[GigabitEthernet0/2] Group_Id 12 Virtual_Addresses ['192.168.1.254'] Source_Address 192.168.1.2/24 Priority 100 Preempt False Active True Name: 0, dtype: object ``` ## OSPF Process Configuration¶ Returns configuration parameters for OSPF routing processes. Returns the values of important properties for all OSPF processes running across the network. ``` result = bf.q.ospfProcessConfiguration().answer().frame() ``` Node | VRF | Process_ID | Areas | Reference_Bandwidth | Router_ID | Export_Policy_Sources | Area_Border_Router | | --- | --- | --- | --- | --- | --- | --- | --- | 0 | as2border1 | default | 1 | ['1'] | 1e+08 | 2.1.1.1 | [] | False | 1 | as2core1 | default | 1 | ['1'] | 1e+08 | 2.1.2.1 | [] | False | 2 | as2dist1 | default | 1 | ['1'] | 1e+08 | 2.1.3.1 | [] | False | 3 | as2dist2 | default | 1 | ['1'] | 1e+08 | 2.1.3.2 | [] | False | 4 | as1border2 | default | 1 | ['1'] | 1e+08 | 1.2.2.2 | [] | False | ``` Node as2border1 VRF default Process_ID 1 Areas ['1'] Reference_Bandwidth 1e+08 Router_ID 2.1.1.1 Export_Policy_Sources [] Area_Border_Router False Name: 0, dtype: object ``` ## OSPF Interface Configuration¶ Returns OSPF configuration of interfaces. Returns the interface level OSPF configuration details for the interfaces in the network which run OSPF. 
Invocation `[35]:` ``` result = bf.q.ospfInterfaceConfiguration().answer().frame() ``` Interface | VRF | Process_ID | OSPF_Area_Name | OSPF_Enabled | OSPF_Passive | OSPF_Cost | OSPF_Network_Type | OSPF_Hello_Interval | OSPF_Dead_Interval | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 0 | as1core1[GigabitEthernet1/0] | default | 1 | 1 | True | False | 1 | BROADCAST | 10 | 40 | 1 | as1core1[GigabitEthernet0/0] | default | 1 | 1 | True | False | 1 | BROADCAST | 10 | 40 | 2 | as2dist1[Loopback0] | default | 1 | 1 | True | False | 1 | BROADCAST | 10 | 40 | 3 | as3core1[GigabitEthernet0/0] | default | 1 | 1 | True | False | 1 | BROADCAST | 10 | 40 | 4 | as3core1[GigabitEthernet1/0] | default | 1 | 1 | True | False | 1 | BROADCAST | 10 | 40 | ``` Interface as1core1[GigabitEthernet1/0] VRF default Process_ID 1 OSPF_Area_Name 1 OSPF_Enabled True OSPF_Passive False OSPF_Cost 1 OSPF_Network_Type BROADCAST OSPF_Hello_Interval 10 OSPF_Dead_Interval 40 Name: 0, dtype: object ``` ## OSPF Area Configuration¶ Returns configuration parameters of OSPF areas. Returns information about all OSPF areas defined across the network. Invocation `[40]:` ``` result = bf.q.ospfAreaConfiguration().answer().frame() ``` Node | VRF | Process_ID | Area | Area_Type | Active_Interfaces | Passive_Interfaces | | --- | --- | --- | --- | --- | --- | --- | 0 | as2dist2 | default | 1 | 1 | NONE | ['GigabitEthernet0/0', 'GigabitEthernet1/0', 'Loopback0'] | [] | 1 | as2border2 | default | 1 | 1 | NONE | ['GigabitEthernet1/0', 'GigabitEthernet2/0', 'Loopback0'] | [] | 2 | as3core1 | default | 1 | 1 | NONE | ['GigabitEthernet0/0', 'GigabitEthernet1/0', 'Loopback0'] | [] | 3 | as2core1 | default | 1 | 1 | NONE | ['GigabitEthernet0/0', 'GigabitEthernet1/0', 'GigabitEthernet2/0', 'GigabitEthernet3/0', 'Loopback0'] | [] | 4 | as1border2 | default | 1 | 1 | NONE | ['GigabitEthernet1/0', 'Loopback0'] | [] | Print the first row of the returned Dataframe `[42]:` `result.iloc[0]` `[42]:` ``` Node as2dist2 VRF default Process_ID 1 Area 1 Area_Type NONE Active_Interfaces ['GigabitEthernet0/0', 'GigabitEthernet1/0', 'Loopback0'] Passive_Interfaces [] Name: 0, dtype: object ``` ## Multi-chassis LAG¶ Returns MLAG configuration. Lists the configuration settings for each MLAG domain in the network. Invocation `[45]:` Node | MLAG_ID | Peer_Address | Local_Interface | Source_Interface | | --- | --- | --- | --- | --- | 0 | dc1-bl1a | DC1_BL1 | 10.255.252.11 | dc1-bl1a[Port-Channel3] | dc1-bl1a[Vlan4094] | 1 | dc1-bl1b | DC1_BL1 | 10.255.252.10 | dc1-bl1b[Port-Channel3] | dc1-bl1b[Vlan4094] | 2 | dc1-l2leaf5a | DC1_L2LEAF5 | 10.255.252.19 | dc1-l2leaf5a[Port-Channel3] | dc1-l2leaf5a[Vlan4094] | 3 | dc1-l2leaf5b | DC1_L2LEAF5 | 10.255.252.18 | dc1-l2leaf5b[Port-Channel3] | dc1-l2leaf5b[Vlan4094] | 4 | dc1-l2leaf6a | DC1_L2LEAF6 | 10.255.252.23 | dc1-l2leaf6a[Port-Channel3] | dc1-l2leaf6a[Vlan4094] | ``` Node dc1-bl1a MLAG_ID DC1_BL1 Peer_Address 10.255.252.11 Local_Interface dc1-bl1a[Port-Channel3] Source_Interface dc1-bl1a[Vlan4094] Name: 0, dtype: object ``` ## IP Owners¶ Returns where IP addresses are attached in the network. For each device, lists the mapping from IPs to corresponding interface(s) and VRF(s). 
Invocation `[50]:` ``` result = bf.q.ipOwners().answer().frame() ``` Node | VRF | Interface | IP | Mask | Active | | --- | --- | --- | --- | --- | --- | 0 | as2dist2 | default | Loopback0 | 2.1.3.2 | 32 | True | 1 | as2dist1 | default | Loopback0 | 2.1.3.1 | 32 | True | 2 | as2dept1 | default | GigabitEthernet1/0 | 2.34.201.4 | 24 | True | 3 | as2dept1 | default | Loopback0 | 2.1.1.2 | 32 | True | 4 | as3border2 | default | GigabitEthernet1/0 | 3.0.2.1 | 24 | True | Print the first row of the returned Dataframe `[52]:` `result.iloc[0]` `[52]:` ``` Node as2dist2 VRF default Interface Loopback0 IP 2.1.3.2 Mask 32 Active True Name: 0, dtype: object ``` ## Named Structures¶ Returns named structure definitions. Return structures defined in the configurations, represented in a vendor-independent JSON format. Invocation `[55]:` ``` result = bf.q.namedStructures().answer().frame() ``` Print the first 5 rows of the returned Dataframe `[56]:` `result.head(5)` `[56]:` Node | Structure_Type | Structure_Name | Structure_Definition | | --- | --- | --- | --- | 0 | as1border1 | Community_Set_Match_Expr | as1_community | {'expr': {'class': 'org.batfish.datamodel.routing_policy.communities.CommunityMatchRegex', 'communityRendering': {'class': 'org.batfish.datamodel.routing_policy.communities.ColonSeparatedRendering'}, 'regex': '(,|\{|\}|^|$| )1:'}} | 1 | as1border2 | Community_Set_Match_Expr | as1_community | {'expr': {'class': 'org.batfish.datamodel.routing_policy.communities.CommunityMatchRegex', 'communityRendering': {'class': 'org.batfish.datamodel.routing_policy.communities.ColonSeparatedRendering'}, 'regex': '(,|\{|\}|^|$| )1:'}} | 2 | as2border1 | Community_Set_Match_Expr | as1_community | {'expr': {'class': 'org.batfish.datamodel.routing_policy.communities.CommunityMatchRegex', 'communityRendering': {'class': 'org.batfish.datamodel.routing_policy.communities.ColonSeparatedRendering'}, 'regex': '(,|\{|\}|^|$| )1:'}} | 3 | as2border2 | Community_Set_Match_Expr | as1_community | {'expr': {'class': 'org.batfish.datamodel.routing_policy.communities.CommunityMatchRegex', 'communityRendering': {'class': 'org.batfish.datamodel.routing_policy.communities.ColonSeparatedRendering'}, 'regex': '(,|\{|\}|^|$| )1:'}} | 4 | as3border1 | Community_Set_Match_Expr | as1_community | {'expr': {'class': 'org.batfish.datamodel.routing_policy.communities.CommunityMatchRegex', 'communityRendering': {'class': 'org.batfish.datamodel.routing_policy.communities.ColonSeparatedRendering'}, 'regex': '(,|\{|\}|^|$| )1:'}} | ``` Node as1border1 Structure_Type Community_Set_Match_Expr Structure_Name as1_community Structure_Definition {'expr': {'class': 'org.batfish.datamodel.routing_policy.communities.CommunityMatchRegex', 'communityRendering': {'class': 'org.batfish.datamodel.routing_policy.communities.ColonSeparatedRendering'}, 'regex': '(,|\{|\}|^|$| )1:'}} Name: 0, dtype: object ``` ## Defined Structures¶ Lists the structures defined in the network. Lists the structures defined in the network, along with the files and line numbers in which they are defined. 
Invocation `[60]:` ``` result = bf.q.definedStructures().answer().frame() ``` Structure_Type | Structure_Name | Source_Lines | | --- | --- | --- | 0 | extended ipv4 access-list line | OUTSIDE_TO_INSIDE: permit ip any any | configs/as2border1.cfg:[137] | 1 | extended ipv4 access-list line | blocktelnet: deny tcp any any eq telnet | configs/as2core1.cfg:[122] | 2 | interface | GigabitEthernet1/0 | configs/as1core1.cfg:[69, 70, 71] | 3 | route-map-clause | as3_to_as2 1 | configs/as3border1.cfg:[146, 147, 148, 149] | 4 | extended ipv4 access-list | 101 | configs/as2border2.cfg:[140, 141] | ``` Structure_Type extended ipv4 access-list line Structure_Name OUTSIDE_TO_INSIDE: permit ip any any Source_Lines configs/as2border1.cfg:[137] Name: 0, dtype: object ``` ## Referenced Structures¶ Lists the references in configuration files to vendor-specific structures. Lists the references in configuration files to vendor-specific structures, along with the line number, the name and the type of the structure referenced, and configuration context in which each reference occurs. Invocation `[65]:` ``` result = bf.q.referencedStructures().answer().frame() ``` Structure_Type | Structure_Name | Context | Source_Lines | | --- | --- | --- | --- | 0 | bgp neighbor | 1.10.1.1 (VRF default) | bgp neighbor self ref | configs/as1border1.cfg:[92, 111] | 1 | bgp neighbor | 10.12.11.2 (VRF default) | bgp neighbor self ref | configs/as1border1.cfg:[114] | 2 | bgp neighbor | 3.2.2.2 (VRF default) | bgp neighbor self ref | configs/as1border1.cfg:[112] | 3 | bgp neighbor | 5.6.7.8 (VRF default) | bgp neighbor self ref | configs/as1border1.cfg:[113] | 4 | bgp peer-group | as1 | bgp neighbor peer-group | configs/as1border1.cfg:[91] | ``` Structure_Type bgp neighbor Structure_Name 1.10.1.1 (VRF default) Context bgp neighbor self ref Source_Lines configs/as1border1.cfg:[92, 111] Name: 0, dtype: object ``` ## Undefined References¶ Identifies undefined references in configuration. Finds configurations that have references to named structures (e.g., ACLs) that are not defined. Such occurrences indicate errors and can have serious consequences in some cases. Invocation `[70]:` File_Name | Struct_Type | Ref_Name | Context | Lines | | --- | --- | --- | --- | --- | 0 | configs/as2core2.cfg | route-map | filter-bogons | bgp inbound route-map | configs/as2core2.cfg:[110] | ``` File_Name configs/as2core2.cfg Struct_Type route-map Ref_Name filter-bogons Context bgp inbound route-map Lines configs/as2core2.cfg:[110] Name: 0, dtype: object ``` ## Unused Structures¶ Returns nodes with structures such as ACLs, routemaps, etc. that are defined but not used. Return nodes with structures such as ACLs, routes, etc. that are defined but not used. This may represent a bug in the configuration, which may have occurred because a final step in a template or MOP was not completed. Or it could be harmless extra configuration generated from a master template that is not meant to be used on those nodes. 
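A minimal sketch of the invocation, following the same pattern as the other questions in this category and assuming the same initialized `bf` session used throughout this guide:

```
result = bf.q.unusedStructures().answer().frame()
```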
Invocation `[75]:` Structure_Type | Structure_Name | Source_Lines | | --- | --- | --- | 0 | bgp peer-group | as3 | configs/as1border1.cfg:[85] | 1 | expanded community-list | as1_community | configs/as1border1.cfg:[121] | 2 | ipv4 prefix-list | inbound_route_filter | configs/as1border1.cfg:[131, 132] | 3 | bgp peer-group | as2 | configs/as1border2.cfg:[87] | 4 | expanded community-list | as1_community | configs/as1border2.cfg:[123] | ``` Structure_Type bgp peer-group Structure_Name as3 Source_Lines configs/as1border1.cfg:[85] Name: 0, dtype: object ``` ## VLAN Properties¶ Returns configuration settings of switched VLANs. Lists information about implicitly and explicitly configured switched VLANs. Invocation `[80]:` ``` result = bf.q.switchedVlanProperties().answer().frame() ``` Node | VLAN_ID | Interfaces | VXLAN_VNI | | --- | --- | --- | --- | 0 | dc1-bl1b | 250 | [dc1-bl1b[Port-Channel3], dc1-bl1b[Vlan250]] | 20250 | 1 | dc1-leaf1a | 210 | [dc1-leaf1a[Vlan210]] | 20210 | 2 | dc1-leaf1a | 211 | [dc1-leaf1a[Vlan211]] | 20211 | 3 | dc1-leaf2a | 3764 | [dc1-leaf2a[Port-Channel3]] | None | 4 | dc1-leaf2b | 3789 | [dc1-leaf2b[Port-Channel3]] | None | ``` Node dc1-bl1b VLAN_ID 250 Interfaces [dc1-bl1b[Port-Channel3], dc1-bl1b[Vlan250]] VXLAN_VNI 20250 Name: 0, dtype: object ``` ## VRRP Properties¶ Returns configuration settings of VRRP groups. Lists information about VRRP groups on interfaces. Invocation `[85]:` ``` result = bf.q.vrrpProperties().answer().frame() ``` ``` Interface br1[GigabitEthernet0/2] Group_Id 12 Virtual_Addresses ['192.168.1.254'] Source_Address 192.168.1.1/24 Priority 110 Preempt True Active True Name: 0, dtype: object ``` ## A10 Virtual Server Configuration¶ Returns Virtual Server configuration of A10 devices. Lists all the virtual-server to service-group to server mappings in A10 configurations. Invocation `[90]:` ``` result = bf.q.a10VirtualServerConfiguration().answer().frame() ``` Node | Virtual_Server_Name | Virtual_Server_Enabled | Virtual_Server_IP | Virtual_Server_Port | Virtual_Server_Port_Enabled | Virtual_Server_Type | Virtual_Server_Port_Type_Name | Service_Group_Name | Service_Group_Type | Servers | Source_NAT_Pool_Name | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 0 | lb42 | VS_TCP_80 | True | 10.0.0.1 | 80 | True | TCP | None | SG_TCP_80 | TCP | [['SERVER1', '80', '10.1.10.11', 'active'], ['SERVER2', '80', '10.1.10.12', 'inactive']] | None | ``` Node lb42 Virtual_Server_Name VS_TCP_80 Virtual_Server_Enabled True Virtual_Server_IP 10.0.0.1 Virtual_Server_Port 80 Virtual_Server_Port_Enabled True Virtual_Server_Type TCP Virtual_Server_Port_Type_Name None Service_Group_Name SG_TCP_80 Service_Group_Type TCP Servers [['SERVER1', '80', '10.1.10.11', 'active'], ['SERVER2', '80', '10.1.10.12', 'inactive']] Source_NAT_Pool_Name None Name: 0, dtype: object ``` ## F5 BIG-IP VIP Configuration¶ Returns VIP configuration of F5 BIG-IP devices. Lists all the VIP to server IP mappings contained in F5 BIP-IP configurations. 
Invocation `[95]:` ``` result = bf.q.f5BigipVipConfiguration().answer().frame() ``` Node | VIP_Name | VIP_Endpoint | Servers | Description | | --- | --- | --- | --- | --- | 0 | f5bigip | /Common/virtual1 | 192.0.2.1:80 TCP | ['10.0.0.1:80'] | virtual1 is cool | 1 | f5bigip | /Common/virtual2 | 192.0.2.2:80 TCP | ['10.0.0.2:80'] | pool2 is lame | 2 | f5bigip | /Common/virtual3 | 192.0.2.3:80 TCP | ['10.0.0.4:80', '10.0.0.3:80'] | ``` Node f5bigip VIP_Name /Common/virtual1 VIP_Endpoint 192.0.2.1:80 TCP Servers ['10.0.0.1:80'] Description virtual1 is cool Name: 0, dtype: object ``` This category of questions is intended to retrieve the network topology used by Batfish. This topology is a combination of information in the snapshot and inference logic (e.g., which interfaces are layer3 neighbors). Currently, the user-provided Layer 1 topology and the Layer 3 topology can be retrieved. ## User Provided Layer 1 Topology¶ Returns normalized Layer 1 edges that were input to Batfish. Lists Layer 1 edges after potentially normalizing node and interface names. All node names are lower-cased, and for nodes that appear in the snapshot, interface names are canonicalized based on the vendor. All input edges are in the output, including nodes and interfaces that do not appear in the snapshot. ``` result = bf.q.userProvidedLayer1Edges().answer().frame() ``` Interface | Remote_Interface | | --- | --- | 0 | dc1-leaf2a[Ethernet1] | dc1-spine1[Ethernet2] | 1 | dc1-leaf2b[Ethernet1] | dc1-spine1[Ethernet3] | 2 | dc1-leaf2b[Ethernet2] | dc1-spine2[Ethernet3] | 3 | dc1-svc3b[Ethernet6] | dc1-l2leaf5b[Ethernet2] | 4 | dc1-leaf2a[Ethernet4] | dc1-leaf2b[Ethernet4] | ## Layer 3 Topology¶ Returns Layer 3 links. Lists all Layer 3 edges in the network. ``` result = bf.q.layer3Edges().answer().frame() ``` Interface | IPs | Remote_Interface | Remote_IPs | | --- | --- | --- | --- | 0 | as1border1[GigabitEthernet0/0] | ['1.0.1.1'] | as1core1[GigabitEthernet1/0] | ['1.0.1.2'] | 1 | as1border1[GigabitEthernet1/0] | ['10.12.11.1'] | as2border1[GigabitEthernet0/0] | ['10.12.11.2'] | 2 | as1border2[GigabitEthernet0/0] | ['10.13.22.1'] | as3border2[GigabitEthernet0/0] | ['10.13.22.3'] | 3 | as1border2[GigabitEthernet1/0] | ['1.0.2.1'] | as1core1[GigabitEthernet0/0] | ['1.0.2.2'] | 4 | as1core1[GigabitEthernet0/0] | ['1.0.2.2'] | as1border2[GigabitEthernet1/0] | ['1.0.2.1'] | ``` Interface as1border1[GigabitEthernet0/0] IPs ['1.0.1.1'] Remote_Interface as1core1[GigabitEthernet1/0] Remote_IPs ['1.0.1.2'] Name: 0, dtype: object ``` This category of questions allows you to query how different types of traffic are forwarded by the network and whether endpoints are able to communicate. You can analyze these aspects in a few different ways. ## Traceroute¶ Traces the path(s) for the specified flow. Performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified. Unlike a real traceroute, this traceroute is directional. That is, for it to succeed, reverse connectivity is not needed. This feature can help debug connectivity issues by decoupling the two directions. ``` result = bf.q.traceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame() ``` Retrieving the flow definition `[6]:` `result.Flow` `[6]:` Retrieving the detailed Trace information `[7]:` `len(result.Traces)` `[7]:` `1` `[8]:` `result.Traces[0]` `[8]:` 1. 
node: as2border1 Retrieving the disposition of the first Trace `[10]:` `[10]:` Retrieving the first hop of the first Trace `[11]:` `[11]:` RECEIVED(GigabitEthernet2/0) Retrieving the last hop of the first Trace `[12]:` `[12]:` RECEIVED(GigabitEthernet1/0) ## Bi-directional Traceroute¶ Traces the path(s) for the specified flow, along with path(s) for reverse flows. This question performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified. If the trace succeeds, a traceroute is performed in the reverse direction. ``` result = bf.q.bidirectionalTraceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame() ``` Retrieving the Forward flow definition `[16]:` `result.Forward_Flow` `[16]:` Retrieving the detailed Forward Trace information `[17]:` `[17]:` `1` `[18]:` `[18]:` 1. node: as2border1 Evaluating the first Forward Trace `[19]:` `[20]:` Retrieving the first hop of the first Forward Trace `[21]:` `[21]:` RECEIVED(GigabitEthernet2/0) Retrieving the last hop of the first Forward Trace `[22]:` `[22]:` RECEIVED(GigabitEthernet1/0) Retrieving the Return flow definition `[23]:` `result.Reverse_Flow` `[23]:` ``` 0 start=as2dist2 interface=GigabitEthernet2/0 [2.34.201.10:33434->8.8.8.8:49152 UDP] Name: Reverse_Flow, dtype: object ``` Retrieving the detailed Return Trace information `[24]:` `[24]:` `1` `[25]:` `[25]:` 1. node: as2dist2 `[26]:` 1. node: as2dist2 `[27]:` `'NO_ROUTE'` Retrieving the first hop of the first Reverse Trace `[28]:` `[28]:` RECEIVED(GigabitEthernet2/0) Retrieving the last hop of the first Reverse Trace `[29]:` `[29]:` RECEIVED(GigabitEthernet2/0) ## Reachability¶ Finds flows that match the specified path and header space conditions. Searches across all flows that match the specified conditions and returns examples of such flows. This question can be used to ensure that certain services are globally accessible and parts of the network are perfectly isolated from each other. Invocation `[32]:` ``` result = bf.q.reachability(pathConstraints=PathConstraints(startLocation = '/as2/'), headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'), actions='SUCCESS').answer().frame() ``` ``` 0 start=as2border1 [10.0.0.0:49152->2.128.0.101:53 UDP] 1 start=as2border2 [10.0.0.0:49152->2.128.0.101:53 UDP] 2 start=as2core1 [10.0.0.0:49152->2.128.0.101:53 UDP] 3 start=as2core2 [10.0.0.0:49152->2.128.0.101:53 UDP] 4 start=as2dept1 [10.0.0.0:49152->2.128.0.101:53 UDP] 5 start=as2dist1 [10.0.0.0:49152->2.128.0.101:53 UDP] 6 start=as2dist2 [10.0.0.0:49152->2.128.0.101:53 UDP] Name: Flow, dtype: object ``` Retrieving the detailed Trace information `[34]:` `len(result.Traces)` `[34]:` `7` `[35]:` `result.Traces[0]` `[35]:` 1. node: as2border1 FORWARDED(Forwarded out interface: GigabitEthernet3/0 with resolved next-hop IP: 2.23.12.3, Routes: [ibgp (Network: 2.128.0.0/24, Next Hop: ip 2.34.201.4)]) FORWARDED(Forwarded out interface: GigabitEthernet3/0 with resolved next-hop IP: 2.23.21.3, Routes: [ibgp (Network: 2.128.0.0/24, Next Hop: ip 2.34.101.4)]) Retrieving the disposition of the first Trace `[37]:` `[37]:` `'ACCEPTED'` Retrieving the first hop of the first Trace `[38]:` `[38]:` ORIGINATED(default) `[39]:` RECEIVED(eth0) ## Bi-directional Reachability¶ Searches for successfully delivered flows that can successfully receive a response. 
Performs two reachability analyses, first originating from specified sources, then returning back to those sources. After the first (forward) pass, sets up sessions in the network and creates returning flows for each successfully delivered forward flow. The second pass searches for return flows that can be successfully delivered in the presence of the setup sessions. Invocation `[42]:` ``` result = bf.q.bidirectionalReachability(pathConstraints=PathConstraints(startLocation = '/as2dist1/'), headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'), returnFlowType='SUCCESS').answer().frame() ``` Retrieving the Forward flow definition `[43]:` `result.Forward_Flow` `[43]:` ``` 0 start=as2dist1 [2.34.101.3:49152->2.128.0.101:53 UDP] Name: Forward_Flow, dtype: object ``` Retrieving the detailed Forward Trace information `[44]:` `[44]:` `1` `[45]:` `[45]:` 1. node: as2dist1 Evaluating the first Forward Trace `[46]:` `[46]:` 1. node: as2dist1 `[47]:` `'ACCEPTED'` Retrieving the first hop of the first Forward Trace `[48]:` `[48]:` ORIGINATED(default) `[49]:` RECEIVED(eth0) Retrieving the Return flow definition `[50]:` `result.Reverse_Flow` `[50]:` ``` 0 start=host1 [2.128.0.101:53->2.34.101.3:49152 UDP] Name: Reverse_Flow, dtype: object ``` Retrieving the detailed Return Trace information `[51]:` `[51]:` `1` `[52]:` `[52]:` 1. node: host1 `[53]:` 1. node: host1 `[54]:` `'ACCEPTED'` Retrieving the first hop of the first Reverse Trace `[55]:` `[55]:` ORIGINATED(default) Retrieving the last hop of the first Reverse Trace `[56]:` `[56]:` RECEIVED(GigabitEthernet2/0) ## Loop detection¶ Detects forwarding loops. Searches across all possible flows in the network and returns example flows that will experience forwarding loops. Invocation `[59]:` ``` result = bf.q.detectLoops().answer().frame() ``` Flow | Traces | TraceCount | | --- | --- | --- | ## Multipath Consistency for host-subnets¶ Validates multipath consistency between all pairs of subnets. Searches across all flows between subnets that are treated differently (i.e., dropped versus forwarded) by different paths in the network and returns example flows. Invocation `[63]:` ``` result = bf.q.subnetMultipathConsistency().answer().frame() ``` ``` 0 start=as2dept1 interface=GigabitEthernet0/0 [2.34.101.1:49152->1.0.1.3:23 TCP (SYN)] 1 start=as2dept1 interface=GigabitEthernet1/0 [2.34.201.1:49152->1.0.1.3:23 TCP (SYN)] 2 start=as2dept1 interface=GigabitEthernet2/0 [2.128.0.2:49152->1.0.1.3:23 TCP (SYN)] 3 start=as2dept1 interface=GigabitEthernet3/0 [2.128.1.2:49152->1.0.1.3:23 TCP (SYN)] 4 start=as2dist1 interface=GigabitEthernet0/0 [2.23.11.1:49152->1.0.1.3:23 TCP (SYN)] 5 start=as2dist1 interface=GigabitEthernet1/0 [2.23.21.1:49152->1.0.1.3:23 TCP (SYN)] 6 start=as2dist1 interface=GigabitEthernet2/0 [2.34.101.1:49152->1.0.1.3:23 TCP (SYN)] 7 start=as2dist2 interface=GigabitEthernet0/0 [2.23.22.1:49152->1.0.1.3:23 TCP (SYN)] 8 start=as2dist2 interface=GigabitEthernet1/0 [2.23.12.1:49152->1.0.1.3:23 TCP (SYN)] 9 start=as2dist2 interface=GigabitEthernet2/0 [2.34.201.1:49152->1.0.1.3:23 TCP (SYN)] Name: Flow, dtype: object ``` Retrieving the detailed Trace information `[65]:` `len(result.Traces)` `[65]:` `10` `[66]:` `result.Traces[0]` `[66]:` 1. node: as2dept1 Evaluating the first Trace `[67]:` `result.Traces[0][0]` `[67]:` 1. 
node: as2dept1 Retrieving the disposition of the first Trace `[68]:` `[68]:` `'DENIED_IN'` Retrieving the first hop of the first Trace `[69]:` `[69]:` RECEIVED(GigabitEthernet0/0) `[70]:` RECEIVED(GigabitEthernet2/0) ## Multipath Consistency for router loopbacks¶ Validates multipath consistency between all pairs of loopbacks. Finds flows between loopbacks that are treated differently (i.e., dropped versus forwarded) by different paths in the presence of multipath routing. Invocation `[73]:` ``` result = bf.q.loopbackMultipathConsistency().answer().frame() ``` Retrieving the flow definition `[74]:` `result.Flow` `[74]:` ``` 0 start=as2core2 [2.1.2.2:49152->2.1.2.1:23 TCP (SYN)] 1 start=as2dist1 [2.1.3.1:49152->2.1.1.1:23 TCP (SYN)] 2 start=as2dist2 [2.1.3.2:49152->2.1.1.1:23 TCP (SYN)] Name: Flow, dtype: object ``` Retrieving the detailed Trace information `[75]:` `len(result.Traces)` `[75]:` `3` `[76]:` `result.Traces[0]` `[76]:` 1. node: as2core2 2. node: as2border1 FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.23.22.3, Routes: [ospf (Network: 2.1.2.1/32, Next Hop: interface GigabitEthernet2/0 ip 2.23.22.3)]) FORWARDED(Forwarded out interface: GigabitEthernet3/0 with resolved next-hop IP: 2.23.21.3, Routes: [ospf (Network: 2.1.2.1/32, Next Hop: interface GigabitEthernet3/0 ip 2.23.21.3)]) FORWARDED(Forwarded out interface: GigabitEthernet0/0 with resolved next-hop IP: 2.23.11.2, Routes: [ospf (Network: 2.1.2.1/32, Next Hop: interface GigabitEthernet0/0 ip 2.23.11.2)]) Evaluating the first Trace `[77]:` `result.Traces[0][0]` `[77]:` 1. node: as2core2 Retrieving the disposition of the first Trace `[78]:` `[78]:` `'ACCEPTED'` Retrieving the first hop of the first Trace `[79]:` `[79]:` ORIGINATED(default) `[80]:` RECEIVED(GigabitEthernet1/0) Specifier grammars allow you to specify complex inputs for Batfish questions. This category of questions reveals how specifier inputs are resolved by Batfish. ## Resolve Location Specifier¶ Returns the set of locations corresponding to a locationSpec value. ``` result = bf.q.resolveLocationSpecifier(locations='@enter(as2border1[GigabitEthernet2/0])').answer().frame() ``` Location | | --- | 0 | InterfaceLinkLocation{nodeName=as2border1, interfaceName=GigabitEthernet2/0} | ``` Location InterfaceLinkLocation{nodeName=as2border1, interfaceName=GigabitEthernet2/0} Name: 0, dtype: object ``` ## Resolve Filter Specifier¶ Returns the set of filters corresponding to a filterSpec value. ``` result = bf.q.resolveFilterSpecifier(filters='@in(as2border1[GigabitEthernet0/0])').answer().frame() ``` Node | Filter_Name | | --- | --- | 0 | as2border1 | OUTSIDE_TO_INSIDE | ``` Node as2border1 Filter_Name OUTSIDE_TO_INSIDE Name: 0, dtype: object ``` ## Resolve Node Specifier¶ Returns the set of nodes corresponding to a nodeSpec value. Helper question that shows how specified nodeSpec values resolve to the nodes in the network. ``` result = bf.q.resolveNodeSpecifier(nodes='/border/').answer().frame() ``` Node | | --- | 0 | as1border1 | 1 | as1border2 | 2 | as2border1 | 3 | as2border2 | 4 | as3border1 | ``` Node as1border1 Name: 0, dtype: object ``` ## Resolve Interface Specifier¶ Returns the set of interfaces corresponding to an interfaceSpec value. 
``` result = bf.q.resolveInterfaceSpecifier(interfaces='/border/[.*Ethernet]').answer().frame() ``` Interface | | --- | 0 | as1border1[Ethernet0/0] | 1 | as1border1[GigabitEthernet0/0] | 2 | as1border1[GigabitEthernet1/0] | 3 | as1border2[Ethernet0/0] | 4 | as1border2[GigabitEthernet0/0] | ## Resolve IPs from Location Specifier¶ Returns IPs that are auto-assigned to locations. Helper question that shows IPs that will be assigned to specified locationSpec values by questions are automatically pick IPs based on locations. ``` result = bf.q.resolveIpsOfLocationSpecifier(locations='@enter(as2border1[GigabitEthernet2/0])').answer().frame() ``` ## Resolve IP Specifier¶ Returns the IP address space corresponding to an ipSpec value. Helper question that shows how specified ipSpec values resolve to IPs. ``` result = bf.q.resolveIpSpecifier(ips='/border/[.*Ethernet]').answer().frame() ``` IP_Space | | --- | 0 | AclIpSpace{lines=[AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[1.0.1.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.12.11.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.13.22.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[1.0.2.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.14.22.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.12.11.2]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[2.12.11.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[2.12.12.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.23.21.2]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[2.12.22.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[2.12.21.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[3.0.1.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.23.21.3]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.13.22.3]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[3.0.2.1]}}]} | ``` IP_Space AclIpSpace{lines=[AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[1.0.1.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.12.11.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.13.22.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[1.0.2.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.14.22.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.12.11.2]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[2.12.11.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[2.12.12.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.23.21.2]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[2.12.22.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[2.12.21.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[3.0.1.1]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.23.21.3]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[10.13.22.3]}}, AclIpSpaceLine{ipSpace=IpWildcardSetIpSpace{blacklist=[], whitelist=[3.0.2.1]}}]} Name: 0, dtype: object ``` Differential questions enable you to discover configuration and 
behavior differences between two snapshot of the network. Most of the Batfish questions can be run differentially by using ``` snapshot=<current snapshot> ``` and ``` reference_snapshot=<reference snapshot> ``` parameters in `.answer()` . For example, to view routing table differences between `snapshot1` and `snapshot0` , run ``` bf.q.routes().answer(snapshot="snapshot1", reference_snapshot="snapshot0") ``` . Batfish also has two questions that are exclusively differential. ## Compare Filters¶ Compares filters with the same name in the current and reference snapshots. Returns pairs of lines, one from each filter, that match the same flow(s) but treat them differently (i.e. one permits and the other denies the flow). This question can be used to summarize how a filter has changed over time. In particular, it highlights differences that cause flows to be denied when they used to be permitted, or vice versa. The output is a table that includes pairs of lines, one from each version of the filter, that both match at least one common flow, and have different action (permit or deny). This is a differential question and the reference snapshot to compare against must be provided in the call to answer(). ``` result = bf.q.compareFilters(nodes='rtr-with-acl').answer(snapshot='filters-change',reference_snapshot='filters').frame() ``` Node | Filter_Name | Line_Index | Line_Content | Line_Action | Reference_Line_Index | Reference_Line_Content | | --- | --- | --- | --- | --- | --- | --- | 0 | rtr-with-acl | acl_in | 23 | 462 permit tcp 10.10.10.0/24 18.18.18.0/26 eq 80 | PERMIT | 101 | 2020 deny tcp any any | 1 | rtr-with-acl | acl_in | 24 | 463 permit tcp 10.10.10.0/24 18.18.18.0/26 eq 8080 | PERMIT | 101 | 2020 deny tcp any any | ## Differential Reachability¶ Returns flows that are successful in one snapshot but not in another. Searches across all possible flows in the network, with the specified header and path constraints, and returns example flows that are successful in one snapshot and not the other. This is a differential question and the reference snapshot to compare against must be provided in the call to answer(). 
Invocation `[9]:` ``` result = bf.q.differentialReachability().answer(snapshot='forwarding-change',reference_snapshot='forwarding').frame() ``` Flow | Snapshot_Traces | Snapshot_TraceCount | Reference_Traces | Reference_TraceCount | | --- | --- | --- | --- | --- | 0 | start=border1 [10.12.11.2:49152->2.128.1.1:33434 UDP] | [((ORIGINATED(default), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.12.12.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), NULL_ROUTED(Discarded, Routes: [static (Network: 2.128.1.1/32, Next Hop: discard)])))] | 1 | [((ORIGINATED(default), FORWARDED(Forwarded out interface: GigabitEthernet1/0 with resolved next-hop IP: 2.12.11.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet1/0)), (RECEIVED(GigabitEthernet0/0), FORWARDED(Forwarded out interface: GigabitEthernet3/0 with resolved next-hop IP: 2.23.12.3, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(GigabitEthernet1/0), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.34.201.4, Routes: [bgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), PERMITTED(RESTRICT_NETWORK_TRAFFIC_IN (INGRESS_FILTER)), FORWARDED(Forwarded out interface: GigabitEthernet3/0, Routes: [connected (Network: 2.128.1.0/30, Next Hop: interface GigabitEthernet3/0)]), PERMITTED(RESTRICT_HOST_TRAFFIC_OUT (EGRESS_FILTER)), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(eth0), ACCEPTED(eth0)))] | 1 | 1 | start=border1 interface=GigabitEthernet0/0 [10.12.11.1:49152->2.128.1.1:33434 UDP] | [((RECEIVED(GigabitEthernet0/0), PERMITTED(OUTSIDE_TO_INSIDE (INGRESS_FILTER)), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.12.12.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), NULL_ROUTED(Discarded, Routes: [static (Network: 2.128.1.1/32, Next Hop: discard)])))] | 1 | [((RECEIVED(GigabitEthernet0/0), PERMITTED(OUTSIDE_TO_INSIDE (INGRESS_FILTER)), FORWARDED(Forwarded out interface: GigabitEthernet1/0 with resolved next-hop IP: 2.12.11.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet1/0)), (RECEIVED(GigabitEthernet0/0), FORWARDED(Forwarded out interface: GigabitEthernet3/0 with resolved next-hop IP: 2.23.12.3, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(GigabitEthernet1/0), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.34.201.4, Routes: [bgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), PERMITTED(RESTRICT_NETWORK_TRAFFIC_IN (INGRESS_FILTER)), FORWARDED(Forwarded out interface: GigabitEthernet3/0, Routes: [connected (Network: 2.128.1.0/30, Next Hop: interface GigabitEthernet3/0)]), PERMITTED(RESTRICT_HOST_TRAFFIC_OUT (EGRESS_FILTER)), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(eth0), ACCEPTED(eth0)))] | 1 | 2 | start=border1 interface=GigabitEthernet1/0 [2.12.11.3:49152->2.128.1.1:33434 UDP] | [((RECEIVED(GigabitEthernet1/0), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.12.12.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), 
TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), NULL_ROUTED(Discarded, Routes: [static (Network: 2.128.1.1/32, Next Hop: discard)])))] | 1 | [((RECEIVED(GigabitEthernet1/0), FORWARDED(Forwarded out interface: GigabitEthernet1/0 with resolved next-hop IP: 2.12.11.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet1/0)), (RECEIVED(GigabitEthernet0/0), FORWARDED(Forwarded out interface: GigabitEthernet3/0 with resolved next-hop IP: 2.23.12.3, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(GigabitEthernet1/0), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.34.201.4, Routes: [bgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), PERMITTED(RESTRICT_NETWORK_TRAFFIC_IN (INGRESS_FILTER)), FORWARDED(Forwarded out interface: GigabitEthernet3/0, Routes: [connected (Network: 2.128.1.0/30, Next Hop: interface GigabitEthernet3/0)]), PERMITTED(RESTRICT_HOST_TRAFFIC_OUT (EGRESS_FILTER)), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(eth0), ACCEPTED(eth0)))] | 1 | 3 | start=border1 interface=GigabitEthernet2/0 [2.12.12.3:49152->2.128.1.1:33434 UDP] | [((RECEIVED(GigabitEthernet2/0), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.12.12.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), NULL_ROUTED(Discarded, Routes: [static (Network: 2.128.1.1/32, Next Hop: discard)])))] | 1 | [((RECEIVED(GigabitEthernet2/0), FORWARDED(Forwarded out interface: GigabitEthernet1/0 with resolved next-hop IP: 2.12.11.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet1/0)), (RECEIVED(GigabitEthernet0/0), FORWARDED(Forwarded out interface: GigabitEthernet3/0 with resolved next-hop IP: 2.23.12.3, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(GigabitEthernet1/0), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.34.201.4, Routes: [bgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), PERMITTED(RESTRICT_NETWORK_TRAFFIC_IN (INGRESS_FILTER)), FORWARDED(Forwarded out interface: GigabitEthernet3/0, Routes: [connected (Network: 2.128.1.0/30, Next Hop: interface GigabitEthernet3/0)]), PERMITTED(RESTRICT_HOST_TRAFFIC_OUT (EGRESS_FILTER)), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(eth0), ACCEPTED(eth0)))] | 1 | 4 | start=border2 [10.23.21.2:49152->2.128.1.1:33434 UDP] | [((ORIGINATED(default), FORWARDED(Forwarded out interface: GigabitEthernet1/0 with resolved next-hop IP: 2.12.22.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet1/0)), (RECEIVED(GigabitEthernet0/0), NULL_ROUTED(Discarded, Routes: [static (Network: 2.128.1.1/32, Next Hop: discard)])))] | 1 | [((ORIGINATED(default), FORWARDED(Forwarded out interface: GigabitEthernet2/0 with resolved next-hop IP: 2.12.21.2, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), FORWARDED(Forwarded out interface: GigabitEthernet3/0 with resolved next-hop IP: 2.23.12.3, Routes: [ibgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(GigabitEthernet1/0), FORWARDED(Forwarded out 
interface: GigabitEthernet2/0 with resolved next-hop IP: 2.34.201.4, Routes: [bgp (Network: 2.128.1.0/30, Next Hop: ip 2.34.201.4)]), TRANSMITTED(GigabitEthernet2/0)), (RECEIVED(GigabitEthernet1/0), PERMITTED(RESTRICT_NETWORK_TRAFFIC_IN (INGRESS_FILTER)), FORWARDED(Forwarded out interface: GigabitEthernet3/0, Routes: [connected (Network: 2.128.1.0/30, Next Hop: interface GigabitEthernet3/0)]), PERMITTED(RESTRICT_HOST_TRAFFIC_OUT (EGRESS_FILTER)), TRANSMITTED(GigabitEthernet3/0)), (RECEIVED(eth0), ACCEPTED(eth0)))] | 1 | Batfish questions support parameters with rich specifications for nodes, interfaces etc. The grammar for parameter types is described below. Before reading those grammars, we recommend reading the general notes. For many parameters types, there is a “resolver” question that may be used to learn what a given specification expands to. For instance, `resolveNodeSpecifier` is the resolver for `nodeSpec` , and ``` bfq.resolveNodeSpecifier(nodes="/bor/") ``` (Pybatfish syntax) will return the set of nodes that match `/bor/` . ## General notes on the grammar¶ Set operations: Specifiers denote sets of entities (e.g., nodeSpec resolves to a set of nodes). In many cases, the grammar allows for union, intersection, and difference of such sets, respectively, using `,` , `&` , and `\` . Thus, `(node1, node2)\node1` will resolve to `node1` . * Escaping names: Names of entities such as nodes and interfaces must be double-quoted if they begin with a digit (0-9), double quote (‘”’), or slash (‘/’), or they contain a space or one of `,&()[]@!#$%^;?<>={}` . Thus, the following names are legal: * `as1border1` (no quotes) * `as1-border1` * `"as1border1"` (quotes unnecessary, but OK) * `"1startsWithADigit"` (quotes needed) * `"has space"` * `"has["` * Regexes: Regular expressions must be enclosed by `/` s like `/abc/` . Batfish uses Java’s syntax and semantics for regular expressions. For simple expressions, this language is similar to others. For example: * `/abc/` , `/^abc/` , `/abc$/` match strings strings containing, beginning with, and ending with ‘abc’ * `/ab[c-d]/` and `/ab(c|d)/` match strings ‘abc’ and ‘abd’. * Case-insensitive names: All names and regexes use case-insensitive matching. Thus, `AS1BORDER1` is same as `as1border1` and `Ethernet0/0` is same as `ethernet0/0` . ## Set of enums or names¶ Many types such as `applicationSpec` or `mlagIdSpec` are simply sets of values. Such parameters share a common grammar, but with different base values. Example expressions for this grammar are: * `val1` specifies a singleton set with that value. * `/val.*/` specifies a set whose values all match regex `val.*` . * `val1, val2` specifies a set with exactly those two values. * `! val1` specifies all values other than `val1` . * `/val.*/, ! val1` specifies all values that match regex `val.*` other than `val1` . The full specification of this grammar is: > enumSetSpec := enumSetTerm [, enumSetTerm] enumSetTerm := <enum-value> | !<enum-value> | /<regex-over-enum-values>/ | !/<regex-over-enum-values>/ ## Application Specifier¶ A specification for IP traffic that includes information about protocols (ICMP, TCP, UDP) and about destination ports for TCP and UDP and type, code for ICMP. * `HTTP` specifies TCP traffic to port 80. * `tcp/80` also specifies TCP traffic to port 80. * `tcp/80,3000-3030` specifies TCP traffic to port 80 and ports between 3000 and 3030. * `tcp` specifies TCP traffic to all ports. * `icmp` also specifies ICMP traffic of all types. 
* `icmp/0/0` specifies ICMP traffic of type 0 and code 0. * `HTTP, udp/53` specifies TCP traffic to port 80 and UDP traffic to port 53. ### Application Specifier Grammar¶ > applicationSpec := applicationTerm [, applicationTerm] applicationTerm := tcp[/portSpec] | udp[/portSpec] | icmp[/<icmp-type>[/<icmp-code>]] | <application-name> portSpec := portTerm [, portTerm] portTerm := <port-number> | <from-port>-<to-port> Application name is one of `DNS` (means udp/53), `ECHO-REPLY` (icmp/0/0), `ECHO-REQUEST` (icmp/8/0), `HTTP` (tcp/80), `HTTPS` (tcp/443), `MYSQL` (tcp/3306), `SNMP` (udp/161), `SSH` (tcp/22), `TELNET` (tcp/23). ## BGP Peer Property Specifier¶ A specification for a set of BGP peer properties (e.g., those returned by the `bgpPeerConfiguration` question). A BGP peer property property specifier follows the enum set grammar over the following values: `Local_AS` , `Local_IP` , `Is_Passive` , `Remote_AS` , ``` Route_Reflector_Client ``` , `Cluster_ID` , `Peer_Group` , `Import_Policy` , `Export_Policy` , `Send_Community` . ## BGP Process Property Specifier¶ A specification for a set of BGP process properties (e.g., those returned by the ``` bgpProcessConfiguration ``` question). A BGP process property specifier follows the enum set grammar over the following values: `Multipath_Match_Mode` , `Multipath_EBGP` , `Multipath_IBGP` , `Neighbors` , `Route_Reflector` , `Tie_Breaker` . ## BGP Route Status Specifier¶ A specification for a set of BGP route statuses. A BGP route status specifier follows the enum set grammar over the following values: * `BEST` - a route that is the unique best route for an NLRI (e.g. an IP prefix for IPv4 unicast) in the BGP RIB, or a route that is equivalent to the unique best route for the purpose of ECMP routing. A `BEST` route may be installed in the main RIB. * `BACKUP` - a route that is inferior to all `BEST` routes in the BGP RIB for the same NLRI. Such a route will not be installed in the main RIB. However, it may be advertised on a BGP ADD-PATH-enabled session and the route matches the add-path policy. ``` LOCAL_IP_UNKNOWN_STATICALLY ``` — local IP address for an iBGP or multihop eBGP session is not configured * `NO_LOCAL_IP` — local IP address for a singlehop eBGP session is not configured * `NO_LOCAL_AS` — local AS for the session is not configured * `NO_REMOTE_IP` — remote IP address for a point-to-point peer is not configured * `NO_REMOTE_PREFIX` — remote prefix for a dynamic peer is not configured * `NO_REMOTE_AS` — remote AS for the session is not configured * `INVALID_LOCAL_IP` — configured local IP address does not belong to any active interface * `UNKNOWN_REMOTE` — configured remote IP is not present in the network snapshot * `HALF_OPEN` — no compatible match found in the network snapshot for a point-to-point peer * `MULTIPLE_REMOTES` — multiple compatible matches found for a point-to-point peer * `UNIQUE_MATCH` — exactly one match found for a point-to-point peer * `DYNAMIC_MATCH` — at least one compatible match found for a dynamic peer * `NO_MATCH_FOUND` — no compatible match found for a dynamic peer * `NOT_COMPATIBLE` — the BGP session is not compatibly configured * `NOT_ESTABLISHED` — the BGP session configuration is compatible but the session was not established * `ESTABLISHED` — the BGP session is established ## BGP Session Type Specifier¶ A specification for a set of BGP session types. 
A BGP session type specifier follows the enum set grammar over the following values: `IBGP` , `EBGP_SINGLEHOP` , `EBGP_MULTIHOP` , `EBGP_UNNUMBERED` , `IBGP_UNNUMBERED` , `UNSET` . ## Disposition Specifier¶ Flow dispositions are used in questions like reachability to identify flow outcomes. The disposition specifier takes as input a comma-separated list of disposition values, which are interpreted using logical OR. There are two coarse-grained flow dispositions: * `Success` : a flow has been successfully delivered * `Failure` : a flow has been dropped somewhere in the network The following fine-grained disposition values are also supported: Success dispositions: * `Accepted` : a flow has been accepted by a device in the snapshot * `Delivered_to_subnet` : a flow has been delivered to a host subnet * `Exits_network` : a flow has been successfully forwarded to a device currently outside of the snapshot * Failure dispositions: * `Denied_in` : a flow was denied by an input filter (an ACL or a firewall rule) on an interface * `Denied_out` : a flow was denied by an output filter on an interface * `No_route` : a flow was dropped because no matching route exists on device * `Null_routed` : a flow was dropped because it matched a `null` route * `Neighbor_unreachable` : a flow was dropped because it could not reach the next hop (e.g., an ARP failure) * `Loop` : the flow encountered a forwarding loop * `Insufficient_info` : Batfish does not have enough information to make a determination with certainty (e.g., some device configs are missing) ## Filter Specifier¶ A specification for filters (ACLs or firewall rules) in the network. * `filter1` includes filters on all nodes with that name. * `/^acl/` includes all filters (on all nodes) whose names name regex ‘^acl’, i.e., begin with ‘acl’. * ``` nodeTerm[filterWithoutNode] ``` indicates filters that match the `filterWithoutNode` specification on nodes that match the `nodeTerm` specification. A simple example is `as1border1[filter1]` which refers to the filter `filter1` on `as1border1` . * `@in(interfaceSpec)` refers to filters that get applied when packets enter the specified interfaces. For example, `@in(Ethernet0/0)` includes filters for incoming packets on interfaces named `Ethernet0/0` on all nodes. * `@out(interfaceSpec)` is similar except that it indicates filters that get applied when packets exit the specified interfaces. ### Filter Specifier Grammar¶ > filterSpec := filterTerm [(&|,|\) filterTerm] filterTerm := filterWithNode | filterWithoutNode | (filterSpec) filterWithNode := nodeTerm[filterWithoutNode] filterWithoutNode := filterWithoutNodeTerm [(&|,|\) filterWithoutNodeTerm] filterWithoutNodeTerm := <filter-name> | /<filter-name-regex>/ | @in(interfaceSpec) | @out(interfaceSpec) | (filterWithoutNode) ### Filter Specifier Resolver¶ ``` resolveFilterSpecifier ``` A specification for a set of interface-level properties (e.g., those returned by the `interfaceProperties` question). 
An interface property specifier follows the enum set grammar over the following values: `Access_VLAN` , `Active` , `Allowed_VLANs` , `All_Prefixes` , `Auto_State_VLAN` , `Bandwidth` , `Blacklisted` , `Channel_Group` , ``` Channel_Group_Members ``` , `Declared_Names` , `Description` , `DHCP_Relay_Addresses` , `Encapsulation_VLAN` , `HSRP_Groups` , `HSRP_Version` , `Incoming_Filter_Name` , `MLAG_ID` , `MTU` , `Native_VLAN` , `Outgoing_Filter_Name` , `PBR_Policy_Name` , `Primary_Address` , `Primary_Network` , `Proxy_ARP` , `Rip_Enabled` , `Rip_Passive` , ``` Spanning_Tree_Portfast ``` , `Speed` , `Switchport` , `Switchport_Mode` , ``` Switchport_Trunk_Encapsulation ``` , `VRF` , `VRRP_Groups` , `Zone_Name` . ## Interface Specifier¶ A specification for interfaces in the network. * `Ethernet0/1` indicates interfaces on all nodes with that name. * `/^Eth/` indicates all interfaces (on all nodes) whose names match the regex ‘^Eth’, i.e., start with ‘Eth’. * ``` nodeTerm[interfaceWithoutNode] ``` indicates interfaces that match the `interfaceWithoutNode` specification on nodes that match the `nodeTerm` specification. A simple example is ``` as1border1[Ethernet0/1] ``` which refers to the interface `Ethernet0/1` on `as1border1` . * `@connectedTo(ipSpec)` indicates all interfaces with configured IPv4 networks that overlap with specified IPs (see `ipSpec` ) * ``` @interfaceGroup(book, group) ``` looks in the configured reference library for an interface group with name ‘group’ in book with name ‘book’. * `@vrf(vrf1)` indicates all interfaces configured to be in the VRF with name ‘vrf1’. * `@zone(zone3)` indicates all interfaces configured to be in the zone with name ‘zone3’. ### Interface Specifier Grammar¶ > interfaceSpec := interfaceTerm [(&|,|\) interfaceTerm] interfaceTerm := interfaceWithNode | interfaceWithoutNode | (interfaceSpec) interfaceWithNode := nodeTerm[interfaceWithoutNode] interfaceWithoutNode := interfaceWithoutNodeTerm [(&|,|\) interfaceWithoutNodeTerm] interfaceWithoutNodeTerm := <interface-name> | /<interface-name-regex>/ | interfaceFunc | (interfaceWithoutNode) interfaceFunc := @connectedTo(ipSpec) | @interfaceGroup(<reference-book-name>, <<interface-group-name>) | @vrf(<vrf-name>) | @zone(<zone-name>) ### Interface Specifier Resolver¶ ``` resolveInterfaceSpecifier ``` shows the set of interfaces represented by the given input. ## IP Protocol Specifier¶ A specification for a set of IP protocols. IP protocol names from the list below, such as `TCP` , may be used. * IP protocol numbers between 0 and 255 (inclusive), such as `6` to denote TCP, may be used. * A negation operator `!` may be used to denote all IP protocols other than the one specified. 
The semantics of negation is: * `!TCP` refers to all IP protocols other than TCP * `!TCP, !UDP` refers to all IP protocols other than TCP and UDP * `TCP, !UDP` refers to TCP > ipProtocolSpec := ipProtocolTerm [, ipProtocolTerm] ipProtocolTerm := ipProtocol | !ipProtocol ipProtocol := <ip-protocol-name> | <ip-protocol-number### IP Protocol Names¶ Batfish understands the following protocol names (with corresponding numbers in parenthesis): `AHP` (51), `AN` (107), `ANY_0_HOP_PROTOCOL` (114), ``` ANY_DISTRIBUTED_FILE_SYSTEM ``` (68), ``` ANY_HOST_INTERNAL_PROTOCOL ``` (61), `ANY_LOCAL_NETWORK` (63), ``` ANY_PRIVATE_ENCRYPTION_SCHEME ``` (99), `ARGUS` (13), `ARIS` (104), `AX25` (93), `BBN_RCC_MON` (10), `BNA` (49), `BR_SAT_MON` (76), `CBT` (7), `CFTP` (62), `CHAOS` (16), `COMPAQ_PEER` (110), `CPHB` (73), `CPNX` (72), `CRTP` (126), `CRUDP` (127), `DCCP` (33), `DCN_MEAS` (19), `DDP` (37), `DDX` (116), `DGP` (86), `EGP` (8), `EIGRP` (88), `EMCON` (14), `ENCAP` (98), `ESP` (50), `ETHERIP` (97), `FC` (133), `FIRE` (125), `GGP` (3), `GMTP` (100), `GRE` (47), `HIP` (139), `HMP` (20), `HOPOPT` (0), `I_NLSP` (52), `IATP` (117), `IPV6_ROUTE` (43), `IPX_IN_IP` (111), `IRTP` (28), `ISIS` (124), `ISO_IP` (80), `ISO_TP4` (29), `KRYPTOLAN` (65), `L2TP` (115), `LARP` (91), `LEAF1` (25), `LEAF2` (26), `MANAET` (138), `MERIT_INP` (32), `MFE_NSP` (31), `MHRP` (48), `MICP` (95), `MOBILE` (55), `MOBILITY` (135), `MPLS_IN_IP` (137), `MTP` (92), `MUX` (18), `NARP` (54), `NETBLT` (30), `NSFNET_IGP` (85), `NVPII` (11), `OSPF` (89), `PGM` (113), `PIM` (103), `PIPE` (131), `PNNI` (102), `PRM` (21), `PTP` (123), `PUP` (12), `PVP` (75), `QNX` (106), `RDP` (27), `ROHC` (142), `RSVP` (46), `RSVP_E2E_IGNORE` (134), `RVD` (66), `SAT_EXPAK` (64), `SAT_MON` (69), `SCC_SP` (96), `SCPS` (105), `SCTP` (132), `SDRP` (42), `SECURE_VMTP` (82), `SHIM6` (140), `SKIP` (57), `SM` (122), `SMP` (121), `SNP` (109), `SPRITE_RPC` (90), `SPS` (130), `SRP` (119), `SSCOPMCE` (128), `ST` (5), `STP` (118), `SUN_ND` (77), `SWIPE` (53), `TCF` (87), `TCP` (6), `THREE_PC` (34), `TLSP` (56), `TPPLUSPLUS` (39), `TRUNK1` (23), `TRUNK2` (24), `TTP` (84), `UDP` (17), `UDP_LITE` (136), `UTI` (120), `VINES` (83), `VISA` (70), `VMTP` (81), `VRRP` (112), `WB_EXPAK` (79), `WB_MON` (78), `WESP` (141), `WSN` (74), `XNET` (15), `XNS_IDP` (22), `XTP` (36). ## IP Specifier¶ A specification for a set of IPv4 addresses. Constant values that denote addresses (e.g., `1.2.3.4` ), prefixes (e.g., `1.2.3.0/24` ), address ranges (e.g., `1.2.3.4 - 1.2.3.7` ), and wildcards (e.g., ``` 1.2.3.4:255.255.255.0 ``` ) may be used. * ``` @addressGroup(book, group) ``` looks in the configured reference library for an address group name ‘group’ in book name ‘book’. * `locationSpec` can be used to denote addresses corresponding to the specified location (see `locationSpec` ). For example, includes all IPv4 addresses configured on `as1border1` interface `Ethernet0/0` . > ipSpec := ipTerm [(&|,|\) ipTerm] ipTerm := <ip-address> | <ip-prefix> | <ip-address-low> - <ip-address-high> | <ip wildcard> | @addressGroup(<reference-book-name>, <address-group-name>) | locationSpec ### IP Specifier Resolver¶ * `resolveIpSpecifier` shows the set of IP addresses represented by the given input. ## IPSec Session Status Specifier¶ An IPSec session status specifier follows the enum set grammar over the following values: ``` IPSEC_SESSION_ESTABLISHED ``` , `IKE_PHASE1_FAILED` , ``` IKE_PHASE1_KEY_MISMATCH ``` , `IPSEC_PHASE2_FAILED` , `MISSING_END_POINT` . 
## Location Specifier¶ A specification for locations of packets, including where they start or terminate. There are two types of locations: * `InterfaceLocation` : at the interface, used to model packets that originate or terminate at the interface * : on the link connected to the interface, used to model packets before they enter the interface or after they exit Unless expilcitly specified, questions like `traceroute` and `reachability` will automatically assign IP addresses to packets based on their location. For `InterfaceLocation` , the set of assigned addresses is the interface address(es). This set is empty for interfaces that do not have an assigned address. For , the set of assigned addresses corresponds to what (hypothetical) hosts attached to that interface can have, which includes all addresses in the subnet except for the address of the interface and the first and last addresses of the subnet. This set is empty for interface subnets that are `/30` or longer (e.g., loopback interfaces). Locations for which Batfish cannot automatically assign a viable IP are ignored. To force their consideration, explicit source IPs must be specified. Some examples: * `as1border1` specifies the `InterfaceLocation` for all interfaces on node `as1border1` . Any `nodeTerm` (see node specifier grammar) can be used as a location specifier. * specifies the `InterfaceLocation` for `Ethernet0/0` on node `as1border1` . Any valid `interfaceWithNode` expression can be used as a location specifier. * `@vrf(vrf1)` specifies the `InterfaceLocation` for any interface in `vrf1` on all nodes. Any `interfaceFunc` can be used as a location specifier. * ``` @enter(as1border1[Ethernet0/0]) ``` specifies the for packets entering `Ethernet0/0` on `as1border1` . ### Location Specifier Grammar¶ > locationSpec := locationTerm [(&|,|\) locationTerm] locationTerm := locationInterface | @enter(locationInterface) | (locationSpec) locationInterface := nodeTerm | interfaceFunc | interfaceWithNode ### Location Specifier Resolver¶ ``` resolveLocationSpecifier ``` ``` resolveIpsOfLocationSpecifier ``` shows the mapping from locations to IPs that will be used in `traceroute` and `reachability` questions when IPs are not explicitly specified. ## MLAG ID Specifier¶ A specification for a set of MLAG domain identifiers. An MLAG ID specifier follows the enum set grammar over the domain ID values that appear in the snapshot. ## Named Structure Specifier¶ A specification for a set of structure types in Batfish’s vendor independent model. A named structure specifier follows the enum set grammar over the following values: `AS_PATH_ACCESS_LIST` , ``` AUTHENTICATION_KEY_CHAIN ``` ``` COMMUNITY_MATCH_EXPRS ``` , `COMMUNITY_SET_EXPRS` , ``` COMMUNITY_SET_MATCH_EXPRS ``` , `COMMUNITY_SETS` , `IKE_PHASE1_KEYS` , `IKE_PHASE1_POLICIES` , `IKE_PHASE1_PROPOSALS` , `IP_ACCESS_LIST` , `IP_6_ACCESS_LIST` , `IPSEC_PEER_CONFIGS` , ``` IPSEC_PHASE2_POLICIES ``` ``` IPSEC_PHASE2_PROPOSALS ``` , `PBR_POLICY` , `ROUTE_FILTER_LIST` , `ROUTE_6_FILTER_LIST` , `ROUTING_POLICY` , `VRF` , `ZONE` . ## Node Property Specifier¶ A specification for a set of node-level properties (e.g., those returned by the `nodeProperties` question). 
A node property specifier follows the enum set grammar over the following values: `AS_Path_Access_Lists` , ``` Authentication_Key_Chains ``` , `Canonical_IP` , ``` Community_Match_Exprs ``` , `Community_Set_Exprs` , ``` Community_Set_Match_Exprs ``` , `Community_Sets` , `Configuration_Format` , ``` Default_Cross_Zone_Action ``` ``` Default_Inbound_Action ``` , `DNS_Servers` , `DNS_Source_Interface` , `Domain_Name` , `Hostname` , `IKE_Phase1_Keys` , `IKE_Phase1_Policies` , `IKE_Phase1_Proposals` , `Interfaces` , `IP_Access_Lists` , `IP_Spaces` , `IP6_Access_Lists` , `IPsec_Peer_Configs` , ``` IPsec_Phase2_Policies ``` ``` IPsec_Phase2_Proposals ``` , `IPSec_Vpns` , `Logging_Servers` , ``` Logging_Source_Interface ``` , `NTP_Servers` , `NTP_Source_Interface` , `PBR_Policies` , `Route_Filter_Lists` , `Route6_Filter_Lists` , `Routing_Policies` , ``` SNMP_Source_Interface ``` , `SNMP_Trap_Servers` , `TACACS_Servers` , ``` TACACS_Source_Interface ``` , `VRFs` , `Zones` . ## Node Specifier¶ A specification for nodes in the network. * `as1border1` indicates a node with that name. * `/^as1/` indicates all nodes whose names match the regex `^as1` , i.e., start with ‘as1’. * `@role(dim, role)` indicates all nodes with role name ‘role’ in dimension name ‘dim’. ### Node Specifier Grammar¶ > nodeSpec := nodeTerm [(&|,|\) nodeTerm] nodeTerm := <node-name> | /<node-name-regex>/ | nodeFunc | (nodeSpec) nodeFunc := @role(<dimension-name>, <role-name>) ### Node Specifier Resolver¶ * `resolveNodeSpecifier` shows the set of nodes represented by the given input. A specification for a set of OSPF interface properties. An OSPF interface property specifier follows the enum set grammar over the following values: `OSPF_AREA_NAME` , `OSPF_COST` , `OSPF_ENABLED` , `OSPF_PASSIVE` , `OSPF_NETWORK_TYPE` . ## OSPF Process Property Specifier¶ A specification for a set of OSPF process properties. An OSPF process property specifier follows the enum set grammar over the following values: `AREA_BORDER_ROUTER` , `AREAS` , ``` EXPORT_POLICY_SOURCES ``` , `REFERENCE_BANDWIDTH` , `RFC_1583_COMPATIBLE` , `ROUTER_ID` . ## OSPF Session Status Specifier¶ A specification for a set of OSPF session statuses. An OSPF session status specifier follows the enum set grammar over the following values: `AREA_INVALID` , `AREA_MISMATCH` , `AREA_TYPE_MISMATCH` , ``` DEAD_INTERVAL_MISMATCH ``` , `DUPLICATE_ROUTER_ID` , `ESTABLISHED` , ``` HELLO_INTERVAL_MISMATCH ``` , `MTU_MISMATCH` , ``` NETWORK_TYPE_MISMATCH ``` , `NO_SESSION` , `PASSIVE_MISMATCH` , `PROCESS_INVALID` , ``` UNKNOWN_COMPATIBILITY_ISSUE ``` . ## Routing Protocol Specifier¶ A specification for a set of routing protocols. The routing protocol specifier grammar follows the enum set grammar over protocol names. The set of names include most-specific protocols such as `OSPF-INTRA` and logical names that denote multiple specific protocols. The logical name `ALL` denotes all protocols. The full hierarchy of names is: `ALL` * `IGP` * `OSPF` * `OSPF-INT` * `OSPF-INTRA` * `OSPF-INTER` * `OSPF-EXT` * `OSPF-EXT1` * `OSPF-EXT2` * `ISIS` * `ISIS-L1` * `ISIS-L2` * `EIGRP` * `EIGRP-INT` * `EIGRP-EXT` * `RIP` * `BGP` * `EBGP` * `IBGP` * `AGGREGATE` * `STATIC` * `LOCAL` * `CONNECTED` ## Routing Policy Specifier¶ A specification for routing policies in the network. * `routingPolicy1` includes routing policies on all nodes with that name. * `/^rtpol/` includes all routing policies (on all nodes) whose names match the regex ‘^rtpol’, i.e., start wtih ‘rtpol’. 
### Routing Policy Grammar¶ > routingPolicySpec := routingPolicyTerm [(&|,|\) routingPolicyTerm] routingPolicyTerm := <routing-policy-name> | /<routing-policy-name-regex>/ | (routingPolicySpec) ## VXLAN VNI Property Specifier¶ A specification for a set of VXLAN VNI properties. A VXLAN VNI property specifier follows the enum set grammar over the following values: `LOCAL_VTEP_IP` , `MULTICAST_GROUP` , `VLAN` , `VNI` , `VTEP_FLOOD_LIST` , `VXLAN_PORT` . Utility assert functions for writing network tests (or policies). All assert_* methods will raise an ``` BatfishAssertException ``` if the assertion fails. ``` assert_filter_has_no_unreachable_lines ``` (filters, soft=False, snapshot=None, session=None, df_format='table')[source]¶ * Check that a filter (e.g. an ACL) has no unreachable lines. A filter line is considered unreachable if it will never match a packet, e.g., because its match condition is empty or covered completely by those of prior lines.” ``` assert_filter_permits ``` `assert_flows_fail` (startLocation, headers, soft=False, snapshot=None, session=None, df_format='table')[source]¶ * Check if the specified set of flows, denoted by starting locations and headers, fail. `assert_flows_succeed` (startLocation, headers, soft=False, snapshot=None, session=None, df_format='table')[source]¶ * Check if the specified set of flows, denoted by starting locations and headers, succeed. Note If a node or VRF is missing in the route answer the assertion will NOT fail, but a warning will be generated. ``` assert_no_duplicate_router_ids ``` (snapshot: Optional[str] = None, nodes: Optional[str] = None, protocols: Optional[List[str]] = None, soft: bool = False, session: Optional[Session] = None, df_format: str = 'table', ignore_same_node: bool = False) → bool[source]¶ * Assert that there are no duplicate router IDs present in the snapshot. protocols – the protocols on which to run the assertion, currently BGP and OSPF are supported * ignore_same_node – whether to ignore duplicate router-ids on the same node ``` assert_no_forwarding_loops ``` Assert that there are no forwarding loops in the snapshot. ``` assert_no_incompatible_bgp_sessions ``` Assert that there are no incompatible BGP sessions present in the snapshot. status – select sessions matching the specified BGP session status specifier, if none is specified then all statuses other than UNIQUE_MATCH, DYNAMIC_MATCH, and UNKNOWN_REMOTE are selected. * ``` assert_no_incompatible_ospf_sessions ``` Assert that there are no incompatible or unestablished OSPF sessions present in the snapshot. ``` assert_no_unestablished_bgp_sessions ``` Assert that there are no BGP sessions that are compatible but not established. This assertion is run (only) for sessions that are compatible based on configuration settings and it will fail if any such session is not established because of routing or forwarding problems. To find sessions that are incompatible you may run the assert_no_incompatible_bgp_sessions assertion. ``` assert_no_undefined_references ``` Assert that there are no undefined references present in the snapshot. `assert_num_results` (answer, num, soft=False)[source]¶ * Assert an exact number of results were returned. answer – Batfish answer or DataFrame * num (int) – expected number of results *
tmaptools
cran
R
Package ‘tmaptools’ October 14, 2022 Type Package Title Thematic Map Tools Version 3.1-1 Description Set of tools for reading and processing spatial data. The aim is to supply the workflow to cre- ate thematic maps. This package also facilitates 'tmap', the package for visualizing thematic maps. License GPL-3 Encoding UTF-8 LazyData true Date 2021-01-19 Depends R (>= 3.5), methods Imports sf (>= 0.9.2), lwgeom (>= 0.1-4), stars (>= 0.4-1), units (>= 0.6-1), grid, magrittr, RColorBrewer, viridisLite, stats, dichromat, XML Suggests tmap (>= 3.0), rmapshaper, osmdata, OpenStreetMap, raster, png, shiny, shinyjs URL https://github.com/mtennekes/tmaptools BugReports https://github.com/mtennekes/tmaptools/issues RoxygenNote 7.1.1 NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2021-01-19 20:30:02 UTC R topics documented: tmaptools-packag... 2 approx_area... 3 approx_distance... 5 b... 6 bb_pol... 9 calc_densitie... 10 crop_shap... 11 geocode_OS... 12 get_asp_rati... 14 get_brewer_pa... 15 get_neighbour... 17 map_colorin... 17 palette_explore... 19 read_GP... 20 read_os... 20 rev_geocode_OS... 22 simplify_shap... 23 %>... 25 tmaptools-package Thematic Map Tools Description This package offers a set of handy tool functions for reading and processing spatial data. The aim of these functions is to supply the workflow to create thematic maps, e.g. read shape files, set map projections, append data, calculate areas and distances, and query OpenStreetMap. The visualization of thematic maps can be done with the tmap package. Details This page provides a brief overview of all package functions. Tool functions (shape) approx_areas Approximate area sizes of polygons approx_distances Approximate distances bb Create, extract or modify a bounding box bb_poly Convert bounding box to a polygon get_asp_ratio Get the aspect ratio of a shape object ————————— ————————————————————————————————— Tool functions (colors) get_brewer_pal Get and plot a (modified) Color Brewer palette map_coloring Find different colors for adjacent polygons palette_explorer Explore Color Brewer palettes ————————— ————————————————————————————————— Spatial transformation functions crop_shape Crop shape objects simplify_shape Simplify a shape ————————— ————————————————————————————————— Input and output functions geocode_OSM Get a location from an address description read_GPX Read a GPX file read_osm Read Open Street Map data rev_geocode_OSM Get an address description from a location ————————— ————————————————————————————————— Author(s) <NAME> <<EMAIL>> approx_areas Approximate area sizes of the shapes Description Approximate the area sizes of the polygons in real-world area units (such as sq km or sq mi), proportional numbers, or normalized numbers. Also, the areas can be calibrated to a prespecified area total. This function is a convenient wrapper around st_area. Usage approx_areas(shp, target = "metric", total.area = NULL) Arguments shp shape object, i.e., an sf or sp object. target target unit, one of "prop": Proportional numbers. In other words, the sum of the area sizes equals one. "norm": Normalized numbers. All area sizes are normalized to the largest area, of which the area size equals one. "metric" (default): Output area sizes will be either "km" (kilometer) or "m" (meter) depending on the map scale "imperial": Output area sizes will be either "mi" (miles) or "ft" (feet) de- pending on the map scale other: Predefined values are "km^2", "m^2", "mi^2", and "ft^2". 
Other values can be specified as well (in which case to is required). These units are the output units. See orig for the coordinate units used by the shape shp.
total.area total area size of shp in number of target units (defined by target). Useful if the total area of the shp differs from a reference total area value. For "metric" and "imperial" units, please provide the total area in square kilometers and square miles, respectively.

Details
Note that the method of determining areas is an approximation, since it depends on the used projection and the level of detail of the shape object. Projections with the equal-area property are highly recommended. See https://en.wikipedia.org/wiki/List_of_map_projections for equal area world map projections.

Value
Numeric vector of area sizes (class units).

See Also
approx_distances

Examples
if (require(tmap) && packageVersion("tmap") >= "2.0") {
data(NLD_muni)
NLD_muni$area <- approx_areas(NLD_muni, total.area = 33893)
tm_shape(NLD_muni) + tm_bubbles(size="area", title.size=expression("Area in " * km^2))
# function that returns min, max, mean and sum of area values
summary_areas <- function(x) {
list(min_area=min(x), max_area=max(x), mean_area=mean(x), sum_area=sum(x))
}
# area of the polygons
approx_areas(NLD_muni) %>% summary_areas()
# area of the polygons, corrected for a specified total area size
approx_areas(NLD_muni, total.area=33893) %>% summary_areas()
# proportional area of the polygons
approx_areas(NLD_muni, target = "prop") %>% summary_areas()
# area in square miles
approx_areas(NLD_muni, target = "mi mi") %>% summary_areas()
# area of the polygons when unprojected
approx_areas(NLD_muni %>% sf::st_transform(crs = 4326)) %>% summary_areas()
}

approx_distances Approximate distances

Description
Approximate distances between two points or across the horizontal and vertical centerlines of a bounding box.

Usage
approx_distances(x, y = NULL, projection = NULL, target = NULL)

Arguments
x object that can be coerced to a bounding box with bb, or a pair of coordinates (vector of two). In the former case, the distances across the horizontal and vertical centerlines of the bounding box are approximated. In the latter case, y is also required; the distance between points x and y is approximated.
y a pair of coordinates, vector of two. Only required when x is also a pair of coordinates.
projection projection code, needed in case x is a bounding box or when x and y are pairs of coordinates. See get_proj4.
target target unit, one of: "m", "km", "mi", and "ft".

Value
If y is specified, a list of two: unit and dist. Else, a list of three: unit, hdist (horizontal distance) and vdist (vertical distance).

See Also
approx_areas

Examples
## Not run: if (require(tmap)) {
data(NLD_prov)
# North-South and East-West distances of the Netherlands
approx_distances(NLD_prov)
# Distance between Maastricht and Groningen
p_maastricht <- geocode_OSM("Maastricht")$coords
p_groningen <- geocode_OSM("Groningen")$coords
approx_distances(p_maastricht, p_groningen, projection = 4326, target = "km")
# Check distances in several projections
sapply(c(3035, 28992, 4326), function(projection) {
p_maastricht <- geocode_OSM("Maastricht", projection = projection)$coords
p_groningen <- geocode_OSM("Groningen", projection = projection)$coords
approx_distances(p_maastricht, p_groningen, projection = projection)
})
}
## End(Not run)

bb Bounding box generator

Description
Swiss army knife for bounding boxes. Modify an existing bounding box or create a new bounding box from scratch. See details.
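For instance, a new bounding box can be built from scratch from a center point and dimensions (a minimal sketch using only the arguments documented below; the coordinates are hypothetical and roughly cover the Netherlands):
bb(cx = 5.5, cy = 52, width = 2, height = 3)
# should give xmin = 4.5, ymin = 50.5, xmax = 6.5, ymax = 53.5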
Usage
bb(
x = NA,
ext = NULL,
cx = NULL,
cy = NULL,
width = NULL,
height = NULL,
xlim = NULL,
ylim = NULL,
relative = FALSE,
asp.limit = NULL,
current.projection = NULL,
projection = NULL,
output = c("bbox", "matrix", "extent")
)

Arguments
x One of the following:
• A shape from class sf, stars, sp, or raster.
• A bounding box (st_bbox, Extent (raster package, which will no longer be supported in future versions), numeric vector of 4 (default order: xmin, ymin, xmax, ymax), or a 2x2 matrix).
• Open Street Map search query. The bounding box is automatically generated by querying x from Open Street Map Nominatim. See geocode_OSM and https://wiki.openstreetmap.org/wiki/Nominatim.
If x is not specified, a bounding box can be created from scratch (see details).
ext Extension factor of the bounding box. If 1, the bounding box is unchanged. Values smaller than 1 reduce the bounding box, and values larger than 1 enlarge the bounding box. This argument is a shortcut for both width and height with relative=TRUE. If a negative value is specified, then the shortest side of the bounding box (so width or height) is extended with ext, and the longest side is extended with the same absolute value. This is especially useful for bounding boxes with very low or high aspect ratios.
cx center x coordinate
cy center y coordinate
width width of the bounding box. These are either absolute or relative (depending on the argument relative).
height height of the bounding box. These are either absolute or relative (depending on the argument relative).
xlim limits of the x-axis. These are either absolute or relative (depending on the argument relative).
ylim limits of the y-axis. See xlim.
relative boolean that determines whether relative or absolute values are used for width, height, xlim and ylim. If x is unspecified, relative is set to FALSE.
asp.limit maximum aspect ratio, which is width/height. Number greater than or equal to 1. For landscape bounding boxes, 1/asp.limit will be used. The returned bounding box will have an aspect ratio between 1/asp.limit and asp.limit.
current.projection projection that corresponds to the bounding box specified by x.
projection projection to transform the bounding box to.
output output format of the bounding box, one of:
• "bbox" a sf::bbox object, which is a numeric vector of 4: xmin, ymin, xmax, ymax. This representation is used by the sf package.
• "matrix" a 2 by 2 numeric matrix, where the rows correspond to x and y, and the columns to min and max. This representation is used by the sp package.
• "extent" a raster::extent object, which is a numeric vector of 4: xmin, xmax, ymin, ymax. This representation is used by the raster package.

Details
An existing bounding box (defined by x) can be modified as follows:
• Using the extension factor ext.
• Changing the width and height with width and height. The argument relative determines whether relative or absolute values are used.
• Setting the x and y limits. The argument relative determines whether relative or absolute values are used.
A new bounding box can be created from scratch as follows:
• Using the extension factor ext.
• Setting the center coordinates cx and cy, together with the width and height.
• Setting the x and y limits xlim and ylim.

Value
bounding box (see argument output)

See Also
geocode_OSM

Examples
if (require(tmap) && packageVersion("tmap") >= "2.0") {
## load shapes
data(NLD_muni)
data(World)
## get bounding box (similar to sp's function bbox)
bb(NLD_muni)
## extend it by factor 1.10
bb(NLD_muni, ext=1.10)
## convert to longlat
bb(NLD_muni, projection=4326)
## change existing bounding box
bb(NLD_muni, ext=1.5)
bb(NLD_muni, width=2, relative = TRUE)
bb(NLD_muni, xlim=c(.25, .75), ylim=c(.25, .75), relative = TRUE)
}
## Not run: if (require(tmap)) {
bb("Limburg", projection = "rd")
bb_italy <- bb("Italy", projection = "eck4")
tm_shape(World, bbox=bb_italy) + tm_polygons()
# shorter alternative: tm_shape(World, bbox="Italy") + tm_polygons()
}
## End(Not run)

bb_poly Convert bounding box to a spatial polygon

Description
Convert a bounding box to a spatial (sfc) object. Useful for plotting (see example). The function bb_earth returns a spatial polygon of the 'boundaries' of the earth, which can also be done in other projections (if a feasible solution exists).

Usage
bb_poly(x, steps = 100, stepsize = NA, projection = NULL)
bb_earth(
projection = NULL,
stepsize = 1,
earth.datum = 4326,
bbx = c(-180, -90, 180, 90),
buffer = 1e-06
)

Arguments
x object that can be coerced to a bounding box with bb
steps number of intermediate points along the shortest edge of the bounding box. The number of intermediate points along the longest edge scales with the aspect ratio. These intermediate points are needed if the bounding box is plotted in another projection.
stepsize stepsize in terms of coordinates (usually meters when the shape is projected and degrees when longlat coordinates are used). If specified, it overrules steps.
projection projection in which the coordinates of x are provided. For bb_earth, projection is the projection in which the bounding box is returned (if possible).
earth.datum Geodetic datum to determine the earth boundary. By default EPSG 4326.
bbx bounding box of the earth in a vector of 4 values: min longitude, max longitude, min latitude, max latitude. By default c(-180, 180, -90, 90). If for some projection a feasible solution does not exist, it may be wise to choose a smaller bbx, e.g. c(-180, 180, -88, 88). However, this is also automatically done with the next argument, buffer.
buffer In order to determine feasible earth bounding boxes in other projections, a buffer is used to decrease the bounding box by a small margin (default 1e-06). This value is subtracted from each of the bounding box coordinates. If it still does not result in a feasible bounding box, this procedure is repeated 5 times, where each time the buffer is multiplied by 10. Set buffer=0 to disable this procedure.

Value
sfc object

Examples
if (require(tmap) && packageVersion("tmap") >= "2.0") {
data(NLD_muni)
current.mode <- tmap_mode("view")
qtm(bb_poly(NLD_muni))
# restore mode
tmap_mode(current.mode)
}

calc_densities Calculate densities

Description
Transpose quantitative variables to density variables, which are often needed for choropleths. For example, the colors of a population density map should correspond to population density rather than absolute population numbers.

Usage
calc_densities(
shp,
var,
target = "metric",
total.area = NULL,
suffix = NA,
drop = TRUE
)

Arguments
shp a shape object, i.e., an sf object or a SpatialPolygons(DataFrame) from the sp package.
var name(s) of a quantity variable contained in the shp data
target the target unit, see approx_areas.
Density values are calculated in var/target^2.
total.area total area size of shp in number of target units (defined by target), see approx_areas.
suffix character that is appended to the variable names. The resulting names are used as column names of the returned data.frame. By default, _sq_<target>, where target corresponds to the target unit, e.g. _sq_km.
drop boolean that determines whether a one-column data.frame should be returned as a vector

Value
Vector or data.frame (depending on whether length(var)==1) with density values.

Examples
if (require(tmap) && packageVersion("tmap") >= "2.0") {
data(NLD_muni)
NLD_muni_pop_per_km2 <- calc_densities(NLD_muni, target = "km km", var = c("pop_men", "pop_women"))
NLD_muni <- sf::st_sf(data.frame(NLD_muni, NLD_muni_pop_per_km2))
tm_shape(NLD_muni) + tm_polygons(c("pop_men_km.2", "pop_women_km.2"),
title=expression("Population per " * km^2), style="quantile") +
tm_facets(free.scales = FALSE) +
tm_layout(panel.show = TRUE, panel.labels=c("Men", "Women"))
}

crop_shape Crop shape object

Description
Crop a shape object (from class sf, stars, sp, or raster). A shape object x is cropped, either by the bounding box of another shape y, or by y itself if it is a SpatialPolygons object and polygon = TRUE.

Usage
crop_shape(x, y, polygon = FALSE, ...)

Arguments
x shape object, i.e. an object from class sf, stars, sp, or raster.
y bounding box, an st_bbox, extent (raster package), or a shape object from which the bounding box is extracted (unless polygon is TRUE and x is an sf object).
polygon should x be cropped by the polygon defined by y? If FALSE (default), x is cropped by the bounding box of y. Polygon cropping only works when x is a spatial object and y is a SpatialPolygons object.
... not used anymore

Details
This function is similar to crop from the raster package. The main difference is that crop_shape also allows cropping with a polygon instead of a rectangle.

Value
cropped shape, in the same class as x

See Also
bb

Examples
if (require(tmap) && packageVersion("tmap") >= "2.0") {
data(World, NLD_muni, land, metro)
#land_NLD <- crop_shape(land, NLD_muni)
#qtm(land_NLD, raster="trees", style="natural")
metro_Europe <- crop_shape(metro, World[World$continent == "Europe", ], polygon = TRUE)
qtm(World) + tm_shape(metro_Europe) + tm_bubbles("pop2010", col="red", title.size="European cities") +
tm_legend(frame=TRUE)
}

geocode_OSM Geocodes a location using OpenStreetMap Nominatim

Description
Geocodes a location (based on a search query) to coordinates and a bounding box. Similar to geocode from the ggmap package. It uses OpenStreetMap Nominatim. For processing large amounts of queries, please read the usage policy (https://operations.osmfoundation.org/policies/nominatim/).

Usage
geocode_OSM(
q,
projection = NULL,
return.first.only = TRUE,
keep.unfound = FALSE,
details = FALSE,
as.data.frame = NA,
as.sf = FALSE,
geometry = c("point", "bbox"),
server = "https://nominatim.openstreetmap.org"
)

Arguments
q a character (vector) that specifies a search query. For instance "India" or "CBS Weg 11, Heerlen, Netherlands".
projection projection in which the coordinates and bounding box are returned. See st_crs for details. By default latitude longitude coordinates (EPSG 4326).
return.first.only Only return the first result
keep.unfound Keep list items / data.frame rows with NAs for unfound search terms. By default FALSE
details provide output details, other than the point coordinates and bounding box
as.data.frame Return the output as a data.frame.
If FALSE, a list is returned with at least two items: "coords", a vector containing the coordinates, and "bbox", the corresponding bounding box. By default FALSE, unless q contains multiple queries. If as.sf = TRUE (see below), as.data.frame will be set to TRUE.
as.sf Return the output as an sf object. If TRUE, return.first.only will be set to TRUE. Two geometry columns are added: bbox and point. The argument geometry determines which of them is set to the default geometry.
geometry When as.sf = TRUE, this argument determines which column (bbox or point) is set as the geometry column. Note that the geometry can be changed afterwards with st_set_geometry.
server OpenStreetMap Nominatim server name. Could also be a local OSM Nominatim server.

Value
If as.sf, then an sf object is returned. Else, if as.data.frame, then a data.frame is returned, else a list.

See Also
rev_geocode_OSM, bb

Examples
## Not run: if (require(tmap)) {
geocode_OSM("India")
geocode_OSM("CBS Weg 1, Heerlen")
geocode_OSM("CBS Weg 1, Heerlen", projection = 28992)
data(metro)
# sample 5 cities from the metro dataset
five_cities <- metro[sample(length(metro), 5), ]
# obtain geocode locations from their long names
five_cities_geocode <- geocode_OSM(five_cities$name_long, as.sf = TRUE)
# change to interactive mode
current.mode <- tmap_mode("view")
# plot metro coordinates in blue and geocode coordinates in red
# zoom in to see the differences
tm_shape(five_cities) + tm_dots(col = "blue") +
tm_shape(five_cities_geocode) + tm_dots(col = "red")
# restore current mode
tmap_mode(current.mode)
}
## End(Not run)

get_asp_ratio Get aspect ratio

Description
Get the aspect ratio of a shape object, a tmap object, or a bounding box.

Usage
get_asp_ratio(x, is.projected = NA, width = 700, height = 700, res = 100)

Arguments
x A shape from class sf, stars, sp, or Raster, a bounding box (that can be coerced by bb), or a tmap object.
is.projected Logical that determines whether the coordinates of x are projected (TRUE) or longitude latitude coordinates (FALSE). By default, it is determined by the coordinates of x.
width See details; only applicable if x is a tmap object.
height See details; only applicable if x is a tmap object.
res See details; only applicable if x is a tmap object.

Details
The arguments width, height, and res are passed on to png. If x is a tmap object, a temporary png image is created to calculate the aspect ratio of the tmap object. The default size of this image is 700 by 700 pixels at 100 dpi.

Value
aspect ratio

Examples
if (require(tmap) && packageVersion("tmap") >= "2.0") {
data(World)
get_asp_ratio(World)
get_asp_ratio(bb(World))
tm <- qtm(World)
get_asp_ratio(tm)
}
## Not run: get_asp_ratio("Germany") #note: bb("Germany") uses geocode_OSM("Germany")
## End(Not run)

get_brewer_pal Get and plot a (modified) Color Brewer palette

Description
Get and plot a (modified) palette from Color Brewer. In addition to the base function brewer.pal, a palette can be created for any number of classes. The contrast of the palette can be adjusted for sequential and diverging palettes. For categorical palettes, intermediate colors can be generated. An interactive tool that uses this function is palette_explorer.

Usage
get_brewer_pal(palette, n = 5, contrast = NA, stretch = TRUE, plot = TRUE)

Arguments
palette name of the color brewer palette. Run palette_explorer or see brewer.pal for options.
n number of colors
contrast a vector of two numbers between 0 and 1 that defines the contrast range of the palette.
Applicable to sequential and diverging palettes. For sequential palettes, 0 stands for the leftmost color and 1 the rightmost color. For instance, when contrast=c(.25, .75), then the palette ranges from 1/4 to 3/4 of the available color range. For diverging palettes, 0 stands for the middle color and 1 for both outer colors. If only one number is provided, the other number is set to 0. The default value depends on n. See details.
stretch logical that determines whether intermediate colors are used for a categorical palette when n is greater than the number of available colors.
plot should the palette be plotted, or only returned? If TRUE, the palette is also silently returned.

Details
The default contrast of the palette depends on the number of colors, n, in the following way. The default contrast is maximal, so (0, 1), when n = 9 for sequential palettes and n = 11 for diverging palettes. The default contrast values for smaller values of n can be extracted with some R magic: sapply(1:9, tmaptools:::default_contrast_seq) for sequential palettes and sapply(1:11, tmaptools:::default_contrast_div) for diverging palettes.

Value
vector of color values. It is silently returned when plot=TRUE.

See Also
palette_explorer

Examples
get_brewer_pal("Blues")
get_brewer_pal("Blues", contrast=c(.4, .8))
get_brewer_pal("Blues", contrast=c(0, 1))
get_brewer_pal("Blues", n=15, contrast=c(0, 1))
get_brewer_pal("RdYlGn")
get_brewer_pal("RdYlGn", n=11)
get_brewer_pal("RdYlGn", n=11, contrast=c(0, .4))
get_brewer_pal("RdYlGn", n=11, contrast=c(.4, 1))
get_brewer_pal("Set2", n = 12)
get_brewer_pal("Set2", n = 12, stretch = FALSE)

get_neighbours Get neighbours list from spatial objects

Description
Get neighbours list from spatial objects. The output is similar to the function poly2nb of the spdep package, but uses sf instead of sp.

Usage
get_neighbours(x)

Arguments
x a shape object, i.e., an sf object or a SpatialPolygons(DataFrame) (sp package).

Value
A list where the items correspond to the features. Each item is a vector of neighbours.

map_coloring Map coloring

Description
Color the polygons of a map such that adjacent polygons have different colors.

Usage
map_coloring(
x,
algorithm = "greedy",
ncols = NA,
minimize = FALSE,
palette = NULL,
contrast = 1
)

Arguments
x Either a shape (i.e. an sf or SpatialPolygons(DataFrame) (sp package) object), or an adjacency list.
algorithm currently, only "greedy" is implemented.
ncols number of colors. By default it is 8 when palette is undefined. Else, it is set to the length of palette.
minimize logical that determines whether the algorithm will search for a minimal number of colors. If FALSE, the ncols colors will be picked by a random procedure.
palette color palette.
contrast vector of two numbers that determine the range that is used for sequential and diverging palettes (applicable when auto.palette.mapping=TRUE). Both numbers should be between 0 and 1. The first number determines where the palette begins, and the second number where it ends. For sequential palettes, 0 means the brightest color, and 1 the darkest color. For diverging palettes, 0 means the middle color, and 1 both extremes. If only one number is provided, this number is interpreted as the endpoint (with 0 taken as the start).

Value
If palette is defined, a vector of colors is returned, otherwise a vector of color indices.
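The returned indices can be checked against the adjacency structure from get_neighbours. A minimal sketch (not part of the package documentation), assuming shp is an sf polygon object (hypothetical placeholder, e.g. tmap's World data):
nb  <- get_neighbours(shp)   # adjacency list, one integer vector per feature
col <- map_coloring(shp)     # color indices, since palette is undefined
# no polygon should share a color index with any of its neighbours
all(vapply(seq_along(nb), function(i) all(col[nb[[i]]] != col[i]), logical(1)))
# should be TRUE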
Examples
if (require(tmap) && packageVersion("tmap") >= "2.0") {
data(World, metro)
World$color <- map_coloring(World, palette="Pastel2")
qtm(World, fill = "color")
# map_coloring used indirectly:
qtm(World, fill = "MAP_COLORS")
data(NLD_prov, NLD_muni)
tm_shape(NLD_prov) + tm_fill("name", legend.show = FALSE) +
tm_shape(NLD_muni) + tm_polygons("MAP_COLORS", palette="Greys", alpha = .25) +
tm_shape(NLD_prov) + tm_borders(lwd=2) +
tm_text("name", shadow=TRUE) +
tm_format("NLD", title="Dutch provinces and\nmunicipalities", bg.color="white")
}

palette_explorer Explore color palettes

Description
palette_explorer() starts an interactive tool that shows all Color Brewer and viridis palettes, where the number of colors can be adjusted as well as the contrast range. Categorical (qualitative) palettes can be stretched when the number of colors exceeds the number of palette colors. Output code needed to get the desired color values is generated. Finally, all colors can be tested for color blindness. The data.frame tmap.pal.info is similar to brewer.pal.info, but extended with the color palettes from viridis.

Usage
palette_explorer()
tmap.pal.info

Format
An object of class data.frame with 40 rows and 4 columns.

References
https://www.color-blindness.com/types-of-color-blindness/

See Also
get_brewer_pal, dichromat, RColorBrewer

Examples
## Not run: if (require(shiny) && require(shinyjs)) {
palette_explorer()
}
## End(Not run)

read_GPX Read GPX file

Description
Read a GPX file. By default, it reads all possible GPX layers, and only returns shapes for layers that have any features.

Usage
read_GPX(
file,
layers = c("waypoints", "routes", "tracks", "route_points", "track_points"),
remove.empty.layers = TRUE,
as.sf = TRUE
)

Arguments
file a GPX filename (including directory)
layers vector of GPX layers. Possible options are "waypoints", "tracks", "routes", "track_points", "route_points". By default, all those layers are read.
remove.empty.layers should empty layers (i.e. with 0 features) be removed from the list?
as.sf not used anymore

Details
Note that this function returns sf objects, but still uses methods from sp and rgdal internally.

Value
a list of sf objects, one for each layer

read_osm Read Open Street Map data

Description
Read Open Street Map data. OSM tiles are read and returned as a spatial raster. Vectorized OSM data is not supported anymore (see details).

Usage
read_osm(
x,
zoom = NULL,
type = "osm",
minNumTiles = NULL,
mergeTiles = NULL,
use.colortable = FALSE,
...
)

Arguments
x object that can be coerced to a bounding box with bb (e.g. an existing bounding box or a shape). In the first case, other arguments can be passed on to bb (see ...). If an existing bounding box is specified in projected coordinates, please specify current.projection.
zoom passed on to openmap. Only applicable when raster=TRUE.
type tile provider, by default "osm", which corresponds to OpenStreetMap Mapnik. See openmap for options. Only applicable when raster=TRUE.
minNumTiles passed on to openmap. Only applicable when raster=TRUE.
mergeTiles passed on to openmap. Only applicable when raster=TRUE.
use.colortable should the colors of the returned raster object be stored in a colortable? If FALSE, a RasterStack is returned with three layers that correspond to the red, green and blue values between 0 and 255.
... arguments passed on to bb.
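Because the remaining arguments are forwarded to bb, a geocoded search query can be combined with, e.g., an extension factor directly in the read_osm call. A sketch (not from the package manual; requires internet access, the OpenStreetMap package, and tmap for qtm):
## Not run:
osm_heerlen <- read_osm("Heerlen", ext = 1.2, type = "osm")
qtm(osm_heerlen)
## End(Not run)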
The reason is that the package that was used under the hood, osmar, has some limitations and is not actively maintained anymore. Therefore, we recommend the package osmdata. Since this package is very user-friendly, there was no reason to use read_osm as a wrapper for reading vectorized OSM data. Value The output of read_osm is a raster object. Examples ## Not run: if (require(tmap)) { #### Choropleth with OSM background # load Netherlands shape data(NLD_muni) # read OSM raster data osm_NLD <- read_osm(NLD_muni, ext=1.1) # plot with regular tmap functions tm_shape(osm_NLD) + tm_rgb() + tm_shape(NLD_muni) + tm_polygons("population", convert2density=TRUE, style="kmeans", alpha=.7, palette="Purples") #### A close look at the building of Statistics Netherlands in Heerlen # create a bounding box around the CBS (Statistics Netherlands) building CBS_bb <- bb("CBS Weg 11, Heerlen", width=.003, height=.002) # read Microsoft Bing satellite and OpenCycleMap OSM layers CBS_osm1 <- read_osm(CBS_bb, type="bing") CBS_osm2 <- read_osm(CBS_bb, type="opencyclemap") # plot OSM raster data qtm(CBS_osm1) qtm(CBS_osm2) } ## End(Not run) rev_geocode_OSM Reverse geocodes a location using OpenStreetMap Nominatim Description Reverse geocodes a location (based on spatial coordinates) to an address. It uses OpenStreetMap Nominatim. For processing large amount of queries, please read the usage policy (https:// operations.osmfoundation.org/policies/nominatim/). Usage rev_geocode_OSM( x, y = NULL, zoom = NULL, projection = 4326, as.data.frame = NA, server = "https://nominatim.openstreetmap.org" ) Arguments x x coordinate(s), or a spatial points object (sf or SpatialPoints) y y coordinate(s) zoom zoom level projection projection in which the coordinates x and y are provided. as.data.frame return as data.frame (TRUE) or list (FALSE). By default a list, unless multiple coordinates are provided. server OpenStreetMap Nominatim server name. Could also be a local OSM Nomina- tim server. Value A data frame or a list with all attributes that are contained in the search result See Also geocode_OSM Examples ## Not run: if (require(tmap)) { data(metro) # sample five cities from metro dataset set.seed(1234) five_cities <- metro[sample(length(metro), 5), ] # obtain reverse geocode address information addresses <- rev_geocode_OSM(five_cities, zoom = 6) five_cities <- sf::st_sf(data.frame(five_cities, addresses)) # change to interactive mode current.mode <- tmap_mode("view") tm_shape(five_cities) + tm_markers(text="name") # restore current mode tmap_mode(current.mode) } ## End(Not run) simplify_shape Simplify shape Description Simplify a shape consisting of polygons or lines. This can be useful for shapes that are too detailed for visualization, especially along natural borders such as coastlines and rivers. The number of coordinates is reduced. Usage simplify_shape(shp, fact = 0.1, keep.units = FALSE, keep.subunits = FALSE, ...) Arguments shp an sf or sfc object. fact simplification factor, number between 0 and 1 (default is 0.1) keep.units prevent small polygon features from disappearing at high simplification (default FALSE) keep.subunits should multipart polygons be converted to singlepart polygons? This prevents small shapes from disappearing during simplification if keep.units = TRUE. De- fault FALSE ... other arguments passed on to the underlying function ms_simplify (except for the arguments input, keep, keep_shapes and explode) Details This function is a wrapper of ms_simplify. In addition, the data is preserved. 
Also sf objects are supported. Value sf object Examples ## Not run: if (require(tmap)) { data(World) # show different simplification factors tm1 <- qtm(World %>% simplify_shape(fact = 0.05), title="Simplify 0.05") tm2 <- qtm(World %>% simplify_shape(fact = 0.1), title="Simplify 0.1") tm3 <- qtm(World %>% simplify_shape(fact = 0.2), title="Simplify 0.2") tm4 <- qtm(World %>% simplify_shape(fact = 0.5), title="Simplify 0.5") tmap_arrange(tm1, tm2, tm3, tm4) # show different options for keeping smaller (sub)units tm5 <- qtm(World %>% simplify_shape(keep.units = TRUE, keep.subunits = TRUE), title="Keep units and subunits") tm6 <- qtm(World %>% simplify_shape(keep.units = TRUE, keep.subunits = FALSE), title="Keep units, ignore small subunits") tm7 <- qtm(World %>% simplify_shape(keep.units = FALSE), title="Ignore small units and subunits") tmap_arrange(tm5, tm6, tm7) } ## End(Not run) %>% Pipe operator Description The pipe operator from magrittr, %>%, can also be used in functions from tmaptools. Arguments lhs Left-hand side rhs Right-hand side
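The pipe is handy for chaining the tool functions documented above. A small sketch (assuming the tmap example data World is installed and rmapshaper is available for simplify_shape):
## Not run:
library(tmaptools)
data(World, package = "tmap")
World %>%
simplify_shape(fact = 0.2) %>%      # reduce the number of coordinates
approx_areas(target = "km^2") %>%   # approximate area per polygon
summary()
## End(Not run)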
tsdisagg2
cran
R
Package ‘tsdisagg2’ October 14, 2022 Type Package Title Time Series Disaggregation Version 0.1.0 Author <NAME> <<EMAIL>> Maintainer <NAME> <<EMAIL>> Description Disaggregates low frequency time series data to higher frequency series. Imple- ments the following methods for temporal disaggregation: Boot, Feibes and Lis- man (1967) <DOI:10.2307/2985238>, Chow and Lin (1971) <DOI:10.2307/1928739>, Fernan- dez (1981) <DOI:10.2307/1924371> and Litterman (1983) <DOI:10.2307/1391858>. Depends R (>= 3.1) License GPL (>= 2) RoxygenNote 5.0.1 Suggests R.rsp VignetteBuilder R.rsp NeedsCompilation no Repository CRAN Date/Publication 2016-11-29 08:10:07 R topics documented: tsdisagg... 2 tsdisagg2 Time Series Disaggregation. Description The "tsdisagg2" function performs temporal disaggregation or interpolation of low frequency to high frequency time series. Usage tsdisagg2(y, x, c = 0, method = "cl1", s = 4, type = "sum", ML = 0, rho, neg = 0, da, dz, plots = 0) Arguments y A data.frame, matrix, list or vector with low frequency data. x A data.frame, matrix, list or vector with high frequency data. c Constant; If c=1, the model will be estimated with a constant; If c=0, the oppo- site case. (Default: c=0) method Set disaggregation method; Available methods are Boot, Feibes and Lisman (method="bfl1" or method="bfl2"), Chow and Lin (method="cl1" or method="cl2"), Fernandez (method="f") and Litterman (method="l"). Default: method="cl1" s Frequency of observations; Available frequencies are 3, 4 or 12. For example, if s=4, we have quaterly observations. (Default: s=4) type Type of restriction; Could be "last", "first", "sum" or "average". (Default: type="sum") ML Maximum Likelihood (ML=1) or Generalised Least Squares (ML=0) "rho" es- timation. (Default: ML=0) rho Sets a value for "rho" (Default: rho=0) neg If neg=1, will be tested negative for "rho"; If neg=0, only positive values will be tested. (Default: neg=0) da First year considered on low frequency data. dz Last year considered on low frequency data. plots If plots=1, generates the plot of the estimated series and the plot withe Objective function values; If prints=0, the opposite case. (Default: plots=0) Details The function is used to disaggregate a low frequency to a higher frequency time series, while either the sum, the average, the first or the last value of the resulting high-frequency series is consistent with the low frequency series. Implements the following methods for temporal disaggregation: Boot, Feibes and Lisman (first and second differences), Chow and Lin (independent and AR(1) er- rors), Fernandez and Litterman. For Boot, Feibes and Lisman methods, the disaggregation can not be performed with help of indicator series. For the remaining methods, desaggregation can be per- formed with or without the help of one or more indicator series. If the high-frequency indicator(s) cover(s) a longer time span than the low-frequency series, an extrapolation is performed, using the same model as for interpolation. Value The function prints details of the disaggregation (smooth, loglik, ...), estimated parameters (sigma_ols, sigma_gls, model coefficients, ...) and the disaggregated series. The function also returns an invisi- ble list containing all the numeric results. 
Examples anual <- runif( 19, 300, 455 ) indicators <- data.frame( runif( 76, 500, 700 ), runif( 76, 800, 980 ) ) ### Constant ### tsdisagg2( y=anual, x=indicators, c=1, da=1995, dz=2013, plots=1 ) # Estimate model with constant ### Method selection ### tsdisagg2( y=anual, x=indicators, method="f", da=1995, dz=2013, plots=1 ) # Use option method ### "rho" value ### tsdisagg2( y=anual, x=indicators, method="cl2", da=1995, dz=2013, plots=1 ) # Search for positive optimal "rho" is enabled (if method="cl2" or method="l") tsdisagg2( y=anual, x=indicators, method="cl2", rho=0.35, da=1995, dz=2013, plots=1 ) # Set "rho" value manually (the grid search is not performed) ### Interpolation or distribution ### tsdisagg2( y=anual, x=indicators, da=1995, dz=2013, method="f", type="last" ) # Performs disaggregation by interpolation with type="last" or type="first" tsdisagg2( y=anual, x=indicators, da=1995, dz=2013, method="f", type="average" ) # Performs disaggregation by distribution with type="sum" or type="average" ### Use returned objects ### td <- tsdisagg2( y=anual, x=indicators, da=1995, dz=2013, method="f", type="average" ) names(td) td$BETA_ESTIMATION
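The Details section above notes that when the high-frequency indicators cover a longer time span than the low-frequency series, the extra periods are extrapolated with the same model used for interpolation. The examples above use exactly matching spans (19 years against 76 quarters), so the sketch below illustrates the extrapolation case with 80 quarters of indicators. It is an illustration only, not one of the package's own examples; the argument names and behaviour follow the documentation above.
## A minimal sketch (not from the package's examples): 80 quarters of indicators
## against 19 annual observations (76 quarters), so the extra quarters are
## extrapolated with the same model used for interpolation.
library(tsdisagg2)
anual <- runif(19, 300, 455)
indicators_ext <- data.frame(runif(80, 500, 700), runif(80, 800, 980))
td <- tsdisagg2(y = anual, x = indicators_ext, method = "f",
                da = 1995, dz = 2013, type = "sum", plots = 1)
names(td)   # locate the returned numeric results, e.g. td$BETA_ESTIMATION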
appnn
cran
R
Package ‘appnn’ October 12, 2022 Type Package Title Amyloid Propensity Prediction Neural Network Version 1.0-0 Date 2015-07-11 Encoding UTF-8 Author <NAME>, <NAME>, <NAME>, <NAME> Maintainer <NAME> <<EMAIL>> Description Amyloid propensity prediction neural network (APPNN) is an amyloidogenicity propensity predictor based on a machine learning approach through recursive feature selection and feed-forward neural networks, taking advantage of newly published sequences with experimental, in vitro, evidence of amyloid formation. License GPL-3 LazyData TRUE NeedsCompilation no Repository CRAN Date/Publication 2015-07-12 18:27:00 R topics documented: appnn-packag... 2 appn... 3 plo... 4 prin... 4 appnn-package Amyloid propensity prediction neural network (APPNN) Description Amyloid propensity prediction neural network (APPNN) is an amyloidogenicity propensity predictor based on a machine learning approach through recursive feature selection and feed-forward neural networks, taking advantage of newly published sequences with experimental, in vitro, evidence of amyloid formation. This approach relies on the assumptions that: i) small peptide stretches within an amyloidogenic protein can act as amyloid forming facilitators that will eventually direct the refolding of the protein along a path involving the formation of an energetically favourable amyloid conformation; ii) the minimum length of these facilitator sequences or hot spots comprises six amino acids; iii) the amyloidogenicity propensity value per amino acid corresponds to the highest value obtained from all six amino acid windows that contain that amino acid; and iv) a peptide or protein is considered amyloidogenic if at least one stretch or hot spot is found within the sequence. Details Package: appnn Type: Package Version: 1.0 Date: 2015-04-13 License: GPL-3 The amyloidogenic propensity prediction neural network is composed of three functions: the function appnn, which performs the propensity prediction calculations; the function print, which prints the prediction results to the console; and the function plot, which generates plots of the prediction results. Author(s) <NAME>, <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References Manuscript under review. Examples sequences <- c('STVIIE','KKSSTT','KYSTVI') predictions <- appnn(sequences) print(predictions) plot(predictions,c(1,2,3)) appnn Prediction of the amyloidogenicity propensity for polypeptide sequences. Description This function predicts the amyloidogenicity propensity of polypeptide sequences through the amyloid propensity prediction neural network (APPNN). Usage ## Default S3 method: appnn(sequences) Arguments sequences vector of sequences to submit to the amyloidogenicity propensity prediction neural network Value A list containing the amyloidogenicity propensity predictions for the polypeptides queried. overall The overall amyloidogenicity propensity prediction value for the sequence aminoacids The amyloidogenicity propensity prediction value per amino acid hotspots A list of the amyloidogenic hotspots predicted in the sequence, delimited by the first and last amino acid Author(s) <NAME>, <NAME>, <NAME>, <NAME> Examples sequences <- c('STVIIE','KKSSTT','KYSTVI') predictions <- appnn(sequences) plot Generate plots of the amyloidogenicity propensity predicted values per amino acid residue. Description This function generates plots of the amyloidogenicity propensity predicted values per amino acid for the given sequences.
Usage ## S3 method for class 'appnn' plot(x, indices, ...) Arguments x amyloidogenicity propensity prediction results. indices a vector containing the indices of the sequences to plot. ... not used. Author(s) <NAME>, <NAME>, <NAME>, <NAME> Examples sequences <- c('STVIIE','KKSSTT','KYSTVI') predictions <- appnn(sequences) plot(predictions,c(1,2,3)) print Print the amyloidogenicity propensity predicted values to the console. Description This function prints to the console the amyloidogenicity propensity predicted values for the given polypeptide sequences. Usage ## S3 method for class 'appnn' print(x, ...) Arguments x amyloidogenicity propensity prediction results. ... not used. Author(s) <NAME>, <NAME>, <NAME>, <NAME> Examples sequences <- c('STVIIE','KKSSTT','KYSTVI') predictions <- appnn(sequences) print(predictions)
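The Value section of appnn() above documents three components per queried polypeptide: overall, aminoacids and hotspots. As a hedged illustration only, one plausible way to inspect them is sketched below; the per-sequence indexing (predictions[[1]]) and the $ accessors are assumptions based on that description, not taken from the package source, so str() is used first to confirm the actual layout.
## A minimal sketch (assumptions noted in the comments), not one of the
## package's own examples.
library(appnn)
sequences <- c('STVIIE', 'KKSSTT', 'KYSTVI')
predictions <- appnn(sequences)
str(predictions, max.level = 2)   # documented fields: overall, aminoacids, hotspots
predictions[[1]]$overall          # assumed accessor: overall propensity for 'STVIIE'
predictions[[1]]$hotspots         # assumed accessor: predicted hot spot boundaries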
ng-clp
rust
Rust
Enum ng_clp::ParseError === ``` pub enum ParseError { InternalArgumentCanNotHaveArgument { arg: String, }, InternalSeparatorCanNotHaveArgument, InternalInvalidEatCount { eat: usize, }, InternalIndexOutOfRange { index: usize, }, FlagWithArgument { name: String, }, OptionWithoutArgument { name: String, }, InvalidArgument { value: String, }, UnexpectedSeparator { value: String, }, UnknownFlagOrOption { name: String, }, InvalidString { s: String, }, } ``` Variants --- ### `InternalArgumentCanNotHaveArgument` #### Fields `arg: String` ### `InternalSeparatorCanNotHaveArgument` ### `InternalInvalidEatCount` #### Fields `eat: usize` ### `InternalIndexOutOfRange` #### Fields `index: usize` ### `FlagWithArgument` #### Fields `name: String` ### `OptionWithoutArgument` #### Fields `name: String` ### `InvalidArgument` #### Fields `value: String` ### `UnexpectedSeparator` #### Fields `value: String` ### `UnknownFlagOrOption` #### Fields `name: String` ### `InvalidString` #### Fields `s: String` Trait Implementations --- source### impl Debug for ParseError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Display for ParseError source#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Error for ParseError 1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)> The lower-level source of this error, if any. Read more source#### fn backtrace(&self) -> Option<&Backtrace> 🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more 1.0.0 · source#### fn description(&self) -> &str 👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more 1.0.0 · source#### fn cause(&self) -> Option<&dyn Error> 👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting source### impl PartialEq<ParseError> for ParseError source#### fn eq(&self, other: &ParseError) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &ParseError) -> bool This method tests for `!=`. source### impl StructuralPartialEq for ParseError Auto Trait Implementations --- ### impl RefUnwindSafe for ParseError ### impl Send for ParseError ### impl Sync for ParseError ### impl Unpin for ParseError ### impl UnwindSafe for ParseError Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> ToString for T where    T: Display + ?Sized, source#### default fn to_string(&self) -> String Converts the given value to a `String`. 
Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. Function ng_clp::is_argument === ``` pub fn is_argument(a: &str) -> bool ``` Function ng_clp::is_flag_or_option === ``` pub fn is_flag_or_option(a: &str) -> bool ``` Function ng_clp::is_invalid === ``` pub fn is_invalid(a: &str) -> bool ``` Function ng_clp::is_separator === ``` pub fn is_separator(a: &str) -> bool ``` Function ng_clp::next_index === ``` pub fn next_index(     argv: &[&str],     index: usize,     eat: usize ) -> Result<usize, ParseError> ``` Function ng_clp::parse === ``` pub fn parse<'s, 'a>(     argv: &'a [&'s str],     index: usize ) -> Result<(&'s str, Option<&'s str>), ParseError> ``` Function ng_clp::unwrap_argument === ``` pub fn unwrap_argument<'s>(     parse_result: (&'s str, Option<&'s str>) ) -> Result<&'s str, ParseError> ```
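Only signatures are shown above, so the following sketch is a guess at how the pieces compose: parse() reads the token at index, unwrap_argument() turns a missing option argument into a ParseError, and next_index() advances by the number of tokens consumed (eat). The exact semantics (whether parse() also yields plain arguments, whether an option's value arrives as the second tuple element, and how eat is counted) are assumptions, not taken from the crate's documentation or examples.
```
// A minimal sketch built only from the signatures above; see the caveats in
// the preceding paragraph. Nothing here is taken from the crate's own examples.
use ng_clp::{next_index, parse, unwrap_argument};

fn main() {
    let owned: Vec<String> = std::env::args().skip(1).collect();
    let argv: Vec<&str> = owned.iter().map(|s| s.as_str()).collect();

    let mut verbose = false;
    let mut output: Option<String> = None;
    let mut index = 0usize;

    while index < argv.len() {
        // Assumption: parse() yields (name, possible-argument) for the token at `index`.
        let (name, value) = match parse(&argv, index) {
            Ok(pr) => pr,
            Err(e) => {
                eprintln!("error: {}", e); // ParseError implements Display
                return;
            }
        };
        // Assumption: `eat` is the number of tokens consumed at this position.
        let mut eat = 1;
        match name {
            "-v" | "--verbose" => verbose = true,
            "-o" | "--output" => {
                // unwrap_argument() is assumed to reject a missing value.
                match unwrap_argument((name, value)) {
                    Ok(v) => output = Some(v.to_string()),
                    Err(e) => {
                        eprintln!("error: {}", e);
                        return;
                    }
                }
                eat = 2; // assumption: the value was a separate token
            }
            other => eprintln!("ignoring unrecognized token: {}", other),
        }
        index = match next_index(&argv, index, eat) {
            Ok(i) => i,
            Err(e) => {
                eprintln!("error: {}", e);
                return;
            }
        };
    }
    println!("verbose = {}, output = {:?}", verbose, output);
}
```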
packagist_beapi_missed_schedule.jsonl
personal_doc
Unknown
* E-commerce sites * Retail Creators of digital happiness A unique agency in France, we mobilize the full power of the WordPress ecosystem to create WordPress sites and innovative, high-impact digital experiences: website creation, e-commerce sites, site factories & intranets. WordPress experts employed in-house at the agency Be API is a WordPress VIP partner Be API, a WordPress agency, is a WordPress VIP partner. We are the only partner agency in France offering large accounts the WordPress VIP offer designed specifically for them. Be API, a web agency specialized in WordPress, has been an expert since 2009 in creating WordPress sites and putting digital strategies in place for high-impact digital projects. Day to day, a typical web project at the agency starts from a specification document and delivers a site built on a WordPress theme, with content management simplified thanks to Gutenberg, a high-quality user experience, optimization for search engine visibility, and then ongoing maintenance of the site, notably through proactive monitoring of regular WordPress and plugin updates. Be API not only pulled off a real technical feat to meet demanding and unusual requirements, but also ensured the success of the redesign through their attentive support and their sharp expertise in advertising optimization and user experience throughout the project, and still now for future evolutions! Leading WordPress agency in France We help our clients define their goals, bring their ideas to life and turn them into lasting results. To do so, we mobilize all our expertise and the power of the WordPress ecosystem to create happy digital experiences. Architecture, interoperability, DevOps & Cloud ## Our offer mobilizes the full power of the WordPress ecosystem ## Be API's commitments ### Zero outsourcing Our development work is 100% done in-house to guarantee impeccable quality. We respect the core of the CMS to allow easy maintenance over time. We help you see more clearly through the vast WP ecosystem, and we give our work back to the open-source community. ## WordPress, the most stable and agile CMS on the market If we chose WordPress in 2009, it was first of all by intuition. An intuition backed by solid arguments, and one that years of success have proven right. 40% of the web worldwide 1/3 of the CAC 40 A free & open source technology A CMS in permanent innovation An interoperable tool ## The WordPress community makes us happy, and we give back. Conferences, training, building plugins and extensions... every Be API expert is a WordPress specialist in their field and contributes actively to the community. We co-organize, sponsor and take part in the biggest gatherings across Europe, and every year you can hear our experts at the French WordCamps and WPTech. Are you a brand, a company or an institution with strong digital ambitions? We support organizations that share your challenges in making their digital projects a success: websites, e-commerce, audience and lead generation, site factories & intranets or business applications, technical optimization, maintenance... ## Your goals become ours... ...when you entrust us with a strategic project. 
That is why we define and adjust them together. ## WordPress leader in France... ...we are a full-service digital agency, putting all our expertise in strategy, design and development at your service. ## Followers of Design Thinking... ...we think, create, develop and optimize WordPress sites around your users and your goals. ## Putting the contributor at the center... ...of the tool is WordPress's first quality, with an intuitive, polished interface. ## The resource-rich ecosystem... ...of WordPress and its community makes it possible to meet most needs without custom development. projects launched and developed with the WordPress CMS in 12 years To build the right product, we always start a project by learning. From our clients, but above all from our clients' customers. Head of design – Be api ## We deliver the tool or interface that best meets your users' needs By observing and analyzing your users, we do our best to understand them and identify their frustrations and needs. Following detailed user journeys, we prototype wireframes in order to run user tests. Our art directors draw on this thinking to innovate and create the visual universe and modern interface best suited to your audience and your brand. ## We have extensive experience building site factories for large groups We have developed many extensions in this area, notably shared media libraries, advanced domain mapping and centralized content publishing. The multisite feature is native to WordPress. ## Our e-commerce site design is driven by a marketing vision and user journeys optimized for conversion We have recommended and implemented the WooCommerce extension since its beginnings. We support our clients on functional implementation, but also on interoperability with existing business tools, notably to handle stock management, accounting, and so on. ## We build intranet projects from A to Z, always centered on the company's needs We turn WordPress into a genuine enterprise social network (RSE) with user profiles, groups, discussion forums, and more. # WordPress development Do you need to build a feature, an extension or a platform on WordPress? For more than 10 years, our teams have put their sharp development expertise at the service of our clients' projects. An expertise recognized by Automattic, the company behind WordPress: Be API is the first and only WordPress VIP partner agency in France. WordPress platforms developed downloads of our extensions ## The pillars of our WordPress development ### Needs analysis and benchmarking of plugins from the WordPress ecosystem WordPress's strength is its ecosystem. It is also its greatest danger, because a number of resources in this ecosystem are not of good quality. The agency therefore provides its expertise to analyze and select the right resource to best meet your need. The agency's philosophy is above all to capitalize on the thousands of plugins available in the WordPress ecosystem. 
With 10 years of experience, the agency maintains a repository of roughly 200 extensions that have already been selected, tested and proven. ### Maintainability and security Our developers design the architecture and code of your WordPress platform so that they are as durable and secure as possible. The non-negotiable condition? That your WordPress ecosystem can absorb updates to the CMS and its related extensions throughout the life of the project. The goal: to keep your site up to date, in the best conditions of stability and security. ### Custom development It frequently happens that the ecosystem's resources do not meet our clients' requirements; we then carry out custom development. The agency's WordPress expertise and its expertise in programming in general allow us to create tailor-made solutions from A to Z, based on an analysis of your needs and those of future contributors. ### Scalability and industrialization Your platform and your extensions are developed so that they can handle increased load should your ambitions grow. Performance is an integral part of the design of our developments. We pay particular attention to this when scalability is central to the project, such as when setting up a site factory or a media site. Web performance receives special attention from our teams. Page load times on a WordPress project obviously matter for organic search, but they also shape the visitor's browsing experience. We put the performance strategy in place according to the project: PWA, caching, Varnish server, image CDN, multi-zone cloud, etc. Our creativity in design pushes us to keep innovating in terms of integration. of our talents are technical More than half of Be API's talents are WordPress developers at the top of their expertise. Passionate, many are also speakers, lecturers, trainers and WordPress experts. Most of them have also created open source extensions shared with the community. ## Happy with plugins We develop plugins daily. <NAME>, the agency's technical director, developed the open source extension Simple Image Sizes, which is used by more than 100,000 WordPress sites and has been downloaded nearly 1 million times! ## Hosting: we support you We do not host WordPress sites, but we advise you and help you choose a WordPress host or managed services provider suited to your needs. Our technical expertise on server issues (Linux, caching, HTTP, PHP, MySQL) and our project experience allow us to make proposals and talk with these different players (host/IT department). agile Do you have a digital project to deploy, but uncertainties or a limited budget? We can help you by developing an MVP (minimum viable product) for you in agile mode. The benefit? You quickly get a product you can test, then roll out progressively, while benefiting from the best of Be API's WordPress expertise. 
## From MVP to industrialization, it is just one step with Be API Because WordPress is an extremely flexible and scalable content management tool, it lets you launch a product with a simple structure and simple features while keeping the full capacity to evolve quickly afterwards towards a more ambitious digital ecosystem and more sophisticated technical specifics, if your MVP proves itself. ## Building an MVP à la carte with WordPress The WordPress community provides many extensions to meet your needs; ad hoc development is possible if the solution does not exist. With more than 50,000 existing resources, it is hard not to find the desired extension. ## The payoff: time savings and cost savings At Be API, we capitalize as much as possible on the resources of the WordPress ecosystem to bring the best technical solutions to our clients' needs. The flexibility of the tool and the richness of the WordPress ecosystem make it possible to meet an MVP's demands for responsiveness, experimentation and agility. Do you want to build a project with the WordPress CMS? A very good decision! The Be API WordPress agency offers you something better: building a durable* project with the WordPress CMS. Traditionally, the lifespan of a web project is between 3 and 5 years. Our purpose: to bring you digital happiness throughout the operating phase of your WordPress site. of the projects built by the agency are followed under maintenance The remaining 1% corresponds to projects where WordPress application maintenance (TMA) is handled by the client's in-house teams through a reversibility phase. 4.2 years average lifespan of a project at Be API date the unit dedicated to WordPress TMA was created active projects under TMA Depending on your expectations, we can support you to continuously test and improve the performance of your web platform, with an ROI-driven approach. For example, with a regular SEO review or monthly A/B testing with optimization recommendations for your site <NAME> OF PROJECTS AT BE API ## Why choose Be API for the maintenance of your site? ### A maintenance offer calibrated for your needs We design WordPress maintenance offers for our clients adapted to your technical requirements (technical architecture hosted with the client or externally) and to the human resources involved (account manager). In maintenance too, we only do tailor-made. ### Flexibility and scaling up We know that your WordPress maintenance requests and needs can change from one month to the next, so we have set up a flexible organization able to absorb the necessary increases in workload when required (during business hours as well as on call). Indeed, in addition to our dedicated maintenance team, all our developers also devote working time to these operations. ### Keeping your site at an optimal level of performance and security We carry out preventive maintenance on WordPress projects through regular updates of WordPress (when a new version is released, for example) and of its extensions, in order to avoid any interruption of service. The agency specifically monitors these topics in order to apply patches as soon as they are published. Day to day, the team also handles reported issues according to the severity of the problem. 
In the event of an anomaly in production, an incident report is systematically drawn up to track the various actions taken and to propose long-term fixes. ### Evolutionary maintenance of the platforms We may also be asked to evolve your platform and make significant changes to it during the RUN phase of your project. The agency then offers a choice between a TMA or project mode of operation, with the usual steps of a web project. The agency carries out continuous monitoring and supports its clients to ensure their platform's compliance (GDPR), while taking into account how the web evolves (search rankings, etc.). ## Was your web project built elsewhere? We maintain WordPress websites that we have developed for our clients, but also websites built by third parties. We then speak of third-party application maintenance (TMA). In this case, your project is analyzed by a WordPress expert and undergoes a pre-audit to assess the possibilities of taking it over, as well as the corrective actions to carry out. This pre-audit is not meant to assess the real, objective quality of your project; it mainly aims to assess whether your project is close to the agency's development practices. The use of page builders such as Elementor, Divi or WPBakery, for example, is a blocker to taking over a project in TMA, because these are not technologies used by the agency, and we do not wish to work on them. Once the pre-audit has been passed, we onboard your project onto our environments, an essential step for the success of the maintenance phase. Do you need the analysis of the best French WordPress experts? Through our expertise and experience with this CMS, we can diagnose possible weaknesses in your platform and advise you on the changes to make, as well as develop in WordPress the interfaces or features we recommend to you. ## Partners and a source of advice As true partners, we will advise you in all the strategic assignments you submit to us. This means that we will analyze your data in depth, particularly the data about your user journeys, question the objectives you have set yourself and question you in detail about your business challenges, in order to offer you the most complete analysis and the most relevant and impactful recommendations. ## Carrying out the audit you need We perform different types of WordPress audits covering quality, the state of the art, performance, security, as well as any other WordPress-related topic. Thanks to our DevOps approach, we also act as a mediator between a WordPress agency and a managed services provider/host to resolve deadlock situations, such as a painful scale-up. These audits result in a written report and can lead to corrective actions, and why not a longer-term partnership. ## Upskilling your teams We systematically offer WordPress training for contributors and administrators as part of our WordPress site creation or redesign projects. It covers both the WordPress CMS and the features specific to the project. 
of training per year at our clients We also offer technical WordPress training and skills transfers if you have technical teams likely to contribute to a project's development. Several talents from our WordPress agency teach courses in various web schools in Ile-de-France on technical subjects, but also on design. Exploit the potential of a site factory: a setup that industrializes the creation, deployment and maintenance of a network of websites. ## Accelerate your digital projects It is common for a company to find itself managing multiple digital platforms. This situation can lead to considerable costs, security issues and governance challenges. A site factory setup addresses these different challenges. ## Capitalize on the power of WordPress and Gutenberg, tools perfectly calibrated for a site factory Centralize hosting and maintenance Simplify contribution Streamline costs and lead times Harmonize the brand image ## Projects suited to building a site factory International coverage You are present in several countries in Europe and/or internationally. Each country can have its own website with a multilingual approach. Distributed organization You are organized with a territorial network (country, region, department, local). The richness of site content can vary depending on the local presence. Subsidiaries You are organized as a group of companies or subsidiaries. Subsidiaries can adopt different graphic variations and features. ## Do you have a site factory project? Benefit from our expertise to design the solution suited to your company's specific needs. ## The payoff: time savings and cost savings ### 1. Achieve economies of scale with a site factory By taking advantage of the Multisite feature built into WordPress, you can manage several websites from a single technical infrastructure. This automatically reduces production and maintenance costs. If your project requires keeping several technical instances, a Web Factory setup will be the best fit. In that case, you still capitalize on a single WordPress hosting and maintenance offer. Benefit from theme and plugin updates applied automatically to all the sites included in the site factory. ### 2. Harmonize content across all your sites Consistent messaging and a harmonized brand image across all digital channels is a recurring concern when managing a network of websites. As part of a WordPress site factory project, we deploy a single WordPress theme that guarantees visual consistency, while giving each site manager autonomy over content management. By using the Gutenberg editor, you can also ensure graphic cohesion within pages, reinforcing unity across the various sites. ### 3. Deploy new sites quickly (time to market) We develop technical solutions capable of duplicating new versions of the reference site much faster, thanks to the Multisite feature of WordPress. 
These new versions are templated, ready to be filled in by the site managers. ### 4. Multisite feature and content sharing The architecture of WordPress's Multisite feature makes it easy to meet the need for sharing content from one site to another, for example a shared media library, the content of legal pages, or a news portal. Content sharing is made possible with a measured development effort. To learn more, discover our open-source plugins "Content Sync Fusion" and "Multisite Shared Block" ## Multisite, a term specific to WordPress The WordPress CMS has a Multisite installation mode. This feature was integrated more than 12 years ago to make maintaining a network of sites easier. WordPress thus includes in its very structure the ideal foundations for building a high-performing site factory, designed to help companies and contributors manage their network of websites day to day. This feature has reached a significant level of maturity and stability. Combined with the Gutenberg editor, WordPress Multisite is the ideal tool to design and support an efficient site factory. And with more than 50,000 existing resources, it is hard not to find the desired extension! ## A WordPress Multisite instance means: ### A single code base for all sites ### One database for all sites * Mu-plugins for all sites & impossible to deactivate * Plugins activated on all sites * Plugins that can be activated site by site * Activatable site by site * Limited or not to a single site ### A single user base * With roles customizable site by site ## WordPress Multisite and custom domain names "Domain mapping", "Domain routing", "Routing", "Alias"... Whatever you call it, you can define a custom domain name for each of the sites on a WordPress Multisite platform. Organizing your project through a WordPress site factory will therefore let you meet all your teams' requests, in particular the requirements of the teams in charge of organic search (SEO). ## At Be API, we have supported many clients in carrying out site factory projects. ## Questions about a site factory setup? Our teams have the experience needed to support you on different types of projects: * Setting up a complete site factory ecosystem * Redesigning your site factory * Developing a website within your site factory Do you have a redesign project with the WordPress CMS? Be API offers you full support on the migration strategy to put in place for your project. ## Migration, a central, multidisciplinary theme during your project Depending on the volume of content to carry over and your enrichment and requalification needs, content migration dictates many decisions as early as the design phase (information architecture, wireframes, etc.). That is why, at Be API, this issue is addressed from the very first workshops. Later in the project, the developments associated with content migration are part of the first development sprints, in order to give the project team time to test this key element in depth and, where appropriate, allow the client to enrich the content. 
Finally, migrating sites and content implies a redirection strategy for SEO (and updating obsolete internal links) in order to preserve your website's organic rankings. ## Our experience Be API has carried out many redesigns of projects on the WordPress CMS, and most of the time our clients expect their content to be carried over. We have migrated sites built with the following CMSs to WordPress. This list is not exhaustive. We regularly discover new CMSs (open source as well as proprietary) as projects come in, and most of the time we apply the same working methodology, starting with a technical exchange with the CMS vendor when there is no documentation to study. Our teams are also used to working with databases such as MySQL, Microsoft SQL Server, Microsoft Access, Oracle and FileMaker, as well as flat files such as Excel, CSV, JSON or XML. ## Migrating from a WordPress site to WordPress Contrary to what one might think, redesigning a WordPress website can also include a content migration workstream. WordPress is a particularly flexible CMS, and there are as many ways of working with this content management system as there are professionals using it. At Be API, we have chosen the Gutenberg editor for all our projects. This content editor, introduced in WordPress 5.0, revolutionized contribution possibilities. As a result, when you want to redesign a WordPress project built before WordPress 5.0, chances are very high that the content editor used is different (WPBakery, Advanced Custom Fields (ACF), Beaver Builder, Elementor or Divi). You then need to plan content migration work to find equivalences between the old content organization and the new one offered by Gutenberg and WordPress. ## Our methodology Study of the existing site Data mapping Development & testing One-off or incremental migration A successful migration necessarily involves teamwork between the client and the agency. Our clients have the business knowledge of their existing site, the agency has the technical knowledge; it is the combination of these two areas of expertise that makes a high-quality migration possible. To carry out the study and the test phase properly, we ask our clients to draw up a qualitative list of significant content to test and validate. This methodology has been proven on a great many projects, whatever the volume or complexity of the content. ## Content enrichment during a site migration to WordPress A site redesign frequently leads to significant changes in how content is organized (properties, taxonomies & classification). These changes usually originate in the co-design workshops run during the project. To make the associated enrichment work easier, we usually offer an intermediate step during the migration that generates an Excel file. This spreadsheet file usually makes content enrichment easier, in a format suited to mass editing. Once the file is completed, it feeds back into the content migration process and makes it possible, for example, to re-categorize all the content of a WordPress site. 
From time to time, our clients want to benefit from this possibility of mass editing and enrichment via exporting and importing CSV or Excel files. For that, we generally recommend the "WP All Export" and "WP All Import" extensions to carry out these routines, and within our projects we include standard content export and import templates based on these extensions. ## Drupal to WordPress migration Your current site runs on the open source CMS Drupal (version 6, 7, 8 or 9) and you want to migrate to WordPress? That is already a good decision 🙂 There are different ways to migrate your Drupal data to the WordPress CMS. First, it is possible to use the premium extension "FG Drupal to WordPress" published by the French expert <NAME>. It is a well-made tool, but one that departs from our methodology; we generally recommend it for low-cost migrations on specific projects. At the agency, we have opted for an alternative approach that strictly follows our methodology. To do so, when necessary we develop a Drupal module to expose the data needed for the migration via a JSON feed. This gives us the ability to run full or incremental migrations at any time from your old Drupal site currently in production. ## Spip to WordPress migration Spip was a very good CMS in the past, but it has now been overtaken by many players on the market, including WordPress. As with Drupal (see the previous section), there is the "FG Spip to WordPress" plugin published by the French expert <NAME>, and as with Drupal, we recommend an alternative approach based on exposing the data via dedicated feeds. Did you say "Page builder"? "Constructeur de pages"? "Content builder"? "Content editor"? The web market is constantly evolving. After HTML sites managed 100% by agencies, then CMSs that made content editable, the CMS market now gives pride of place to tools offering significant freedom of contribution. Be API is fully convinced by this evolution and has long offered its clients tools within WordPress to give them maximum autonomy. ## WordPress, a CMS with many editorial facets WordPress is an open source CMS with a thriving ecosystem, in particular for content editing tools. Indeed, there are a number of "page builders" and "content builders", the best known being Elementor, Divi, WPBakery and Beaver Builder. We can also mention Advanced Custom Fields (ACF) and, of course, the new WordPress editor, Gutenberg. The flexibility WordPress brings means there is not just one way of building sites with WordPress, and every web agency has its own skills. ## Why Gutenberg? At Be API, we have tested and worked with most of the builders mentioned above; we were able to evaluate them in real conditions and, as always, over the long term through our WordPress maintenance offer. And none of them brought us as much certainty, reliability and scalability as Gutenberg, WordPress's native content builder. 
Since December 2018, the month WordPress 5.0 and this feature were officially released, Gutenberg has revolutionized the layouts previously offered by the old WordPress editor, or even ACF. For Be API, Gutenberg represents: ### The right balance between freedom and simplicity Our projects are aimed at contributors, not webmasters. Gutenberg's interface is well balanced in this respect and provides a more pleasant user experience for contribution. ### The right level of performance The tool makes it possible to get good scores on PageSpeed and Core Web Vitals indicators. ### Accessibility made possible At Be API, we consider that every project must be accessible to as many people as possible, whether RGAA compliance is required or not 🙂 ### Calmer maintenance Gutenberg's technological approach makes it possible to look at the long term with serenity. To discover the features in detail, here is a demo site, as well as the agency's presentation videos. Today, a web project with WordPress + Gutenberg is no longer "just" about creating templates as in the past. The philosophy of a tool like Gutenberg tends towards atomic design, with blocks that will be used in different contexts and for different content; that is why we work with a design system logic. For each project, we set up a design system of Gutenberg blocks that can then be used to compose patterns and pages. These blocks bring great flexibility to page composition, making it easy to add (by drag and drop), organize and format rich content. So yes, building a toolbox is great, but how to use it with your content, that is the whole question. Conversely, without thinking about the actual content, how do you build the right toolbox? These are the questions we try to answer on each of our Gutenberg projects, and to do so our design teams carry out UX writing work, to create the corresponding toolbox from your final content and to give you recommendations on how to organize and lay out your content. This UX writing work is done on your key content, but you can "order" more "UX writing" pages throughout the operating phase of your project. ## A WordPress agency organized around Gutenberg The agency's entire methodology is organized around this editor, and every team is trained on it in order to deliver projects as close as possible to the state of the art and as respectful as possible of how Gutenberg works. Our objective is to offer you a project designed for Gutenberg from start to finish, and in particular the design phase: * Wireframes incorporating all the concepts (blocks, patterns, templates) * Graphic mockups compatible with the editor's limitations For more information, you can refer to one of our articles on the subject: How to design efficiently with Gutenberg ## A typical project A typical Gutenberg project is therefore based on a design system of blocks, and these blocks can be of different types: ### Styled native blocks Gutenberg includes native blocks (70 existing blocks, 36 of which are for embedding third-party services). 
In our projects we use these native blocks as much as possible, then style them with the project's graphic charter. ### Custom-built blocks In addition to these native blocks, depending on the needs, we also develop custom blocks either via Gutenberg's ReactJS framework or occasionally via the ACF plugin. ### Patterns to make contribution easier (blank page syndrome) A pattern is a composition or set of blocks, i.e. a group of blocks combined together to create a reusable layout (a template). At Be API, we create custom patterns to translate the graphic mockups as closely as possible to the Gutenberg philosophy. These reusable patterns are readily available to our clients. ### Third-party blocks (pack or plugin) Once again, the richness of WordPress lies in its abundant ecosystem of plugins and blocks. We mainly use standalone blocks and avoid packs in order to ease maintenance and guarantee a high level of performance. ## Why we do not recommend Elementor over Gutenberg Elementor is a very powerful tool. Too powerful a tool, in fact aimed at webmasters with HTML/CSS knowledge. It is a tool that will give more satisfaction to the professional building your WordPress site, and little to the end user, who will have only superficial autonomy, ultimately lacking the know-how to take advantage of Elementor. In a site factory context, or even with multiple environments (production, preproduction, etc.), we consider that Elementor stores too many layout elements in the database. Consequently, in a site factory context where it is for example possible to duplicate a model site, this spreads WordPress maintenance across as many sites, which makes maintenance much more problematic. If we chose WordPress in 2009, it was first of all by intuition. An intuition backed by solid arguments, and one that years of success have proven right. Media site, site factory, e-commerce or intranet: WordPress is used everywhere. of websites worldwide 37% of the 10,000 most visited websites in the world (BuiltWith 2020) We believe in open source to bring companies flexibility, creativity and performance in their digital tools. <NAME> of BeAPI ## An open source, innovative technology thanks to a worldwide community Companies, organizations and agencies all over the world rely on the power of the WordPress ecosystem to design and successfully run high-performing websites. WordPress is a free, open source CMS (content management tool). It is updated and improved daily by programming, design and documentation experts all over the planet. WordPress has the largest and most active community in the world among all content management systems. ## A single platform, but a world of possibilities and features WordPress is built on an architecture that can be 100% adapted to each project, whether backend or frontend. The ecosystem can meet the needs of many types of projects: site factory, e-commerce, community portal, content site, intranet... WordPress is international and multilingual: the core system has been translated into more than 70 languages. 
## A stable, durable and reliable solution In every development, the continued smooth running of existing applications is ensured. A layout (theme) dating from 2005 will still work perfectly today. A large number of users continually put the CMS to the test, forcing contributors to keep maintaining the code at the heart of the system. Continuous development and improvement are therefore guaranteed. We always check whether an extension exists in the WordPress community that meets the need. We only develop ad hoc if no solid solution exists. <NAME>, CTO of BeAPI ## A highly adaptable tool, able to integrate the latest technologies WordPress is an ecosystem that is extremely interoperable with other tools (analytics, CRM, ERP, e-marketing). Its very active developer community ensures that the system can integrate every new development technology as it appears, such as AMP, GraphQL or React. ## A fully secured tool As an open source CMS, WordPress's source code is open, which means hundreds of volunteer developers are looking for weaknesses and working to fix them. A security team handles vulnerability reports and publishes security patches. Security is an absolute priority for the developers contributing to the WordPress core. ## Google loves WordPress WordPress has all the tools needed to optimize the SEO of any site or interface. With WordPress, your organic rankings will never be a source of concern, but rather a strategic asset for your project. ## The simplest CMS for contributors The WordPress back office is known to be the simplest of all CMSs for contributors, including newcomers. The user interface is very friendly and constantly improving. At Be API, we adopt a partnership-based, co-creative, iterative and rigorous approach in order to mobilize the full potential of the WordPress ecosystem for our clients. Concretely, here is what you can expect if we have the pleasure of working together on one of your strategic digital projects. ## Co-create We run co-creation workshops with our clients using methodologies inspired by design thinking to bring out the best ideas. ## Hand in hand We consider our clients as partners; we move forward side by side with them towards precise objectives we set together. In short, team spirit is the rule and your project becomes ours. ## Stay focused on your users By working with us, the interest of your customers or end users becomes our obsession, so that we create useful, high-value projects. ## Let the data speak To get to know your customers better, we dissect your data. Much of it will give us the keys to your users' known and unknown needs, expectations and frustrations. ## Talk to the right expert at the right time Be API project managers listen to you daily. They are keen to introduce you to the right expert for your issue at the right point in the project. UX, UI, back end, front end, SEO... you will only meet WordPress enthusiasts! ## Test, learn and iterate We understand that it can sometimes be necessary to challenge certain assumptions during a project. 
Between kickoff and the go-live date, the situation can change: we are used to adapting. Our methodological base is flexible and inspired by agile methods. ## Challenging you We need to understand your challenges and your context (internal and external) to solve your problems as well as possible. We will therefore not hesitate to ask you questions when you make a request and to "dig" in order to bring you the strongest added value and the best advice. ## Prioritizing what is important And not what is urgent. To reach the objectives defined together, we help you day to day to keep emergencies from taking over the workstreams that matter at the scale of your WordPress project. ## Seeing clearly at every stage of the project You need to know where you stand and where you are going. We care deeply about the transparency of our communication, our methods and our management of your projects. From its creation in 2009, Be API has specialized in WordPress, takes part in its development and contributes to the open source community. ## WordPress out of passion <NAME>, who founded Be API 10 years ago now, is the "standard-bearer" of WordPress in France and a member of the 2019 board of the French association. He actively takes part in the expansion of the CMS. Co-founder of the WordPress-Francophone association and co-author of the reference book (by Pearson) on WordPress, Amaury has contributed to translating WordPress in the past. He has been a member of the organization of the WordPress France WordCamps from the start. millions of downloads of our extensions Finally, he has developed many extensions, some of which have regularly been in the top 10 downloads, such as the Simple Tags extension. ## We are active contributors to the community Every Be API expert, from developer to project manager, through SEO and design, is a WordPress specialist in their field. That is why we take an active part in the community, through conferences, training courses and the development of plugins and extensions. We publish our projects on WP.org and GitHub. ## For 10 years we have been giving talks at WordCamps Every year you can hear our speakers at WordCamps & WPtech in France. Find here the latest WordCamp talks by our speakers. ### Our talks ## Talk by <NAME> – WordCamp Paris 2016 ## Talk by <NAME> – WordCamp Paris 2015 ## Talk by <NAME> – WordCamp Paris 2016 ## Talk by <NAME> – WordCamp Paris 2015 ## Talk by <NAME> – WordCamp Marseille 2017 ## Our badges show our involvement in the ecosystem We actively contribute to the developments and improvements of the WordPress "core". For this reason, we hold 6 badges. 
Be API blog, article index by category. Events: Immersion at WordCamp Biarritz; WordCamp Europe 2023: an immersion in the international WordPress ecosystem; WordCamp Paris 2023: the return of an essential event; Be API at WordCamp Europe 2022 in Porto; WP BootCamp 2019: Be API's lead dev shares his experience; All of Be API at WordCamp Paris 2019! WordPress: Interactivity on WordPress: a new API for the block editor; WordPress site factory: the strategic ally of large organizations; WordPress Multisite: efficient content management and strengthened governance; The inspiring journey of <NAME>, a French WordPress pioneer; Optimizing WordPress design with Johannes; Getting started with Gutenberg block development in React: lessons learned; Eco-design, a practice that is also good for your business; HTTP/2 and SEO, what you need to know; Gutenberg, WordPress's user-friendly content builder. Design: Optimizing your Figma workflow: 10 designer tips; WordPress UX design: how to design efficiently with Gutenberg; Optimizing your conversions with a CRO strategy. Tech: Opquast, the icing on the cake; How CI/CD with Buddy changed our development methodology; Composer: Make Stable; Composer: Freeze Versions. Marketing: SEO and WordPress in 2021, advice and recommendations. GDPR: How to keep a register of personal data processing; How to configure Google Analytics to comply with the GDPR; Cookie banner and GDPR compliance: how to choose your plugin. Press release: Be API wins the tender to redesign the Secours populaire français website.
# WordPress site factory: the strategic ally of large organisations Large companies, especially those operating across several markets or regions, face the challenge of improving their efficiency while cutting the costs tied to their digital platforms. In this context, industrialising digital production becomes unavoidable. A site factory, deployed as a Web Factory or a WordPress Multisite, is an ideal answer to this problem. By putting a centralised platform in place, the setup makes it easier to deploy, manage and maintain several websites, and establishes itself as a key element of the digital strategy of large groups. ## I. Industrialising digital production A site factory is a robust setup, specifically calibrated for structures such as multinationals, public institutions or franchises. These organisations frequently have to manage a vast network of websites with often similar needs. Setting up a site factory brings many advantages, starting with guaranteed brand consistency: every site fits into a shared visual and functional identity, reinforcing the company's overall message. The organisation no longer has to design each new site from scratch, which yields a real reduction in production costs. Centralised maintenance and updates make interventions across the whole network easier. Speed of deployment is another major asset: new sites slot more easily into the existing setup. The site factory also optimises resources, as teams pool content and tools. And while standardisation is the watchword, flexibility is not left behind: each site can be adapted to the specifics of an audience or a market. This adaptability goes hand in hand with the easy integration of technological innovations, rolled out uniformly across every site. Each piece of feedback feeds a continuous-improvement loop, allowing the organisation to refine its strategy over the long term. Far from being a merely technical tool, the site factory thus becomes a genuine strategic lever. For large organisations, it is the promise of a stronger digital presence, greater responsiveness to market challenges and, ultimately, a real competitive advantage. ## II. WordPress: a CMS built to support a site factory In a previous article, we introduced WordPress's Multisite feature, which lets administrators manage several websites simultaneously from a single interface. The CMS therefore includes, in its very structure, the ideal foundations on which to build a solid site factory.
Combined with the Gutenberg editor, whose modular blocks and flexibility make it easier to design sites at scale while preserving visual and functional consistency, WordPress is overall the ideal choice to support any site-factory system. Powering 43% of the web, it is the undisputed quality standard in the CMS landscape. Its reliability and security are constantly scrutinised, improved and hardened by a vast community of professional developers. Finally, and this is no small detail, the CMS offers a plan dedicated to key accounts, which makes it the tool of choice for supporting a complex site factory. Designed to address the various challenges of large organisations, the WordPress VIP offering draws its strength from the breadth of its services: * Optimal handling of traffic peaks, reinforced security and 24/7 expert support. * Sophisticated workflows, with separate environments for development, testing and production. * A solid infrastructure, an exclusive network of hand-picked partners and pre-configured integrations with high-performing third-party services. * Proactive update management to benefit from the latest innovations and security standards, ensuring digital platforms that last over time. As the first and only WordPress VIP partner agency in France, Be API has proven expertise in managing large-scale projects. This synergy between our digital-agency know-how and the power of WordPress VIP lets us offer premium support for site factories. ## III. The keys to a successful rollout Planning to deploy a site factory? Here are a few essential points not to overlook: * Clear definition of needs: this requires a thorough understanding of the objectives, the different target audiences and the technical specifics involved. * Flexible design: the architecture of your site factory must be easy to adapt, so that features can be added, modified or removed without disrupting the whole system. * Uniformity vs. customisation: although the main goal is to industrialise website creation, each site must keep some room for specific customisations. * Performance optimisation: a robust infrastructure is essential, designed to handle the traffic generated by a multiplicity of sites while keeping load times fast. * Continuous integration: the site factory must be designed to integrate easily with other tools and systems, whether CRM solutions, marketing tools or other third-party technologies. * Reinforced security: with several sites running on a single infrastructure, security risks can be amplified. Strict security practices, regular audits and promptly applied updates are essential. * Training and support: the team in charge of the site factory must be trained not only on WordPress but also on the specifics of the site factory itself, with ongoing support to resolve issues and assist users.
In short, successfully setting up a WordPress site factory is not only a matter of the technology chosen, but also of a strategic approach and meticulous execution. The Be API team can support you in your site-factory redesign or creation project. We assist our clients at every stage of their thinking, from strategic consulting to development, through design and staff training, covering both WordPress and Gutenberg. To guarantee a functional, secure and durable site factory, we deploy WordPress maintenance plans tailored to each company's needs. The digital world is moving fast, and the need to optimise and rationalise digital platforms has never been more pressing for organisations. The WordPress site factory, combining modularity and security, stands out as a strategic tool for today's challenges. More than a mere technical platform, it is a vision that brings technology and marketing together to deliver an optimal online presence. With the help of expert partners, companies have first-rate support to deploy a robust and durable system. # WordPress Multisite: efficient content management and stronger governance In today's digital world, large organisations face a sizeable problem: managing their entire portfolio of websites, sometimes deployed across several countries, while keeping content consistent and optimising maintenance costs and deployment times. The task can prove complex, particularly when it comes to preserving solid governance and maintaining a coherent brand image. Setting up a WordPress Multisite then emerges as the right answer to this technological and strategic challenge. ## I. The challenges of managing a network of websites for large organisations For a company with a vast network of websites, managing each site individually can quickly become a major challenge. These platforms require regular updates, security patches and new content; coordinating these tasks efficiently can be complex and time-consuming. Moreover, the proliferation of admin interfaces and user accounts can create confusion and potential security issues. Harmonising the brand image is another issue to address in order to guarantee an optimal user experience. Key messages, media libraries, legal texts and news must be kept properly unified; without an efficient system to manage this content automatically, inconsistencies creep in. Finally, budget constraints and production lead times are important factors. Every website requires financial resources for its development, hosting, maintenance and updates. There is therefore a real incentive to pool costs, shorten lead times and optimise available resources. ## II. WordPress Multisite: its advantages for content management Multisite is a feature built directly into WordPress that offers a centralised approach to managing an entire network of sites. Instead of installing and administering each site separately, WordPress Multisite groups them within a single technical installation.
This centralisation considerably simplifies management, allowing all sites to be updated, maintained and administered from a single interface. The approach has notable advantages for content harmonisation: in particular, it lets you automate the process by defining global rules and templates. * One example: you want to share a media library across all your sites while making sure image credits stay up to date. A technical solution can be deployed to automatically remove images whose credits have expired, across all your sites. * Another example: you want to change your legal notices. At Be API, we develop dedicated plugins able to handle these automatic updates, saving time and resources while ensuring real consistency across your websites (a minimal sketch of this kind of automation follows at the end of this section). WordPress Multisite also brings speed and efficiency when deploying new sites. Rather than configuring a new installation, you create new sites within the existing network, which is far simpler. This pre-established approach considerably shortens deployment times. Managing updates and security patches is also simplified, since they can be applied to the whole network in a single operation. ## III. Setting up a WordPress Multisite for your company Before setting up a WordPress Multisite, make sure your infrastructure meets the necessary technical requirements. WordPress experts can support you through this decisive phase: they will help you configure hosting suited to your network of sites, with adequate resources in terms of storage and performance. As for best practices, careful planning of your network's structure is recommended. Define a clear strategy for main sites and sub-sites, taking your company's organisational logic into account. When setting up a WordPress Multisite, strategic decisions are needed to guarantee sound governance. Decide who will have access to network administration and define roles and permissions accordingly. You can appoint global administrators to oversee the entire network, plus dedicated administrators for each site. Regarding theme and plugin management, it is advisable to define a policy that ensures visual and functional consistency across sites. You can select recommended themes and plugins for shared use, while allowing site administrators to enable additional themes and plugins for their specific needs. Setting up a WordPress Multisite is thus a powerful solution for companies looking to optimise their governance and content strategy. This feature, built into the CMS, centralises management, automates content harmonisation and speeds up the deployment of new sites. In doing so, Multisite delivers significant gains in productivity, consistency and efficiency.
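To make the idea of network-wide content automation more concrete, here is a minimal, illustrative sketch (not Be API's actual plugin code): it assumes a hypothetical list of network sites, hypothetical page IDs and an application-password credential, and pushes the same legal-notice content to each site through the standard WordPress REST API.

```typescript
// Illustrative sketch only: hypothetical site URLs, page IDs and credentials.
// Pushes one shared legal-notice text to every site of a network via the core
// WordPress REST API (POST /wp-json/wp/v2/pages/<id> updates an existing page).
const sites: { url: string; legalPageId: number }[] = [
  { url: 'https://brand-fr.example.com', legalPageId: 12 },
  { url: 'https://brand-de.example.com', legalPageId: 8 },
];

// Application-password authentication (Basic auth), assumed to be enabled.
const auth = 'Basic ' + Buffer.from('network-admin:app-password').toString('base64');
const newLegalNotice = '<p>Updated legal notice…</p>';

async function updateLegalNotices(): Promise<void> {
  for (const site of sites) {
    const res = await fetch(`${site.url}/wp-json/wp/v2/pages/${site.legalPageId}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Authorization: auth },
      body: JSON.stringify({ content: newLegalNotice }),
    });
    console.log(site.url, res.ok ? 'updated' : `failed (HTTP ${res.status})`);
  }
}

updateLegalNotices().catch(console.error);
```

In a real network this logic would more likely live in a network-activated plugin, as described above, but the sketch shows why a single installation exposing one shared API makes this kind of harmonisation straightforward.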
By following these best practices and factoring in the strategic decisions above, your company can benefit from simplified management, a unified brand image and optimised costs and lead times. # Optimising your Figma workflow: 10 designer tips Figma has quickly become the go-to tool for our UI/UX work at the agency. Its features let us collaborate in real time on the same document, present designs to our clients effectively and share them instantly with the development team. In this article, Stéphane, UX designer at Be API, shares his expert tips for getting the most out of Figma. Whether you are a seasoned designer or an enthusiastic beginner, these tips will help you streamline your production process. ### 1. Create your own components efficiently Avoid repetitive tasks and boost your productivity by creating your own components with this key combination: ⌥ + ⌘ + K. For maximum efficiency, you can select several elements and create several components at once, or even create a component set from the dedicated menu. Using this feature will save you precious time. ### 2. Make the most of component properties To simplify the management of component instances and reduce the number of variants you need, take full advantage of Figma's properties. For example: * Use the Boolean property to hide certain elements. * Use Instance swap to replace nested component instances with others. * Use the Text property to easily edit a specific piece of text inside your components. By leveraging these properties, you reduce the complexity of your project and save time when customising your components. ### 3. Turn your component sets into auto layout It may sound simple, but it is a real pleasure to no longer have to adjust elements pixel by pixel. A great trick is to use auto layout for your component sets. The icing on the cake is the auto layout wrap option (⤶), which automatically moves elements to the next line when they exceed the available width. This approach gives you far more flexibility when laying out your designs, while avoiding tedious adjustments. ### 4. Use "Paste to replace" Say goodbye to repetitive repositioning and deleting of elements you want to replace by using "Paste to replace": simply press ⇧ + ⌘ + R. This feature lets you copy a new element and paste it directly onto an existing one, replacing it instantly while preserving its position and properties. ### 5. Show outlines for better visibility Stop losing hidden elements in Figma. Just press ⇧ + O to display the outlines of all elements, even (and especially!) the ones that are masked. This handy trick lets you clearly visualise the layout of your elements and work more efficiently.
### 6. Organise your styles for simpler management Make your styles easier to manage by using the "/" symbol in their names to group them semantically. For example, you can use names such as: * "border/text/background" for colour styles * "title/paragraph/label" for text styles * "state/elevation" for effect styles You can also select several styles and group them with the ⌘ + G shortcut. This technique lets you organise your styles by how they are used. Keyboard shortcuts really are Figma's cheat code: they let you work pixel-perfect while spending less time in the menus. <NAME>, UX designer ### 7. Work from a design system Let's face it: starting from a blank page is not always inspiring. Better to lean on a guide and benefit from established best practices. At Be API we work from a kit of UX templates built specifically for designing on Gutenberg. This tool helps us optimise our design phase every day. We have made it available in open source on Figma, so feel free to grab it for your next WordPress projects. ### 8. Copy a PNG for quick sharing Need to share an element quickly? Simply copy it as an image with ⇧ + ⌘ + C, then paste it into your messaging apps or presentations. You can share it straight away without exporting it first. ### 9. Share a link to a specific frame When sharing a Figma link, the best option is to share direct access to the specific element you care about. To do so, select the element, right-click, then choose "Copy link" in the contextual menu. You can also attach a link to a piece of text, which is particularly handy for creating quick-access links inside a document. ### 10. Do your maths directly in Figma No more calculator to work out exact dimensions or spacing in Figma: the tool lets you do these calculations directly in the dimension field. For example, you can add "+42" to your current dimensions or divide them by four on the spot. Figma supports the usual operations (+, -, /, *) as well as parentheses. We hope these tips help you boost your productivity and free up your creativity. If you have others in store, feel free to share them in return. # WordCamp Europe 2023: an immersion in the international WordPress ecosystem Athens, June 2023. This year, WordCamp Europe set up camp in the famous Greek capital. This edition, which also celebrated the CMS's 20th anniversary, brought together more than 2,545 web experts, professionals and enthusiasts from 94 different countries. A look back at a particularly successful event! ## At the heart of the event ### Contributor Day: the perfect warm-up Contributor Day, held the day before the WordCamp, gathered 658 volunteer participants, including four members of the Be API team: Nicolas, the agency's CTO, Clément, plugin developer, Anne-Marie, project manager, and myself, Alizée, communications officer. We took part at the Polyglots tables, devoted to translation, and the WP-CLI tables, dedicated to the command-line interface.
This day lets experienced contributors put their expertise into practice and gives newcomers a gentle introduction to the WordPress universe. Tables reserved for beginners were available to familiarise them with the contribution process. Taking part in this day dedicated to giving back is a unique opportunity to help improve the platform at the heart of our business. It is a dive behind the scenes of WordPress and a chance to enjoy a first day in a smaller group before the official launch of the WordCamp. ### A programme for every audience WordCamp Europe opened its doors the next day. On the programme: two days packed with talks and workshops covering every aspect of the CMS, from technical development topics to accessibility, SEO, marketing and much more. Whatever your area of expertise or technical level, suitable sessions were on offer, reinforcing the ecosystem's principles of inclusivity and sharing. The diversity of the speakers is also worth highlighting. Renowned experts and contributors shared their knowledge, while new speakers brought a breath of fresh air and innovation. A special mention for <NAME> Valk, a 16-year-old speaker who took the stage to talk about Generation Alpha's perspective on WordPress, meaning people born after 2010 (you're welcome). ### Sponsors on top form It is impossible to talk about a WordCamp without mentioning the event's sponsors. This year they outdid themselves with original, immersive activities. Visiting the booths is a must, offering the chance to interact with representatives of the sector's biggest companies, discover new offerings and establish potential partnerships. The sponsors help create a lively atmosphere mixing communication and entertainment; the "catch-the-cap" game on the Elementor stand and the immersive video game offered by Ionos will be remembered. Their financial support lets the organisers put on a large-scale event while keeping tickets affordable, unlike other tech events where seats can be very expensive. In exchange, sponsors enjoy exceptional visibility on site. ## The WordPress community on an international scale ### A unifying buzz WordPress, powering more than 40% of the web, benefits from the energy of thousands of people across its ecosystem, and nothing beats the experience of a WordCamp to grasp its full scale. The enthusiasm and buzz at these international events are truly unique. Every organisational detail is carefully thought through to ensure participants' well-being. The joyful reunions of long-time members and the warm welcome given to newcomers create a friendly atmosphere that lasts throughout the event. ### The embodiment of shared values The WordPress community stands out for its ability to embody the CMS's founding principles. In that respect, WordCamp Europe fully accomplished its mission by fostering inclusivity, collaboration, sharing, well-being, mutual support and innovation.
Through strong messages and concrete actions, the organisers and volunteers created an event that plunges us into the very DNA of WordPress: an essential ingredient for strengthening the sense of belonging, even on an international scale. ## WCEU: an unmissable event for Be API ### A window onto innovation Gatherings such as WordCamp Europe are unique opportunities to stay at the cutting edge of web innovation, particularly around WordPress, in a constantly evolving sector. Talks and discussions led by experts explore advances in development, design and security on the platform. New features, best practices and strategic advice are shared, giving participants a valuable perspective on the evolution of the CMS and its ecosystem. For Be API, as a WordPress-specialised agency, it is an opportunity to strengthen our expertise and enrich our offering, so as to deliver ever more effective and durable solutions to our clients. ### The power of networking One of the most valuable aspects of these gatherings also lies in the opportunities to maintain solid relationships with other professionals in the sector. Talks and workshops are privileged occasions to exchange with our peers, meet new players and discover new approaches. A WordCamp is also a chance to bring partners together at dedicated evening events, alongside the main programme. Our Be APIstes were thus able to catch up with our WordPress VIP partners at an evening they organised: an important moment to discuss trends, changing usage and challenges, and to ask ourselves how best to meet the market's expectations. In conclusion, the 2023 edition was a great success, combining the power of an international tech event with the inspiring energy of the WordPress community. Thanks to the organisers and volunteers for their work and commitment, and see you next year in Turin, Italy, for the next WordCamp Europe! # The inspiring journey of <NAME>, a French WordPress pioneer This year WordPress turns 20: a real milestone that we decided to celebrate by diving back into the fascinating journey of Be API's CEO, whose professional path is closely tied to the development of WordPress in France. Behind his humility and authenticity lies a visionary mind driven by strong convictions: the right mix that led him to co-found the French-speaking WordPress community in 2005 and, a few years later, the first French WordPress agency. Although this story does not begin in a garage, it testifies to the success of someone who has worked from the outset to position the CMS at the heart of the enterprise market. Together, we look back at the beginnings of WPFR, the golden age of Web 2.0 and the place of people in the agency's success. ## WordPress & WPFR: the origins In 2005, <NAME> began his higher education in computer engineering in Valenciennes to learn programming. He quickly stood out through his interest in the Internet and its community dimension, a passion that found little echo among his classmates, who at the time saw the web as a mere introduction to programming.
Despite this, Amaury persevered and started building websites alongside his studies. This was the golden age of blogging: many players were looking to invest in the web to reach a wider audience. Contributors were gradually taking over from webmasters, creating unprecedented demand. What particularly attracted Amaury were the communities behind the blogging platforms. He discovered WordPress in 2005, when the CMS was only two years old. In France, the community essentially consisted of two people: <NAME> and <NAME>. Drawn to this promising tool and the nascent project, Amaury offered to join them, putting his developer skills to work. Under the impetus of this trio, the French-speaking WordPress association, WPFR, was born. Amaury laid the foundations of the WPFR website and created its support forum. Over those years he devoted considerable time to the forum, racking up several thousand posts. I have always been a "fake developer". In working groups, I was the one asking whether the project was practical for users. <NAME> By putting contribution, accessibility and ease of navigation at the heart of his concerns, Amaury met users' needs at a time when blogs were not yet a mainstream tool: professionals came knocking on his door directly. His active involvement on the WPFR forum earned him visibility. Although his involvement began as volunteer work with no ulterior motive, it brought him his first professional opportunities in 2006. Contacted by a communications agency, he was entrusted with creating a blog for a horse-riding festival, which led him to register as a self-employed entrepreneur. One thing led to another, requests multiplied, and he gained valuable experience. ## From virtual to real As part of his studies, Amaury began an internship at a web agency in 2007. There he worked on his first large-scale project for the newspaper Sud Ouest, which commissioned a blogging platform for the presidential election. The project confirmed Amaury's feeling that the WordPress adventure was only beginning. Feeling out of step with the company's values, he resigned after only a few months of internship to go into business for himself and create his first company, wp-box.fr. This company allowed him to offer a variety of WordPress-related services, making him one of the first French players to make a living from the CMS. In parallel, Amaury remained extremely active in the French-speaking community, then in full expansion. Alongside Xavier and Matthieu, he organised the first French BarCamps. The concept was simple: "We meet in a café and anyone who wants to talk about WordPress is welcome." Artisanal and simple, but effective! These participatory workshops grew and attracted more and more participants, with Amaury playing a key role as technical lead. "That's when we went from virtual to real. The community, the mutual support: it became tangible." He also contributed to open source, notably helping translate WP-MU and creating more than a dozen open-source plugins alongside his professional work.
Since the beginning of his career, he has built the notion of giving back into his daily routine: donating time and expertise to the CMS in return for the opportunities the tool provides. At one of those famous BarCamps, the three friends received a visit from <NAME>, the American creator of WordPress. He praised the French initiative, delighted to see his project carried and driven with passion. The day ended with a meal on the rooftop of the Georges restaurant at the Pompidou Centre, a memorable moment seasoned with a small episode of solitude: "He speaks with a strong American accent and I struggle to decipher it. So Xavier translates and I listen while drinking beers." An important form of recognition that remains engraved in our CEO's memory. The years went by and demand kept growing. That is when Amaury founded his second company, partnering with one of his clients, <NAME>, then director of the Media Factory agency. Thus was born Be API, the first pure-player WordPress agency in France. ## Be API: WordPress to infinity and beyond A period of carefree growth began for Be API, marked by the arrival of the first clients, which also brought the first hires. Developers were taken on first to support Amaury, then project managers were recruited to assist clients on larger projects. True to himself, the CEO attaches great importance to the human factor: he looks for experts, but also for kind, respectful, positive people. This philosophy, which stands out in the agency world, has led some team members to stay involved in the adventure for more than ten years; the longest-standing have even become partners (greetings Nicolas, Meriem, Alexandre and Yann!). Strategically, the agency has remained faithful to what made WordPress successful: an approach centred on contributors. We did not want to be an agency of the past, billing for text changes on a site. That was off-limits for me. <NAME> In projects, everything must be editable by contributors, and the ultimate goal is to make the client autonomous. With this approach, and by addressing both technical and organisational needs, the agency has carried out a genuine evangelisation mission and demonstrated WordPress's relevance for supporting large-scale digital projects. Be API structured itself to support CIOs and key-account clients on increasingly ambitious budgets and challenges: the offering and methodology evolved to address high-traffic sites and complex WordPress creation, redesign and maintenance projects. A far cry from the blogs of the early days! In parallel, Amaury and his team make sure to keep time to stay involved in the French WordPress community. When WordCamps arrived in France from 2012, the agency took part with logistical, artistic, technical and financial support. Team members are also involved in numerous cross-CMS professional events, such as agoraCMS and CMSday. This commitment, and the delivery of ambitious projects, were rewarded in 2019 when Be API became the first French WordPress VIP partner agency. ## Outlook & future So, is that an end in itself? Of course not!
The visionary spirit of the early days keeps an eye on the market and recognises the importance of evolving the agency to meet new needs. "It is never set in stone. You constantly have to evolve the teams, the processes and the methodologies to adapt to clients' problems and reach our goals." So the focus is now on growth, with many ideas currently in the works (yes, that is almost a teaser!). As for how he feels about the road travelled, a touch of nostalgia surfaces when Amaury recalls the prosperous era of CMSs: a period of rapid growth and a disruptive technical solution that shook up the market, a real game changer. Today, with WordPress's worldwide success and the scale of its ecosystem, the challenge is different. The challenge is to show that the added value does not come from the tool alone, but also from the expertise that goes with it, to build tailored solutions. <NAME> Finally, beyond the commercial stakes, Amaury closes on the aspect dearest to him when looking back: the exceptional human experience that the creation of WPFR, then of Be API, represented, which he describes as a genuinely collective success. "In 15 years, more than 150 people have contributed to the agency's success. It has revealed vocations, created solid relationships and built great stories. It is thanks to all these people that we are where we are today." A big thank-you to Amaury for this interview! Did you enjoy it? Find him on LinkedIn to discover his journey in detail, or follow the Be API page so you don't miss any of the upcoming stories. # Optimising WordPress design with Johannes The arrival of Gutenberg in 2018 marked a turning point at Be API. WordPress's latest content editor, more flexible and user-friendly than its predecessors, quickly established itself as the best solution for running ambitious projects in close co-creation with our clients. Convinced that a site's success and longevity largely depend on how practical and intuitive it is for its end contributors, we completely adapted our methodology around this editor. And that naturally includes the design phase! That is why our team created Johannes, a "Gutenberg ready" UX kit for designing WordPress wireframes simply and quickly. Available in open source on Figma, the tool is now an integral part of our methodology at the agency. In this article, we look back at the origin of the project and how Johannes fits into our production process. ## The genesis: thinking Gutenberg from the design phase Summer 2021. Be API's team leaders gather for a seminar to think through the following question: how do we integrate Gutenberg into every stage of producing a WordPress site, to exploit the tool's full potential? Thoughts turn immediately to reorganising the technical side, the follow-up with our clients and training all teams on the content editor. The question of bringing Gutenberg into the design phase also arises. In practice, the design teams had already been producing wireframes for Gutenberg projects for some time. This step is essential, because the way blocks are specified has a considerable impact on the rest of the project.
From there came the idea of creating a UX kit containing all the native Gutenberg blocks, along with the most commonly used extensions and block packs, already calibrated for the editor. The design team set about creating these templates, and the first version of Johannes was born. Why does this kind of tool make sense? Because the arrival of Gutenberg and its atomic-design principle completely transformed the way sites are designed on WordPress. We moved from fixed, fully customised templates to the ability to create patterns, use the native blocks available in the editor and build bespoke blocks. Ahead of each design phase, we create a document called a design system, which gathers all the block types that will appear and be declined across the site. That is more or less what Johannes is: an ultra-complete reference document that can be unrolled and adapted to each project. It makes it quick and easy to create clean wireframes that are already compatible with the editor. ## A tool in the service of our methodology At Be API, designing WordPress sites with the Gutenberg editor is our daily work. Tools such as Johannes therefore bring real added value to our production chain, in particular by allowing us: * to optimise our design phases, since the wireframe base already exists, ready to be adapted to each project * to focus on what matters most, namely the UI, the storytelling and the user journey By standardising the design work, we can guide the art direction and create mock-ups faithful to the final result while respecting the technical prerequisites. Moreover, projects are already calibrated for Gutenberg, which also saves time during the development phase. At the agency we support our clients on large-scale projects, so we attach great importance to setting up and running a well-honed methodology. A good methodology rests on detailed organisation, masterfully driven by the project team, but also on using the right tools, which is why Johannes fits our organisation so well. We draw on our experience to constantly improve our processes, while keeping the principle of co-creation at the centre of our methodology. The goal is to involve all of the project's stakeholders and make sure we are all moving in the same direction. ## Getting the whole team in the same boat And for good reason! Close collaboration remains a key success factor for any project. By integrating the block principle and atomic design from the mock-up phase, we work with Gutenberg at design time, but also with our clients, from the very start of the project. The "Gutenberg ready" wireframes we present let us bring the WordPress interface into the UX phase. They help us explain how the CMS works to the end contributors and clarify which native blocks we will use, as well as the additional development needed to create bespoke blocks. And I think everyone agrees that a project is far more pleasant and fluid when all stakeholders speak the same language; rest assured, you do not need to be a webmaster to understand the language of Johannes.
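For readers wondering what the bespoke blocks mentioned above look like on the development side, here is a minimal, hypothetical sketch of a custom Gutenberg block (a simple key-figure block, not one of Johannes's actual components). It uses the public @wordpress/blocks and @wordpress/block-editor packages and assumes a standard block build setup such as @wordpress/scripts.

```tsx
// Hypothetical example block, registered through the public Gutenberg APIs.
import { registerBlockType } from '@wordpress/blocks';
import { useBlockProps, RichText } from '@wordpress/block-editor';

registerBlockType('beapi-demo/key-figure', {
  title: 'Key figure',
  category: 'text',
  attributes: {
    figure: { type: 'string', default: '' },
    label: { type: 'string', default: '' },
  },
  // What the contributor sees and edits inside the editor.
  edit: ({ attributes, setAttributes }) => {
    const blockProps = useBlockProps();
    return (
      <div {...blockProps}>
        <RichText
          tagName="strong"
          value={attributes.figure}
          onChange={(figure: string) => setAttributes({ figure })}
          placeholder="43%"
        />
        <RichText
          tagName="p"
          value={attributes.label}
          onChange={(label: string) => setAttributes({ label })}
          placeholder="of the web runs on WordPress"
        />
      </div>
    );
  },
  // Static markup saved into the post content.
  save: ({ attributes }) => (
    <div {...useBlockProps.save()}>
      <strong>{attributes.figure}</strong>
      <p>{attributes.label}</p>
    </div>
  ),
});
```

Even a small block like this illustrates why specifying blocks at wireframe stage matters: the attributes chosen here are exactly what the designer, the developer and the contributor end up sharing.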
Using these templates lets us produce a design that is more detailed and easier to understand for everyone involved in the project. On the client side, there is real added value in being able to picture contribution with Gutenberg and prepare content from the mock-up phase onwards. Elisabeth, digital project manager In working on this kit, we kept the original ambition that drives us at Be API: creating projects that are practical and useful for end users and contributors. It is by combining our professional expertise with the power of your content that we create relevant, durable digital experiences. We are also happy to share this resource with the WordPress community, in a spirit of giving back. You will find the UX kit available in open source on Figma here. Are you a designer or freelancer? Feel free to grab it, use it and send us your feedback! # WordCamp Paris 2023: the return of an unmissable event On Friday 21 April 2023, WordCamp Paris took place: a day designed to physically bring together the members of the French-speaking WordPress community. The gathering was held at the Halle Pajol and marked the long-awaited return of this flagship event for all WordPress enthusiasts and professionals. A look back at a day punctuated by fascinating encounters and enriching exchanges. For this 2023 edition, there was a real buzz of excitement in the air, and for good reason: the last WordCamp Paris had been held in 2019. In the meantime, the global pandemic had suspended any possibility of organising the next edition, planned for 2020. Four years later, WordCamp Paris was finally back, feeling like a reunion after a long pause. On the day's programme: a line-up of talks and workshops on a wide range of topics related to the CMS. From pure development to SEO, design and the WordPress philosophy, there was (truly) something for everyone. Attending a WordCamp means putting the open-source project's principle into practice: giving back, that is, contributing to enrich the WordPress ecosystem and keep the community alive. It is also a chance to meet digital players, such as specialised hosts or plugin publishers presenting their offering as sponsors of the event. These gatherings are therefore a must for any WordPress professional, much to our delight. ## The Be API team in attendance More than twenty members of the team made the trip. As with the WordCamp audience, every role at the agency was represented, from the development team to SEO, design, marketing and communications. We came as a group to attend the various talks, but also to support our four Be APIste speakers this year. <NAME>, CEO and founding member of the WordPress community in France, presented "The detective's guide to WordPress troubleshooting", a talk on learning to recognise and diagnose the various errors. <NAME>, Head of Design, and <NAME>, UX Designer, presented Johannes, our "UX kit for designing with Gutenberg": the ideal tool to optimise your WordPress design phases, available in open source on Figma.
<NAME>, acquisition expert, enlightened us on the question "what more can you do when you think you've already done everything in SEO?", a talk covering the additional steps to focus on once the basic SEO techniques are in place on your site. Throughout the day, the sessions followed one another in a variety of formats: talks, lightning talks (10-minute presentations) and "parlottes" (open discussions centred on a set topic). Also worth noting were the new and eagerly awaited subjects, such as the place of AI in WordPress, Gutenberg and its everyday use, and integrating eco-design into a methodology. ## The recipe for success Fairly unanimously, this WordCamp was described as a real success, carried of course by the speakers, the organisers, the volunteers, the sponsors and all the participants. The event highlights WordPress's true strength, its community, and the values of inclusivity and sharing it defends. Whether you work in an agency or in-house, whether you are a freelancer, a user or an enthusiast, everyone is welcome at a WordCamp. Another key ingredient of its success is that professionals are asked not to advertise their services during the talks. With the emphasis placed firmly on contribution, and players who played along, a genuine "good vibes" atmosphere was guaranteed. I experienced all of this as a beginner: yes, it was my first WordCamp! I went without any particular expectations and I was not disappointed. I discovered WordPress's secret sauce: an engaged community working on the continuous improvement of an excellent product. The promise of the giving-back mindset was kept: giving also means receiving in return, both professionally and on a human level. ## In short This WordCamp Paris lived up to expectations, and we invite every curious WordPress enthusiast to attend a WordCamp in turn. Let's take this opportunity to thank the organising team and the volunteers for their commitment and for putting this day together in very little time (hats off to Marjorie and her team!). For our part, see you in early June 2023 for WordCamp Europe in Athens, where we will take you to the heart of the event. # Opquast, the icing on the cake Today more than ever, our clients and we are constantly talking about topics such as accessibility, eco-design, security, performance and SEO for the websites we build for and with them: so many approaches, reference frameworks and best practices that boost the meaning, relevance and value of the sites we produce. These topics could be grouped under a single word, quality, of which these five subjects are both the pillars and precious indicators. While production processes (in terms of tooling and organisation) are well established and proven at Be API, applying a quality mindset to everything that touches the user experience still needed to be consolidated across the agency, in every role. And since there is only one way to get into the right marching order, 2022 was indeed the year of an acute "Opquastisation" at Be API!
## For the latecomers… Yes, we could have skipped reminding you who Opquast is, but what follows will at least allow those who do not yet know this organisation (shame on you) to catch up. Because the official website explains it far better than we could here, Opquast (Open Quality Standards), in short and with links, is: * a mindset in the service of users * a cross-cutting vision captured in a model that is simple to understand, remember and use * plenty of clear, consensus-based rules, materialised and categorised in a checklist * a certification recognised across the web industry * training courses to go further * a network of professionals who speak the same language Enough to understand which box Be API, and a very large number of web professionals and students, have ticked over the past few years. ## Quality in Be API's DNA Since its creation in 2009, quality has always been part of Be API's vision and of the actions put in place to meet that need: * At agency level: quality is, above all, what the agency wants to be known and recognised for; * At client level: quality is almost systematically at the top of the list of requirements. Be API has long positioned itself clearly as a "creator of digital happiness". How could the agency make such a promise to its clients if quality were not already at the heart of how it works? The agency has constantly adapted to market changes in order to progress, and to help its teams progress, so as to offer its clients a constructive, logical and efficient collaboration: * Technically, this means the gradual adoption of working methods suited to the evolution of the craft. On our side, code architecture, pair programming, versioning, pull requests and code reviews have long been part of our habits, not to mention our many maintenance (TMA) activities to monitor, sustain and evolve our clients' sites; * Organisationally, all the processes (agency/client workshops, design follow-up, estimates, task breakdown, production follow-up) have also been in place for a long time, ensuring that everyone involved in the project is on the same wavelength from start to finish; * Finally, developing quality within the agency could not happen without interaction and confrontation with industry peers. Taking part in dedicated community events (the WordPress community in particular, with national and international WordCamps), contributing technically to the development of our favourite tool (a dedicated open-source team) and training continuously (software, languages, frameworks, best practices, and so on) are all good ways to challenge, or at the very least maintain, a high level of quality. ## Drinks all round: certification for everyone! Web quality fortunately does not boil down to obtaining a certification (even if it makes a nice badge to show in client presentations or on a CV), nor to consulting and ticking off a checklist at the end of a project. That said, preparing for the "Maîtrise de la qualité en projet web" certification remains the best way to get started with web quality, or to get up to date.
So the summer of 2022 was studious at Be API. From June to September, every Be APIste learned, revised and prepared. Certifications were taken as and when each person was ready, and results were shared with the rest of the team along the way. The very good scores trickling in motivated the troops to challenge themselves and try to do better than their neighbour. The rules even ended up at the centre of informal conversations within the teams on several occasions (sometimes at the end of meetings, during lunch breaks, or even mid table-football game), creating a real group dynamic among the learners. And that is one of the strongest benefits of preparing for this certification: building a bond and a common language between team members who are nevertheless used to working together every day. ## Be API, partner agency Beyond the rules and the training it provides, Opquast is also a powerful tool for connecting web professionals (French ones first and foremost), which fits perfectly with Opquast's main goal of extracting and standardising a common language. Opquast openly speaks of a community, bringing together the agencies, organisations, schools and training centres it partners with, not to mention the 17,000+ certified professionals who have gone through Opquast's core training. It was therefore quite logical for Be API to position itself beyond certification, namely as an Opquast partner, which has been the case since December 2022. ## To conclude Be API is now better equipped to support its clients and steer them ever more effectively towards design, technical and strategic choices that are relevant, realistic and ambitious. So if you want a quality web project, durable over time and of course based on WordPress, contact us via the chat or the contact form on our website! Main image by vectorjuice on Freepik Digital happiness creators A unique agency in France, we harness the full power of the WordPress ecosystem to create innovative, high-impact WordPress sites and digital experiences: website creation, e-commerce sites, website factory & intranet.
In-house WordPress experts Be API is a WordPress VIP partner. We are the only partner agency in France to offer a WordPress VIP package specifically designed for key accounts. Since 2009, Be API, a web agency specialised in WordPress, has been an expert in creating WordPress websites and implementing digital strategies for impactful digital projects. On a daily basis, a typical web project at the agency means building a site with a WordPress theme, simplified content management thanks to Gutenberg, and a high-quality user experience optimised for search-engine visibility. We then handle the site's maintenance, notably with a proactive watch over regular WordPress and plugin updates. Not only did Be API pull off a real technical tour de force to meet demanding and unusual requirements, but they ensured the success of the redesign through their attentive support and cutting-edge expertise in advertising optimisation and user experience throughout the project, and even now for future developments! Leading WordPress agency in France. We help our customers define their goals, turn their ideas into reality and transform them into lasting results. To do this, we mobilise all our expertise and the power of the WordPress ecosystem to create happy digital experiences. Architecture, interoperability, DevOps & Cloud. ## Our offer harnesses the full power of the WordPress ecosystem ## Be API's commitments ### Zero outsourcing Our developments are 100% in-house to guarantee impeccable quality. We respect the CMS core to enable easy maintenance over time. We help you make sense of WP's vast ecosystem, and donate our work to the open-source community. ## WordPress, the most stable and agile CMS on the market When we chose WordPress in 2009, it was by intuition. An intuition with solid arguments that years of success have proven right. * 40% of the world's web * 1/3 of the CAC 40 * Free & open-source technology * A constantly innovating CMS * An interoperable tool ## The WordPress community makes us happy, and we're very grateful. Conferences, training courses, plugin development, extensions: each Be API expert is a WordPress specialist in his or her field and actively contributes to the community. We co-organize, sponsor and participate in the biggest gatherings all over Europe, and you can hear our experts speak at France's WordCamps and WPTech every year. Are you a brand, company or institution with strong digital ambitions? We support organisations like yours in making their digital projects a success: website, e-commerce, audience and lead generation, website factory & intranet, technical optimisation of business applications, maintenance and more. ## Your goals become our goals... ... when you entrust us with a strategic project. We define and adjust them together. ## WordPress leader in France... ... we are a full-service digital agency offering you all our expertise in strategy, design and development. ## Design Thinking enthusiasts... ... we think through, create, develop and fine-tune WordPress sites according to your users and your objectives. ## Putting the contributor at the center... ... of the tool is the primary quality of WordPress, with an intuitive, polished interface. ## The resource-rich ecosystem... ... of WordPress and its community can meet most needs without any specific development.
"To create the right product, we always start a project by learning. From our customers, but especially from our customers' customers." Head of Design, Be API

## We deliver the tool or interface that best meets your users' needs

By observing and analyzing your users, we do our utmost to understand them and identify their frustrations and needs. Following detailed user journeys, we prototype wireframes for user testing. Our art directors draw on this input to innovate and create the visual universe and modern interface best suited to your target audience and your brand.

## We have extensive experience in setting up website factories for major groups

We've developed numerous extensions in this area, including shared media libraries, advanced domain mapping and centralized content publishing. Multisite functionality is native to WordPress.

## Our e-commerce site design is driven by a marketing vision and conversion-optimized user paths.

We've been recommending and implementing the WooCommerce extension since its inception. We support our customers not only in functional implementation, but also in interoperability with existing business tools, in particular for managing inventory, accounting and so on.

## We carry out intranet projects from A to Z, always centered on your company's needs.

We turn WordPress into a true enterprise social network (ESN), with user profiles, groups, discussion forums and more.

# WordPress developments

Need to create a WordPress feature, extension or platform? For over 10 years, our teams have been putting their cutting-edge development expertise to work on our customers' projects. An expertise recognized by Automattic, the company behind WordPress: Be API is the first and only WordPress VIP partner agency in France.

## The pillars of our WordPress development

### Analysis of needs and benchmarking of plugins in the WordPress ecosystem

WordPress' strength is its ecosystem. It is also its greatest danger, as some of the resources in this ecosystem are not of the highest quality. That's why the agency offers its expertise to analyze and select the right resource to best meet your needs. The agency's philosophy is first and foremost to capitalize on the thousands of plugins in the WordPress ecosystem. Backed by 10 years' experience, the agency has a repository of around 200 extensions that have already been selected, tested and proven.

### Maintainability and security

Our developers design the architecture and code of your WordPress platform to be as durable and secure as possible. The sine qua non? That your WordPress ecosystem can integrate updates to the CMS and related extensions as the project progresses. Our aim is to keep your site up to date, stable and secure.

### Custom developments

It often happens that ecosystem resources don't meet our customers' requirements, so we proceed with custom developments. The agency's WordPress expertise, and its expertise in programming in general, enable us to create customized solutions from A to Z, based on an analysis of your needs and those of future contributors.

### Scalability and industrialization

Your platform and extensions are developed to withstand an increase in load should your ambitions expand. Performance is an integral part of our development design.
We pay particular attention to this issue when scalability issues are an integral part of the project, such as when setting up a website factory or a media site. Our teams pay particular attention to web performance. Page load times for a WordPress project are of course important for natural referencing, but they also condition the user's browsing experience. We set up the performance strategy according to the project: PWA, cache, Varnish server, image CDN, multi-zone cloud, etc. Our design creativity drives us to constantly innovate in terms of integration. of our talents are technical More than half of Be API's talents are WordPress developers at the cutting edge of their expertise. Passionate about their work, many of them are also speakers, trainers and WordPress experts. The majority of them have also created open source extensions shared within the community. ## Happy in plugins We develop plugins on a daily basis. <NAME>, the agency's technical director, developed the open source extension Simple Image Sizes, which is used by over 100,000 WordPress sites and has been downloaded nearly 1 million times! ## Accommodation: we're with you every step of the way We don'thost WordPress sites, but we can advise and support you in your choice of a WordPress host or outsourcer that's right for your needs. Our technical expertise in server issues (Linux, caching, HTTP, PHP MySQL) and our project experience enable us to be a source of proposals and to dialogue with the various players (host/IT). agiles Do you have a digital project to roll out, but uncertainties or a limited budget? We can help you by developing an MVP (minimum viable product) in agile mode. The advantage? You'll quickly have a product that you can test, then deploy as you go, while benefiting from the best of Be API's WordPress expertise. ## From MVP to industrialization: just one step with Be API Because WordPress is an extremely flexible and scalable content management tool, it allows you to launch a product with a simple structure and functionality, yet with all the capacity to rapidly evolve later on towards a more ambitious digital ecosystem and more sophisticated technical specifications - if your MVP proves itself. ## Creating a custom MVP with WordPress The WordPress community provides numerous extensions to meet your needs; ad hoc development is possible if the solution doesn't exist. With over 50,000 existing resources, it's hard not to find the extension you're looking for. ## Save time and money! At Be API, we capitalize on the resources of the WordPress ecosystem to bring the best technical solutions to our customers' needs. The flexibility of the tool and the richness of the WordPress ecosystem enable us to meet the demands for responsiveness, experimentation and agility of an MVP. Would you like to build a project using the WordPress CMS? You've made the right decision! The Be APIWordPress agency offers you a better way to achieve a sustainable* project with the WordPress CMS. Traditionally, the lifespan of a web project is between 3 and 5 years. Our raison d'être: to bring you digital happiness throughout the operational phase of your WordPress site. of projects carried out by the agency are monitored under maintenance The 1% corresponds to projects where WordPress TMA is carried out by the customer's in-house teams via a reversibility phase. 
4.2 years average project life at Be API date of creation of the WordPress TMA division active TMA projects Depending on your requirements, we can support you in testing and continuously improving the performance of your web platform, with a view to ROI. For example, we can carry out regular SEO reviews or monthly A/B testing, with recommendations for optimizing your site. <NAME> PROJECT MANAGER AT BE API ## Why choose Be API to maintain your site? ### A maintenance offer tailored to your needs For our customers, we develop WordPress maintenance packages tailored to your technical requirements (customer-specific or external technical architecture), and human resources (account manager).When it comes to maintenance, too, we're all about customized solutions. ### Flexibility and scalability We know that your WordPress maintenance needs and requirements can change from month to month, so we've put in place a flexible organization, able to absorb any increases in workload that may be required (both during working hours and on-call). In fact, in addition to our dedicated maintenance team, all our developers also devote time to these operations. ### Maintain your site at optimum performance and security levels We carry out preventive maintenance on WordPress projects, through regular updates of WordPress (as part of a new version, for example) and its extensions, to avoid any interruption in service. The agency keeps a close watch on these issues, so that patches can be applied as soon as they are released. On a daily basis, the team also deals with reported anomalies, depending on the criticality of the problem. In the event of an anomaly in production, an incident report is systematically drawn up to trace the various actions taken, and propose long-term corrections. ### Upgradeable platform maintenance We may also be called upon to make significant changes to your platform during the RUN phase of your project. The agency then offers a choice of TMA or project mode, with the usual stages of a web project. The agency carries out continuous monitoring and supports its customers to ensure that their platform is compliant (RGPD), while taking into account developments in the internets (SEO, etc.). ## Has your web project been carried out elsewhere? We maintain the WordPress websites we've developed for our customers, as well as websites created by third parties. We call this third-party application maintenance (TMA). In this case, your project is analyzed by a WordPress expert, and a pre-audit is carried out to assess the possibilities of resumption, as well as any corrective actions to be taken. This pre-audit is not intended to assess the real, objective quality of your project, but rather to evaluate whether your project is in line with the agency's development practices. For example, the use of page builders such as Elementor, Divi or WPbakery are blocking the resumption of a TMA project, as they are not technologies used by the agency, and for which we do not wish to intervene. Once the pre-audit has been completed, we carry out the onboarding of your project on our environments, an essential step for the success of the maintenance phase. Do you need an analysis from France's top WordPress experts? Thanks to our expertise and experience with this CMS, we can diagnose any weaknesses in your platform, and advise you on how to improve it. We can also develop the WordPress interfaces and functionalities we recommend. 
## Partners and advisors

As your true partners, we'll advise you on every strategic assignment you submit to us. This means that we'll analyze your data in depth, particularly data concerning your user journeys, question the objectives you've set yourself, and ask you detailed questions about your business challenges. Our aim is to provide you with the most comprehensive analysis and the most relevant, high-impact recommendations.

## Carry out the audit you need

We carry out various types of WordPress audits, covering quality, state of the art, performance, security and any other WordPress-related subject. Thanks to our DevOps approach, we can also act as a mediator between the WordPress agency and the hosting provider to resolve bottlenecks, such as painful scalability issues. These audits result in a written report and can lead to corrective action, and possibly a longer-term partnership.

## Upgrade your teams' skills

We systematically offer WordPress training for contributors and administrators as part of our WordPress site creation or redesign projects. This covers both the WordPress CMS and project-specific functionalities. We also offer technical WordPress training and skills transfer if you have technical teams who can contribute to the development of a project. Several of our WordPress agency's talents teach at various web schools in the Paris region, on technical subjects as well as design.

Exploit the potential of a website factory: a system for industrializing the creation, deployment and maintenance of a network of websites.

## Accelerate your digital projects

It's not uncommon for companies to find themselves managing multiple digital platforms. This can lead to considerable costs, security issues and governance challenges. A website factory can help you meet these challenges.

## Capitalize on the power of WordPress and Gutenberg, tools that are perfectly calibrated to meet your needs.

A website factory lets you:

* Centralize hosting and maintenance
* Simplify contribution
* Streamline costs and lead times
* Harmonize your brand image

## Projects suited to the creation of a website factory

* International coverage: you are present in several European and/or international countries. Each country can have its own multilingual website.
* Distributed organization: you are organized on a territorial basis (country, region, department, locally). The richness of site content can vary according to each local representation.
* Subsidiaries: you are organized as a group of companies or subsidiaries. Subsidiaries can adopt different graphics and functionalities.

## Do you have a website factory project?

Take advantage of our expertise to design the right solution for your company's specific needs.

## Saving time and money

### 1. Achieve economies of scale with a website factory

By taking advantage of the Multisite functionality built into WordPress, you can manage multiple websites from a single technical infrastructure, automatically reducing production and maintenance costs. If your project requires you to maintain several technical instances, the web factory solution will be the most appropriate; in this case, you still capitalize on a single WordPress hosting and maintenance package. Benefit from theme and plugin updates automatically applied to all the sites included in the website factory.

### 2. Harmonize content across all your sites
Consistent messaging and harmonized branding across all digital channels is a recurring concern when managing a network of websites. As part of a WordPress website factory project, we deploy a single WordPress theme that ensures visual consistency, while giving each site manager autonomy over content management. By using the Gutenberg editor, you can also ensure graphic cohesion within pages, reinforcing unity between the different sites.

### 3. Rapid deployment of new sites (time to market)

We develop technical solutions capable of duplicating new versions of the reference site much more quickly, thanks to WordPress' Multisite functionality. These new versions are templatized, ready to be filled in by site managers.

### 4. Multisite functionality and content sharing

The architecture of WordPress' Multisite functionality makes it easy to share content from one site to another. Examples include a shared media library, legal page content or a news portal. Content sharing is made possible with little development effort. To find out more, discover our open-source plugins "Content Sync Fusion" and "Multisite Shared Block".

## Multisite, a WordPress term

The WordPress CMS has a Multisite installation mode. This feature was integrated over 12 years ago to facilitate the maintenance of a network of sites. Thus, WordPress includes in its very structure the ideal foundations for developing a high-performance website factory, designed to help businesses and contributors manage their network of websites on a daily basis. This functionality has reached a significant level of maturity and stability. Combined with the Gutenberg editor, WordPress Multisite is the ideal tool for designing and supporting an effective website factory. And with over 50,000 existing resources, it's hard not to find the extension you're looking for!

## A WordPress Multisite instance is:

### A single code base for all sites

### One database for all sites

### A list of plugins shared on the platform

* Mu-plugins, active for all sites and impossible to disable
* Plugins activated on all sites
* Plugins that can be activated on a site-by-site basis

### A list of themes shared on the platform

* Site-by-site activation
* Limited or not to a single site

### A single user base

* With customizable roles on a site-by-site basis

## WordPress Multisite and custom domain names

"Domain mapping", "domain routing", "routing", "alias"... Whatever you call it, you can define a personalized domain name for each of the sites on a WordPress Multisite platform. Organizing your project through a WordPress website factory will enable you to meet all the demands of your teams, and in particular the requirements of the teams in charge of natural search engine optimization (SEO).

## At Be API, we have assisted numerous customers with their website factory projects.

## Questions about a website factory setup?

Our teams have the necessary experience to support you on different types of projects:

* Setting up a complete website factory ecosystem
* Redesigning your website factory
* Developing a website within your website factory

Do you have a redesign project with the WordPress CMS? Be API can provide you with a complete migration strategy for your project.
## Migration, a central, multidisciplinary theme for your project

Depending on the volume of content to be taken over, and on your enrichment and requalification needs, content migration dictates numerous decisions right from the design phase (information architecture, wireframes, etc.). That's why, at Be API, it's an issue we address from the very first workshops. As the project progresses, developments associated with content migration are part of the first development sprints, to give the project team time to thoroughly test this key element, and possibly allow for enrichment on the customer's side. Finally, site and content migration implies a strategy of SEO redirects (and the updating of obsolete internal links) to preserve your website's natural search ranking.

## Our experience

Be API has carried out numerous redesigns of projects using the WordPress CMS, and most of the time, our customers expect us to rework the content. We have migrated sites created with the following CMS to WordPress. This list is not exhaustive: we regularly discover new CMS (both open source and proprietary) as projects come along, and most of the time we apply the same working methodology. Our starting point is a technical exchange with the CMS publisher when there is no documentation to study. Our teams are also used to working with databases such as MySQL, Microsoft SQL Server, Microsoft Access, Oracle and FileMaker, as well as flat files such as Excel, CSV, JSON and XML.

## Migrating from a WordPress site to WordPress

Contrary to popular belief, the redesign of a WordPress website can also include a content migration project. WordPress is a particularly flexible CMS, and there are as many ways of working with this content management system as there are professionals who use it. At Be API, we've chosen the Gutenberg editor for all our projects. This content editor, which appeared in WordPress 5.0, has revolutionized contribution possibilities. So when we redesign a WordPress project created before WordPress 5.0, there's a very good chance that a different content editor was used (WPBakery, Advanced Custom Fields (ACF), Beaver Builder, Elementor or Divi). As a result, the content needs to be migrated in order to find equivalents between the old content organization and the new one proposed by Gutenberg and WordPress.

## Our methodology

* Survey of the existing site
* Data mapping
* Development & testing
* Single or incremental migration

A successful migration requires teamwork between the customer and the agency. Our customers have the business knowledge of their existing site, while the agency has the technical knowledge, and it's the combination of these two areas of expertise that ensures a quality migration. In order to carry out the study and testing phase successfully, we ask our customers to draw up a qualitative list of significant content to be tested and validated. This methodology has been tried and tested on numerous projects, whatever the volume or complexity of the content.

## Content enrichment during site migration to WordPress

Site redesigns often lead to significant changes in content organization (properties, taxonomies & classification). These evolutions generally originate in the co-design workshops carried out during the project. To facilitate the associated enrichment work, we generally propose an intermediate step during migration, aimed at generating an Excel file. In most cases, this spreadsheet file facilitates content enrichment in a format suitable for mass publishing.
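To make that last step concrete, here is a minimal, hypothetical sketch of what the scripted side of such a mass update can look like once the spreadsheet has been filled in. It is only an illustration: the file name, column layout, site URL and credentials are assumptions, and it goes through the core WordPress REST API with an Application Password rather than through the import tooling mentioned just below.

```ts
// Hypothetical bulk re-categorisation sketch (Node 18+, TypeScript).
// Assumes a CSV exported from the enrichment spreadsheet with two columns:
//   post_id,category_ids   e.g.  123,"7;12"
// and a WordPress user with an Application Password (core since WP 5.6).
import { readFile } from "node:fs/promises";

const WP_URL = "https://example.com"; // assumption: the target site
const AUTH = "Basic " + Buffer.from("editor:app-password-here").toString("base64");

async function updateCategories(postId: number, categoryIds: number[]): Promise<void> {
  // Core REST API: POST /wp-json/wp/v2/posts/<id> accepts a `categories` array of term IDs.
  const res = await fetch(`${WP_URL}/wp-json/wp/v2/posts/${postId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify({ categories: categoryIds }),
  });
  if (!res.ok) throw new Error(`Post ${postId}: HTTP ${res.status}`);
}

async function main(): Promise<void> {
  const csv = await readFile("recategorisation.csv", "utf8"); // hypothetical file name
  const rows = csv.trim().split("\n").slice(1); // skip the header row
  for (const row of rows) {
    const [id, cats] = row.split(",");
    const categoryIds = cats.replace(/"/g, "").split(";").map(Number);
    await updateCategories(Number(id), categoryIds);
    console.log(`Updated post ${id} -> categories ${categoryIds.join(", ")}`);
  }
}

main().catch((err) => { console.error(err); process.exit(1); });
```

In practice the workflow described here relies on dedicated export/import extensions; a script like this is simply the most transparent way to show the underlying idea of mapping one spreadsheet row to one content update.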
Once the file has been completed, it is fed back into the content migration process, making it possible, for example, to re-categorize all the content on a WordPress site. From time to time, our customers wish to keep this mass editing and enrichment capability, via export and import of CSV or Excel files. We generally recommend the "WP All Export" and "WP All Import" extensions for these routines, and we integrate templates for exporting and importing content via these extensions into our projects.

## Drupal to WordPress migration

Is your current site running on the open-source CMS Drupal (version 6, 7, 8 or 9) and you'd like to migrate to WordPress? That's already a good decision 🙂 There are several ways to migrate your Drupal data to the WordPress CMS. First of all, you can use the premium "FG Drupal to WordPress" extension published by French expert <NAME>. It's a well-made tool, but it's a long way from our methodology, so we generally recommend it for low-cost migrations in the context of specific projects. At the agency, we've opted for an alternative approach that strictly follows our methodology. When necessary, we develop a Drupal module to expose the migration data via a JSON stream. In this way, we can perform complete or incremental migrations at any time from your old Drupal site while it is still in production.

## Spip to WordPress migration

Spip used to be a very good CMS, but it has now been overtaken by a number of market players, including WordPress. As with Drupal (see the previous section), there is the "FG Spip to WordPress" plugin published by French expert <NAME>, and as with Drupal, we recommend an alternative approach based on exposing the data via specific feeds.

Did you say "page builder"? "Content builder"? "Content editor"? The web market is constantly evolving. After HTML sites managed 100% by agencies, then CMS that made content editable, the CMS market now gives pride of place to tools offering considerable freedom of contribution. Be API is fully convinced by this evolution, and has long been offering its customers tools within WordPress to give them maximum autonomy.

## WordPress, a CMS with multiple editorial facets

WordPress is an open-source CMS with a rich ecosystem, especially when it comes to content editing tools. There are a number of "page builders" and "content builders", the best-known being Elementor, Divi, WPBakery and Beaver Builder. We can also mention Advanced Custom Fields (ACF) and, of course, the new WordPress editor, Gutenberg. The flexibility of WordPress means that there is no single way to build WordPress sites, and that every web agency has its own skills.

## Why Gutenberg?

At Be API, we've tested and worked with most of the builders listed above; we've been able to evaluate them in real-life conditions and, as always, over the long term with our WordPress maintenance offer. And none of them brings us as much certainty, reliability and scalability as Gutenberg, WordPress' native content builder. Indeed, since December 2018, the month of the official release of WordPress 5.0 and of this feature, Gutenberg has revolutionized the layouts formerly offered by the old WordPress editor, or even by ACF.

For Be API, Gutenberg represents:

### The right compromise between freedom and simplicity

Our projects are aimed at contributors, not webmasters. Gutenberg's interface is well balanced in this sense, providing a more pleasant contribution experience.
### The right level of performance

The tool achieves good ratings on PageSpeed and Core Web Vitals indicators.

### Accessibility made possible

At Be API, we believe that all projects should be accessible to as many people as possible, whether RGAA compliance is requested or not 🙂

### Smoother maintenance

Gutenberg's technological approach allows us to look to the long term with confidence.

To discover all the features in detail, here's a demo site, as well as the agency's presentation videos.

Today, a web project with WordPress + Gutenberg isn't "just" about creating templates like it used to be. The philosophy of a tool like Gutenberg tends towards atomic design, with blocks that will be used in different contexts and for different content; that's why we work with a design-system logic. For each project, we set up a design system of Gutenberg blocks, which are then used to compose patterns and pages. These blocks bring great flexibility to page composition, making it easy to add (drag & drop), organize and format rich content.

So yes, it's great to create a toolbox, but the question is how to use it with your content. Conversely, if you don't think about the actual content, how can you create the right toolbox? These are the questions we try to answer in each of our Gutenberg projects. To do so, our design teams carry out UX writing work, to create the corresponding toolbox from your final content and to give you recommendations for the organization and layout of your content. This UX writing work is carried out on your key content, but you can also "order" more UX writing for additional pages throughout the operational phase of your project.

## A WordPress agency organized for Gutenberg

The agency's entire methodology is organized around this editor, and all our teams are trained in its use, so as to deliver projects that are as close as possible to the state of the art and as respectful as possible of the way Gutenberg works. Our aim is to offer you a project designed for Gutenberg from start to finish, in particular on the design side:

* Wireframes integrating all the concepts (blocks, patterns, templates)
* Graphic layouts compatible with the editor's limitations

For more information, please refer to one of our articles on the subject: How to design efficiently with Gutenberg.

## A typical project

A typical Gutenberg project is therefore based on a design system of blocks, and these blocks can be of different types:

### Stylized native blocks

Gutenberg includes native blocks (70 existing blocks, including 36 for embedding third-party services). In our projects, we use these native blocks as much as possible and style them according to the project's graphic guidelines.

### Custom-made blocks

In addition to these native blocks, and depending on requirements, we also develop bespoke blocks, either via Gutenberg's ReactJS framework or occasionally via the ACF plugin (a minimal sketch of such a block follows after this list).

### Patterns to facilitate contribution (and avoid blank-page syndrome)

A pattern is a composition or group of blocks combined to create a reusable layout (template). At Be API, we create customized patterns to translate graphic layouts as faithfully as possible into the Gutenberg philosophy. These reusable patterns are readily available to our customers.

### Third-party blocks (pack or plugin)

Once again, the richness of WordPress lies in its plethora of plugins and blocks. We mainly use standalone blocks and avoid packs, to facilitate maintenance and guarantee a high level of performance.
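To give a feel for what a bespoke block involves, here is a minimal sketch of a custom block registered through Gutenberg's JavaScript API, written in TypeScript/JSX. It assumes the standard `@wordpress/scripts` build toolchain and the community type definitions for the block editor; the block name, attribute and CSS class are illustrative, not taken from an actual Be API project.

```tsx
// Minimal custom-block sketch, assuming the @wordpress/scripts build toolchain.
// The block name ("beapi-demo/notice"), attribute and class name are hypothetical.
import { registerBlockType } from "@wordpress/blocks";
import { useBlockProps, RichText } from "@wordpress/block-editor";

interface NoticeAttributes {
  message: string;
}

registerBlockType<NoticeAttributes>("beapi-demo/notice", {
  title: "Demo notice",
  category: "text",
  attributes: {
    message: { type: "string", source: "html", selector: "p" },
  },
  // What contributors see and edit inside the Gutenberg editor.
  edit: ({ attributes, setAttributes }) => (
    <div {...useBlockProps({ className: "beapi-demo-notice" })}>
      <RichText
        tagName="p"
        value={attributes.message}
        onChange={(message) => setAttributes({ message })}
        placeholder="Write the notice…"
      />
    </div>
  ),
  // Static markup saved into post content and rendered on the front end.
  save: ({ attributes }) => (
    <div {...useBlockProps.save({ className: "beapi-demo-notice" })}>
      <RichText.Content tagName="p" value={attributes.message} />
    </div>
  ),
});
```

In a real project the block metadata typically lives in a block.json file and the block is also registered server-side; the inline form simply keeps the sketch short. ACF-based blocks follow a different, PHP-centred registration path.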
## Why we don't recommend Elementor over Gutenberg

Elementor is a very powerful tool. Too powerful, in fact, except for webmasters with some knowledge of HTML/CSS. It's a tool that will give more satisfaction to the professional building your WordPress site, and little to the end user, who gets a façade of autonomy but ultimately lacks the know-how to take advantage of Elementor. In the context of a website factory, or even of multiple environments (production, pre-production, etc.), we feel that Elementor stores too many layout elements in the database. In a website factory context, where it's possible to duplicate a template site, for example, this spreads WordPress maintenance across as many sites, making maintenance much more problematic.

When we chose WordPress in 2009, it was initially based on intuition. An intuition with solid arguments, and one that years of success have proved right. Media site, website factory, e-commerce or intranet: WordPress is used everywhere.

* 37% of the world's 10,000 most visited websites (BuiltWith, 2020)

"We believe in open source to bring companies flexibility, creativity and performance in their digital tools." <NAME>, founder of Be API

## Innovative open-source technology thanks to a global community

Companies, organizations and agencies around the world rely on the power of the WordPress ecosystem to design and run successful, high-performance websites. WordPress is a free, open-source CMS (content management system). It is updated and improved daily by programming, design and documentation experts around the world. WordPress has the largest and most active community of any content management system.

## A single platform, but a world of possibilities and features

WordPress was built on an architecture that can be 100% adapted to each project, whether on the back end or the front end. The ecosystem can meet the needs of many types of project, including website factories. WordPress is international and multilingual: the core system has been translated into over 70 languages.

## A stable, durable and reliable solution

In all developments, we ensure the continued smooth operation of existing applications. A layout (theme) from 2005 will still work perfectly today. A large number of users continually put the CMS to the test, pushing contributors to constantly maintain the code at the heart of the system. Continuous development and improvement are therefore guaranteed.

"We always look to see if an extension exists in the WordPress community to meet the need. We only develop ad hoc if no solid solution exists." Nicolas Juen, CTO of Be API

## A highly scalable tool, capable of integrating the latest technologies

WordPress is an ecosystem that is highly interoperable with other tools (analytics, CRM, ERP, e-marketing). Its very active developer community ensures that the system can integrate new development technologies as they emerge, such as AMP, GraphQL and React (a short sketch of this kind of integration follows at the end of this section).

## A completely secure tool

As an open-source CMS, WordPress has open source code, which means that hundreds of volunteer developers are on the lookout for weaknesses and work to remedy them. A security team handles vulnerability reports and publishes security patches. Security is a top priority for the developers who contribute to the heart of WordPress.

## Google loves WordPress

WordPress has all the tools you need to optimize the SEO of any site or interface. With WordPress, your natural search ranking will never be a worry; on the contrary, it will be a strategic asset for your project.
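As a concrete illustration of the interoperability mentioned above, here is a small sketch showing how an external TypeScript front end (React or otherwise) or a third-party tool can read WordPress content through the REST API that ships with the CMS core. The site URL is a placeholder assumption; the /wp-json/wp/v2/posts route and the per_page and _fields parameters are part of core WordPress.

```ts
// Hedged sketch: reading posts through WordPress' built-in REST API
// (runs under Node 18+ or in a browser; the site URL is a placeholder).
type WpPost = {
  id: number;
  link: string;
  title: { rendered: string };
  excerpt: { rendered: string };
};

const SITE = "https://example.com"; // assumption: the WordPress site to query

export async function fetchLatestPosts(perPage = 5): Promise<WpPost[]> {
  const url = `${SITE}/wp-json/wp/v2/posts?per_page=${perPage}&_fields=id,link,title,excerpt`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`WordPress REST API error: HTTP ${res.status}`);
  return (await res.json()) as WpPost[];
}

// Example usage: log the latest headlines, e.g. to feed a headless front end or a sync job.
fetchLatestPosts().then((posts) =>
  posts.forEach((p) => console.log(`${p.title.rendered} -> ${p.link}`))
);
```

GraphQL access works along the same lines but relies on the WPGraphQL extension rather than core, so it is not shown here.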
## The simplest CMS for contributors The WordPress back office is renowned for being the simplest of all CMS for contributors, including neophytes. The user interface is very user-friendly and is constantly being improved. At Be API, we take a partnership-based, co-creative, iterative and rigorous approach to mobilizing the full potential of the WordPress ecosystem for our customers. In concrete terms, here's what you can expect if we have the pleasure of working together on one of your strategic digital projects. ## Co-create We conduct co-creation workshops with our customers, using methodologies inspired by design thinking to bring out the best ideas. ## Hand in hand We see our customers as partners, working side-by-side with them towards specific goals that we set together. In short, team spirit is the rule, and your project becomes our project. ## Staying focused on your users Working with us, your customers' or end-users' interests will become our obsession, in order to create useful, high value-added projects. ## Making data talk To get to know your customers better, we dissect your data. Much of it will give us the key to your users' known and unknown needs, expectations and frustrations. ## Talking to the right expert at the right time Be API project managers are there to listen to you every day. They are committed to introducing you to the right expert for your problem at the right time in the project. UX, UI, back, front, SEO... you'll only meet people who are passionate about WordPress! ## Test, learn and iterate We understand that it may sometimes be necessary to question certain assumptions during the course of a project. Between launch and go-live, the situation can change: we're used to adapting. Our methodology is flexible and inspired by agile methods. ## Challenge yourself We need to understand your challenges and your context (internal and external) to best solve your problems. That's why we won't hesitate to ask you questions when you make a request, and to "dig deeper" in order to bring you the highest added value and the best advice. ## Prioritize what's important Not what's urgent. To achieve the goals we've defined together, we help you on a daily basis not to let emergencies get the better of the important work on your WordPress project. ## Clarity at every stage of the project You need to know where you stand and where you're going. We are committed to transparency in our communication, our methods and our management of your projects. Since its creation in 2009, Be API has specialized in WordPress, participating in its development and contributing to the open source community. ## WordPress by passion <NAME>, who founded Be API 10 years ago, is the "standard-bearer" of WordPress in France, and a member of the French association's 2019 board. He plays an active role in the CMS's expansion. Co-founder of the WordPress-Francophone association and co-author of the reference book (by Pearson) on WordPress, Amaury has helped translate WordPress in the past. He has been a member of the WordPress France WordCamps organization since its inception. Last but not least, he has developed numerous extensions, some of which have regularly been in the top 10 downloads, such as the Simple Tags extension. ## We're active contributors to the community Every Be API expert, from developer to project manager, SEO, design... is a WordPress specialist in his or her field. 
That's why we're actively involved in the community, through conferences, training courses and the development of plugins and extensions. We publish our projects on WP.org and GitHub.

## For 10 years we have been giving talks at WordCamps

Every year, you can listen to our speakers at WordCamps & WPTech in France. Our most recent talks are listed below.

### Our conferences

* <NAME>'s conference - WordCamp Paris 2016
* <NAME>'s conference - WordCamp Paris 2015
* Amaury Balmer's conference - WordCamp Paris 2016
* Amaury Balmer's conference - WordCamp Paris 2015
* Amaury Balmer's conference - WordCamp Marseille 2017

## Our badges show our commitment to the ecosystem

We actively contribute to the development and improvement of the WordPress core. For this reason, we have 6 badges.
# The WordPress website factory: a strategic ally for large organizations

Large companies, particularly those present in several markets or regions, face the challenge of improving efficiency while reducing the costs associated with their digital platforms. In this context, the industrialization of digital production is becoming essential. The website factory, deployed as a web factory or a WordPress Multisite, is an ideal response to this challenge. With a centralized platform, the system facilitates the deployment, management and maintenance of several websites, and is a key element in the digital strategy of major groups.

## I. Industrializing digital production

A website factory is a robust system, specially calibrated for structures such as multinationals, public institutions or franchises. These entities frequently have to manage a vast network of websites, often with similar needs. There are many advantages to setting up a website factory, starting with the guarantee of brand consistency. Each site is visually and functionally consistent, consolidating the company's identity and overall message. The organization no longer has to design each new site from A to Z, which means a real reduction in production costs. Centralized management of maintenance and updates facilitates interventions across the entire network.

Rapid deployment is also a major advantage: new sites fit more easily into existing systems. What's more, a website factory optimizes resources, as teams can share content and tools. But while standardization is the watchword, flexibility is not sacrificed. Each site can be adapted to the specific needs of a given audience or market. This adaptability goes hand in hand with the easy integration of technological innovations, deployed uniformly on each site. Every piece of feedback feeds into a continuous improvement process, enabling the organization to fine-tune its strategy over the long term.

So, far from being just a technical tool, the website factory is becoming a genuine strategic lever. For large organizations, it means a stronger digital presence, greater responsiveness to market challenges and, ultimately, a real competitive advantage.

## II. WordPress: a CMS calibrated to support a website factory

In a previous article we introduced the WordPress Multisite feature, which enables administrators to manage multiple websites simultaneously from a single interface. In this way, the CMS includes in its very structure the ideal foundations for developing a solid website factory.
Combined with the Gutenberg editor, whose modular blocks and flexibility make it easy to design sites at scale while preserving visual and functional uniformity, WordPress is overall the ideal choice to support any website factory system. Powering 43% of the web, it is the undisputed standard of quality in the CMS field. Its reliability and security are constantly verified by a vast community of professional developers who continually scrutinize, improve and reinforce its structure.

Last but not least, the CMS offers a dedicated package for key accounts, making it the tool of choice to support a complex site factory. Designed to meet the different needs of large organizations, WordPress VIP's strength lies in the comprehensiveness of its services:

* Optimal management of traffic peaks, enhanced security and 24/7 expert support.
* Sophisticated workflows, with separate environments for development, testing and production.
* A solid infrastructure, an exclusive network of hand-picked partners and pre-configured integrations with powerful third-party services.
* Proactive management of updates to take advantage of the latest innovations and security standards, ensuring long-lasting digital structures.

As the first and only WordPress VIP partner agency in France, Be API has proven expertise in managing large-scale projects. This synergy between our digital agency know-how and the power of WordPress VIP enables us to offer top-of-the-range support for site factories.

## III. Keys to successful implementation

If you are planning to deploy a website factory system, here are some key points to bear in mind:

* Clear definition of needs: this implies a thorough understanding of the objectives, the different target audiences and the technical specifications required.
* Flexible design: the architecture of your website factory must be designed to be easily adaptable, so that functionality can be added, modified or removed without disrupting the overall system.
* Uniformity vs. customization: although the main objective is to industrialize website creation, each site must retain the flexibility to integrate specific customizations.
* Optimized performance: a robust infrastructure is essential, designed to efficiently handle the traffic generated by multiple sites while ensuring fast loading times.
* Seamless integration: the site factory must be designed to integrate easily with other tools and systems, whether CRM solutions, marketing tools or other third-party technologies.
* Enhanced security: with multiple sites managed on a single infrastructure, security risks can be amplified. Strict security practices, regular audits and real-time updates are essential.
* Training and support: the team in charge of the website factory must be trained not only in WordPress, but also in the particularities of the website factory itself. Ongoing support must be provided to resolve any problems and assist users.

In short, the successful implementation of a WordPress website factory lies not only in the technology chosen, but also in a strategic approach and meticulous execution. The Be API team can help you with your website factory creation or redesign project. We support our customers every step of the way, from strategy consulting to development, design and staff training, covering both WordPress and Gutenberg. To ensure that your website factory is functional, secure and sustainable, we deploy WordPress maintenance tailored to the needs of each company.
The digital world is evolving rapidly, and the need to optimize and streamline digital platforms has never been more important for organizations. Thewebsite factory WordPress platform, combining modularity and security, is a strategic tool for meeting today's challenges. More than just a technical platform, it's a vision that combines technology and marketing to deliver an optimal online presence. With the help of expert partners, companies have access to the support they need to deploy a robust and durable system. # WordPress multisite: efficient content management and enhanced governance In today's digital world, large organizations face a major challenge: efficiently managing all their websites, sometimes deployed in several countries. All the while ensuring content harmonization and optimizing maintenance costs and deployment times. This can be a complex task, especially when it comes to preserving solid governance and maintaining a consistent brand image. Setting up a WordPress Multisite is the right solution to meet this technological and strategic challenge. ## I. The challenges of managing a network of websites for large organizations In a company with a vast network of websites, managing each individual site can quickly become a major challenge. These platforms require regular updates, security patches and content additions. Coordinating these tasks efficiently can be complex and time-consuming. What's more, the multiplication of administration interfaces and user accounts can lead to confusion and potential security problems. Brand harmonization is also an issue to be addressed in order to guarantee an optimal user experience. Care must be taken to ensure that key messages, media libraries, legal texts or news are properly unified. The absence of an efficient system to automatically manage this content can lead to inconsistencies. Finally, budget constraints and production deadlines are important factors. Every website requires financial resources for development, hosting, maintenance and updates. So there's a real challenge in pooling costs, reducing lead times and optimizing available resources. ## II. WordPress Multisite: its advantages in content management Multisite is a feature built directly into WordPress that offers a centralized approach to managing an entire network of sites. Instead of installing and administering each site separately, WordPress Multisite brings them together in a single technical installation. This centralization greatly simplifies management, enabling all sites to be updated, maintained and managed from a single interface. This approach offers significant advantages for content harmonization: in particular, it automates the process by defining global rules and templates. * For example: you want to share a media library across all your sites, while ensuring that image credits remain up to date. We can deploy a technical solution to automatically remove images whose credits have expired, across all your sites. * Another example: you want to modify your legal notices. At Be API, we develop specific plugins capable of handling these automatic updates - optimizing time and resources, while ensuring real consistency across your websites. What's more, WordPress Multisite makes deploying new sites faster and more efficient. Rather than setting up a new installation, you can easily create new sites within the existing network. This pre-established approach significantly reduces deployment times. 
Managing security updates and patches is simplified, as they can be applied to the entire network in a single operation. ## III. Setting up a WordPress Multisite for your business Before setting up a WordPress Multisite, you need to make sure that your infrastructure meets the necessary technical requirements. WordPress experts can help you in this decisive phase. They'll help you set up the right hosting for your network of sites, making sure you have the right resources in terms of storage and performance. In terms of best practice, careful planning of your site network structure is recommended. Define a clear strategy for the main sites and sub-sites, taking into account your company's organizational logic. When setting up a WordPress Multisite, strategic decisions are necessary to ensure optimal governance. Determine who will have access to network administration, and define roles and permissions accordingly. You can appoint global administrators to oversee the entire network, as well as specific administrators for each site. When it comes to managing themes and plugins, it's advisable to define a policy aimed at ensuring visual and functional consistency between sites. You can select recommended themes and plugins for common use, while allowing site administrators to activate additional themes and plugins according to their specific needs. Setting up a WordPress Multisite is a powerful solution for companies wishing to optimize their governance and content strategy. This CMS-integrated feature enables centralized management, automated content harmonization and rapid deployment of new sites. Multisite offers significant advantages in terms of productivity, consistency and efficiency. By following best practices and taking strategic decisions into account, your company can benefit from simplified management, a unified brand image and optimized costs and lead times. # Optimizing your Figma workflow: 10 designer tips Figma quickly established itself as the essential tool for our UI/UX work within the agency. Thanks to its functionalities, it enables us to collaborate in real time on the same document, present designs effectively to our customers and share them instantly with the development team. In this article, Stéphane, UX designer at Be API, shares his expert advice on how to make the most of Figma's features. Whether you're a seasoned designer or a passionate beginner, these valuable tips will help you streamline your production process. ### 1. Create your own components efficiently Avoid repetitive tasks and gain in productivity by creating your own components with this key combination: ⌥ + ⌘ + K. For maximum efficiency, you can select multiple elements and create several components simultaneously, or even create a set of components from the dedicated menu. By using this feature, you'll save precious time. ### 2. Optimize the use of component properties To simplify the management of component instances and reduce the number of variants required, take full advantage of Figma's properties. For example: * Use the Boolean property to hide certain elements * Use Instance swap to replace instances of internal components with others. * Use the Text property to easily modify specific text within your components. By taking advantage of these properties, you can reduce the complexity of your project and save time when customizing your components. ### 3. 
Turn your component sets into autolayouts It may seem simple at first glance, but it's a real pleasure not to have to adjust elements to the nearest pixel. A great trick is to use autolayout for your component sets. And the icing on the cake is autolayout's wrap ⤶ option, which automatically passes elements to the line if they exceed the available width. By adopting this approach, you'll gain enormous flexibility when laying out your designs, while avoiding tedious adjustments. ### 4. Use the "Paste to replace" function Say goodbye to the repetitive tasks of re-positioning and deleting items you want to replace, using the "Paste to Replace" function. Simply use the key combination ⇧ + ⌘ + R This feature lets you copy a new element and paste it directly onto an existing one, instantly replacing it while preserving its positioning and properties. ### 5. Display contours for better visibility Never lose your hidden elements in Figma again. Simply press ⇧ + O to display the outlines of all elements, even (and especially!) those that are hidden. This handy trick will enable you to clearly visualize the layout of your elements and work more efficiently. ### 6. Organize your styles for easier management Make it easy to manage your styles by using the "/" symbol in their names, to group them into semantic groups. For example, you can use names such as: * "border/text/background for color styles * "title/paragraph/label" for text styles * "state/elevation" for effect styles In addition, you can select several styles and group them together using the shortcut ⌘ + G. This technique will allow you to associate your styles according to their use. Keyboard shortcuts are really Figma's cheat code: they allow you to achieve "pixel perfection" by spending less time in the menus. <NAME>, UX designer ### 7. Work from a Design System Let's face it: starting from a blank page isn't always very inspiring. It's better to work with a guide, to benefit from pre-established best practices. At Be API, we work with a UX template kit specially designed for Gutenberg designers.We 've made it available as open source on Figma, so don't hesitate to get it for your next WordPress project. ### 8. Copy a PNG for quick sharing Need to quickly share an item? You can simply copy the image using the shortcut ⇧ + ⌘ + C, then paste it into your messengers or presentations. This lets you share it quickly, without having to export it first. ### 9. Share a link to a specific frame When sharing a Figma link, it's best to share direct access to the specific item you're interested in. To do this, select the element, right-click and choose "Copy link" from the context menu. You can also paste a link onto text, which is particularly useful for creating quick-access links within a document. ### 10. Perform your calculations directly in Figma You no longer need a calculator to determine exact dimensions or spacings in Figma. The tool lets you perform these calculations directly in the dimension field. For example, you can add "+42" to your current dimensions, or divide them into four directly. Figma supports classic operations such as +, -, /, *, as well as parentheses. We hope these tips will help you boost your productivity and unleash your creativity. If you have any others in store, please don't hesitate to share them in return. # WordCamp Europe 2023: An immersion in the international WordPress ecosystem Athens, June 2023 - This year, WordCamp Europe took up residence in the famous Greek capital. 
The event, which also celebrated the 20th anniversary of the CMS, brought together over 2,545 web experts, professionals and/or enthusiasts from 94 different countries. A look back at a particularly successful event! ## At the heart of the event ### Contribution Day: the perfect way to get started The Contribution Day, held the day before WordCamp, brought together 658 volunteer participants, including four members of the Be API team: Nicolas, agency CTO, Clément, plugin developer, Anne-Marie, project manager, and myself, Alizée, communications manager. We were present at the Polyglot table, dedicated to translation, and the WP CLI table, dedicated to the command-line interface. This day gives experienced contributors the opportunity to put their expertise into practice, and offers newcomers a gentle introduction to the world of WordPress. Tables specially reserved for beginners were available to familiarize them with the contribution process. Taking part in this day dedicated to "giving back" is a unique opportunity to contribute to the improvement of the platform at the heart of our business. It's a behind-the-scenes look at WordPress, and an opportunity to enjoy a first day in a small group before the official launch of WordCamp. ### A program for all audiences WordCamp Europe opened its doors the following day. On the program: two days packed with conferences and workshops covering all aspects of the CMS, from technical development issues to accessibility, SEO, marketing and much more. Whatever your area of expertise or technical level, tailored presentations were offered, reinforcing the principles of inclusivity and sharing within the ecosystem. The diversity of the speakers was also noteworthy. Renowned experts and contributors shared their knowledge, while new speakers brought a breath of fresh air and innovation. A special mention goes to <NAME>, a 16-year-old speaker who took to the stage to talk about the prospects of Generation Alpha on WordPress - i.e. people born after 2010 (you're welcome). ### A host of sponsors It's impossible to talk about WordCamp without mentioning the event's sponsors. This year, they outdid themselves to offer original and immersive entertainment. Visiting the stands is an unmissable experience, offering the opportunity to interact with representatives of major companies in the sector, discover new offerings and establish potential partnerships. The sponsors helped to create a dynamic atmosphere combining communication and entertainment. Of particular note were the "cap grabber" animation on the Elementor stand and the immersive video game offered by Ionos. Their financial support enables the organizers to put on a large-scale event while offering affordable tickets, unlike other tech events where tickets can be very expensive. In exchange, sponsors benefit from exceptional on-site visibility. ## The international WordPress community ### A unifying effervescence WordPress, powering over 40% of the web, benefits from the energy of thousands of people within its ecosystem. And there's nothing like the experience of a WordCamp to grasp its full dimension. The enthusiasm and effervescence that emanate from these international events are truly unique, and every detail of organization is carefully thought out to ensure the well-being of participants. The joyful reunion of old members and the warm welcome of new ones create a convivial atmosphere that lasts throughout the event. 
### Embodying shared values The WordPress community stands out for its ability to embody the fundamental principles of the CMS. In this respect, WordCamp Europe fulfilled its mission by promoting inclusiveness, collaboration, sharing, well-being, mutual aid and innovation. Through strong messages and concrete actions, the organizers and volunteers created an event that plunges us right into the heart of the WordPress DNA: an essential element for strengthening the sense of belonging, even on an international scale. ## WCEU: a must-attend event for Be API ### A window on innovation Events such as WordCamp Europe are unique opportunities to stay at the cutting edge of web innovations, particularly those related to WordPress, in a constantly evolving industry. Conferences and discussions led by experts explore advances in development, design and security on the platform. New features, best practices and strategic advice are shared, giving participants a valuable perspective on the evolution of the CMS and its ecosystem. This is an opportunity for Be API, as a specialized WordPress agency, to strengthen our expertise and enrich our offerings to deliver ever more effective and sustainable solutions to our customers. ### The power of networking And one of the most valuable aspects of these gatherings is the opportunity to build strong relationships with other industry professionals. Discussions and workshops are privileged opportunities to exchange ideas with our peers, meet new players and discover new approaches. WordCamp is also an opportunity to bring partners together at dedicated evenings, in parallel with the main event. Our Be Apistes were able to meet up with our WordPress VIP partners at a party organized by them. An important opportunity to discuss trends, evolving uses and issues, and to question the best ways of meeting market requirements. In conclusion, it was a very successful 2023 edition, combining the power of an international tech event with the inspiring energy of the WordPress community. Many thanks to the organizers and volunteers for their hard work and commitment, and see you next year in Turin, Italy, for the next WordCamp Europe! # The inspiring story of <NAME>, a French WordPress pioneer This year, WordPress celebrates its 20th anniversary: a milestone we've decided to celebrate by delving into the fascinating career of Be API's CEO, whose professional path is closely linked to the development of WordPress in France. Behind his humility and authenticity lies a visionary spirit driven by solid convictions. The right mix that led him to co-found the French WordPress community in 2005, then a few years later, the first French WordPress agency. Although this story didn't begin in a garage, it's a testament to the success of a man who has worked from the outset to position the CMS at the heart of the enterprise market. Together, we look back at WPFR's beginnings, the golden age of web 2.0 and the role of people in the agency's success. ## WordPress & WPFR: the origins In 2005, <NAME> began studying computer engineering in Valenciennes, France, to learn programming. He quickly made a name for himself with his interest in the Internet and its community aspect - a passion that found little echo among his fellow students, who saw the web as an introduction to programming. Despite this, Amaury persevered and began creating websites alongside his studies. This was the golden age of blogging, with many players seeking to invest in the web to reach a wider audience. 
Contributors were gradually taking over from webmasters, creating unprecedented demand. What particularly attracted Amaury were the communities that animated blogging platforms. He discovered WordPress in 2005, when the CMS was just two years old. In France, the community consisted mainly of two people: <NAME> and <NAME>. Attracted by this promising tool and fledgling project, Amaury offered to join them, putting his skills as a developer to good use. Under the impetus of this trio, the French-speaking WordPress association - WPFR - was born. Amaury laid the foundations of the WPFR site and created the support forum. Over the years, he devoted considerable time to contributing to the forum, accumulating several thousand posts to his credit.

I've always been a "false developer". In working groups, I was the one who wondered whether the project was practical for users.
<NAME>

By placing contribution, accessibility and ease of navigation at the heart of his concerns, Amaury was responding to the needs of users at a time when blogs were not yet a mainstream tool: it was the professionals who came knocking directly at his door. Thanks to his active involvement on the WPFR forum, he gained visibility. Although his involvement began on a voluntary and disinterested basis, it offered him his first professional opportunities in 2006. Contacted by a communications agency, he was entrusted with the creation of a blog for a horse-racing festival, which led him to set up his auto-entrepreneur status. One thing led to another, the requests multiplied, and he gained valuable experience.

## From virtual to real

As part of his studies, Amaury began an internship with a web agency in 2007. It was here that he worked on his first major project for the Sud Ouest newspaper, which commissioned a blog platform to coincide with the presidential election. The project confirmed Amaury's belief that the WordPress adventure was just beginning. Feeling that he didn't quite fit in with the company's values, he resigned after only a few months' internship to launch his own business, wp-box.fr. This company enabled him to offer a range of WordPress-related services, making him one of the first French players to make a living from the CMS.

At the same time, Amaury remained extremely active in the growing French-speaking community. Alongside Xavier and Matthieu, he organized the first French BarCamps. The concept was simple: "We meet in a café, and anyone who wants to talk about WordPress is welcome. It's homemade, simple... but effective!" These participative workshops grew and attracted more and more participants, with Amaury playing a key role as technical manager. "We're moving from the virtual to the real. The community, the mutual support: it becomes tangible." He also contributed to open source, notably participating in the translation of WP-MU and creating more than a dozen open-source plugins in parallel with his professional work. Since the start of his career, he has made the notion of "giving back" part of his day-to-day work: giving his time and expertise to the CMS in return for the possibilities offered by the tool.

During one of these famous BarCamps, the three friends were visited by <NAME>, the American creator of WordPress. He saluted the French initiative, delighted to see his project so passionately supported and animated.
The day ended with a meal on the roof of the Georges Pompidou restaurant, a memorable moment tinged with a little solitude: "He speaks with a strong American accent, and I can barely make out a word. So Xavier translates and I listen, drinking beers." An important recognition that remains engraved in our CEO's memory.

As the years went by, demand continued to grow. It was at this point that Amaury founded his second company, joining forces with one of his customers, <NAME>, then director of the Media Factory agency. The result was Be API, France's first pure-player WordPress agency.

## Be API: WordPress to infinity and beyond

A period of carefree growth began for Be API, marked by the arrival of the first customers, which also led to the first recruitments. Developers were first hired to support Amaury, then project managers were hired to support customers on larger projects. True to form, the CEO attaches great importance to the human criterion: he looks for experts, but also for people who are caring, respectful and positive. This philosophy, which stands out in the agency sector, has led some team members to remain involved in the adventure for over ten years - the longest-serving have even become partners (greetings Nicolas, Meriem, Alexandre and Yann!).

In terms of strategy, the agency remains true to what has made WordPress so successful: a contributor-centric approach.

We didn't want to be an agency of the past, charging for text changes on a site. That was off-limits for me.
<NAME>

In these projects, everything must be contributory, and the ultimate goal is to make the customer autonomous. With this approach, and by responding to technical and organizational needs, the agency has succeeded in carrying out a real mission of evangelization and demonstrating the relevance of WordPress to support large-scale digital projects. Be API is structured to support CIOs and key accounts with increasingly ambitious budgets and challenges: our offering and methodology are evolving to meet the needs of high-traffic sites and to work on complex WordPress creation, redesign or maintenance projects. We're a long way from the blogs of the early days!

At the same time, Amaury and his team make sure they have time to continue their involvement in the French WordPress community. When WordCamps arrived in France in 2012, the agency took part, providing logistical, artistic, technical and financial support. Team members are also involved in numerous inter-CMS professional events, such as agoraCMS and CMSday. This commitment and the realization of ambitious projects were rewarded in 2019, when Be API became the first French WordPress VIP partner agency.

## Outlook & future

But is it an end in itself? Of course not! The visionary spirit of the early days remains attentive to the market and recognizes the importance of evolving the agency to meet new needs. "It's never static. We have to constantly evolve our teams, processes and methodologies to adapt to customer issues and achieve our objectives." So it's time for development, with many ideas currently in the pipeline - yes, that's almost teasing!

As for his feelings on how far he's come, a hint of nostalgia can be felt when Amaury recalls the prosperous early era of the CMS. A period of rapid growth and a disruptive technical solution that shook up the market - a real "game changer". Today, with the global success of WordPress and the breadth of its ecosystem, the challenge is different.
The challenge is to show that added value comes not only from the tool itself, but also from the expertise that accompanies it, to create customized solutions. <NAME> Finally, beyond the commercial stakes, Amaury concludes with the aspect that is closest to his heart in terms of the journey: the exceptional human experience represented by the creation of WPFR, then Be API, which he describes as a real collective success. " In 15 years, over 150 people have contributed to the agency's success. It has revealed vocations, created solid relationships and built great stories. It's thanks to all these people that we got to where we are today". Many thanks to Amaury for this interview! Like it? Find him on LinkedIn to find out more about his career, or follow the Be API page, so you don't miss any future stories. # Optimizing WordPress design with Johannes The arrival of Gutenberg in 2018 marked a turning point at Be API. The latest WordPress content editor, more flexible and "user-friendly" than its predecessors, quickly established itself as the optimal solution for carrying out ambitious projects in perfect co-creation with our customers. Convinced that the success and longevity of a site depend to a large extent on its practicality and intuitiveness for end contributors, we have totally adapted our methodology around this editor. And that of course includes the design phase! That's why our team has created Johannes, a "Gutenberg ready" UX kit that lets you quickly and easily design wireframes on WordPress. Available as open source on Figma, the tool is now an integral part of our agency methodology. In this article, we look back at the origins of the project and the integration of Johannes into our production process. ## The genesis: thinking Gutenberg right from the design phase Summer 2021. The Be API Team Leaders get together at a seminar to think about the following problem: how to integrate Gutenberg into all the production stages of a WordPress site, to exploit the tool's full potential? We're thinking directly about how to reorganize the technical side of things, how to follow up with our customers and how to train all our teams in the content editor. There's also the question of integrating Gutenberg right from the design phase. In practice, design teams have already been creating wireframes for Gutenberg projects for some time. This step is essential, as the qualification of the blocks has a considerable impact on the rest of the project. This gave rise to the idea of creating a UX kit integrating all the native Gutenberg blocks, as well as extensions and the most commonly used block packs, already calibrated for the editor. The design team set about creating these templates, and the first version of Johannes was born. Why does this type of tool make sense? Because the arrival of Gutenberg and its atomic design principle has completely transformed the way we design sites on WordPress. We've gone from fixed, fully customizable templates to the ability to create patterns, use native blocks in the editor and create custom blocks. Upstream of each design phase, we create a document called the "design system", which groups together all the types of blocks that will be present and used throughout the site. In a way, this is what Johannes is: an ultra-complete reference document that can be rolled out and adapted to each project. It allows you to quickly and easily create your own wireframes, already compatible with the editor. 
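To make the distinction between native and custom blocks a little more concrete for developer readers, here is a minimal sketch of how a custom Gutenberg block can be registered once a wireframe has qualified it. The block name, labels and attribute are purely illustrative (they are not part of Johannes or of our production code); the sketch only assumes the standard @wordpress/blocks, @wordpress/element and @wordpress/block-editor packages.

```ts
// Minimal sketch of a custom Gutenberg block (illustrative names only,
// not part of the Johannes kit). Assumes the standard WordPress packages.
import { registerBlockType } from '@wordpress/blocks';
import { createElement } from '@wordpress/element';
import { useBlockProps, RichText } from '@wordpress/block-editor';

registerBlockType('beapi-demo/callout', {
  apiVersion: 2,
  title: 'Callout (demo)',
  category: 'text',
  attributes: {
    content: { type: 'string', source: 'html', selector: 'p' },
  },
  // What contributors see and edit inside the block editor.
  edit: ({ attributes, setAttributes }: any) =>
    createElement(RichText, {
      ...useBlockProps(),
      tagName: 'p',
      value: attributes.content,
      placeholder: 'Write the callout text…',
      onChange: (content: string) => setAttributes({ content }),
    }),
  // The static markup saved into post content.
  save: ({ attributes }: any) =>
    createElement(RichText.Content, {
      ...useBlockProps.save(),
      tagName: 'p',
      value: attributes.content,
    }),
});
```

A block qualified as "native" in the wireframes needs none of this extra development, which is exactly why the distinction matters so early in the design phase.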
## A tool to support our methodology

At Be API, designing WordPress sites with the Gutenberg editor is our daily business. That's why tools like Johannes add real value to our production chain, enabling us to:

* optimize our design phases - the wireframe base already exists, ready to be adapted to each project
* focus on the essentials - namely UI, storytelling and the user journey

By standardizing the design, we can guide the artistic direction and create mock-ups that are faithful to the final rendering, while respecting the technical prerequisites. What's more, projects are already calibrated for Gutenberg, which also saves us time during the development phase.

At the agency, we work with our customers on large-scale projects. As such, we attach great importance to the implementation and running of a well-honed methodology. A good methodology is not only based on detailed organization - masterfully led by the project team - but also on the use of appropriate tools. That's why Johannes is particularly well-suited to our organization. We draw on our experience to constantly improve our processes, while taking care to keep the principle of co-creation at the heart of our methodology. The aim is to integrate all project stakeholders and ensure that we're all working in the same direction.

## Get the whole team in the same boat

And for good reason! Close collaboration remains a key element in the success of any project. By integrating the principle of block and atomic design right from the mock-up phase, we not only work with Gutenberg in design, but also with our customers, right from the start of the project. The "Gutenberg ready" wireframes we present allow us to integrate the WordPress interface right from the UX phase. This helps us explain how the CMS works to end contributors, and clarifies the native blocks we'll be using, as well as any additional development needs to create custom blocks. And I think everyone agrees that a project is much more pleasant and fluid when all stakeholders speak the same language - and rest assured, you don't need to be a webmaster to understand Johannes' language.

The use of these templates enables us to produce a more detailed design that is easier for everyone involved in the project to understand. On the customer side, there's real added value in being able to picture contributing with Gutenberg and prepare content right from the mock-up phase.
Elisabeth, Digital Project Manager

In working on this kit, we have retained the original ambition that motivates us at Be API: to create practical, useful projects for end-users and contributors alike. It's by combining our business expertise with the power of your content that we create relevant and lasting digital experiences. We're also happy to share this resource with the WordPress community, in the spirit of "giving back". You can find the UX kit available as open source on Figma here. Are you a designer or freelancer? Don't hesitate to get it, use it and give us your feedback!

# WordCamp Paris 2023: a must-attend event returns

On Friday, April 21, 2023, WordCamp Paris took place: a day dedicated to physically bringing together members of the French-speaking WordPress community. The gathering was held at the Halle Pajol, and marked the long-awaited return of this flagship event for all WordPress enthusiasts and professionals. A look back at a day punctuated by fascinating encounters and enriching exchanges.

For this 2023 edition, there was a real buzz of excitement in the air.
And with good reason: the last WordCamp Paris was held in 2019. In the meantime, the global pandemic had suspended any possibility of organizing the next event, scheduled for 2020. Four years later, WordCamp Paris was finally back, feeling like a reunion after a long break.

On the program for the day: a list of conferences and workshops on a wide variety of CMS topics. From pure development to SEO, from design to WordPress philosophy, there was (truly) something for everyone. Attending a WordCamp means putting into practice the principle of the open source project: "giving back", i.e. contributing to enrich the WordPress ecosystem and keep the community alive. It's also an opportunity to meet digital players such as specialized hosting providers and plugin publishers, who came to present their offerings as sponsors of the event. These gatherings are a must for all WordPress professionals, and we're delighted to see them back.

## The Be API team at the rendezvous

More than twenty members of the team made the trip. Like the WordCamp participants, all the agency's professions were represented - from the development team to SEO, design, marketing and communications. We came as a group to attend the various conferences, but also to support our four Be APIste speakers this year:

* <NAME>, CEO and founding member of the WordPress community in France, presented "The Detective's Guide to WordPress Troubleshooting". A conference to learn how to recognize and diagnose various errors.
* <NAME>, Head of Design, and <NAME>, UX Designer, presented Johannes, our "UX kit for designing with Gutenberg". The ideal tool for optimizing your WordPress design phases, available in open source on Figma.
* Acquisition expert <NAME> shed some light on "what more can you do when you think you've already done everything in SEO?" A conference to discuss the additional steps to take once the basic SEO techniques have been integrated into your site.

Presentations followed one another throughout the day, in a variety of formats - lectures, lightning talks (10-minute presentations) or "parlottes" (discussions focused on specific topics, open to all). Of course, there were also some eagerly awaited new topics, such as the place of AI in WordPress, Gutenberg and its everyday use, and the integration of eco-design into a methodology.

## A recipe for success

The WordCamp was unanimously described as a real success, thanks of course to the speakers, organizers, volunteers, sponsors and participants. The event highlighted the real strength of the WordPress community, and the values of inclusivity and sharing it stands for. Whether you work for an agency or a company, are a freelancer, a user or an enthusiast, everyone is welcome to take part in a WordCamp. Another key to its success was the fact that professionals were invited not to advertise their services during the conferences. The focus was on contribution, and the players played the game, guaranteeing a real "good vibes" atmosphere.

I experienced all this as a beginner: yes, it was my first WordCamp! I went there with no particular expectations, and I wasn't disappointed. I discovered the "secret sauce" of WordPress: a committed community, working to continuously improve an excellent product. The promise of the "giving back" mindset has been kept: giving is also receiving in return, both professionally and on a human level.

## In brief

This WordCamp Paris lived up to expectations, and we invite all curious WordPress enthusiasts to take part in a WordCamp of their own.
We'd like to take this opportunity to thank the organizing team and volunteers for their commitment and for organizing the day in such a short space of time - hats off to Marjorie and her team! For our part, we look forward to seeing you in early June 2023 at WordCamp Europe in Athens, where we'll be taking you right to the heart of the event.

# Opquast, the icing on the cake

Today, more than ever, we are constantly discussing and exchanging ideas with our customers on subjects such as accessibility, eco-design, security, performance and SEO for the websites we build for and with them... all approaches, standards and best practices that boost the meaning, interest and value of the sites we produce. These topics can thus be grouped together under a single umbrella term, quality, of which these five topics are both pillars and valuable indicators.

While production processes (in terms of tooling and organization) are well established and tried and tested at Be API, applying the quality mindset to all aspects of the user experience had yet to be consolidated across the agency (all professions). And as there are no two ways to get into the right running order, 2022 was indeed the year of an "acute opquastization" at Be API!

## For latecomers...

Yes, we could have done without reminding you who Opquast is, but what follows will at least have the merit of enabling those who don't yet know about this organization (shame on you) to jump on the bandwagon. The official website will explain it much better than we can here, but in a nutshell, Opquast (Open Quality Standards) is:

* a state of mind at the service of users
* a cross-functional vision in a model that is easy to understand, remember and use
* a set of clear, consensus-based rules, materialized and categorized in a checklist
* a certification recognized in the web world
* training to take you further
* a network of professionals who speak the same language

This will help you understand what Be API and many other web professionals and students have gone through in recent years.

## Quality in Be API's DNA

Since the company was founded in 2009, quality has always been part of Be API's vision and the actions implemented to meet this need:

* At agency level: quality is above all what the agency wants to be known and recognized for;
* At customer level: quality is inevitably and almost systematically at the top of the pile of customer requirements.

Be API has long clearly positioned itself as a "Creator of Digital Happiness". How could the agency make such a promise to its customers if quality wasn't already at the heart of its way of working? The agency has constantly adapted to changes in the market in order to progress and develop its teams and offer its customers constructive, logical and efficient collaboration:

* Technically, this involves the gradual adoption of working methods that are adapted to changes in the business.
For our part, code architecture, peer programming, versioning, pull requests and code review have been part of our routine for a long time now, not to mention our numerous maintenance activities (TMA) to ensure the follow-up and evolution of our customers' various sites;
* On an organizational level, all processes (agency/client workshops, design follow-up, estimates, project breakdown, production follow-up) have also been in place for a long time, ensuring that all project participants are on the same wavelength, from start to finish;
* Finally, quality development within the agency would not be possible without interaction and confrontation with peers in the industry. Taking part in dedicated community events (notably the WordPress community, with national and international WordCamps), participating technically in the development of our favorite tool (dedicated open-source team), and continuous training (software, languages, frameworks, best practices, etc.) are all good ways of challenging (and at the very least maintaining) a high level of quality.

## General tour, certification for everyone!

Fortunately, web quality isn't just a matter of obtaining certification (even if it does make a nice badge to show off in client presentations or on your CV), or of consulting and filling in a checklist at the end of a project. That said, preparing for the "Mastering Quality in Web Projects" certification is still the best way to get started (or get up to speed) with web quality.

Summer 2022 was a studious one at Be API. From June to September, each BeApiste learned, revised and prepared his or her subject. Certifications were passed as each person progressed, and communicated to the rest of the team as they went along. The very good scores, which came in dribs and drabs, motivated the troops to challenge each other and try to outdo their neighbors. On several occasions, the rules were at the heart of informal conversations within the teams (sometimes at the end of meetings, during lunch breaks, or even in the middle of a game of table soccer), creating an overall synergy between the learners. And this is one of the greatest benefits of preparing for this certification: creating a bond and a common language between team members who are used to working together on a daily basis.

## Be API, partner agency

In addition to the rules and training it provides, the Opquast organization is also a powerful tool for bringing together players in the (primarily French) web industry, which fits perfectly with Opquast's main objective of extracting and standardizing a common language. In fact, Opquast talks openly of a community, comprising agencies, organizations, schools and training centers with which Opquast is partnered, not to mention the more than 17,000 certified people who have gone through Opquast basic training. It was therefore quite logical for Be API to position itself beyond certification, notably as an Opquast partner, which has been the case since December 2022.

## To conclude

Be API is now better equipped to support its customers and guide them ever more effectively towards relevant, realistic and ambitious design, technical and strategic choices. So if you're looking for a quality web project that's sustainable over time, and of course based on WordPress, get in touch with us via the chat or contact form on our site!
Main image by vectorjuice on Freepik

BNP Paribas Asset Management, a subsidiary of the BNP Paribas group, is entirely dedicated to asset management. With a global footprint, this entity runs numerous sites in more than 35 countries. Faced with the complexity of managing a multitude of sites for its brands and subsidiaries, the client wanted to build an international website factory. The objective: to optimize digital management and production, while respecting the specificities and local variations of each entity.

In co-production with the agency <NAME> and Théodo, Be API designed a robust WordPress website factory, enriched with ReactJS components for a fluid presentation of financial funds. Our main challenge was to develop a synchronization engine allowing news to be published simultaneously on all sites, while handling linguistic nuances. Despite the challenges, such as a rebranding that took place in the middle of the project, our team showed the adaptability needed to deliver the technical side of this large-scale project. The client now has a high-performance system, featuring a news synchronization engine and multilingual content management. During the maintenance phase, Be API ensured a smooth transfer of skills to the BNP Paribas Asset Management team, guaranteeing a confident transition to autonomy.

We're proud to be the only French partner of WordPress VIP, a unique program dedicated to key account offers. This VIP Silver label is the result of Be API's long-standing active involvement in the WordPress community and ecosystem.

## Why work with a WordPress VIP partner agency?
WordPress VIP partner agencies around the world are hand-picked by Automattic, the company behind WordPress created by <NAME> (founder of WordPress). ## What do they have in common? They all have the expertise, skills and knowledge to handle large-scale WordPress projects. These projects are increasingly demanding in terms of security, flexibility and performance. Thanks to this partnership, Be API's WordPress service offering is moving up a gear to meet the challenges posed by the most complex, high-traffic projects. Be API is one of 30 WordPress VIP partner agencies worldwide. ## The 3 advantages of working with Be API and WordPress VIP ### Unique expertise When you become a Be API VIP customer, you're guaranteed to benefit from the most advanced WordPress expertise in France. Daily technical expertise, including code reviews. ### Unique know-how WordPress VIP ensures the smooth running of your WordPress platforms 24/7/365, anywhere in the world. Unlike other market players, VIP is able to intervene directly at the heart of your WordPress project in the event of an incident. ### Unlimited capacities Working with VIP and Be API means choosing an offer that adapts to all your audience needs. VIP uses its own datacenters, and has built a platform that is 100% optimized for WordPress. ## Go VIP with Be API: our 3 à la carte offers Architecture, interoperability, DevOps & Cloud Do you have a WordPress platform project?email <EMAIL> ## Definitions Customer: any professional or capable natural person within the meaning of Articles 1123 et seq. of the French Civil Code, or legal entity, who visits the Site which is the subject of these General Terms and Conditions.Services: beapi. fr provides Customers with : Content: All the elements making up the information on the Site, in particular text - images - videos. Customer information: Hereinafter referred to as "Information (s)" which correspond to all personal data that may be held by beapi. fr for the management of your account, customer relationship management and for analysis and statistical purposes. User: Internet user connecting to and using the above-mentioned site. Personal information: "Information which enables, in any form whatsoever, directly or indirectly, the identification of the natural persons to whom it applies" (article 4 of law no. 78-17 of January 6, 1978). The terms "personal data", "data subject", "sub-processor" and "sensitive data" have the meaning defined by the General Data Protection Regulation (GDPR: no. 2016-679) ## 1. Website presentation. In accordance with article 6 of French law no. 2004-575 of June 21, 2004 on confidence in the digital economy, users of the beapi. fr website are informed of the identity of the various parties involved in its creation and follow-up: Owner: SAS BE API Capital social de 510€ Numéro de TVA: FR39514178094 - 9 rue des Colonnes 75002 PARIS Responsible for publication: <EMAIL> - <EMAIL>The person responsible for publication is a natural person or a legal entity. Webmaster: <NAME> - <EMAIL> Host: o2switch - 222 Bd <NAME>, 63000 Clermont-Ferrand Data Protection Officer: <NAME> - <EMAIL> ## 2. General conditions of use of the site and the services offered. The Site constitutes an intellectual work protected by the provisions of the French Intellectual Property Code and applicable international regulations.The Customer may not in any way reuse, transfer or exploit for his own account all or part of the elements or works on the Site. Use of the beapi. 
fr website implies full acceptance of the general conditions of use described below. These conditions of use may be amended or supplemented at any time, and users of the beapi. fr website are therefore advised to consult them regularly. This website is normally accessible to users at all times. However, beapi.fr may decide to interrupt the site for technical maintenance purposes, and will endeavor to inform users of the dates and times of such interventions in advance.The beapi.fr website is regularly updated by beapi. fr responsable. Similarly, the legal notices may be modified at any time: they are nevertheless binding on the user, who is invited to refer to them as often as possible in order to take cognizance of them. ## 3. Description of services provided. The purpose of the beapi. fr website is to provide information about all the company's activities.beapi. fr strives to provide information on the beapi. fr website that is as accurate as possible. However, beapi.fr cannot be held responsible for any omissions, inaccuracies or failure to update information, whether caused by beapi.fr or by third-party partners supplying such information. All information on the beapi.fr website is provided for information purposes only and is subject to change. Furthermore, the information contained on the beapi. fr website is not exhaustive. It is subject to modifications having been made since it was put online. ## 4. Contractual limitations on technical data. The site uses JavaScript technology. The website cannot be held responsible for any material damage linked to the use of the site. In addition, the user of the site undertakes to access the site using recent equipment, free of viruses and with a browser of the latest generation updatedThe site beapi.fr is hosted by a provider on the territory of the European Union in accordance with the provisions of the General Regulation on Data Protection (RGPD: No. 2016-679) The aim is to provide a service that ensures the best possible level of accessibility. The host ensures continuity of service 24 hours a day, every day of the year. However, it reserves the right to interrupt the hosting service for the shortest possible time, in particular for maintenance purposes, to improve its infrastructures, in the event of infrastructure failure, or if the Services generate traffic deemed abnormal. beapi.fr and the host cannot be held responsible in the event of malfunctioning of the Internet network, telephone lines or computer and telephony equipment, particularly due to network congestion preventing access to the server. ## 5. Intellectual property and counterfeiting. beapi.fr is the owner of the intellectual property rights and holds the rights of use on all the elements accessible on the website, in particular the texts, images, graphics, logos, videos, icons and sounds.Any reproduction, representation, modification, publication, adaptation of all or part of the elements of the site, whatever the means or the process used, is prohibited, except prior written authorization of: beapi.fr. Any unauthorized use of the site or of any of the elements it contains will be considered as counterfeiting and will be prosecuted in accordance with the provisions of articles L.335-2 et seq. of the French Intellectual Property Code. ## 6. Limitation of liability. beapi.fr acts as the publisher of the site. beapi.fr is responsible for the quality and accuracy of the Content it publishes. 
beapi.fr cannot be held responsible for any direct or indirect damage caused to the user's equipment when accessing the beapi.fr website, and resulting either from the use of equipment that does not meet the specifications indicated in point 4, or from the appearance of a bug or incompatibility. beapi.fr cannot be held responsible for indirect damages (such as loss of market or loss of opportunity) resulting from the use of the beapi.fr website.Interactive areas (possibility of asking questions in the contact area) are available to users. beapi.fr reserves the right to delete, without prior notice, any content posted in this area which contravenes French legislation, in particular data protection provisions. Where applicable, beapi.fr also reserves the right to hold the user civilly and/or criminally liable, particularly in the event of messages of a racist, insulting, defamatory or pornographic nature, whatever the medium used (text, photographs, etc.). ## 7. Personal data management. The Customer is informed of the regulations concerning marketing communication, the law of June 21, 2014 for confidence in the Digital Economy, the Data Protection Act of August 06, 2004 as well as the General Data Protection Regulation (RGPD: n° 2016-679). ### 7.1 Persons responsible for the collection of personal data For Personal Data collected as part of the creation of the User's personal account and browsing on the Site, the person responsible for processing Personal Data is: BE API. beapi.fr isrepresented by <NAME>, its legal representative. As the party responsible for processing the data it collects, beapi. fr undertakes to comply with the legal provisions in force. In particular, it is the Customer's responsibility to establish the purposes of its data processing, to provide its prospects and customers with complete information on the processing of their personal data, once their consent has been obtained, and to maintain an accurate register of processing.Whenever beapi.fr processes Personal Data, beapi.fr takes all reasonable steps to ensure the accuracy and relevance of the Personal Data with regard to the purposes for which beapi.fr processes them. ### 7.2 Purpose of the data collected beapi. fr may process some or all of the following data: * to enable browsing on the Site and the management and traceability of services ordered by the user: Site connection and usage data, billing, order history, etc. * to prevent and combat computer fraud (spamming, hacking, etc.): hardware used for browsing, IP address, password (hashed), etc. * to improve browsing on the Site: connection and usage data * to conduct optional satisfaction surveys on beapi.fr: email address * for communication campaigns (sms, e-mail): telephone number, e-mail address beapi.fr does not sell your personal data, which is only used for statistical and analysis purposes. ### 7.3 Right of access, rectification and opposition In accordance with current European regulations, users of beapi.fr have the following rights: * right of access (Article 15 RGPD) and rectification (Article 16 RGPD), updating, completeness of Users' data right to block or erase Users' personal data (Article 17 RGPD), when they are inaccurate, incomplete, equivocal, outdated, or whose collection, use, communication or storage is prohibited. 
* right to withdraw consent at any time (article 13-2c RGPD) * right to portability of data provided by Users, where such data is subject to automated processing based on their consent or on a contract (Article 20 RGPD) * the right to determine what happens to Users' data after their death, and to choose to whom beapi. fr will communicate (or not) their data to a third party that they have previously designated As soon as beapi.fr becomes aware of a User's death, and in the absence of any instructions from the User, beapi. fr undertakes to destroy the User's data, unless its retention is necessary for evidential purposes or to comply with a legal obligation. If the User wishes to know how beapi. fr uses his/her Personal Data, ask for them to be rectified or object to their processing, the User may contact beapi. fr in writing at the following address: BE API - DPO, <NAME>MER118/130 avenue Jean Jaurès 75019 PARIS. In this case, the User must indicate the Personal Data that he would like beapi. fr to correct, update or delete, identifying himself precisely with a copy of an identity document (identity card or passport). Requests for the deletion of Personal Data will be subject to the obligations imposed on beapi. fr by law, in particular as regards the retention or archiving of documents. Finally, users of beapi.fr may lodge a complaint with the supervisory authorities, in particular the CNIL (https://www.cnil.fr/fr/plaintes). ### 7.4 Non-disclosure of personal data beapi. fr will not process, host or transfer Information collected about its Customers to a country outside the European Union or recognized as "not adequate" by the European Commission without informing the Customer in advance. For all that, beapi. fr remains free to choose its technical and commercial subcontractors on condition that they present sufficient guarantees with regard to the requirements of the General Data Protection Regulation (RGPD: n° 2016-679). beapi.fr undertakes to take all necessary precautions to preserve the security of the Information and in particular to ensure that it is not communicated to unauthorized persons. However, if beapi.fr becomes aware of an incident affecting the integrity or confidentiality of the Customer's Information, it must inform the Customer as soon as possible and inform him of the corrective measures taken. Furthermore, beapi.fr does not collect any "sensitive data". The User's Personal Data may be processed by beapi. fr subsidiaries and subcontractors (service providers), exclusively in order to achieve the purposes of this policy. Within the limits of their respective responsibilities and for the purposes mentioned above, the main persons likely to have access to beapi. fr Users' data are mainly our customer service agents. ## 8. Incident notification Despite our best efforts, no method of transmission over the Internet and no method of electronic storage is completely secure. We therefore cannot guarantee absolute security.Should we become aware of a security breach, we will notify the users concerned so that they can take appropriate action. Our incident notification procedures take account of our legal obligations, whether at national or European level. We are committed to keeping our customers fully informed of all matters relating to their account security, and to providing them with all the information they need to help them meet their own regulatory reporting obligations. 
No personal information concerning the user of the beapi.fr website is published without the user's knowledge, nor is it exchanged, transferred, assigned or sold on any medium whatsoever to third parties. Only the assumption of the purchase of beapi.fr and its rights would allow the transmission of the said information to the eventual purchaser who would in turn be bound by the same obligation of conservation and modification of the data with respect to the user of the site beapi.fr. ### Security To ensure the security and confidentiality of Personal Data and Personal Health Data, beapi. fr uses networks protected by standard devices such as firewalls, pseudonymization, encryption and passwords. When processing Personal Data, beapi.fr takesall reasonable steps to protect it against loss, misuse, unauthorized access, disclosure, alteration or destruction. ## 9. Hypertext links, cookies and internet tags The beapi.fr website contains a number of hyperlinks to other sites, set up with the authorization of beapi.fr. However, beapi.fr is not in a position to check the content of sites visited in this way, and consequently assumes no responsibility in this respect. Unless you decide to deactivate cookies, you accept that the site may use them. You may deactivate these cookies at any time, free of charge, using the deactivation options provided below, although this may reduce or prevent access to all or part of the Services offered by the site. ### 9.1. COOKIES A "cookie" is a small data file sent to the User's browser and stored on the User's terminal (e.g. computer, smartphone), (hereinafter "Cookies"). This file includes information such as the User's domain name, the User's Internet service provider, the User's operating system, and the date and time of access. Cookies are in no way likely to damage the User's terminal. beapi.fr may process information concerning the User's visit to the Site, such as the pages consulted and searches carried out. This information enables beapi.fr to improve the content of the Site and the User's browsing experience. As Cookies facilitate browsing and/or the provision of services offered by the Site, the User may configure his/her browser to allow him/her to decide whether or not to accept them, so that Cookies are stored in the terminal or, on the contrary, are rejected, either systematically or according to their sender. The User may also configure his browser software so that acceptance or rejection of Cookies is proposed from time to time, before a Cookie is likely to be recorded in his terminal. beapi.fr informs the User that, in this case, it is possible that not all the functionalities of his browser software will be available. If the User refuses to accept cookies on his or her terminal or browser, or if the User deletes cookies stored on his or her terminal or browser, the User is informed that his or her browsing and experience on the Site may be limited. This may also be the case when beapi.fr or one of its service providers cannot recognize, for technical compatibility purposes, the type of browser used by the terminal, the language and display settings or the country from which the terminal appears to be connected to the Internet. 
Where applicable, beapi.fr declines all responsibility for the consequences linked to the degraded operation of the Site and any services offered by beapi.fr, resulting from (i) the refusal of Cookies by the User or (ii) the impossibility for beapi.fr to record or consult the Cookies necessary for their operation due to the User's choice. To manage Cookies and the User's choices, the configuration of each browser is different. It is described in the browser's help menu, which will indicate how the User can modify his or her wishes with regard to Cookies. At any time, the User may choose to express and modify his/her wishes with regard to Cookies. beapi.fr may also use the services of external service providers to help it collect and process the information described in this section. Finally, by clicking on the icons dedicated to the social networks Twitter, Facebook, LinkedIn and Google Plus appearing on the beapi.fr website or in its mobile application, and if the User has accepted the deposit of cookies by continuing to navigate on the beapi.fr website or mobile application, Twitter, Facebook, LinkedIn and Google Plus may also deposit cookies on your terminals (computer, tablet, cell phone). These types of cookies are deposited on your terminals only if you consent to them, by continuing your browsing on the beapi.fr website or mobile application. However, Users may withdraw their consent to beapi.fr depositing this type of cookie at any time.

### Article 9.2. INTERNET TAGS

beapi.fr may occasionally employ web beacons (also known as "tags", action tags, single-pixel GIFs, clear GIFs, invisible GIFs and one-to-one GIFs) and deploy them via a specialist web analytics partner who may be located (and therefore store related information, including the User's IP address) in a foreign country. These beacons are placed both in online advertisements enabling Internet users to access the Site, and on the various pages of the Site. This technology enables beapi.fr to evaluate visitors' responses to the Site and the effectiveness of its actions (for example, the number of times a page is opened and the information consulted), as well as the use of this Site by the User. The external service provider may collect information about visitors to the Site and other websites using these tags, compile reports on Site activity for beapi.fr, and provide other services relating to the use of the Site and the Internet.

## 10. Applicable law and jurisdiction.

All disputes arising in connection with the use of the beapi.fr website are subject to French law. Except in cases where the law does not allow it, exclusive jurisdiction is granted to the competent courts of Paris.
## Pourquoi travailler avec une agence partenaire WordPress VIP Les agences partenaires WordPress VIP, partout dans le monde, sont triées sur le volet par Automattic, l’entreprise derrière WordPress créée par <NAME> (fondateur de WordPress). ## Leur point commun ? Elles possèdent toutes l’expertise, les compétences et les connaissances nécessaires à la réalisation des projets WordPress d’envergure. Ces derniers présentent des objectifs de sécurité, de flexibilité et de performance toujours plus avancés. Grâce à ce partenariat, l’offre de services WordPress de Be API passe à la vitesse supérieure et relève les défis posés par les projets les plus complexes et à fort trafic. Be API est l’une des 30 agences partenaires VIP WordPress dans le monde. ## Les 3 avantages de travailler avec Be API et WordPress VIP ### Une expertise unique En devenant client VIP avec Be API, vous avez l’assurance de bénéficier de l’expertise WordPress la plus pointue en France. Une expertise technique quotidienne, avec la possibilité de revue de code. ### Un savoir faire unique WordPress VIP vous assure du bon fonctionnement de vos plateformes WordPress 24/7/365, partout dans le monde. Contrairement aux autres acteurs du marché, VIP est en capacité d’intervenir directement au coeur de votre projet WordPress en cas d’incident. ### Des capacités sans limites Travailler avec VIP et Be API, c’est choisir une offre qui s’adapte à tous vos besoins en termes d’audience. VIP s’appuie sur ses propres datacenters, et a bâti une plateforme 100 % optimisée pour WordPress. ## Passez VIP avec Be API : nos 3 offres à la carte Architecture, intéropérabilité, DevOps & Cloud Vous avez un projet de plateforme WordPress ?écrivez nous à <EMAIL> ## Définitions Client : tout professionnel ou personne physique capable au sens des articles 1123 et suivants du Code civil, ou personne morale, qui visite le Site objet des présentes conditions générales.Prestations et Services : beapi.fr met à disposition des Clients : Contenu : Ensemble des éléments constituants l’information présente sur le Site, notamment textes – images – vidéos. Informations clients : Ci après dénommé « Information (s) » qui correspondent à l’ensemble des données personnelles susceptibles d’être détenues par beapi.fr pour la gestion de votre compte, de la gestion de la relation client et à des fins d’analyses et de statistiques. Utilisateur : Internaute se connectant, utilisant le site susnommé. Informations personnelles : « Les informations qui permettent, sous quelque forme que ce soit, directement ou non, l’identification des personnes physiques auxquelles elles s’appliquent » (article 4 de la loi n° 78-17 du 6 janvier 1978). Les termes « données à caractère personnel », « personne concernée », « sous traitant » et « données sensibles » ont le sens défini par le Règlement Général sur la Protection des Données (RGPD : n° 2016-679) ## 1. Présentation du site internet. En vertu de l’article 6 de la loi n° 2004-575 du 21 juin 2004 pour la confiance dans l’économie numérique, il est précisé aux utilisateurs du site internet beapi.fr l’identité des différents intervenants dans le cadre de sa réalisation et de son suivi: Propriétaire : SAS BE API Capital social de 510€ Numéro de TVA: FR39514178094 – 9 rue des Colonnes 75002 PARIS Responsable publication : <EMAIL> – <EMAIL>Le responsable publication est une personne physique ou une personne morale. 
Webmaster : <NAME> – <EMAIL> Hébergeur : o2switch – 222 Bd <NAME>, 63000 Clermont-Ferrand Délégué à la protection des données : <NAME> – <EMAIL> ## 2. Conditions générales d’utilisation du site et des services proposés. Le Site constitue une œuvre de l’esprit protégée par les dispositions du Code de la Propriété Intellectuelle et des Réglementations Internationales applicables.Le Client ne peut en aucune manière réutiliser, céder ou exploiter pour son propre compte tout ou partie des éléments ou travaux du Site. L’utilisation du site beapi.fr implique l’acceptation pleine et entière des conditions générales d’utilisation ci-après décrites. Ces conditions d’utilisation sont susceptibles d’être modifiées ou complétées à tout moment, les utilisateurs du site beapi.fr sont donc invités à les consulter de manière régulière. Ce site internet est normalement accessible à tout moment aux utilisateurs. Une interruption pour raison de maintenance technique peut être toutefois décidée par beapi.fr, qui s’efforcera alors de communiquer préalablement aux utilisateurs les dates et heures de l’intervention.Le site web beapi.fr est mis à jour régulièrement par beapi.fr responsable. De la même façon, les mentions légales peuvent être modifiées à tout moment : elles s’imposent néanmoins à l’utilisateur qui est invité à s’y référer le plus souvent possible afin d’en prendre connaissance. ## 3. Description des services fournis. Le site internet beapi.fr a pour objet de fournir une information concernant l’ensemble des activités de la société.beapi.fr s’efforce de fournir sur le site beapi.fr des informations aussi précises que possible. Toutefois, il ne pourra être tenu responsable des oublis, des inexactitudes et des carences dans la mise à jour, qu’elles soient de son fait ou du fait des tiers partenaires qui lui fournissent ces informations. Toutes les informations indiquées sur le site beapi.fr sont données à titre indicatif, et sont susceptibles d’évoluer. Par ailleurs, les renseignements figurant sur le site beapi.fr ne sont pas exhaustifs. Ils sont donnés sous réserve de modifications ayant été apportées depuis leur mise en ligne. ## 4. Limitations contractuelles sur les données techniques. Le site utilise la technologie JavaScript. Le site Internet ne pourra être tenu responsable de dommages matériels liés à l’utilisation du site. De plus, l’utilisateur du site s’engage à accéder au site en utilisant un matériel récent, ne contenant pas de virus et avec un navigateur de dernière génération mis-à-jourLe site beapi.fr est hébergé chez un prestataire sur le territoire de l’Union Européenne conformément aux dispositions du Règlement Général sur la Protection des Données (RGPD : n° 2016-679) L’objectif est d’apporter une prestation qui assure le meilleur taux d’accessibilité. L’hébergeur assure la continuité de son service 24 Heures sur 24, tous les jours de l’année. Il se réserve néanmoins la possibilité d’interrompre le service d’hébergement pour les durées les plus courtes possibles notamment à des fins de maintenance, d’amélioration de ses infrastructures, de défaillance de ses infrastructures ou si les Prestations et Services génèrent un trafic réputé anormal. beapi.fr et l’hébergeur ne pourront être tenus responsables en cas de dysfonctionnement du réseau Internet, des lignes téléphoniques ou du matériel informatique et de téléphonie lié notamment à l’encombrement du réseau empêchant l’accès au serveur. ## 5. Propriété intellectuelle et contrefaçons. 
beapi.fr owns the intellectual property rights to, and holds the rights of use for, all elements accessible on the website, in particular the texts, images, graphics, logos, videos, icons and sounds. Any reproduction, representation, modification, publication or adaptation of all or part of the elements of the site, whatever the means or process used, is prohibited without the prior written authorisation of beapi.fr.
Any unauthorised use of the site or of any of the elements it contains will be considered an infringement and prosecuted in accordance with the provisions of Articles L.335-2 et seq. of the French Intellectual Property Code.

## 6. Limitations of liability

beapi.fr acts as the publisher of the site. beapi.fr is responsible for the quality and accuracy of the Content it publishes.
beapi.fr cannot be held liable for direct or indirect damage caused to the user's equipment when accessing the beapi.fr website, resulting either from the use of equipment that does not meet the specifications set out in section 4, or from the appearance of a bug or an incompatibility. beapi.fr also cannot be held liable for indirect damage (such as, for example, loss of business or loss of opportunity) arising from the use of the beapi.fr site.
Interactive areas (the possibility to ask questions in the contact area) are available to users. beapi.fr reserves the right to delete, without prior notice, any content posted in this area that contravenes the legislation applicable in France, in particular the provisions relating to data protection. Where applicable, beapi.fr also reserves the right to pursue the civil and/or criminal liability of the user, particularly in the event of messages of a racist, abusive, defamatory or pornographic nature, whatever the medium used (text, photographs, etc.).

## 7. Personal data management

The Client is informed of the regulations concerning marketing communications, the French Law of 21 June 2004 on confidence in the digital economy, the French Data Protection Act of 6 August 2004, and the General Data Protection Regulation (GDPR: Regulation (EU) 2016/679).

### 7.1 Parties responsible for collecting personal data

For the Personal Data collected in connection with the creation of the User's personal account and their browsing of the Site, the party responsible for processing the Personal Data is: BE API. beapi.fr is represented by <NAME>, its legal representative.
As the controller of the data it collects, beapi.fr undertakes to comply with the legal provisions in force. It is in particular the Client's responsibility to establish the purposes of its data processing, to provide its prospects and clients, upon collecting their consent, with complete information about the processing of their personal data, and to maintain an accurate register of processing activities. Whenever beapi.fr processes Personal Data, beapi.fr takes all reasonable measures to ensure that the Personal Data are accurate and relevant to the purposes for which beapi.fr processes them.
### 7.2 Purpose of the data collected

beapi.fr may process all or part of the data:

* to enable browsing of the Site and the management and traceability of the services ordered by the user: connection and usage data for the Site, invoicing, order history, etc.
* to prevent and combat computer fraud (spamming, hacking, etc.): the hardware used for browsing, the IP address, the (hashed) password
* to improve browsing on the Site: connection and usage data
* to carry out optional satisfaction surveys about beapi.fr: email address
* to run communication campaigns (SMS, email): telephone number, email address

beapi.fr does not sell your personal data, which is therefore only used out of necessity or for statistical and analytical purposes.

### 7.3 Right of access, rectification and objection

In accordance with the European regulations in force, Users of beapi.fr have the following rights:

* right of access (Article 15 GDPR) and rectification (Article 16 GDPR), and to the updating and completeness of Users' data
* right to blocking or erasure of Users' personal data (Article 17 GDPR) when the data are inaccurate, incomplete, ambiguous or out of date, or when their collection, use, communication or storage is prohibited
* right to withdraw consent at any time (Article 13-2c GDPR)
* right to restriction of the processing of Users' data (Article 18 GDPR)
* right to object to the processing of Users' data (Article 21 GDPR)
* right to portability of the data that Users have provided, where those data are subject to automated processing based on their consent or on a contract (Article 20 GDPR)
* right to decide what happens to Users' data after their death and to choose whether or not beapi.fr should communicate their data to a third party they have designated in advance

As soon as beapi.fr becomes aware of the death of a User, and in the absence of instructions from them, beapi.fr undertakes to destroy their data, unless keeping them proves necessary for evidentiary purposes or to meet a legal obligation.
If a User wishes to know how beapi.fr uses their Personal Data, to ask for them to be rectified, or to object to their processing, the User may contact beapi.fr in writing at the following address: BE API – DPO, Amaury BALMER, 118/130 avenue <NAME>aurès, 75019 PARIS. In this case, the User must indicate the Personal Data they would like beapi.fr to correct, update or delete, identifying themselves precisely with a copy of an identity document (identity card or passport).
Requests for the deletion of Personal Data are subject to the obligations imposed on beapi.fr by law, in particular with regard to the retention or archiving of documents. Finally, Users of beapi.fr may lodge a complaint with the supervisory authorities, in particular the CNIL (https://www.cnil.fr/fr/plaintes).

### 7.4 Non-communication of personal data

beapi.fr undertakes not to process, host or transfer the Information collected about its Clients to a country located outside the European Union or recognised as "inadequate" by the European Commission without informing the client in advance.
However, beapi.fr remains free to choose its technical and commercial subcontractors, provided that they offer sufficient guarantees with regard to the requirements of the General Data Protection Regulation (GDPR: Regulation (EU) 2016/679).
beapi.fr undertakes to take all necessary precautions to preserve the security of the Information, and in particular to ensure that it is not communicated to unauthorised persons. However, if an incident affecting the integrity or confidentiality of the Client's Information is brought to the attention of beapi.fr, beapi.fr must inform the Client as soon as possible and communicate the corrective measures taken. Furthermore, beapi.fr does not collect any "sensitive data".
The User's Personal Data may be processed by subsidiaries of beapi.fr and by subcontractors (service providers), exclusively in order to fulfil the purposes of this policy. Within the limits of their respective remits and for the purposes set out above, the main people likely to have access to the data of beapi.fr Users are primarily our customer service agents.

## 8. Incident notification

Whatever the efforts made, no method of transmission over the internet and no method of electronic storage is completely secure. We therefore cannot guarantee absolute security. If we became aware of a security breach, we would notify the affected users so that they could take appropriate action. Our incident notification procedures take into account our legal obligations, whether at national or European level. We are committed to keeping our clients fully informed of all matters relating to the security of their account and to providing them with all the information they need to help them meet their own regulatory reporting obligations.
No personal information of the beapi.fr site user is published without the user's knowledge, or exchanged, transferred, assigned or sold on any medium whatsoever to third parties. Only in the event that beapi.fr and its rights were acquired would such information be transmitted to the prospective acquirer, who would in turn be bound by the same obligations regarding the storage and modification of data with respect to the user of the beapi.fr site.

### Security

To ensure the security and confidentiality of Personal Data and Personal Health Data, beapi.fr uses networks protected by standard measures such as firewalls, pseudonymisation, encryption and passwords.
When processing Personal Data, beapi.fr takes all reasonable measures to protect them against any loss, misuse, unauthorised access, disclosure, alteration or destruction.

## 9. Hypertext links, "cookies" and internet tags

The beapi.fr site contains a number of hypertext links to other sites, set up with the authorisation of beapi.fr. However, beapi.fr is not able to verify the content of the sites visited in this way and therefore accepts no liability in this respect.
Unless you decide to disable cookies, you accept that the site may use them.
You can disable these cookies at any time, free of charge, using the deactivation options offered to you and set out below, bearing in mind that this may reduce or prevent access to all or part of the Services offered by the site.

### 9.1 "Cookies"

A "cookie" is a small information file sent to the User's browser and stored on the User's device (e.g. computer, smartphone) (hereinafter "Cookies"). This file includes information such as the User's domain name, the User's internet service provider, the User's operating system, and the date and time of access. Cookies cannot in any way damage the User's device.
beapi.fr may process the User's information concerning their visit to the Site, such as the pages viewed and the searches carried out. This information allows beapi.fr to improve the content of the Site and the User's browsing experience.
As Cookies facilitate browsing and/or the provision of the services offered by the Site, the User can configure their browser to decide whether or not to accept them, so that Cookies are either stored on the device or rejected, whether systematically or according to their issuer. The User can also configure their browser so that the acceptance or refusal of Cookies is offered to them on a case-by-case basis, before a Cookie can be stored on their device. beapi.fr informs the User that, in this case, some features of their browser may not be available.
If the User refuses to allow Cookies to be stored on their device or browser, or if the User deletes those already stored there, the User is informed that their browsing and experience on the Site may be limited. This could also be the case where beapi.fr or one of its service providers cannot recognise, for technical compatibility purposes, the type of browser used by the device, its language and display settings, or the country from which the device appears to be connected to the internet.
Where applicable, beapi.fr declines all responsibility for the consequences of the degraded operation of the Site and of any services offered by beapi.fr resulting from (i) the User's refusal of Cookies or (ii) the impossibility for beapi.fr to store or consult the Cookies necessary for their operation as a result of the User's choice.
The configuration of each browser is different when it comes to managing Cookies and the User's choices. It is described in the browser's help menu, which explains how the User can change their Cookie preferences. The User may express and change their Cookie preferences at any time.
beapi.fr may also use the services of external providers to help it collect and process the information described in this section.
Finally, by clicking on the icons for the social networks Twitter, Facebook, LinkedIn and Google Plus shown on the beapi.fr Site or in its mobile application, and if the User has accepted the placing of cookies by continuing to browse the beapi.fr website or mobile application, Twitter, Facebook, LinkedIn and Google Plus may also place cookies on your devices (computer, tablet, mobile phone). These types of cookies are only placed on your devices if you consent to them by continuing to browse the beapi.fr website or mobile application. The User may nevertheless withdraw their consent to beapi.fr placing this type of cookie at any time.

### 9.2 Internet tags ("tags")

beapi.fr may occasionally use internet tags (also known as "tags", action tags, single-pixel GIFs, clear GIFs, invisible GIFs and one-to-one GIFs) and deploy them through a web analytics partner that may be located (and therefore store the corresponding information, including the User's IP address) in a foreign country. These tags are placed both in online advertisements that allow internet users to access the Site and on the various pages of the Site.
This technology allows beapi.fr to assess visitors' responses to the Site and the effectiveness of its actions (for example, the number of times a page is opened and the information consulted), as well as the User's use of the Site. The external provider may collect information about visitors to the Site and to other websites using these tags, compile reports on Site activity for beapi.fr, and provide other services relating to the use of the Site and of the internet.

## 10. Governing law and jurisdiction

Any dispute relating to the use of the beapi.fr site is subject to French law. Except in cases where the law does not allow it, exclusive jurisdiction is given to the competent courts of Paris.

# Site map

WordPress agency: WordPress expertise, Digital strategy, WordPress development, MVP / Agile projects, WordPress maintenance (TMA), WordPress audits, WordPress site factory, Site migration, Gutenberg agency. About the agency: Why WordPress, Our approach, The WordPress community, Our client case studies, The blog, Contact a WordPress expert.
growthPheno
cran
R
Package ‘growthPheno’ August 22, 2023
Version 2.1.21
Date 2023-08-22
Title Functional Analysis of Phenotypic Growth Data to Smooth and Extract Traits
Depends R (>= 3.5.0)
Imports dae, GGally, ggplot2, grDevices, Hmisc, JOPS, methods, RColorBrewer, readxl, reshape, stats, stringi, utils
Suggests testthat, nlme, R.rsp, scales
VignetteBuilder R.rsp
Description Assists in the plotting and functional smoothing of traits measured over time and the extraction of features from these traits, implementing the SET (Smoothing and Extraction of Traits) method described in Brien et al. (2020) Plant Methods, 16. Smoothing of growth trends for individual plants using natural cubic smoothing splines or P-splines is available for removing transient effects and segmented smoothing is available to deal with discontinuities in growth trends. There are graphical tools for assessing the adequacy of trait smoothing, both when using this and other packages, such as those that fit nonlinear growth models. A range of per-unit (plant, pot, plot) growth traits or features can be extracted from the data, including single time points, interval growth rates and other growth statistics, such as maximum growth or days to maximum growth. The package also has tools adapted to inputting data from high-throughput phenotyping facilities, such as from a Lemna-Tec Scananalyzer 3D (see <https://www.youtube.com/watch?v=MRAF_mAEa7E/> for more information). The package 'growthPheno' can also be installed from <http://chris.brien.name/rpackages/>.
License GPL (>= 2)
URL http://chris.brien.name/
RoxygenNote 5.0.1
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-0581-1817>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-22 16:00:02 UTC

R topics documented: growthPheno-package, anom, args4chosen_plot, args4chosen_smooth, args4devnboxes_plot, args4meddevn_plot, args4profile_plot, args4smoothing, as.smooths.frame, byIndv4Intvl_GRsAvg, byIndv4Intvl_GRsDiff, byIndv4Intvl_ValueCalc, byIndv4Intvl_WaterUse, byIndv4Times_GRsDiff, byIndv4Times_SplinesGRs, byIndv_ValueCalc, calcLagged, calcTimes, cumulate, designFactors, exampleData, fitSpline, getTimesSubset, growthPheno-deprecated, GrowthRates, importExcel, intervalGRaverage, intervalGRdiff, intervalPVA.data.frame, intervalValueCalculate, intervalWUI, is.smooths.frame, longitudinalPrime, plotAnom, plotCorrmatrix, plotDeviationsBoxes, plotImagetimes, plotLongitudinal, plotMedianDeviations, plotProfiles, plotSmoothsComparison, plotSmoothsDevnBoxplots, plotSmoothsMedianDevns, prepImageData, probeSmoothing, probeSmooths, PVA, PVA.data.frame, PVA.matrix, rcontrib, rcontrib.data.frame, rcontrib.matrix, RicePrepped.dat, RiceRaw.dat, smooths.frame, smoothSpline, splitContGRdiff, splitSplines, splitValueCalculate, tomato.dat, traitExtractFeatures, traitSmooth, twoLevelOpcreate, validSmoothsFrame, WUI

growthPheno-package Functional Analysis of Phenotypic Growth Data to Smooth and Extract Traits

Description
Assists in the plotting and functional smoothing of traits measured over time and the extraction of features from these traits, implementing the SET (Smoothing and Extraction of Traits) method described in Brien et al. (2020) Plant Methods, 16.
Smoothing of growth trends for individual plants using natural cubic smoothing splines or P-splines is available for removing transient effects and segmented smoothing is available to deal with discontinuities in growth trends. There are graphi- cal tools for assessing the adequacy of trait smoothing, both when using this and other packages, such as those that fit nonlinear growth models. A range of per-unit (plant, pot, plot) growth traits or features can be extracted from the data, including single time points, interval growth rates and other growth statistics, such as maximum growth or days to maximum growth. The package also has tools adapted to inputting data from high-throughput phenotyping facilities, such from a Lemna-Tec Scananalyzer 3D (see <https://www.youtube.com/watch?v=MRAF_mAEa7E/> for more informa- tion). The package ’growthPheno’ can also be installed from <http://chris.brien.name/rpackages/>. Version: 2.1.21 Date: 2023-08-22 Index The following list of functions does not include those that are soft-deprecated, i.e. those that have been available in previous versions of growthPheno but will be removed in future versions. For a description of the use of the listed functions and vignettes that are available, see the Overview section below. (i) Wrapper functions traitSmooth Obtain smooths for a trait by fitting spline functions and, having compared several smooths, allows one of them to be chosen and returned in a data.frame. traitExtractFeatures Extract features, that are single-valued for each individual, from smoothed traits over time. (ii) Helper functions args4chosen_plot Creates a list of the values for the options of profile plots for the chosen smooth. args4chosen_smooth Creates a list of the values for the smoothing parameters for which a smooth is to be extracted. args4meddevn_plot Creates a list of the values for the options of median deviations plots for smooths. args4profile_plot Creates a list of the values for the options of profile plots for comparing smooths. args4smoothing Creates a list of the values for the smoothing parameters to be passed to a smoothing function. (iii) Data exampleData A small data set to use in function examples. RicePrepped.dat Prepped data from an experiment to investigate a rice germplasm panel. RiceRaw.dat Data for an experiment to investigate a rice germplasm panel. tomato.dat Longitudinal data for an experiment to investigate tomato response to mycorrhizal fungi and zinc. (iv) Plots plotAnom Identifies anomalous individuals and produces profile plots without them and with just them. plotCorrmatrix Calculates and plots correlation matrices for a set of responses. plotDeviationsBoxes Produces boxplots of the deviations of the observed values from the smoothed values over values of x. plotImagetimes Plots the time within an interval versus the interval. For example, the hour of the day carts are imaged against the days after planting (or some other number of days after an event). plotMedianDeviations Calculates and plots the medians of the deviations of the smoothed values from the observed values. plotProfiles Produces profile plots of longitudinal data for a set of individuals. plotSmoothsComparison Plots several sets of smoothed values for a response, possibly along with growth rates and optionally including the unsmoothed values, as well as deviations boxplots. 
plotSmoothsMedianDevns Calculates and plots the medians of the deviations from the observed values of several sets for smoothed values stored in a data.frame in long format. probeSmooths Computes and compares, for a set of smoothing parameters, a response and the smooths of it, possibly along with growth rates calculated from the smooths. (v) Smoothing and calculation of growth rates and water use traits for each individual (Indv) byIndv4Intvl_GRsAvg Calculates the growth rates for a specified time interval for individuals in a data.frame in long format by taking weighted averages of growth rates for times within the interval. byIndv4Intvl_GRsDiff Calculates the growth rates for a specified time interval for individuals in a data.frame in long format by differencing the values for a response within the interval. byIndv4Intvl_ValueCalc Calculates a single value that is a function of the values of an individual for a response in a data.frame in long format over a specified time interval. byIndv4Intvl_WaterUse Calculates, water use traits (WU, WUR, WUI) over a specified time interval for each individual in a data.frame in long format. byIndv4Times_GRsDiff Adds, to a ’data.frame’, the growth rates calculated for consecutive times for individuals in a data.frame in long format by differencing response values. byIndv4Times_SplinesGRs For a response in a data.frame in long format, computes, for a single set of smoothing parameters, smooths of the response, possibly along with growth rates calculated from the smooths. byIndv_ValueCalc Applies a function to calculate a single value from an individual’s values for a response in a data.frame in long format. smoothSpline Fit a spline to smooth the relationship between a response and an x in a data.frame, optionally computing growth rates using derivatives. probeSmooths For a response in a data.frame in long format, computes and compares, for sets of smoothing parameters, smooths of the response, possibly along with growth rates calculated from the smooths. (vi) Data frame manipulation as.smooths.frame Forms a smooths.frame from a data.frame, ensuring that the correct columns are present. designFactors Adds the factors and covariates for a blocked, split-unit design. getTimesSubset Forms a subset of ’responses’ in ’data’ that contains their values for the nominated times. importExcel Imports an Excel imaging file and allows some renaming of variables. is.smooths.frame Tests whether an object is of class smooths.frame. prepImageData Selects a set variables to be retained in a data frame of longitudinal data. smooths.frame Description of a smooths.frame object, twoLevelOpcreate Creates a data.frame formed by applying, for each response, a binary operation to the values of two different treatments. validSmoothsFrame Checks that an object is a valid smooths.frame. (vii) General calculations anom Tests if any values in a vector are anomalous in being outside specified limits. calcLagged Replaces the values in a vector with the result of applying an operation to it and a lagged value. calcTimes Calculates for a set of times, the time intervals after an origin time and the position of each within a time interval cumulate Calculates the cumulative sum, ignoring the first element if exclude.1st is TRUE. GrowthRates Calculates growth rates (AGR, PGR, RGRdiff) between a pair of values in a vector. WUI Calculates the Water Use Index (WUI) for a value of the response and of the water use. 
(viii) Principal variates analysis (PVA) intervalPVA.data.frame Selects a subset of variables using PVA, based on the observed values within a specified time interval. PVA.data.frame Selects a subset of variables stored in a data.frame using PVA. PVA.matrix Selects a subset of variables using PVA based on a correlation matrix. rcontrib.data.frame Computes a measure of how correlated each variable in a set is with the other variables, conditional on a nominated subset of them. rcontrib.matrix Computes a measure of how correlated each variable in a set is with the other variables, conditional on a nominated subset of them.

Overview
This package can be used to perform a functional analysis of growth data using splines to smooth the trend of individual plant traces over time and then to extract features or tertiary traits for further analysis. This process is called smoothing and extraction of traits (SET) by Brien et al. (2020), who detail the use of growthPheno for carrying out the method. However, growthPheno now has the two wrapper, or primary, functions traitSmooth and traitExtractFeatures that implement the SET approach. These may be the only functions that are used, in that the complete SET process can be carried out using only them. The Tomato vignette illustrates their use for the example presented in Brien et al. (2020).
The function traitSmooth utilizes the secondary functions probeSmooths, plotSmoothsComparison and plotSmoothsMedianDevns and accepts the arguments of the secondary functions. The function probeSmooths utilizes the tertiary functions byIndv4Times_SplinesGRs and byIndv4Times_GRsDiff, which in turn call the function smoothSpline. The function plotSmoothsComparison calls plotDeviationsBoxes. All of these functions play a role in choosing the smoothing method and parameters for a data set.
The primary function traitExtractFeatures uses the secondary functions getTimesSubset and the set of byIndv4Intvl_ functions. These functions are concerned with the extraction of traits that yield a single value for each individual in the data. Recourse to the secondary and tertiary functions may be necessary for special cases. Their use is illustrated in the Rice vignette.
Use vignette("Tomato", package = "growthPheno") or vignette("Rice", package = "growthPheno") to access either of the vignettes.
In addition to functions that implement the SET approach, growthPheno also has functions for importing and organizing the data that are generally applicable, although they do have defaults that make them particularly adapted to data from a high-throughput phenotyping facility based on a Lemna-Tec Scananalyzer 3D system.
Data suitable for use with this package consists of columns of data obtained from a set of individuals (e.g. plants, pots, carts, plots or units) over time. There should be a unique identifier for each individual and a time variable, such as Days after Planting (DAP), that contain no repeats for an individual. The combination of the identifier and a time for an individual should be unique to that individual. For imaging data, the individuals may be arranged in a grid of Lanes × Positions. That is, the minimum set of columns is an individuals column, a times column and one or more primary trait columns.

Author(s)
<NAME> [aut, cre] (<https://orcid.org/0000-0003-0581-1817>)
Maintainer: <NAME> <<EMAIL>>

References
<NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2020). Smoothing and extraction of traits in the growth analysis of noninvasive phenotypic data. Plant Methods, 16, 36.
doi:10.1186/s13007020005776. See Also dae anom Tests if any values in a vector are anomalous in being outside specified limits Description Test whether any values in x are less than the value of lower, if it is not NULL, or are greater than the value of upper, if it is not NULL, or both. Usage anom(x, lower=NULL, upper=NULL, na.rm = TRUE) Arguments x A vector containing the values to be tested. lower A numeric such that values in x below it are considered to be anomalous. upper A numeric such that values in x above it are considered to be anomalous. na.rm A logical indicating whether NA values should be stripped before the testing proceeds. Value A logical indicating whether any values have been found to be outside the limits specified by lower or upper or both. Author(s) <NAME> Examples data(exampleData) anom.val <- anom(longi.dat$sPSA.AGR, lower=2.5) args4chosen_plot Creates a list of the values for the options of profile plots for the chosen smooth Description Creates a list of the values for the options of profile plots (and boxplots facets) for comparing smooths. Note that plots.by, facet.x, facet.y and include.raw jointly define the organization of the plots. The default settings are optimized for traitSmooth. Usage args4chosen_plot(plots.by = NULL, facet.x = ".", facet.y = ".", include.raw = "no", collapse.facets.x = FALSE, collapse.facets.y = FALSE, facet.labeller = NULL, facet.scales = "fixed", breaks.spacing.x = -2, angle.x = 0, colour = "black", colour.column = NULL, colour.values = NULL, alpha = 0.3, addMediansWhiskers = TRUE, ggplotFuncs = NULL, ...) Arguments plots.by A character that gives the names of the set of factors by which the data is to be grouped and a separate plot produced for each group. If NULL, no groups are formed. If a set of factors, such as Type, Tuning and Method, that uniquely index the combinations of the smoothing-parameter values is specified, then groups are formed for each combination of the levels of the these factors, and a separate plot is produced for each combination. facet.x A character giving the names of the factors to be used to form subsets to be plotted in separate columns of the profiles plots. The default of "." results in no split into columns. facet.y A character giving the factors to be used to form subsets to be plotted in separate rows of the profiles plots. The default of "." results in no split into rows. include.raw A character indicating whether plots of the raw (unsmoothed) trait, corre- sponding to the plots of the smoothed traits, are to be included in profile plots. The options are no, alone, facet.x, or facet.y. That is, the plots of the raw traits are plotted separately or as part of either facet.x or facet.y. collapse.facets.x A logical to indicate whether all variables specified by facets.x are to be collapsed to a single variable. Note that the smoothing-parameters factors, if present, are always collapsed. collapse.facets.y A logical to indicate whether all variables specified by facets.y are to be collapsed to a single variable. Note that the smoothing-parameters factors, if present, are always collapsed. facet.labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot. facet.scales A character specifying whether the scales are shared across all facets of a plot ("fixed"), or do they vary across rows (the default, "free_x"), columns ("free_y"), or both rows and columns ("free")? 
breaks.spacing.x A numeric whose absolute values specifies the distance between breaks for the x-axis in a sequence beginning with the minimum x value and continuing up to the maximum x value. If it is negative, the breaks that do not have x values in data will be omitted. Minor breaks will be at half this value or, if these do not correspond to x-values in data when breaks.spacing.x is negative, have a spacing of one. Thus, when breaks.spacing.x is negative, grid lines will only be included for x-values that occur in data. These settings can be overwritten by supplying, in ggplotFuncs, a scale_x_continuous function from ggplot2. angle.x A numeric between 0 and 360 that gives the angle of the x-axis text to the x- axis. It can also be set by supplying, in ggplotFuncs, a theme function from ggplot2. colour A character specifying a single colour to use in drawing the lines for the pro- files. If colouring according to the values of a variable is required then use colour.column. colour.column A character giving the name of a column in data over whose values the colours of the lines are to be varied. The colours can be specified using colour.values. colour.values A character vector specifying the values of the colours to use in drawing the lines for the profiles. If this is a named vector, then the values will be matched based on the names. If unnamed, values will be matched in order (usually al- phabetical) within the limits of the scale. alpha A numeric specifying the degrees of transparency to be used in plotting the responses. It is a ratio in which the denominator specifies the number of points (or lines) that must be overplotted to give a solid cover. addMediansWhiskers A logical indicating whether plots over time of the medians and outer whiskers are to be added to the plot. The outer whiskers are related to the whiskers on a box-and-whisker and are defined as the median plus (and minus) 1.5 times the interquartile range (IQR). Points lying outside the whiskers are considered to be potential outliers. ggplotFuncs A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object for a profiles plot. ... allows arguments to be passed to other functions; not used at present. Value A named list. Author(s) <NAME> See Also traitSmooth, probeSmooths, plotSmoothsComparison and args4profile_plot. Examples args4chosen_plot(plots.by = "Type", facet.x = "Tuning", facet.y = c("Smarthouse", "Treatment.1"), include.raw = "facet.x", alpha = 0.4, colour.column = "Method", colour.values = c("orange", "olivedrab")) args4chosen_smooth Creates a list of the values for the smoothing parameters for which a smooth is to be extracted Description Creates a list of the values for the smoothing parameters for which a smooth is to be extracted. The default settings for these are optimized for traitSmooth. Usage args4chosen_smooth(smoothing.methods = "logarithmic", spline.types = "PS", df = NULL, lambdas = NULL, combinations = "single") Arguments smoothing.methods A character giving the smoothing method for the chosen smooth. The two possibilites are (i) "direct", for directly smoothing the observed response, and (ii) "logarithmic", for smoothing the log-transformed response. spline.types A character giving the type of spline for the chosen smooth. 
Currently, the possibilites are (i) "NCSS", for natural cubic smoothing splines, and (ii) "PS", for P-splines. df A numeric with single value that specifies, for natural cubic smoothing splines (NCSS), the desired equivalent numbers of degrees of freedom of the chosen smooth (trace of the smoother matrix). Lower values result in more smoothing. lambdas A named list or a numeric specifying the positive penalties for which the chosen smooth is required. combinations Generally, this argument should be set to single so that ony one value should be supplied to the functions arguments. Also, only one of df or lambdas should be set. Value A named list. Author(s) <NAME> See Also traitSmooth and probeSmooths. Examples args4chosen_smooth(smoothing.methods = "direct", spline.types = "NCSS", df = 4) args4chosen_smooth(smoothing.methods = "log", spline.types = "PS", lambdas = 0.36) args4devnboxes_plot Creates a list of the values for the options of profile plots for com- paring smooths Description Creates a list of the values for the options of deviations boxplots for comparing smooths. Note that plots.by, facet.x and facet.y jointly define the organization of the plots. The default settings are optimized for traitSmooth so that, if you want to change any of these from their default settings when using args4devnboxes_plot with a function other than traitSmooth, then it is recommended that you specify all of them to ensure that the complete set has been correctly specified. Otherwise, the default settings will be those shown here and these may be different to the default settings shown for the function with which you are using args4devnboxes_plot. Usage args4devnboxes_plot(plots.by = "Type", facet.x = c("Method", "Tuning"), facet.y = ".", collapse.facets.x = TRUE, collapse.facets.y = FALSE, facet.labeller = NULL, facet.scales = "fixed", angle.x = 0, which.plots = "none", ggplotFuncs = NULL, ...) Arguments plots.by A character that gives the names of the set of factors by which the data is to be grouped and a separate plot produced for each group. If NULL, no groups are formed. If a set of factors, such as Type, Tuning and Method, that uniquely index the combinations of the smoothing-parameter values is specified, then groups are formed for each combination of the levels of the these factors, and a separate plot is produced for each combination. facet.x A character giving the names of the factors to be used to form subsets to be plotted in separate columns of the profiles plots and deviations boxplots. The default of "." results in no split into columns. facet.y A character giving the factors to be used to form subsets to be plotted in separate rows of the profiles plots and deviations boxplots. The default of "." results in no split into rows. collapse.facets.x A logical to indicate whether all variables specified by facets.x are to be collapsed to a single variable. Note that the smoothing-parameters factors, if present, are always collapsed. collapse.facets.y A logical to indicate whether all variables specified by facets.y are to be collapsed to a single variable. Note that the smoothing-parameters factors, if present, are always collapsed. facet.labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot. facet.scales A character specifying whether the scales are shared across all facets of a plot ("fixed"), or do they vary across rows (the default, "free_x"), columns ("free_y"), or both rows and columns ("free")? 
angle.x A numeric between 0 and 360 that gives the angle of the x-axis text to the x- axis. It can also be set by supplying, in ggplotFuncs, a theme function from ggplot2. which.plots A logical indicating which plots are to be produced. The options are either none or absolute.deviations and/or relative.deviations. Boxplots of the absolute deviations are specified by absolute.boxplots, the absolute de- viations being the values of a trait minus their smoothed values (observed - smoothed). Boxplots of the relative deviations are specified by relative.boxplots, the relative deviations being the absolute deviations divided by the smoothed values of the trait. ggplotFuncs A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object for a boxplots. ... allows arguments to be passed to other functions; not used at present. Value A named list. Author(s) <NAME> See Also traitSmooth, probeSmooths, plotSmoothsComparison and args4chosen_plot. Examples args4devnboxes_plot(plots.by = "Type", facet.x = "Tuning", facet.y = c("Smarthouse", "Treatment.1"), which.plots = "absolute") args4meddevn_plot Creates a list of the values for the options of median deviations plots for smooths Description Creates a list of the values for the options of median deviations plots for smooths. Note that the arguments plots.by, plots.group, facet.x and facet.y jointly define the organization of the plots. The default settings are optimized for traitSmooth so that, if you want to change any of these from their default settings when using args4meddevn_plot with a function other than traitSmooth, then it is recommended that you specify all of them to ensure that the complete set has been correctly specified. Otherwise, the default settings will be those shown here and these may be different to the default settings shown for the function with which you are using args4meddevn_plot. Usage args4meddevn_plot(plots.by = NULL, plots.group = "Tuning", facet.x = c("Method","Type"), facet.y = ".", facet.labeller = NULL, facet.scales = "free_x", breaks.spacing.x = -4, angle.x = 0, colour.values = NULL, shape.values = NULL, alpha = 0.5, propn.note = TRUE, propn.types = NULL, ggplotFuncs = NULL, ...) Arguments plots.by A character that give the names of the set of factors by which medians de- viations data is to be grouped and a separate plot produced for each group. If NULL, no groups are formed. If a set of factors, such as Type, Tuning and Method, that uniquely index the combinations of the smoothing-parameter val- ues is specified, then groups are formed for each combination of the levels of the these factors, and a separate plot is produced for each combination. plots.group A character that gives the names of the set of factors by which the subset of medians deviations data within a single facet in a single plot is to be grouped for plotting as separate lines. facet.x A character giving the factors to be used to form subsets to be plotted in separate columns of the medians deviations plots. The default of "." results in no split into columns. facet.y A character giving the factors to be used to form subsets to be plotted in separate rows of the medians deviations plots. The default of "." results in no split into rows. facet.labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot. 
facet.scales A character specifying whether the scales are shared across all facets of a plot ("fixed"), or do they vary across rows (the default, "free_x"), columns ("free_y"), or both rows and columns ("free")? breaks.spacing.x A numeric whose absolute values specifies the distance between breaks for the x-axis in a sequence beginning with the minimum x value and continuing up to the maximum x value. If it is negative, the breaks that do not have x values in data will be omitted. Minor breaks will be at half this value or, if these do not correspond to x-values in data when breaks.spacing.x is negative, have a spacing of one. Thus, when breaks.spacing.x is negative, grid lines will only be included for x-values that occur in data. These settings can be overwritten by supplying, in ggplotFuncs, a scale_x_continuous function from ggplot2. angle.x A numeric between 0 and 360 that gives the angle of the x-axis text to the x- axis. It can also be set by supplying, in ggplotFuncs, a theme function from ggplot2. colour.values A character vector specifying the values of the colours to use in drawing the lines for the medians deviations within a facet. If this is a named vector, then the values will be matched based on the names. If unnamed, values will be matched in order (usually alphabetical) within the limits of the scale. shape.values A numeric vector specifying the values of the shapes to use in drawing the points for the medians deviations within a facet. If this is a named vector, then the values will be matched based on the names. If unnamed, values will be matched in order. alpha A numeric specifying the degrees of transparency to be used in plotting a me- dian deviations plot. It is a ratio in which the denominator specifies the number of points (or lines) that must be overplotted to give a solid cover. propn.note A logical indicating whether a note giving the proportion of the median value of the response for each time is to be included in the medians.deviations plots. propn.types A numeric giving, for each of the trait.types, the proportion of the median value of the response for each time to be used to plot envelopes in the median deviations plots. If set to NULL, the plots of the proprotion envelopes are omitted. ggplotFuncs A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. Note that these functions are applied to the compareian devia- tions plots only. ... allows arguments to be passed to other functions; not used at present. Value A named list. Author(s) <NAME> See Also traitSmooth, probeSmooths and plotSmoothsMedianDevns. Examples args4meddevn_plot(plots.by = "Type", plots.group = "Tuning", facet.x = "Method", facet.y = ".", propn.types = c(0.02,0.1, 0.2)) args4profile_plot Creates a list of the values for the options of profile plots for com- paring smooths Description Creates a list of the values for the options of profile plots for comparing smooths. Note that plots.by, facet.x, facet.y and include.raw jointly define the organization of the plots. The default settings are optimized for traitSmooth so that, if you want to change any of these from their default settings when using args4profile_plot with a function other than traitSmooth, then it is recommended that you specify all of them to ensure that the complete set has been correctly specified. 
Otherwise, the default settings will be those shown here and these may be different to the default settings shown for the function with which you are using args4profile_plot. Usage args4profile_plot(plots.by = "Type", facet.x = c("Method", "Tuning"), facet.y = ".", include.raw = "facet.x", collapse.facets.x = TRUE, collapse.facets.y = FALSE, facet.labeller = NULL, facet.scales = "fixed", breaks.spacing.x = -4, angle.x = 0, colour = "black", colour.column = NULL, colour.values = NULL, alpha = 0.3, addMediansWhiskers = TRUE, ggplotFuncs = NULL, ...) Arguments plots.by A character that gives the names of the set of factors by which the data is to be grouped and a separate plot produced for each group. If NULL, no groups are formed. If a set of factors, such as Type, Tuning and Method, that uniquely index the combinations of the smoothing-parameter values is specified, then groups are formed for each combination of the levels of the these factors, and a separate plot is produced for each combination. facet.x A character giving the names of the factors to be used to form subsets to be plotted in separate columns of the profiles plots and deviations boxplots. The default of "." results in no split into columns. facet.y A character giving the factors to be used to form subsets to be plotted in separate rows of the profiles plots and deviations boxplots. The default of "." results in no split into rows. include.raw A character indicating whether plots of the raw (unsmoothed) trait, corre- sponding to the plots of the smoothed traits, are to be included in profile plots. The options are no, alone, facet.x, or facet.y. That is, the plots of the raw traits are plotted separately or as part of either facet.x or facet.y. collapse.facets.x A logical to indicate whether all variables specified by facets.x are to be collapsed to a single variable. Note that the smoothing-parameters factors, if present, are always collapsed. collapse.facets.y A logical to indicate whether all variables specified by facets.y are to be collapsed to a single variable. Note that the smoothing-parameters factors, if present, are always collapsed. facet.labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot. facet.scales A character specifying whether the scales are shared across all facets of a plot ("fixed"), or do they vary across rows (the default, "free_x"), columns ("free_y"), or both rows and columns ("free")? breaks.spacing.x A numeric whose absolute values specifies the distance between breaks for the x-axis in a sequence beginning with the minimum x value and continuing up to the maximum x value. If it is negative, the breaks that do not have x values in data will be omitted. Minor breaks will be at half this value or, if these do not correspond to x-values in data when breaks.spacing.x is negative, have a spacing of one. Thus, when breaks.spacing.x is negative, grid lines will only be included for x-values that occur in data. These settings can be overwritten by supplying, in ggplotFuncs, a scale_x_continuous function from ggplot2. angle.x A numeric between 0 and 360 that gives the angle of the x-axis text to the x- axis. It can also be set by supplying, in ggplotFuncs, a theme function from ggplot2. colour A character specifying a single colour to use in drawing the lines for the pro- files. If colouring according to the values of a variable is required then use colour.column. 
colour.column A character giving the name of a column in data over whose values the colours of the lines are to be varied. The colours can be specified using colour.values. colour.values A character vector specifying the values of the colours to use in drawing the lines for the profiles. If this is a named vector, then the values will be matched based on the names. If unnamed, values will be matched in order (usually al- phabetical) within the limits of the scale. alpha A numeric specifying the degrees of transparency to be used in plotting the responses. It is a ratio in which the denominator specifies the number of points (or lines) that must be overplotted to give a solid cover. addMediansWhiskers A logical indicating whether plots over time of the medians and outer whiskers are to be added to the plot. The outer whiskers are related to the whiskers on a box-and-whisker and are defined as the median plus (and minus) 1.5 times the interquartile range (IQR). Points lying outside the whiskers are considered to be potential outliers. ggplotFuncs A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object for a profiles plot. ... allows arguments to be passed to other functions; not used at present. Value A named list. Author(s) <NAME> See Also traitSmooth, probeSmooths, plotSmoothsComparison and args4chosen_plot. Examples args4profile_plot(plots.by = "Type", facet.x = "Tuning", facet.y = c("Smarthouse", "Treatment.1"), include.raw = "facet.x", alpha = 0.4, colour.column = "Method", colour.values = c("orange", "olivedrab")) args4smoothing Creates a list of the values for the smoothing parameters to be passed to a smoothing function Description Creates a list of the values for the smoothing parameters to be passed to a smoothing function. Note that smoothing.methods, spline.types, df and lambdas are combined to define the set of smooths. The default settings are optimized for traitSmooth so that, if you want to change any of these from their default settings when using args4smoothing with a function other than traitSmooth, then it is recommended that you specify all of them to ensure that the complete set has been correctly specified. Otherwise, the default settings will be those shown here and these may be different to the default settings shown for the function with which you are using args4smoothing. Usage args4smoothing(smoothing.methods = "logarithmic", spline.types = c("NCSS","PS"), df = 5:7, lambdas = list(PS = round(10^c(-0.5, 0, 0.5, 1), smoothing.segments = NULL, npspline.segments = NULL, na.x.action="exclude", na.y.action = "trimx", external.smooths = NULL, correctBoundaries = FALSE, combinations = "allvalid", ...) Arguments smoothing.methods A character giving the smoothing method to use. The two possibilites are (i) "direct", for directly smoothing the observed response, and (ii) "logarithmic", for smoothing the log-transformed response and then back-transforming by taking the exponentional of the fitted values. spline.types A character giving the type of spline to use. Currently, the possibilites are (i) "NCSS", for natural cubic smoothing splines, and (ii) "PS", for P-splines. df A numeric with at least one value that specifies, for natural cubic smoothing splines (NCSS), the desired equivalent numbers of degrees of freedom of the smooths (trace of the smoother matrix). Lower values result in more smoothing. 
If df = NULL, the amount of smoothing can be controlled by including a compo- nent named NCSS in the list for lambdas. If df is NULL and lambda does not include a component named NCSS, then an error is issued. lambdas A named list or a numeric specifying the positive penalties to apply in order to control the amount of smoothing. The amount of smoothing decreases as lamda decreases. If lambdas is a list, then include a components with lambdas values and named for each of the specified values of spline.types for which lambdas are to be used. If spline.types includes PS, then a component named PS with at least one numeric value must be present. If a numeric, then it will be converted to a list with the single component named PS. smoothing.segments A named list, each of whose components is a numeric pair specifying the first and last values of an times-interval whose data is to be subjected as an entity to smoothing using splines. The separate smooths will be combined to form a whole smooth for each individual. If get.rates includes smoothed or is TRUE, rates.method is differences and ntimes2span is 2, the smoothed growth rates will be computed over the set of segments; otherwise, they will be computed within segments. If smoothing.segments is NULL, the data is not segmented for smoothing. npspline.segments A numeric specifying, for P-splines (PS), the number of equally spaced seg- ments between min(x) and max(x), excluding missing values, to use in con- structing the B-spline basis for the spline fitting. If npspline.segments is NULL, npspline.segments is set to the maximum of 10 and ceiling((nrow(data)-1)/2) i.e. there will be at least 10 segments and, for more than 22 times values, there will be half as many segments as there are times values. The amount of smooth- ing decreases as npspline.segments increases. When the data has been seg- mented for smoothing (smoothing.segments is not NULL), an npspline.segments value can be supplied for each segment. na.x.action A character string that specifies the action to be taken when values of x are NA. The possible values are fail, exclude or omit. For exclude and omit, predictions and derivatives will only be obtained for nonmissing values of x. The difference between these two codes is that for exclude the returned data.frame will have as many rows as data, the missing values have been incorporated. na.y.action A character string that specifies the action to be taken when values of y, or the response, are NA. The possible values are fail, exclude, omit, allx, trimx, ltrimx or rtrimx. For all options, except fail, missing values in y will be removed before smoothing. For exclude and omit, predictions and derivatives will be obtained only for nonmissing values of x that do not have missing y values. Again, the difference between these two is that, only for exclude will the missing values be incorporated into the returned data.frame. For allx, predictions and derivatives will be obtained for all nonmissing x. For trimx, they will be obtained for all nonmissing x between the first and last nonmissing y values that have been ordered for x; for ltrimx and utrimx either the lower or upper missing y values, respectively, are trimmed. external.smooths A data.frame containing the one or more smooths of a response in the column specified by smoothed.response. Multiple smoooths should be supplied in long.format with the same columns as the smooths.frame data, except for the smoothing-parameter columns Type, TunePar, TuneVal, Tuning and Method. 
Only those smoothing-parameter columns that are to be used in any of plots.by, plots.group, facet.x and facet.y should be included with labels appropriate to the external.smooths. Those smoothing-parameter columns not included in external.smooths will have columns of "Other" added to external.smooths. The growth rates will be computed by differencing according to the settings of get.rates and trait.types in the function that calls args4smoothing. correctBoundaries A logical indicating whether the fitted spline values are to have the method of Huang (2001) applied to them to correct for estimation bias at the end-points. Note that spline.type must be NCSS and lambda and deriv must be NULL for correctBoundaries to be set to TRUE. combinations A character specifying how the values of the different smoothing parameters are to be combined to specify the smooths that are to be obtained. The option allvalid results in a smooth for each of the combinations of the values of smoothing.methods, spline.types, df and lambdas that are valid; the other smoothing.args will be the same for all smooths. The option parallel specifies that, if set, each of four smoothing parameters, smoothing.methods, spline.types, df and lambdas, must have the same number of values and that this number is the number of different smooths to be produced. The values of the parameters in the same position within each pa- rameter collectively specify a single smooth. Because the value of only one of df and lambdas must be specified for a smooth, one of these must be set to NA and the other to the desired value for each smooth. If all values for one of them is NA, then the argument may be omitted or set to NULL. The option single is for the specification of a single smooth. This will mean that only one of df or lambdas should be set. ... allows arguments to be passed to other functions; not used at present. Value A named list. Author(s) <NAME> See Also traitSmooth and probeSmooths. Examples args4smoothing(smoothing.methods = "direct", spline.types = "NCSS", df = NULL, lambdas = NULL, smoothing.segments = NULL, npspline.segments = NULL, combinations = "allvalid") args4smoothing(smoothing.methods = c("log","dir","log"), spline.types = c("NCSS","NCSS","PS"), df = c(4,5,NA), lambdas = c(NA,NA,0.36), combinations = "parallel") args4smoothing(smoothing.methods = "log", spline.types = "PS", df = NULL, lambdas = 0.36, combinations = "single") as.smooths.frame Forms a smooths.frame from a data.frame, ensuring that the cor- rect columns are present. Description Creates a smooths.frame from a data.frame by adding the class smooths.frame and a set of attributes to it. Usage as.smooths.frame(data, individuals = NULL, times = NULL) Arguments data A data.frame containing the results of smoothing the data on a set of individuals over time, the data being arranged in long format both with respect to the times and the smoothing-parameter values. It must contain the columns Type, TunePar, TuneVal, Tuning and Method that give the smoothing-parameter values that were used to produce each smooth of the data, as well as the columns identifying the individuals, the observation times of the responses and the unsmoothed and smoothed responses. Each response occupies a single column. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). 
times A character giving the name of the numeric, or factor with numeric levels, that contains the values of the predictor variable to be supplied to smooth.spline and to be plotted on the x-axis.
Value
A smooths.frame.
Author(s)
<NAME>
See Also
validSmoothsFrame, as.smooths.frame
Examples
dat <- read.table(header = TRUE, text = "
Type TunePar TuneVal Tuning Method ID DAP PSA sPSA
NCSS df 4 df-4 direct 045451-C 28 57.446 51.18456
NCSS df 4 df-4 direct 045451-C 30 89.306 87.67343
NCSS df 7 df-7 direct 045451-C 28 57.446 57.01589
NCSS df 7 df-7 direct 045451-C 30 89.306 87.01316
")
dat[1:7] <- lapply(dat[1:7], factor)
dat <- as.smooths.frame(dat, individuals = "ID", times = "DAP")
is.smooths.frame(dat)
validSmoothsFrame(dat)
byIndv4Intvl_GRsAvg Calculates the growth rates for a specified time interval for individuals in a data.frame in long format by taking weighted averages of growth rates for times within the interval.
Description
Using previously calculated growth rates over time, calculates the Absolute Growth Rates for a specified interval using the weighted averages of AGRs for each time point in the interval (AGR) and the Relative Growth Rates for a specified interval using the weighted geometric means of RGRs for each time point in the interval (RGR).
Usage
byIndv4Intvl_GRsAvg(data, responses, individuals = "Snapshot.ID.Tag", times = "DAP",
                    which.rates = c("AGR","RGR"), suffices.rates = c("AGR","RGR"),
                    sep.rates = ".", start.time, end.time, suffix.interval,
                    sep.suffix.interval = ".", sep.levels = ".", na.rm = FALSE)
Arguments
data A data.frame containing the columns from which the growth rates are to be calculated.
responses A character giving the names of the responses for which there are columns in data that contain the growth rates that are to be averaged. The names of the growth rates should have either AGR or RGR appended to the responses names.
individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit).
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used in calculating growth rates and, if a factor or character, the values should be numerics stored as characters.
which.rates A character giving the growth rates that are to be averaged to obtain growth rates for an interval. It should be a combination of one or more of "AGR" and "RGR".
suffices.rates A character giving the suffices to be appended to response to form the names of the columns containing the calculated growth rates and in which growth rates are to be stored. Their elements will be matched with those of which.rates.
sep.rates A character giving the character(s) to be used to separate the suffices.rates value from a response value in constructing the name for a new rate. For no separator, set to "".
start.time A numeric giving the times, in terms of values in times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the start of the interval for which the growth rate is to be calculated.
end.time A numeric giving the times, in terms of values in times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the end of the interval for which the growth rate is to be calculated.
suffix.interval A character giving the suffix to be appended to response.suffices.rates to form the names of the columns containing the calculated the growth rates. sep.suffix.interval A character giving the separator to use in appending suffix.inteval to a growth rate. For no separator, set to "". sep.levels A character giving the separator to use when the levels of individuals are combined. This is needed to avoid using a character that occurs in a factor to delimit levels when the levels of individuals are combined to identify subsets. na.rm A logical indicating whether NA values should be stripped before the calcula- tion of weighted means proceeds. Details The AGR for an interval is calculated as the weighted mean of the AGRs for times within the interval. The RGR is calculated as the weighted geometric mean of the RGRs for times within the interval; in fact the exponential is taken of the weighted means of the logs of the RGRs. The weights are obtained from the times. They are taken as the sum of half the time subintervals before and after each time, except for the end points; the end points are taken to be the subintervals at the start and end of the interval. Value A data.frame with the growth rates. The name of each column is the concatenation of (i) one of responses, (ii) one of AGR, PGR or RGR, or the appropriate element of suffices.rates, and (iii) suffix.interval, the three components being separated by full stops. Author(s) <NAME> See Also byIndv4Intvl_GRsDiff, byIndv4Intvl_WaterUse, splitValueCalculate, getTimesSubset, GrowthRates, byIndv4Times_SplinesGRs, splitContGRdiff Examples data(exampleData) longi.dat <- byIndv4Times_SplinesGRs(data = longi.dat, response="PSA", response.smoothed = "sPSA", individuals = "Snapshot.ID.Tag", times = "DAP", df = 4, rates.method = "deriv", which.rates = c("AGR", "RGR"), suffices.rates = c("AGRdv", "RGRdv")) sPSA.GR <- byIndv4Intvl_GRsAvg(data = longi.dat, response="sPSA", times = "DAP", which.rates = c("AGR","RGR"), suffices.rates = c("AGRdv","RGRdv"), start.time = 31, end.time = 35, suffix.interval = "31to35") byIndv4Intvl_GRsDiff Calculates the growth rates for a specified time interval for individ- uals in a data.frame in long format by differencing the values for a response within the interval. Description Using the values of the responses, calculates the specified combination of the Absolute Growth Rates using differences (AGR), the Proportionate Growth Rates (PGR) and Relative Growth Rates using log differences (RGR) between two nominated time points. Usage byIndv4Intvl_GRsDiff(data, responses, individuals = "Snapshot.ID.Tag", times = "DAP", which.rates = c("AGR","PGR","RGR"), suffices.rates=NULL, sep.rates = ".", start.time, end.time, suffix.interval, sep.suffix.interval = ".") Arguments data A data.frame containing the column from which the growth rates are to be calculated. responses A character giving the names of the columns in data from which the growth rates are to be calculated. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used in calculating growth rates and, if a factor or character, the values should be numerics stored as characters. 
which.rates A character giving the growth rates that are to be calculated. It should be a combination of one or more of "AGR", "PGR" and "RGR". suffices.rates A character giving the characters to be appended to the names of the responses in constructing the names of the columns containing the calculated growth rates. The order of the suffices in suffices.rates should correspond to the order of the elements of which.rates. sep.rates A character giving the character(s) to be used to separate the suffices.rates value from a response value in constructing the name for a new rate. For no separator, set to "". start.time A numeric giving the times, in terms of values in times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the start of the interval for which the growth rate is to be calculated. end.time A numeric giving the times, in terms of values times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the end of the interval for which the growth rate is to be calculated. suffix.interval A character giving the suffix to be appended to response to form the names of the columns containing the calculated the growth rates. sep.suffix.interval A character giving the separator to use in appending suffix.inteval to a growth rate. For no separator, set to "". Details The AGR is calculated as the difference between the values of response at the end.time and start.time divided by the difference between end.time and start.time. The PGR is calcu- lated as the ratio of response at the end.time to that at start.time and the ratio raised to the power of the reciprocal of the difference between end.time and start.time. The RGR is calcu- lated as the log of the PGR and so is equal to the difference between the logarithms of response at the end.time and start.time divided by the difference between end.time and start.time. Value A data.frame with the growth rates. The name of each column is the concatenation of (i) one of responses, (ii) one of AGR, PGR or RGR, or the appropriate element of suffices.rates, and (iii) suffix.interval, the three components being separated by full stops. Author(s) <NAME> See Also byIndv4Intvl_GRsAvg, byIndv4Intvl_WaterUse, getTimesSubset, GrowthRates, byIndv4Times_SplinesGRs, splitContGRdiff Examples data(exampleData) sPSA.GR <- byIndv4Intvl_GRsDiff(data = longi.dat, responses = "sPSA", times = "DAP", which.rates = c("AGR","RGR"), start.time = 31, end.time = 35, suffix.interval = "31to35") byIndv4Intvl_ValueCalc Calculates a single value that is a function of the values of an indi- vidual for a response in a data.frame in long format over a specified time interval. Description Splits the values of a response into subsets corresponding individuals and applies a function that calculates a single value from each individual’s observations during a specified time interval. It includes the ability to calculate the observation number that is closest to the calculated value of the function and the assocated values of a factor or numeric. Usage byIndv4Intvl_ValueCalc(data, response, individuals = "Snapshot.ID.Tag", times = "DAP", FUN = "max", which.obs = FALSE, which.values = NULL, addFUN2name = TRUE, sep.FUNname = ".", start.time=NULL, end.time=NULL, suffix.interval=NULL, sep.suffix.interval = ".", sep.levels=".", weights=NULL, na.rm=TRUE, ...) Arguments data A data.frame containing the column from which the function is to be calcu- lated. 
response A character giving the name of the column in data from which the values of FUN are to be calculated. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used in calculating growth rates and, if a factor or character, the values should be numerics stored as characters. FUN A character giving the name of the function that calculates the value for each subset. which.obs A logical indicating whether or not to determine the observation number cor- responding to the observed value that is closest to the value of the function, in addition to the value of the function itself. That is, FUN need not return an observed value of the reponse, e.g. quantile. which.values A character giving the name of the factor or numeric whose values are as- sociated with the response values and whose value is to be returned for the observation number whose response value corresponds to the observed value closest to the value of the function. That is, FUN need not return an observed value of the reponse, e.g. quantile. In the case of multiple observed response values satisfying this condition, the value of the which.values vector for the first of these is returned. addFUN2name A logical that, if TRUE, indicates that the FUN name is to be added to the names of the columns in the data.frame returned by byIndv4Intvl_ValueCalc. sep.FUNname A character giving the character(s) to be used to separate the name of FUN from the response value in constructing the name for a new response. For no separator, set to "". start.time A numeric giving the times, in terms of levels of times.factor, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the start of the interval for which a value is to be calculated. If start.time is NULL, the interval will start with the first observation. In the case of multiple observed response values satisfying this condition, the first is returned. end.time A numeric giving the times, in terms of levels of times.factor, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the end of the interval for which a value is to be calculated. If end.time is NULL, the interval will end with the last observation. suffix.interval A character giving the suffix to be appended to response to form the name of the column containing the calculated values. If it is NULL then nothing will be appended. sep.suffix.interval A character giving the separator to use in appending suffix.inteval to a growth rate. For no separator, set to "". sep.levels A character giving the separator to use when the levels of individuals are combined. This is needed to avoid using a character that occurs in a factor to delimit levels when the levels of individuals are combined to identify subsets. weights A character giving the name of the column in data containing the weights to be supplied as w to FUN. na.rm A logical indicating whether NA values should be stripped before the calcula- tion proceeds. ... allows for arguments to be passed to FUN. Value A data.frame, with the same number of rows as there are individuals, containing a column for the individuals and a column with the values of the function for the individuals. 
It is also pos- sible to determine observaton numbers or the values of another column in data for the response values that are closest to the FUN results, using either or both of which.obs and which.values. If which.obs is TRUE, a column with observation numbers is included in the data.frame. If which.values is set to the name of a factor or a numeric, a column containing the levels of that factor or the values of that numeric is included in the data.frame. The name of the column with the values of the function will be result of concatenating the response, FUN and, if it is not NULL, suffix.interval, each separated by a full stop. If which.obs is TRUE, the column name for the obervations numbers will have .obs added after FUN into the column name for the function values; if which.values is specified, the column name for these values will have a full stop followed by which.values added after FUN into the column name for the function values. Author(s) <NAME> See Also byIndv4Intvl_GRsAvg, byIndv4Intvl_GRsDiff, byIndv4Intvl_WaterUse, splitValueCalculate, getTimesSubset Examples data(exampleData) sPSA.max <- byIndv4Intvl_ValueCalc(data = longi.dat, response = "sPSA", times = "DAP", start.time = 31, end.time = 35, suffix.interval = "31to35") AGR.max.dat <- byIndv4Intvl_ValueCalc(data = longi.dat, response = "sPSA", times = "DAP", FUN="max", start.time = 31, end.time = 35, suffix.interval = "31to35", which.values = "DAP", which.obs = TRUE) byIndv4Intvl_WaterUse Calculates, water use traits (WU, WUR, WUI) over a specified time interval for each individual in a data.frame in long format. Description Calculates one or more of water use (WU), water use rate (WUR), and, for a set of responses, water use indices (WUI)s over a specified time interval for each individual in a data.frame in long format. Usage byIndv4Intvl_WaterUse(data, water.use = "Water.Use", responses = NULL, individuals = "Snapshot.ID.Tag", times = "DAP", trait.types = c("WU", "WUR", "WUI"), suffix.rate = "R", suffix.index = "I", sep.water.traits = "", sep.responses = ".", start.time, end.time, suffix.interval = NULL, sep.suffix.interval = ".", na.rm = FALSE) Arguments data A data.frame containing the column from which the water use traits are to be calculated. water.use A character giving the names of the column in data that contains the water use values. responses A character giving the names of the columns in data for which WUIs are to be calculated. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used identifying the intervals and, if a factor or character, the values should be numerics stored as characters. trait.types A character listing the trait types to compute and return. It should be some combination of WU, WUR and WUI, or be all. See Details for how each is calcu- lated. suffix.rate A character giving the label to be appended to the value of water.use to form the name of the WUR. suffix.index A character giving the label to be appended to the value of water.use to form the name of the WUI. sep.water.traits A character giving the character(s) to be used to separate the suffix.rate and suffix.index values from the responses values in constructing the name for a new rate/index. 
The default of "" results in no separator.
sep.responses A character giving the character(s) to be used to separate the suffix.rate value from a responses value in constructing the name for a new index. For no separator, set to "".
start.time A numeric giving the times, in terms of values in times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the start of the interval for which the WUI is to be calculated.
end.time A numeric giving the times, in terms of values in times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the end of the interval for which the WUI is to be calculated.
suffix.interval A character giving the suffix to be appended to the names of the columns for the water use traits to indicate the interval for which the traits have been calculated.
sep.suffix.interval A character giving the separator to use in appending suffix.interval to a growth rate. For no separator, set to "".
na.rm A logical indicating whether NA values should be stripped before the calculation proceeds.
Details
WU is the water use and is the sum of the water use after start.time until end.time. Thus, the water use up to start.time is not included. WUR is the Water Use Rate and is WU divided by the difference between end.time and start.time. WUI is the Water Use Index and is calculated as a response difference between the start.time and the end.time, which is then divided by the WU.
Value
A data.frame containing the individuals column, WU and/or WUR and, if requested, a WUI for each element of responses. The names of WU and WUR will have suffix.interval appended, if it is not NULL, separated by a full stop ('.'). The name of each WUI will be the concatenation of an element of responses with WUI and, if not NULL, suffix.interval, the three components being separated by a full stop ('.').
Author(s)
<NAME>
See Also
byIndv4Intvl_GRsAvg, byIndv4Intvl_GRsDiff, splitValueCalculate, getTimesSubset, GrowthRates
Examples
data(exampleData)
WU.WUI_31_35 <- byIndv4Intvl_WaterUse(data = longi.dat, water.use = "WU",
                                      responses = "PSA", times = "DAP",
                                      trait.types = c("WUR","WUI"),
                                      suffix.rate = ".Rate", suffix.index = ".Index",
                                      start.time = 31, end.time = 35,
                                      suffix.interval = "31to35")
byIndv4Times_GRsDiff Adds, to a data.frame, the growth rates calculated for consecutive times for individuals in a data.frame in long format by differencing response values.
Description
Uses AGRdiff, PGR and RGRdiff to calculate growth rates continuously over time for the response by differencing pairs of response values and stores the results in data. The subsets are those values with the same combinations of the levels of the factors listed in individuals.
Usage
byIndv4Times_GRsDiff(data, responses, individuals = "Snapshot.ID.Tag", times = "DAP",
                     which.rates = c("AGR","PGR","RGR"), suffices.rates = NULL,
                     sep.rates = ".", avail.times.diffs = FALSE, ntimes2span = 2)
Arguments
data A data.frame containing the columns for which growth rates are to be calculated.
responses A character giving the names of the columns in data for which growth rates are to be calculated.
individuals A character giving the name(s) of the factor(s) that define the subsets of response that correspond to the response values for an individual (e.g. plant, pot, cart, plot or unit) for which growth rates are to be calculated continuously. If the columns corresponding to individuals are not factor(s) then they will be coerced to factor(s). The subsets are formed using split.
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used in calculating the growth rates. If a factor or character, the values should be numerics stored as characters. which.rates A character giving the growth rates that are to be calculated. It should be a combination of one or more of "AGR", "PGR" and "RGR". suffices.rates A character giving the characters to be appended to the names of the responses to provide the names of the columns containing the calculated growth rates. The order of the suffices in suffices.rates should correspond to the order of the elements of which.rates. If NULL, the values of which.rates are used. sep.rates A character giving the character(s) to be used to separate the suffices.rates value from a response value in constructing the name for a new rate. For no separator, set to "". avail.times.diffs A logical indicating whether there is an appropriate column of times diff- serences that can be used as the denominator in computing the growth rates. If TRUE, it will be assumed that the name of the column is the value of times with .diffs appended. If FALSE, a column, whose column name will be the value of times with .diffs appended, will be formed and saved in the result, overwriting any existing columns with the constructed name in data. It will be calculated using the values of times in data. ntimes2span A numeric giving the number of values in times to span in calculating growth rates by differencing. Each growth rate is calculated as the difference in the values of one of the responses for pairs of times values that are spanned by ntimes2span times values divided by the difference between this pair of times values. For ntimes2span set to 2, a growth rate is the difference between con- secutive pairs of values of one of the responses divided by the difference be- tween consecutive pairs of times values. Value A data.frame containing data to which has been added i) a column for the differences between the times, if it is not already in data, and (ii) columns with growth rates. The name of the column for times differences will be the value of times with ".diffs" appended. The name for each of the growth-rate columns will be either the value of response with one of ".AGR", ".PGR" or "RGR", or the corresponding value from suffices.rates appended. Each growth rate will be positioned at observation ceiling(ntimes2span + 1) / 2 relative to the two times from which the growth rate is calculated. Author(s) <NAME> See Also smoothSpline, byIndv4Times_SplinesGRs Examples data(exampleData) longi.dat <- byIndv4Times_GRsDiff(data = longi.dat, response = "sPSA", individuals = "Snapshot.ID.Tag", times = "DAP", which.rates=c("AGR", "RGR"), avail.times.diffs = TRUE) byIndv4Times_SplinesGRs For a response in a data.frame in long format, computes, for a single set of smoothing parameters, smooths of the response, possibly along with growth rates calculated from the smooths. Description Uses smoothSpline to fit a spline to the values of response for each individual and stores the fitted values in data. The degree of smoothing is controlled by the tuning parameters df and lambda, related to the penalty, and by npspline.segments. The smoothing.method provides for direct and logarithmic smoothing. The Absolute and Relative Growth Rates ( AGR and RGR) can be computed either using the first derivatives of the splines or by differencing the smooths. 
If using the first derivative to obtain growth rates, correctBoundaries must be FALSE. Derivatives other than the first derivative can also be produced. The function byIndv4Times_GRsDiff is used to obtain growth rates by differencing. The handling of missing values in the observations is controlled via na.x.action and na.y.action. If there are not at least four distinct, nonmissing x-values, a warning is issued and all smoothed val- ues and derivatives are set to NA. The function probeSmoothing can be used to investgate the effect the smoothing parameters (smoothing.method, df or lambda) on the smooth that results. Usage byIndv4Times_SplinesGRs(data, response, response.smoothed = NULL, individuals = "Snapshot.ID.Tag", times, smoothing.method = "direct", smoothing.segments = NULL, spline.type = "NCSS", df=NULL, lambda = NULL, npspline.segments = NULL, correctBoundaries = FALSE, rates.method = "differences", which.rates = c("AGR","RGR"), suffices.rates = NULL, sep.rates = ".", avail.times.diffs = FALSE, ntimes2span = 2, extra.derivs = NULL, suffices.extra.derivs=NULL, sep.levels = ".", na.x.action="exclude", na.y.action = "trimx", ...) Arguments data A data.frame containing the column to be smoothed. response A character giving the name of the column in data that is to be smoothed. response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response. If response.smoothed is NULL, then response.smoothed is set to the response to which is added the prefix s. individuals A character giving the name(s) of the factor(s) that define the subsets of response that correspond to the response values for an individual (e.g. plant, pot, cart, plot or unit) that are to be smoothed separately. If the columns cor- responding to individuals are not factor(s) then they will be coerced to factor(s). The subsets are formed using split. times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used as the values of the predictor variable to be supplied to smooth.spline and in calculating growth rates. If a factor or character, the values should be numerics stored as characters. smoothing.method A character giving the smoothing method to use. The two possibilites are (i) "direct", for directly smoothing the observed response, and (ii) "logarithmic", for smoothing the log-transformed response and then back-transforming by taking the exponentional of the fitted values. smoothing.segments A named list, each of whose components is a numeric pair specifying the first and last values of an times-interval whose data is to be subjected as an entity to smoothing using splines. The separate smooths will be combined to form a whole smooth for each individual. If get.rates is TRUE, rates.method is differences and ntimes2span is 2, the smoothed growth rates will be com- puted over the set of segments; otherwise, they will be computed within seg- ments. If smoothing.segments is NULL, the data is not segmented for smooth- ing. spline.type A character giving the type of spline to use. Currently, the possibilites are (i) "NCSS", for natural cubic smoothing splines, and (ii) "PS", for P-splines. df A numeric specifying, for natural cubic smoothing splines (NCSS), the desired equivalent number of degrees of freedom of the smooth (trace of the smoother matrix). Lower values result in more smoothing. 
If df = NULL, the amount of smoothing can be controlled by setting lambda. If both df and lambda are NULL, smoothing is controlled by the default arguments for smooth.spline, and any that you supply via the ellipsis (. . . ) argument. lambda A numeric specifying the positive penalty to apply. The amount of smoothing decreases as lamda decreases. npspline.segments A numeric specifying, for P-splines (PS), the number of equally spaced seg- ments between min(times) and max(times), excluding missing values, to use in constructing the B-spline basis for the spline fitting. If npspline.segments is NULL, npspline.segments is set to the maximum of 10 and ceiling((nrow(data)-1)/2) i.e. there will be at least 10 segments and, for more than 22 times values, there will be half as many segments as there are times values. The amount of smooth- ing decreases as npspline.segments increases. When the data has been seg- mented for smoothing (smoothing.segments is not NULL), an npspline.segments value can be supplied for each segment. correctBoundaries A logical indicating whether the fitted spline values are to have the method of Huang (2001) applied to them to correct for estimation bias at the end-points. Note that spline.type must be NCSS and lambda and deriv must be NULL for correctBoundaries to be set to TRUE. rates.method A character specifying the method to use in calculating the growth rates. The possibilities are none, differences and derivatives. which.rates A character giving the growth rates that are to be calculated. It should be a combination of one or more of "AGR", "PGR" and "RGR". suffices.rates A character giving the characters to be appended to the names of the responses to provide the names of the columns containing the calculated growth rates. The order of the suffices in suffices.rates should correspond to the order of the elements of which.rates. If NULL, the values of which.rates are used. sep.rates A character giving the character(s) to be used to separate the suffices.rates value from a response value in constructing the name for a new rate. For no separator, set to "". avail.times.diffs A logical indicating whether there is an appropriate column of times diff- serences that can be used as the denominator in computing the growth rates. If TRUE, it will be assumed that the name of the column is the value of times with .diffs appended. If FALSE, a column, whose column name will be the value of times with .diffs appended, will be formed and saved in the result, overwriting any existing columns with the constructed name in data. It will be calculated using the values of times in data. ntimes2span A numeric giving the number of values in times to span in calculating growth rates by differencing. Each growth rate is calculated as the difference in the values of one of the responses for pairs of times values that are spanned by ntimes2span times values divided by the difference between this pair of times values. For ntimes2span set to 2, a growth rate is the difference between con- secutive pairs of values of one of the responses divided by the difference be- tween consecutive pairs of times values. extra.derivs A numeric specifying one or more orders of derivatives that are required, in addition to any required for calculating the growth rates. When rates.method is derivatives, these can be derivatives other than the first. Otherwise, any derivatives can be specified. 
suffices.extra.derivs A character giving the characters to be appended to response.method to con- struct the names of the derivatives. If NULL and the derivatives are to be retained, then .dv followed by the order of the derivative is appended to response.method . sep.levels A character giving the separator to use when the levels of individuals are combined. This is needed to avoid using a character that occurs in a factor to delimit levels when the levels of individuals are combined to identify subsets. na.x.action A character string that specifies the action to be taken when values of x, or the times, are NA. The possible values are fail, exclude or omit. For exclude and omit, predictions and derivatives will only be obtained for nonmissing values of x. The difference between these two codes is that for exclude the returned data.frame will have as many rows as data, the missing values have been incorporated. na.y.action A character string that specifies the action to be taken when values of y, or the response, are NA. The possible values are fail, exclude, omit, allx, trimx, ltrimx or rtrimx. For all options, except fail, missing values in y will be removed before smoothing. For exclude and omit, predictions and derivatives will be obtained only for nonmissing values of x that do not have missing y values. Again, the difference between these two is that, only for exclude will the missing values be incorporated into the returned data.frame. For allx, predictions and derivatives will be obtained for all nonmissing x. For trimx, they will be obtained for all nonmissing x between the first and last nonmissing y values that have been ordered for x; for ltrimx and utrimx either the lower or upper missing y values, respectively, are trimmed. ... allows for arguments to be passed to smooth.spline. Value A data.frame containing data to which has been added a column with the fitted smooth, the name of the column being the value of response.smoothed. If rates.method is not none, columns for the growth rates listed in which.rates will be added to data; the names each of these columns will be the value of response.smoothed with the elements of which.rates appended. When rates.method is derivatives and smoothing.method is direct, the AGR is obtained from the first derivative of the spline for each value of times and the RGR is calculated as the AGR di- vided by the value of the response.smoothed for the corresponding time. When rates.method is derivatives and smoothing.method is logarithmic, the RGR is obtained from the first deriva- tive of the spline and the AGR is calculated as the RGR multiplied by the corresponding value of the response.smoothed. If extra.derivs is not NULL, the values for the nominated derivatives will also be added to data; the names each of these columns will be the value of response.smoothed with .dvf appended, where f is the order of the derivative, or the value of response.smoothed with the corresponding element of suffices.deriv appended. Any pre-existing smoothed and growth rate columns in data will be replaced. The ordering of the data.frame for the times values will be preserved as far as is possible; the main difficulty is with the handling of missing values by the function merge. Thus, if missing values in times are retained, they will occur at the bottom of each subset of individuals and the order will be problematic when there are missing values in y and na.y.action is set to omit. Author(s) <NAME> References <NAME> and <NAME>. 
(2021) Practical smoothing: the joys of P-splines. Cambridge Uni- versity Press, Cambridge. <NAME>. (2001) Boundary corrected cubic smoothing splines. Journal of Statistical Computation and Simulation, 70, 107-121. See Also smoothSpline, probeSmoothing, byIndv4Times_GRsDiff, smooth.spline, predict.smooth.spline, split Examples data(exampleData) #smoothing with growth rates calculated using derivates longi.dat <- byIndv4Times_SplinesGRs(data = longi.dat, response="PSA", response.smoothed = "sPSA", times="DAP", df = 4, rates.method = "deriv", suffices.rates = c("AGRdv", "RGRdv")) #Use P-splines longi.dat <- byIndv4Times_SplinesGRs(data = longi.dat, response="PSA", response.smoothed = "sPSA", individuals = "Snapshot.ID.Tag", times="DAP", spline.type = "PS", lambda = 0.1, npspline.segments = 10, rates.method = "deriv", suffices.rates = c("AGRdv", "RGRdv")) #with segmented smoothing and no growth rates longi.dat <- byIndv4Times_SplinesGRs(data = longi.dat, response="PSA", response.smoothed = "sPSA", individuals = "Snapshot.ID.Tag", times="DAP", smoothing.segments = list(c(28,34), c(35,42)), df = 5, rates.method = "none") byIndv_ValueCalc Calculates a single value that is a function of an individual’s values for a response. Description Applies a function to calculate a single value from an individual’s values for a response in a data.frame in long format. It includes the ability to calculate the observation number that is closest to the calculated value of the function and the assocated values of a factor or numeric. Usage byIndv_ValueCalc(data, response, individuals = "Snapshot.ID.Tag", FUN = "max", which.obs = FALSE, which.values = NULL, addFUN2name = TRUE, sep.FUNname = ".", weights = NULL, na.rm=TRUE, sep.levels = ".", ...) Arguments data A data.frame containing the column from which the function is to be calcu- lated. response A character giving the name of the column in data from which the values of FUN are to be calculated. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). FUN A character giving the name of the function that calculates the value for each subset. which.obs A logical indicating whether or not to determine the observation number cor- responding to the observed value that is closest to the value of the function, in addition to the value of the function itself. That is, FUN need not return an ob- served value of the reponse, e.g. quantile. In the case of multiple observed response values satisfying this condition, the first is returned. which.values A character giving the name of the factor or numeric whose values are as- sociated with the response values and whose value is to be returned for the observation number whose response value corresponds to the observed value closest to the value of the function. That is, FUN need not return an observed value of the reponse, e.g. quantile. In the case of multiple observed response values satisfying this condition, the value of the which.values vector for the first of these is returned. addFUN2name A logical that, if TRUE, indicates that the FUN name is to be added to the names of the columns in the data.frame returned by byIndv4Intvl_ValueCalc. sep.FUNname A character giving the character(s) to be used to separate the name of FUN from the response value in constructing the name for a new response. For no separator, set to "". 
weights A character giving the name of the column in data containing the weights to be supplied as w to FUN.
na.rm A logical indicating whether NA values should be stripped before the calculation proceeds.
sep.levels A character giving the separator to use when the levels of individuals are combined. This is needed to avoid using a character that occurs in a factor to delimit levels when the levels of individuals are combined to identify subsets.
... allows for arguments to be passed to FUN.
Value
A data.frame, with the same number of rows as there are individuals, containing a column for the individuals and a column with the values of the function for the individuals. It is also possible to determine observation numbers or the values of another column in data for the response values that are closest to the FUN results, using either or both of which.obs and which.values. If which.obs is TRUE, a column with observation numbers is included in the data.frame. If which.values is set to the name of a factor or a numeric, a column containing the levels of that factor or the values of that numeric is included in the data.frame.
The name of the column with the values of the function will be formed by concatenating the response and FUN, separated by a full stop. If which.obs is TRUE, the column name for the observation numbers will have .obs added after FUN into the column name for the function values; if which.values is specified, the column name for these values will have a full stop followed by which.values added after FUN into the column name for the function values.
Author(s)
<NAME>
See Also
byIndv4Intvl_ValueCalc, byIndv4Times_GRsDiff, byIndv4Times_SplinesGRs
Examples
data(exampleData)
sPSA.max.dat <- byIndv_ValueCalc(data = longi.dat, response = "PSA")
AGR.max.dat <- byIndv_ValueCalc(data = longi.dat, response = "sPSA.AGR",
                                FUN = "max", which.values = "DAP", which.obs = TRUE)
sPSA.dec1.dat <- byIndv_ValueCalc(data = longi.dat, response = "sPSA",
                                  FUN = "quantile", which.values = "DAP", probs = 0.1)
calcLagged Replaces the values in a vector with the result of applying an operation to it and a lagged value
Description
Replaces the values in x with the result of applying an operation to it and the value that is lag positions either before it or after it in x, depending on whether lag is positive or negative. For positive lag the first lag values will be NA, while for negative lag the last lag values will be NA. When operation is NULL, the values are moved lag positions down the vector.
Usage
calcLagged(x, operation = NULL, lag = 1)
Arguments
x A vector containing the values on which the calculations are to be made.
operation A character giving the operation to be performed on pairs of values in x. If operation is NULL then the values are moved lag positions down the vector.
lag An integer specifying, for the second value in the pair to be operated on, the number of positions it is ahead of or behind the current value.
Value
A vector containing the result of applying operation to values in x. For positive lag the first lag values will be NA, while for negative lag the last lag values will be NA.
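As a small illustration of the behaviour described above (a minimal sketch assuming only the documented interface of calcLagged; the base-R line is shown purely for comparison and is not the internal code), subtracting the value one position earlier gives a lagged difference with an NA in the first position:
## Minimal sketch: lagged differences as described above (lag = 1, operation "-").
x <- c(2, 5, 9, 14)
calcLagged(x, operation = "-", lag = 1)  # each value minus its predecessor; first value NA
c(NA, diff(x, lag = 1))                  # the same lagged differences computed with base R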
Author(s)
<NAME>
See Also
Ops
Examples
data(exampleData)
longi.dat$DAP.diffs <- calcLagged(longi.dat$xDAP, operation = "-")
calcTimes Calculates, for a set of times, the time intervals after an origin time and the position of each within a time interval
Description
For the column specified in imageTimes, having converted it to POSIXct if not already converted, calculates for each value the number of intervalUnits between the time and the startTime. Then the number of timePositions within the intervals is calculated for the values in imageTimes. The function difftime is used in doing the calculations, but the results are converted to numeric. For example, intervals could correspond to the number of Days after Planting (DAP) and the timePositions to the hour within each day.
Usage
calcTimes(data, imageTimes = NULL, timeFormat = "%Y-%m-%d %H:%M",
          intervals = "Time.after.Planting..d.", startTime = NULL,
          intervalUnit = "days", timePositions = NULL)
Arguments
data A data.frame containing any columns specified by imageTimes, intervals and timePositions.
imageTimes A character giving the name of the column that contains the time that each cart was imaged. Note that in importing data into R, spaces and nonalphanumeric characters in names are converted to full stops. If imageTimes is NULL then no calculations are done.
timeFormat A character giving the POSIXct format of characters containing times, in particular imageTimes and startTime. Note that if fractions of seconds are required, options(digits.secs) must be used to set the number of decimal places and timeFormat must use %OS for seconds.
intervals A character giving the name of the column in data containing, as a numeric or a factor, the calculated times after startTime to be plotted on the x-axis. It is given as the number of intervalUnits between the two times. If startTime is NULL then intervals is not calculated.
startTime A character giving the time, in the POSIXct format specified by timeFormat, to be subtracted from imageTimes to calculate intervals. For example, it might be the day of planting or treatment. If startTime is NULL then intervals is not calculated.
intervalUnit A character giving the name of the unit in which the values of the intervals should be expressed. It must be one of "secs", "mins", "hours" or "days".
timePositions A character giving the name of the column in data containing, as a numeric, the value of the time position within an interval (for example, the time of imaging during the day expressed in hours plus a fraction of an hour). If timePositions is NULL then it is not calculated.
Value
A data.frame, being the unchanged data data.frame when imageTimes is NULL, or containing either intervals and/or timePositions depending on which is not NULL.
Author(s)
<NAME>
See Also
as.POSIXct, imagetimesPlot.
Examples
data(exampleData)
raw.dat <- calcTimes(data = raw.dat, imageTimes = "Snapshot.Time.Stamp",
                     timePositions = "Hour")
cumulate Calculates the cumulative sum, ignoring the first element if exclude.1st is TRUE
Description
Uses cumsum to calculate the cumulative sum, ignoring the first element if exclude.1st is TRUE.
Usage
cumulate(x, exclude.1st = FALSE, na.rm = FALSE, ...)
Arguments
x A vector containing the values to be cumulated.
exclude.1st A logical indicating whether or not the first value of the cumulative sum is to be NA.
na.rm A logical indicating whether NA values should be stripped before the computation proceeds.
... allows passing of arguments to other functions; not used at present.
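A brief illustrative sketch of the two settings described above (assuming only the behaviour documented for cumulate and exclude.1st; no numerical results are asserted here):
## Illustrative sketch of cumulate() as documented above.
x <- c(3, 1, 4, 1, 5)
cumulate(x)                      # the cumulative sum (uses cumsum)
cumulate(x, exclude.1st = TRUE)  # first value of the result is NA (see exclude.1st above)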
Value
A vector containing the cumulative sum.
Author(s)
<NAME>
See Also
cumsum
Examples
data(exampleData)
PSA.cum <- cumulate(longi.dat$PSA)
designFactors Adds the factors and covariates for a blocked, split-unit design
Description
Adds the following factors and covariates to a data.frame containing imaging data from the Plant Accelerator: Zone, xZone, SHZone, ZLane, ZMainunit, Subunit and xMainPosn. It checks that the numbers of levels of the factors are consistent with the observed numbers of carts and observations.
Usage
designFactors(data, insertName = NULL, designfactorMethod = "LanePosition",
              nzones = 6, nlanesperzone = 4, nmainunitsperlane = 11,
              nsubunitspermain = 2)
Arguments
data A data.frame to which are to be added the design factors and covariates and which must contain the following columns: Smarthouse, Snapshot.ID.Tag, xDAP, and, if designfactorMethod = "LanePosition", Lane and Position.
insertName A character giving the name of the column in the data.frame after which the new factors and covariates are to be inserted. If NULL, they are added after the last column.
designfactorMethod A character giving the method to use to obtain the columns for the design factors Zone, ZLane, Mainunit and Subunit. For LanePosition, it is assumed that (i) Lane can be divided into Zone and ZLane, each with nzones and nlanesperzone levels, respectively, and (ii) Position can be divided into Mainunit and Subunit, each with nmainunitsperlane and nsubunitspermain levels, respectively. The factor SHZone is formed by combining Smarthouse and Zone, and ZMainunit is formed by combining ZLane and Mainunit. For StandardOrder, the factors Zone, ZLane, Mainunit, Subunit are generated in standard order, with the levels of Subunit changing for every observation and the levels of each subsequent factor changing only after all combinations of the levels of the factors to its right have been cycled through.
nzones A numeric giving the number of zones in a smarthouse.
nlanesperzone A numeric giving the number of lanes in each zone.
nmainunitsperlane A numeric giving the number of mainunits in each lane.
nsubunitspermain A numeric giving the number of subunits in a main plot.
Details
The factors Zone, ZLane, ZMainunit and Subunit are derived for each Smarthouse based on the values of nzones, nlanesperzone, nmainunitsperlane, nsubunitspermain, Zone being the blocks in the split-unit design. Thus, the number of carts in each Smarthouse must be the product of these values and the number of observations must be the product of the numbers of smarthouses, carts and imagings for each cart. If this is not the case, it may be possible to achieve it by including in data rows for extra observations that have values for Snapshot.ID.Tag, Smarthouse, Lane, Position and Time.after.Planting..d., with the remaining columns for these rows having missing values (NA). Then SHZone is formed by combining Smarthouse and Zone, and the covariates cZone, cMainPosn and cPosn are calculated. The covariate cZone is calculated from Zone and cMainPosn is formed from the mean of cPosn for each main plot.
Value
A data.frame including the columns:
1. Smarthouse: factor with levels for the Smarthouse
2. Zone: factor dividing the Lanes into groups, usually of 4 lanes
3. cZone: numeric corresponding to Zone, centred by subtracting the mean of the unique positions
4. SHZone: factor for the combinations of Smarthouse and Zone
5. ZLane: factor for the lanes within a Zone
6. ZMainunit: factor for the main units within a Zone
7. Subunit: factor for the subunits
8.
cMainPosn: numeric for the main-plot positions within a Lane, centred by subtracting the mean of the unique Positions 9. cPosn: numeric for the Positions within a Lane, centred by subtracting the mean of the unique Positions Author(s) <NAME> Examples data(exampleData) longi.dat <- prepImageData(data = raw.dat, smarthouse.lev = 1) longi.dat <- designFactors(data = longi.dat, insertName = "Reps", nzones = 1, nlanesperzone = 1, nmainunitsperlane = 10, designfactorMethod="StandardOrder") exampleData A small data set to use in function examples Description Imaging data for 20 of the plants that were imaged over 14 days from an experiment in a Smarthouse in the Plant Accelerator. Producing these files is illustrated in the Rice vignette and the data is used as a small example in the growthPheno manual. Usage data(exampleData) Format Three data.frames: 1. raw.dat (280 rows by 33 columns) that contains the imaging data for 20 plants by 14 imaging days as produced by the image processing software; 2. longi.dat (280 rows by 37 columns) that contains a modified version of the imaging data for the 20 plants by 14 imaging days in raw.dat; 3. cart.dat (20 rows by 14 columns) that contains data summarizing the growth features of the 20 plants produced from the data in longi.dat. fitSpline Fits a spline to a response in a data.frame, and growth rates can be computed using derivatives Description Uses smooth.spline to fit a natural cubic smoothing spline or JOPS to fit a P-spline to all the values of response stored in data. The amount of smoothing can be controlled by tuning parameters, these being related to the penalty. For a natural cubic smoothing spline, these are df or lambda and, for a P-spline, it is lambda. For a P-spline, npspline.segments also influences the smoothness of the fit. The smoothing.method provides for direct and logarithmic smoothing. The method of Huang (2001) for correcting the fitted spline for estimation bias at the end-points will be applied when fitting using a natural cubic smoothing spline if correctBoundaries is TRUE. The derivatives of the fitted spline can also be obtained, and the Absolute and Relative Growth Rates ( AGR and RGR) computed using them, provided correctBoundaries is FALSE. Otherwise, growth rates can be obtained by difference using byIndv4Times_GRsDiff. The handling of missing values in the observations is controlled via na.x.action and na.y.action. If there are not at least four distinct, nonmissing x-values, a warning is issued and all smoothed val- ues and derivatives are set to NA. The function probeSmoothing can be used to investgate the effect the smoothing parameters (smoothing.method and df or lambda) on the smooth that results. Usage fitSpline(data, response, response.smoothed, x, smoothing.method = "direct", spline.type = "NCSS", df = NULL, lambda = NULL, npspline.segments = NULL, correctBoundaries = FALSE, deriv = NULL, suffices.deriv = NULL, extra.rate = NULL, na.x.action = "exclude", na.y.action = "trimx", ...) Arguments data A data.frame containing the column to be smoothed. response A character giving the name of the column in data that is to be smoothed. response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response. x A character giving the name of the column in data that contains the values of the predictor variable. smoothing.method A character giving the smoothing method to use. 
The two possibilites are (i) "direct", for directly smoothing the observed response, and (ii) "logarithmic", for smoothing the log-transformed response and then back-transforming by taking the exponentional of the fitted values. spline.type A character giving the type of spline to use. Currently, the possibilites are (i) "NCSS", for natural cubic smoothing splines, and (ii) "PS", for P-splines. df A numeric specifying, for natural cubic smoothing splines (NCSS), the desired equivalent number of degrees of freedom of the smooth (trace of the smoother matrix). Lower values result in more smoothing. If df = NULL, the amount of smoothing can be controlled by setting lambda. If both df and lambda are NULL, smoothing is controlled by the default arguments for smooth.spline, and any that you supply via the ellipsis (. . . ) argument. lambda A numeric specifying the positive penalty to apply. The amount of smoothing decreases as lamda decreases. npspline.segments A numeric specifying, for P-splines (PS), the number of equally spaced seg- ments between min(x) and max(x), excluding missing values, to use in con- structing the B-spline basis for the spline fitting. If npspline.segments is NULL, npspline.segments is set to the maximum of 10 and ceiling((nrow(data)-1)/2) i.e. there will be at least 10 segments and, for more than 22 x values, there will be half as many segments as there are x values. The amount of smoothing de- creases as npspline.segments increases. correctBoundaries A logical indicating whether the fitted spline values are to have the method of Huang (2001) applied to them to correct for estimation bias at the end-points. Note that spline.type must be NCSS and lambda and deriv must be NULL for correctBoundaries to be set to TRUE. deriv A numeric specifying one or more orders of derivatives that are required. suffices.deriv A character giving the characters to be appended to response.method to con- struct the names of the derivatives. If NULL and the derivatives are to be retained, then .dv followed by the order of the derivative is appended to response.method . extra.rate A named character nominating a single growth rate (AGR or RGR) to be com- puted using the first derivative, which one being dependent on the smoothing.method. The name of this element will used as a suffix to be appended to the response when naming the resulting growth rate (see Examples). If unamed, AGR or RGR will be used, as appropriate. Note that, for the smoothing.method set to direct, the first derivative is the AGR and so extra.rate must be set to RGR, which is computed as the AGR / smoothed response. For the smoothing.method set to logarithmic, the first derivative is the RGR and so extra.rate must be set to AGR, which is computed as the RGR * smoothed response. Make sure that deriv includes one so that the first derivative is available for calculating the extra.rate. na.x.action A character string that specifies the action to be taken when values of x are NA. The possible values are fail, exclude or omit. For exclude and omit, predictions and derivatives will only be obtained for nonmissing values of x. The difference between these two codes is that for exclude the returned data.frame will have as many rows as data, the missing values have been incorporated. na.y.action A character string that specifies the action to be taken when values of y, or the response, are NA. The possible values are fail, exclude, omit, allx, trimx, ltrimx or rtrimx. 
For all options, except fail, missing values in y will be removed before smoothing. For exclude and omit, predictions and derivatives will be obtained only for nonmissing values of x that do not have missing y values. Again, the difference between these two is that, only for exclude will the missing values be incorporated into the returned data.frame. For allx, predictions and derivatives will be obtained for all nonmissing x. For trimx, they will be obtained for all nonmissing x between the first and last nonmissing y values that have been ordered for x; for ltrimx and utrimx either the lower or upper missing y values, respectively, are trimmed.
... allows for arguments to be passed to smooth.spline.
Value
A list with two components named predictions and fit.spline. The predictions component is a data.frame containing x and the fitted smooth. The names of the columns will be the value of x and the value of response.smoothed. The number of rows in the data.frame will be equal to the number of pairs that have neither a missing x nor response, and the order of x will be the same as the order in data. If deriv is not NULL, columns containing the values of the derivative(s) will be added to the data.frame; the name of each of these columns will be the value of response.smoothed with .dvf appended, where f is the order of the derivative, or the value of response.smoothed with the corresponding element of suffices.deriv appended. If RGR is not NULL, the RGR is calculated as the ratio of the value of the first derivative of the fitted spline to the fitted value for the spline.
The fit.spline component is a list with components x: the distinct x values in increasing order; y: the fitted values, with boundary values possibly corrected, and corresponding to x; lev: leverages, the diagonal values of the smoother matrix (NCSS only); lambda: the value of lambda (corresponding to spar for NCSS - see smooth.spline); df: the effective degrees of freedom; npspline.segments: the number of equally spaced segments used for smoothing method set to PS; uncorrected.fit: the object returned by smooth.spline for smoothing method set to NCSS or by JOPS::psNormal for PS.
Author(s)
<NAME>
References
Eilers, P.H.C. and <NAME>.D. (2021) Practical smoothing: the joys of P-splines. Cambridge University Press, Cambridge.
<NAME>. (2001) Boundary corrected cubic smoothing splines. Journal of Statistical Computation and Simulation, 70, 107-121.
See Also
splitSplines, probeSmoothing, byIndv4Times_GRsDiff, smooth.spline, predict.smooth.spline, JOPS.
Examples
data(exampleData)
fit <- fitSpline(longi.dat, response="PSA", response.smoothed = "sPSA", x="xDAP", df = 4, deriv=c(1,2), suffices.deriv=c("AGRdv","Acc"))
fit <- fitSpline(longi.dat, response="PSA", response.smoothed = "sPSA", x="xDAP", spline.type = "PS", lambda = 0.1, npspline.segments = 10, deriv=c(1,2), suffices.deriv=c("AGRdv","Acc"))
fit <- fitSpline(longi.dat, response="PSA", response.smoothed = "sPSA", x="xDAP", df = 4, deriv=c(1), suffices.deriv=c("AGRdv"), extra.rate = c(RGR.dv = "RGR"))
getTimesSubset Forms a subset of responses in data that contains their values for the nominated times
Description
Forms a subset of each of the responses in data that contains their values for the nominated times in a single column.
Usage
getTimesSubset(data, responses, individuals = "Snapshot.ID.Tag", times = "DAP", which.times, suffix = NULL, sep.suffix.times = ".", include.times = FALSE, include.individuals = FALSE)
Arguments
data A data.frame containing the columns whose values are to be subsetted.
responses A character giving the names of the columns in data whose values are to be subsetted.
individuals A character giving the name of the column in data containing an identifier for each individual (e.g. plant, pot, cart, plot or unit).
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used to identify the subset and, if a factor or character, the values should be numerics stored as characters.
which.times A vector giving the times that are to be selected.
suffix A character giving the suffix to be appended to responses to form the names of the columns containing the subset.
sep.suffix.times A character giving the separator to use in appending a suffix for times to a trait. For no separator, set to "".
include.times A logical indicating whether or not to include the times in the result, the name in the result having the suffix, with a separating full stop, appended.
include.individuals A logical indicating whether or not to include the individuals column in the result.
Value
A data.frame containing the subset of responses ordered by as many of the initial columns of data as are required to uniquely identify each row (see order for more information). The names of the columns for each of the responses and for times in the subset are the concatenation of their names in data and suffix, separated by a full stop.
Author(s)
<NAME>
Examples
data(exampleData)
sPSALast <- getTimesSubset("sPSA", data = longi.dat, times = "DAP", which.times = c(42), suffix = "last")
growthPheno-deprecated Deprecated Functions in the Package growthPheno
Description
These functions have been renamed and deprecated in growthPheno: 1. getDates -> getTimesSubset 2. anomPlot -> plotAnom 3. corrPlot -> plotCorrmatrix 4. imagetimesPlot -> plotImagetimes 5. longiPlot -> plotProfiles 6. probeDF -> probeSmooths
Usage
getDates(...)
anomPlot(...)
corrPlot(...)
imagetimesPlot(...)
longiPlot(...)
probeDF(...)
Arguments
... absorbs arguments passed from the old functions of the style foo.bar().
Author(s)
<NAME>
GrowthRates Calculates growth rates (AGR, PGR, RGRdiff) between pairs of values in a vector
Description
Calculates either the Absolute Growth Rate (AGR), Proportionate Growth Rate (PGR) or Relative Growth Rate (RGR) between pairs of time points, the second of which is lag positions before the first in x.
Usage
AGRdiff(x, time.diffs, lag=1)
PGR(x, time.diffs, lag=1)
RGRdiff(x, time.diffs, lag=1)
Arguments
x A numeric from which the growth rates are to be calculated.
time.diffs A numeric giving the time differences between successive values in x.
lag An integer specifying, for the second value in the pair to be operated on, the number of positions it is ahead of the current value.
Details
The AGRdiff is calculated as the difference between a pair of values divided by the time.diffs. The PGR is calculated as the ratio of a value to a second value which is lag values ahead of the first in x and the ratio raised to the power of the reciprocal of time.diffs. The RGRdiff is calculated as the log of the PGR and so is equal to the difference between the logarithms of a pair of values divided by the time.diffs.
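As a numerical illustration of these formulas (a minimal sketch using hypothetical values, not taken from exampleData), the three rates for a pair of observations taken two days apart can be computed directly in base R:

## two hypothetical observations of a response taken 2 days apart
y1 <- 10; y2 <- 15; t.diff <- 2
(y2 - y1) / t.diff           # AGRdiff: difference divided by the time difference, 2.5
(y2 / y1)^(1 / t.diff)       # PGR: ratio raised to the reciprocal of the time difference, approx. 1.22
log(y2 / y1) / t.diff        # RGRdiff: difference of the logs divided by the time difference, approx. 0.20
log((y2 / y1)^(1 / t.diff))  # equivalently, the log of the PGR gives the same value

The functions AGRdiff, PGR and RGRdiff apply these same formulas element-wise along a vector of values, returning NA for the first lag elements.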
The differences and ratios are obtained using calcLagged with lag = 1. Value A numeric containing the growth rates which is the same length as x and in which the first lag values NA. Author(s) <NAME> See Also byIndv4Intvl_GRsAvg, byIndv4Intvl_GRsDiff, byIndv4Times_GRsDiff, byIndv4Times_SplinesGRs, calcLagged Examples data(exampleData) longi.dat$PSA.AGR <- with(longi.dat, AGRdiff(PSA, time.diffs = DAP.diffs)) importExcel Imports an Excel imaging file and allows some renaming of variables Description Uses readxl to import a sheet of imaging data produced by the Lemna Tec Scanalyzer. Basically, the data consists of imaging data obtained from a set of pots or carts over time. There should be a column, which by default is called Snapshot.ID.Tag, containing a unique identifier for each cart and a column, which by default is labelled Snapshot.Time.Stamp, containing the time of imaging for each observation in a row of the sheet. Also, if startTime is not NULL, calcTimes is called to calculate, or recalculate if already present, timeAfterStart from imageTimes by subtracting a supplied startTime. Using cameraType, keepCameraType, labsCamerasViews and prefix2suffix, some flexibility is provided for renaming the columns with imaging data. For example, if the column names are prefixed with ’RGB_SV1’, ’RGB_SV2’ or ’RGB_TV’, the ’RGB_’ can be removed and the ’SV1’, ’SV2’ or ’TV’ become suffices. Usage importExcel(file, sheet="raw data", sep = ",", cartId = "Snapshot.ID.Tag", imageTimes = "Snapshot.Time.Stamp", timeAfterStart = "Time.after.Planting..d.", cameraType = "RGB", keepCameraType = FALSE, labsCamerasViews = NULL, prefix2suffix = TRUE, startTime = NULL, timeFormat = "%Y-%m-%d %H:%M", plotImagetimes = TRUE, ...) Arguments file A character giving the path and name of the file containing the data. sheet A character giving the name of the sheet containing the data, that must include columns whose names are as specified by cartId, which uniquely indexes the carts in the experiment, and imageTimes, which reflects the time of the imaging from which a particular data value was obtained. It is also assumed that a col- umn whose name is specified by timeAfterStart is in the sheet or that it will be calculated from imageTimes using the value of startTime supplied in the function call. sep A character giving the separator used in a csv file. cartId A character giving the name of the column that contains the unique Id for each cart. Note that in importing data into R, spaces and nonalphanumeric characters in names are converted to full stops. imageTimes A character giving the name of the column that contains the time that each cart was imaged. Note that in importing data into R, spaces and nonalphanumeric characters in names are converted to full stops. timeAfterStart A character giving the name of the column that contains or is to contain the difference between imageTimes and startTime. The function calcTimes is called to calculate the differences. For example, it might contain the number of days after planting. Note that in importing data into R, spaces and nonalphanu- meric characters in names are converted to full stops. cameraType A character string nominating the abbreviation used for the cameraType. A warning will be given if no variable names include this cameraType. keepCameraType A logical specifying whether to retain the cameraType in the variables names. It will be the start of the prefix or suffix and separated from the remander of the prefix or suffix by an underscore (_). 
labsCamerasViews A named character whose elements are new labels for the camera-view combinations and the name of each element is the old label for the camera-view combination in the data being imported. If labsCamerasViews is NULL, all column names beginning with cameraType are classed as imaging variables and the unique prefixes amongst them determined. If no imaging variables are found then no changes are made. Note that if you want to include a recognisable cameraType in a camera-view label, it should be at the start of the label in labsCamerasViews and separated from the rest of the label by an underscore (_).
prefix2suffix A logical specifying whether the variable names with prefixed camera-view labels are to have those prefixes transferred to become suffices. The prefix is assumed to be all the characters up to the first full stop (.) in the variable name and must contain cameraType to be moved. It is generally assumed that the characters up to the first underscore (_) are the camera type and this is removed if keepCameraType is FALSE. If there is no underscore (_), the whole prefix is moved. If labsCamerasViews is NULL, all column names beginning with cameraType are classed as imaging variables and the unique prefixes amongst them determined. If no imaging variables are found then no changes are made.
startTime A character giving the time of planting, in the POSIXct format timeFormat, to be subtracted from imageTimes in recalculating timeAfterStart. If startTime is NULL then timeAfterStart is not recalculated.
timeFormat A character giving the POSIXct format of characters containing times, in particular imageTimes and startTime.
plotImagetimes A logical indicating whether a plot of the imaging times against the recalculated Time.After.Planting..d. is to be produced. It aids in checking Time.After.Planting..d. and what occurred in imaging the plants.
... allows for arguments to be passed to plotImagetimes. However, if intervals is passed an error will occur; use timeAfterStart instead.
Value
A data.frame containing the data.
Author(s)
<NAME>
See Also
as.POSIXct, calcTimes, plotImagetimes
Examples
filename <- system.file("extdata/rawdata.xlsx", package = "growthPheno", mustWork = TRUE)
raw.dat <- importExcel(file = filename, startTime = "2015-02-11 0:00 AM")
camview.labels <- c("SF0", "SL0", "SU0", "TV0")
names(camview.labels) <- c("RGB_Side_Far_0", "RGB_Side_Lower_0", "RGB_Side_Upper_0", "RGB_TV_0")
filename <- system.file("extdata/raw19datarow.csv", package = "growthPheno", mustWork = TRUE)
raw.19.dat <- suppressWarnings(importExcel(file = filename, cartId = "Snapshot.ID.Tags", timeFormat = "%d/%m/%Y %H:%M", labsCamerasViews = camview.labels, plotImagetimes = FALSE))
intervalGRaverage Calculates the growth rates for a specified time interval by taking weighted averages of growth rates for times within the interval
Description
Using previously calculated growth rates over time, calculates the Absolute Growth Rates for a specified interval using the weighted averages of AGRs for each time point in the interval (AGR) and the Relative Growth Rates for a specified interval using the weighted geometric means of RGRs for each time point in the interval (RGR).
Note: this function is soft deprecated and may be removed in future versions. Use byIndv4Intvl_GRsAvg.
Usage intervalGRaverage(responses, individuals = "Snapshot.ID.Tag", which.rates = c("AGR","RGR"), suffices.rates=c("AGR","RGR"), times = "Days", start.time, end.time, suffix.interval, data, sep=".", na.rm=TRUE) Arguments responses A character giving the names of the responses for which there are columns in data that contain the growth rates that are to be averaged. The names of the growth rates should have either AGR or RGR appended to the responses names. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). which.rates A character giving the growth rates that are to be averaged to obtain growth rates for an interval. It should be a combination of one or more of "AGR" and "RGR". suffices.rates A character giving the suffices to be appended to response to form the names of the columns containing the calculated the growth rates and in which growth rates are to be stored. Their elements will be matched with those of which.rates. times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used in calculating growth rates and, if a factor or character, the values should be numerics stored as characters. start.time A numeric giving the times, in terms of values in times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the start of the interval for which the growth rate is to be calculated. end.time A numeric giving the times, in terms of values times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the end of the interval for which the growth rate is to be calculated. suffix.interval A character giving the suffix to be appended to response.suffices.rates to form the names of the columns containing the calculated the growth rates. data A data.frame containing the columns from which the growth rates are to be calculated. sep A character giving the separator to use when the levels of individuals are combined. This is needed to avoid using a character that occurs in a factor to delimit levels when the levels of individuals are combined to identify subsets. na.rm A logical indicating whether NA values should be stripped before the calcula- tion of weighted means proceeds. Details The AGR for an interval is calculated as the weighted mean of the AGRs for times within the interval. The RGR is calculated as the weighted geometric mean of the RGRs for times within the interval; in fact the exponential is taken of the weighted means of the logs of the RGRs. The weights are obtained from the times. They are taken as the sum of half the time subintervals before and after each time, except for the end points; the end points are taken to be the subintervals at the start and end of the interval. Value A data.frame with the growth rates. The name of each column is the concatenation of (i) one of responses, (ii) one of AGR, PGR or RGR, or the appropriate element of suffices.rates, and (iii) suffix.interval, the three components being separated by full stops. 
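To make the weighting described under Details concrete, the following sketch uses hypothetical times and growth rates; it follows the description of the weights given above and is not an extract of the internal code of intervalGRaverage:

## hypothetical times within a 31-to-35 interval and the AGRs and RGRs at those times
times <- c(31, 32, 33, 35)
agr   <- c(2.0, 2.4, 2.6, 3.0)
rgr   <- c(0.08, 0.10, 0.11, 0.12)
gaps  <- diff(times)                             # subintervals between the times: 1 1 2
wts   <- c(gaps[1],                              # start point: the first subinterval
           (head(gaps, -1) + tail(gaps, -1))/2,  # interior points: half the subinterval on each side
           gaps[length(gaps)])                   # end point: the last subinterval
wts                                              # 1.0 1.0 1.5 2.0
weighted.mean(agr, wts)                          # weighted mean of the AGRs: the interval AGR
exp(weighted.mean(log(rgr), wts))                # weighted geometric mean of the RGRs: the interval RGR

intervalGRaverage applies this scheme to the growth-rate columns nominated via responses, which.rates and suffices.rates, separately for each individual.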
Author(s) <NAME> See Also intervalGRdiff, intervalWUI, splitValueCalculate, getTimesSubset, GrowthRates, splitSplines, splitContGRdiff Examples data(exampleData) longi.dat <- splitSplines(data = longi.dat, response = "PSA", response.smoothed = "sPSA", x="xDAP", individuals = "Snapshot.ID.Tag", df = 4, deriv=1, suffices.deriv = "AGRdv", extra.rate = c(RGRdv = "RGR")) sPSA.GR <- intervalGRaverage(data = longi.dat, responses = "sPSA", times = "DAP", which.rates = c("AGR","RGR"), suffices.rates = c("AGRdv","RGRdv"), start.time = 31, end.time = 35, suffix.interval = "31to35") intervalGRdiff Calculates the growth rates for a specified time interval Description Using the values of the responses, calculates the specified combination of the Absolute Growth Rates using differences (AGR), the Proportionate Growth Rates (PGR) and Relative Growth Rates using log differences (RGR) between two nominated time points. Note: this function is soft deprecated and may be removed in future versions. Use byIndv4Intvl_GRsDiff. Usage intervalGRdiff(responses, individuals = "Snapshot.ID.Tag", which.rates = c("AGR","PGR","RGR"), suffices.rates=NULL, times = "Days", start.time, end.time, suffix.interval, data) Arguments responses A character giving the names of the columns in data from which the growth rates are to be calculated. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). which.rates A character giving the growth rates that are to be calculated. It should be a combination of one or more of "AGR", "PGR" and "RGR". suffices.rates A character giving the characters to be appended to the names of the responses in constructing the names of the columns containing the calculated growth rates. The order of the suffices in suffices.rates should correspond to the order of the elements of which.rates. times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used in calculating growth rates and, if a factor or character, the values should be numerics stored as characters. start.time A numeric giving the times, in terms of values in times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the start of the interval for which the growth rate is to be calculated. end.time A numeric giving the times, in terms of values times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the end of the interval for which the growth rate is to be calculated. suffix.interval A character giving the suffix to be appended to response to form the names of the columns containing the calculated the growth rates. data A data.frame containing the column from which the growth rates are to be calculated. Details The AGR is calculated as the difference between the values of response at the end.time and start.time divided by the difference between end.time and start.time. The PGR is calcu- lated as the ratio of response at the end.time to that at start.time and the ratio raised to the power of the reciprocal of the difference between end.time and start.time. The RGR is calcu- lated as the log of the PGR and so is equal to the difference between the logarithms of response at the end.time and start.time divided by the difference between end.time and start.time. 
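The interval calculations reduce to differences and ratios of the two nominated observations; a minimal sketch with hypothetical values (not output from the package) for an interval from DAP 31 to DAP 35:

## hypothetical response values for one individual at the start and end of the interval
start.time <- 31; end.time <- 35
y.start <- 120; y.end <- 300
(y.end - y.start) / (end.time - start.time)      # interval AGR: 45
(y.end / y.start)^(1 / (end.time - start.time))  # interval PGR: approx. 1.26
log(y.end / y.start) / (end.time - start.time)   # interval RGR (the log of the PGR): approx. 0.23

intervalGRdiff performs this computation for each individual, using that individual's observations at start.time and end.time.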
Value
A data.frame with the growth rates. The name of each column is the concatenation of (i) one of responses, (ii) one of AGR, PGR or RGR, or the appropriate element of suffices.rates, and (iii) suffix.interval, the three components being separated by full stops.
Author(s)
<NAME>
See Also
intervalGRaverage, intervalWUI, getTimesSubset, GrowthRates, splitSplines, splitContGRdiff
Examples
data(exampleData)
sPSA.GR <- intervalGRdiff(responses = "sPSA", times = "DAP", which.rates = c("AGR","RGR"), start.time = 31, end.time = 35, suffix.interval = "31to35", data = longi.dat)
intervalPVA.data.frame Selects a subset of variables using Principal Variable Analysis (PVA), based on the observed values within a specified time interval
Description
Principal Variable Analysis (PVA) (Cumming and Wooff, 2007) selects a subset from a set of variables such that the variables in the subset are as uncorrelated as possible, in an effort to ensure that all aspects of the variation in the data are covered. Here, all observations in a specified time interval are used for calculating the correlations on which the selection is based.
Usage
## S3 method for class 'data.frame'
intervalPVA(obj, responses, times = "Days", start.time, end.time, nvarselect = NULL, p.variance = 1, include = NULL, plot = TRUE, ...)
Arguments
obj A data.frame containing the columns of variables from which the selection is to be made.
responses A character giving the names of the columns in data from which the variables are to be selected.
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used to identify the subset and, if a factor or character, the values should be numerics stored as characters.
start.time A numeric giving the time, in terms of values in times, at which the time interval begins; observations at this time and up to and including end.time will be included.
end.time A numeric giving the time, in terms of values in times, at the end of the interval; observations after this time will not be included.
nvarselect A numeric specifying the number of variables to be selected, which includes those listed in include. If nvarselect = 1, as many variables are selected as are needed to satisfy p.variance.
p.variance A numeric specifying the minimum proportion of the variance that the selected variables must account for.
include A character giving the names of the columns in data for the variables whose selection is mandatory.
plot A logical indicating whether a plot of the cumulative proportion of the variance explained is to be produced.
... allows passing of arguments to other functions.
Details
The variable that is most correlated with the other variables is selected first for inclusion. The partial correlation for each of the remaining variables, given the first selected variable, is calculated and the most correlated of these variables is selected for inclusion next. Then the partial correlations are adjusted for the second included variable. This process is repeated until the specified criteria have been satisfied. The possibilities are to: 1. use the default (nvarselect = NULL and p.variance = 1) to select all variables in increasing order of the amount of information they provide; 2. select exactly nvarselect variables; 3. select just enough variables, up to a maximum of nvarselect variables, to explain at least p.variance*100 per cent of the total variance.
Value
A data.frame giving the results of the variable selection.
It will contain the columns Variable, Selected, h.partial, Added.Propn and Cumulative.Propn.
Author(s)
<NAME>
References
Cumming, <NAME>. and <NAME> (2007) Dimension reduction via principal variables. Computational Statistics and Data Analysis, 52, 550-565.
See Also
PVA, rcontrib
Examples
data(exampleData)
longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1)
longi.dat <- within(longi.dat,
              { Max.Height <- pmax(Max.Dist.Above.Horizon.Line.SV1, Max.Dist.Above.Horizon.Line.SV2)
                Density <- PSA/Max.Height
                PSA.SV = (PSA.SV1 + PSA.SV2) / 2
                Image.Biomass = PSA.SV * (PSA.TV^0.5)
                Centre.Mass <- (Center.Of.Mass.Y.SV1 + Center.Of.Mass.Y.SV2) / 2
                Compactness.SV = (Compactness.SV1 + Compactness.SV2) / 2
              })
responses <- c("PSA","PSA.SV","PSA.TV", "Image.Biomass", "Max.Height","Centre.Mass", "Density", "Compactness.TV", "Compactness.SV")
results <- intervalPVA(longi.dat, responses, times = "DAP", start.time = "31", end.time = "31", p.variance=0.9, plot = FALSE)
intervalValueCalculate Calculates a single value that is a function of an individual's values for a response over a specified time interval
Description
Splits the values of a response into subsets corresponding to individuals and applies a function that calculates a single value from each individual's observations during a specified time interval. It includes the ability to calculate the observation number that is closest to the calculated value of the function and the associated values of a factor or numeric.
Note: this function is soft deprecated and may be removed in future versions. Use byIndv4Intvl_ValueCalc.
Usage
intervalValueCalculate(response, weights=NULL, individuals = "Snapshot.ID.Tag", FUN = "max", which.obs = FALSE, which.values = NULL, times = "Days", start.time=NULL, end.time=NULL, suffix.interval=NULL, data, sep=".", na.rm=TRUE, ...)
Arguments
response A character giving the name of the column in data from which the values of FUN are to be calculated.
weights A character giving the name of the column in data containing the weights to be supplied as w to FUN.
individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit).
FUN A character giving the name of the function that calculates the value for each subset.
which.obs A logical indicating whether or not to determine the observation number corresponding to the observed value that is closest to the value of the function, in addition to the value of the function itself. That is, FUN need not return an observed value of the response, e.g. quantile.
which.values A character giving the name of the factor or numeric whose values are associated with the response values and whose value is to be returned for the observation number whose response value corresponds to the observed value closest to the value of the function. That is, FUN need not return an observed value of the response, e.g. quantile. In the case of multiple observed response values satisfying this condition, the value of the which.values vector for the first of these is returned.
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used in calculating growth rates and, if a factor or character, the values should be numerics stored as characters.
start.time A numeric giving the times, in terms of levels of times.factor, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the start of the interval for which a value is to be calculated. If start.time is NULL, the interval will start with the first observation. In the case of multiple observed response values satisfying this condition, the first is returned. end.time A numeric giving the times, in terms of levels of times.factor, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the end of the interval for which a value is to be calculated. If end.time is NULL, the interval will end with the last observation. suffix.interval A character giving the suffix to be appended to response to form the name of the column containing the calculated values. If it is NULL then nothing will be appended. data A data.frame containing the column from which the function is to be calcu- lated. na.rm A logical indicating whether NA values should be stripped before the calcula- tion proceeds. sep A character giving the separator to use when the levels of individuals are combined. This is needed to avoid using a character that occurs in a factor to delimit levels when the levels of individuals are combined to identify subsets. ... allows for arguments to be passed to FUN. Value A data.frame, with the same number of rows as there are individuals, containing a column for the individuals and a column with the values of the function for the individuals. It is also pos- sible to determine observaton numbers or the values of another column in data for the response values that are closest to the FUN results, using either or both of which.obs and which.values. If which.obs is TRUE, a column with observation numbers is included in the data.frame. If which.values is set to the name of a factor or a numeric,a column containing the levels of that factor or the values of that numeric is included in the data.frame. The name of the column with the values of the function will be result of concatenating the response, FUN and, if it is not NULL, suffix.interval, each separated by a full stop. If which.obs is TRUE, the column name for the obervations numbers will have .obs added after FUN into the column name for the function values; if which.values is specified, the column name for these values will have a full stop followed by which.values added after FUN into the column name for the function values. Author(s) <NAME> See Also intervalGRaverage, intervalGRdiff, intervalWUI, splitValueCalculate, getTimesSubset Examples data(exampleData) sPSA.max <- intervalValueCalculate(response = "sPSA", times = "DAP", start.time = 31, end.time = 35, suffix.interval = "31to35", data = longi.dat) AGR.max.dat <- intervalValueCalculate(response = "sPSA.AGR", times = "DAP", FUN="max", start.time = 31, end.time = 35, suffix.interval = "31to35", which.values = "DAP", which.obs = TRUE, data=longi.dat) intervalWUI Calculates water use indices (WUI) over a specified time interval to a data.frame Description Calculates the Water Use Index (WUI) between two time points for a set of responses. Note: this function is soft deprecated and may be removed in future versions. Use byIndv4Intvl_WaterUse. 
Usage intervalWUI(responses, water.use = "Water.Use", individuals = "Snapshot.ID.Tag", times = "Days", start.time, end.time, suffix.interval = NULL, data, include.total.water = FALSE, na.rm = FALSE) Arguments responses A character giving the names of the columns in data from which the growth rates are to be calculated. water.use A character giving the names of the column in data which contains the water use values. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used identifying the intervals and, if a factor or character, the values should be numerics stored as characters. start.time A numeric giving the times, in terms of values in times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the start of the interval for which the WUI is to be calculated. end.time A numeric giving the times, in terms of values times, that will give a single value for each Snapshot.ID.Tag and that will be taken as the observation at the end of the interval for which the WUI is to be calculated. suffix.interval A character giving the suffix to be appended to response to form the names of the columns containing the calculated the growth rates. data A data.frame containing the column from which the growth rates are to be calculated. include.total.water A logical indicating whether or not to include a column in the results for the total of water.use for the interval for each individual. na.rm A logical indicating whether NA values should be stripped before the calcula- tion proceeds. Details The WUI is calculated as the difference between the values of a response at the end.time and start.time divided by the sum of the water use after start.time until end.time. Thus, the water use up to start.time is not included. Value A data.frame containing the WUIs, the name of each column being the concatenation of one of responses, WUI and, if not NULL, suffix.interval, the three components being separated by a full stop. If the total water is to be included, the name of the column will be the concatenation of water.use, Total and the suffix, each separated by a full stop(‘.’). Author(s) <NAME> See Also intervalGRaverage, intervalGRdiff, splitValueCalculate, getTimesSubset, GrowthRates Examples data(exampleData) PSA.WUI <- intervalWUI(response = "PSA", water.use = "WU", times = "DAP", start.time = 31, end.time = 35, suffix = "31to35", data = longi.dat, include.total.water = TRUE) is.smooths.frame Tests whether an object is of class smooths.frame Description A single-line function that tests whether an object is of class smooths.frame. Usage is.smooths.frame(object) Arguments object An object to be tested. Value A logical. 
Author(s)
<NAME>
See Also
validSmoothsFrame, as.smooths.frame
Examples
dat <- read.table(header = TRUE, text = "
Type TunePar TuneVal Tuning Method ID DAP PSA sPSA
NCSS df 4 df-4 direct 045451-C 28 57.446 51.18456
NCSS df 4 df-4 direct 045451-C 30 89.306 87.67343
NCSS df 7 df-7 direct 045451-C 28 57.446 57.01589
NCSS df 7 df-7 direct 045451-C 30 89.306 87.01316
")
dat[1:7] <- lapply(dat[1:7], factor)
dat <- as.smooths.frame(dat, individuals = "ID", times = "DAP")
is.smooths.frame(dat)
validSmoothsFrame(dat)
longitudinalPrime Selects a set of variables to be retained in a data frame of longitudinal data
Description
Forms the prime traits by selecting a subset of the traits in a data.frame of imaging data produced by the Lemna Tec Scanalyzer. The imaging traits to be retained are specified using the traits and labsCamerasViews arguments. Some imaging traits are divided by 1000 to convert them from pixels to kilopixels. Also added are factors and explanatory variates that might be of use in an analysis.
Usage
longitudinalPrime(data, cartId = "Snapshot.ID.Tag", imageTimes = "Snapshot.Time.Stamp", timeAfterStart = "Time.after.Planting..d.", PSAcolumn = "Projected.Shoot.Area..pixels.", idcolumns = c("Genotype.ID","Treatment.1"), traits = list(all = c("Area", "Boundary.Points.To.Area.Ratio", "Caliper.Length", "Compactness", "Convex.Hull.Area"), side = c("Center.Of.Mass.Y", "Max.Dist.Above.Horizon.Line")), labsCamerasViews = list(all = c("SV1", "SV2", "TV"), side = c("SV1", "SV2")), smarthouse.lev = NULL, calcWaterLoss = TRUE, pixelsPERcm)
Arguments
data A data.frame containing the columns specified by cartId, imageTimes, timeAfterStart, PSAcolumn, idcolumns, traits and cameras along with the following columns: Smarthouse, Lane, Position, Weight.Before, Weight.After, Water.Amount, Projected.Shoot.Area..pixels. The defaults for the arguments to longitudinalPrime require a data.frame containing the following columns, although not necessarily in the order given here: Smarthouse, Lane, Position, Weight.Before, Weight.After, Water.Amount, Projected.Shoot.Area..pixels., Area.SV1, Area.SV2, Area.TV, Boundary.Points.To.Area.Ratio.SV1, Boundary.Points.To.Area.Ratio.SV2, Boundary.Points.To.Area.Ratio.TV, Caliper.Length.SV1, Caliper.Length.SV2, Caliper.Length.TV, Compactness.SV1, Compactness.SV2, Compactness.TV, Convex.Hull.Area.SV1, Convex.Hull.Area.SV2, Convex.Hull.Area.TV, Center.Of.Mass.Y.SV1, Center.Of.Mass.Y.SV2, Max.Dist.Above.Horizon.Line.SV1, Max.Dist.Above.Horizon.Line.SV2.
cartId A character giving the name of the column that contains the unique Id for each cart.
imageTimes A character giving the name of the column that contains the time that each cart was imaged.
timeAfterStart A character giving the name of the column that contains the time after some nominated starting time e.g. the number of days after planting.
PSAcolumn A character giving the name of the column that contains the projected shoot area.
idcolumns A character vector giving the names of the columns that identify differences between the plants or carts e.g. Genotype.ID, Treatment.1, Treatment.2.
traits A character or a list whose components are characters. Each character gives the names of the columns for imaging traits whose values are required for each of the camera-view combinations given in the corresponding list component of labsCamerasViews. If labsCamerasViews or a component of labsCamerasViews is NULL, then the contents of traits or the corresponding component of traits are merely treated as the names of columns to be retained.
labsCamerasViews A character or a list whose components are characters. Each character gives the labels of the camera-view combinations for which is required values of each of the imaging traits in the corresponding character of traits. It is as- sumed that the camera-view labels are appended to the trait names and separated from the trait names by a full stop (.). If labsCamerasViews or a component of labsCamerasViews is NULL, then the contents of the traits or the corespond- ing component of traits are merely treated as the names of columns to be retained. smarthouse.lev A character vector giving the levels to use for the Smarthouse factor. If NULL then the unique values in Smarthouse will be used. calcWaterLoss A logical indicating whether to calculate the Water.Loss. If it is FALSE, Water.Before, Water.After and Water.Amount will not be in the returned data.frame. They can be copied across by listing them in a component of traits and set the cor- responding component of cameras to NULL. pixelsPERcm A numeric giving the number of pixels per cm for the images. No longer used. Details The columns are copied from data, except for those columns in the list under Value that have ‘(calculated)’ appended. Value A data.frame containing the columns specified by cartId, imageTimes, timeAfterStart, idcolumns, traits and cameras. The defaults will result in the following columns: 1. Smarthouse: factor with levels for the Smarthouse 2. Lane: factor for lane number in a smarthouse 3. Position: factor for east/west position in a lane 4. Days: factor for the number of Days After Planting (DAP) 5. cartId: unique code for each cart 6. imageTimes: time at which an image was taken in POSIXct format 7. Reps: factor indexing the replicates for each combination of the factors in idcolumns (calculated) 8. xPosn: numeric for the Positions within a Lane (calculated) 9. Hour: hour of the day, to 2 decimal places, at which the image was taken (calculated) 10. xDays: numeric for the DAP that is centred by subtracting the mean of the unique days (cal- culated) 11. idcolumns: the columns listed in idcolumns that have been converted to factors 12. Weight.Before: weight of the pot before watering (only if calcWaterLoss is TRUE) 13. Weight.After: weight of the pot after watering (only if calcWaterLoss is TRUE) 14. Water.Amount: the weight of the water added (= Water.After - Water.Before) (calculated) 15. Water.Loss: the difference between Weight.Before for the current imaging and the Weight.After for the previous imaging (calculated unless calcWaterLoss is FALSE) 16. Area: the Projected.Shoot.Area..pixels. divided by 1000 (calculated) 17. Area.SV1: the Projected.Shoot.Area from Side View 1 divided by 1000 (calculated) 18. Area.SV2: the Projected.Shoot.Area from Side View 2 divided by 1000 (calculated) 19. Area.TV: the Projected.Shoot.Area from Top View divided by 1000 (calculated) 20. Boundary.To.Area.Ratio.SV1 21. Boundary.To.Area.Ratio.SV2 22. Boundary.To.Area.Ratio.TV 23. Caliper.Length.SV1 24. Caliper.Length.SV2 25. Caliper.Length.TV 26. Compactness.SV1 from Side View 1 27. Compactness.SV2 from Side View 2 28. Compactness.TV: from Top View 29. Convex.Hull.Area.SV1: area of Side View 1 Convex Hull divided by 1000 (calculated) 30. Convex.Hull.Area.SV2: area of Side View 2 Convex Hull divided by 1000 (calculated) 31. Convex.Hull.TV: Convex.Hull.Area.TV divided by 1000 (calculated) 32. Center.Of.Mass.Y.SV1: Centre of Mass from Side View 1 33. Center.Of.Mass.Y.SV2: Centre of Mass from Side View 2 34. 
Max.Dist.Above.Horizon.Line.SV1: the Max.Dist.Above.Horizon.Line.SV1 converted to cm using pixelsPERcm (calculated) 35. Max.Dist.Above.Horizon.Line.SV2: the Max.Dist.Above.Horizon.Line.SV2 converted to cm using pixelsPERcm (calculated)
Author(s)
<NAME>
Examples
data(exampleData)
longiPrime.dat <- longitudinalPrime(data=raw.dat, smarthouse.lev=1)
longiPrime.dat <- longitudinalPrime(data=raw.dat, smarthouse.lev=1, traits = list(a = "Area", c = "Compactness"), labsCamerasViews = list(all = c("SV1", "SV2", "TV"), t = "TV"))
longiPrime.dat <- longitudinalPrime(data=raw.dat, smarthouse.lev=1, traits = c("Area.SV1", "Area.SV2", "Area.TV", "Compactness.TV"), labsCamerasViews = NULL)
longiPrime.dat <- longitudinalPrime(data=raw.dat, smarthouse.lev=1, calcWaterLoss = FALSE, traits = list(img = c("Area", "Compactness"), "Water.Amount"), labsCamerasViews = list(all = c("SV1", "SV2", "TV"), NULL))
plotAnom Identifies anomalous individuals and produces profile plots without them and with just them
Description
Uses byIndv4Intvl_ValueCalc and the function anom to identify anomalous individuals in longitudinal data. The user can elect to print the anomalous individuals, a profile plot without the anomalous individuals and/or a profile plot with only the anomalous individuals. The plots are produced using ggplot. The plot can be facetted so that a grid of plots is produced.
Usage
plotAnom(data, response="sPSA", individuals="Snapshot.ID.Tag", times = "DAP", x = NULL, breaks=seq(12, 36, by=2), vertical.line=NULL, groupsFactor=NULL, lower=NULL, upper=NULL, start.time=NULL, end.time=NULL, suffix.interval=NULL, columns.retained=c("Snapshot.ID.Tag", "Smarthouse", "Lane", "Position", "Treatment.1", "Genotype.ID"), whichPrint=c("anomalous","innerPlot","outerPlot"), na.rm=TRUE, ...)
Arguments
data A data.frame containing the data to be tested and plotted.
response A character specifying the response variable that is to be tested and plotted on the y-axis.
individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit).
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. If not a numeric, it will be converted to a numeric and used to provide the values to be plotted on the x-axis. If a factor or character, the values should be numerics stored as characters.
x A character specifying a variable, or a function of variables, to be plotted on the x-axis. If NULL, it will be set to the value of times, which it can be assumed will be converted to a numeric.
breaks A numeric vector giving the breaks to be plotted on the x-axis scale.
vertical.line A numeric giving the position on the x-axis at which a vertical line is to be drawn. If NULL, no line is drawn.
groupsFactor A factor giving the name of a factor that defines groups of individuals between which the test for anomalous individuals can be varied by setting values for one or more of lower, upper, start.time and end.time to be NULL, a single value or a set of values whose number equals the number of levels of groupsFactor. If NULL or only a single value is supplied, the test is the same for all individuals.
lower A numeric such that values in response below it are considered to be anomalous. If NULL, there is no testing for values below the lower bound.
upper A numeric such that values in response above it are considered to be anoma- lous. If NULL, there is no testing for values above the upper bound. start.time A numeric giving the start of the time interval, in terms of a level of times, during which testing for anomalous values is to occur. If NULL, the interval will start with the first observation. end.time A numeric giving the end of the time interval, in terms of a level of times, during which testing for anomalous values is to occur. If NULL, the interval will end with the last observation. suffix.interval A character giving the suffix to be appended to response to form the name of the column containing the calculated values. If it is NULL then nothing will be appended. columns.retained A character giving the names of the columns in data that are to be retained in the data.frame of anomalous individuals. whichPrint A character indicating what is to be printed. If anomalous is included, the columns.retained are printed for the anomalous individuals. na.rm A logical indicating whether NA values should be stripped before the testing proceeds. ... allows for arguments to be passed to plotLongitudinal. Value A list with three components: 1. data, a data frame resulting from the merge of data and the logical identifying whether or not an individual is anomalous; 2. innerPlot, an object of class ggplot storing the profile plot of the individuals that are not anomalous; 3. outerPlot, an object of class ggplot storing the profile plot of only the individuals that are anomalous. The name of the column indicating anomalous individuals will be result of concatenating the response, anom and, if it is not NULL, suffix.interval, each separated by a full stop. The ggplot objects can be plotted using print and can be modified by adding ggplot functions before printing. If there are no observations to plot, NULL will be returned for the plot. Author(s) <NAME> See Also anom, byIndv4Intvl_ValueCalc, ggplot. Examples data(exampleData) anomalous <- plotAnom(longi.dat, response="sPSA.AGR", times = "xDAP", lower=2.5, start.time=40, vertical.line=29, breaks=seq(28, 42, by=2), whichPrint=c("innerPlot"), y.title="sPSA AGR") plotCorrmatrix Calculates and plots correlation matrices for a set of responses Description Having calculated the correlations a heat map indicating the magnitude of the correlations is pro- duced using ggplot. In this heat map, the darker the red in a cell then the closer the correlation is to -1, while the deeper the blue in the cell, then the closer the correlation is to 1. A matrix plot of all pairwise combinations of the variables can be produced. The matrix plot contains a scatter diagram for each pair, as well as the value of the correlation coefficient. The argument pairs.sets can be used to restrict the pairs in the matrix plot to those combinations within each set. Usage plotCorrmatrix(data, responses, which.plots = c("heatmap","matrixplot"), title = NULL, labels = NULL, labelSize = 4, pairs.sets = NULL, show.sig = FALSE, axis.text.size = 20, ggplotFuncs = NULL, printPlot = TRUE, ...) Arguments data A data.frame containing the columns of variables to be correlated. responses A character giving the names of the columns in data containing the variables to be correlated. which.plots A character specifying the plots of the correlations to be produced. The pos- sibilities are one or both of heatmap and matrixplot. title Title for the plots. labels A character specifying the labels to be used in the plots. 
If labels is NULL, responses is used for the labels.
labelSize A numeric giving the size of the labels in the matrixplot.
pairs.sets A list each of whose components is a numeric giving the positions of the variable names in responses that are to be included in the set. All pairs of variables in this pairs.set will be included in a matrixplot.
show.sig A logical indicating whether or not to give asterisks on the heatmap indicating that the correlations are significantly different from zero.
axis.text.size A numeric giving the size of the labels on the axes of the heatmap.
ggplotFuncs A list, each element of which contains the results of evaluating a ggplot function. It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object.
printPlot A logical indicating whether or not to print the plot.
... allows passing of arguments to other functions; not used at present.
Details
The correlations and their p-values are produced using rcorr from the Hmisc package. The heatmap is produced using ggplot and the matrixplot is produced using GGally.
Value
The heatmap plot, if produced, as an object of class "ggplot", which can be plotted using print; otherwise NULL is returned.
Author(s)
<NAME>
See Also
rcorr, GGally, ggplot.
Examples
data(exampleData)
longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1)
longi.dat <- within(longi.dat,
              { Max.Height <- pmax(Max.Dist.Above.Horizon.Line.SV1, Max.Dist.Above.Horizon.Line.SV2)
                Density <- PSA/Max.Height
                PSA.SV = (PSA.SV1 + PSA.SV2) / 2
                Image.Biomass = PSA.SV * (PSA.TV^0.5)
                Centre.Mass <- (Center.Of.Mass.Y.SV1 + Center.Of.Mass.Y.SV2) / 2
                Compactness.SV = (Compactness.SV1 + Compactness.SV2) / 2
              })
responses <- c("PSA","PSA.SV","PSA.TV", "Image.Biomass", "Max.Height","Centre.Mass", "Density", "Compactness.TV", "Compactness.SV")
plotCorrmatrix(longi.dat, responses, pairs.sets=list(c(1:4),c(5:7)))
plotDeviationsBoxes Produces boxplots of the deviations of the observed values from the smoothed values over values of x.
Description
Produces boxplots of the deviations of the observed values from the smoothed values over values of x.
Usage
plotDeviationsBoxes(data, observed, smoothed, x.factor, x.title = NULL, y.titles = NULL, facet.x = ".", facet.y = ".", facet.labeller = NULL, facet.scales = "fixed", angle.x = 0, deviations.plots = "absolute", ggplotFuncs = NULL, printPlot = TRUE, ...)
Arguments
data A data.frame containing the observed and smoothed values from which the deviations are to be computed.
observed A character specifying the response variable for which the observed values are supplied.
smoothed A character specifying the smoothed response variable, corresponding to observed, for which values are supplied.
x.factor A character giving the factor to be plotted on the x-axis.
x.title Title for the x-axis. If NULL then set to x.
y.titles A character giving the titles for the y-axis, one for each plot specified by deviations.plots.
facet.x A data.frame giving the variable to be used to form subsets to be plotted in separate columns of plots. Use "." if a split into columns is not wanted. For which.plots set to methodcompare or dfcompare facet.x.pf is ignored.
facet.y A data.frame giving the variable to be used to form subsets to be plotted in separate rows of plots. Use "." if a split into rows is not wanted.
facet.labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot.
facet.scales A character specifying whether the scales are shared across all facets of a plot ("fixed"), or do they vary across rows (the default, "free_x"), columns ("free_y"), or both rows and columns ("free")? angle.x A numeric between 0 and 360 that gives the angle of the x-axis text to the x- axis. It can also be set by supplying, in ggplotFuncs, a theme function from ggplot2. deviations.plots A character specifying whether absolute and/or relative deviations are to be plotted. ggplotFuncs A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object for plotting. printPlot A logical indicating whether or not to print the plots. ... allows passing of arguments to ggplot. Value A list whose components are named absolute and relative; a component will contain an object of class "ggplot" when the plot has been requested using the deviations.plots argument and a NULL otherwise. The objects can be plotted using print. Author(s) <NAME> See Also plotMedianDeviations, probeSmoothing, ggplot. Examples data(exampleData) plotDeviationsBoxes(longi.dat, observed = "PSA", smoothed = "sPSA", x.factor="DAP", facet.x.pf = ".", facet.y= ".", df =5) plotImagetimes Plots the position of a time within an interval against the interval for each cart Description Uses ggplot to produce a plot of the time position within an interval against the interval. For example, one might plot the hour of the day carts are imaged against the days after planting (or some other number of days after an event). A line is produced for each value of groupVariable and the colour is varied according to the value of the colourVariable. Each Smarthouse is plotted separately. It aids in checking whether delays occurred in imaging the plants. Usage plotImagetimes(data, intervals = "Time.after.Planting..d.", timePositions = "Hour", groupVariable = "Snapshot.ID.Tag", colourVariable = "Lane", ggplotFuncs = NULL, printPlot = TRUE) Arguments data A data.frame containing any columns specified by intervals, timePositions, groupVariable and colourVariable. intervals A character giving the name of the column in data containing, as a numeric or a factor, the calculated times to be plotted on the x-axis. For example, it could be the days after planting or treatment. timePositions A character giving the name of the column in data containing, as a numeric, the value of the time position within an interval (for example, the time of imag- ing during the day expressed in hours plus a fraction of an hour). groupVariable A character giving the name of the column in data containing the variable to be used to group the plotting. colourVariable A character giving the name of the column in data containing the variable to be used to colour the plotting. ggplotFuncs A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object. printPlot A logical indicating whether or not to print the plot. Value An object of class "ggplot", which can be plotted using print. Author(s) <NAME> See Also ggplot, calcTimes. 
Examples data(exampleData) library(ggplot2) longi.dat <- calcTimes(longi.dat, imageTimes = "Snapshot.Time.Stamp", timePositions = "Hour") plotImagetimes(data = longi.dat, intervals = "DAP", timePositions = "Hour", ggplotFuncs=list(scale_colour_gradient(low="grey20", high="black"), geom_line(aes(group=Snapshot.ID.Tag, colour=Lane)))) plotLongitudinal Produces profile plots of longitudinal data for a set of individuals Description Produce profile plots of longitudinal data for a response using ggplot. A line is drawn for the data for each individual and the plot can be faceted so that a grid of plots is produced. For each facet a line for the medians over time can be added, along with the vaue of the outer whiskers (median +/- 1.5 * IQR). Usage plotLongitudinal(data, x = "xDays+44.5", response = "Area", individuals = "Snapshot.ID.Tag", title = NULL, x.title = "Days", y.title = "Area (kpixels)", facet.x = ".", facet.y = ".", labeller = NULL, colour = "black", colour.column = NULL, colour.values = NULL, alpha = 0.1, addMediansWhiskers = FALSE, xname = "xDays", ggplotFuncs = NULL, printPlot = TRUE) Arguments data A data.frame containing the data to be plotted. x A character giving the variable to be plotted on the x-axis. response A character specifying the response variable that is to be plotted on the y-axis. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). x.title Title for the x-axis. y.title Title for the y-axis. title Title for the plot. facet.x A data.frame giving the variable to be used to form subsets to be plotted in separate columns of plots. Use "." if a split into columns is not wanted. facet.y A data.frame giving the variable to be used to form subsets to be plotted in separate rows of plots. Use "." if a split into rows is not wanted. labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot. colour A character specifying a single colour to use in drawing the lines for the pro- files. If colouring according to the values of a variable is required then use colour.column. colour.column A character giving the name of a column in data over whose values the colours of the lines are to be varied. The colours can be specified using colour.values. colour.values A character vector specifying the values of the colours to use in drawing the lines for the profiles. If this is a named vector, then the values will be matched based on the names. If unnamed, values will be matched in order (usually al- phabetical) with the limits of the scale. alpha A numeric specifying the degrees of transparency to be used in plotting. It is a ratio in which the denominator specifies the number of points (or lines) that must be overplotted to give a solid cover. addMediansWhiskers A logical indicating whether plots over time of the medians and outer whiskers are to be added to the plot. The outer whiskers are related to the whiskers on a box-and-whisker and are defined as the median plus (and minus) 1.5 times the interquartile range (IQR). Points lying outside the whiskers are considered to be potential outliers. xname A character giving the name of the numeric that contains the values of the predictor variable from which x is derived, it being that x may incorporate an expression. ggplotFuncs A list, each element of which contains the results of evaluating a ggplot func- tion. 
It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object.
printPlot A logical indicating whether or not to print the plot.
Value
An object of class "ggplot", which can be plotted using print.
Author(s)
<NAME>
See Also
ggplot, labeller.
Examples
data(exampleData)
plotLongitudinal(data = longi.dat, x = "xDAP", response = "sPSA")
plt <- plotLongitudinal(data = longi.dat, x = "xDAP", response = "sPSA",
                        x.title = "DAP", y.title = "sPSA (kpixels)",
                        facet.x = "Treatment.1", facet.y = "Smarthouse", printPlot=FALSE)
plt <- plt + ggplot2::geom_vline(xintercept=29, linetype="longdash", size=1) +
  ggplot2::scale_x_continuous(breaks=seq(28, 42, by=2)) +
  ggplot2::scale_y_continuous(limits=c(0,750))
print(plt)
plotLongitudinal(data = longi.dat, x="xDAP", response = "sPSA",
                 x.title = "DAP", y.title = "sPSA (kpixels)",
                 facet.x = "Treatment.1", facet.y = "Smarthouse",
                 ggplotFuncs = list(ggplot2::geom_vline(xintercept=29, linetype="longdash",
                                                        size=1),
                                    ggplot2::scale_x_continuous(breaks=seq(28, 42, by=2)),
                                    ggplot2::scale_y_continuous(limits=c(0,750))))
plotMedianDeviations Calculates and plots the median of the deviations of the smoothed values from the observed values.
Description
Calculates and plots the median of the deviations of the supplied smoothed values from the supplied observed values for traits and combinations of different smoothing methods and smoothing degrees of freedom, possibly for subsets of factor combinations. The requisite values can be generated using probeSmoothing with which.plots set to none. The results of smoothing methods applied externally to growthPheno can be included via the extra.smooths argument. Envelopes of the median value of a trait for each factor combination can be added. Note: this function is soft deprecated and may be removed in future versions. Use plotSmoothsMedianDevns.
Usage
plotMedianDeviations(data, response, response.smoothed,
                     x = NULL, xname="xDays",
                     individuals = "Snapshot.ID.Tag",
                     x.title = NULL, y.titles = NULL,
                     facet.x = "Treatment.1", facet.y = "Smarthouse", labeller = NULL,
                     trait.types = c("response", "AGR", "RGR"),
                     propn.types = c(0.1, 0.5, 0.75), propn.note = TRUE,
                     alpha.med.devn = 0.5,
                     smoothing.methods = "direct", df, extra.smooths = NULL,
                     ggplotFuncsMedDevn = NULL, printPlot = TRUE, ...)
Arguments
data A data.frame containing the observed and smoothed values from which the deviations are to be computed. There should be a column of smoothed values for each combination of smoothing.methods, df and the types specified by trait.types. In addition, there should be a column of values for each element of extra.smooths in combination with the elements of trait.types. Also, there should be a column of observed values for each of the types specified by trait.types. The naming of the columns for smoothed traits should follow the convention that a name is made up, in the order specified, of (i) a response.smoothed, (ii) the trait.type if not just a response trait type, a smoothing.method or an extra.smooths and, (iii) if a smoothing.method, a df. Each component should be separated by a period (.).
response A character specifying the response variable for which the observed values are supplied. Depending on the setting of trait.types, the observed values of related trait.types may also need to be supplied.
response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response and obtained for the combinations of smoothing.methods and df, usually using smoothing splines. If response.smoothed is NULL, then response.smoothed is set to the response to which .smooth is added. Depending on the setting of trait.types, the smoothed values of related trait.types may also need to be be supplied. x A character giving the variable to be plotted on the x-axis; it may incorporate an expression. If x is NULL then xname is used. xname A character giving the name of the numeric that contains the values from which x is derived, it being that x may incorporate an expression. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). x.title A character giving the title for the x-axis. If NULL then set to xname. y.titles A character giving the titles for the y-axis, one for each trait specified by trait.types. If NULL then set to the traits derived for response from trait.types. facet.x A data.frame giving the variable to be used to form subsets to be plotted in separate columns of plots. Use "." if a split into columns is not wanted. For which.plots set to methodcompare or dfcompare facet.x is ignored. facet.y A data.frame giving the variable to be used to form subsets to be plotted in separate rows of plots. Use "." if a split into columns is not wanted. labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot. trait.types A character giving the traits types that are to be plotted. While AGR and RGR are commonly used, the names can be arbitrary, except that response is a special case that indicates that the original response is to be plotted. propn.types A numeric giving the proportion of the medians values of each of the trait.types that are to be plotted in the median deviations plots. If set to NULL, the plots of the proprotions are omitted. propn.note A logical indicating whether a note giving the proportion of the median values plotted in the compare.medians plots. alpha.med.devn A numeric specifying the degrees of transparency to be used in plotting a me- dian deviations plot. It is a ratio in which the denominator specifies the number of points (or lines) that must be overplotted to give a solid cover. smoothing.methods A character giving the smoothing method used in producing the response.smoothed and which is to be used in labelling the plot. df A numeric specifying the smoothing degrees of freedom used in producing the response.smoothed and which is to be used in labelling the plot. extra.smooths A character specifying one or more smoothing.method labels that have been used in naming of columns of smooths of the response obtained by methods other than the smoothing spline methods provided by growthPheno. Depending on the setting of trait.types, the smoothed values of related trait types must also be supplied, with names constructed according to the convention described under data. ggplotFuncsMedDevn A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object. printPlot A logical indicating whether or not to print any plots. ... 
allows passing of arguments to plotLongitudinal. Value A list that consists of two components: (i) a componenent named plots that stores a list of the median deviations plots, one for each trait.types; (ii) a component named med.dev.dat that stores the data.frame containing the median deviations that have been plotted. Each plot in the plots list is in an object of class "ggplot", which can be plotted using print. Author(s) <NAME> See Also plotDeviationsBoxes, probeSmoothing, ggplot. Examples data(exampleData) vline <- list(ggplot2::geom_vline(xintercept=29, linetype="longdash", size=1), ggplot2::scale_x_continuous(breaks=seq(28, 42, by=2))) traits <- probeSmoothing(data = longi.dat, xname = "xDAP", times.factor = "DAP", response = "PSA", response.smoothed = "sPSA", df = c(4:7), facet.x = ".", facet.y = ".", which.plots = "none", propn.types = NULL) med <- plotMedianDeviations(data = traits, response = "PSA", response.smoothed = "sPSA", x="xDAP", xname = "xDAP", df = c(4,7), x.title = "DAP", facet.x = ".", facet.y = ".", trait.types = "response", propn.types = 0.05, ggplotFuncsMedDevn = vline) plotProfiles Produces profile plots of longitudinal data for a set of individuals Description Produce profile plots of longitudinal data for a response using ggplot. A line is drawn for the data for each individual and the plot can be faceted so that a grid of plots is produced. For each facet a line for the medians over time can be added, along with the vaue of the outer whiskers (median +/- 1.5 * IQR). Usage plotProfiles(data, response = "PSA", individuals = "Snapshot.ID.Tag", times = "DAP", x = NULL, title = NULL, x.title = "DAP", y.title = "PSA (kpixels)", facet.x = ".", facet.y = ".", labeller = NULL, scales = "fixed", colour = "black", colour.column = NULL, colour.values = NULL, alpha = 0.1, addMediansWhiskers = FALSE, ggplotFuncs = NULL, printPlot = TRUE) Arguments data A data.frame containing the data to be plotted. response A character specifying the response variable that is to be plotted on the y-axis. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. If not a numeric, it will be converted to a numeric and used to provide the values to be plotted on the x-axis. If a factor or character, the values should be numerics stored as characters. x A character specifying a variable, or a function of variables, to be plotted on the x-axis. If NULL, it will be set to the value of times, which it can be assumed will be converted to a numeric. x.title Title for the x-axis. y.title Title for the y-axis. title Title for the plot. facet.x A data.frame giving the variable to be used to form subsets to be plotted in separate columns of plots. Use "." if a split into columns is not wanted. facet.y A data.frame giving the variable to be used to form subsets to be plotted in separate rows of plots. Use "." if a split into rows is not wanted. labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot. scales A character specifying whether the scales are shared across all facets of a plot (the default, "fixed"), or do they vary across rows ("free_x"), columns ("free_y"), or both rows and columns ("free")? 
colour A character specifying a single colour to use in drawing the lines for the profiles. If colouring according to the values of a variable is required then use colour.column.
colour.column A character giving the name of a column in data over whose values the colours of the lines are to be varied. The colours can be specified using colour.values.
colour.values A character vector specifying the values of the colours to use in drawing the lines for the profiles. If this is a named vector, then the values will be matched based on the names. If unnamed, values will be matched in order (usually alphabetical) with the limits of the scale.
alpha A numeric specifying the degrees of transparency to be used in plotting. It is a ratio in which the denominator specifies the number of points (or lines) that must be overplotted to give a solid cover.
addMediansWhiskers A logical indicating whether plots over time of the medians and outer whiskers are to be added to the plot. The outer whiskers are related to the whiskers on a box-and-whisker and are defined as the median plus (and minus) 1.5 times the interquartile range (IQR). Points lying outside the whiskers are considered to be potential outliers.
ggplotFuncs A list, each element of which contains the results of evaluating a ggplot function. It is created by calling the list function with a ggplot function call for each element. These functions are applied in creating the ggplot object.
printPlot A logical indicating whether or not to print the plot.
Value
An object of class "ggplot", which can be plotted using print.
Author(s)
<NAME>
See Also
ggplot, labeller.
Examples
data(exampleData)
plotProfiles(data = longi.dat, response = "sPSA", times = "DAP")
plt <- plotProfiles(data = longi.dat, response = "sPSA",
                    y.title = "sPSA (kpixels)",
                    facet.x = "Treatment.1", facet.y = "Smarthouse", printPlot=FALSE)
plt <- plt + ggplot2::geom_vline(xintercept=29, linetype="longdash", size=1) +
  ggplot2::scale_x_continuous(breaks=seq(28, 42, by=2)) +
  ggplot2::scale_y_continuous(limits=c(0,750))
print(plt)
plotProfiles(data = longi.dat, response = "sPSA", times = "DAP",
             x.title = "DAP", y.title = "sPSA (kpixels)",
             facet.x = "Treatment.1", facet.y = "Smarthouse",
             ggplotFuncs = list(ggplot2::geom_vline(xintercept=29, linetype="longdash",
                                                    size=1),
                                ggplot2::scale_x_continuous(breaks=seq(28, 42, by=2)),
                                ggplot2::scale_y_continuous(limits=c(0,750))))
plotSmoothsComparison Plots several sets of smoothed values for a response, possibly along with growth rates and optionally including the unsmoothed values, as well as deviations boxplots.
Description
Plots the smoothed values for an observed response and, optionally, the unsmoothed observed response using plotProfiles. Depending on the setting of trait.types (response, AGR or RGR), the computed traits of the Absolute Growth Rates (AGR) and/or the Relative Growth Rates (RGR) are plotted. This function will also calculate and produce, using plotDeviationsBoxes, boxplots of the deviations of the supplied smoothed values from the observed response values for the traits and for combinations of the different smoothing parameters and for subsets of non-smoothing-factor combinations. The observed and smoothed values are supplied in long format i.e. with the values for each set of smoothing parameters stacked one under the other in the supplied smooths.frame.
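To make the long-format arrangement concrete, the sketch below builds a toy layout in base R; the individual label, the times and the NA placeholder values are purely illustrative, and probeSmooths constructs the real smooths.frame (including the Type, TunePar, TuneVal, Tuning and Method columns) automatically.
## Two parameter sets (df = 5 and df = 6) stacked one under the other,
## so the smoothed trait occupies a single column (values omitted here).
one.indv <- data.frame(Snapshot.ID.Tag = "plant1", DAP = 28:31)
set.df5 <- cbind(one.indv, TunePar = "df", TuneVal = 5, sPSA = NA_real_)
set.df6 <- cbind(one.indv, TunePar = "df", TuneVal = 6, sPSA = NA_real_)
long.smooths <- rbind(set.df5, set.df6)   # long format: parameter sets stacked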
Such data can be generated using probeSmooths; to prevent probeSmooths producing the plots, which it is does using plotSmoothsComparison, plotDeviationsBoxes and plotSmoothsMedianDevns, set which.plots to none. The smoothing parameters include spline.types, df, lambdas and smoothing.methods (see probeSmooths). Multiple plots, possibly each having multiple facets, are produced using ggplot2. The layout of these plots is controlled via the arguments plots.by, facet.x and facet.y. The basic princi- ple is that the number of levels combinations of the smoothing-parameter factors Type, TunePar, TuneVal, Tuning (the combination of (TunePar and TuneVal), and Method that are included in plots.by, facet.x and facet.y must be the same as those covered by the combinations of the val- ues supplied to spline.types, df, lambdas and Method and incorporated into the smooths.frame input to plotSmoothsComparison via the data argument. This ensures that smooths from different parameter sets are not pooled into the same plot. The factors other than the smoothing-parameter factors can be supplied to the plots.by and facet arguments. The following profiles plots can be produced: (i) separate plots of the smoothed traits for each combination of the smoothing parameters (include Type, Tuning and Method in plots.by); (ii) as for (i), with the corresponding plot for the unsmoothed trait preceeding the plots for the smoothed trait (also set include.raw to alone); (iii) profiles plots that compare a smoothed trait for all combinations of the values of the smoothing parameters, arranging the plots side-by-side or one above the other (include Type, Tuning and Method in facet.x and/or facet.y - to include the unsmoothed trait set include.raw to one of facet.x or facet.y; (iv) as for (iii), except that separate plots are produced for each combination of the levels of the factors in plot.by and each plot compares the smoothed traits for the smoothing-parameter factors included in facet.x and/or facet.y (set both plots.by and one or more of facet.x and facet.y). Usage plotSmoothsComparison(data, response, response.smoothed = NULL, individuals = "Snapshot.ID.Tag", times = "DAP", trait.types = c("response", "AGR", "RGR"), x.title = NULL, y.titles = NULL, profile.plot.args = args4profile_plot(plots.by = NULL, facet.x = ".", facet.y = ".", include.raw = "no"), printPlot = TRUE, ...) Arguments data A smooths.frame, such as is produced by probeSmooths and that contains the data resulting from smoothing a response over time for a set of individuals, the data being arranged in long format both with respect to the times and the smoothing-parameter values used in the smoothing. That is, each response occu- pies a single column. The unsmoothed response and the response.smoothed are to be plotted for different sets of values for the smoothing parameters. The smooths.frame must include the columns Type, TunePar, TuneVal, Tuning and Method, and the columns nominated using the arguments individuals, times, plots.by, facet.x, facet.y, response, response.smoothed, and, if requested, the AGR and the RGR of the response and response.smoothed. The names of the growth rates should be formed from response and response.smoothed by adding .AGR and .RGR to both of them. response A character specifying the response variable for which the observed values are supplied. 
response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response and obtained for the combinations of smoothing.methods and df, usually using smoothing splines. If response.smoothed is NULL, then response.smoothed is set to the response to which is added the prefix s.
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used to provide the values to be plotted on the x-axis. If a factor or character, the values should be numerics stored as characters.
individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit).
trait.types A character giving the trait.types that are to be plotted when which.plots is profiles. Irrespective of the setting of get.rates, the nominated traits are plotted. If all, each of response, AGR and RGR is plotted.
x.title Title for the x-axis, used for all plots. If NULL then set to times.
y.titles A character giving the titles for the y-axis, one for each trait specified by trait.types and used for all plots. If NULL, then set to the traits derived for response from trait.types.
profile.plot.args A named list that is most easily generated using args4profile_plot, it documenting the options available for varying profile plots and boxplots. Note that if args4profile_plot is to be called to change from the default settings given in the default plotSmoothsComparison call and some of those settings are to be retained, then the arguments whose settings are to be retained must also be included in the call to args4profile_plot; be aware that if you call args4profile_plot, then the defaults for this call are those for args4profile_plot, NOT the call to args4profile_plot shown as the default for plotSmoothsComparison.
printPlot A logical indicating whether or not to print any plots.
... allows passing of arguments to plotProfiles.
Value
A multilevel list that contains the ggplot objects for the plots produced. The first-level list has a component for each trait.types and each of these is a second-level list that contains the profile plots for a trait. It may contain components labelled Unsmoothed, all or one of the levels of the factors in plots.by; each of these third-level lists contains a ggplot object that can be plotted using print.
Author(s)
<NAME>
See Also
traitSmooth, probeSmooths, args4profile_plot, plotDeviationsBoxes, plotSmoothsMedianDevns, ggplot.
Examples
data(exampleData)
vline <- list(ggplot2::geom_vline(xintercept=29, linetype="longdash", size=1))
traits <- probeSmooths(data = longi.dat,
                       response = "PSA", response.smoothed = "sPSA",
                       times = "DAP",
                       smoothing.args =
                         args4smoothing(smoothing.methods = "direct",
                                        spline.types = "NCSS"),
                       #which.plots is changed from the probeSmooths default
                       which.plots = "none")
plotSmoothsComparison(data = traits,
                      response = "PSA", response.smoothed = "sPSA",
                      times = "DAP", x.title = "DAP",
                      #only facet.x is changed from the probeSmooths default
                      profile.plot.args =
                        args4profile_plot(plots.by = NULL,
                                          facet.x = "Tuning", facet.y = ".",
                                          include.raw = "no",
                                          ggplotFuncs = vline))
plotSmoothsDevnBoxplots Produces boxplots for several sets of deviations of the smoothed values from a response, possibly along with growth rates.
Description Calculates and produces, using plotDeviationsBoxes, boxplots of the deviations of the supplied smoothed values from the observed response values for the traits and for combinations of the dif- ferent smoothing parameters and for subsets of non-smoothing-factor combinations. Which traits are plotted is controlled by trait.types and may include the (responseand the computed traits of the Absolute Growth Rates (AGR) and/or the Relative Growth Rates (RGR). The observed and smoothed values are supplied in long format i.e. with the values for each set of smoothing parame- ters stacked one under the other in the supplied smooths.frame. Such data can be generated using probeSmooths. Multiple plots, possibly each having multiple facets, are produced using ggplot2. The layout of these plots is controlled via the arguments plots.by, facet.x and facet.y. The basic principle is that the number of levels combinations of the smoothing-parameter factors Type, TunePar, TuneVal, Tuning (the combination of (TunePar and TuneVal), and Method that are included in plots.by, facet.x and facet.y must be the same as those covered by the combinations of the values incorporated into the smooths.frame input to plotSmoothsDevnBoxplots via the data argument. This ensures that smooths from different parameter sets are not pooled into the same plot. The factors other than the smoothing-parameter factors can be supplied to the plots.by and facet arguments. Usage plotSmoothsDevnBoxplots(data, response, response.smoothed = NULL, individuals = "Snapshot.ID.Tag", times = "DAP", trait.types = c("response", "AGR", "RGR"), which.plots = "absolute.boxplots", x.title = NULL, y.titles = NULL, devnboxes.plot.args = args4devnboxes_plot(plots.by = NULL, facet.x = ".", facet.y = "."), printPlot = TRUE, ...) Arguments data A smooths.frame, such as is produced by probeSmooths and that contains the data resulting from smoothing a response over time for a set of individuals, the data being arranged in long format both with respect to the times and the smoothing-parameter values used in the smoothing. That is, each response occu- pies a single column. The unsmoothed response and the response.smoothed are to be plotted for different sets of values for the smoothing parameters. The smooths.frame must include the columns Type, TunePar, TuneVal, Tuning and Method, and the columns nominated using the arguments individuals, times, plots.by, facet.x, facet.y, response, response.smoothed, and, if requested, the AGR and the RGR of the response and response.smoothed. The names of the growth rates should be formed from response and response.smoothed by adding .AGR and .RGR to both of them. response A character specifying the response variable for which the observed values are supplied. response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response and obtained for the combinations of smoothing.methods and df, usually using smoothing splines. If response.smoothed is NULL, then response.smoothed is set to the response to which is added the prefix s. times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used to provide the values to be plotted on the x-axis. If a factor or character, the values should be numerics stored as characters. 
individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit).
trait.types A character giving the trait.types that are to be plotted. If all, each of response, AGR and RGR is plotted.
which.plots A character indicating which plots are to be produced. The options are either none or one or both of absolute.boxplots and relative.boxplots. Boxplots of the absolute deviations are specified by absolute.boxplots, the absolute deviations being the values of a trait minus their smoothed values (observed - smoothed). Boxplots of the relative deviations are specified by relative.boxplots, the relative deviations being the absolute deviations divided by the smoothed values of the trait.
x.title Title for the x-axis, used for all plots. If NULL then set to times.
y.titles A character giving the titles for the y-axis, one for each trait specified by trait.types and used for all plots. If NULL, then set to the traits derived for response from trait.types.
devnboxes.plot.args A named list that is most easily generated using args4devnboxes_plot, it documenting the options available for varying the boxplots. Note that if args4devnboxes_plot is to be called to change from the default settings given in the default probeSmooths call and some of those settings are to be retained, then the arguments whose settings are to be retained must also be included in the call to args4devnboxes_plot; be aware that if you call args4devnboxes_plot, then the defaults for this call are those for args4devnboxes_plot, NOT the call to args4devnboxes_plot shown as the default for probeSmooths.
printPlot A logical indicating whether or not to print any plots.
... allows passing of arguments to plotProfiles.
Value
A multilevel list that contains the ggplot objects for the plots produced. The first-level list has a component for each trait.types and each of these is a second-level list that contains the deviations boxplots for a response. Each plot is in an object of class ggplot, which can be plotted using print.
Author(s)
<NAME>
See Also
traitSmooth, probeSmooths, args4profile_plot, plotDeviationsBoxes, plotSmoothsMedianDevns, ggplot.
Examples
data(exampleData)
traits <- probeSmooths(data = longi.dat,
                       response = "PSA", response.smoothed = "sPSA",
                       times = "DAP",
                       smoothing.args =
                         args4smoothing(smoothing.methods = "direct",
                                        spline.types = "NCSS"),
                       #which.plots is changed from the probeSmooths default
                       which.plots = "none")
plotSmoothsDevnBoxplots(data = traits,
                        response = "PSA", response.smoothed = "sPSA",
                        times = "DAP", x.title = "DAP",
                        #only facet.x is changed from the probeSmooths default
                        devnboxes.plot.args =
                          args4devnboxes_plot(plots.by = NULL,
                                              facet.x = "Tuning", facet.y = "."))
plotSmoothsMedianDevns Calculates and plots the medians of the deviations from the observed values for several sets of smoothed values stored in a data.frame in long format.
Description
Calculates and plots the medians of the deviations of the supplied smoothed values from the supplied observed values for traits and combinations of different smoothing parameters, possibly for subsets of non-smoothing-factor combinations. The observed and smoothed values are supplied in long format i.e. with the values for each set of smoothing parameters stacked one under the other in the supplied data.frame.
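As a concrete illustration of the quantities involved, the base-R sketch below computes the absolute deviations (observed - smoothed), the relative deviations, and the median of the absolute deviations at each time; the data frame, its values and the column names PSA, sPSA and DAP are made-up stand-ins for a smooths.frame produced by probeSmooths.
## Toy values for one smoothing-parameter set (illustrative only)
devn.dat <- data.frame(DAP  = rep(28:30, each = 2),
                       PSA  = c(10.2, 11.0, 12.5, 13.1, 15.0, 15.8),
                       sPSA = c(10.0, 11.2, 12.6, 13.0, 14.8, 16.0))
devn.dat$abs.devn <- devn.dat$PSA - devn.dat$sPSA          # observed - smoothed
devn.dat$rel.devn <- devn.dat$abs.devn / devn.dat$sPSA     # relative deviations
med.devn <- aggregate(abs.devn ~ DAP, data = devn.dat, FUN = median)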
Such data can be generated using probeSmooths; to prevent probeSmooths producing the plots, which it does using plotSmoothsComparison, plotDeviationsBoxes and plotSmoothsMedianDevns, set which.plots to none. The smoothing parameters include spline.types, df, lambdas and smoothing.methods (see probeSmooths).
Multiple plots, possibly each having multiple facets, are produced using ggplot2. The layout of these plots is controlled via the smoothing-parameter factors Type, Tuning (the combination of TunePar and TuneVal) and Method that can be supplied to the arguments plots.by, plots.group, facet.x and facet.y. These plots and facet arguments can also include factors other than the smoothing-parameter factors, that are also associated with the data. The basic principle is that the number of levels combinations of the smoothing-parameter factors included in the plots and facet arguments must be the same as those covered by the combinations of the values supplied to spline.types, df, lambdas and Method and incorporated into the smooths.frame input to plotSmoothsMedianDevns via the data argument. This ensures that smooths from different parameter sets are not pooled in a single plot. Envelopes of the median value of a trait for each factor combination can be added.
Usage
plotSmoothsMedianDevns(data, response, response.smoothed = NULL,
                       individuals = "Snapshot.ID.Tag", times = "DAP",
                       trait.types = c("response", "AGR", "RGR"),
                       x.title = NULL, y.titles = NULL,
                       meddevn.plot.args =
                         args4meddevn_plot(plots.by = NULL, plots.group = NULL,
                                           facet.x = ".", facet.y = ".",
                                           propn.note = TRUE,
                                           propn.types = c(0.1, 0.5, 0.75)),
                       printPlot = TRUE, ...)
Arguments
data A smooths.frame, such as is produced by probeSmooths and that contains the data resulting from smoothing a response over time for a set of individuals, the data being arranged in long format both with respect to the times and the smoothing-parameter values used in the smoothing. That is, each response occupies a single column. The smooths.frame must include the columns Type, TunePar, TuneVal, Tuning and Method, and the columns nominated using the arguments individuals, times, plots.by, facet.x, facet.y, plots.group, response, response.smoothed, and, if requested, the AGR and the RGR of the response and response.smoothed. The names of the growth rates should be formed from response and response.smoothed by adding .AGR and .RGR to both of them.
response A character specifying the response variable for which the observed values are supplied. Depending on the setting of trait.types, the observed values of related trait.types may also need to be supplied.
response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response and obtained for the combinations of smoothing.methods and df, usually using smoothing splines. If response.smoothed is NULL, then response.smoothed is set to the response to which is added the prefix s. Depending on the setting of trait.types, the smoothed values of related trait.types may also need to be supplied.
individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit).
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used to provide the values to be plotted on the x-axis.
If a factor or character, the values should be numerics stored as characters.
trait.types A character giving the trait types that are to be plotted. While AGR and RGR are commonly used, the names can be arbitrary, except that response is a special case that indicates that the original response is to be plotted. If all, each of response, AGR and RGR is plotted.
x.title Title for the x-axis. If NULL then set to times.
y.titles A character giving the titles for the y-axis, one for each trait specified by trait.types. If NULL, then set to the traits derived for response from trait.types.
meddevn.plot.args A named list that is most easily generated using args4meddevn_plot, it documenting the options available for varying median deviations plots. Note that if args4meddevn_plot is to be called to change from the default settings given in the default plotSmoothsMedianDevns call and some of those settings are to be retained, then the arguments whose settings are to be retained must also be included in the call to args4meddevn_plot; be aware that if you call args4meddevn_plot, then the defaults for this call are those for args4meddevn_plot, NOT the call to args4meddevn_plot shown as the default for plotSmoothsMedianDevns.
printPlot A logical indicating whether or not to print the plot.
... allows passing of arguments to other functions; not used at present.
Value
A list that consists of two components: (i) a component named plots that stores a two-level list of the median deviations plots; the first-level list has a component for each trait.types and each of these list(s) is a second-level list that contains the set of plots specified by plots.by (if plots.by is NULL, a single plot is stored); (ii) a component named med.dev.dat that stores the data.frame containing the median deviations that have been plotted. Each plot in the plots list is in an object of class ggplot, which can be plotted using print.
Author(s)
<NAME>
See Also
traitSmooth, probeSmooths, args4meddevn_plot, plotSmoothsComparison, plotDeviationsBoxes, ggplot.
Examples
data(exampleData)
vline <- list(ggplot2::geom_vline(xintercept=29, linetype="longdash", size=1))
traits <- probeSmooths(data = longi.dat,
                       response = "PSA", response.smoothed = "sPSA",
                       times = "DAP", get.rates = FALSE, trait.types = "response",
                       smoothing.args =
                         args4smoothing(smoothing.methods = "direct",
                                        spline.types = "NCSS"),
                       which.plots = "none")
med <- plotSmoothsMedianDevns(data = traits,
                              response = "PSA", response.smoothed = "sPSA",
                              times = "DAP", trait.types = "response",
                              meddevn.plot.args =
                                args4meddevn_plot(plots.by = NULL, plots.group = "Tuning",
                                                  facet.x = ".", facet.y = ".",
                                                  ggplotFuncs = vline))
prepImageData Prepares raw imaging data for further processing
Description
Forms the prime traits by selecting a subset of the traits in a data.frame of imaging data produced by the Lemna Tec Scanalyzer. The imaging traits to be retained are specified using the traits and labsCamerasViews arguments. Some imaging traits are divided by 1000 to convert them from pixels to kilopixels. Also added are factors and explanatory variates that might be of use in an analysis of the data.
Usage prepImageData(data, cartId = "Snapshot.ID.Tag", imageTimes = "Snapshot.Time.Stamp", timeAfterStart = "Time.after.Planting..d.", PSAcolumn = "Projected.Shoot.Area..pixels.", idcolumns = c("Genotype.ID","Treatment.1"), traits = list(all = c("Area", "Boundary.Points.To.Area.Ratio", "Caliper.Length", "Compactness", "Convex.Hull.Area"), side = c("Center.Of.Mass.Y", "Max.Dist.Above.Horizon.Line")), labsCamerasViews = list(all = c("SV1", "SV2", "TV"), smarthouse.lev = NULL, calcWaterUse = TRUE) Arguments data A data.frame containing the columns specified by cartId, imageTimes, timeAfterStart, PSAcolumn idcolumns, traits and cameras along with the following columns: Smarthouse, Lane, Position, Weight.Before, Weight.After, Water.Amount, Projected.Shoot.Area..pixels. The defaults for the arguments to prepImageData requires a data.frame con- taining the following columns, although not necessarily in the order given here: Smarthouse, Lane, Position, Weight.Before, Weight.After, Water.Amount, Projected.Shoot.Area..pixels., Area.SV1, Area.SV2, Area.TV, Boundary.Points.To.Area.Ratio.SV1, Boundary.Points.To.Area.Ratio.SV2, Boundary.Points.To.Area.Ratio.TV, Caliper.Length.SV1, Caliper.Length.SV2, Caliper.Length.TV, Compactness.SV1, Compactness.SV2, Compactness.TV, Convex.Hull.Area.SV1, Convex.Hull.Area.SV2, Convex.Hull.Area.TV, Center.Of.Mass.Y.SV1, Center.Of.Mass.Y.SV2, Max.Dist.Above.Horizon.Line.SV1, Max.Dist.Above.Horizon.Line.SV2. cartId A character giving the name of the column that contains the unique Id for each cart. imageTimes A character giving the name of the column that contains the time that each cart was imaged. timeAfterStart A character giving the name of the column that contains the time after some nominated starting time e.g. the number of days after planting. PSAcolumn A character giving the name of the column that contains the projected shoot area. idcolumns A character vector giving the names of the columns that identify differences between the plants or carts e.g. Genotype.ID, Treatment.1, Treatment.2. traits A character or a list whose components are characters. Each character gives the names of the columns for imaging traits whose values are required for each of the camera-view combinations given in the corresponding list component of labsCamerasViews. If labsCamerasViews or a component of labsCamerasViews is NULL, then the contents of traits or the coresponding component of traits are merely treated as the names of columns to be retained. labsCamerasViews A character or a list whose components are characters. Each character gives the labels of the camera-view combinations for which is required values of each of the imaging traits in the corresponding character of traits. It is as- sumed that the camera-view labels are appended to the trait names and separated from the trait names by a full stop (.). If labsCamerasViews or a component of labsCamerasViews is NULL, then the contents of the traits or the corespond- ing component of traits are merely treated as the names of columns to be retained. smarthouse.lev A character vector giving the levels to use for the Smarthouse factor. If NULL then the unique values in Smarthouse will be used. calcWaterUse A logical indicating whether to calculate the Water.Loss. If it is FALSE, Water.Before, Water.After and Water.Amount will not be in the returned data.frame. They can be copied across by listing them in a component of traits and set the cor- responding component of cameras to NULL. 
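To illustrate the naming convention that the traits and labsCamerasViews arguments rely on, the short base-R sketch below pastes camera-view labels onto trait names with a full stop as separator; the particular vectors are illustrative only, and the matching itself is done internally by prepImageData.
trait.names <- c("Area", "Compactness")
view.labels <- c("SV1", "SV2", "TV")
outer(trait.names, view.labels, paste, sep = ".")
## e.g. "Area.SV1", "Area.TV" and "Compactness.SV2" are the column names expected in data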
Details The columns are copied from data, except for those columns that are calculated from the columns in data; those columns that are calculated have ‘(calculated)’ appended in the list under Value. Value A data.frame containing the columns specified by cartId, imageTimes, timeAfterStart, idcolumns, traits and cameras. The defaults will result in the following columns: 1. Smarthouse: factor with levels for the Smarthouse 2. Lane: factor for lane number in a smarthouse 3. Position: factor for east/west position in a lane 4. DAP: factor for the number of Days After Planting 5. xDAP: numeric for the DAP (calculated) 6. cartId: unique code for each cart 7. imageTimes: time at which an image was taken in POSIXct format 8. Hour: hour of the day, to 2 decimal places, at which the image was taken (calculated) 9. Reps: factor indexing the replicates for each combination of the factors in idcolumns (calculated) 10. idcolumns: the columns listed in idcolumns that have been converted to factors 11. Weight.Before: weight of the pot before watering (only if calcWaterUse is TRUE) 12. Weight.After: weight of the pot after watering (only if calcWaterUse is TRUE) 13. Water.Amount: the weight of the water added (= Water.After - Water.Before) (calculated) 14. WU: the water use calculated as the difference between Weight.Before for the current imaging and the Weight.After for the previous imaging (calculated unless calcWaterUse is FALSE) 15. PSA: the Projected.Shoot.Area..pixels. divided by 1000 (calculated) 16. PSA.SV1: the Projected.Shoot.Area from Side View 1 divided by 1000 (calculated) 17. PSA.SV2: the Projected.Shoot.Area from Side View 2 divided by 1000 (calculated) 18. PSA.TV: the Projected.Shoot.Area from Top View divided by 1000 (calculated) 19. Boundary.To.PSA.Ratio.SV1 20. Boundary.To.PSA.Ratio.SV2 21. Boundary.To.PSA.Ratio.TV 22. Caliper.Length.SV1 23. Caliper.Length.SV2 24. Caliper.Length.TV 25. Compactness.SV1 from Side View 1 26. Compactness.SV2 from Side View 2 27. Compactness.TV: from Top View 28. Convex.Hull.PSA.SV1: area of Side View 1 Convex Hull divided by 1000 (calculated) 29. Convex.Hull.PSA.SV2: area of Side View 2 Convex Hull divided by 1000 (calculated) 30. Convex.Hull.PSA.TV: Convex.Hull.Area.TV divided by 1000 (calculated) 31. Center.Of.Mass.Y.SV1: Centre of Mass from Side View 1 32. Center.Of.Mass.Y.SV2: Centre of Mass from Side View 2 33. Max.Dist.Above.Horizon.Line.SV1: the Max.Dist.Above.Horizon.Line.SV1 divided by 1000 (calculated) 34. 
Max.Dist.Above.Horizon.Line.SV2: the Max.Dist.Above.Horizon.Line.SV2 divided by 1000 (calculated) Author(s) <NAME> Examples data(exampleData) longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1) longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1, traits = list(a = "Area", c = "Compactness"), labsCamerasViews = list(all = c("SV1", "SV2", "TV"), t = "TV")) longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1, traits = c("Area.SV1", "Area.SV2", "Area.TV", "Compactness.TV"), labsCamerasViews = NULL) longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1, calcWaterUse = FALSE, traits = list(img = c("Area", "Compactness"), H20 = c("Weight.Before","Weight.After", "Water.Amount")), labsCamerasViews = list(all = c("SV1", "SV2", "TV"), probeSmoothing Compares, for a set of specified values of df and different smoothing methods, a response and the smooths of it, possibly along with growth rates calculated from the smooths Description Takes a response and, for each individual, uses splitSplines to smooth its values for each in- dividual using the degrees of freedom values in df. Provided get.rates is TRUE, both the Absolute Growth Rates (AGR) and the Relative Growth Rates (RGR) are calculated for each smooth, either using differences or first derivatives. A combination of the unsmoothed and smoothed values, as well as the AGR and RGR, can be plotted for each value in smoothing methods in combination with df. Note that the arguments that modify the plots apply to all plots that are produced. The handling of missing values is controlled via na.x.action and na.y.action Note: this function is soft deprecated and may be removed in future versions. Use probeSmooths. Usage probeSmoothing(data, response = "Area", response.smoothed = NULL, x = NULL, xname="xDays", times.factor = "Days", individuals="Snapshot.ID.Tag", na.x.action="exclude", na.y.action = "exclude", df, smoothing.methods = "direct", correctBoundaries = FALSE, get.rates = TRUE, rates.method="differences", facet.x = "Treatment.1", facet.y = "Smarthouse", labeller = NULL, x.title = NULL, colour = "black", colour.column=NULL, colour.values=NULL, alpha = 0.1, trait.types = c("response", "AGR", "RGR"), propn.types = c(0.1, 0.5, 0.75), propn.note = TRUE, which.plots = "smoothedonly", deviations.plots = "none", alpha.med.devn = 0.5, ggplotFuncs = NULL, ggplotFuncsMedDevn = NULL, ...) Arguments data A data.frame containing the data. response A character specifying the response variable to be supplied to smooth.spline and that is to be plotted on the y-axis. response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response. If response.smoothed is NULL, then response.smoothed is set to the response to which .smooth is added. x A character giving the variable to be plotted on the x-axis; it may incorporate an expression. If x is NULL then xname is used. xname A character giving the name of the numeric that contains the values of the predictor variable to be supplied to smooth.spline and from which x is derived. times.factor A character giving the name of the column in data containing the factor for times at which the data was collected. Its levels will be used in calculating growth rates and should be numeric values stored as characters. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). 
na.x.action A character string that specifies the action to be taken when values of x are NA. The possible values are fail, exclude or omit. For exclude and omit, predictions and derivatives will only be obtained for nonmissing values of x. The difference between these two codes is that for exclude the returned data.frame will have as many rows as data, the missing values have been incorporated. na.y.action A character string that specifies the action to be taken when values of y, or the response, are NA. The possible values are fail, exclude, omit, allx, trimx, ltrimx or rtrimx. For all options, except fail, missing values in y will be removed before smoothing. For exclude and omit, predictions and derivatives will be obtained only for nonmissing values of x that do not have missing y values. Again, the difference between these two is that, only for exclude will the missing values be incorporated into the returned data.frame. For allx, predictions and derivatives will be obtained for all nonmissing x. For trimx, they will be obtained for all nonmissing x between the first and last nonmissing y values that have been ordered for x; for ltrimx and utrimx either the lower or upper missing y values, respectively, are trimmed. df A numeric specifying the set of degrees of freedom to be probed. smoothing.methods A character giving the smoothing method to use. The two possibilites are (i) "direct", for directly smoothing the observed response, and (ii) "logarithmic", for smoothing the log-transformed response and then back-transforming by taking the exponentional of the fitted values. correctBoundaries A logical indicating whether the fitted spline values are to have the method of Huang (2001) applied to them to correct for estimation bias at the end-points. Note that spline.type must be NCSS and lambda and deriv must be NULL for correctBoundaries to be set to TRUE. get.rates A logical specifying whether or not the growth rates (AGR and RGR) are to be computed and stored. rates.method A character specifying the method to use in calculating the growth rates. The two possibilities are "differences" and "derivates". facet.x A data.frame giving the variable to be used to form subsets to be plotted in separate columns of plots. Use "." if a split into columns is not wanted. For which.plots set to methodscompare or dfcompare , facet.x is ignored. facet.y A data.frame giving the variable to be used to form subsets to be plotted in separate rows of plots. Use "." if a split into columns is not wanted. labeller A ggplot function for labelling the facets of a plot produced using the ggplot function. For more information see ggplot. x.title Title for the x-axis. If NULL then set to times.factor. colour A character specifying a single colour to use in drawing the lines for the pro- files. If colouring according to the values of a variable is required then use colour.column. colour.column A character giving the name of a column in data over whose values the colours of the lines are to be varied. The colours can be specified using colour.values. colour.values A character vector specifying the values of the colours to use in drawing the lines for the profiles. If this is a named vector, then the values will be matched based on the names. If unnamed, values will be matched in order (usually al- phabetical) with the limits of the scale. alpha A numeric specifying the degrees of transparency to be used in plotting. 
It is a ratio in which the denominator specifies the number of points (or lines) that must be overplotted to give a solid cover. trait.types A character giving the trait.types that are to be produced, and potentially plotted. One of more of response, AGR and RGR. If all, all three traits are produced. propn.types A numeric giving the proportion of the median values of each of the trait.types that are to be plotted in the compare.medians plots of the deviations of the ob- served values from the smoothed values. If set to NULL, the plots of the propor- tions of the median values of the traits are omitted. propn.note A logical indicating whether a note giving the proportion of the median values plotted in the compare.medians plots. which.plots A character giving the plots that are to be produced. If none, no plots are pro- duced. If smoothedonly, plots of the smoothed traits are plotted. If bothseparately, plots of the unsmoothed trait followed by the smoothed traits are produced for each trait. If methodscompare, a combined plot of the smoothed traits for each smoothing.methods is produced, for each value of df. If methods+rawcompare, the unsmoothed trait is added to the combined plot. if dfcompare, a combined plot of the smoothed trait for each df is produced, for each smoothing.methods. If df+rawcompare, the unsmoothed trait is added to the combined plot. deviations.plots A character is either none or any combination of absolute.boxplots, relative.boxplots and compare.medians. If none, no plots are produced. Boxplots of the absolute and relative deviations are specified by absolute.boxplots and relative.boxplots. The absolute deviations are the values of a trait minus their smoothed values (observed - smoothed); the relative deviations are the absolute deviations di- vided by the smoothed values of the trait. The option compare.medians results in a plot that compares the medians of the deviations over the times.factor for each combination of the smoothing.methods and the df. The argument trait.types controls the traits for which boxplots are produced. alpha.med.devn A numeric specifying the degrees of transparency to be used in plotting a me- dian deviations plot. It is a ratio in which the denominator specifies the number of points (or lines) that must be overplotted to give a solid cover. ggplotFuncs A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. Note that these functions are applied to all three plots produced. ggplotFuncsMedDevn A list, each element of which contains the results of evaluating a ggplot func- tion. It is created by calling the list function with a ggplot function call for each element. Note that these functions are applied to the compare.median deviations plots only. ... allows passing of arguments to plotLongitudinal. Value A data.frame containing individuals, times.factor, facet.x, facet.y, xname, response, and, for each df, the smoothed response, the AGR and the RGR. It is returned invisibly. The names of the new data are constructed by joining elements separated by full stops (.). In all cases, the last element is the value of df. For the smoothed response, the other elements are response and "smooth"; for AGR and RGR, the other elements are the name of the smoothed response and either "AGR" or "RGR". Author(s) <NAME> See Also splitSplines, splitContGRdiff, smooth.spline, ggplot. 
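Since the growth rates can be computed by differencing, the base-R sketch below shows the usual difference-based calculations for a single individual; the vectors t and y hold made-up illustrative values, and the RGR line uses the standard definition via differences of logarithms, which may differ in detail from the package's internal implementation.
t <- c(28, 30, 32, 34)              # times (e.g. DAP); illustrative values
y <- c(100, 140, 190, 230)          # a (smoothed) trait for one individual
AGR <- diff(y) / diff(t)            # absolute growth rate between consecutive times
RGR <- diff(log(y)) / diff(t)       # relative growth rate via differences of logs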
Examples data(exampleData) vline <- list(ggplot2::geom_vline(xintercept=29, linetype="longdash", size=1), ggplot2::scale_x_continuous(breaks=seq(28, 42, by=2))) probeSmoothing(data = longi.dat, response = "PSA", df = c(4,7), xname = "xDAP", times = "DAP", ggplotFuncs=vline) probeSmooths For a response in a data.frame in long format, computes and com- pares, for sets of smoothing parameters, smooths of the response, pos- sibly along with growth rates calculated from the smooths. Description Takes an observed response and, for each individual, uses byIndv4Times_SplinesGRs to smooth its values employing the smoothing parameters specified by (i) spline.types, (ii) the tuning pa- rameters, being the degrees of freedom values in df or the smoothing penalties in lambdas, and (iii) the smoothing.methods. The values of these, and other, smoothing arguments are set using the helper function args4smoothing. Provided get.rates is TRUE or includes raw and/or smoothed and depending on the setting of trait.types, the Absolute Growth Rates (AGR) and/or the Relative Growth Rates (RGR) are calculated for each individual from the unsmoothed, observed response and from the smooths of the response, using either differences or first derivatives, as specified by rates.method. Generally, profile plots for the traits (a response, an AGR or an RGR) specified in traits.types are produced if which.plots is profiles; if which.plots specifies one or more deviations plots, then those deviations plots will also be produced, these being based on the unsmoothed data from which the smoothed data has been subtracted. The layout of the plots is controlled via combi- nations of one or more of the smoothing-parameter factors Type, TunePar, TuneVal, Tuning (the combination of TunePar and TuneVal) and Method, as well as other factors associated with the data. The factors that are to be used for the profile plots are supplied via the argument profile.plot.args using the helper function args4profile_plot and for the and deviations boxplots using the helper function args4devnboxes_plot. These helper functions set plots.by, facet.x, and facet.y. For the plots of the medians of the deviations, the factors are supplied via the argument meddevn.plot.args using the helper function args4meddevn_plot to set plots.by, facet.x, facet.y and plots.group. Here, the basic principle is that the number of levels com- binations of the smoothing-parameter factors included in the set of plots and facets arguments to one of these helper functions must be the same as those covered by the combinations of the val- ues supplied to spline.types, df, lambdas and smoothing.methods and incorporated into the smooths.frame, such as is returned by probeSmooths. This ensures that smooths from different parameter sets are not pooled together in a single plot. It is also possible to include factors that are not smoothing-parameter factors in the plots amd facets arguments. 
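As a hedged sketch of this matching principle, not a prescribed recipe, the call below probes two smoothing.methods and three df values, six parameter combinations in all, and therefore places Method and Tuning in the facet arguments so that all six combinations are covered; apart from the argument names shown in the Usage section below, the particular methods, df values and facet choices are assumptions made for illustration.
data(exampleData)
probeSmooths(data = longi.dat, response = "PSA", times = "DAP",
             smoothing.args =
               args4smoothing(smoothing.methods = c("direct", "logarithmic"),
                              spline.types = "NCSS",
                              df = c(4, 5, 6), lambdas = NULL),
             profile.plot.args =
               args4profile_plot(plots.by = NULL,
                                 facet.x = "Method",   # 2 methods across columns
                                 facet.y = "Tuning"))  # 3 df values down rows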
The following profiles plots can be produced using args4profile_plot: (i) separate plots of the smoothed traits for each combination of the smoothing parameters (include Type, Tuning and Method in plots.by); (ii) as for (i), with the corresponding plot for the unsmoothed trait pre- ceeding the plots for the smoothed trait (also set include.raw to alone); (iii) profiles plots that compare a smoothed trait for all combinations of the values of the smoothing parameters, arranging the plots side-by-side or one above the other (include Type, Tuning and Method in facet.x and/or facet.y - to include the unsmoothed trait set include.raw to one of facet.x or facet.y; (iv) as for (iii), except that separate plots are produced for each combination of the levels of the factors in plot.by and each plot compares the smoothed traits for the smoothing-parameter factors included in facet.x and/or facet.y (set both plots.by and one or more of facet.x and facet.y). Deviation plots that can be produced are the absolute and relative deviations boxplots and plots of medians deviations (see which.plots). The handling of missing values is controlled via na.x.action and na.y.action supplied to the helper function args4smoothing. The probeSmooths arguments are grouped according to function in the following order: 1. Data description arguments: data, response, response.smoothed, individuals, times, keep.columns, trait.types, get.rates, rates.method, ntimes2span. 2. Smoothing arguments: smoothing.args (see args4smoothing). 3. General plot control: x.title, y.titles, facet.labeller, which.plots. 4. Profile plots (pf) features: profile.plot.args (see args4profile_plot) 5. Median-deviations (med) plots features: meddevn.plot.args (see args4meddevn_plot) 6. Deviations boxplots (box) features: devnboxes.plot.args (see args4devnboxes_plot) Usage probeSmooths(data, response = "PSA", response.smoothed = NULL, individuals="Snapshot.ID.Tag", times = "DAP", keep.columns = NULL, get.rates = TRUE, rates.method="differences", ntimes2span = NULL, trait.types = c("response", "AGR", "RGR"), smoothing.args = args4smoothing(smoothing.methods = "direct", spline.types = "NCSS", df = NULL, lambdas = NULL), x.title = NULL, y.titles = NULL, which.plots = "profiles", profile.plot.args = args4profile_plot(plots.by = NULL, facet.x = ".", facet.y = ".", include.raw = "no"), meddevn.plot.args = args4meddevn_plot(plots.by = NULL, plots.group = NULL, facet.x = ".", facet.y = ".", propn.note = TRUE, propn.types = c(0.1, 0.5, 0.75)), devnboxes.plot.args = args4devnboxes_plot(plots.by = NULL, facet.x = ".", facet.y = ".", which.plots = "none"), ...) Arguments data A data.frame containing the data or a smooths.frame as is produced by probeSmooths. if data is not a smooths.frame, then smoothing will be performed. If data is a smooths.frame, then the plotting and selection of smooths will be performed as specified by smoothing.args and which.plots. response A character specifying the response variable to be supplied to smoothSpline and that is to be plotted on the y-axis. response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response. If response.smoothed is NULL, then response.smoothed is set to the response to which is added the prefix s. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). 
times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used as the values of the predictor variable to be supplied to smooth.spline and to be plotted on the x-axis. If a factor or character, the values should be numerics stored as characters. keep.columns A character vector giving the names of columns from data that are to be included in the smooths.frame that will be returned. Its main use is when no plots are being produced by probeSmooths, but there are columns in the supplied data.frame that are likely to be needed for the plots and facets arguments when producing plots subsequently. get.rates A logical or a character specifying which of the response and the response.smoothed are to have growth rates (AGR and/or RGR) computed and stored. If set to TRUE or c("raw", "smoothed"), growth rates will be obtained for both. Setting to only one of raw or smoothed results in the growth rates for either the response or the response.smoothed being computed. If set to none or FALSE, no growth rates are computed. Which growth rates are computed can be changed using the argument trait.types, and the method used for computing them for the response.smoothed is specified by rates.method. The growth rates for the response can only be computed by differencing. rates.method A character specifying the method to use in calculating the growth rates for response.smoothed. The two possibilities are "differences" and "derivatives". ntimes2span A numeric giving the number of values in times to span in calculating growth rates by differencing. For ntimes2span set to NULL, if rates.method is set to differences then ntimes2span is set to 2; if rates.method is set to derivatives then ntimes2span is set to 3. Note that when get.rates includes raw or is TRUE, the growth rates for the unsmoothed response must be calculated by differencing, even if the growth rates for the smoothed response are computed using derivatives. When differencing, each growth rate is calculated as the difference in the values of one of the responses for pairs of times values that are spanned by ntimes2span times values, divided by the difference between this pair of times values. For ntimes2span set to 2, a growth rate is the difference between consecutive pairs of values of one of the responses divided by the difference between consecutive pairs of times values. trait.types A character giving the trait.types that are to be plotted. If growth rates are included in trait.types, then they will be computed for either the response and/or the response.smoothed, depending on the setting of get.rates. Any growth rates included in trait.types for the response that are available in data, but have not been specified for computation in get.rates, will be retained in the returned smooths.frame. If all, the response.smoothed, its AGR and RGR, will be plotted. The response, and its AGR and RGR, will be plotted as the plotting options require it. smoothing.args A list that is most easily generated using args4smoothing, it documenting the options available for smoothing the data. It gives the settings of smoothing.methods, spline.types, df, lambdas, smoothing.segments, npspline.segments, na.x.action, na.y.action, external.smooths, and correctBoundaries, to be used in smoothing the response or in selecting a subset of the smooths in data, depending on whether data is a data.frame or a smooths.frame, respectively.
Set smoothing.args to NULL if data is a smooths.frame and only plotting or ex- traction of a chosen smooth is required. x.title Title for the x-axis, used for all plots. If NULL then set to times. y.titles A character giving the titles for the y-axis, one for each trait specified by trait.types and used for all plots. If NULL then set to the traits derived for response from trait.types. which.plots A logical indicating which plots are to be produced. The options are either none or some combination of profiles, absolute.boxplots, relative.boxplots and medians.deviations. The various profiles plots that can be poduced are described in the introduction to this function. Boxplots of the absolute deviations are specified by absolute.boxplots, the absolute deviations being the values of a trait minus their smoothed values (observed - smoothed). Boxplots of the relative deviations are specified by relative.boxplots, the relative deviations being the absolute deviations di- vided by the smoothed values of the trait. The option medians.deviations results in a plot that compares the medians of the absolute deviations over the values of times for each combination of the smoothing-parameter values. The arguments to probeSmooths that apply to medians.deviations plots have the suffix med. profile.plot.args A named list that is most easily generated using args4profile_plot, it doc- umenting the options available for varying profile plots and boxplots. Note that if args4profile_plot is to be called to change from the default settings given in the default probeSmooths call and some of those settings are to be retained, then the arguments whose settings are to be retained must also be included in the call to args4profile_plot; be aware that if you call args4profile_plot, then the defaults for this call are those for args4profile_plot, NOT the call to args4profile_plot shown as the default for probeSmooths. meddevn.plot.args A named list that is most easily generated using args4meddevn_plot, it doc- umenting the options available for varying median deviations plots. Note that if args4meddevn_plot is to be called to change from the default settings given in the default probeSmooths call and some of those settings are to be retained, then the arguments whose settings are to be retained must also be included in the call to args4meddevn_plot; be aware that if you call args4meddevn_plot, then the defaults for this call are those for args4meddevn_plot, NOT the call to args4meddevn_plot shown as the default for probeSmooths. devnboxes.plot.args A named list that is most easily generated using args4devnboxes_plot, it documenting the options available for varying the boxplots. Note that if args4devnboxes_plot is to be called to change from the default settings given in the default probeSmooths call and some of those settings are to be retained, then the arguments whose set- tings are to be retained must also be included in the call to args4devnboxes_plot; be aware that if you call args4devnboxes_plot, then the defaults for this call are those for args4devnboxes_plot, NOT the call to args4devnboxes_plot shown as the default for probeSmooths. ... allows passing of arguments to plotProfiles. Value A smooths.frame that contains the unsmoothed and smoothed data in long format. That is, all the values for either an unsmoothed or a smoothed trait are in a single column. The smooths for a trait for the different combinatons of the smoothing parameters are placed in rows one below the other. 
The columns that are included in the smooths.frame are Type, TunePar, TuneVal, Tuning and Method, as well as those specified by individuals, times, response, and response.smoothed. and any included in the keep.columns, plots and facet arguments. If trait.types includes AGR or RGR, then the included growth rate(s) of the response and response.smoothed must be present, unless get.rates is TRUE or includes raw and/or smoothed. In this case, the growth rates specified by trait.types will be calculated for the responses nominated by get.rates and the differences between the times used in calculating the rates will be computed and added. Then, the names of the growth rates are formed from response and response.smoothed by appending .AGR and .RGR as appropriate; the name of the column with the times differences will be formed by appending .diffs to the value of times. The external.smooths will also be included. A smooths.frame has the attributes described in smooths.frame. Columns in the supplied data.frame that have not been used in probeSmooths will not be included in the returned smooths.frame. If they might be needed subsequently, such as when extra plots are produced, they can be included in the smooths.frame by listing them in a character vector for the keep.columns argument. The smooths.frame is returned invisibly. Author(s) <NAME> See Also args4smoothing, , args4meddevn_plot, args4profile_plot, traitSmooth, smoothSpline, byIndv4Times_SplinesGRs, byIndv4Times_GRsDiff, smooth.spline, psNormal, plotSmoothsComparison, plotSmoothsMedianDevns, ggplot. Examples data(exampleData) longi.dat <- longi.dat[1:140,] #reduce to a smaller data set vline <- list(ggplot2::geom_vline(xintercept=29, linetype="longdash", linewidth=1)) yfacets <- c("Smarthouse", "Treatment.1") probeSmooths(data = longi.dat, response = "PSA", response.smoothed = "sPSA", individuals = "Snapshot.ID.Tag",times = "DAP", smoothing.args = args4smoothing(df = c(4,7), lambda = list(PS = c(0.316,10))), profile.plot.args = args4profile_plot(plots.by = NULL, facet.x = "Tuning", facet.y = c("Smarthouse", "Treatment.1"), include.raw = "no", alpha = 0.4, colour.column = "Method", colour.values = c("orange", "olivedrab"), ggplotFuncs = vline)) #An example that supplies three smoothing schemes to be compared data(tomato.dat) probeSmooths(data = tomato.dat, response = "PSA", response.smoothed = "sPSA", times = "DAP", smoothing.args = args4smoothing(spline.types = c( "N", "NCS", "P"), df = c( 4, 6, NA), lambdas = c( NA, NA, 1), smoothing.methods = c("dir", "log", "log"), combinations = "parallel"), which.plots = "medians.deviations", meddevn.plot.args = args4meddevn_plot(plots.by = NULL, plots.group = c("Type", "Tuning", "Method"), facet.x = ".", facet.y = ".", propn.note = FALSE, propn.types = NULL)) PVA Selects a subset of variables using Principal Variable Analysis (PVA) Description Principal Variable Analysis (PVA) (Cumming and Wooff, 2007) selects a subset from a set of the variables such that the variables in the subset are as uncorrelated as possible, in an effort to ensure that all aspects of the variation in the data are covered. Usage PVA(obj, ...) Arguments obj A data.frame containing the columns of variables from which the selection is to be made. ... allows passing of arguments to other functions Details PVA is the generic function for the PVA method. Use methods("PVA") to get all the methods for the PVA generic. PVA.data.frame is a method for a data.frame. PVA.matrix is a method for a matrix. 
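As a brief sketch of this dispatch (illustrative only; it assumes the responses vector and the longi.dat and R objects constructed in the Examples of the method-specific pages that follow), the same generic call handles both classes:
results.df <- PVA(longi.dat, responses, p.variance = 0.9, plot = FALSE)  # data.frame method
results.R  <- PVA(R, responses, p.variance = 0.9, plot = FALSE)          # matrix (correlation) method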
Value A data.frame giving the results of the variable selection. It will contain the columns Variable, Selected, h.partial, Added.Propn and Cumulative.Propn.
Author(s) <NAME>
References Cumming, <NAME>. and <NAME> (2007) Dimension reduction via principal variables. Computational Statistics and Data Analysis, 52, 550–565.
See Also PVA.data.frame, PVA.matrix, intervalPVA, rcontrib
PVA.data.frame Selects a subset of variables stored in a data.frame using Principal Variable Analysis (PVA)
Description Principal Variable Analysis (PVA) (Cumming and Wooff, 2007) selects a subset from a set of the variables such that the variables in the subset are as uncorrelated as possible, in an effort to ensure that all aspects of the variation in the data are covered.
Usage
## S3 method for class 'data.frame'
PVA(obj, responses, nvarselect = NULL, p.variance = 1, include = NULL,
    plot = TRUE, ...)
Arguments obj A data.frame containing the columns of variables from which the selection is to be made. responses A character giving the names of the columns in data from which the variables are to be selected. nvarselect A numeric specifying the number of variables to be selected, which includes those listed in include. If nvarselect = 1, as many variables are selected as are needed to satisfy p.variance. p.variance A numeric specifying the minimum proportion of the variance that the selected variables must account for. include A character giving the names of the columns in data for the variables whose selection is mandatory. plot A logical indicating whether a plot of the cumulative proportion of the variance explained is to be produced. ... allows passing of arguments to other functions
Details The variable that is most correlated with the other variables is selected first for inclusion. The partial correlation for each of the remaining variables, given the first selected variable, is calculated and the most correlated of these variables is selected for inclusion next. Then the partial correlations are adjusted for the second included variable. This process is repeated until the specified criteria have been satisfied. The possibilities are: 1. the default (nvarselect = NULL and p.variance = 1), which selects all variables in increasing order of the amount of information they provide; 2. to select exactly nvarselect variables; 3. to select just enough variables, up to a maximum of nvarselect variables, to explain at least p.variance*100 per cent of the total variance.
Value A data.frame giving the results of the variable selection. It will contain the columns Variable, Selected, h.partial, Added.Propn and Cumulative.Propn.
Author(s) <NAME>
References Cumming, <NAME>. and <NAME> (2007) Dimension reduction via principal variables. Computational Statistics and Data Analysis, 52, 550–565.
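The three possibilities can be illustrated with a short, hypothetical sketch (it is not one of the package Examples below and assumes the responses vector and longi.dat as constructed there):
## Possibility 2: select exactly 4 variables.
res.exact <- PVA(longi.dat, responses, nvarselect = 4, plot = FALSE)
## Possibility 3: select up to 6 variables, stopping once 90 per cent of the
## variance is accounted for, with PSA forced into the selection via include.
res.upto <- PVA(longi.dat, responses, nvarselect = 6, p.variance = 0.9,
                include = "PSA", plot = FALSE)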
See Also PVA, PVA.matrix, intervalPVA.data.frame, rcontrib
Examples
data(exampleData)
longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1)
longi.dat <- within(longi.dat, {
  Max.Height <- pmax(Max.Dist.Above.Horizon.Line.SV1, Max.Dist.Above.Horizon.Line.SV2)
  Density <- PSA/Max.Height
  PSA.SV = (PSA.SV1 + PSA.SV2) / 2
  Image.Biomass = PSA.SV * (PSA.TV^0.5)
  Centre.Mass <- (Center.Of.Mass.Y.SV1 + Center.Of.Mass.Y.SV2) / 2
  Compactness.SV = (Compactness.SV1 + Compactness.SV2) / 2
})
responses <- c("PSA","PSA.SV","PSA.TV", "Image.Biomass", "Max.Height","Centre.Mass",
               "Density", "Compactness.TV", "Compactness.SV")
results <- PVA(longi.dat, responses, p.variance=0.9, plot = FALSE)
PVA.matrix Selects a subset of variables using Principal Variable Analysis (PVA) based on a correlation matrix
Description Principal Variable Analysis (PVA) (Cumming and Wooff, 2007) selects a subset from a set of the variables such that the variables in the subset are as uncorrelated as possible, in an effort to ensure that all aspects of the variation in the data are covered.
Usage
## S3 method for class 'matrix'
PVA(obj, responses, nvarselect = NULL, p.variance = 1, include = NULL,
    plot = TRUE, ...)
Arguments obj A matrix containing the correlation matrix for the variables from which the selection is to be made. responses A character giving the names of the rows and columns in obj, being the names of the variables from which the selection is to be made. nvarselect A numeric specifying the number of variables to be selected, which includes those listed in include. If nvarselect = 1, as many variables are selected as are needed to satisfy p.variance. p.variance A numeric specifying the minimum proportion of the variance that the selected variables must account for. include A character giving the names of the columns in data for the variables whose selection is mandatory. plot A logical indicating whether a plot of the cumulative proportion of the variance explained is to be produced. ... allows passing of arguments to other functions
Details The variable that is most correlated with the other variables is selected first for inclusion. The partial correlation for each of the remaining variables, given the first selected variable, is calculated and the most correlated of these variables is selected for inclusion next. Then the partial correlations are adjusted for the second included variable. This process is repeated until the specified criteria have been satisfied. The possibilities are: 1. the default (nvarselect = NULL and p.variance = 1), which selects all variables in increasing order of the amount of information they provide; 2. to select exactly nvarselect variables; 3. to select just enough variables, up to a maximum of nvarselect variables, to explain at least p.variance*100 per cent of the total variance.
Value A data.frame giving the results of the variable selection. It will contain the columns Variable, Selected, h.partial, Added.Propn and Cumulative.Propn.
Author(s) <NAME>
References Cumming, <NAME>. and <NAME> (2007) Dimension reduction via principal variables. Computational Statistics and Data Analysis, 52, 550–565.
See Also PVA, PVA.data.frame, intervalPVA.data.frame, rcontrib
Examples
data(exampleData)
longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1)
longi.dat <- within(longi.dat, {
  Max.Height <- pmax(Max.Dist.Above.Horizon.Line.SV1, Max.Dist.Above.Horizon.Line.SV2)
  Density <- PSA/Max.Height
  PSA.SV = (PSA.SV1 + PSA.SV2) / 2
  Image.Biomass = PSA.SV * (PSA.TV^0.5)
  Centre.Mass <- (Center.Of.Mass.Y.SV1 + Center.Of.Mass.Y.SV2) / 2
  Compactness.SV = (Compactness.SV1 + Compactness.SV2) / 2
})
responses <- c("PSA","PSA.SV","PSA.TV", "Image.Biomass", "Max.Height","Centre.Mass",
               "Density", "Compactness.TV", "Compactness.SV")
R <- Hmisc::rcorr(as.matrix(longi.dat[responses]))$r
results <- PVA(R, responses, p.variance=0.9, plot = FALSE)
rcontrib Computes a measure of how correlated each variable in a set is with the other variables, conditional on a nominated subset of them
Description A measure of how correlated a variable is with those in a set is given by the square root of the sum of squares of the correlation coefficients between the variable and the other variables in the set (Cumming and Wooff, 2007). Here, the partial correlation between the subset of the variables listed in response that are not listed in include is calculated from the partial correlation matrix for the subset, adjusting for those variables in include. This is useful for manually deciding which of the variables not in include should next be added to it.
Usage rcontrib(obj, ...)
Arguments obj A data.frame containing the columns of variables from which the correlation measure is to be calculated. ... allows passing of arguments to other functions
Details rcontrib is the generic function for the rcontrib method. Use methods("rcontrib") to get all the methods for the rcontrib generic. rcontrib.data.frame is a method for a data.frame. rcontrib.matrix is a method for a matrix.
Value A numeric giving the correlation measures.
Author(s) <NAME>
References <NAME>. and <NAME> (2007) Dimension reduction via principal variables. Computational Statistics and Data Analysis, 52, 550–565.
See Also PVA, intervalPVA
rcontrib.data.frame Computes a measure of how correlated each variable in a set is with the other variables, conditional on a nominated subset of them
Description A measure of how correlated a variable is with those in a set is given by the square root of the sum of squares of the correlation coefficients between the variable and the other variables in the set (Cumming and Wooff, 2007). Here, the partial correlation between the subset of the variables listed in response that are not listed in include is calculated from the partial correlation matrix for the subset, adjusting for those variables in include. This is useful for manually deciding which of the variables not in include should next be added to it.
Usage
## S3 method for class 'data.frame'
rcontrib(obj, responses, include = NULL, ...)
Arguments obj A data.frame containing the columns of variables from which the correlation measure is to be calculated. responses A character giving the names of the columns in data from which the correlation measure is to be calculated. include A character giving the names of the columns in data for the variables for which other variables are to be adjusted. ... allows passing of arguments to other functions.
Value A numeric giving the correlation measures.
Author(s) <NAME>
References Cumming, <NAME>. and <NAME> (2007) Dimension reduction via principal variables. Computational Statistics and Data Analysis, 52, 550–565.
See Also rcontrib, rcontrib.matrix, PVA, intervalPVA.data.frame
Examples
data(exampleData)
longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1)
longi.dat <- within(longi.dat, {
  Max.Height <- pmax(Max.Dist.Above.Horizon.Line.SV1, Max.Dist.Above.Horizon.Line.SV2)
  Density <- PSA/Max.Height
  PSA.SV = (PSA.SV1 + PSA.SV2) / 2
  Image.Biomass = PSA.SV * (PSA.TV^0.5)
  Centre.Mass <- (Center.Of.Mass.Y.SV1 + Center.Of.Mass.Y.SV2) / 2
  Compactness.SV = (Compactness.SV1 + Compactness.SV2) / 2
})
responses <- c("PSA","PSA.SV","PSA.TV", "Image.Biomass", "Max.Height","Centre.Mass",
               "Density", "Compactness.TV", "Compactness.SV")
h <- rcontrib(longi.dat, responses, include = "PSA")
rcontrib.matrix Computes a measure of how correlated each variable in a set is with the other variables, conditional on a nominated subset of them
Description A measure of how correlated a variable is with those in a set is given by the square root of the sum of squares of the correlation coefficients between the variable and the other variables in the set (Cumming and Wooff, 2007). Here, the partial correlation between the subset of the variables listed in response that are not listed in include is calculated from the partial correlation matrix for the subset, adjusting for those variables in include. This is useful for manually deciding which of the variables not in include should next be added to it.
Usage
## S3 method for class 'matrix'
rcontrib(obj, responses, include = NULL, ...)
Arguments obj A matrix containing the correlations of the variables from which the correlation measure is to be calculated. responses A character giving the names of the columns in data from which the correlation measure is to be calculated. include A character giving the names of the columns in data for the variables for which other variables are to be adjusted. ... allows passing of arguments to other functions.
Value A numeric giving the correlation measures.
Author(s) <NAME>
References <NAME>. and <NAME> (2007) Dimension reduction via principal variables. Computational Statistics and Data Analysis, 52, 550–565.
See Also rcontrib, rcontrib.data.frame, PVA, intervalPVA.data.frame
Examples
data(exampleData)
longi.dat <- prepImageData(data=raw.dat, smarthouse.lev=1)
longi.dat <- within(longi.dat, {
  Max.Height <- pmax(Max.Dist.Above.Horizon.Line.SV1, Max.Dist.Above.Horizon.Line.SV2)
  Density <- PSA/Max.Height
  PSA.SV = (PSA.SV1 + PSA.SV2) / 2
  Image.Biomass = PSA.SV * (PSA.TV^0.5)
  Centre.Mass <- (Center.Of.Mass.Y.SV1 + Center.Of.Mass.Y.SV2) / 2
  Compactness.SV = (Compactness.SV1 + Compactness.SV2) / 2
})
responses <- c("PSA","PSA.SV","PSA.TV", "Image.Biomass", "Max.Height","Centre.Mass",
               "Density", "Compactness.TV", "Compactness.SV")
R <- Hmisc::rcorr(as.matrix(longi.dat[responses]))$r
h <- rcontrib(R, responses, include = "PSA")
RicePrepped.dat Prepped data from an experiment to investigate a rice germplasm panel.
Description The data is the full set of Lanes and Positions from an experiment in a Smarthouse at the Plant Accelerator in Adelaide. It is used in the growthPheno-package as an executable example to illustrate the use of growthPheno. The experiment and data collection are described in Al-Tamimi et al. (2016) and the data is derived from the data.frame in the file 00-raw.0254.dat.rda that is available from Al-Tamimi et al. (2017); half of the unprepped data is in RiceRaw.dat.
Usage data(RicePrepped.dat)
Format A data.frame containing 14784 observations on 32 variables. The names of the columns in the data.frame are:
Column Name Class Description
1 Smarthouse factor the Smarthouse in which a cart occurs.
2 Snapshot.ID.Tag character a unique identifier for each cart in the experiment.
3 xDAP numeric the numbers of days after planting on which the current data was observed.
4 DAST factor the numbers of days after the salting treatment on which the current data was observed.
5 xDAST numeric the numbers of days after the salting treatment on which the current data was observed.
6 cDAST numeric a centered numeric covariate for DAST.
7 DAST.diffs numeric the number of days between this and the previous observations (all one for this experiment).
8 Lane factor the Lane in the 24 Lane x 24 Positions grid.
9 Position factor the Position in the 24 Lane x 24 Positions grid.
10 cPosn numeric a centered numeric covariate for Positions.
11 cMainPosn numeric a centered numeric covariate for Main plots.
12 Zone factor the Zone of 4 Lanes to which the current cart belonged.
13 cZone numeric a centered numeric covariate for Zone.
14 SHZone factor the Zone numbered across the two Smarthouses.
15 ZLane factor the number of the Lane within a Zone.
16 ZMainunit factor the number of the Main plot within a Zone.
17 Subunit factor the number of a Cart within a Main plot.
18 Reps numeric the replicate number of each Genotype-Salinity combination.
19 Genotype factor the number assigned to the 298 Genotypes in the experiment.
20 Salinity factor the Salinity treatment (Control, Salt) allocated to a Cart.
21 PSA numeric the Projected shoot area (kpixels).
22 PSA.AGR numeric the Absolute Growth Rate for the Projected shoot area (kpixels/day).
23 PSA.RGR numeric the Relative Growth Rate for the Projected shoot area (per day).
24 Tr numeric the amount of water (g) transpired by a plant.
25 TrR numeric the rate of water transpiration (g/day) for a plant.
26 PSA.TUE numeric the Transpiration Use Efficiency for PSA (kpixels / day) for a plant.
27 sPSA numeric the smoothed Projected shoot area (kpixels).
28 sPSA.AGR numeric the smoothed Absolute Growth Rate for the Projected shoot area (kpixels/day).
29 sPSA.RGR numeric the smoothed Relative Growth Rate for the Projected shoot area (per day).
30 sTr numeric the smoothed amount of water (g) transpired by a plant.
31 sTrR numeric the smoothed rate of water transpiration (g/day) for a plant.
32 sPSA.TUE numeric the smoothed Transpiration Use Efficiency for PSA (kpixels / day) for a plant.
Source <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Negrao S. (2017) Data from: Salinity tolerance loci revealed in rice using high-throughput non-invasive phenotyping. Retrieved from: doi:10.5061/dryad.3118j.
References <NAME>, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., Tester, M. and <NAME>. (2016) New salinity tolerance loci revealed in rice using high-throughput non-invasive phenotyping. Nature Communications, 7, 13342. Retrieved from doi:10.1038/ncomms13342.
RiceRaw.dat Data for an experiment to investigate a rice germplasm panel
Description The data is half (the first 12 of 24 Lanes) of that from an experiment in a Smarthouse at the Plant Accelerator in Adelaide. It is used in the growthPheno-package as an executable example to illustrate the use of growthPheno. The experiment and data collection are described in Al-Tamimi et al. (2016) and the data is derived from the data.frame in the file 00-raw.0255.dat.rda that is available from Al-Tamimi et al. (2017).
Usage data(RiceRaw.dat)
Format A data.frame containing 7392 observations on 33 variables.
Source <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Tester M, Negrao S: Data from: Salinity tolerance loci revealed in rice using high-throughput non-invasive phenotyping. Retrieved from: doi:10.5061/dryad.3118j.
References <NAME>, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., Tester, M. and <NAME>. (2016) New salinity tolerance loci revealed in rice using high-throughput non-invasive phenotyping. Nature Communications, 7, 13342. Retrieved from doi:10.1038/ncomms13342.
smooths.frame Description of a smooths.frame object
Description A data.frame of S3-class smooths.frame that stores the smooths of one or more responses for several sets of smoothing parameters. as.smooths.frame is a function that converts a data.frame to an object of this class. is.smooths.frame is the membership function for this class; it tests that an object has class smooths.frame. validSmoothsFrame can be used to test the validity of a smooths.frame.
Value A data.frame that also inherits the S3-class smooths.frame. It contains the results of smoothing a response over time from a set of individuals, the data being arranged in long format both with respect to the times and the smoothing-parameter values used in the smoothing. That is, each response occupies a single column. The smooths.frame must include the columns Type, TunePar, TuneVal, Tuning (the combination of TunePar and TuneVal) and Method, and the columns that would be nominated using the probeSmooths arguments individuals, the plots and facet arguments, times, response, response.smoothed, and, if requested, the AGR and the RGR of the response and response.smoothed. The names of the growth rates should be formed from response and response.smoothed by adding .AGR and .RGR to both of them. The function probeSmooths produces a smooths.frame for a response. A smooths.frame has the following attributes: 1. individuals, the character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual; 2. n, the number of unique individuals; 3. times, the character giving the name of the numeric, or factor with numeric levels, that contains the values of the predictor variable plotted on the x-axis; 4. t, the number of unique values in the times; 5. nschemes, the number of unique combinations of the smoothing-parameter values in the smooths.frame.
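As a short sketch of how these attributes can be inspected (assuming standard attr() access and the exampleData set used in the Examples below; this sketch is not itself one of the package Examples):
data(exampleData)
smths <- probeSmooths(data = longi.dat, response = "PSA", times = "DAP",
                      smoothing.args = args4smoothing(smoothing.methods = "direct",
                                                      spline.types = "NCSS",
                                                      df = c(4, 7), lambdas = NULL),
                      which.plots = "none")
attr(smths, "individuals")   # name of the individuals factor
attr(smths, "n")             # number of unique individuals
attr(smths, "times")         # name of the times column
attr(smths, "t")             # number of unique times values
attr(smths, "nschemes")      # number of smoothing-parameter combinations (2 here)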
Author(s) <NAME>
See Also probeSmooths, is.smooths.frame, as.smooths.frame, validSmoothsFrame, args4smoothing
Examples
dat <- read.table(header = TRUE, text = "
Type TunePar TuneVal Tuning Method ID DAP PSA sPSA
NCSS df 4 df-4 direct 045451-C 28 57.446 51.18456
NCSS df 4 df-4 direct 045451-C 30 89.306 87.67343
NCSS df 7 df-7 direct 045451-C 28 57.446 57.01589
NCSS df 7 df-7 direct 045451-C 30 89.306 87.01316
")
dat[1:6] <- lapply(dat[1:6], factor)
dat <- as.smooths.frame(dat, individuals = "ID", times = "DAP")
is.smooths.frame(dat)
validSmoothsFrame(dat)
data(exampleData)
vline <- list(ggplot2::geom_vline(xintercept=29, linetype="longdash", size=1))
smths <- probeSmooths(data = longi.dat, response = "PSA",
                      response.smoothed = "sPSA", times = "DAP",
                      smoothing.args = args4smoothing(smoothing.methods = "direct",
                                                      spline.types = "NCSS"),
                      profile.plot.args = args4profile_plot(plots.by = NULL,
                                                            facet.x = "Tuning",
                                                            include.raw = "no",
                                                            ggplotFuncs = vline))
is.smooths.frame(smths)
validSmoothsFrame(smths)
smoothSpline Fit a spline to smooth the relationship between a response and an x in a data.frame, optionally computing growth rates using derivatives.
Description Uses smooth.spline to fit a natural cubic smoothing spline or JOPS to fit a P-spline to all the values of response stored in data. The amount of smoothing can be controlled by tuning parameters, these being related to the penalty. For a natural cubic smoothing spline, these are df or lambda and, for a P-spline, it is lambda. For a P-spline, npspline.segments also influences the smoothness of the fit. The smoothing.method provides for direct and logarithmic smoothing. The method of Huang (2001) for correcting the fitted spline for estimation bias at the end-points will be applied when fitting using a natural cubic smoothing spline if correctBoundaries is TRUE. The derivatives of the fitted spline can also be obtained, and the Absolute and Relative Growth Rates (AGR and RGR) computed using them, provided correctBoundaries is FALSE. Otherwise, growth rates can be obtained by difference using byIndv4Times_GRsDiff. The handling of missing values in the observations is controlled via na.x.action and na.y.action. If there are not at least four distinct, nonmissing x-values, a warning is issued and all smoothed values and derivatives are set to NA. The function probeSmooths can be used to investigate the effect of the smoothing parameters (smoothing.method and df or lambda) on the smooth that results.
Usage
smoothSpline(data, response, response.smoothed = NULL, x,
             smoothing.method = "direct", spline.type = "NCSS",
             df = NULL, lambda = NULL, npspline.segments = NULL,
             correctBoundaries = FALSE,
             rates = NULL, suffices.rates = NULL, sep.rates = ".",
             extra.derivs = NULL, suffices.extra.derivs = NULL,
             na.x.action = "exclude", na.y.action = "trimx", ...)
Arguments data A data.frame containing the column to be smoothed. response A character giving the name of the column in data that is to be smoothed. response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response. If response.smoothed is NULL, then response.smoothed is set to the response to which is added the prefix s. x A character giving the name of the column in data that contains the values of the predictor variable. smoothing.method A character giving the smoothing method to use.
The two possibilities are (i) "direct", for directly smoothing the observed response, and (ii) "logarithmic", for smoothing the log-transformed response and then back-transforming by taking the exponential of the fitted values. spline.type A character giving the type of spline to use. Currently, the possibilities are (i) "NCSS", for natural cubic smoothing splines, and (ii) "PS", for P-splines. df A numeric specifying, for natural cubic smoothing splines (NCSS), the desired equivalent number of degrees of freedom of the smooth (trace of the smoother matrix). Lower values result in more smoothing. If df = NULL, the amount of smoothing can be controlled by setting lambda. If both df and lambda are NULL, smoothing is controlled by the default arguments for smooth.spline, and any that you supply via the ellipsis (. . . ) argument. lambda A numeric specifying the positive penalty to apply. The amount of smoothing decreases as lambda decreases. npspline.segments A numeric specifying, for P-splines (PS), the number of equally spaced segments between min(x) and max(x), excluding missing values, to use in constructing the B-spline basis for the spline fitting. If npspline.segments is NULL, npspline.segments is set to the maximum of 10 and ceiling((nrow(data)-1)/2) i.e. there will be at least 10 segments and, for more than 22 x values, there will be half as many segments as there are x values. The amount of smoothing decreases as npspline.segments increases. correctBoundaries A logical indicating whether the fitted spline values are to have the method of Huang (2001) applied to them to correct for estimation bias at the end-points. Note that spline.type must be NCSS and lambda and extra.derivs must be NULL for correctBoundaries to be set to TRUE. rates A character giving the growth rates that are to be calculated using derivatives. It should be a combination of one or more of "AGR", "PGR" and "RGR". If NULL, then growth rates are not computed. suffices.rates A character giving the characters to be appended to the names of the responses to provide the names of the columns containing the calculated growth rates. The order of the suffices in suffices.rates should correspond to the order of the elements of rates. If NULL, the values of rates are used. sep.rates A character giving the character(s) to be used to separate the suffices.rates value from a response value in constructing the name for a new rate. For no separator, set to "". extra.derivs A numeric specifying one or more orders of derivatives that are required, in addition to any required for calculating the growth rates. When rates.method is derivatives, these can be derivatives other than the first. Otherwise, any derivatives can be specified. suffices.extra.derivs A character giving the characters to be appended to response.smoothed to construct the names of the derivatives. If NULL and the derivatives are to be retained, then .dv followed by the order of the derivative is appended to response.smoothed. na.x.action A character string that specifies the action to be taken when values of x are NA. The possible values are fail, exclude or omit. For exclude and omit, predictions and derivatives will only be obtained for nonmissing values of x. The difference between these two codes is that for exclude the returned data.frame will have as many rows as data, the missing values having been incorporated. na.y.action A character string that specifies the action to be taken when values of y, or the response, are NA.
The possible values are fail, exclude, omit, allx, trimx, ltrimx or rtrimx. For all options, except fail, missing values in y will be removed before smoothing. For exclude and omit, predictions and derivatives will be obtained only for nonmissing values of x that do not have missing y values. Again, the difference between these two is that, only for exclude will the missing values be incorporated into the returned data.frame. For allx, predictions and derivatives will be obtained for all nonmissing x. For trimx, they will be obtained for all nonmissing x between the first and last nonmissing y values that have been ordered for x; for ltrimx and rtrimx either the lower or upper missing y values, respectively, are trimmed. ... allows for arguments to be passed to smooth.spline.
Value A list with two components named predictions and fit.spline. The predictions component is a data.frame containing x and the fitted smooth. The names of the columns will be the value of x and the value of response.smoothed. The number of rows in the data.frame will be equal to the number of pairs that have neither a missing x nor a missing response, and the order of x will be the same as the order in data. If extra.derivs is not NULL, columns containing the values of the derivative(s) will be added to the data.frame; the name of each of these columns will be the value of response.smoothed with .dvf appended, where f is the order of the derivative, or the value of response.smoothed with the corresponding element of suffices.extra.derivs appended. If the RGR is included in rates, the RGR is calculated as the ratio of the value of the first derivative of the fitted spline and the fitted value for the spline. The fit.spline component is a list with components x: the distinct x values in increasing order; y: the fitted values, with boundary values possibly corrected, and corresponding to x; lev: leverages, the diagonal values of the smoother matrix (NCSS only); lambda: the value of lambda (corresponding to spar for NCSS - see smooth.spline); df: the effective degrees of freedom; npspline.segments: the number of equally spaced segments used for smoothing method set to PS; uncorrected.fit: the object returned by smooth.spline for smoothing method set to NCSS or by JOPS::psNormal for PS.
Author(s) <NAME>
References <NAME> and <NAME>. (2021) Practical smoothing: the joys of P-splines. Cambridge University Press, Cambridge. <NAME>. (2001) Boundary corrected cubic smoothing splines. Journal of Statistical Computation and Simulation, 70, 107-121.
See Also byIndv4Times_SplinesGRs, probeSmoothing, byIndv4Times_GRsDiff, smooth.spline, predict.smooth.spline, JOPS.
Examples
data(exampleData)
fit <- smoothSpline(longi.dat, response="PSA", response.smoothed = "sPSA",
                    x="xDAP", df = 4, rates = c("AGR", "RGR"))
fit <- smoothSpline(longi.dat, response="PSA", response.smoothed = "sPSA",
                    x="xDAP", df = 4, rates = "AGR", suffices.rates = "AGRdv",
                    extra.derivs = 2, suffices.extra.derivs = "Acc")
fit <- smoothSpline(longi.dat, response="PSA", response.smoothed = "sPSA",
                    x="xDAP", spline.type = "PS", lambda = 0.1, npspline.segments = 10,
                    rates = "AGR", suffices.rates = "AGRdv",
                    extra.derivs = 2, suffices.extra.derivs = "Acc")
fit <- smoothSpline(longi.dat, response="PSA", response.smoothed = "sPSA",
                    x="xDAP", df = 4, rates = "AGR", suffices.rates = "AGRdv")
splitContGRdiff Adds, to a data.frame, the growth rates for individuals calculated continuously over time by differencing response values.
Description Uses AGRdiff, PGR and RGRdiff to calculate growth rates continuously over time for the response by differencing pairs of response values and stores the results in data. The subsets are those values with the same combinations of the levels of the factors listed in individuals. Note: this function is soft deprecated and may be removed in future versions. Use byIndv4Times_GRsDiff.
Usage
splitContGRdiff(data, responses, individuals = "Snapshot.ID.Tag", INDICES = NULL,
                which.rates = c("AGR","PGR","RGR"), suffices.rates=NULL,
                times.factor = "Days", avail.times.diffs = FALSE, ntimes2span = 2)
Arguments data A data.frame containing the columns for which growth rates are to be calculated. responses A character giving the names of the columns in data for which growth rates are to be calculated. individuals A character giving the name(s) of the factor(s) that define the subsets of response that correspond to the response values for an individual (e.g. plant, pot, cart, plot or unit) for which growth rates are to be calculated continuously. If the columns corresponding to individuals are not factor(s) then they will be coerced to factor(s). The subsets are formed using split. INDICES A pseudonym for individuals. which.rates A character giving the growth rates that are to be calculated. It should be a combination of one or more of "AGR", "PGR" and "RGR". times.factor A character giving the name of the column in data containing the factor for times at which the data was collected. Its levels will be used in calculating growth rates and should be numeric values stored as characters. avail.times.diffs A logical indicating whether there is an appropriate column of times differences that can be used as the denominator in computing the growth rates. If TRUE, it will be assumed that the name of the column is the value of times with .diffs appended. If FALSE, a column, whose column name will be the value of times with .diffs appended, will be formed and saved in the result, overwriting any existing columns with the constructed name in data. It will be calculated using the values of times in data. ntimes2span A numeric giving the number of values in times to span in calculating growth rates by differencing. Each growth rate is calculated as the difference in the values of one of the responses for pairs of times values that are spanned by ntimes2span times values divided by the difference between this pair of times values. For ntimes2span set to 2, a growth rate is the difference between consecutive pairs of values of one of the responses divided by the difference between consecutive pairs of times values. suffices.rates A character giving the characters to be appended to the names of the responses to provide the names of the columns containing the calculated growth rates. The order of the suffices in suffices.rates should correspond to the order of the elements of which.rates. If NULL, the values of which.rates are used.
Value A data.frame containing data to which has been added (i) a column for the differences between the times, if it is not already in data, and (ii) columns with growth rates. The name of the column for times differences will be the times with ".diffs" appended. The name for each of the growth-rate columns will be either the value of response with one of ".AGR", ".PGR" or ".RGR", or the corresponding value from suffices.rates appended.
Each growth rate will be positioned at observation ceiling(ntimes2span + 1) / 2 relative to the two times from which the growth rate is calculated. Author(s) <NAME> See Also fitSpline, splitSplines Examples data(exampleData) longi.dat <- splitContGRdiff(data = longi.dat, response="sPSA", times.factor = "DAP", individuals = "Snapshot.ID.Tag", which.rates=c("AGR", "RGR"), avail.times.diffs = TRUE) splitSplines Adds the fits, and optionally growth rates computed from deriva- tives, after fitting splines to a response for an individual stored in a data.frame in long format Description Uses fitSpline to fit a spline to a subset of the values of response and stores the fitted values in data. The subsets are those values with the same levels combinations of the factors listed in individuals. The degree of smoothing is controlled by the tuning parameters df and lambda, related to the penalty, and by npspline.segments. The smoothing.method provides for direct and logarithmic smoothing. The derivatives of the fitted spline can also be obtained, and the Absolute and Relative Growth Rates ( AGR and RGR) computed using them, provided correctBoundaries is FALSE. Otherwise, growth rates can be obtained by difference using splitContGRdiff. The handling of missing values in the observations is controlled via na.x.action and na.y.action. If there are not at least four distinct, nonmissing x-values, a warning is issued and all smoothed val- ues and derivatives are set to NA. The function probeSmoothing can be used to investgate the effect the smoothing parameters (smoothing.method, df or lambda) on the smooth that results. Note: this function is soft deprecated and may be removed in future versions. Use byIndv4Times_SplinesGRs. Usage splitSplines(data, response, response.smoothed = NULL, x, individuals = "Snapshot.ID.Tag", INDICES = NULL, smoothing.method = "direct", smoothing.segments = NULL, spline.type = "NCSS", df=NULL, lambda = NULL, npspline.segments = NULL, correctBoundaries = FALSE, deriv = NULL, suffices.deriv = NULL, extra.rate = NULL, sep = ".", na.x.action="exclude", na.y.action = "exclude", ...) Arguments data A data.frame containing the column to be smoothed. response A character giving the name of the column in data that is to be smoothed. response.smoothed A character specifying the name of the column containing the values of the smoothed response variable, corresponding to response. If response.smoothed is NULL, then response.smoothed is set to the response to which .smooth is added. x A character giving the name of the column in data that contains the values of the predictor variable. individuals A character giving the name(s) of the factor(s) that define the subsets of response that correspond to the response values for an individual (e.g. plant, pot, cart, plot or unit) that are to be smoothed separately. If the columns cor- responding to individuals are not factor(s) then they will be coerced to factor(s). The subsets are formed using split. INDICES A pseudonym for individuals. smoothing.method A character giving the smoothing method to use. The two possibilites are (i) "direct", for directly smoothing the observed response, and (ii) "logarithmic", for smoothing the log-transformed response and then back-transforming by taking the exponentional of the fitted values. smoothing.segments A named list, each of whose components is a numeric pair specifying the first and last values of an x-interval whose data is to be subjected as an entity to smoothing using splines. 
The separate smooths will be combined to form a whole smooth for each individual. If smoothing.segments is NULL, the data is not segmented for smoothing. spline.type A character giving the type of spline to use. Currently, the possibilites are (i) "NCSS", for natural cubic smoothing splines, and (ii) "PS", for P-splines. df A numeric specifying, for natural cubic smoothing splines (NCSS), the desired equivalent number of degrees of freedom of the smooth (trace of the smoother matrix). Lower values result in more smoothing. If df = NULL, the amount of smoothing can be controlled by setting lambda. If both df and lambda are NULL, smoothing is controlled by the default arguments for smooth.spline, and any that you supply via the ellipsis (. . . ) argument. lambda A numeric specifying the positive penalty to apply. The amount of smoothing decreases as lamda decreases. npspline.segments A numeric specifying, for P-splines (PS), the number of equally spaced seg- ments between min(x) and max(x), excluding missing values, to use in con- structing the B-spline basis for the spline fitting. If npspline.segments is NULL, npspline.segments is set to the maximum of 10 and ceiling((nrow(data)-1)/2) i.e. there will be at least 10 segments and, for more than 22 x values, there will be half as many segments as there are x values. The amount of smoothing de- creases as npspline.segments increases. When the data has been segmented for smoothing (smoothing.segments is not NULL), an npspline.segments value can be supplied for each segment. correctBoundaries A logical indicating whether the fitted spline values are to have the method of Huang (2001) applied to them to correct for estimation bias at the end-points. Note that spline.type must be NCSS and lambda and deriv must be NULL for correctBoundaries to be set to TRUE. deriv A numeric specifying one or more orders of derivatives that are required. suffices.deriv A character giving the characters to be appended to the names of the deriva- tives. If NULL and the derivative is to be retained then smooth.dv is appended. extra.rate A named character nominating a single growth rate (AGR or RGR) to be com- puted using the first derivative, which one being dependent on the smoothing.method. The name of this element will used as a suffix to be appended to the response when naming the resulting growth rate (see Examples). If unamed, AGR or RGR will be used, as appropriate. Note that, for the smoothing.method set to direct, the first derivative is the AGR and so extra.rate must be set to RGR, which is computed as the AGR / smoothed response. For the smoothing.method set to logarithmic, the first derivative is the RGR and so extra.rate must be set to AGR, which is computed as the RGR * smoothed response. Make sure that deriv includes one so that the first derivative is available for calculating the extra.rate. sep A character giving the separator to use when the levels of individuals are combined. This is needed to avoid using a character that occurs in a factor to delimit levels when the levels of individuals are combined to identify subsets. na.x.action A character string that specifies the action to be taken when values of x are NA. The possible values are fail, exclude or omit. For exclude and omit, predictions and derivatives will only be obtained for nonmissing values of x. The difference between these two codes is that for exclude the returned data.frame will have as many rows as data, the missing values have been incorporated. 
na.y.action A character string that specifies the action to be taken when values of y, or the response, are NA. The possible values are fail, exclude, omit, allx, trimx, ltrimx or rtrimx. For all options, except fail, missing values in y will be removed before smoothing. For exclude and omit, predictions and derivatives will be obtained only for nonmissing values of x that do not have missing y values. Again, the difference between these two is that, only for exclude will the missing values be incorporated into the returned data.frame. For allx, predictions and derivatives will be obtained for all nonmissing x. For trimx, they will be obtained for all nonmissing x between the first and last nonmissing y values that have been ordered for x; for ltrimx and utrimx either the lower or upper missing y values, respectively, are trimmed. ... allows for arguments to be passed to smooth.spline. Value A data.frame containing data to which has been added a column with the fitted smooth, the name of the column being response.smoothed. If deriv is not NULL, columns containing the values of the derivative(s) will be added to data; the name each of these columns will be the value of response.smoothed with .dvf appended, where f is the order of the derivative, or the value of response.smoothed with the corresponding element of suffices.deriv appended. If RGR is not NULL, the RGR is calculated as the ratio of value of the first derivative of the fitted spline and the fitted value for the spline. Any pre-existing smoothed and derivative columns in data will be replaced. The ordering of the data.frame for the x values will be preserved as far as is possible; the main difficulty is with the handling of missing values by the function merge. Thus, if missing values in x are retained, they will occur at the bottom of each subset of individuals and the order will be problematic when there are missing values in y and na.y.action is set to omit. Author(s) <NAME> References Eilers, P.H.C and Marx, B.D. (2021) Practical smoothing: the joys of P-splines. Cambridge Uni- versity Press, Cambridge. Huang, C. (2001) Boundary corrected cubic smoothing splines. Journal of Statistical Computation and Simulation, 70, 107-121. See Also fitSpline, probeSmoothing, splitContGRdiff, smooth.spline, predict.smooth.spline, split Examples data(exampleData) #smoothing with growth rates calculated using derivates longi.dat <- splitSplines(longi.dat, response="PSA", x="xDAP", individuals = "Snapshot.ID.Tag", df = 4, deriv=1, suffices.deriv="AGRdv", extra.rate = c(RGRdv = "RGR")) #Use P-splines longi.dat <- splitSplines(longi.dat, response="PSA", x="xDAP", individuals = "Snapshot.ID.Tag", spline.type = "PS", lambda = 0.1, npspline.segments = 10, deriv=1, suffices.deriv="AGRdv", extra.rate = c(RGRdv = "RGR")) #with segmented smoothing longi.dat <- splitSplines(longi.dat, response="PSA", x="xDAP", individuals = "Snapshot.ID.Tag", smoothing.segments = list(c(28,34), c(35,42)), df = 5) splitValueCalculate Calculates a single value that is a function of an individual’s values for a response Description Splits the values of a response into subsets corresponding individuals and applies a function that calculates a single value from each individual’s observations. It includes the ability to calculate the observation number that is closest to the calculated value of the function and the assocated values of a factor or numeric. Note: this function is soft deprecated and may be removed in future versions. Use byIndv_ValueCalc. 
Usage splitValueCalculate(response, weights=NULL, individuals = "Snapshot.ID.Tag", FUN = "max", which.obs = FALSE, which.values = NULL, data, na.rm=TRUE, sep=".", ...) Arguments response A character giving the name of the column in data from which the values of FUN are to be calculated. weights A character giving the name of the column in data containing the weights to be supplied as w to FUN. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). FUN A character giving the name of the function that calculates the value for each subset. which.obs A logical indicating whether or not to determine the observation number cor- responding to the observed value that is closest to the value of the function, in addition to the value of the function itself. That is, FUN need not return an ob- served value of the reponse, e.g. quantile. In the case of multiple observed response values satisfying this condition, the first is returned. which.values A character giving the name of the factor or numeric whose values are as- sociated with the response values and whose value is to be returned for the observation number whose response value corresponds to the observed value closest to the value of the function. That is, FUN need not return an observed value of the reponse, e.g. quantile. In the case of multiple observed response values satisfying this condition, the value of the which.values vector for the first of these is returned. data A data.frame containing the column from which the function is to be calcu- lated. na.rm A logical indicating whether NA values should be stripped before the calcula- tion proceeds. sep A character giving the separator to use when the levels of individuals are combined. This is needed to avoid using a character that occurs in a factor to delimit levels when the levels of individuals are combined to identify subsets. ... allows for arguments to be passed to FUN. Value A data.frame, with the same number of rows as there are individuals, containing a column for the individuals and a column with the values of the function for the individuals. It is also pos- sible to determine observaton numbers or the values of another column in data for the response values that are closest to the FUN results, using either or both of which.obs and which.values. If which.obs is TRUE, a column with observation numbers is included in the data.frame. If which.values is set to the name of a factor or a numeric,a column containing the levels of that factor or the values of that numeric is included in the data.frame. The name of the column with the values of the function will be formed by concatenating the response and FUN, separated by a full stop. If which.obs is TRUE, the column name for the ober- vations numbers will have .obs added after FUN into the column name for the function values; if which.values is specified, the column name for these values will have a full stop followed by which.values added after FUN into the column name for the function values. 
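The naming conventions just described can be seen in a small sketch (assuming the longi.dat example data used in the Examples below; the names in the comment are what the conventions imply, not reproduced package output):
AGR.max.dat <- splitValueCalculate("sPSA.AGR", FUN = "max", data = longi.dat,
                                   which.values = "DAP", which.obs = TRUE)
names(AGR.max.dat)
## expected to include "Snapshot.ID.Tag", "sPSA.AGR.max", "sPSA.AGR.max.obs"
## and "sPSA.AGR.max.DAP"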
Author(s) <NAME>
See Also intervalValueCalculate, splitContGRdiff, splitSplines
Examples
data(exampleData)
sPSA.max.dat <- splitValueCalculate("sPSA", data = longi.dat)
AGR.max.dat <- splitValueCalculate("sPSA.AGR", FUN="max", data=longi.dat, which.values = "DAP", which.obs = TRUE)
sPSA.dec1.dat <- splitValueCalculate("sPSA", FUN="quantile", data=longi.dat, which.values = "DAP", probs = 0.1)
tomato.dat Longitudinal data for an experiment to investigate tomato response to mycorrhizal fungi and zinc
Description The data is from an experiment in a Smarthouse in the Plant Accelerator and is described by Watts-Williams et al. (2019). The experiment involves 32 plants, each placed in a pot in a cart, and the carts were assigned 8 treatments using a randomized complete-block design. The main response is Projected Shoot Area (PSA for short), being the sum of the plant pixels from three images. The eight treatments were the combinations of 4 Zinc (Zn) levels by two Arbuscular Mycorrhiza Fungi (AMF) levels. Each plant was imaged on 35 different days after planting (DAPs). It is used to explore the analysis of growth dynamics.
Usage data(tomato.dat)
Format A data.frame containing 1120 observations on 16 variables. The names of the columns in the data.frame are:
Column Name Class Description
1 Lane factor the Lane in the 2 Lane x 16 Positions grid.
2 Position factor the Position in the 2 Lane x 16 Positions grid.
3 DAP factor the number of days after planting on which the current data was observed.
4 Snapshot.ID.Tag character a unique identifier for each cart in the experiment.
5 cDAP numeric a centered numeric covariate for DAP.
6 DAP.diffs numeric the number of days between this and the previous observations (all one for this experiment).
7 cPosn numeric a centered numeric covariate for Positions.
8 Block factor the block of the randomized complete-block design to which the current cart belonged.
9 Cart factor the number of the cart within a block.
10 AMF factor the AMF treatment (-AMF, +AMF) assigned to the cart.
11 Zn factor the Zinc level (0, 10, 40, 90) assigned to the cart.
12 Treatments factor the combined factor formed from AMF and Zn with levels: (-,0; -,10; -,40; -,90; +,0; +,10; +,40; +,90).
13 Weight.After numeric the weight of the cart after watering.
14 Water.Amount numeric the weight of the water added to the cart.
15 WU numeric the weight of the water used since the previous watering.
16 PSA numeric the Projected Shoot Area, being the total number of plant pixels in three plant images.
References Watts-Williams SJ, <NAME>, <NAME>, <NAME>, <NAME>, Cavagnaro TR (2019) Using high-throughput phenotyping to explore growth responses to mycorrhizal fungi and zinc in three plant species. Plant Phenomics, 2019, 12.
traitExtractFeatures Extract features, that are single-valued for each individual, from traits observed over time.
Description Extract one or more sets of features from traits observed over time, the result being traits that have a single value for each individual. The sets of features are: 1. single times – the value for each individual for a single time. (uses getTimesSubset) 2. growth rates for a time interval – the average growth rate (AGR and/or RGR) over a time interval for each individual. (uses byIndv4Intvl_GRsDiff or byIndv4Intvl_GRsAvg) 3. water use traits for a time interval – the total water use (WU), the water use rate (WUR) and the water use index (WUI) over a time interval for each individual. (uses byIndv4Intvl_WaterUse) 4.
growth rates for the imaging period overall – the average growth rate (AGR and/or RGR) over the whole imaging period for each individual. (uses byIndv4Intvl_GRsDiff or byIndv4Intvl_GRsAvg) 5. water use traits for the imaging period overall – the total water use (WU), the water use rate (WUR) and the water use index (WUI) for the whole imaging period for each individual. (uses byIndv4Intvl_WaterUse) 6. totals for the imaging period overall – the total over the whole imaging period of a trait for each individual. (uses byIndv4Intvl_ValueCalc) 7. maximum for the imaging period overall – the maximum value over the whole imaging pe- riod, and the time at which it occurred, for each individual. (uses byIndv4Intvl_ValueCalc) The Tomato vignette illustrates the use of traitSmooth and traitExtractFeatures to carry out the SET procedure for the example presented in Brien et al. (2020). Use vignette("Tomato", package = "growthPheno") to access it. Usage traitExtractFeatures(data, individuals = "Snapshot.ID.Tag", times = "DAP", starts.intvl = NULL, stops.intvl = NULL, suffices.intvl = NULL, responses4intvl.rates = NULL, growth.rates = NULL, growth.rates.method = "differences", suffices.growth.rates = NULL, water.use4intvl.traits = NULL, responses4water = NULL, water.trait.types = c("WU", "WUR", "WUI"), suffix.water.rate = "R", suffix.water.index = "I", responses4singletimes = NULL, times.single = NULL, responses4overall.rates = NULL, water.use4overall.water = NULL, responses4overall.water = NULL, responses4overall.totals = NULL, responses4overall.max = NULL, intvl.overall = NULL, suffix.overall = NULL, sep.times.intvl = "to", sep.suffix.times = ".", sep.growth.rates = ".", sep.water.traits = "", mergedata = NULL, ...) Arguments data A data.frame containing the columns specified by individuals, times, the various responses arguments and the water.use argument. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). times A character giving the name of the column in data containing the times at which the data was collected, either as a numeric, factor, or character. It will be used identifying the intervals and, if a factor or character, the values should be numerics stored as characters. starts.intvl A numeric giving the times, in terms of values in times, that are the initial times for a set of intervals for which growth.rates and water.use traits are to be obtained. These times may also be used to obtain values for single-time traits (see responses4singletimes). stops.intvl A numeric giving the times, in terms of values in times, that are the end times for a set of intervals for which growth.rates and water.use traits are to be obtained. These times may also be used to obtain values for single-time traits (see responses4singletimes). suffices.intvl A character giving the suffices for intervals specified using starts.intvl and stops.intvl. If NULL, the suffices are automatically generated using starts.intvl, stops.intvl and sep.times.intvl. responses4intvl.rates A character specifying the names of the columns containing responses for which growth rates are to be obtained for the intervals specified by starts.intvl and stops.intvl. For growth.rates.method set to differences, the growth rates will be computed from the column of the response values whose name is listed in responses4intvl.rates. 
For growth.rates.method set to derivatives, the growth rates will be computed from a column with the growth rates com- puted for each time. The name of the column should be a response listed in responses4intvl.rates to which is appended an element of suffices.growth.rates. growth.rates.method A character specifying the method to use in calculating the growth rates over an interval for the responses specified by responses4intvl.rates. The two possibilities are "differences" and "ratesaverages". For differences, the growth rate for an interval is computed by taking differences between the values of a response for pairs of times. For ratesaverage, the growth rate for an in- terval is computed by taking weighted averages of growth rates for times within the interval. That is, differences operates on the response and ratesaverage operates on the growth rates previously calculated from the response, so that the appropriate one of these must be in data. The ratesaverage option is most appropriate when the growth rates are calculated using the derivatives of a fitted curve. Note that, for responses for which the AGR has been calculated using differences, both methods will give the same result, but the differences option will be more efficient than ratesaverages. growth.rates A character specifying which growth rates are to be obtained for the intervals specified by starts.intvl and stops.intvl. It should contain one of both of "AGR" and "RGR". suffices.growth.rates A character giving the suffices appended to responses4intvl.rates in con- structung the column names for the storing the growth rates specified by growth.rates. If suffices.growth.rates is NULL, then "AGR" and "RGR" will be used. water.use4intvl.traits A character giving the names of the columns in data that contain the water use values that are to be used in computing the water use traits (WU, WUR, WUI) for the intervals specified by starts.intvl and stops.intvl. If there is only one column name, then the WUI will be calculated using this name for all column names in responses4water. If there are several column names in water.use4intvl.traits, then there must be either one or the same number of names in responses4water. If both have same number of names, then the two lists of column names will be processed in parallel, so that a single WUI will be produced for each pair of responses4water and water.use4intvl.traits values. responses4water A character giving the names of the columns in data that are to provide the nu- merator in calculating a WUI for the intervals specified using starts.intvl and stops.intvl. The denominator will be the values in the columns in data whose names are those given by water.use4intvl.traits. If there is only one col- umn name in responses4water, then the WUI will be calculated using this name for all column names in responses4water. If there are several column names in responses4water, then there must be either one or the same number of names in water.use4intvl.traits. If both have same number of names, then the two lists of column names will be processed in parallel, so that a single WUI will be produced for each pair of responses4water and water.use4intvl.traits values. See the Value section for a description of how responses4water is incorpo- rated into the names constructed for the water use traits. water.trait.types A character listing the trait types to compute and return. It should be some combination of WU, WUR and WUI. See Details in byIndv4Intvl_WaterUse for how each is calculated. 
suffix.water.rate A character giving the label to be appended to the value of water.use4intvl.traits to form the name of the WUR. suffix.water.index A character giving the label to be appended to the value of water.use4intvl.traits to form the name of the WUI. responses4singletimes A character specifying the names of the columns containing responses for which a column of the values is to be formed for each response for each of the times values specified in times.single. If times.single is NULL, then the unique values in the combined starts.intvl and stops.intvl will be used. times.single A numeric giving the times of imaging, for each of which, the values of each responses4singletimes will be stored in a column of the resulting data.frame. If NULL, then the unique values in the combined starts.intvl and stops.intvl will be used. responses4overall.rates A character specifying the names of the columns containing responses for which growth rates are to be obtained for the whole imaging period i.e. the interval specified by intvl.overall. The settings of growth.rates.method, growth.rates, suffices.growth.rates, sep.growth.rates, suffix.overall and intvl.overall will be used in producing the growth rates. See responses4intvl.rates for more information about how these arguments are used. water.use4overall.water A logical indicating whether the overall water.traits are to be obtained. The settings of water.trait.types, suffix.water.rate, suffix.water.index, sep.water.traits, suffix.overall and intvl.overall will be used in pro- ducing the overall water traits. See water.use4intvl.traits for more infor- mation about how these arguments are used. responses4overall.water A character giving the names of the columns in data that are to provide the numerator in calculating a WUI for the interval corresponding to the whole imag- ing period. See response.water for further details. See responses4water for more information about how this argument is processed. responses4overall.totals A character specifying the names of the columns containing responses for which a column of the values is to be formed by summing the response for each individual over the whole of the imaging period. responses4overall.max A character specifying the names of the columns containing responses for which columns of the values are to be formed for the maximum of the response for each individual over the whole of the imaging period and the times value at which the maximum occurred. intvl.overall A numeric giving the starts and stop times of imaging. If NULL, the start time will be the minimum of starts.intvl and the stop time will be the maximum of stops.intvl. suffix.overall A character giving the suffix to be appended to the names of traits that apply to the whole imagng period. It applies to overall.growth.rates, water.use4overall.water, responses4overall.water and responses4overall.totals. If NULL, then nothing will be added. sep.times.intvl A character giving the separator to use in combining a starts.intvl with a stops.intvl in constructing the suffix to be appended to an interval trait. If set to NULL and there is only one value for each of starts.intvl and stops.intvl, then no suffix will be added; otherwise sep.times.intvl set to NULL will result in an error. sep.suffix.times A character giving the separator to use in appending a suffix for times to a trait. For no separator, set to "". 
sep.growth.rates A character giving the character(s) to be used to separate the suffices.growth.rates value from the responses4intvl.rates values in constructing the name for a new rate. It is also used for separating responses4water values from the suffix.water.index. For no separator, set to "". sep.water.traits A character giving the character(s) to be used to separate the suffix.rate and suffix.index values from the response value in constructing the name for a new rate/index. The default of "" results in no separator. mergedata A data.frame containing a column with the name given in individuals and for which there is only one row for each value given in this column. In gen- eral, it will be that the number of rows in mergedata is equal to the number of unique values in the column in data labelled by the value of individuals, but this is not mandatory. If mergedata is not NULL, the values extracted by traitExtractFeatures will be merged with it. ... allows passing of arguments to other functions; not used at present. Value A data.frame that contains an individuals column and a column for each extracted trait, in addition to any columns in mergedata. The number of rows in the data.frame will equal the number of unique element of the individuals column in data, except when there are extra values in the individuals column in data. If the latter applies, then the number of rows will equal the number of unique values in the combined individuals columns from mergedata and data. The names of the columns produced by the function are constructed as follows: 1. single times – A name for a single-time trait is formed by appending a full stop to an ele- ment of responses4singletimes, followed by the value of times at which the values were observed. 2. growth rates for a time interval – The name for an interval growth rate is constructed by concatenating the relevant element of responses4intvl.rates, growth.rates and a suffix for the time interval, each separated by a full stop. The interval suffix is formed by joining its starts.intvl and stops.intvl values, separating them by the value of sep.times.intvl. 3. growth rates for the whole imaging period – The name for an interval growth rate is con- structed by concatenating the relevant element of responses4intvl.rates, growth.rates and suffix.overall, each separated by a full stop. 4. water use traits for a time interval – Construction of the names for the three water traits begins with the value of water.use4intvl.traits. The rate (WUR) has either R or the value of suffix.water.rate added to the value of water.use4intvl.traits. Similarly the index (WUI) has either I or the value of suffix.water.index added to it. The WUI also has the element of responses4water used in calculating the WUI prefixed to its name. All three water use traits have a suffix for the interval appended to their names. This suffix is contructed by joining its starts.intvl and stops.intvl, separated by the value of sep.times.intvl. 5. water use traits for the whole imaging period – Construction of the names for the three wa- ter traits begins with the value of water.use4intvl.traits. The rate (WUR) has either R or the value of suffix.water.rate added to the value of water.use4intvl.traits. Similarly the index (WUI) has either I or the value of suffix.water.index added to it. The WUI also has the element of responses4water used in calculating the WUI prefixed to its name. All three water use traits have suffix.overall appended to their names. 6. 
the total for the whole of imaging period – The name for whole-of-imaging total is formed by combining an element ofresponses4overall.totals with suffix.overall, separating them by a full stop. 7. maximum for the whole of imaging period – The name of the column with the max- imum values will be the result of concatenating the responses4overall.max, "max" and suffix.overall, each separated by a full stop. The name of the column with the value of times at which the maximum occurred will be the result of concatenating the responses4overall.max, "max" and the value of times, each separated by a full stop. The data.frame is returned invisibly. Author(s) <NAME> References <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2020). Smoothing and extraction of traits in the growth analysis of noninvasive phenotypic data. Plant Methods, 16, 36. doi:10.1186/s13007020005776. See Also getTimesSubset, byIndv4Intvl_GRsAvg, byIndv4Intvl_GRsDiff, byIndv4Intvl_WaterUse, byIndv_ValueCalc. Examples #Load dat data(tomato.dat) #Define DAP constants DAP.endpts <- c(18,22,27,33,39,43,51) nDAP.endpts <- length(DAP.endpts) DAP.starts <- DAP.endpts[-nDAP.endpts] DAP.stops <- DAP.endpts[-1] DAP.segs <- list(c(DAP.endpts[1]-1, 39), c(40, DAP.endpts[nDAP.endpts])) #Add PSA rates and smooth PSA, also producing sPSA rates tom.dat <- byIndv4Times_SplinesGRs(data = tomato.dat, response = "PSA", response.smoothed = "sPSA", times = "DAP", rates.method = "differences", smoothing.method = "log", spline.type = "PS", lambda = 1, smoothing.segments = DAP.segs) #Smooth WU tom.dat <- byIndv4Times_SplinesGRs(data = tom.dat, response = "WU", response.smoothed = "sWU", rates.method = "none", times = "DAP", smoothing.method = "direct", spline.type = "PS", lambda = 10^(-0.5), smoothing.segments = DAP.segs) #Extract single-valued traits for each individual indv.cols <- c("Snapshot.ID.Tag", "Lane", "Position", "Block", "Cart", "AMF", "Zn") indv.dat <- subset(tom.dat, subset = DAP == DAP.endpts[1], select = indv.cols) indv.dat <- traitExtractFeatures(data = tom.dat, starts.intvl = DAP.starts, stops.intvl = DAP.stops, responses4singletimes = "sPSA", responses4intvl.rates = "sPSA", growth.rates = c("AGR", "RGR"), water.use4intvl.traits = "sWU", responses4water = "sPSA", responses4overall.totals = "sWU", responses4overall.max = "sPSA.AGR", mergedata = indv.dat) traitSmooth Obtain smooths for a trait by fitting spline functions and, having com- pared several smooths, allows one of them to be chosen and returned in a data.frame. Description Takes a response that has been observed for a set of individuals over a number times and carries out one or more of the following steps: Smooth: Produces response.smoothed using splines for a set of smoothing parameter settings and, optionally, computes growth rates either as differences or derivatives. (see smoothing.args below and args4smoothing) This step is bypassed if a data.frame that is also of class smooths.frame is supplied to data. Profile plots: Produces profile plots of response.smoothed and its growth rates that compare the smooths; also, boxplots of the deviations of the observed from smoothed data can be obtained. (see profile.plot.args below and args4profile_plot) Whether these plots are produced is controlled via which.plots or whether profile.plot.args is set to NULL. Median deviations plots: Produces plots of the medians of the deviations of the observed response, and its growth rates, from response.smoothed, and its growth rates. These aid in the assess- ment of the different smooths. 
(see meddevn.plot.args below and args4meddevn_plot) Whether these plots are produced is controlled via which.plots or whether meddevn.plot.args is set to NULL. Deviations boxplots: Produces boxplots of the absolute or relative deviations of the observed response, and its growth rates, from response.smoothed, and its growth rates. These aid in the assessment of the different smooths. (see devnboxes.plot.args below and args4devnboxes_plot) Whether these plots are produced is controlled via which.plots or whether devnboxes.plot.args is set to NULL. Choose a smooth: Extract a single, favoured response.smoothed, and its growth rates, for a cho- sen set of smoothing parameter settings. (see chosen.smooth.args below and args4chosen_smooth) This step will be omitted if chosen.smooth.args is NULL. Chosen smooth plot: Produces profile plots of the chosen smooth and its growth rates. (see chosen.plot.args below and args4chosen_plot) Whether these plots are produced is controlled by whether chosen.plot.args is set to NULL. Each of the ‘args4’ functions has a set of defaults that will be used if the corresponding argument, ending in ‘.args’, is omitted. The defaults have been optimized for traitSmooth. Input to the function can be either a data.frame, that contains data to be smoothed, or a smooths.frame, that contains data that has been smoothed. The function can be run (i) without saving any output, (ii) saving the complete set of smooths in a data.frame that is also of class smooths.frame, (iii) saving a subset of the smooths in a supplied smooths.frame, or (iv) saving a single smooth in a data.frame, which can be merged with a pre-existing data.frame such as the data.frame that contains the unsmoothed data. The Tomato vignette illustrates the use of traitSmooth and traitExtractFeatures to carry out the SET procedure for the example presented in Brien et al. (2020). Use vignette("Tomato", package = "growthPheno") to access it. Usage traitSmooth(data, response, response.smoothed, individuals, times, keep.columns = NULL, get.rates = TRUE, rates.method="differences", ntimes2span = NULL, trait.types = c("response", "AGR", "RGR"), smoothing.args = args4smoothing(), x.title = NULL, y.titles = NULL, which.plots = c("profiles", "medians.deviations"), profile.plot.args = args4profile_plot(), meddevn.plot.args = args4meddevn_plot(), devnboxes.plot.args = args4devnboxes_plot(), chosen.smooth.args = args4chosen_smooth(), chosen.plot.args = args4chosen_plot(), mergedata = NULL, ...) Arguments data A data.frame containing the data or a smooths.frame as is produced by probeSmooths. if data is not a smooths.frame, then smoothing will be performed. If data is a smooths.frame, then the plotting and selection of smooths will be per- formed as specified by smoothing.args , which.plots, chosen.smooth.args and chosen.plot.args. response A character specifying the response variable to be smoothed. response.smoothed A character specifying the name of the column to contain the values of the smoothed response variable, corresponding to response. individuals A character giving the name of the factor that defines the subsets of the data for which each subset corresponds to the response values for an individual (e.g. plant, pot, cart, plot or unit). times A character giving the name of the numeric, or factor with numeric levels, that contains the values of the predictor variable to be supplied to smooth.spline and to be plotted on the x-axis. 
keep.columns A character vector giving the names of columns from data that are to be included in the smooths.frame that will be returned. get.rates A logical or a character specifying which of the response and the response.smoothed are to have growth rates (AGR and/or RGR) computed and stored. If set to TRUE or c("raw", "smoothed"), growth rates will be obtained for both. Setting to only one of raw or smoothed, results in the growth rates for either the response or the response.smoothed being computed. If set to none or FALSE, no growth rates ar computed. Which growth.rates are computed can be changed using the arguments traits.types and the method used for computing them for the response.smooth by rates.method. The growth rates for the response can only be computed by differencing. rates.method A character specifying the method to use in calculating the growth rates for response.smoothed. The two possibilities are "differences" and "derivatives". ntimes2span A numeric giving the number of values in times to span in calculating growth rates by differencing. For ntimes2span set to NULL, if rates.method is set to differences then ntimes2span is set to 2; if rates.method is set to derivatives then ntimes2span is set to 3. Note that when get.rates is includes raw or is TRUE, the growth rates for the unsmoothed response must be calculated by differ- encing, even if the growth rates for the smoothed response are computed using derivatives. When differencing, each growth rate is calculated as the differ- ence in the values of one of the responses for pairs of times values that are spanned by ntimes2span times values divided by the difference between this pair of times values. For ntimes2span set to 2, a growth rate is the difference between consecutive pairs of values of one of the responses divided by the difference between consecutive pairs of times values. trait.types A character giving the trait.types that are to be plotted. If growth rates are included in trait.types, then they will be computed for either the response and/or the response.smoothed, depending on the setting of get.rates. Any growth rates included in trait.types for the response that are available in data, but have not been specified for computation in get.rates, will be re- tained in the returned smooths.frame. If all, the response.smoothed, its AGR and RGR, will be plotted. The response, and its AGR and RGR, will be plotted as the plotting options require it. smoothing.args A list that is most easily generated using args4smoothing, it documenting the options available for smoothing the data. It gives the settings of smoothing.methods, spline.types, df, lambdas, smoothing.segments, npspline.segments, na.x.action, na.y.action, external.smooths, and correctBoundaries, to be used in smooth- ing the response or in selecting a subset of the smooths in data, depending on whether data is a data.frame or a smooths.frame, respectively. If data is a data.frame, then smoothing will be performed. If data is a smooths.frame, no smoothing will be carried out. If smoothing.args is NULL then a smooths.frame will only be used for plotting. Otherwise, the setting of smoothing.args will specifying the smooths that are to be extracted from the smooths.frame, in which case smoothing.args must specify a subset of the smooths in data. x.title Title for the x-axis, used for all plots. If NULL then set to times. y.titles A character giving the titles for the y-axis, one for each the response, the AGE and the RGR. They are used for all plots. 
If NULL then they are set to the response and the response with .AGR and .RGR appended. which.plots A logical indicating which plots of the smooths specified by smoothing.args are to be produced. The options are either none or some combination of profiles, absolute.boxplots, relative.boxplots and medians.deviations. The var- ious profiles plots that can be poduced are described in the introduction to this function. The plot of a chosen smooth is dealt with separately by the argument chosen.plot.args. profile.plot.args A named list that is most easily generated using args4profile_plot, it docu- menting the options available for varying the profile plots. Note that if args4profile_plot is being called from traitSmooth to change some arguments from the default settings, then it is safest to set all of the following arguments in the call: plots.by, facet.x facet.y and include.raw. If this argument is set to NULL, these plots will not be produced. meddevn.plot.args A named list that is most easily generated using args4meddevn_plot, it doc- umenting the options available for varying median deviations plots. Note that if args4meddevn_plot is being called from traitSmooth to change some ar- guments from the default settings, then it is safest to set all of the following arguments in the call: plots.by, plots.group, facet.x and facet.y. If this argument is set to NULL, these plots will not be produced. devnboxes.plot.args A named list that is most easily generated using args4devnboxes_plot, it documenting the options available for varying the boxplots. Note that if args4devnboxes_plot is being called from traitSmooth to change some arguments from the default settings, then it is safest to set all of the following arguments in the call: plots.by, facet.x and facet.y. If this argument is set to NULL, these plots will not be produced. chosen.smooth.args A named list with just one element or NULL for each component. It is most eas- ily generated using args4chosen_smooth with combinations set to single. The call to args4smoothing should give the settings of smoothing.methods, spline.types, df and lambdas for a single smooth that is to be extracted and that is amongst the smooths that have been produced for the settings specified in smoothing.methods. If both df and lambda in chosen.smooth.args are NULL, then, depending on the settings for spline.type and smoothng.method, the value of either df or lambdas that is the median value or the observed value immediatly below the median value will be added to chosen.smooth.args. Otherwise, one of df and lambda should be NULL and the other should be a sin- gle numeric value. If a value in chosen.smooth.args is not amongst those investigated, a value that was investigated will be substituted. chosen.plot.args A named list that is most easily generated using args4chosen_plot, it doc- umenting the options available for varying profile plots. Because this plot in- cludes only a single smooth, the chosen.smooth.args, the smoothing-parameter factors are unnecessary and an error will be given if any are included. Note that if args4chosen_plot is to be called to change from the default settings given in the default traitSmooth call, then it is safest to set all of the following ar- guments in the call: plots.by, facet.x, facet.y and include.raw. If set to NULL, then no chosen-smooth plot will be produced. mergedata A data.frame that is to have the values for the trait.types for the smooth specified by chosen.smooth.args merged with it. 
It must contain columns with the names given in individuals and times, and for which there is only one row for each combination of unique values in these columns. In general, it will be that the number of rows in mergedata is equal to the number of unique combinations of the values in the columns of the chosen.smooth.args whose names are given by individuals and times, but this is not mandatory. If only one smooth has been produced, then it will be merged with data provided mergedata is NULL and data is not a smooths.frame. Othewrwise, a single smooth will be be merged with mergedata. ... allows arguments to be passed to plotProfiles. Details This function is a wrapper function for probeSmooths, plotSmoothsComparison, plotSmoothsComparison and plotDeviationsBoxes. It uses the helper functions args4smoothing, args4profile_plot and args4meddevn_plot to se arguments that control the smoothing and plotting. It takes a response that has been observed for a set of individuals over a number times and produces response.smoothed, using probeSmooths, for a default set of smoothing parameter settings (see args4smoothing for the defaults). The settings can be varied from the defaults by specifying alternate values for the smoothing parameters, the parameters being the type of spline (spline.types), the degrees of freedom (df) or smoothing penalty (lambdas) and smoothing.methods. There are also several other smoothing arguments that can be manipulated to affect the smooth (for details see args4smoothing). The secondary traits of the absolute growth rate (AGR) and relative growth rate (RGR) are calculated from the two primary traits, the response and response.smoothed. Generally, profile plots for the traits (a response, an AGR or an RGR) specified in traits.types are produced if which.plots is profiles; if which.plots specifies one or more deviations plots, then those deviations plots will also be produced, these being based on the unsmoothed data from which the smoothed data has been subtracted. The layout of the plots is controlled via combina- tions of one or more of the smoothing-parameter factors Type, TunePar, TuneVal, Tuning (the combination of TunePar and TuneVal) and Method, as well as other factors associated with the data. The factors that are to be used for the profile plots and deviations boxplots are supplied via the argument profile.plot.args using the helper function args4profile_plot to set plots.by, facet.x, and facet.y; for the plots of the medians of the deviations, the factors are supplied via the argument meddevn.plot.args using the helper function args4meddevn_plot to set plots.by, facet.x, facet.y and plots.group. Here, the basic principle is that the number of levels com- binations of the smoothing-parameter factors included in the set of plots and facets arguments to one of these helper functions must be the same as those covered by the combinations of the val- ues supplied to spline.types, df, lambdas and smoothing.methods and incorporated into the smooths.frame, such as is returned by probeSmooths. This ensures that smooths from different parameter sets are not pooled together in a single plot. It is also possible to include factors that are not smoothing-parameter factors in the plots amd facets arguments. 
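As a minimal sketch of this principle (assuming the package's exampleData, which supplies longi.dat; all settings not shown are left at the defaults of the args4 helper functions, and the call is illustrative rather than a verified recipe):
data(exampleData)
# Two smooths are produced (df = 4 and df = 7 for a single spline type and
# smoothing method), so facetting the profile plots by the smoothing-parameter
# factor Tuning keeps the two parameter settings in separate facets.
smth <- traitSmooth(data = longi.dat,
                    response = "PSA", response.smoothed = "sPSA",
                    individuals = "Snapshot.ID.Tag", times = "DAP",
                    smoothing.args = args4smoothing(spline.types = "NCSS",
                                                    df = c(4, 7), lambdas = NULL,
                                                    smoothing.methods = "direct"),
                    profile.plot.args = args4profile_plot(facet.x = "Tuning"),
                    chosen.smooth.args = NULL)
# With chosen.smooth.args = NULL and two smooths, smth is a smooths.frame.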
The following profiles plots can be produced using args4profile_plot: (i) separate plots of the smoothed traits for each combination of the smoothing parameters (include Type, Tuning and Method in plots.by); (ii) as for (i), with the corresponding plot for the unsmoothed trait pre- ceeding the plots for the smoothed trait (also set include.raw to alone); (iii) profiles plots that compare a smoothed trait for all combinations of the values of the smoothing parameters, arranging the plots side-by-side or one above the other (include Type, Tuning and Method in facet.x and/or facet.y - to include the unsmoothed trait set include.raw to one of facet.x or facet.y; (iv) as for (iii), except that separate plots are produced for each combination of the levels of the factors in plot.by and each plot compares the smoothed traits for the smoothing-parameter factors included in facet.x and/or facet.y (set both plots.by and one or more of facet.x and facet.y). Deviation plots that can be produced are the absolute and relative deviations boxplots and plots of medians deviations (see which.plots). By default, the single smooth for an arbitrarily chosen combination of the smoothing parameters is returned by the function. The smooth for a single combination other than default combina- tion can be nominated for return using the chosen.smooth.args argument. This combination must involve only the supplied values of the smoothing parameters. The values for response, the response.smoothed and their AGRs and RGRs are are added to data, after any pre-existing columns of these have been removed from data. Profile plots of the three smoothed traits are produced using plotProfiles. However, if chosen.smooth.args is NULL, all of the smooths will be returned in a smooths.frame, and plots for the single combination of the smoothing parameters will not be produced. Value A smooths.frame or a data.frame that contains the unsmoothed and smoothed data in long for- mat. That is, all the values for either an unsmoothed or a smoothed trait are in a single column. A smooths.frame will be returned when (i) chosen.smooth.args is NULL and there is more than one smooth specified by the smoothing parameter arguments, or (ii) chosen.smooth.args is not NULL but mergedata is NULL. It will contain the smooths for a trait for the different com- binatons of the smoothing parameters, the values for the different smooths being placed in rows one below the other. The columns that are included in the smooths.frame are Type, TunePar, TuneVal, Tuning and Method, as well as those specified by individuals, times, response, and response.smoothed, and any included in the keep.columns, plots and facet arguments when the smooths were produced. The AGR or RGR for the response and response.smoothed, if ob- tained, will also be included. A smooths.frame has the attributes described in smooths.frame. A data.frame will be returned when (i) chosen.smooth.args and mergedata are not NULL or (ii) chosen.smooth.args is NULL, data is not a smooths.frame and there is only one smooth specified by the smoothing parameter arguments. In either case, if mergedata is not NULL, the chosen smooth or the single smooth will be merged with the data.frame specified by mergedata. 
When there is a single smooth and both mergedata and chosen.smooth.args are NULL, the data.frame will include the columns individuals, times, response, and response.smoothed, and any included in the keep.columns, plots and facet arguments, as well as any growth rates calculated as a result of get.rates and trait.type. The smooths.frame/data.frame is returned invisibly. Author(s) <NAME> References <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2020). Smoothing and extraction of traits in the growth analysis of noninvasive phenotypic data. Plant Methods, 16, 36. doi:10.1186/s13007020005776. See Also args4smoothing, args4meddevn_plot, args4profile_plot, args4chosen_smooth, args4chosen_plot, probeSmooths plotSmoothsComparison and plotSmoothsMedianDevns, ggplot. Examples data(exampleData) longi.dat <- longi.dat[1:140,] #reduce to a smaller data set vline <- list(ggplot2::geom_vline(xintercept=29, linetype="longdash", linewidth=1)) yfacets <- c("Smarthouse", "Treatment.1") smth.dat <- traitSmooth(data = longi.dat, response = "PSA", response.smoothed = "sPSA", individuals = "Snapshot.ID.Tag",times = "DAP", keep.columns = yfacets, smoothing.args = args4smoothing(df = c(5,7), profile.plot.args = args4profile_plot(facet.y = yfacets, ggplotFuncs = vline), chosen.plot.args = args4chosen_plot(facet.y = yfacets, ggplotFuncs = vline)) twoLevelOpcreate Creates a data.frame formed by applying, for each response, a binary operation to the paired values of two different treatments Description Takes pairs of values for a set of responses indexed by a two-level treatment.factor and calcu- lates, for each of pair, the result of applying a binary operation to their values for the two levels of the treatment.factor. The level of the treatment.factor designated the control will be on the right of the binary operator and the value for the other level will be on the left. Usage twoLevelOpcreate(data, responses, treatment.factor = "Treatment.1", suffices.treatment = c("Cont","Salt"), control = 1, columns.suffixed = NULL, operations = "/", suffices.results="OST", columns.retained = c("Snapshot.ID.Tag","Smarthouse","Lane", "Zone","cZone","SHZone","ZLane", "ZMainunit","cMainPosn", "Genotype.ID"), by = c("Smarthouse","Zone","ZMainunit")) Arguments data A data.frame containing the columns specified by treatment.factor, columns.retained and responses. responses A character giving the names of the columns in data that contain the responses to which the binary operations are to be applied. treatment.factor A factor with two levels corresponding to what is to be designated the control and treated observations . suffices.treatment A character giving the characters to be appended to the names of the responses and columns.suffixed in constructing the names of the columns containing the responses and columns.suffixed for each level of the treatment.factor. The order of the suffices in suffices.treatment should correspond to the or- der of the levels of treatment.factor. control A numeric, equal to either 1 or 2, that specifies the level of treatment.factor that is the control treatment. The value for the control level will be on the right of the binary operator. columns.suffixed A character giving the names of the columns.retained in data that are to be have the values for each treatment retained and whose names are to be suffixed using suffices.treatment. Generally, this is done when columns.retained has different values for different levels of the treatment.factor. 
operations A character giving the binary operations to perform on the values for the two different levels of the treatment.factor. It should be either of length one, in which case the same operation will be performed for all columns specified in response.GR, or equal in length to response.GR so its elements correspond to those of response.GR. suffices.results A character giving the characters to be appended to the names of the responses in constructing the names of the columns containing the results of applying the operations. The order of the suffices in suffices.results should correspond to the order of the operators in operations. columns.retained A character giving the names of the columns in data that are to be retained in the data.frame being created. These are usually factors that index the results of applying the operations and that might be used subsequently. by A character giving the names of the columns in data whose combinations will be unique for the observation for each treatment. It is used by merge when com- bining the values of the two treatments in separate columns in the data.frame to be returned. Value A data.frame containing the following columns and the values of the : 1. those from data nominated in columns.retained; 2. those containing the treated values of the columns whose names are specified in responses; the treated values are those having the other level of treatment.factor to that specified by control; 3. those containing the control values of the columns whose names are specified in responses; the control values are those having the level of treatment.factor specified by control; 4. those containing the values calculated using the binary operations; the names of these columns will be constructed from responses by appending suffices.results to them. Author(s) <NAME> Examples data(exampleData) responses <- c("sPSA.AGR","sPSA.RGR") cols.retained <- c("Snapshot.ID.Tag","Smarthouse","Lane","Position", "DAP","Snapshot.Time.Stamp", "Hour", "xDAP", "Zone","cZone","SHZone","ZLane","ZMainunit", "cMainPosn", "Genotype.ID") longi.SIIT.dat <- twoLevelOpcreate(data = longi.dat, responses = responses, suffices.treatment=c("C","S"), operations = c("-", "/"), suffices.results = c("diff", "SIIT"), columns.retained = cols.retained, by = c("Smarthouse","Zone","ZMainunit","DAP")) longi.SIIT.dat <- with(longi.SIIT.dat, longi.SIIT.dat[order(Smarthouse,Zone,ZMainunit,DAP),]) validSmoothsFrame Checks that an object is a valid smooths.frame. Description Checks that an object is a smooths.frame of S3-class data.frame that contains the columns Type, TunePar, TuneVal, Tuning, Method, as well as the columns specified by the atttributes of the object, namely individuals and times. Usage validSmoothsFrame(object) Arguments object a smooths.frame. Value TRUE or a character describing why the object is not a valid smooths.frame. Author(s) <NAME> See Also is.smooths.frame, as.smooths.frame Examples dat <- read.table(header = TRUE, text = " Type TunePar TuneVal Tuning Method ID DAP PSA sPSA NCSS df 4 df-4 direct 045451-C 28 57.446 51.18456 NCSS df 4 df-4 direct 045451-C 30 89.306 87.67343 NCSS df 7 df-7 direct 045451-C 28 57.446 57.01589 NCSS df 7 df-7 direct 045451-C 30 89.306 87.01316 ") dat[1:7] <- lapply(dat[1:6], factor) dat <- as.smooths.frame(dat, individuals = "ID", times = "DAP") is.smooths.frame(dat) validSmoothsFrame(dat) WUI Calculates the Water Use Index (WUI) Description Calculates the Water Use Index, returning NA if the water use is zero. 
Usage WUI(response, water) Arguments response A numeric giving the value of the response achieved. water A numeric giving the amount of water used. Value A numeric containing the response divided by the water, unless water is zero in which case NA is returned. Author(s) <NAME> Examples data(exampleData) PSA.WUE <- with(longi.dat, WUI(PSA.AGR, WU))
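As a small numeric illustration of the rule above (the values are made up):
WUI(12.5, 5)   # 2.5
WUI(8, 0)      # NA, because no water was used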
ImportExport
cran
R
Package ‘ImportExport’ October 12, 2022
Type Package
Title Import and Export Data
Version 1.3
Date 2020-09-18
Author <NAME>, <NAME>, <NAME>.
Maintainer <NAME> <<EMAIL>>
Description Import and export data from the most common statistical formats by using R functions that guarantee the least loss of the data information, giving special attention to the date variables and the labelled ones.
Depends gdata, Hmisc, chron, RODBC
Imports readxl, writexl, haven, utils
Suggests shiny, shinyBS, shinythemes, compareGroups, foreign
License GPL (>= 2)
NeedsCompilation no
Repository CRAN
Date/Publication 2020-09-21 13:00:03 UTC
R topics documented: ImportExport-packag... 2 access_expor... 3 access_impor... 4 excel_expor... 5 format_correcto... 6 ImportExportAp... 8 spss_expor... 8 spss_impor... 10 table_impor... 11 var_vie... 12
ImportExport-package Import and Export Data
Description Import and export data from the most common statistical formats by using R functions that guarantee the least loss of the data information, giving special attention to the date variables and the labelled ones. The package also includes a useful shiny app, called by ImportExportApp, which uses all the content of the package to import and export databases in a rather easy way.
Details The DESCRIPTION file:
Package: ImportExport
Type: Package
Title: Import and Export Data
Version: 1.3
Date: 2020-09-18
Author: <NAME>, <NAME>, <NAME>.
Maintainer: <NAME> <<EMAIL>>
Description: Import and export data from the most common statistical formats by using R functions that guarantee the least loss of the data information, giving special attention to the date variables and the labelled ones.
Depends: gdata, Hmisc, chron, RODBC
Imports: readxl, writexl, haven, utils
Suggests: shiny, shinyBS, shinythemes, compareGroups, foreign
License: GPL (>= 2)
Index of help topics:
ImportExport-package Import and Export Data
ImportExportApp Runs the shiny app
access_export Export multiple R data sets to Microsoft Office Access
access_import Import tables and queries from Microsoft Office Access (.mdb)
excel_export Export multiple R data sets to Excel
format_corrector Identify and correct variable formats
spss_export Export data to SPSS (.sav) by using runsyntx.exe or pspp.exe
spss_import Import data set from SPSS (.sav)
table_import Automatic separator data input
var_view Summarize variable information
Author(s) <NAME>, <NAME>, <NAME>. Maintainer: <NAME> <<EMAIL>>
See Also ImportExportApp
Examples
## Not run:
x <- spss_import("mydata.sav")
## End(Not run)
access_export Export multiple R data sets to Microsoft Office Access
Description Directly connects to (and disconnects from, at the end) the Microsoft Office Access database using the RODBC package and writes one or multiple data sets.
Usage access_export(file,x,tablename=as.character(1:length(x)),uid="",pwd="",...)
Arguments
file The path to the file with .mdb extension.
x Either a data frame or a list containing multiple data frames to be exported.
tablename A character or a character vector containing the names to be given to the tables in which the data frames are stored. If it is a vector, it must follow the same order as the data frames in x. All names must be different from each other.
uid see odbcConnect.
pwd see odbcConnect.
... see odbcConnect, sqlSave.
Details Date variables are exported as integers; they might be converted to character if a character representation in the Access database is wanted.
Value No value is returned.
Note This function connects to and writes into an existing Microsoft Office Access database, but it cannot create a new one.
Examples
## Not run:
# x is a data.frame
file <- "mydata.mdb"
a <- 1:10
b <- rep("b", times = 10)
c <- rep(1:2, each = 5)
x <- data.frame(a, b, c)
access_export(file, x, tablename = "mydata")
# x is a list
y <- list(x, x[2:3])
access_export(file, y, tablename = c("mydata1", "mydata2"))
## End(Not run)
access_import Import tables and queries from Microsoft Office Access (.mdb)
Description Directly connects to (and disconnects from, at the end) the Microsoft Office Access database using the RODBC package and reads one or multiple data sets. It can read both tables and SQL queries depending on the input instructions. It automatically detects date variables that are also stored with a date format in the original data set.
Usage access_import(file,table_names,ntab=length(table_names), SQL_query=rep(F,times=ntab),where_sql=c(),out.format="d-m-yy",uid="",pwd="",...)
Arguments
file The path to the file with .mdb extension.
table_names A single character or a character vector containing either the names of the tables to read or the SQL queries to perform. Each position must contain only one table name or SQL query. The format of the SQL queries must follow the one described in sqlQuery.
where_sql If table_names is a vector, where_sql must contain the positions of the SQL queries within the table_names vector. For example, if the first and the fifth elements of table_names are SQL queries (the other ones being table names), then where_sql should be c(1,5).
out.format A character specifying the format for printing the date variables.
ntab The number of tables to import, equal to the number of table names.
SQL_query An auxiliary vector used by the function.
uid see odbcConnect.
pwd see odbcConnect.
... see odbcConnect, sqlFetch, sqlQuery.
Details By default, the function gives each data set the name specified in table_names, so the data sets built from SQL queries will probably have an inappropriate name. They can easily be renamed using names.
Value A data frame or a list of data frames containing the data requested from the Microsoft Office Access file.
Note The function does not contribute to the detection of date variables; it just processes, with the chron package, the ones that have been automatically detected.
See Also access_export, var_view, sqlFetch, sqlQuery
Examples
## Not run:
x <- access_import(file="mydata.mdb", table_names=c("table1","table2", "Select * From table1 inner join table2 on table.var1=table2.var2","table3"), where_sql=c(3))
## End(Not run)
excel_export Export multiple R data sets to Excel
Description Exports a single data frame or a list of data frames to one or multiple Excel sheets using the function write_xlsx from the writexl package. This function can write multiple data frames (passed as a list) with a single command. It can write both .xls and .xlsx files.
Usage excel_export(x,file,table_names=as.character(1:length(x)),...)
Arguments
x Either a data frame or a list containing multiple data frames to be exported.
file The name of the file we want to create.
table_names A character or a character vector containing the names to be given to the sheets in which the data frames are stored. If it is a vector, it must follow the same order as the data frames in x. All names must be different from each other.
... see write_xlsx.
Value No value is returned.
See Also read_excel
Examples
## Not run:
# x is a data.frame
file <- "mydata.xlsx"
a <- 1:10
b <- rep("b", times = 10)
c <- rep(1:2, each = 5)
x <- data.frame(a, b, c)
excel_export(x, file, table_names = "mydata")
# x is a list
y <- list(x, x[2:3])
excel_export(y, file, table_names = c("mydata1", "mydata2"))
## End(Not run)
format_corrector Identify and correct variable formats
Description The function loops over the variables, comparing the values each variable has with those typical of the usual R formats, in order to correct, for example, missing values or dates stored as character. It also specifies for each variable the most appropriate SPSS format that it should have.
Usage format_corrector(table,identif=NULL,force=FALSE,rate.miss.date=0.5)
Arguments
table The data set we want to correct.
identif The name of the identification variable included in the data frame. It will be used to list the individuals who had any problems during the execution of the function.
force If TRUE, run format_corrector even if the "fixed.formats" attribute is TRUE.
rate.miss.date The maximum rate of missing date fields we want the function to accept. The function details which fields have been lost anyway.
Details If a date variable does not have the chron format, it must satisfy the following, otherwise the function leaves it as a character:
- the date separator must be one of ("-","/",".");
- the hour separator must be ":".
Value A single data frame which results from the function.
Note This function may not be completely optimal, so it might have problems when correcting huge data frames.
See Also spss_export
Examples
require(ImportExport)
a <- c(1,NA,3,5,".")
b <- c("19/11/2006","05/10/2011","09/02/1906","22/01/1956","10/10/2010")
c <- 101:105
x <- data.frame(a,b,c)
sapply(x, class)
x_corr <- format_corrector(x)
sapply(x_corr, class)
ImportExportApp Runs the shiny app
Description Runs a shiny app which uses all the content of the package to import and export databases in a rather easy way.
Usage ImportExportApp(...)
Arguments ... See runApp
Details It requires a few packages to run the app: shiny, shinyBS, shinythemes, compareGroups.
See Also runApp
Examples
## Not run:
ImportExportApp()
## End(Not run)
spss_export Export data to SPSS (.sav) by using runsyntx.exe or pspp.exe
Description Exports the data to a .txt file and the syntax to an SPSS syntax file, and then runs runsyntx.exe (located in the SPSS folder) in order to create the final file with .sav extension containing the data frame we wanted to export. Date variables in the original data frame are also identified when reading the .sav file with SPSS.
Usage spss_export(table,file.dict=NULL,file.save=NULL,var.keep="ALL", file.runsyntax="C:/Archivos de programa/SPSS/runsyntx.exe", file.data=NULL,run.spss=TRUE,dec=".")
Arguments
table A data frame to be exported. If it is a matrix, it will be converted into a data frame.
file.dict An SPSS syntax file containing the variable and value labels.
file.save The name of the .sav file we want to create.
var.keep Names of the variables to save. All variables will be saved by default.
file.runsyntax The path to the file runsyntx.exe or pspp.exe.
file.data The name of the .txt file containing the data. It will be created as a temp file by default.
run.spss If TRUE, it runs SPSS and creates the .sav file; otherwise it shows the syntax on the screen.
dec The string to use for decimal points; it must be a single character.
Details Both runsyntx.exe (from SPSS) and pspp.exe work the same way.
Value No value is returned.
Note If neither SPSS nor PSPP is installed, the function can still return the data in a .txt file and the syntax in an SPSS syntax file (.sps).
See Also spss_import, var_view
Examples
## Not run:
table = mydata
file.dict = NULL
file.save = "C:/xxx.sav"
var.keep = "ALL"
spss_export(table=table, file.dict=file.dict, var.keep=var.keep, file.save=file.save)
## End(Not run)
spss_import Import data set from SPSS (.sav)
Description Reads a labelled data set from SPSS, automatically finding the date variables and keeping the variable and value label information, by using the information obtained with spss_varlist() and the function spss.get from the Hmisc package.
Usage spss_import(file, allow="_",out.format="d-m-yy",use.value.labels=F,...)
Arguments
file The path to the file with .sav extension.
allow A vector which contains the characters that must be allowed in the variable names.
out.format A character specifying the format for printing the date variables.
use.value.labels If TRUE, replace the labelled variables with their value labels.
... See spss.get.
Details In order to provide the maximum functionality, if the main code generates an error, the function tries to read the file with the read_sav function from the haven package, but a warning message appears. The var_view function can be used to summarize the contents of the data frame labels.
Value A data frame or a list containing the data stored in the SPSS file.
Note If the warning message appears and the file has been read using read_sav, the resulting data frame will be different from the expected one (see the haven package to learn more about read_sav).
Author(s) <NAME>, <NAME>, <NAME>
See Also var_view, spss.get, read_sav
Examples
## Not run:
x <- spss_import("mydata.sav")
## End(Not run)
table_import Automatic separator data input
Description A small variation on the original read.table that, most of the time, automatically detects the field separator character. It also includes the option to run the format_corrector function in order to detect, for example, the date variables included in the original data set. If the function does not recognize any separator, it asks for the real one to be specified.
Usage table_import(file,sep=F,format_corrector=F,...)
Arguments
file The path to the file from which the data are to be read.
sep The field separator character, see read.table. If it is not specified, the function tries to detect it automatically.
format_corrector If TRUE, it runs the format_corrector function before returning the data frame.
... More arguments from read.table.
Details The format_corrector function is a complicated function, so it is not recommended to run it unless the data set contains awkward variables like dates.
Value A data frame containing the data stored in the file.
Note
This function was built in order to summarize SPSS labelled data sets imported with spss_import, but it can also work with other labelled data sets.

See Also
spss_import

Examples
require(ImportExport)
a <- 1:10
b <- rep("b", times = 10)
c <- rep(1:2, each = 5)
x <- data.frame(a, b, c)
attr(x$a, "label") <- "descr1"
attr(x$b, "label") <- NULL
attr(x$c, "label") <- "descr3"
attr(x$c, "value.labels") <- list("1" = "Yes", "2" = "No")
var_view(x)
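Taken together, the entries above describe a complete import/export round trip; the sketch below shows how the pieces might be combined. It is only a sketch: the file names ("mydata.csv", "mydata.sav") are hypothetical placeholders, and the pspp.exe path passed to file.runsyntax is an assumption that must be adapted to the local installation.

## Not run:
require(ImportExport)
# Read a delimited text file, letting table_import guess the separator
# and format_corrector clean up dates and missing values.
x <- table_import("mydata.csv", format_corrector = TRUE)
# Inspect variable names, labels and formats before exporting.
var_view(x)
# Export to a .sav file; file.runsyntax must point to a local
# runsyntx.exe or pspp.exe (the path below is only an assumption).
spss_export(x, file.save = "mydata.sav",
            file.runsyntax = "C:/Program Files/PSPP/bin/pspp.exe")
# Read the labelled .sav file back into R.
y <- spss_import("mydata.sav")
## End(Not run)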
nu-protocol
rust
Rust
Struct nu_protocol::util::BufferedReader === ``` pub struct BufferedReader<R: Read> { pub input: BufReader<R>, } ``` Fields --- `input: BufReader<R>`Implementations --- ### impl<R: Read> BufferedReader<R#### pub fn new(input: BufReader<R>) -> Self Trait Implementations --- ### impl<R: Read> Iterator for BufferedReader<R#### type Item = Result<Vec<u8, Global>, ShellErrorThe type of the elements being iterated over.#### fn next(&mut self) -> Option<Self::ItemAdvances the iterator and returns the next value. &mut self ) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where Self: Sized, 🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values. Read more1.0.0 · source#### fn size_hint(&self) -> (usize, Option<usize>) Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source#### fn count(self) -> usizewhere Self: Sized, Consumes the iterator, counting the number of iterations and returning it. Read more1.0.0 · source#### fn last(self) -> Option<Self::Item>where Self: Sized, Consumes the iterator, returning the last element. Self: Sized, Creates an iterator starting at the same point, but stepping by the given amount at each iteration. Read more1.0.0 · source#### fn chain<U>(self, other: U) -> Chain<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator<Item = Self::Item>, Takes two iterators and creates a new iterator over both in sequence. Read more1.0.0 · source#### fn zip<U>(self, other: U) -> Zip<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator, ‘Zips up’ two iterators into a single iterator of pairs. Self: Sized, G: FnMut() -> Self::Item, 🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places an item generated by `separator` between adjacent items of the original iterator. Read more1.0.0 · source#### fn map<B, F>(self, f: F) -> Map<Self, F>where Self: Sized, F: FnMut(Self::Item) -> B, Takes a closure and creates an iterator which calls that closure on each element. Read more1.21.0 · source#### fn for_each<F>(self, f: F)where Self: Sized, F: FnMut(Self::Item), Calls a closure on each element of an iterator. Read more1.0.0 · source#### fn filter<P>(self, predicate: P) -> Filter<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator which uses a closure to determine if an element should be yielded. Read more1.0.0 · source#### fn filter_map<B, F>(self, f: F) -> FilterMap<Self, F>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Creates an iterator that both filters and maps. Read more1.0.0 · source#### fn enumerate(self) -> Enumerate<Self>where Self: Sized, Creates an iterator which gives the current iteration count as well as the next value. Read more1.0.0 · source#### fn peekable(self) -> Peekable<Self>where Self: Sized, Creates an iterator which can use the `peek` and `peek_mut` methods to look at the next element of the iterator without consuming it. See their documentation for more information. Read more1.0.0 · source#### fn skip_while<P>(self, predicate: P) -> SkipWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that `skip`s elements based on a predicate. Read more1.0.0 · source#### fn take_while<P>(self, predicate: P) -> TakeWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that yields elements based on a predicate. 
Read more1.57.0 · source#### fn map_while<B, P>(self, predicate: P) -> MapWhile<Self, P>where Self: Sized, P: FnMut(Self::Item) -> Option<B>, Creates an iterator that both yields elements based on a predicate and maps. Read more1.0.0 · source#### fn skip(self, n: usize) -> Skip<Self>where Self: Sized, Creates an iterator that skips the first `n` elements. Read more1.0.0 · source#### fn take(self, n: usize) -> Take<Self>where Self: Sized, Creates an iterator that yields the first `n` elements, or fewer if the underlying iterator ends sooner. Read more1.0.0 · source#### fn scan<St, B, F>(self, initial_state: St, f: F) -> Scan<Self, St, F>where Self: Sized, F: FnMut(&mut St, Self::Item) -> Option<B>, An iterator adapter which, like `fold`, holds internal state, but unlike `fold`, produces a new iterator. Read more1.0.0 · source#### fn flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>where Self: Sized, U: IntoIterator, F: FnMut(Self::Item) -> U, Creates an iterator that works like map, but flattens nested structure. Self: Sized, F: FnMut(&[Self::Item; N]) -> R, 🔬This is a nightly-only experimental API. (`iter_map_windows`)Calls the given function `f` for each contiguous window of size `N` over `self` and returns an iterator over the outputs of `f`. Like `slice::windows()`, the windows during mapping overlap as well. Read more1.0.0 · source#### fn fuse(self) -> Fuse<Self>where Self: Sized, Creates an iterator which ends after the first `None`. Read more1.0.0 · source#### fn inspect<F>(self, f: F) -> Inspect<Self, F>where Self: Sized, F: FnMut(&Self::Item), Does something with each element of an iterator, passing the value on. Read more1.0.0 · source#### fn by_ref(&mut self) -> &mut Selfwhere Self: Sized, Borrows an iterator, rather than consuming it. Read more1.0.0 · source#### fn collect<B>(self) -> Bwhere B: FromIterator<Self::Item>, Self: Sized, Transforms an iterator into a collection. E: Extend<Self::Item>, Self: Sized, 🔬This is a nightly-only experimental API. (`iter_collect_into`)Collects all the items from an iterator into a collection. Read more1.0.0 · source#### fn partition<B, F>(self, f: F) -> (B, B)where Self: Sized, B: Default + Extend<Self::Item>, F: FnMut(&Self::Item) -> bool, Consumes an iterator, creating two collections from it. Self: Sized, P: FnMut(Self::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_is_partitioned`)Checks if the elements of this iterator are partitioned according to the given predicate, such that all those that return `true` precede all those that return `false`. Read more1.27.0 · source#### fn try_fold<B, F, R>(&mut self, init: B, f: F) -> Rwhere Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Output = B>, An iterator method that applies a function as long as it returns successfully, producing a single, final value. Read more1.27.0 · source#### fn try_for_each<F, R>(&mut self, f: F) -> Rwhere Self: Sized, F: FnMut(Self::Item) -> R, R: Try<Output = ()>, An iterator method that applies a fallible function to each item in the iterator, stopping at the first error and returning that error. Read more1.0.0 · source#### fn fold<B, F>(self, init: B, f: F) -> Bwhere Self: Sized, F: FnMut(B, Self::Item) -> B, Folds every element into an accumulator by applying an operation, returning the final result. Read more1.51.0 · source#### fn reduce<F>(self, f: F) -> Option<Self::Item>where Self: Sized, F: FnMut(Self::Item, Self::Item) -> Self::Item, Reduces the elements to a single one, by repeatedly applying a reducing operation. 
&mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<<R as Try>::Output>>>::TryTypewhere Self: Sized, F: FnMut(Self::Item, Self::Item) -> R, R: Try<Output = Self::Item>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`iterator_try_reduce`)Reduces the elements to a single one by repeatedly applying a reducing operation. If the closure returns a failure, the failure is propagated back to the caller immediately. Read more1.0.0 · source#### fn all<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if every element of the iterator matches a predicate. Read more1.0.0 · source#### fn any<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if any element of the iterator matches a predicate. Read more1.0.0 · source#### fn find<P>(&mut self, predicate: P) -> Option<Self::Item>where Self: Sized, P: FnMut(&Self::Item) -> bool, Searches for an element of an iterator that satisfies a predicate. Read more1.30.0 · source#### fn find_map<B, F>(&mut self, f: F) -> Option<B>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Applies function to the elements of iterator and returns the first non-none result. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<Self::Item>>>::TryTypewhere Self: Sized, F: FnMut(&Self::Item) -> R, R: Try<Output = bool>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`try_find`)Applies function to the elements of iterator and returns the first true result or the first error. Read more1.0.0 · source#### fn position<P>(&mut self, predicate: P) -> Option<usize>where Self: Sized, P: FnMut(Self::Item) -> bool, Searches for an element in an iterator, returning its index. Read more1.6.0 · source#### fn max_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the maximum value from the specified function. Read more1.15.0 · source#### fn max_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the maximum value with respect to the specified comparison function. Read more1.6.0 · source#### fn min_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the minimum value from the specified function. Read more1.15.0 · source#### fn min_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the minimum value with respect to the specified comparison function. Read more1.0.0 · source#### fn unzip<A, B, FromA, FromB>(self) -> (FromA, FromB)where FromA: Default + Extend<A>, FromB: Default + Extend<B>, Self: Sized + Iterator<Item = (A, B)>, Converts an iterator of pairs into a pair of containers. Read more1.36.0 · source#### fn copied<'a, T>(self) -> Copied<Self>where T: 'a + Copy, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which copies all of its elements. Read more1.0.0 · source#### fn cloned<'a, T>(self) -> Cloned<Self>where T: 'a + Clone, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which `clone`s all of its elements. Self: Sized, 🔬This is a nightly-only experimental API. (`iter_array_chunks`)Returns an iterator over `N` elements of the iterator at a time. 
Read more1.11.0 · source#### fn sum<S>(self) -> Swhere Self: Sized, S: Sum<Self::Item>, Sums the elements of an iterator. Read more1.11.0 · source#### fn product<P>(self) -> Pwhere Self: Sized, P: Product<Self::Item>, Iterates over the entire iterator, multiplying all the elements Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Ordering, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn partial_cmp<I>(self, other: I) -> Option<Ordering>where I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Lexicographically compares the `PartialOrd` elements of this `Iterator` with those of another. The comparison works like short-circuit evaluation, returning a result without comparing the remaining elements. As soon as an order can be determined, the evaluation stops and a result is returned. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn eq<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are equal to those of another. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_order_by`)Determines if the elements of this `Iterator` are equal to those of another with respect to the specified equality function. Read more1.5.0 · source#### fn ne<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are not equal to those of another. Read more1.5.0 · source#### fn lt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less than those of another. Read more1.5.0 · source#### fn le<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less or equal to those of another. Read more1.5.0 · source#### fn gt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than those of another. Read more1.5.0 · source#### fn ge<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than or equal to those of another. Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given comparator function. Self: Sized, F: FnMut(Self::Item) -> K, K: PartialOrd<K>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given key extraction function. 
Read moreAuto Trait Implementations --- ### impl<R> RefUnwindSafe for BufferedReader<R>where R: RefUnwindSafe, ### impl<R> Send for BufferedReader<R>where R: Send, ### impl<R> Sync for BufferedReader<R>where R: Sync, ### impl<R> Unpin for BufferedReader<R>where R: Unpin, ### impl<R> UnwindSafe for BufferedReader<R>where R: UnwindSafe, Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<I> IntoIterator for Iwhere I: Iterator, #### type Item = <I as Iterator>::Item The type of the elements being iterated over.#### type IntoIter = I Which kind of iterator are we turning this into?const: unstable · source#### fn into_iter(self) -> I Creates an iterator from a value. #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
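BufferedReader exposes little beyond its constructor and this Iterator implementation, so a short usage sketch may help. It is a minimal sketch assuming the crate version documented here, where the type is exported as nu_protocol::util::BufferedReader; the input file name is a hypothetical placeholder.

```
use std::fs::File;
use std::io::BufReader;

use nu_protocol::util::BufferedReader;

fn main() -> std::io::Result<()> {
    // Hypothetical input file; any reader implementing std::io::Read works.
    let file = File::open("data.bin")?;

    // Wrap the reader in a std BufReader, then in BufferedReader.
    let reader = BufferedReader::new(BufReader::new(file));

    // Iterating yields Result<Vec<u8>, ShellError> chunks of raw bytes.
    for chunk in reader {
        match chunk {
            Ok(bytes) => println!("read {} bytes", bytes.len()),
            Err(_) => {
                eprintln!("error while reading a chunk");
                break;
            }
        }
    }
    Ok(())
}
```

The same pattern applies to any Read source, such as a pipe, since the struct is generic over R: Read.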
Struct nu_protocol::Alias === ``` pub struct Alias { pub name: String, pub command: Option<Box<dyn Command>>, pub wrapped_call: Expression, pub usage: String, pub extra_usage: String, } ``` Fields --- `name: String``command: Option<Box<dyn Command>>``wrapped_call: Expression``usage: String``extra_usage: String`Trait Implementations --- ### impl Clone for Alias #### fn clone(&self) -> Alias Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn name(&self) -> &str #### fn signature(&self) -> Signature #### fn usage(&self) -> &str #### fn extra_usage(&self) -> &str #### fn run( &self, _engine_state: &EngineState, _stack: &mut Stack, call: &Call, _input: PipelineData ) -> Result<PipelineData, ShellError#### fn is_alias(&self) -> bool #### fn as_alias(&self) -> Option<&Alias#### fn run_const( &self, working_set: &StateWorkingSet<'_>, call: &Call, input: PipelineData ) -> Result<PipelineData, ShellErrorUsed by the parser to run command at parse time #### fn is_known_external(&self) -> bool #### fn is_custom_command(&self) -> bool #### fn is_sub(&self) -> bool #### fn is_parser_keyword(&self) -> bool #### fn is_plugin(&self) -> Option<(&Path, Option<&Path>)#### fn is_const(&self) -> bool #### fn get_block_id(&self) -> Option<BlockId#### fn search_terms(&self) -> Vec<&str#### fn command_type(&self) -> CommandType Auto Trait Implementations --- ### impl !RefUnwindSafe for Alias ### impl Send for Alias ### impl Sync for Alias ### impl Unpin for Alias ### impl !UnwindSafe for Alias Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. T: 'static + Command + Clone, #### fn clone_box(&self) -> Box<dyn Command, Global### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. 
If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct nu_protocol::DidYouMean === ``` pub struct DidYouMean(/* private fields */); ``` Implementations --- ### impl DidYouMean #### pub fn new(possibilities_bytes: &[&[u8]], input_bytes: &[u8]) -> DidYouMean Trait Implementations --- ### impl Clone for DidYouMean #### fn clone(&self) -> DidYouMean Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn from(value: Option<String>) -> Self Converts to this type from the input type.### impl Serialize for DidYouMean #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where __S: Serializer, Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for DidYouMean ### impl Send for DidYouMean ### impl Sync for DidYouMean ### impl Unpin for DidYouMean ### impl UnwindSafe for DidYouMean Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. T: Serialize + ?Sized, #### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> DeserializeOwned for Twhere T: for<'de> Deserialize<'de>, Struct nu_protocol::Example === ``` pub struct Example<'a> { pub example: &'a str, pub description: &'a str, pub result: Option<Value>, } ``` Fields --- `example: &'a str``description: &'a str``result: Option<Value>`Trait Implementations --- ### impl<'a> Debug for Example<'a#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read moreAuto Trait Implementations --- ### impl<'a> !RefUnwindSafe for Example<'a### impl<'a> Send for Example<'a### impl<'a> Sync for Example<'a### impl<'a> Unpin for Example<'a### impl<'a> !UnwindSafe for Example<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. 
### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct nu_protocol::Flag === ``` pub struct Flag { pub long: String, pub short: Option<char>, pub arg: Option<SyntaxShape>, pub required: bool, pub desc: String, pub var_id: Option<VarId>, pub default_value: Option<Value>, } ``` Fields --- `long: String``short: Option<char>``arg: Option<SyntaxShape>``required: bool``desc: String``var_id: Option<VarId>``default_value: Option<Value>`Trait Implementations --- ### impl Clone for Flag #### fn clone(&self) -> Flag Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn eq(&self, other: &Flag) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for Flag #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where __S: Serializer, Serialize this value into the given Serde serializer. 
Auto Trait Implementations --- ### impl !RefUnwindSafe for Flag ### impl Send for Flag ### impl Sync for Flag ### impl Unpin for Flag ### impl !UnwindSafe for Flag Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. T: Serialize + ?Sized, #### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> DeserializeOwned for Twhere T: for<'de> Deserialize<'de>, Struct nu_protocol::ListStream === ``` pub struct ListStream { pub stream: Box<dyn Iterator<Item = Value> + Send + 'static>, pub ctrlc: Option<Arc<AtomicBool>>, } ``` A potentially infinite stream of values, optionally with a mean to send a Ctrl-C signal to stop the stream from continuing. In practice, a “stream” here means anything which can be iterated and produce Values as it iterates. 
Like other iterators in Rust, observing values from this stream will drain the items as you view them and the stream cannot be replayed. Fields --- `stream: Box<dyn Iterator<Item = Value> + Send + 'static>``ctrlc: Option<Arc<AtomicBool>>`Implementations --- ### impl ListStream #### pub fn into_string(self, separator: &str, config: &Config) -> String #### pub fn drain(self) -> Result<(), ShellError#### pub fn from_stream( input: impl Iterator<Item = Value> + Send + 'static, ctrlc: Option<Arc<AtomicBool>> ) -> ListStream Trait Implementations --- ### impl Debug for ListStream #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### type Item = Value The type of the elements being iterated over.#### fn next(&mut self) -> Option<Self::ItemAdvances the iterator and returns the next value. &mut self ) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where Self: Sized, 🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values. Read more1.0.0 · source#### fn size_hint(&self) -> (usize, Option<usize>) Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source#### fn count(self) -> usizewhere Self: Sized, Consumes the iterator, counting the number of iterations and returning it. Read more1.0.0 · source#### fn last(self) -> Option<Self::Item>where Self: Sized, Consumes the iterator, returning the last element. Self: Sized, Creates an iterator starting at the same point, but stepping by the given amount at each iteration. Read more1.0.0 · source#### fn chain<U>(self, other: U) -> Chain<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator<Item = Self::Item>, Takes two iterators and creates a new iterator over both in sequence. Read more1.0.0 · source#### fn zip<U>(self, other: U) -> Zip<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator, ‘Zips up’ two iterators into a single iterator of pairs. Self: Sized, Self::Item: Clone, 🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places a copy of `separator` between adjacent items of the original iterator. Self: Sized, G: FnMut() -> Self::Item, 🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places an item generated by `separator` between adjacent items of the original iterator. Read more1.0.0 · source#### fn map<B, F>(self, f: F) -> Map<Self, F>where Self: Sized, F: FnMut(Self::Item) -> B, Takes a closure and creates an iterator which calls that closure on each element. Read more1.21.0 · source#### fn for_each<F>(self, f: F)where Self: Sized, F: FnMut(Self::Item), Calls a closure on each element of an iterator. Read more1.0.0 · source#### fn filter<P>(self, predicate: P) -> Filter<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator which uses a closure to determine if an element should be yielded. Read more1.0.0 · source#### fn filter_map<B, F>(self, f: F) -> FilterMap<Self, F>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Creates an iterator that both filters and maps. Read more1.0.0 · source#### fn enumerate(self) -> Enumerate<Self>where Self: Sized, Creates an iterator which gives the current iteration count as well as the next value. Read more1.0.0 · source#### fn peekable(self) -> Peekable<Self>where Self: Sized, Creates an iterator which can use the `peek` and `peek_mut` methods to look at the next element of the iterator without consuming it. 
See their documentation for more information. Read more1.0.0 · source#### fn skip_while<P>(self, predicate: P) -> SkipWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that `skip`s elements based on a predicate. Read more1.0.0 · source#### fn take_while<P>(self, predicate: P) -> TakeWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that yields elements based on a predicate. Read more1.57.0 · source#### fn map_while<B, P>(self, predicate: P) -> MapWhile<Self, P>where Self: Sized, P: FnMut(Self::Item) -> Option<B>, Creates an iterator that both yields elements based on a predicate and maps. Read more1.0.0 · source#### fn skip(self, n: usize) -> Skip<Self>where Self: Sized, Creates an iterator that skips the first `n` elements. Read more1.0.0 · source#### fn take(self, n: usize) -> Take<Self>where Self: Sized, Creates an iterator that yields the first `n` elements, or fewer if the underlying iterator ends sooner. Read more1.0.0 · source#### fn scan<St, B, F>(self, initial_state: St, f: F) -> Scan<Self, St, F>where Self: Sized, F: FnMut(&mut St, Self::Item) -> Option<B>, An iterator adapter which, like `fold`, holds internal state, but unlike `fold`, produces a new iterator. Read more1.0.0 · source#### fn flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>where Self: Sized, U: IntoIterator, F: FnMut(Self::Item) -> U, Creates an iterator that works like map, but flattens nested structure. Self: Sized, F: FnMut(&[Self::Item; N]) -> R, 🔬This is a nightly-only experimental API. (`iter_map_windows`)Calls the given function `f` for each contiguous window of size `N` over `self` and returns an iterator over the outputs of `f`. Like `slice::windows()`, the windows during mapping overlap as well. Read more1.0.0 · source#### fn fuse(self) -> Fuse<Self>where Self: Sized, Creates an iterator which ends after the first `None`. Read more1.0.0 · source#### fn inspect<F>(self, f: F) -> Inspect<Self, F>where Self: Sized, F: FnMut(&Self::Item), Does something with each element of an iterator, passing the value on. Read more1.0.0 · source#### fn by_ref(&mut self) -> &mut Selfwhere Self: Sized, Borrows an iterator, rather than consuming it. Read more1.0.0 · source#### fn collect<B>(self) -> Bwhere B: FromIterator<Self::Item>, Self: Sized, Transforms an iterator into a collection. E: Extend<Self::Item>, Self: Sized, 🔬This is a nightly-only experimental API. (`iter_collect_into`)Collects all the items from an iterator into a collection. Read more1.0.0 · source#### fn partition<B, F>(self, f: F) -> (B, B)where Self: Sized, B: Default + Extend<Self::Item>, F: FnMut(&Self::Item) -> bool, Consumes an iterator, creating two collections from it. Self: Sized, P: FnMut(Self::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_is_partitioned`)Checks if the elements of this iterator are partitioned according to the given predicate, such that all those that return `true` precede all those that return `false`. Read more1.27.0 · source#### fn try_fold<B, F, R>(&mut self, init: B, f: F) -> Rwhere Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Output = B>, An iterator method that applies a function as long as it returns successfully, producing a single, final value. Read more1.27.0 · source#### fn try_for_each<F, R>(&mut self, f: F) -> Rwhere Self: Sized, F: FnMut(Self::Item) -> R, R: Try<Output = ()>, An iterator method that applies a fallible function to each item in the iterator, stopping at the first error and returning that error. 
Read more1.0.0 · source#### fn fold<B, F>(self, init: B, f: F) -> Bwhere Self: Sized, F: FnMut(B, Self::Item) -> B, Folds every element into an accumulator by applying an operation, returning the final result. Read more1.51.0 · source#### fn reduce<F>(self, f: F) -> Option<Self::Item>where Self: Sized, F: FnMut(Self::Item, Self::Item) -> Self::Item, Reduces the elements to a single one, by repeatedly applying a reducing operation. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<<R as Try>::Output>>>::TryTypewhere Self: Sized, F: FnMut(Self::Item, Self::Item) -> R, R: Try<Output = Self::Item>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`iterator_try_reduce`)Reduces the elements to a single one by repeatedly applying a reducing operation. If the closure returns a failure, the failure is propagated back to the caller immediately. Read more1.0.0 · source#### fn all<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if every element of the iterator matches a predicate. Read more1.0.0 · source#### fn any<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if any element of the iterator matches a predicate. Read more1.0.0 · source#### fn find<P>(&mut self, predicate: P) -> Option<Self::Item>where Self: Sized, P: FnMut(&Self::Item) -> bool, Searches for an element of an iterator that satisfies a predicate. Read more1.30.0 · source#### fn find_map<B, F>(&mut self, f: F) -> Option<B>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Applies function to the elements of iterator and returns the first non-none result. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<Self::Item>>>::TryTypewhere Self: Sized, F: FnMut(&Self::Item) -> R, R: Try<Output = bool>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`try_find`)Applies function to the elements of iterator and returns the first true result or the first error. Read more1.0.0 · source#### fn position<P>(&mut self, predicate: P) -> Option<usize>where Self: Sized, P: FnMut(Self::Item) -> bool, Searches for an element in an iterator, returning its index. Read more1.6.0 · source#### fn max_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the maximum value from the specified function. Read more1.15.0 · source#### fn max_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the maximum value with respect to the specified comparison function. Read more1.6.0 · source#### fn min_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the minimum value from the specified function. Read more1.15.0 · source#### fn min_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the minimum value with respect to the specified comparison function. Read more1.0.0 · source#### fn unzip<A, B, FromA, FromB>(self) -> (FromA, FromB)where FromA: Default + Extend<A>, FromB: Default + Extend<B>, Self: Sized + Iterator<Item = (A, B)>, Converts an iterator of pairs into a pair of containers. Read more1.36.0 · source#### fn copied<'a, T>(self) -> Copied<Self>where T: 'a + Copy, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which copies all of its elements. 
Read more1.0.0 · source#### fn cloned<'a, T>(self) -> Cloned<Self>where T: 'a + Clone, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which `clone`s all of its elements. Self: Sized, 🔬This is a nightly-only experimental API. (`iter_array_chunks`)Returns an iterator over `N` elements of the iterator at a time. Read more1.11.0 · source#### fn sum<S>(self) -> Swhere Self: Sized, S: Sum<Self::Item>, Sums the elements of an iterator. Read more1.11.0 · source#### fn product<P>(self) -> Pwhere Self: Sized, P: Product<Self::Item>, Iterates over the entire iterator, multiplying all the elements Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Ordering, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn partial_cmp<I>(self, other: I) -> Option<Ordering>where I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Lexicographically compares the `PartialOrd` elements of this `Iterator` with those of another. The comparison works like short-circuit evaluation, returning a result without comparing the remaining elements. As soon as an order can be determined, the evaluation stops and a result is returned. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn eq<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are equal to those of another. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_order_by`)Determines if the elements of this `Iterator` are equal to those of another with respect to the specified equality function. Read more1.5.0 · source#### fn ne<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are not equal to those of another. Read more1.5.0 · source#### fn lt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less than those of another. Read more1.5.0 · source#### fn le<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less or equal to those of another. Read more1.5.0 · source#### fn gt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than those of another. Read more1.5.0 · source#### fn ge<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than or equal to those of another. Self: Sized, Self::Item: PartialOrd<Self::Item>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted. 
Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given comparator function. Self: Sized, F: FnMut(Self::Item) -> K, K: PartialOrd<K>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given key extraction function. Read moreAuto Trait Implementations --- ### impl !RefUnwindSafe for ListStream ### impl Send for ListStream ### impl !Sync for ListStream ### impl Unpin for ListStream ### impl !UnwindSafe for ListStream Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<I> IntoInterruptiblePipelineData for Iwhere I: IntoIterator + Send + 'static, <I as IntoIterator>::IntoIter: Send + 'static, <<I as IntoIterator>::IntoIter as Iterator>::Item: Into<Value>, #### fn into_pipeline_data( self, ctrlc: Option<Arc<AtomicBool, Global>> ) -> PipelineData #### fn into_pipeline_data_with_metadata( self, metadata: impl Into<Option<Box<PipelineMetadata, Global>>>, ctrlc: Option<Arc<AtomicBool, Global>> ) -> PipelineData ### impl<I> IntoIterator for Iwhere I: Iterator, #### type Item = <I as Iterator>::Item The type of the elements being iterated over.#### type IntoIter = I Which kind of iterator are we turning this into?const: unstable · source#### fn into_iter(self) -> I Creates an iterator from a value. #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> ParallelBridge for Twhere T: Iterator + Send, <T as Iterator>::Item: Send, #### fn par_bridge(self) -> IterBridge<TCreates a bridge from this type to a `ParallelIterator`.### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. 
Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion.
Struct nu_protocol::Module === ``` pub struct Module { pub name: Vec<u8>, pub decls: IndexMap<Vec<u8>, DeclId>, pub submodules: IndexMap<Vec<u8>, ModuleId>, pub constants: IndexMap<Vec<u8>, VarId>, pub env_block: Option<BlockId>, pub main: Option<DeclId>, pub span: Option<Span>, } ``` Collection of definitions that can be exported from a module Fields --- `name: Vec<u8>` `decls: IndexMap<Vec<u8>, DeclId>` `submodules: IndexMap<Vec<u8>, ModuleId>` `constants: IndexMap<Vec<u8>, VarId>` `env_block: Option<BlockId>` `main: Option<DeclId>` `span: Option<Span>` Implementations --- ### impl Module #### pub fn new(name: Vec<u8>) -> Self #### pub fn from_span(name: Vec<u8>, span: Span) -> Self #### pub fn name(&self) -> Vec<u8> #### pub fn add_decl(&mut self, name: Vec<u8>, decl_id: DeclId) -> Option<DeclId> #### pub fn add_submodule( &mut self, name: Vec<u8>, module_id: ModuleId ) -> Option<ModuleId> #### pub fn add_variable(&mut self, name: Vec<u8>, var_id: VarId) -> Option<VarId> #### pub fn add_env_block(&mut self, block_id: BlockId) #### pub fn has_decl(&self, name: &[u8]) -> bool #### pub fn resolve_import_pattern( &self, working_set: &StateWorkingSet<'_>, self_id: ModuleId, members: &[ImportPatternMember], name_override: Option<&[u8]>, backup_span: Span ) -> (ResolvedImportPattern, Vec<ParseError>) #### pub fn decl_name_with_head(&self, name: &[u8], head: &[u8]) -> Option<Vec<u8>> #### pub fn decls_with_head(&self, head: &[u8]) -> Vec<(Vec<u8>, DeclId)> #### pub fn consts(&self) -> Vec<(Vec<u8>, VarId)> #### pub fn decl_names_with_head(&self, head: &[u8]) -> Vec<Vec<u8>> #### pub fn decls(&self) -> Vec<(Vec<u8>, DeclId)> #### pub fn submodules(&self) -> Vec<(Vec<u8>, ModuleId)> #### pub fn decl_names(&self) -> Vec<Vec<u8>> Trait Implementations --- ### impl Clone for Module #### fn clone(&self) -> Module Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter.
Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for Module ### impl Send for Module ### impl Sync for Module ### impl Unpin for Module ### impl UnwindSafe for Module Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct nu_protocol::PipelineIterator === ``` pub struct PipelineIterator(/* private fields */); ``` Trait Implementations --- ### impl Iterator for PipelineIterator #### type Item = Value The type of the elements being iterated over.#### fn next(&mut self) -> Option<Self::ItemAdvances the iterator and returns the next value. &mut self ) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where Self: Sized, 🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values. 
Through this implementation, `PipelineIterator` gains all provided methods of the standard `Iterator` trait (`size_hint`, `count`, `map`, `filter`, `fold`, `collect`, `take`, `zip`, the lexicographic comparison helpers, and the rest).

Auto Trait Implementations
---
### impl !RefUnwindSafe for PipelineIterator
### impl Send for PipelineIterator
### impl !Sync for PipelineIterator
### impl Unpin for PipelineIterator
### impl !UnwindSafe for PipelineIterator

### impl IntoInterruptiblePipelineData for PipelineIterator
Provided by the blanket impl for iterators whose items convert into `Value`.
#### fn into_pipeline_data(self, ctrlc: Option<Arc<AtomicBool>>) -> PipelineData
#### fn into_pipeline_data_with_metadata(self, metadata: impl Into<Option<Box<PipelineMetadata>>>, ctrlc: Option<Arc<AtomicBool>>) -> PipelineData

`IntoIterator`, `ParallelBridge`, and the standard blanket conversions (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `OwoColorize`, `Pointable`) are also implemented.
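A minimal usage sketch: because `PipelineIterator` is an iterator over `Value`, downstream code can use ordinary adapters on it. How the iterator is obtained (for example, from a `PipelineData`) is outside the scope of this page, so it is taken as an argument; the `preview` helper is hypothetical.

```
use nu_protocol::{PipelineIterator, Value};

// Collect a pipeline's values, returning the first three plus the total count.
fn preview(iter: PipelineIterator) -> (Vec<Value>, usize) {
    let values: Vec<Value> = iter.collect();
    let len = values.len();
    let head = values.into_iter().take(3).collect();
    (head, len)
}
```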
Struct nu_protocol::PipelineMetadata
===

```
pub struct PipelineMetadata {
    pub data_source: DataSource,
}
```

Fields
---
`data_source: DataSource`

Trait Implementations
---
### impl Clone for PipelineMetadata
#### fn clone(&self) -> PipelineMetadata
Returns a copy of the value.
### impl Debug for PipelineMetadata
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

Auto Trait Implementations
---
### impl !RefUnwindSafe for PipelineMetadata
### impl Send for PipelineMetadata
### impl Sync for PipelineMetadata
### impl Unpin for PipelineMetadata
### impl !UnwindSafe for PipelineMetadata

`PipelineMetadata` also receives the standard blanket implementations (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `OwoColorize`, `Pointable`, `ToOwned`).
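A small sketch: `PipelineMetadata` implements `Clone` and `Debug` (documented above), so it can be duplicated and logged freely. The variants of `DataSource` are not listed on this page, so the metadata is taken as an argument rather than constructed; `log_and_clone` is a hypothetical helper.

```
use nu_protocol::PipelineMetadata;

// Log the metadata attached to a pipeline and hand back an owned copy.
fn log_and_clone(meta: &PipelineMetadata) -> PipelineMetadata {
    eprintln!("pipeline metadata: {meta:?}");
    meta.clone()
}
```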
Struct nu_protocol::PositionalArg
===

```
pub struct PositionalArg {
    pub name: String,
    pub desc: String,
    pub shape: SyntaxShape,
    pub var_id: Option<VarId>,
    pub default_value: Option<Value>,
}
```

Fields
---
`name: String`, `desc: String`, `shape: SyntaxShape`, `var_id: Option<VarId>`, `default_value: Option<Value>`

Trait Implementations
---
### impl Clone for PositionalArg
#### fn clone(&self) -> PositionalArg
Returns a copy of the value.
### impl Debug for PositionalArg
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'de> Deserialize<'de> for PositionalArg
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.
### impl PartialEq<PositionalArg> for PositionalArg
#### fn eq(&self, other: &PositionalArg) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
### impl Serialize for PositionalArg
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.

Auto Trait Implementations
---
### impl !RefUnwindSafe for PositionalArg
### impl Send for PositionalArg
### impl Sync for PositionalArg
### impl Unpin for PositionalArg
### impl !UnwindSafe for PositionalArg

`PositionalArg` also receives the standard blanket implementations (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `OwoColorize`, `Pointable`, `ToOwned`, `DeserializeOwned`).
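A minimal sketch: a `PositionalArg` is plain data, so constructing one is just filling in the fields. `SyntaxShape::String` is assumed to be one of the available shape variants (the `SyntaxShape` enum is documented elsewhere), and `path_arg` is a hypothetical helper.

```
use nu_protocol::{PositionalArg, SyntaxShape};

// Describe a required positional parameter named `path`.
fn path_arg() -> PositionalArg {
    PositionalArg {
        name: "path".into(),
        desc: "the file to operate on".into(),
        shape: SyntaxShape::String, // assumed variant; see the SyntaxShape docs
        var_id: None,               // typically assigned when the signature is bound
        default_value: None,
    }
}
```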
Struct nu_protocol::Range
===

```
pub struct Range {
    pub from: Value,
    pub incr: Value,
    pub to: Value,
    pub inclusion: RangeInclusion,
}
```

Fields
---
`from: Value`, `incr: Value`, `to: Value`, `inclusion: RangeInclusion`

Implementations
---
### impl Range
#### pub fn new(expr_span: Span, from: Value, next: Value, to: Value, operator: &RangeOperator) -> Result<Range, ShellError>
#### pub fn is_end_inclusive(&self) -> bool
#### pub fn from(&self) -> Result<i64, ShellError>
#### pub fn to(&self) -> Result<i64, ShellError>
#### pub fn contains(&self, item: &Value) -> bool
#### pub fn into_range_iter(self, ctrlc: Option<Arc<AtomicBool>>) -> Result<RangeIterator, ShellError>

Trait Implementations
---
### impl Clone for Range
#### fn clone(&self) -> Range
Returns a copy of the value.
### impl Debug for Range
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'de> Deserialize<'de> for Range
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.
### impl FromValue for Range
#### fn from_value(v: &Value) -> Result<Self, ShellError>
### impl PartialEq<Range> for Range
#### fn eq(&self, other: &Range) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
### impl PartialOrd<Range> for Range
#### fn partial_cmp(&self, other: &Self) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
### impl Serialize for Range
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Range
### impl Send for Range
### impl Sync for Range
### impl Unpin for Range
### impl !UnwindSafe for Range

`Range` also receives the standard blanket implementations (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `OwoColorize`, `Pointable`, `ToOwned`, `DeserializeOwned`).
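A minimal sketch using only the inherent methods documented above: whether the range is end-inclusive, whether it contains a value, and how to turn it into an iterator. The `Range` and the probe `Value` are assumed to come from evaluation elsewhere, and `summarize` is a hypothetical helper.

```
use nu_protocol::{Range, ShellError, Value};

// Report on a range and materialize its first few values.
fn summarize(range: Range, probe: &Value) -> Result<Vec<Value>, ShellError> {
    let inclusive = range.is_end_inclusive();
    let hit = range.contains(probe);
    eprintln!("end-inclusive: {inclusive}, contains probe: {hit}");

    // into_range_iter consumes the Range; passing None opts out of
    // ctrl-c interruption.
    let iter = range.into_range_iter(None)?;
    Ok(iter.take(5).collect())
}
```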
Struct nu_protocol::RangeIterator
===

```
pub struct RangeIterator { /* private fields */ }
```

Implementations
---
### impl RangeIterator
#### pub fn new(range: Range, ctrlc: Option<Arc<AtomicBool>>, span: Span) -> RangeIterator

Trait Implementations
---
### impl Iterator for RangeIterator
#### type Item = Value
The type of the elements being iterated over.
#### fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value.
Through this implementation, `RangeIterator` gains all provided methods of the standard `Iterator` trait (`size_hint`, `count`, `map`, `filter`, `fold`, `collect`, `take`, `zip`, the lexicographic comparison helpers, and the rest).

Auto Trait Implementations
---
### impl !RefUnwindSafe for RangeIterator
### impl Send for RangeIterator
### impl Sync for RangeIterator
### impl Unpin for RangeIterator
### impl !UnwindSafe for RangeIterator

### impl IntoInterruptiblePipelineData for RangeIterator
Provided by the blanket impl for iterators whose items convert into `Value`.
#### fn into_pipeline_data(self, ctrlc: Option<Arc<AtomicBool>>) -> PipelineData
#### fn into_pipeline_data_with_metadata(self, metadata: impl Into<Option<Box<PipelineMetadata>>>, ctrlc: Option<Arc<AtomicBool>>) -> PipelineData

`IntoIterator`, `ParallelBridge`, and the standard blanket conversions (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `OwoColorize`, `Pointable`) are also implemented.
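A minimal sketch: `RangeIterator::new` pairs a `Range` with an optional interrupt flag and a span; from there it behaves like any other iterator of `Value`s. The `Range` and `Span` are assumed to be produced elsewhere, and `first_value` is a hypothetical helper.

```
use nu_protocol::{Range, RangeIterator, Span, Value};

// Drive a RangeIterator directly instead of going through
// Range::into_range_iter.
fn first_value(range: Range, span: Span) -> Option<Value> {
    // No ctrl-c flag is wired up in this sketch.
    let mut iter = RangeIterator::new(range, None, span);
    iter.next()
}
```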
Struct nu_protocol::RawStream
===

```
pub struct RawStream {
    pub stream: Box<dyn Iterator<Item = Result<Vec<u8>, ShellError>> + Send + 'static>,
    pub leftover: Vec<u8>,
    pub ctrlc: Option<Arc<AtomicBool>>,
    pub is_binary: bool,
    pub span: Span,
    pub known_size: Option<u64>,
}
```

Fields
---
`stream: Box<dyn Iterator<Item = Result<Vec<u8>, ShellError>> + Send + 'static>`, `leftover: Vec<u8>`, `ctrlc: Option<Arc<AtomicBool>>`, `is_binary: bool`, `span: Span`, `known_size: Option<u64>`

Implementations
---
### impl RawStream
#### pub fn new(stream: Box<dyn Iterator<Item = Result<Vec<u8>, ShellError>> + Send + 'static>, ctrlc: Option<Arc<AtomicBool>>, span: Span, known_size: Option<u64>) -> Self
#### pub fn into_bytes(self) -> Result<Spanned<Vec<u8>>, ShellError>
#### pub fn into_string(self) -> Result<Spanned<String>, ShellError>
#### pub fn chain(self, stream: RawStream) -> RawStream
#### pub fn drain(self) -> Result<(), ShellError>

Trait Implementations
---
### impl Debug for RawStream
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Iterator for RawStream
#### type Item = Result<Value, ShellError>
The type of the elements being iterated over.
#### fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value.

Through this implementation, `RawStream` gains all provided methods of the standard `Iterator` trait (`size_hint`, `count`, `map`, `filter`, `flatten`, `fold`, `collect`, `take`, and the rest).
Read more1.0.0 · source#### fn cloned<'a, T>(self) -> Cloned<Self>where T: 'a + Clone, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which `clone`s all of its elements. Self: Sized, 🔬This is a nightly-only experimental API. (`iter_array_chunks`)Returns an iterator over `N` elements of the iterator at a time. Read more1.11.0 · source#### fn sum<S>(self) -> Swhere Self: Sized, S: Sum<Self::Item>, Sums the elements of an iterator. Read more1.11.0 · source#### fn product<P>(self) -> Pwhere Self: Sized, P: Product<Self::Item>, Iterates over the entire iterator, multiplying all the elements Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Ordering, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn partial_cmp<I>(self, other: I) -> Option<Ordering>where I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Lexicographically compares the `PartialOrd` elements of this `Iterator` with those of another. The comparison works like short-circuit evaluation, returning a result without comparing the remaining elements. As soon as an order can be determined, the evaluation stops and a result is returned. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn eq<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are equal to those of another. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_order_by`)Determines if the elements of this `Iterator` are equal to those of another with respect to the specified equality function. Read more1.5.0 · source#### fn ne<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are not equal to those of another. Read more1.5.0 · source#### fn lt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less than those of another. Read more1.5.0 · source#### fn le<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less or equal to those of another. Read more1.5.0 · source#### fn gt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than those of another. Read more1.5.0 · source#### fn ge<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than or equal to those of another. Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. 
(`is_sorted`)Checks if the elements of this iterator are sorted using the given comparator function. Self: Sized, F: FnMut(Self::Item) -> K, K: PartialOrd<K>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given key extraction function. Read moreAuto Trait Implementations --- ### impl !RefUnwindSafe for RawStream ### impl Send for RawStream ### impl !Sync for RawStream ### impl Unpin for RawStream ### impl !UnwindSafe for RawStream Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<I> IntoIterator for Iwhere I: Iterator, #### type Item = <I as Iterator>::Item The type of the elements being iterated over.#### type IntoIter = I Which kind of iterator are we turning this into?const: unstable · source#### fn into_iter(self) -> I Creates an iterator from a value. #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> ParallelBridge for Twhere T: Iterator + Send, <T as Iterator>::Item: Send, #### fn par_bridge(self) -> IterBridge<TCreates a bridge from this type to a `ParallelIterator`.### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. 
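Because `RawStream` is an iterator over `Result<Value, ShellError>`, it can be driven with ordinary iterator combinators. A minimal sketch, assuming you already hold a `RawStream` (its construction is not covered by the listing above); the early-exit collection into `Result<Vec<Value>, ShellError>` is plain standard-library behaviour, not a `nu_protocol`-specific API:

```
use nu_protocol::{RawStream, ShellError, Value};

// Drain a RawStream, stopping at the first ShellError.
// `Result<Vec<Value>, ShellError>` can be collected directly because
// `Result` implements `FromIterator` over an iterator of results.
fn drain_stream(stream: RawStream) -> Result<Vec<Value>, ShellError> {
    stream.collect()
}
```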
Struct nu_protocol::Record
===

```
pub struct Record {
    pub cols: Vec<String>,
    pub vals: Vec<Value>,
}
```

Fields
---

`cols: Vec<String>`
`vals: Vec<Value>`

Implementations
---

### impl Record

#### pub fn new() -> Self
#### pub fn with_capacity(capacity: usize) -> Self
#### pub fn iter(&self) -> Zip<Iter<'_, String>, Iter<'_, Value>>
#### pub fn iter_mut(&mut self) -> Zip<Iter<'_, String>, IterMut<'_, Value>>
#### pub fn is_empty(&self) -> bool
#### pub fn len(&self) -> usize
#### pub fn push(&mut self, col: impl Into<String>, val: Value)

Trait Implementations
---

### impl Clone for Record
### impl Debug for Record
### impl Default for Record
### impl<'de> Deserialize<'de> for Record
### impl FromIterator<(String, Value)> for Record
Creates a `Record` from an iterator of `(String, Value)` pairs.
### impl FromValue for Record
`fn from_value(v: &Value) -> Result<Self, ShellError>`
### impl<'a> IntoIterator for &'a Record
`Item = (&'a String, &'a Value)`; `IntoIter = Zip<Iter<'a, String>, Iter<'a, Value>>`. Creates an iterator from a value.
### impl<'a> IntoIterator for &'a mut Record
`Item = (&'a String, &'a mut Value)`; `IntoIter = Zip<Iter<'a, String>, IterMut<'a, Value>>`. Creates an iterator from a value.
### impl IntoIterator for Record
`Item = (String, Value)`; `IntoIter = Zip<IntoIter<String>, IntoIter<Value>>`. Creates an iterator from a value.
### impl Serialize for Record
`fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer`

Auto Trait Implementations
---

### impl !RefUnwindSafe for Record
### impl Send for Record
### impl Sync for Record
### impl Unpin for Record
### impl !UnwindSafe for Record

Blanket Implementations
---

The standard rustdoc blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, `ToOwned`, `OwoColorize`, `Pointable`, `DeserializeOwned`, and erased-serde `Serialize`.
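A brief sketch of how the documented `Record` API fits together. `Record::new`, `push`, `len`, and the borrowed `IntoIterator` impl are taken from the listing above; the `Value::string`/`Value::int` constructors, `Span::unknown()`, and the assumption that `Value` implements `Debug` belong to the wider crate API and may differ between versions:

```
use nu_protocol::{Record, Span, Value};

fn build_row() -> Record {
    let mut rec = Record::new();
    // `push` appends a column name and its value to the parallel vectors.
    // Value::string / Value::int / Span::unknown are assumed helpers here.
    rec.push("name", Value::string("nushell", Span::unknown()));
    rec.push("stars", Value::int(9000, Span::unknown()));
    assert_eq!(rec.len(), 2);

    // `&Record: IntoIterator<Item = (&String, &Value)>` lets us walk the columns.
    for (col, val) in &rec {
        println!("{col}: {val:?}");
    }
    rec
}
```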
Struct nu_protocol::ResolvedImportPattern
===

```
pub struct ResolvedImportPattern {
    pub decls: Vec<(Vec<u8>, DeclId)>,
    pub modules: Vec<(Vec<u8>, ModuleId)>,
    pub constants: Vec<(Vec<u8>, Value)>,
}
```

Fields
---

`decls: Vec<(Vec<u8>, DeclId)>`
`modules: Vec<(Vec<u8>, ModuleId)>`
`constants: Vec<(Vec<u8>, Value)>`

Implementations
---

### impl ResolvedImportPattern

#### pub fn new(decls: Vec<(Vec<u8>, DeclId)>, modules: Vec<(Vec<u8>, ModuleId)>, constants: Vec<(Vec<u8>, Value)>) -> Self

Auto Trait Implementations
---

### impl !RefUnwindSafe for ResolvedImportPattern
### impl Send for ResolvedImportPattern
### impl Sync for ResolvedImportPattern
### impl Unpin for ResolvedImportPattern
### impl !UnwindSafe for ResolvedImportPattern

Blanket Implementations
---

The standard rustdoc blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, `OwoColorize`, and `Pointable`.
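The name half of each pair is a raw byte vector rather than a `String`, so building one by hand usually means converting names with `to_vec()`/`into_bytes()`. A minimal sketch using only the `new` constructor documented above, and assuming `DeclId` is the crate's usual plain `usize` alias; the id value `0` is a placeholder, not a meaningful declaration id:

```
use nu_protocol::ResolvedImportPattern;

fn single_decl_import() -> ResolvedImportPattern {
    // An import that resolves to one declaration and nothing else.
    ResolvedImportPattern::new(
        vec![(b"my-command".to_vec(), 0)], // (name bytes, DeclId); 0 is a placeholder
        vec![],                            // no modules
        vec![],                            // no constants
    )
}
```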
Struct nu_protocol::Signature
===

```
pub struct Signature {
    pub name: String,
    pub usage: String,
    pub extra_usage: String,
    pub search_terms: Vec<String>,
    pub required_positional: Vec<PositionalArg>,
    pub optional_positional: Vec<PositionalArg>,
    pub rest_positional: Option<PositionalArg>,
    pub named: Vec<Flag>,
    pub input_output_types: Vec<(Type, Type)>,
    pub allow_variants_without_examples: bool,
    pub is_filter: bool,
    pub creates_scope: bool,
    pub allows_unknown_args: bool,
    pub category: Category,
}
```

Fields
---

`name: String`, `usage: String`, `extra_usage: String`, `search_terms: Vec<String>`, `required_positional: Vec<PositionalArg>`, `optional_positional: Vec<PositionalArg>`, `rest_positional: Option<PositionalArg>`, `named: Vec<Flag>`, `input_output_types: Vec<(Type, Type)>`, `allow_variants_without_examples: bool`, `is_filter: bool`, `creates_scope: bool`, `allows_unknown_args: bool`, `category: Category`

Implementations
---

### impl Signature

#### pub fn new(name: impl Into<String>) -> Signature
#### pub fn get_input_type(&self) -> Type
#### pub fn get_output_type(&self) -> Type
#### pub fn add_help(self) -> Signature
#### pub fn build(name: impl Into<String>) -> Signature
#### pub fn usage(self, msg: impl Into<String>) -> Signature
Add a description to the signature.
#### pub fn extra_usage(self, msg: impl Into<String>) -> Signature
Add an extra description to the signature.
#### pub fn search_terms(self, terms: Vec<String>) -> Signature
Add search terms to the signature.
#### pub fn update_from_command(self, command: &dyn Command) -> Signature
Update the signature's fields from a Command trait implementation.
#### pub fn allows_unknown_args(self) -> Signature
Allow unknown signature parameters.
#### pub fn required(self, name: impl Into<String>, shape: impl Into<SyntaxShape>, desc: impl Into<String>) -> Signature
Add a required positional argument to the signature.
#### pub fn optional(self, name: impl Into<String>, shape: impl Into<SyntaxShape>, desc: impl Into<String>) -> Signature
Add an optional positional argument to the signature.
#### pub fn rest(self, name: &str, shape: impl Into<SyntaxShape>, desc: impl Into<String>) -> Signature
#### pub fn operates_on_cell_paths(&self) -> bool
Is this command capable of operating on its input via cell paths?
#### pub fn named(self, name: impl Into<String>, shape: impl Into<SyntaxShape>, desc: impl Into<String>, short: Option<char>) -> Signature
Add an optional named flag argument to the signature.
#### pub fn required_named(self, name: impl Into<String>, shape: impl Into<SyntaxShape>, desc: impl Into<String>, short: Option<char>) -> Signature
Add a required named flag argument to the signature.
#### pub fn switch(self, name: impl Into<String>, desc: impl Into<String>, short: Option<char>) -> Signature
Add a switch to the signature.
#### pub fn input_output_type(self, input_type: Type, output_type: Type) -> Signature
Changes the input/output type of the command signature.
#### pub fn input_output_types(self, input_output_types: Vec<(Type, Type)>) -> Signature
Set the input-output type signature variants of the command.
#### pub fn category(self, category: Category) -> Signature
Changes the signature category.
#### pub fn creates_scope(self) -> Signature
Sets that the signature will create a scope as it parses.
#### pub fn allow_variants_without_examples(self, allow: bool) -> Signature
#### pub fn call_signature(&self) -> String
#### pub fn get_shorts(&self) -> Vec<char>
Get the list of short-hand flags.
#### pub fn get_names(&self) -> Vec<&str>
Get the list of long-hand flags.
#### pub fn get_positional(&self, position: usize) -> Option<PositionalArg>
#### pub fn num_positionals(&self) -> usize
#### pub fn num_positionals_after(&self, idx: usize) -> usize
#### pub fn get_long_flag(&self, name: &str) -> Option<Flag>
Find the matching long flag.
#### pub fn get_short_flag(&self, short: char) -> Option<Flag>
Find the matching short flag.
#### pub fn filter(self) -> Signature
Set the filter flag for the signature.
#### pub fn predeclare(self) -> Box<dyn Command>
Create a placeholder implementation of Command as a way to predeclare a definition's signature so other definitions can see it. This placeholder is later replaced with the full definition in a second pass of the parser.
#### pub fn into_block_command(self, block_id: BlockId) -> Box<dyn Command>
Combines a signature and a block into a runnable block.
#### pub fn formatted_flags(self) -> String

Trait Implementations
---

### impl Clone for Signature
### impl Debug for Signature
### impl<'de> Deserialize<'de> for Signature
### impl PartialEq for Signature
### impl Serialize for Signature

Auto Trait Implementations
---

### impl !RefUnwindSafe for Signature
### impl Send for Signature
### impl Sync for Signature
### impl Unpin for Signature
### impl !UnwindSafe for Signature

Blanket Implementations
---

The standard rustdoc blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `Equivalent<K>`, `From<T>`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, `ToOwned`, `OwoColorize`, `Pointable`, `DeserializeOwned`, and erased-serde `Serialize`.
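The builder methods above are meant to be chained. A sketch of a command signature assembled that way, using only `build`, `usage`, `required`, `named`, `switch`, and `category` from the listing; the command name, descriptions, and the `SyntaxShape::String` shape are illustrative assumptions, not anything defined in this reference:

```
use nu_protocol::{Category, Signature, SyntaxShape};

// Build a hypothetical `greet` command signature with one required
// positional, one named flag, and one switch.
fn greet_signature() -> Signature {
    Signature::build("greet")
        .usage("Greet someone by name.")
        .required("name", SyntaxShape::String, "who to greet")
        .named("greeting", SyntaxShape::String, "custom greeting text", Some('g'))
        .switch("shout", "print the greeting in upper case", Some('s'))
        .category(Category::Strings)
}
```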
Struct nu_protocol::Variable
===

```
pub struct Variable {
    pub declaration_span: Span,
    pub ty: Type,
    pub mutable: bool,
    pub const_val: Option<Value>,
}
```

Fields
---

`declaration_span: Span`
`ty: Type`
`mutable: bool`
`const_val: Option<Value>`

Implementations
---

### impl Variable

#### pub fn new(declaration_span: Span, ty: Type, mutable: bool) -> Variable

Trait Implementations
---

### impl Clone for Variable
### impl Debug for Variable

Auto Trait Implementations
---

### impl !RefUnwindSafe for Variable
### impl Send for Variable
### impl Sync for Variable
### impl Unpin for Variable
### impl !UnwindSafe for Variable

Blanket Implementations
---

The standard rustdoc blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, `ToOwned`, `OwoColorize`, and `Pointable`.
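A small sketch of `Variable::new` as documented above. `Span::unknown()` and the `Type::Int` variant are assumed to exist in the surrounding crate and serve only as placeholders here:

```
use nu_protocol::{Span, Type, Variable};

fn immutable_int_var() -> Variable {
    // A variable declared at an unknown span, typed as an integer, not mutable.
    // Since `new` takes no constant, `const_val` presumably starts out as None.
    Variable::new(Span::unknown(), Type::Int, false)
}
```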
Enum nu_protocol::Category
===

```
pub enum Category {
    Bits,
    Bytes,
    Chart,
    Conversions,
    Core,
    Custom(String),
    Date,
    Debug,
    Default,
    Removed,
    Env,
    Experimental,
    FileSystem,
    Filters,
    Formats,
    Generators,
    Hash,
    Math,
    Misc,
    Network,
    Path,
    Platform,
    Random,
    Shells,
    Strings,
    System,
    Viewers,
}
```

Variants
---

`Bits`, `Bytes`, `Chart`, `Conversions`, `Core`, `Custom(String)`, `Date`, `Debug`, `Default`, `Removed`, `Env`, `Experimental`, `FileSystem`, `Filters`, `Formats`, `Generators`, `Hash`, `Math`, `Misc`, `Network`, `Path`, `Platform`, `Random`, `Shells`, `Strings`, `System`, `Viewers`

Trait Implementations
---

### impl Clone for Category
### impl Debug for Category
### impl<'de> Deserialize<'de> for Category
### impl Display for Category
### impl PartialEq for Category
### impl Serialize for Category
### impl StructuralEq for Category
### impl StructuralPartialEq for Category

Auto Trait Implementations
---

### impl RefUnwindSafe for Category
### impl Send for Category
### impl Sync for Category
### impl Unpin for Category
### impl UnwindSafe for Category

Blanket Implementations
---

The standard rustdoc blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `Equivalent<K>`, `From<T>`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, `ToOwned`, `ToString`, `OwoColorize`, `Pointable`, `DeserializeOwned`, and erased-serde `Serialize`.
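Since `Category` is a plain data enum with one `Custom(String)` escape hatch, the usual pattern is to match on it (or attach it to a `Signature` via `.category(...)` as shown earlier). A tiny sketch; the label strings are arbitrary and only the variant names and the documented `Debug` impl are taken from the listing above:

```
use nu_protocol::Category;

// Map a category to a heading label; purely illustrative.
fn help_heading(cat: &Category) -> String {
    match cat {
        Category::Core => "Core commands".into(),
        Category::Strings => "String handling".into(),
        Category::Custom(name) => format!("Custom: {name}"),
        other => format!("{other:?}"),
    }
}
```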
Enum nu_protocol::DataSource
===

```
pub enum DataSource {
    Ls,
    HtmlThemes,
    Profiling(Vec<Value>),
}
```

Variants
---

`Ls`, `HtmlThemes`, `Profiling(Vec<Value>)`

Trait Implementations
---

### impl Clone for DataSource
### impl Debug for DataSource

Auto Trait Implementations
---

### impl !RefUnwindSafe for DataSource
### impl Send for DataSource
### impl Sync for DataSource
### impl Unpin for DataSource
### impl !UnwindSafe for DataSource

Blanket Implementations
---

The standard rustdoc blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, `ToOwned`, `OwoColorize`, and `Pointable`.
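`DataSource` is a small tag enum, so code that needs to branch on a data-source tag typically just matches on it. A minimal sketch; the returned labels are arbitrary:

```
use nu_protocol::DataSource;

// Describe a data-source tag in human-readable form.
fn describe(source: &DataSource) -> &'static str {
    match source {
        DataSource::Ls => "directory listing from `ls`",
        DataSource::HtmlThemes => "built-in HTML theme table",
        DataSource::Profiling(_) => "profiling data",
    }
}
```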
Enum nu_protocol::PipelineData
===

```
pub enum PipelineData {
    Value(Value, Option<Box<PipelineMetadata>>),
    ListStream(ListStream, Option<Box<PipelineMetadata>>),
    ExternalStream {
        stdout: Option<RawStream>,
        stderr: Option<RawStream>,
        exit_code: Option<ListStream>,
        span: Span,
        metadata: Option<Box<PipelineMetadata>>,
        trim_end_newline: bool,
    },
    Empty,
}
```

The foundational abstraction for input and output to commands.

This represents either a single Value or a stream of values coming into or leaving a command.

A note on implementation: we have tried a few variations of this structure, listed below for the record.

* We tried always assuming a stream in Nushell. This was a great 80% solution, but it had some rough edges. Namely, how do you know the difference between a single string and a list of one string? How do you know when to flatten the data given to you from a data source into the stream, or to keep it as an unflattened list?
* We tried putting the stream into Value. This had some interesting properties, as commands now "just worked on values", but it led to a few unfortunate issues. The first is that you cannot easily clone Values in a way that feels largely immutable. For example, if you cloned a Value which contained a stream, and in one variable drained some part of it, then the second variable would see different values based on what you did to the first. To make this kind of mutation thread-safe, we would have had to produce a lock for the stream, which in practice would have meant always locking the stream before reading from it.
But more fundamentally, it felt wrong in practice that observation of a value at runtime could affect other values which happen to alias the same stream. By separating these, we avoid this effect. Instead, variables get concrete list values rather than streams, and can view them without non-local effects.

* A balance of the two approaches is what we have landed on: Values are thread-safe to pass, and we can stream them in from any source. Streams are still available to model the infinite-stream approach of original Nushell.

Variants
---

### Value(Value, Option<Box<PipelineMetadata>>)
### ListStream(ListStream, Option<Box<PipelineMetadata>>)
### ExternalStream
Fields: `stdout: Option<RawStream>`, `stderr: Option<RawStream>`, `exit_code: Option<ListStream>`, `span: Span`, `metadata: Option<Box<PipelineMetadata>>`, `trim_end_newline: bool`
### Empty

Implementations
---

### impl PipelineData

#### pub fn new_with_metadata(metadata: Option<Box<PipelineMetadata>>, span: Span) -> PipelineData
#### pub fn new_external_stream_with_only_exit_code(exit_code: i64) -> PipelineData
Create a `PipelineData::ExternalStream` that carries only the given exit code. Useful to stop a run without raising an error at the user level.
#### pub fn empty() -> PipelineData
#### pub fn metadata(&self) -> Option<Box<PipelineMetadata>>
#### pub fn set_metadata(self, metadata: Option<Box<PipelineMetadata>>) -> Self
#### pub fn is_nothing(&self) -> bool
#### pub fn span(&self) -> Option<Span>
PipelineData doesn't always have a Span, but we can try!
#### pub fn into_value(self, span: Span) -> Value
#### pub fn drain(self) -> Result<(), ShellError>
#### pub fn drain_with_exit_code(self) -> Result<i64, ShellError>
#### pub fn into_iter_strict(self, span: Span) -> Result<PipelineIterator, ShellError>
Try to convert `self` into an iterator. Returns `Err` if `self` cannot be converted to an iterator.
#### pub fn into_interruptible_iter(self, ctrlc: Option<Arc<AtomicBool>>) -> PipelineIterator
#### pub fn collect_string(self, separator: &str, config: &Config) -> Result<String, ShellError>
#### pub fn collect_string_strict(self, span: Span) -> Result<(String, Span, Option<Box<PipelineMetadata>>), ShellError>
Retrieves a string from pipeline data. As opposed to `collect_string`, this raises an error rather than converting non-string values. The `span` will be used if a `ListStream` is encountered, since it doesn't carry a span.
#### pub fn follow_cell_path(self, cell_path: &[PathMember], head: Span, insensitive: bool) -> Result<Value, ShellError>
#### pub fn upsert_cell_path(&mut self, cell_path: &[PathMember], callback: Box<dyn FnOnce(&Value) -> Value>, head: Span) -> Result<(), ShellError>
#### pub fn map<F>(self, f: F, ctrlc: Option<Arc<AtomicBool>>) -> Result<PipelineData, ShellError> where Self: Sized, F: FnMut(Value) -> Value + 'static + Send
Simplified mapper to help with simple values.
#### pub fn flat_map<U, F>(self, f: F, ctrlc: Option<Arc<AtomicBool>>) -> Result<PipelineData, ShellError> where Self: Sized, U: IntoIterator<Item = Value> + 'static, <U as IntoIterator>::IntoIter: 'static + Send, F: FnMut(Value) -> U + 'static + Send
Simplified flat-mapper.
For full iterator support use `.into_iter()` instead.
#### pub fn filter<F>(self, f: F, ctrlc: Option<Arc<AtomicBool>>) -> Result<PipelineData, ShellError> where Self: Sized, F: FnMut(&Value) -> bool + 'static + Send
#### pub fn is_external_failed(self) -> (Self, bool)
Try to catch the external stream's exit status and detect whether it failed. This is useful for commands ending with a semicolon: errors can be detected early so the commands after the semicolon are not run. Returns `self` and a flag indicating whether the external stream failed. If `self` is not `PipelineData::ExternalStream`, the flag will be false.
#### pub fn try_expand_range(self) -> Result<PipelineData, ShellError>
Try to convert a `Value::Range` into a `Value::List`. This is useful for expanding `Value::Range` into array notation, specifically when converting `to json` or `to nuon`: `1..3 | to XX -> [1,2,3]`.
#### pub fn print(self, engine_state: &EngineState, stack: &mut Stack, no_newline: bool, to_stderr: bool) -> Result<i64, ShellError>
Consume and print the data immediately. `no_newline` controls whether a newline character is attached to the output. `to_stderr` controls whether the data is written to stderr; when it is false, the data is written to stdout.
#### pub fn print_not_formatted(self, engine_state: &EngineState, no_newline: bool, to_stderr: bool) -> Result<i64, ShellError>
Consume and print the data immediately. Unlike `print`, this does not call `table` to format the data and just prints it, one element per line.
* `no_newline` controls whether a newline character is attached to the output.
* `to_stderr` controls whether the data is written to stderr; when it is false, the data is written to stdout.

Trait Implementations
---

### impl Debug for PipelineData
### impl IntoIterator for PipelineData
`Item = Value`; `IntoIter = PipelineIterator`. Creates an iterator from a value.

Auto Trait Implementations
---

### impl !RefUnwindSafe for PipelineData
### impl Send for PipelineData
### impl !Sync for PipelineData
### impl Unpin for PipelineData
### impl !UnwindSafe for PipelineData

Blanket Implementations
---

### impl<I> IntoInterruptiblePipelineData for I where I: IntoIterator + Send + 'static, <I as IntoIterator>::IntoIter: Send + 'static, <<I as IntoIterator>::IntoIter as Iterator>::Item: Into<Value>
#### fn into_pipeline_data(self, ctrlc: Option<Arc<AtomicBool>>) -> PipelineData
#### fn into_pipeline_data_with_metadata(self, metadata: impl Into<Option<Box<PipelineMetadata>>>, ctrlc: Option<Arc<AtomicBool>>) -> PipelineData

The remaining standard rustdoc blanket implementations also apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Into<U>`, `TryFrom<U>`, `TryInto<U>`, `OwoColorize`, and `Pointable`.
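A sketch of how a command body might pass data through `PipelineData` using only the methods listed above (`empty`, `is_nothing`, `map`). The `Value::String { val, span }` variant shape is an assumption about the wider crate API and may differ between versions:

```
use nu_protocol::{PipelineData, ShellError, Value};

// Upper-case every string value flowing through the pipeline,
// passing all other values through unchanged.
fn shout(input: PipelineData) -> Result<PipelineData, ShellError> {
    if input.is_nothing() {
        return Ok(PipelineData::empty());
    }
    // `map` applies the closure to each Value; `None` means no Ctrl-C handling here.
    input.map(
        |value| match value {
            // Assumed variant shape; the real Value layout may differ.
            Value::String { val, span } => Value::String { val: val.to_uppercase(), span },
            other => other,
        },
        None,
    )
}
```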
Enum nu_protocol::ShellError
===

```
pub enum ShellError {
    OperatorMismatch { op_span: Span, lhs_ty: String, lhs_span: Span, rhs_ty: String, rhs_span: Span },
    OperatorOverflow { msg: String, span: Span, help: String },
    PipelineMismatch { exp_input_type: String, dst_span: Span, src_span: Span },
    OnlySupportsThisInputType { exp_input_type: String, wrong_type: String, dst_span: Span, src_span: Span },
    PipelineEmpty { dst_span: Span },
    TypeMismatch { err_message: String, span: Span },
    IncorrectValue { msg: String, val_span: Span, call_span: Span },
    UnsupportedOperator { operator: Operator, span: Span },
    AssignmentRequiresVar { lhs_span: Span },
    AssignmentRequiresMutableVar { lhs_span: Span },
    UnknownOperator { op_token: String, span: Span },
    MissingParameter { param_name: String, span: Span },
    IncompatibleParameters { left_message: String, left_span: Span, right_message: String, right_span: Span },
    DelimiterError { msg: String, span: Span },
    IncompatibleParametersSingle { msg: String, span: Span },
    ExternalNotSupported { span: Span },
    InvalidProbability { span: Span },
    InvalidRange { left_flank: String, right_flank: String, span: Span },
    NushellFailed { msg: String },
    NushellFailedSpanned { msg: String, label: String, span: Span },
    NushellFailedHelp { msg: String, help: String },
    VariableNotFoundAtRuntime { span: Span },
    EnvVarNotFoundAtRuntime { envvar_name: String, span: Span },
    ModuleNotFoundAtRuntime { mod_name: String, span: Span },
    OverlayNotFoundAtRuntime { overlay_name: String, span: Span },
    NotFound { span: Span },
    CantConvert { to_type: String, from_type: String, span: Span, help: Option<String> },
    CantConvertToDuration { details: String, dst_span: Span, src_span: Span, help: Option<String> },
    EnvVarNotAString { envvar_name: String, span: Span },
    AutomaticEnvVarSetManually { envvar_name: String, span: Span },
    CannotReplaceEnv { span: Span },
    DivisionByZero { span: Span },
    CannotCreateRange { span: Span },
    AccessBeyondEnd { max_idx: usize, span: Span },
    InsertAfterNextFreeIndex { available_idx: usize, span: Span },
    AccessEmptyContent { span: Span },
    AccessBeyondEndOfStream { span: Span },
    IncompatiblePathAccess { type_name: String, span: Span },
    CantFindColumn { col_name: String, span: Span, src_span: Span },
    ColumnAlreadyExists { col_name: String, span: Span, src_span: Span },
    NotAList { dst_span: Span, src_span: Span },
    ColumnDefinedTwice { second_use: Span, first_use: Span },
    ExternalCommand { label: String, help: String, span: Span },
    UnsupportedInput(String, String, Span, Span),
    DatetimeParseError(String, Span),
    NetworkFailure(String, Span),
    CommandNotFound(Span),
    AliasNotFound(Span),
    FlagNotFound(String, Span),
    FileNotFound(Span),
    FileNotFoundCustom(String, Span),
    PluginFailedToLoad(String),
    PluginFailedToEncode(String),
    PluginFailedToDecode(String),
    IOInterrupted(String, Span),
    IOError(String),
    IOErrorSpanned(String, Span),
    PermissionDeniedError(String, Span),
    OutOfMemoryError(String, Span),
    NotADirectory(Span),
    DirectoryNotFound(Span, String),
    DirectoryNotFoundCustom(String, Span),
    MoveNotPossible { source_message: String, source_span: Span, destination_message: String, destination_span: Span },
    MoveNotPossibleSingle(String, Span),
    CreateNotPossible(String, Span),
    ChangeAccessTimeNotPossible(String, Span),
    ChangeModifiedTimeNotPossible(String, Span),
    RemoveNotPossible(String, Span),
    NoFileToBeRemoved(/* private fields */),
    NoFileToBeMoved(/* private fields */),
    NoFileToBeCopied(/* private fields */),
    ReadingFile(String, Span),
    DidYouMean(String, Span),
    DidYouMeanCustom(String, String, Span),
    NonUtf8(Span),
    NonUtf8Custom(String, Span),
    DowncastNotPossible(String, Span),
    UnsupportedConfigValue(String, String, Span),
    MissingConfigValue(String, Span),
    NeedsPositiveValue(Span),
    GenericError(String, String, Option<Span>, Option<String>, Vec<ShellError>),
    OutsideSpannedLabeledError(String, String, String, Span),
    RemovedCommand(String, String, Span),
    NonUnicodeInput,
    UnexpectedAbbrComponent(String),
    EvalBlockWithInput(Span, Vec<ShellError>),
    Break(Span),
    Continue(Span),
    Return(Span, Box<Value>),
    RecursionLimitReached { recursion_limit: u64, span: Option<Span> },
    LazyRecordAccessFailed { message: String, column_name: String, span: Span },
    InterruptedByUser { span: Option<Span> },
    MatchGuardNotBool { span: Span },
    MissingConstEvalImpl { span: Span },
    NotAConstant(Span),
    NotAConstCommand(Span),
    NotAConstHelp(Span),
}
```

The fundamental error type for the evaluation engine.
These cases represent different kinds of errors the evaluator might face, along with helpful spans to label. An error renderer will take this error value and pass it into an error viewer to display to the user. Variants --- ### OperatorMismatch #### Fields `op_span: Span``lhs_ty: String``lhs_span: Span``rhs_ty: String``rhs_span: Span`An operator received two arguments of incompatible types. ##### Resolution Check each argument’s type and convert one or both as needed. ### OperatorOverflow #### Fields `msg: String``span: Span``help: String`An arithmetic operation’s resulting value overflowed its possible size. ##### Resolution Check the inputs to the operation and add guards for their sizes. Integers are generally of size i64, floats are generally f64. ### PipelineMismatch #### Fields `exp_input_type: String``dst_span: Span``src_span: Span`The pipelined input into a command was not of the expected type. For example, it might expect a string input, but received a table instead. ##### Resolution Check the relevant pipeline and extract or convert values as needed. ### OnlySupportsThisInputType #### Fields `exp_input_type: String``wrong_type: String``dst_span: Span``src_span: Span`The pipelined input into a command was not of the expected type. For example, it might expect a string input, but received a table instead. (duplicate of `ShellError::PipelineMismatch` that reports the observed type) ##### Resolution Check the relevant pipeline and extract or convert values as needed. ### PipelineEmpty #### Fields `dst_span: Span`No input value was piped into the command. ##### Resolution Only use this command to process values from a previous expression. ### TypeMismatch #### Fields `err_message: String``span: Span`A command received an argument of the wrong type. ##### Resolution Convert the argument type before passing it in, or change the command to accept the type. ### IncorrectValue #### Fields `msg: String``val_span: Span``call_span: Span`A command received an argument with correct type but incorrect value. ##### Resolution Correct the argument value before passing it in or change the command. ### UnsupportedOperator #### Fields `operator: Operator``span: Span`This value cannot be used with this operator. ##### Resolution Not all values, for example custom values, can be used with all operators. Either implement support for the operator on this type, or convert the type to a supported one. ### AssignmentRequiresVar #### Fields `lhs_span: Span`Invalid assignment left-hand side ##### Resolution Assignment requires that you assign to a variable or variable cell path. ### AssignmentRequiresMutableVar #### Fields `lhs_span: Span`Invalid assignment left-hand side ##### Resolution Assignment requires that you assign to a mutable variable or cell path. ### UnknownOperator #### Fields `op_token: String``span: Span`An operator was not recognized during evaluation. ##### Resolution Did you write the correct operator? ### MissingParameter #### Fields `param_name: String``span: Span`An expected command parameter is missing. ##### Resolution Add the expected parameter and try again. ### IncompatibleParameters #### Fields `left_message: String``left_span: Span``right_message: String``right_span: Span`Two parameters conflict with each other or are otherwise mutually exclusive. ##### Resolution Remove one of the parameters/options and try again. ### DelimiterError #### Fields `msg: String``span: Span`There’s some issue with number or matching of delimiters in an expression. 
##### Resolution Check your syntax for mismatched braces, RegExp syntax errors, etc, based on the specific error message. ### IncompatibleParametersSingle #### Fields `msg: String``span: Span`An operation received parameters with some sort of incompatibility (for example, different number of rows in a table, incompatible column names, etc). ##### Resolution Refer to the specific error message for details on what’s incompatible and then fix your inputs to make sure they match that way. ### ExternalNotSupported #### Fields `span: Span`You’re trying to run an unsupported external command. ##### Resolution Make sure there’s an appropriate `run-external` declaration for this external command. ### InvalidProbability #### Fields `span: Span`The given probability input is invalid. The probability must be between 0 and 1. ##### Resolution Make sure the probability is between 0 and 1 and try again. ### InvalidRange #### Fields `left_flank: String``right_flank: String``span: Span`The first value in a `..` range must be compatible with the second one. ##### Resolution Check to make sure both values are compatible, and that the values are enumerable in Nushell. ### NushellFailed #### Fields `msg: String`Catastrophic nushell failure. This reflects a completely unexpected or unrecoverable error. ##### Resolution It is very likely that this is a bug. Please file an issue at https://github.com/nushell/nushell/issues with relevant information. ### NushellFailedSpanned #### Fields `msg: String``label: String``span: Span`Catastrophic nushell failure. This reflects a completely unexpected or unrecoverable error. ##### Resolution It is very likely that this is a bug. Please file an issue at https://github.com/nushell/nushell/issues with relevant information. ### NushellFailedHelp #### Fields `msg: String``help: String`Catastrophic nushell failure. This reflects a completely unexpected or unrecoverable error. ##### Resolution It is very likely that this is a bug. Please file an issue at https://github.com/nushell/nushell/issues with relevant information. ### VariableNotFoundAtRuntime #### Fields `span: Span`A referenced variable was not found at runtime. ##### Resolution Check the variable name. Did you typo it? Did you forget to declare it? Is the casing right? ### EnvVarNotFoundAtRuntime #### Fields `envvar_name: String``span: Span`A referenced environment variable was not found at runtime. ##### Resolution Check the environment variable name. Did you typo it? Did you forget to declare it? Is the casing right? ### ModuleNotFoundAtRuntime #### Fields `mod_name: String``span: Span`A referenced module was not found at runtime. ##### Resolution Check the module name. Did you typo it? Did you forget to declare it? Is the casing right? ### OverlayNotFoundAtRuntime #### Fields `overlay_name: String``span: Span`A referenced overlay was not found at runtime. ##### Resolution Check the overlay name. Did you typo it? Did you forget to declare it? Is the casing right? ### NotFound #### Fields `span: Span`The given item was not found. This is a fairly generic error that depends on context. ##### Resolution This error is triggered in various places, and simply signals that “something” was not found. Refer to the specific error message for further details. ### CantConvert #### Fields `to_type: String``from_type: String``span: Span``help: Option<String>`Failed to convert a value of one type into a different type. ##### Resolution Not all values can be coerced this way. Check the supported type(s) and try again. 
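To make the `CantConvert` entry above concrete, here is a minimal sketch of how command code might build this error from a pipeline value. The helper name `require_int` and the help text are made up for the example; the `ShellError::CantConvert` fields and the `Value` accessors it leans on (`as_int`, `get_type`, `span`) are the ones documented in this section.

```
use nu_protocol::{ShellError, Value};

// Hypothetical helper: coerce a pipeline value into an i64, reporting a
// CantConvert error with the observed type and a help hint when it fails.
fn require_int(value: &Value) -> Result<i64, ShellError> {
    value.as_int().map_err(|_| ShellError::CantConvert {
        to_type: "int".into(),
        from_type: value.get_type().to_string(),
        span: value.span(),
        help: Some("only integer values are accepted here".into()),
    })
}
```

Building the error from the value's own span keeps the diagnostic pointing at the offending expression rather than at the command definition.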
### CantConvertToDuration #### Fields `details: String``dst_span: Span``src_span: Span``help: Option<String>`### EnvVarNotAString #### Fields `envvar_name: String``span: Span`An environment variable cannot be represented as a string. ##### Resolution Not all types can be converted to environment variable values, which must be strings. Check the input type and try again. ### AutomaticEnvVarSetManually #### Fields `envvar_name: String``span: Span`This environment variable cannot be set manually. ##### Resolution This environment variable is set automatically by Nushell and cannot be set manually. ### CannotReplaceEnv #### Fields `span: Span`It is not possible to replace the entire environment at once ##### Resolution Setting the entire environment is not allowed. Change environment variables individually instead. ### DivisionByZero #### Fields `span: Span`Division by zero is not a thing. ##### Resolution Add a guard of some sort to check whether a denominator input to this division is zero, and branch off if that’s the case. ### CannotCreateRange #### Fields `span: Span`An error happened while trying to create a range. This can happen in various unexpected situations, for example if the range would loop forever (as would be the case with a 0-increment). ##### Resolution Check your range values to make sure they’re countable and would not loop forever. ### AccessBeyondEnd #### Fields `max_idx: usize``span: Span`You attempted to access an index beyond the available length of a value. ##### Resolution Check your lengths and try again. ### InsertAfterNextFreeIndex #### Fields `available_idx: usize``span: Span`You attempted to insert data at a list position higher than the end. ##### Resolution To insert data into a list, assign to the last used index + 1. ### AccessEmptyContent #### Fields `span: Span`You attempted to access an index when it’s empty. ##### Resolution Check your lengths and try again. ### AccessBeyondEndOfStream #### Fields `span: Span`You attempted to access an index beyond the available length of a stream. ##### Resolution Check your lengths and try again. ### IncompatiblePathAccess #### Fields `type_name: String``span: Span`Tried to index into a type that does not support pathed access. ##### Resolution Check your types. Only composite types can be pathed into. ### CantFindColumn #### Fields `col_name: String``span: Span``src_span: Span`The requested column does not exist. ##### Resolution Check the spelling of your column name. Did you forget to rename a column somewhere? ### ColumnAlreadyExists #### Fields `col_name: String``span: Span``src_span: Span`Attempted to insert a column into a table, but a column with that name already exists. ##### Resolution Drop or rename the existing column (check `rename -h`) and try again. ### NotAList #### Fields `dst_span: Span``src_span: Span`The given operation can only be performed on lists. ##### Resolution Check the input type to this command. Are you sure it’s a list? ### ColumnDefinedTwice #### Fields `second_use: Span``first_use: Span`Fields can only be defined once ##### Resolution Check the record to ensure you aren’t reusing the same field name ### ExternalCommand #### Fields `label: String``help: String``span: Span`An error happened while performing an external command. ##### Resolution This error is fairly generic. Refer to the specific error message for further details. ### UnsupportedInput(String, String, Span, Span) An operation was attempted with an input unsupported for some reason.
##### Resolution This error is fairly generic. Refer to the specific error message for further details. ### DatetimeParseError(String, Span) Failed to parse an input into a datetime value. ##### Resolution Make sure your datetime input format is correct. For example, these are some valid formats: * “5 pm” * “2020/12/4” * “2020.12.04 22:10 +2” * “2020-04-12 22:10:57 +02:00” * “2020-04-12T22:10:57.213231+02:00” * “Tue, 1 Jul 2003 10:52:37 +0200” ### NetworkFailure(String, Span) A network operation failed. ##### Resolution It’s always DNS. ### CommandNotFound(Span) Help text for this command could not be found. ##### Resolution Check the spelling for the requested command and try again. Are you sure it’s defined and your configurations are loading correctly? Can you execute it? ### AliasNotFound(Span) This alias could not be found ##### Resolution The alias does not exist in the current scope. It might exist in another scope or overlay or be hidden. ### FlagNotFound(String, Span) A flag was not found. ### FileNotFound(Span) Failed to find a file during a nushell operation. ##### Resolution Does the file in the error message exist? Is it readable and accessible? Is the casing right? ### FileNotFoundCustom(String, Span) Failed to find a file during a nushell operation. ##### Resolution Does the file in the error message exist? Is it readable and accessible? Is the casing right? ### PluginFailedToLoad(String) A plugin failed to load. ##### Resolution This is a fairly generic error. Refer to the specific error message for further details. ### PluginFailedToEncode(String) A message from a plugin failed to encode. ##### Resolution This is likely a bug with the plugin itself. ### PluginFailedToDecode(String) A message to a plugin failed to decode. ##### Resolution This is either an issue with the inputs to a plugin (bad JSON?) or a bug in the plugin itself. Fix or report as appropriate. ### IOInterrupted(String, Span) I/O operation interrupted. ##### Resolution This is a generic error. Refer to the specific error message for further details. ### IOError(String) An I/O operation failed. ##### Resolution This is a generic error. Refer to the specific error message for further details. ### IOErrorSpanned(String, Span) An I/O operation failed. ##### Resolution This is a generic error. Refer to the specific error message for further details. ### PermissionDeniedError(String, Span) Permission for an operation was denied. ##### Resolution This is a generic error. Refer to the specific error message for further details. ### OutOfMemoryError(String, Span) Out of memory. ##### Resolution This is a generic error. Refer to the specific error message for further details. ### NotADirectory(Span) Tried to `cd` to a path that isn’t a directory. ##### Resolution Make sure the path is a directory. It currently exists, but is of some other type, like a file. ### DirectoryNotFound(Span, String) Attempted to perform an operation on a directory that doesn’t exist. ##### Resolution Make sure the directory in the error message actually exists before trying again. ### DirectoryNotFoundCustom(String, Span) Attempted to perform an operation on a directory that doesn’t exist. ##### Resolution Make sure the directory in the error message actually exists before trying again. ### MoveNotPossible #### Fields `source_message: String``source_span: Span``destination_message: String``destination_span: Span`The requested move operation cannot be completed. This is typically because both paths exist, but are of different types.
For example, you might be trying to overwrite an existing file with a directory. ##### Resolution Make sure the destination path does not exist before moving a directory. ### MoveNotPossibleSingle(String, Span) The requested move operation cannot be completed. This is typically because both paths exist, but are of different types. For example, you might be trying to overwrite an existing file with a directory. ##### Resolution Make sure the destination path does not exist before moving a directory. ### CreateNotPossible(String, Span) Failed to create either a file or directory. ##### Resolution This is a fairly generic error. Refer to the specific error message for further details. ### ChangeAccessTimeNotPossible(String, Span) Changing the access time (“atime”) of this file is not possible. ##### Resolution This can be for various reasons, such as your platform or permission flags. Refer to the specific error message for more details. ### ChangeModifiedTimeNotPossible(String, Span) Changing the modification time (“mtime”) of this file is not possible. ##### Resolution This can be for various reasons, such as your platform or permission flags. Refer to the specific error message for more details. ### RemoveNotPossible(String, Span) Unable to remove this item. ##### Resolution Removal can fail for a number of reasons, such as permissions problems. Refer to the specific error message for more details. ### NoFileToBeRemoved(/* private fields */) ### NoFileToBeMoved(/* private fields */) ### NoFileToBeCopied(/* private fields */) ### ReadingFile(String, Span) Error while trying to read a file ##### Resolution The error will show the result from a file operation ### DidYouMean(String, Span) A name was not found. Did you mean a different name? ##### Resolution The error message will suggest a possible match for what you meant. ### DidYouMeanCustom(String, String, Span) A name was not found. Did you mean a different name? ##### Resolution The error message will suggest a possible match for what you meant. ### NonUtf8(Span) The given input must be valid UTF-8 for further processing. ##### Resolution Check your input’s encoding. Are there any funny characters/bytes? ### NonUtf8Custom(String, Span) The given input must be valid UTF-8 for further processing. ##### Resolution Check your input’s encoding. Are there any funny characters/bytes? ### DowncastNotPossible(String, Span) A custom value could not be converted to a Dataframe. ##### Resolution Make sure conversion to a Dataframe is possible for this value or convert it to a type that does, first. ### UnsupportedConfigValue(String, String, Span) The value given for this configuration is not supported. ##### Resolution Refer to the specific error message for details and convert values as needed. ### MissingConfigValue(String, Span) An expected configuration value is not present. ##### Resolution Refer to the specific error message and add the configuration value to your config file as needed. ### NeedsPositiveValue(Span) Negative value passed when positive one is required. ##### Resolution Guard against negative values or check your inputs. ### GenericError(String, String, Option<Span>, Option<String>, Vec<ShellError>) This is a generic error type used for different situations. ### OutsideSpannedLabeledError(String, String, String, Span) This is a generic error type used for different situations. ### RemovedCommand(String, String, Span) Attempted to use a command that has been removed from Nushell. 
##### Resolution Check the help for the new suggested command and update your script accordingly. ### NonUnicodeInput Non-Unicode input received. ##### Resolution Check that your path is UTF-8 compatible. ### UnexpectedAbbrComponent(String) Unexpected abbr component. ##### Resolution Check the path abbreviation to ensure that it is valid. ### EvalBlockWithInput(Span, Vec<ShellError>) Failed to eval block with specific pipeline input. ### Break(Span) Break event, which may become an error if used outside of a loop ### Continue(Span) Continue event, which may become an error if used outside of a loop ### Return(Span, Box<Value>) Return event, which may become an error if used outside of a function ### RecursionLimitReached #### Fields `recursion_limit: u64``span: Option<Span>`The code being executed called itself too many times. ##### Resolution Adjust your Nu code to ### LazyRecordAccessFailed #### Fields `message: String``column_name: String``span: Span`An attempt to access a record column failed. ### InterruptedByUser #### Fields `span: Option<Span>`Operation interrupted by user ### MatchGuardNotBool #### Fields `span: Span`An attempt to use, as a match guard, an expression that does not resolve into a boolean ### MissingConstEvalImpl #### Fields `span: Span`An attempt to run a command marked for constant evaluation lacking the const. eval. implementation. This is an internal Nushell error, please file an issue. ### NotAConstant(Span) Tried assigning non-constant value to a constant ##### Resolution Only a subset of expressions are allowed to be assigned as a constant during parsing. ### NotAConstCommand(Span) Tried running a command that is not const-compatible ##### Resolution Only a subset of builtin commands, and custom commands built only from those commands, can run at parse time. ### NotAConstHelp(Span) Tried getting a help message at parse time. ##### Resolution Help messages are not supported at parse time. Implementations --- ### impl ShellError #### pub fn wrap(self, working_set: &StateWorkingSet<'_>, span: Span) -> ParseError Trait Implementations --- ### impl Clone for ShellError #### fn clone(&self) -> ShellError Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn code(&self) -> Option<Box<dyn Display + '_>Unique diagnostic code that can be used to look up more information about this `Diagnostic`. Ideally also globally unique, and documented in the toplevel crate’s documentation for easy searching. Rust path format (`foo::bar::baz`) is recommended, but more classic codes like `E0123` or enums will work just fine.#### fn help(&self) -> Option<Box<dyn Display + '_>Additional help text related to this `Diagnostic`. Do you have any advice for the poor soul who’s just run into this issue?#### fn labels(&self) -> Option<Box<dyn Iterator<Item = LabeledSpan> + '_>Labels to apply to this `Diagnostic`’s `Diagnostic::source_code`#### fn source_code(&self) -> Option<&dyn SourceCodeSource code to apply this `Diagnostic`’s `Diagnostic::labels` to.#### fn related(&self) -> Option<Box<dyn Iterator<Item = &dyn Diagnostic> + '_>Additional related `Diagnostic`s.#### fn severity(&self) -> Option<SeverityDiagnostic severity. 
This may be used by `ReportHandler`s to change the display format of this diagnostic. `Diagnostic`.#### fn diagnostic_source(&self) -> Option<&dyn DiagnosticThe cause of the error.### impl Display for ShellError #### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result Formats the value using the given formatter. 1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)The lower-level source of this error, if any. Read more1.0.0 · source#### fn description(&self) -> &str 👎Deprecated since 1.42.0: use the Display impl or to_string() Read more1.0.0 · source#### fn cause(&self) -> Option<&dyn Error👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting#### fn provide<'a>(&'a self, request: &mut Request<'a>) 🔬This is a nightly-only experimental API. (`error_generic_member_access`)Provides type based access to context intended for error reports. #### fn from(input: Box<dyn Error>) -> ShellError Converts to this type from the input type.### impl From<Box<dyn Error + Send + Sync, Global>> for ShellError #### fn from(input: Box<dyn Error + Send + Sync>) -> ShellError Converts to this type from the input type.### impl From<Error> for ShellError #### fn from(input: Error) -> ShellError Converts to this type from the input type.### impl Serialize for ShellError #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where __S: Serializer, Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations --- ### impl !RefUnwindSafe for ShellError ### impl Send for ShellError ### impl Sync for ShellError ### impl Unpin for ShellError ### impl !UnwindSafe for ShellError Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. 
If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. T: Serialize + ?Sized, #### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> DeserializeOwned for Twhere T: for<'de> Deserialize<'de>, Enum nu_protocol::SyntaxShape === ``` pub enum SyntaxShape { Any, Binary, Block, Boolean, CellPath, Closure(Option<Vec<SyntaxShape>>), CompleterWrapper(Box<SyntaxShape>, DeclId), DateTime, Directory, Duration, Error, Expression, Filepath, Filesize, Float, FullCellPath, GlobPattern, Int, ImportPattern, Keyword(Vec<u8>, Box<SyntaxShape>), List(Box<SyntaxShape>), MathExpression, MatchBlock, MatchPattern, Nothing, Number, OneOf(Vec<SyntaxShape>), Operator, Range, Record(Vec<(String, SyntaxShape)>), RowCondition, Signature, String, Table(Vec<(String, SyntaxShape)>), VarWithOptType, } ``` The syntactic shapes that describe how a sequence should be parsed. This extends beyond `Type` which describes how `Value`s are represented. `SyntaxShape`s can describe the parsing rules for arguments to a command. e.g. 
`SyntaxShape::GlobPattern`/`SyntaxShape::Filepath` serve the completer, but don’t have an associated `Value` There are additional `SyntaxShape`s that only make sense in particular expressions or keywords Variants --- ### Any Any syntactic form is allowed ### Binary A binary literal ### Block A block is allowed, eg `{start this thing}` ### Boolean A boolean value, eg `true` or `false` ### CellPath A dotted path to navigate the table ### Closure(Option<Vec<SyntaxShape>>) A closure is allowed, eg `{|| start this thing}` ### CompleterWrapper(Box<SyntaxShape>, DeclId) A `SyntaxShape` with custom completion logic ### DateTime A datetime value, eg `2022-02-02` or `2019-10-12T07:20:50.52+00:00` ### Directory A directory is allowed ### Duration A duration value is allowed, eg `19day` ### Error An error value ### Expression A general expression, eg `1 + 2` or `foo --bar` ### Filepath A filepath is allowed ### Filesize A filesize value is allowed, eg `10kb` ### Float A floating point value, eg `1.0` ### FullCellPath A dotted path including the variable to access items Fully qualified ### GlobPattern A glob pattern is allowed, eg `foo*` ### Int Only an integer value is allowed ### ImportPattern A module path pattern used for imports ### Keyword(Vec<u8>, Box<SyntaxShape>) A specific match to a word or symbol ### List(Box<SyntaxShape>) A list is allowed, eg `[first second]` ### MathExpression A general math expression, eg `1 + 2` ### MatchBlock A block of matches, used by `match` ### MatchPattern A match pattern, eg `{a: $foo}` ### Nothing Nothing ### Number Only a numeric (integer or float) value is allowed ### OneOf(Vec<SyntaxShape>) One of a list of possible items, checked in order ### Operator An operator, eg `+` ### Range A range is allowed (eg, `1..3`) ### Record(Vec<(String, SyntaxShape)>) A record value, eg `{x: 1, y: 2}` ### RowCondition A math expression which expands shorthand forms on the lefthand side, eg `foo > 1` The shorthand allows us to more easily reach columns inside of the row being passed in ### Signature A signature for a definition, `[x:int, --foo]` ### String Strings and string-like bare words are allowed ### Table(Vec<(String, SyntaxShape)>) A table is allowed, eg `[[first, second]; [1, 2]]` ### VarWithOptType A variable with optional type, `x` or `x: int` Implementations --- ### impl SyntaxShape #### pub fn to_type(&self) -> Type If possible provide the associated concrete `Type` Note: Some `SyntaxShape`s don’t have a corresponding `Value` Here we currently return `Type::Any` ``` use nu_protocol::{SyntaxShape, Type}; let non_value = SyntaxShape::ImportPattern; assert_eq!(non_value.to_type(), Type::Any); ``` Trait Implementations --- ### impl Clone for SyntaxShape #### fn clone(&self) -> SyntaxShape Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn eq(&self, other: &SyntaxShape) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for SyntaxShape #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where __S: Serializer, Serialize this value into the given Serde serializer. ### impl StructuralEq for SyntaxShape ### impl StructuralPartialEq for SyntaxShape Auto Trait Implementations --- ### impl RefUnwindSafe for SyntaxShape ### impl Send for SyntaxShape ### impl Sync for SyntaxShape ### impl Unpin for SyntaxShape ### impl UnwindSafe for SyntaxShape Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Checks if this value is equivalent to the given key. Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. T: Serialize + ?Sized, #### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. 
U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> DeserializeOwned for Twhere T: for<'de> Deserialize<'de>, Enum nu_protocol::TimePeriod === ``` pub enum TimePeriod { Nanos(i64), Micros(i64), Millis(i64), Seconds(i64), Minutes(i64), Hours(i64), Days(i64), Weeks(i64), Months(i64), Years(i64), } ``` Variants --- ### Nanos(i64) ### Micros(i64) ### Millis(i64) ### Seconds(i64) ### Minutes(i64) ### Hours(i64) ### Days(i64) ### Weeks(i64) ### Months(i64) ### Years(i64) Implementations --- ### impl TimePeriod #### pub fn to_text(self) -> Cow<'static, strTrait Implementations --- ### impl Clone for TimePeriod #### fn clone(&self) -> TimePeriod Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> FmtResult Formats the value using the given formatter. Auto Trait Implementations --- ### impl RefUnwindSafe for TimePeriod ### impl Send for TimePeriod ### impl Sync for TimePeriod ### impl Unpin for TimePeriod ### impl UnwindSafe for TimePeriod Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. 
Drops the object pointed to by the given pointer. T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Enum nu_protocol::Type === ``` pub enum Type { Any, Binary, Block, Bool, CellPath, Closure, Custom(String), Date, Duration, Error, Filesize, Float, Int, List(Box<Type>), ListStream, MatchPattern, Nothing, Number, Range, Record(Vec<(String, Type)>), Signature, String, Table(Vec<(String, Type)>), } ``` Variants --- ### Any ### Binary ### Block ### Bool ### CellPath ### Closure ### Custom(String) ### Date ### Duration ### Error ### Filesize ### Float ### Int ### List(Box<Type>) ### ListStream ### MatchPattern ### Nothing ### Number ### Range ### Record(Vec<(String, Type)>) ### Signature ### String ### Table(Vec<(String, Type)>) Implementations --- ### impl Type #### pub fn is_subtype(&self, other: &Type) -> bool #### pub fn is_numeric(&self) -> bool #### pub fn is_list(&self) -> bool #### pub fn accepts_cell_paths(&self) -> bool Does this type represent a data structure containing values that can be addressed using ‘cell paths’? #### pub fn to_shape(&self) -> SyntaxShape #### pub fn get_non_specified_string(&self) -> String Get a string representation, without inner type specification of lists, tables and records (get `list` instead of `list<any>` Trait Implementations --- ### impl Clone for Type #### fn clone(&self) -> Type Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn default() -> Type Returns the “default value” for a type. #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn hash<__H: Hasher>(&self, state: &mut __H) Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mut H)where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. #### fn eq(&self, other: &Type) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for Type #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where __S: Serializer, Serialize this value into the given Serde serializer. 
### impl StructuralEq for Type ### impl StructuralPartialEq for Type Auto Trait Implementations --- ### impl RefUnwindSafe for Type ### impl Send for Type ### impl Sync for Type ### impl Unpin for Type ### impl UnwindSafe for Type Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Checks if this value is equivalent to the given key. Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. T: Serialize + ?Sized, #### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. 
U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> DeserializeOwned for Twhere T: for<'de> Deserialize<'de>, Enum nu_protocol::Unit === ``` pub enum Unit { Byte, Kilobyte, Megabyte, Gigabyte, Terabyte, Petabyte, Exabyte, Kibibyte, Mebibyte, Gibibyte, Tebibyte, Pebibyte, Exbibyte, Nanosecond, Microsecond, Millisecond, Second, Minute, Hour, Day, Week, } ``` Variants --- ### Byte ### Kilobyte ### Megabyte ### Gigabyte ### Terabyte ### Petabyte ### Exabyte ### Kibibyte ### Mebibyte ### Gibibyte ### Tebibyte ### Pebibyte ### Exbibyte ### Nanosecond ### Microsecond ### Millisecond ### Second ### Minute ### Hour ### Day ### Week Implementations --- ### impl Unit #### pub fn to_value(&self, size: i64, span: Span) -> Result<Value, ShellErrorTrait Implementations --- ### impl Clone for Unit #### fn clone(&self) -> Unit Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn eq(&self, other: &Unit) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for Unit #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where __S: Serializer, Serialize this value into the given Serde serializer. ### impl Eq for Unit ### impl StructuralEq for Unit ### impl StructuralPartialEq for Unit Auto Trait Implementations --- ### impl RefUnwindSafe for Unit ### impl Send for Unit ### impl Sync for Unit ### impl Unpin for Unit ### impl UnwindSafe for Unit Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Checks if this value is equivalent to the given key. Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<D> OwoColorize for D #### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self>where C: Color, Set the foreground color generically C: Color, Set the background color generically. Color: DynColor, Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. 
If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`, Color: DynColor, Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`, &self ) -> FgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the foreground color to a specific RGB value.#### fn bg_rgb<const R: u8, const G: u8, const B: u8>( &self ) -> BgColorDisplay<'_, CustomColor<R, G, B>, SelfSet the background color to a specific RGB value.#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, SelfSets the foreground color to an RGB value.#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, SelfSets the background color to an RGB value.#### fn style(&self, style: Style) -> Styled<&SelfApply a runtime-determined style### impl<T> Pointable for T #### const ALIGN: usize = _ The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. T: Serialize + ?Sized, #### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> DeserializeOwned for Twhere T: for<'de> Deserialize<'de>, Enum nu_protocol::Value === ``` pub enum Value { Bool { val: bool, internal_span: Span, }, Int { val: i64, internal_span: Span, }, Float { val: f64, internal_span: Span, }, Filesize { val: i64, internal_span: Span, }, Duration { val: i64, internal_span: Span, }, Date { val: DateTime<FixedOffset>, internal_span: Span, }, Range { val: Box<Range>, internal_span: Span, }, String { val: String, internal_span: Span, }, Record { val: Record, internal_span: Span, }, List { vals: Vec<Value>, internal_span: Span, }, Block { val: BlockId, internal_span: Span, }, Closure { val: BlockId, captures: HashMap<VarId, Value>, internal_span: Span, }, Nothing { internal_span: Span, }, Error { error: Box<ShellError>, internal_span: Span, }, Binary { val: Vec<u8>, internal_span: Span, }, CellPath { val: CellPath, internal_span: Span, }, CustomValue { val: Box<dyn CustomValue>, internal_span: Span, }, LazyRecord { val: Box<dyn for<'a> LazyRecord<'a>>, internal_span: Span, }, MatchPattern { val: Box<MatchPattern>, internal_span: Span, }, } ``` Core structured values that pass through the pipeline in Nushell. 
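Before the variant listing, a minimal sketch of constructing and inspecting values. It deliberately sticks to the constructors and accessors documented on this page (`Value::test_int`, `Value::test_string`, `get_type`, `as_string`, `span`); the variable names are arbitrary, and the `test_*` helpers are used only because they do not require a real source `Span`.

```
use nu_protocol::{Type, Value};

// Build a couple of standalone values using the test helpers documented
// further below, then inspect the metadata every Value carries.
let answer = Value::test_int(42);
let greeting = Value::test_string("hello nushell");

assert!(matches!(answer.get_type(), Type::Int));
assert_eq!(greeting.as_string().unwrap(), "hello nushell");

// Every value also records the source span it originated from.
let _span = answer.span();
```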
Variants
---
Every variant carries an `internal_span: Span` field alongside the payload listed here:

* **Bool**: `val: bool`
* **Int**: `val: i64`
* **Float**: `val: f64`
* **Filesize**: `val: i64`
* **Duration**: `val: i64`
* **Date**: `val: DateTime<FixedOffset>`
* **Range**: `val: Box<Range>`
* **String**: `val: String`
* **Record**: `val: Record`
* **List**: `vals: Vec<Value>`
* **Block**: `val: BlockId`
* **Closure**: `val: BlockId`, `captures: HashMap<VarId, Value>`
* **Nothing**: no payload besides `internal_span: Span`
* **Error**: `error: Box<ShellError>`
* **Binary**: `val: Vec<u8>`
* **CellPath**: `val: CellPath`
* **CustomValue**: `val: Box<dyn CustomValue>`
* **LazyRecord**: `val: Box<dyn for<'a> LazyRecord<'a>>`
* **MatchPattern**: `val: Box<MatchPattern>`

Implementations
---
### impl Value

#### pub fn into_config(&mut self, config: &Config) -> (Config, Option<ShellError>)

### impl Value

#### pub fn as_f64(&self) -> Result<f64, ShellError>
#### pub fn as_i64(&self) -> Result<i64, ShellError>

### impl Value

#### pub fn as_bool(&self) -> Result<bool, ShellError>
#### pub fn as_int(&self) -> Result<i64, ShellError>
#### pub fn as_float(&self) -> Result<f64, ShellError>
#### pub fn as_filesize(&self) -> Result<i64, ShellError>
#### pub fn as_duration(&self) -> Result<i64, ShellError>
#### pub fn as_date(&self) -> Result<DateTime<FixedOffset>, ShellError>
#### pub fn as_range(&self) -> Result<&Range, ShellError>
#### pub fn as_string(&self) -> Result<String, ShellError>
Converts into string values that can be changed into string natively
#### pub fn as_spanned_string(&self) -> Result<Spanned<String>, ShellError>
#### pub fn as_char(&self) -> Result<char, ShellError>
#### pub fn as_path(&self) -> Result<PathBuf, ShellError>
#### pub fn as_record(&self) -> Result<&Record, ShellError>
#### pub fn as_list(&self) -> Result<&[Value], ShellError>
#### pub fn as_block(&self) -> Result<BlockId, ShellError>
#### pub fn as_closure(&self) -> Result<(BlockId, &HashMap<VarId, Value>), ShellError>
#### pub fn as_binary(&self) -> Result<&[u8], ShellError>
#### pub fn as_cell_path(&self) -> Result<&CellPath, ShellError>
#### pub fn as_custom_value(&self) -> Result<&dyn CustomValue, ShellError>
#### pub fn as_lazy_record(&self) -> Result<&dyn for<'a> LazyRecord<'a>, ShellError>
#### pub fn as_match_pattern(&self) -> Result<&MatchPattern, ShellError>
#### pub fn span(&self) -> Span
Get the span for the current value
#### pub fn with_span(self, new_span: Span) -> Value
Update the value with a new span
#### pub fn get_type(&self) -> Type
Get the type of the current Value
#### pub fn get_data_by_key(&self, name: &str) -> Option<Value>
#### pub fn nonerror_into_string(&self, separator: &str, config: &Config) -> Result<String, ShellError>
#### pub fn into_string(&self, separator: &str, config: &Config) -> String
Convert Value into string. Note that Streams will be consumed.
#### pub fn into_abbreviated_string(&self, config: &Config) -> String
Convert Value into string. Note that Streams will be consumed.
#### pub fn debug_value(&self) -> String
Convert Value into a debug string
#### pub fn into_string_parsable(&self, separator: &str, config: &Config) -> String
Convert Value into a parsable string (quote strings) bugbug other, rarer types not handled
#### pub fn debug_string(&self, separator: &str, config: &Config) -> String
Convert Value into string. Note that Streams will be consumed.
#### pub fn is_empty(&self) -> bool
Check if the content is empty
#### pub fn is_nothing(&self) -> bool
#### pub fn is_error(&self) -> bool
#### pub fn follow_cell_path(self, cell_path: &[PathMember], insensitive: bool) -> Result<Value, ShellError>
Follow a given cell path into the value: for example accessing select elements in a stream or list
#### pub fn follow_cell_path_not_from_user_input(self, cell_path: &[PathMember], insensitive: bool) -> Result<Value, ShellError>
#### pub fn upsert_cell_path(&mut self, cell_path: &[PathMember], callback: Box<dyn FnOnce(&Value) -> Value>) -> Result<(), ShellError>
Follow a given cell path into the value: for example accessing select elements in a stream or list
#### pub fn upsert_data_at_cell_path(&mut self, cell_path: &[PathMember], new_val: Value) -> Result<(), ShellError>
#### pub fn update_cell_path<'a>(&mut self, cell_path: &[PathMember], callback: Box<dyn FnOnce(&Value) -> Value + 'a>) -> Result<(), ShellError>
Follow a given cell path into the value: for example accessing select elements in a stream or list
#### pub fn update_data_at_cell_path(&mut self, cell_path: &[PathMember], new_val: Value) -> Result<(), ShellError>
#### pub fn remove_data_at_cell_path(&mut self, cell_path: &[PathMember]) -> Result<(), ShellError>
#### pub fn insert_data_at_cell_path(&mut self, cell_path: &[PathMember], new_val: Value, head_span: Span) -> Result<(), ShellError>
#### pub fn is_true(&self) -> bool
#### pub fn is_false(&self) -> bool
#### pub fn columns(&self) -> &[String]
#### pub fn bool(val: bool, span: Span) -> Value
#### pub fn int(val: i64, span: Span) -> Value
#### pub fn float(val: f64, span: Span) -> Value
#### pub fn filesize(val: i64, span: Span) -> Value
#### pub fn duration(val: i64, span: Span) -> Value
#### pub fn date(val: DateTime<FixedOffset>, span: Span) -> Value
#### pub fn range(val: Range, span: Span) -> Value
#### pub fn string(val: impl Into<String>, span: Span) -> Value
#### pub fn record(val: Record, span: Span) -> Value
#### pub fn list(vals: Vec<Value>, span: Span) -> Value
#### pub fn block(val: BlockId, span: Span) -> Value
#### pub fn closure(val: BlockId, captures: HashMap<VarId, Value>, span: Span) -> Value
#### pub fn nothing(span: Span) -> Value
Create a new `Nothing` value
#### pub fn error(error: ShellError, span: Span) -> Value
#### pub fn binary(val: impl Into<Vec<u8>>, span: Span) -> Value
#### pub fn cell_path(val: CellPath, span: Span) -> Value
#### pub fn custom_value(val: Box<dyn CustomValue>, span: Span) -> Value
#### pub fn lazy_record(val: Box<dyn for<'a> LazyRecord<'a>>, span: Span) -> Value
#### pub fn match_pattern(val: MatchPattern, span: Span) -> Value

Note, for each of the `test_*` constructors below: Only use this for test data, *not* live data, as it will point into unknown source when used in errors.

#### pub fn test_bool(val: bool) -> Value
#### pub fn test_int(val: i64) -> Value
#### pub fn test_float(val: f64) -> Value
#### pub fn test_filesize(val: i64) -> Value
#### pub fn test_duration(val: i64) -> Value
#### pub fn test_date(val: DateTime<FixedOffset>) -> Value
#### pub fn test_range(val: Range) -> Value
#### pub fn test_string(val: impl Into<String>) -> Value
#### pub fn test_record(val: Record) -> Value
#### pub fn test_list(vals: Vec<Value>) -> Value
#### pub fn test_block(val: BlockId) -> Value
#### pub fn test_closure(val: BlockId, captures: HashMap<VarId, Value>) -> Value
#### pub fn test_nothing() -> Value
#### pub fn test_binary(val: impl Into<Vec<u8>>) -> Value
#### pub fn test_cell_path(val: CellPath) -> Value
#### pub fn test_custom_value(val: Box<dyn CustomValue>) -> Value
#### pub fn test_lazy_record(val: Box<dyn for<'a> LazyRecord<'a>>) -> Value
#### pub fn test_match_pattern(val: MatchPattern) -> Value
### impl Value

#### pub fn add(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn append(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn sub(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn mul(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn div(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn floor_div(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn lt(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn lte(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn gt(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn gte(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn eq(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn ne(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn in(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn not_in(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn regex_match(&self, engine_state: &EngineState, op: Span, rhs: &Value, invert: bool, span: Span) -> Result<Value, ShellError>
#### pub fn starts_with(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn ends_with(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn bit_shl(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn bit_shr(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn bit_or(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn bit_xor(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn bit_and(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn modulo(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn and(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn or(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn xor(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>
#### pub fn pow(&self, op: Span, rhs: &Value, span: Span) -> Result<Value, ShellError>

Trait Implementations
---
### impl Clone for Value
#### fn clone(&self) -> Self
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for Value
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for Value
#### fn default() -> Self
Returns the "default value" for a type.

### impl<'de> Deserialize<'de> for Value
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.

### impl FromValue for Value
#### fn from_value(v: &Value) -> Result<Self, ShellError>

### impl PartialEq<Value> for Value
#### fn eq(&self, other: &Self) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl PartialOrd<Value> for Value
#### fn partial_cmp(&self, other: &Self) -> Option<Ordering>
This method returns an ordering between `self` and `other` values if one exists.
#### fn lt(&self, other: &Rhs) -> bool
This method tests less than (for `self` and `other`) and is used by the `<` operator.
#### fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.
#### fn gt(&self, other: &Rhs) -> bool
This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool
This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.

### impl Serialize for Value
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.

Auto Trait Implementations
---
### impl !RefUnwindSafe for Value
### impl Send for Value
### impl Sync for Value
### impl Unpin for Value
### impl !UnwindSafe for Value

Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<V> IntoPipelineData for V where V: Into<Value>
#### fn into_pipeline_data(self) -> PipelineData
#### fn into_pipeline_data_with_metadata(self, metadata: impl Into<Option<Box<PipelineMetadata, Global>>>) -> PipelineData

### impl<D> OwoColorize for D
#### fn fg<C>(&self) -> FgColorDisplay<'_, C, Self> where C: Color
Set the foreground color generically
#### fn bg<C>(&self) -> BgColorDisplay<'_, C, Self> where C: Color
Set the background color generically.
Set the foreground color at runtime. Only use if you do not know which color will be used at compile-time. If the color is constant, use either `OwoColorize::fg` or a color-specific method, such as `OwoColorize::green`.
Set the background color at runtime. Only use if you do not know what color to use at compile-time. If the color is constant, use either `OwoColorize::bg` or a color-specific method, such as `OwoColorize::on_yellow`.
#### fn fg_rgb<const R: u8, const G: u8, const B: u8>(&self) -> FgColorDisplay<'_, CustomColor<R, G, B>, Self>
Set the foreground color to a specific RGB value.
#### fn bg_rgb<const R: u8, const G: u8, const B: u8>(&self) -> BgColorDisplay<'_, CustomColor<R, G, B>, Self>
Set the background color to a specific RGB value.
#### fn truecolor(&self, r: u8, g: u8, b: u8) -> FgDynColorDisplay<'_, Rgb, Self>
Sets the foreground color to an RGB value.
#### fn on_truecolor(&self, r: u8, g: u8, b: u8) -> BgDynColorDisplay<'_, Rgb, Self>
Sets the background color to an RGB value.
#### fn style(&self, style: Style) -> Styled<&Self>
Apply a runtime-determined style

### impl<T> Pointable for T
#### const ALIGN: usize = _
The alignment of pointer.
#### type Init = T
The type for initializers.
#### unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a with the given initializer.
Dereferences the given pointer.
Mutably dereferences the given pointer.
Drops the object pointed to by the given pointer.

### impl<T> Serialize for T where T: Serialize + ?Sized
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error>

### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>

Trait nu_protocol::CustomValue
===
```
pub trait CustomValue: Debug + Send + Sync + Serialize + Deserialize {
    // Required methods
    fn clone_value(&self, span: Span) -> Value;
    fn value_string(&self) -> String;
    fn to_base_value(&self, span: Span) -> Result<Value, ShellError>;
    fn as_any(&self) -> &dyn Any;

    // Provided methods
    fn follow_path_int(
        &self,
        _count: usize,
        span: Span
    ) -> Result<Value, ShellError> { ... }
    fn follow_path_string(
        &self,
        _column_name: String,
        span: Span
    ) -> Result<Value, ShellError> { ... }
    fn partial_cmp(&self, _other: &Value) -> Option<Ordering> { ... }
    fn operation(
        &self,
        _lhs_span: Span,
        operator: Operator,
        op: Span,
        _right: &Value
    ) -> Result<Value, ShellError> { ... }
}
```

Required Methods
---
#### fn clone_value(&self, span: Span) -> Value
#### fn value_string(&self) -> String
#### fn to_base_value(&self, span: Span) -> Result<Value, ShellError>
#### fn as_any(&self) -> &dyn Any

Provided Methods
---
#### fn follow_path_int(&self, _count: usize, span: Span) -> Result<Value, ShellError>
#### fn follow_path_string(&self, _column_name: String, span: Span) -> Result<Value, ShellError>
#### fn partial_cmp(&self, _other: &Value) -> Option<Ordering>
#### fn operation(&self, _lhs_span: Span, operator: Operator, op: Span, _right: &Value) -> Result<Value, ShellError>

Trait Implementations
---
### impl<'typetag> Serialize for dyn CustomValue + 'typetag
#### fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: Serializer
Serialize this value into the given Serde serializer.

Implementors
---

Function nu_protocol::is_leap_year
===
```
pub fn is_leap_year(year: i32) -> bool
```
Is the given year a leap year?

Function nu_protocol::levenshtein_distance
===
```
pub fn levenshtein_distance(a: &str, b: &str) -> usize
```
Finds the Levenshtein distance between two strings.
ActionKit
cocoapods
Objective-C
actionkit === About ActionKit --- ### What is ActionKit? ActionKit is a powerful grassroots organizing platform that enables individuals and organizations to create, manage, and analyze campaigns and actions. With a user-friendly interface and comprehensive features, ActionKit empowers users to mobilize supporters, track engagement, and drive meaningful change. ### Key Features * **Campaign Creation:** Easily create and manage campaigns to promote your cause or organization. * **Action Management:** Track and organize actions taken by supporters, such as signing petitions or making donations. * **Messaging Tools:** Utilize built-in email and SMS features to communicate with your supporters effectively. * **Advocacy Tracking:** Monitor and analyze the impact of your campaigns to measure success and optimize future strategies. * **Integrated Fundraising:** Seamlessly incorporate fundraising efforts within your campaigns to support your cause financially. * **Customizable Templates:** Personalize your campaigns with pre-designed templates or create your own unique designs. * **Data Management:** Efficiently organize and analyze data to gain valuable insights and inform your strategic decision-making. ### Why Choose ActionKit? There are several reasons why ActionKit stands as a top choice for organizations and individuals: 1. **User-Friendly Interface:** ActionKit offers an intuitive and easy-to-use platform, reducing the learning curve for users of all skill levels. 2. **Comprehensive Features:** From campaign creation to fundraising tools, ActionKit provides a wide range of features to support your advocacy efforts. 3. **Flexible Customization:** Customize your campaigns and actions to reflect your brand and capture the attention of your supporters. 4. **Data-Driven Insights:** Analysis and reporting features within ActionKit help you understand the impact of your campaigns and improve future strategies. 5. **Reliable Support:** ActionKit offers dedicated support to assist you in maximizing the platform’s capabilities and addressing any potential issues. ### Who is ActionKit for? ActionKit is ideal for: * Non-profit organizations * Grassroots advocacy groups * Political campaigns * Social justice organizations * Environmental initiatives * Community organizers * Citizen activists * Anyone looking to mobilize supporters and create meaningful change ### Getting Started with ActionKit Ready to get started with ActionKit? Follow these steps: 1. **Sign Up:** Visit the ActionKit website and sign up for an account. 2. **Create a Campaign:** Use the intuitive campaign creation tools to set up your first advocacy campaign. 3. **Engage Supporters:** Utilize ActionKit’s messaging tools to reach out to supporters and encourage them to take action. 4. **Analyze Results:** Monitor and analyze the impact of your campaigns using ActionKit’s data management and reporting features. 5. **Optimize and Repeat:** Use insights gained from the analysis to refine your future campaigns and drive even greater impact. ### Conclusion ActionKit is the go-to platform for organizing successful grassroots campaigns. With its comprehensive features, user-friendly interface, and powerful analytics, ActionKit empowers individuals and organizations to create meaningful change and amplify their advocacy efforts. Sign up today and make a difference!
github.com/colinmarc/hdfs
go
Go
README [¶](#section-readme)
---
### HDFS for Go

[![GoDoc](https://godoc.org/github.com/colinmarc/hdfs/web?status.svg)](https://godoc.org/github.com/colinmarc/hdfs) [![build](https://travis-ci.org/colinmarc/hdfs.svg?branch=master)](https://travis-ci.org/colinmarc/hdfs)

This is a native golang client for hdfs. It connects directly to the namenode using the protocol buffers API. It tries to be idiomatic by aping the stdlib `os` package, where possible, and implements the interfaces from it, including `os.FileInfo` and `os.PathError`.

Here's what it looks like in action:

```
client, _ := hdfs.New("namenode:8020")

file, _ := client.Open("/mobydick.txt")

buf := make([]byte, 59)
file.ReadAt(buf, 48847)

fmt.Println(string(buf))
// => Abominable are the tumblers into which he pours his poison.
```

For complete documentation, check out the [Godoc](https://godoc.org/github.com/colinmarc/hdfs).

#### The `hdfs` Binary

Along with the library, this repo contains a commandline client for HDFS. Like the library, its primary aim is to be idiomatic, by enabling your favorite unix verbs:

```
$ hdfs --help
Usage: hdfs COMMAND
The flags available are a subset of the POSIX ones, but should behave similarly.

Valid commands:
  ls [-lah] [FILE]...
  rm [-rf] FILE...
  mv [-fT] SOURCE... DEST
  mkdir [-p] FILE...
  touch [-amc] FILE...
  chmod [-R] OCTAL-MODE FILE...
  chown [-R] OWNER[:GROUP] FILE...
  cat SOURCE...
  head [-n LINES | -c BYTES] SOURCE...
  tail [-n LINES | -c BYTES] SOURCE...
  du [-sh] FILE...
  checksum FILE...
  get SOURCE [DEST]
  getmerge SOURCE DEST
  put SOURCE DEST
```

Since it doesn't have to wait for the JVM to start up, it's also a lot faster than `hadoop fs`:

```
$ time hadoop fs -ls / > /dev/null
real  0m2.218s
user  0m2.500s
sys   0m0.376s

$ time hdfs ls / > /dev/null
real  0m0.015s
user  0m0.004s
sys   0m0.004s
```

Best of all, it comes with bash tab completion for paths!

#### Installing the library

To install the library, once you have Go [all set up](https://golang.org/doc/install):

```
$ go get -u github.com/colinmarc/hdfs
```

#### Installing the commandline client

Grab a tarball from the [releases page](https://github.com/colinmarc/hdfs/releases) and unpack it wherever you like. You'll want to add the following line to your `.bashrc` or `.profile`:

```
export HADOOP_NAMENODE="namenode:8020"
```

To install tab completion globally on linux, copy or link the `bash_completion` file which comes with the tarball into the right place:

```
ln -sT bash_completion /etc/bash_completion.d/gohdfs
```

By default, the HDFS user is set to the currently-logged-in user. You can override this in your `.bashrc` or `.profile`:

```
export HADOOP_USER_NAME=username
```

#### Compatibility

This library uses "Version 9" of the HDFS protocol, which means it should work with hadoop distributions based on 2.2.x and above. The tests run against CDH 5.x and HDP 2.x.

#### Acknowledgements

This library is heavily indebted to [snakebite](https://github.com/spotify/snakebite).

Documentation [¶](#section-documentation)
---
### Overview [¶](#pkg-overview)

Package hdfs provides a native, idiomatic interface to HDFS. Where possible, it mimics the functionality and signatures of the standard `os` package.

Example:

```
client, _ := hdfs.New("namenode:8020")

file, _ := client.Open("/mobydick.txt")

buf := make([]byte, 59)
file.ReadAt(buf, 48847)

fmt.Println(string(buf))
// => Abominable are the tumblers into which he pours his poison.
``` ### Index [¶](#pkg-index) * [Variables](#pkg-variables) * [func Username() (string, error)](#Username) * [type Client](#Client) * + [func New(address string) (*Client, error)](#New) + [func NewClient(options ClientOptions) (*Client, error)](#NewClient) + [func NewForConnection(namenode *rpc.NamenodeConnection) *Client](#NewForConnection)deprecated + [func NewForUser(address string, user string) (*Client, error)](#NewForUser)deprecated * + [func (c *Client) Append(name string) (*FileWriter, error)](#Client.Append) + [func (c *Client) Chmod(name string, perm os.FileMode) error](#Client.Chmod) + [func (c *Client) Chown(name string, user, group string) error](#Client.Chown) + [func (c *Client) Chtimes(name string, atime time.Time, mtime time.Time) error](#Client.Chtimes) + [func (c *Client) Close() error](#Client.Close) + [func (c *Client) CopyToLocal(src string, dst string) error](#Client.CopyToLocal) + [func (c *Client) CopyToRemote(src string, dst string) error](#Client.CopyToRemote) + [func (c *Client) Create(name string) (*FileWriter, error)](#Client.Create) + [func (c *Client) CreateEmptyFile(name string) error](#Client.CreateEmptyFile) + [func (c *Client) CreateFile(name string, replication int, blockSize int64, perm os.FileMode) (*FileWriter, error)](#Client.CreateFile) + [func (c *Client) GetContentSummary(name string) (*ContentSummary, error)](#Client.GetContentSummary) + [func (c *Client) Mkdir(dirname string, perm os.FileMode) error](#Client.Mkdir) + [func (c *Client) MkdirAll(dirname string, perm os.FileMode) error](#Client.MkdirAll) + [func (c *Client) Open(name string) (*FileReader, error)](#Client.Open) + [func (c *Client) ReadDir(dirname string) ([]os.FileInfo, error)](#Client.ReadDir) + [func (c *Client) ReadFile(filename string) ([]byte, error)](#Client.ReadFile) + [func (c *Client) Remove(name string) error](#Client.Remove) + [func (c *Client) Rename(oldpath, newpath string) error](#Client.Rename) + [func (c *Client) Stat(name string) (os.FileInfo, error)](#Client.Stat) + [func (c *Client) StatFs() (FsInfo, error)](#Client.StatFs) + [func (c *Client) Walk(root string, walkFn filepath.WalkFunc) error](#Client.Walk) * [type ClientOptions](#ClientOptions) * [type ContentSummary](#ContentSummary) * + [func (cs *ContentSummary) DirectoryCount() int](#ContentSummary.DirectoryCount) + [func (cs *ContentSummary) FileCount() int](#ContentSummary.FileCount) + [func (cs *ContentSummary) NameQuota() int](#ContentSummary.NameQuota) + [func (cs *ContentSummary) Size() int64](#ContentSummary.Size) + [func (cs *ContentSummary) SizeAfterReplication() int64](#ContentSummary.SizeAfterReplication) + [func (cs *ContentSummary) SpaceQuota() int64](#ContentSummary.SpaceQuota) * [type FileInfo](#FileInfo) * + [func (fi *FileInfo) AccessTime() time.Time](#FileInfo.AccessTime) + [func (fi *FileInfo) IsDir() bool](#FileInfo.IsDir) + [func (fi *FileInfo) ModTime() time.Time](#FileInfo.ModTime) + [func (fi *FileInfo) Mode() os.FileMode](#FileInfo.Mode) + [func (fi *FileInfo) Name() string](#FileInfo.Name) + [func (fi *FileInfo) Owner() string](#FileInfo.Owner) + [func (fi *FileInfo) OwnerGroup() string](#FileInfo.OwnerGroup) + [func (fi *FileInfo) Size() int64](#FileInfo.Size) + [func (fi *FileInfo) Sys() interface{}](#FileInfo.Sys) * [type FileReader](#FileReader) * + [func (f *FileReader) Checksum() ([]byte, error)](#FileReader.Checksum) + [func (f *FileReader) Close() error](#FileReader.Close) + [func (f *FileReader) Name() string](#FileReader.Name) + [func (f *FileReader) Read(b []byte) (int, 
error)](#FileReader.Read) + [func (f *FileReader) ReadAt(b []byte, off int64) (int, error)](#FileReader.ReadAt) + [func (f *FileReader) Readdir(n int) ([]os.FileInfo, error)](#FileReader.Readdir) + [func (f *FileReader) Readdirnames(n int) ([]string, error)](#FileReader.Readdirnames) + [func (f *FileReader) Seek(offset int64, whence int) (int64, error)](#FileReader.Seek) + [func (f *FileReader) Stat() os.FileInfo](#FileReader.Stat) * [type FileWriter](#FileWriter) * + [func (f *FileWriter) Close() error](#FileWriter.Close) + [func (f *FileWriter) Flush() error](#FileWriter.Flush) + [func (f *FileWriter) Write(b []byte) (int, error)](#FileWriter.Write) * [type FsInfo](#FsInfo) * [type HadoopConf](#HadoopConf) * + [func LoadHadoopConf(path string) HadoopConf](#LoadHadoopConf) * + [func (conf HadoopConf) Namenodes() ([]string, error)](#HadoopConf.Namenodes) * [type Property](#Property) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) ``` var StatFsError = [errors](/errors).[New](/errors#New)("Failed to get HDFS usage") ``` ### Functions [¶](#pkg-functions) #### func [Username](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L28) [¶](#Username) added in v1.0.0 ``` func Username() ([string](/builtin#string), [error](/builtin#error)) ``` Username returns the value of HADOOP_USER_NAME in the environment, or the current system user if it is not set. ### Types [¶](#pkg-types) #### type [Client](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L14) [¶](#Client) ``` type Client struct { // contains filtered or unexported fields } ``` A Client represents a connection to an HDFS cluster #### func [New](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L78) [¶](#New) ``` func New(address [string](/builtin#string)) (*[Client](#Client), [error](/builtin#error)) ``` New returns a connected Client, or an error if it can't connect. The user will be the user the code is running under. If address is an empty string it will try and get the namenode address from the hadoop configuration files. #### func [NewClient](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L42) [¶](#NewClient) added in v1.1.0 ``` func NewClient(options [ClientOptions](#ClientOptions)) (*[Client](#Client), [error](/builtin#error)) ``` NewClient returns a connected Client for the given options, or an error if the client could not be created. #### func [NewForConnection](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L114) deprecated added in v1.0.2 ``` func NewForConnection(namenode *[rpc](/github.com/colinmarc/[email protected]/rpc).[NamenodeConnection](/github.com/colinmarc/[email protected]/rpc#NamenodeConnection)) *[Client](#Client) ``` NewForConnection returns Client with the specified, underlying rpc.NamenodeConnection. You can use rpc.WrapNamenodeConnection to wrap your own net.Conn. Deprecated: Use NewClient with ClientOptions instead. #### func [NewForUser](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L103) deprecated ``` func NewForUser(address [string](/builtin#string), user [string](/builtin#string)) (*[Client](#Client), [error](/builtin#error)) ``` NewForUser returns a connected Client with the user specified, or an error if it can't connect. Deprecated: Use NewClient with ClientOptions instead. 
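To make the constructor options above concrete, here is a small usage sketch. It is not part of the package documentation: the `namenode:8020` address and the `/` path are placeholders, and the sketch simply combines `Username`, `NewClient`, and the `ClientOptions` struct described further down.

```
package main

import (
	"log"

	"github.com/colinmarc/hdfs"
)

func main() {
	// Resolve the HDFS user the same way the library does by default:
	// HADOOP_USER_NAME if set, otherwise the current system user.
	user, err := hdfs.Username()
	if err != nil {
		log.Fatal(err)
	}

	// "namenode:8020" is a placeholder; Addresses is a slice, so more than
	// one namenode host can be listed.
	client, err := hdfs.NewClient(hdfs.ClientOptions{
		Addresses: []string{"namenode:8020"},
		User:      user,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	info, err := client.Stat("/")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("root: dir=%v size=%d", info.IsDir(), info.Size())
}
```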
#### func (*Client) [Append](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_writer.go#L86) [¶](#Client.Append) added in v1.0.0
```
func (c *[Client](#Client)) Append(name [string](/builtin#string)) (*[FileWriter](#FileWriter), [error](/builtin#error))
```
Append opens an existing file in HDFS and returns an io.WriteCloser for writing to it. Because of the way that HDFS writes are buffered and acknowledged asynchronously, it is very important that Close is called after all data has been written.

#### func (*Client) [Chmod](https://github.com/colinmarc/hdfs/blob/v1.1.3/perms.go#L13) [¶](#Client.Chmod)
```
func (c *[Client](#Client)) Chmod(name [string](/builtin#string), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
Chmod changes the mode of the named file to mode.

#### func (*Client) [Chown](https://github.com/colinmarc/hdfs/blob/v1.1.3/perms.go#L37) [¶](#Client.Chown)
```
func (c *[Client](#Client)) Chown(name [string](/builtin#string), user, group [string](/builtin#string)) [error](/builtin#error)
```
Chown changes the user and group of the file. Unlike os.Chown, this takes a string username and group (since that's what HDFS uses). If an empty string is passed for user or group, that field will not be changed remotely.

#### func (*Client) [Chtimes](https://github.com/colinmarc/hdfs/blob/v1.1.3/perms.go#L58) [¶](#Client.Chtimes)
```
func (c *[Client](#Client)) Chtimes(name [string](/builtin#string), atime [time](/time).[Time](/time#Time), mtime [time](/time).[Time](/time#Time)) [error](/builtin#error)
```
Chtimes changes the access and modification times of the named file.

#### func (*Client) [Close](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L185) [¶](#Client.Close) added in v1.0.0
```
func (c *[Client](#Client)) Close() [error](/builtin#error)
```
Close terminates all underlying socket connections to the remote server.

#### func (*Client) [CopyToLocal](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L132) [¶](#Client.CopyToLocal)
```
func (c *[Client](#Client)) CopyToLocal(src [string](/builtin#string), dst [string](/builtin#string)) [error](/builtin#error)
```
CopyToLocal copies the HDFS file specified by src to the local file at dst. If dst already exists, it will be overwritten.

#### func (*Client) [CopyToRemote](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L150) [¶](#Client.CopyToRemote) added in v1.0.0
```
func (c *[Client](#Client)) CopyToRemote(src [string](/builtin#string), dst [string](/builtin#string)) [error](/builtin#error)
```
CopyToRemote copies the local file specified by src to the HDFS file at dst.

#### func (*Client) [Create](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_writer.go#L31) [¶](#Client.Create) added in v1.0.0
```
func (c *[Client](#Client)) Create(name [string](/builtin#string)) (*[FileWriter](#FileWriter), [error](/builtin#error))
```
Create opens a new file in HDFS with the default replication, block size, and permissions (0644), and returns an io.WriteCloser for writing to it. Because of the way that HDFS writes are buffered and acknowledged asynchronously, it is very important that Close is called after all data has been written.

#### func (*Client) [CreateEmptyFile](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_writer.go#L136) [¶](#Client.CreateEmptyFile)
```
func (c *[Client](#Client)) CreateEmptyFile(name [string](/builtin#string)) [error](/builtin#error)
```
CreateEmptyFile creates an empty file at the given name, with the permissions 0644.
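Because the Create and Append docs above stress that writes are buffered and acknowledged asynchronously, a brief sketch of the write path may help. It is illustrative only; the namenode address and target path are placeholders.

```
package main

import (
	"log"

	"github.com/colinmarc/hdfs"
)

func main() {
	// Placeholder namenode address and target path.
	client, err := hdfs.New("namenode:8020")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Create uses the default replication, block size and 0644 permissions.
	w, err := client.Create("/tmp/greeting.txt")
	if err != nil {
		log.Fatal(err)
	}

	if _, err := w.Write([]byte("hello from gohdfs\n")); err != nil {
		log.Fatal(err)
	}

	// Writes are buffered and acknowledged asynchronously, so it is Close
	// that guarantees the buffered data has actually reached the datanodes.
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}
```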
#### func (*Client) [CreateFile](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_writer.go#L53) [¶](#Client.CreateFile) added in v1.0.0
```
func (c *[Client](#Client)) CreateFile(name [string](/builtin#string), replication [int](/builtin#int), blockSize [int64](/builtin#int64), perm [os](/os).[FileMode](/os#FileMode)) (*[FileWriter](#FileWriter), [error](/builtin#error))
```
CreateFile opens a new file in HDFS with the given replication, block size, and permissions, and returns an io.WriteCloser for writing to it. Because of the way that HDFS writes are buffered and acknowledged asynchronously, it is very important that Close is called after all data has been written.

#### func (*Client) [GetContentSummary](https://github.com/colinmarc/hdfs/blob/v1.1.3/content_summary.go#L22) [¶](#Client.GetContentSummary) added in v0.1.4
```
func (c *[Client](#Client)) GetContentSummary(name [string](/builtin#string)) (*[ContentSummary](#ContentSummary), [error](/builtin#error))
```
GetContentSummary returns a ContentSummary representing the named file or directory. The summary contains information about the entire tree rooted in the named file; for instance, it can return the total size of all the files under it.

#### func (*Client) [Mkdir](https://github.com/colinmarc/hdfs/blob/v1.1.3/mkdir.go#L13) [¶](#Client.Mkdir)
```
func (c *[Client](#Client)) Mkdir(dirname [string](/builtin#string), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
Mkdir creates a new directory with the specified name and permission bits.

#### func (*Client) [MkdirAll](https://github.com/colinmarc/hdfs/blob/v1.1.3/mkdir.go#L21) [¶](#Client.MkdirAll)
```
func (c *[Client](#Client)) MkdirAll(dirname [string](/builtin#string), perm [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error)
```
MkdirAll creates a directory for dirname, along with any necessary parents, and returns nil, or else returns an error. The permission bits perm are used for all directories that MkdirAll creates. If dirname is already a directory, MkdirAll does nothing and returns nil.

#### func (*Client) [Open](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L35) [¶](#Client.Open)
```
func (c *[Client](#Client)) Open(name [string](/builtin#string)) (*[FileReader](#FileReader), [error](/builtin#error))
```
Open returns a FileReader which can be used for reading.

#### func (*Client) [ReadDir](https://github.com/colinmarc/hdfs/blob/v1.1.3/readdir.go#L14) [¶](#Client.ReadDir)
```
func (c *[Client](#Client)) ReadDir(dirname [string](/builtin#string)) ([][os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error))
```
ReadDir reads the directory named by dirname and returns a list of sorted directory entries.

#### func (*Client) [ReadFile](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L120) [¶](#Client.ReadFile)
```
func (c *[Client](#Client)) ReadFile(filename [string](/builtin#string)) ([][byte](/builtin#byte), [error](/builtin#error))
```
ReadFile reads the file named by filename and returns the contents.

#### func (*Client) [Remove](https://github.com/colinmarc/hdfs/blob/v1.1.3/remove.go#L13) [¶](#Client.Remove)
```
func (c *[Client](#Client)) Remove(name [string](/builtin#string)) [error](/builtin#error)
```
Remove removes the named file or directory.

#### func (*Client) [Rename](https://github.com/colinmarc/hdfs/blob/v1.1.3/rename.go#L12) [¶](#Client.Rename)
```
func (c *[Client](#Client)) Rename(oldpath, newpath [string](/builtin#string)) [error](/builtin#error)
```
Rename renames (moves) a file.
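As a quick illustration of `ReadDir` and `ReadFile`, the following sketch (not from the package docs; the address and paths are placeholders) lists a directory and then reads one file into memory.

```
package main

import (
	"fmt"
	"log"

	"github.com/colinmarc/hdfs"
)

func main() {
	// Placeholder namenode address and paths.
	client, err := hdfs.New("namenode:8020")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// ReadDir returns the directory's entries as sorted os.FileInfo values.
	entries, err := client.ReadDir("/user")
	if err != nil {
		log.Fatal(err)
	}
	for _, fi := range entries {
		fmt.Printf("%-30s %10d bytes  dir=%v\n", fi.Name(), fi.Size(), fi.IsDir())
	}

	// ReadFile slurps an entire file into memory in one call.
	data, err := client.ReadFile("/user/motd.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(data))
}
```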
#### func (*Client) [Stat](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L21) [¶](#Client.Stat) ``` func (c *[Client](#Client)) Stat(name [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error)) ``` Stat returns an os.FileInfo describing the named file or directory. #### func (*Client) [StatFs](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat_fs.go#L25) [¶](#Client.StatFs) added in v1.0.3 ``` func (c *[Client](#Client)) StatFs() ([FsInfo](#FsInfo), [error](/builtin#error)) ``` #### func (*Client) [Walk](https://github.com/colinmarc/hdfs/blob/v1.1.3/walk.go#L14) [¶](#Client.Walk) added in v1.1.1 ``` func (c *[Client](#Client)) Walk(root [string](/builtin#string), walkFn [filepath](/path/filepath).[WalkFunc](/path/filepath#WalkFunc)) [error](/builtin#error) ``` Walk walks the file tree rooted at root, calling walkFn for each file or directory in the tree, including root. All errors that arise visiting files and directories are filtered by walkFn. The files are walked in lexical order, which makes the output deterministic but means that for very large directories Walk can be inefficient. Walk does not follow symbolic links. #### type [ClientOptions](https://github.com/colinmarc/hdfs/blob/v1.1.3/client.go#L20) [¶](#ClientOptions) added in v1.1.0 ``` type ClientOptions struct { Addresses [][string](/builtin#string) Namenode *[rpc](/github.com/colinmarc/[email protected]/rpc).[NamenodeConnection](/github.com/colinmarc/[email protected]/rpc#NamenodeConnection) User [string](/builtin#string) } ``` ClientOptions represents the configurable options for a client. #### type [ContentSummary](https://github.com/colinmarc/hdfs/blob/v1.1.3/content_summary.go#L14) [¶](#ContentSummary) added in v0.1.4 ``` type ContentSummary struct { // contains filtered or unexported fields } ``` ContentSummary represents a set of information about a file or directory in HDFS. It's provided directly by the namenode, and has no unix filesystem analogue. #### func (*ContentSummary) [DirectoryCount](https://github.com/colinmarc/hdfs/blob/v1.1.3/content_summary.go#L68) [¶](#ContentSummary.DirectoryCount) added in v0.1.4 ``` func (cs *[ContentSummary](#ContentSummary)) DirectoryCount() [int](/builtin#int) ``` DirectoryCount returns the number of directories under the named one, including any subdirectories, and including the root directory itself. If the named path is a file, this returns 0. #### func (*ContentSummary) [FileCount](https://github.com/colinmarc/hdfs/blob/v1.1.3/content_summary.go#L61) [¶](#ContentSummary.FileCount) added in v0.1.4 ``` func (cs *[ContentSummary](#ContentSummary)) FileCount() [int](/builtin#int) ``` FileCount returns the number of files under the named path, including any subdirectories. If the named path is a file, FileCount returns 1. #### func (*ContentSummary) [NameQuota](https://github.com/colinmarc/hdfs/blob/v1.1.3/content_summary.go#L75) [¶](#ContentSummary.NameQuota) added in v0.1.4 ``` func (cs *[ContentSummary](#ContentSummary)) NameQuota() [int](/builtin#int) ``` NameQuota returns the HDFS configured "name quota" for the named path. The name quota is a hard limit on the number of directories and files inside a directory; see <http://goo.gl/sOSJmJ> for more information. 
#### func (*ContentSummary) [Size](https://github.com/colinmarc/hdfs/blob/v1.1.3/content_summary.go#L48) [¶](#ContentSummary.Size) added in v0.1.4
```
func (cs *[ContentSummary](#ContentSummary)) Size() [int64](/builtin#int64)
```
Size returns the total size of the named path, including any subdirectories.

#### func (*ContentSummary) [SizeAfterReplication](https://github.com/colinmarc/hdfs/blob/v1.1.3/content_summary.go#L55) [¶](#ContentSummary.SizeAfterReplication) added in v0.1.4
```
func (cs *[ContentSummary](#ContentSummary)) SizeAfterReplication() [int64](/builtin#int64)
```
SizeAfterReplication returns the total size of the named path, including any subdirectories. Unlike Size, it counts the total replicated size of each file, and represents the total on-disk footprint for a tree in HDFS.

#### func (*ContentSummary) [SpaceQuota](https://github.com/colinmarc/hdfs/blob/v1.1.3/content_summary.go#L82) [¶](#ContentSummary.SpaceQuota) added in v0.1.4
```
func (cs *[ContentSummary](#ContentSummary)) SpaceQuota() [int64](/builtin#int64)
```
SpaceQuota returns the HDFS configured "space quota" for the named path. The space quota is a hard limit on the number of bytes used by the tree rooted at that directory (replicas count against the quota); see <http://goo.gl/sOSJmJ> for more information.

#### type [FileInfo](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L15) [¶](#FileInfo)
```
type FileInfo struct {
	// contains filtered or unexported fields
}
```
FileInfo implements os.FileInfo, and provides information about a file or directory in HDFS.

#### func (*FileInfo) [AccessTime](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L109) [¶](#FileInfo.AccessTime)
```
func (fi *[FileInfo](#FileInfo)) AccessTime() [time](/time).[Time](/time#Time)
```
AccessTime returns the last time the file was accessed. It's not part of the os.FileInfo interface.

#### func (*FileInfo) [IsDir](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L85) [¶](#FileInfo.IsDir)
```
func (fi *[FileInfo](#FileInfo)) IsDir() [bool](/builtin#bool)
```

#### func (*FileInfo) [ModTime](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L81) [¶](#FileInfo.ModTime)
```
func (fi *[FileInfo](#FileInfo)) ModTime() [time](/time).[Time](/time#Time)
```

#### func (*FileInfo) [Mode](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L72) [¶](#FileInfo.Mode)
```
func (fi *[FileInfo](#FileInfo)) Mode() [os](/os).[FileMode](/os#FileMode)
```

#### func (*FileInfo) [Name](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L64) [¶](#FileInfo.Name)
```
func (fi *[FileInfo](#FileInfo)) Name() [string](/builtin#string)
```

#### func (*FileInfo) [Owner](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L97) [¶](#FileInfo.Owner)
```
func (fi *[FileInfo](#FileInfo)) Owner() [string](/builtin#string)
```
Owner returns the name of the user that owns the file or directory. It's not part of the os.FileInfo interface.

#### func (*FileInfo) [OwnerGroup](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L103) [¶](#FileInfo.OwnerGroup)
```
func (fi *[FileInfo](#FileInfo)) OwnerGroup() [string](/builtin#string)
```
OwnerGroup returns the name of the group that owns the file or directory. It's not part of the os.FileInfo interface.
#### func (*FileInfo) [Size](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L68) [¶](#FileInfo.Size) ``` func (fi *[FileInfo](#FileInfo)) Size() [int64](/builtin#int64) ``` #### func (*FileInfo) [Sys](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat.go#L91) [¶](#FileInfo.Sys) ``` func (fi *[FileInfo](#FileInfo)) Sys() interface{} ``` Sys returns the raw *hadoop_hdfs.HdfsFileStatusProto message from the namenode. #### type [FileReader](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L20) [¶](#FileReader) ``` type FileReader struct { // contains filtered or unexported fields } ``` A FileReader represents an existing file or directory in HDFS. It implements io.Reader, io.ReaderAt, io.Seeker, and io.Closer, and can only be used for reads. For writes, see FileWriter and Client.Create. #### func (*FileReader) [Checksum](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L64) [¶](#FileReader.Checksum) ``` func (f *[FileReader](#FileReader)) Checksum() ([][byte](/builtin#byte), [error](/builtin#error)) ``` Checksum returns HDFS's internal "MD5MD5CRC32C" checksum for a given file. Internally to HDFS, it works by calculating the MD5 of all the CRCs (which are stored alongside the data) for each block, and then calculating the MD5 of all of those. #### func (*FileReader) [Close](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L310) [¶](#FileReader.Close) ``` func (f *[FileReader](#FileReader)) Close() [error](/builtin#error) ``` Close implements io.Closer. #### func (*FileReader) [Name](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L50) [¶](#FileReader.Name) ``` func (f *[FileReader](#FileReader)) Name() [string](/builtin#string) ``` Name returns the name of the file. #### func (*FileReader) [Read](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L141) [¶](#FileReader.Read) ``` func (f *[FileReader](#FileReader)) Read(b [][byte](/builtin#byte)) ([int](/builtin#int), [error](/builtin#error)) ``` Read implements io.Reader. #### func (*FileReader) [ReadAt](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L194) [¶](#FileReader.ReadAt) ``` func (f *[FileReader](#FileReader)) ReadAt(b [][byte](/builtin#byte), off [int64](/builtin#int64)) ([int](/builtin#int), [error](/builtin#error)) ``` ReadAt implements io.ReaderAt. #### func (*FileReader) [Readdir](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L233) [¶](#FileReader.Readdir) ``` func (f *[FileReader](#FileReader)) Readdir(n [int](/builtin#int)) ([][os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error)) ``` Readdir reads the contents of the directory associated with file and returns a slice of up to n os.FileInfo values, as would be returned by Stat, in directory order. Subsequent calls on the same file will yield further os.FileInfos. If n > 0, Readdir returns at most n os.FileInfo values. In this case, if Readdir returns an empty slice, it will return a non-nil error explaining why. At the end of a directory, the error is io.EOF. If n <= 0, Readdir returns all the os.FileInfo from the directory in a single slice. In this case, if Readdir succeeds (reads all the way to the end of the directory), it returns the slice and a nil error. If it encounters an error before the end of the directory, Readdir returns the os.FileInfo read until that point and a non-nil error. 
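The Readdir paging rules described above are easiest to see in code. The following sketch is illustrative only (placeholder address and directory path): it opens a directory with `Open` and pages through it 100 entries at a time until `io.EOF`.

```
package main

import (
	"fmt"
	"io"
	"log"

	"github.com/colinmarc/hdfs"
)

func main() {
	// Placeholder namenode address and directory path.
	client, err := hdfs.New("namenode:8020")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// A FileReader can represent a directory as well as a regular file.
	dir, err := client.Open("/user")
	if err != nil {
		log.Fatal(err)
	}
	defer dir.Close()

	// With n > 0, Readdir returns at most n entries per call and reports
	// io.EOF once the end of the directory has been reached.
	for {
		batch, err := dir.Readdir(100)
		for _, fi := range batch {
			fmt.Println(fi.Name())
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
	}
}
```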
#### func (*FileReader) [Readdirnames](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L291) [¶](#FileReader.Readdirnames) ``` func (f *[FileReader](#FileReader)) Readdirnames(n [int](/builtin#int)) ([][string](/builtin#string), [error](/builtin#error)) ``` Readdirnames reads and returns a slice of names from the directory f. If n > 0, Readdirnames returns at most n names. In this case, if Readdirnames returns an empty slice, it will return a non-nil error explaining why. At the end of a directory, the error is io.EOF. If n <= 0, Readdirnames returns all the names from the directory in a single slice. In this case, if Readdirnames succeeds (reads all the way to the end of the directory), it returns the slice and a nil error. If it encounters an error before the end of the directory, Readdirnames returns the names read until that point and a non-nil error. #### func (*FileReader) [Seek](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L110) [¶](#FileReader.Seek) ``` func (f *[FileReader](#FileReader)) Seek(offset [int64](/builtin#int64), whence [int](/builtin#int)) ([int64](/builtin#int64), [error](/builtin#error)) ``` Seek implements io.Seeker. The seek is virtual - it starts a new block read at the new position. #### func (*FileReader) [Stat](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_reader.go#L55) [¶](#FileReader.Stat) ``` func (f *[FileReader](#FileReader)) Stat() [os](/os).[FileInfo](/os#FileInfo) ``` Stat returns the FileInfo structure describing file. #### type [FileWriter](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_writer.go#L15) [¶](#FileWriter) added in v1.0.0 ``` type FileWriter struct { // contains filtered or unexported fields } ``` A FileWriter represents a writer for an open file in HDFS. It implements Writer and Closer, and can only be used for writes. For reads, see FileReader and Client.Open. #### func (*FileWriter) [Close](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_writer.go#L195) [¶](#FileWriter.Close) added in v1.0.0 ``` func (f *[FileWriter](#FileWriter)) Close() [error](/builtin#error) ``` Close closes the file, writing any remaining data out to disk and waiting for acknowledgements from the datanodes. It is important that Close is called after all data has been written. #### func (*FileWriter) [Flush](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_writer.go#L180) [¶](#FileWriter.Flush) added in v1.1.1 ``` func (f *[FileWriter](#FileWriter)) Flush() [error](/builtin#error) ``` Flush flushes any buffered data out to the datanodes. Even immediately after a call to Flush, it is still necessary to call Close once all data has been written. #### func (*FileWriter) [Write](https://github.com/colinmarc/hdfs/blob/v1.1.3/file_writer.go#L149) [¶](#FileWriter.Write) added in v1.0.0 ``` func (f *[FileWriter](#FileWriter)) Write(b [][byte](/builtin#byte)) ([int](/builtin#int), [error](/builtin#error)) ``` Write implements io.Writer for writing to a file in HDFS. Internally, it writes data to an internal buffer first, and then later out to HDFS. Because of this, it is important that Close is called after all data has been written. 
#### type [FsInfo](https://github.com/colinmarc/hdfs/blob/v1.1.3/stat_fs.go#L13) [¶](#FsInfo) added in v1.0.3
```
type FsInfo struct {
	Capacity              [uint64](/builtin#uint64)
	Used                  [uint64](/builtin#uint64)
	Remaining             [uint64](/builtin#uint64)
	UnderReplicated       [uint64](/builtin#uint64)
	CorruptBlocks         [uint64](/builtin#uint64)
	MissingBlocks         [uint64](/builtin#uint64)
	MissingReplOneBlocks  [uint64](/builtin#uint64)
	BlocksInFuture        [uint64](/builtin#uint64)
	PendingDeletionBlocks [uint64](/builtin#uint64)
}
```
FsInfo provides information about HDFS

#### type [HadoopConf](https://github.com/colinmarc/hdfs/blob/v1.1.3/conf.go#L27) [¶](#HadoopConf) added in v1.0.0
```
type HadoopConf map[[string](/builtin#string)][string](/builtin#string)
```
HadoopConf represents a map of all the key value configuration pairs found in a user's hadoop configuration files.

#### func [LoadHadoopConf](https://github.com/colinmarc/hdfs/blob/v1.1.3/conf.go#L36) [¶](#LoadHadoopConf) added in v1.0.0
```
func LoadHadoopConf(path [string](/builtin#string)) [HadoopConf](#HadoopConf)
```
LoadHadoopConf returns a HadoopConf object representing configuration from the specified path, or finds the correct path in the environment. If path or the env variable HADOOP_CONF_DIR is specified, it should point directly to the directory where the xml files are. If neither is specified, ${HADOOP_HOME}/conf will be used.

#### func (HadoopConf) [Namenodes](https://github.com/colinmarc/hdfs/blob/v1.1.3/conf.go#L68) [¶](#HadoopConf.Namenodes) added in v1.0.0
```
func (conf [HadoopConf](#HadoopConf)) Namenodes() ([][string](/builtin#string), [error](/builtin#error))
```
Namenodes returns the namenode hosts present in the configuration. The returned slice will be sorted and deduped.

#### type [Property](https://github.com/colinmarc/hdfs/blob/v1.1.3/conf.go#L16) [¶](#Property) added in v1.0.0
```
type Property struct {
	Name  [string](/builtin#string) `xml:"name"`
	Value [string](/builtin#string) `xml:"value"`
}
```
Property is the struct representation of a hadoop configuration key value pair.
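Tying `LoadHadoopConf` and `Namenodes` together, here is a sketch that discovers the configured namenode hosts and then lets `New("")` pick up the same configuration files. It makes no assumptions beyond what the docs above state, other than that a hadoop configuration actually exists on the machine running it.

```
package main

import (
	"log"

	"github.com/colinmarc/hdfs"
)

func main() {
	// An empty path makes LoadHadoopConf consult HADOOP_CONF_DIR, falling
	// back to ${HADOOP_HOME}/conf, as described above.
	conf := hdfs.LoadHadoopConf("")
	namenodes, err := conf.Namenodes()
	if err != nil {
		log.Fatal(err) // no namenode hosts found in the configuration
	}
	log.Printf("configured namenodes: %v", namenodes)

	// New("") reads the same configuration files to locate the namenode.
	client, err := hdfs.New("")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("connected")
}
```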
synx
ruby
Ruby
![synx logo](https://raw.githubusercontent.com/venmo/synx/marklarr/dev/docs/images/synx-logo.png?token=<KEY>)

[![Gem Version](https://badge.fury.io/rb/synx.svg)](http://badge.fury.io/rb/synx) [![Build Status](https://travis-ci.org/venmo/synx.svg?branch=master)](https://travis-ci.org/venmo/synx)

A command-line tool that reorganizes your Xcode project folder to match your Xcode groups.

![synx gif](https://raw.githubusercontent.com/venmo/synx/marklarr/dev/docs/images/synx.gif?token=<KEY>%3D%3D--fc7d8546f3d4860df9024b1ee82ea13b86a2da88)

##### Xcode

![synx Xcode](https://raw.githubusercontent.com/venmo/synx/marklarr/dev/docs/images/synx-Xcode.jpg?token=<KEY>3D%3D--969e312f6ee33430855c495f25d9f5ff78fa9e96)

##### Finder

![synx finder before/after](https://raw.githubusercontent.com/venmo/synx/marklarr/dev/docs/images/synx-finder-before-after.png?token=<KEY>--8cff7616e4af2f6f2eed624623092745184c0235)

Installation
---
```
$ gem install synx
```

Usage
---
### Basic

:warning: **WARNING: Make sure that your project is backed up through source control before doing anything** :warning:

Execute the command on your project to have it reorganize the files on the file system:

```
$ synx path/to/my/project.xcodeproj
```

Reorganizing the files may have confused CocoaPods. If you use CocoaPods, run:

```
$ pod install
```

You're good to go!

### Advanced

Synx supports the following options:

```
--prune, -p                  remove source files and image resources that are not referenced by the Xcode project
--no-color                   removes all color from the output
--no-default-exclusions      doesn't use the default exclusions of /Libraries, /Frameworks, and /Products
--no-sort-by-name            disable sorting groups by name
--quiet, -q                  silence all output
--exclusion, -e EXCLUSION    ignore an Xcode group while syncing
```

For example, OCMock could have been organized using this command:

```
$ synx -p -e "/OCMock/Core Mocks" -e /OCMockTests Source/OCMock.xcodeproj/
```

if they had wanted not to sync the `/OCMock/Core Mocks` and `/OCMockTests` groups, and also remove (`-p`) any image/source files found by synx that weren't referenced by any groups in Xcode.

Contributing
---
We'd love to see your ideas for improving this library! The best way to contribute is by submitting a pull request. We'll do our best to respond to your patch as soon as possible. You can also submit a [new Github issue](https://github.com/venmo/synx/issues/new) if you find bugs or have questions. :octocat: Please make sure to follow our general coding style and add test coverage for new features!

Contributors
---
* [@vrjbndr](https://github.com/vrjbndr), awesome logo!
* [@ayanonagon](https://github.com/ayanonagon) and [@benzguo](https://github.com/benzguo), feedback.
thread_fs_calc
ctan
TeX
## 1 thread_fs_calc grammar.

Create a rule's first set of called threads by building a closure-only state. The terminals within this state are the "called threads" used for the "list-of-transitive-threads" construct of the generated "fsc" emitted file for O2(linker). Each rule's first set within the grammar is built this way, including the "start rule" of the grammar and possibly rules used only in a "parallel-la-boundary" expression.

The Algorithm. The grammar reads each individual rule-def and all its subrule-def(s). Using its bottom-up recognition, Rsubrule_def adds the 1st element of the subrule into the fs_list. Rrule processes the fs_list as a closure-only state, generating the rule's first set. In generating the first set, the elements in fs_list are consumed as they are evaluated, by removal from the list. Referenced terminals are added to the rule's first set. For 1st-time referenced rules, their subrules are added at the end of fs_list for eventual consumption. The neat thing about this algorithm is that only the 1st element in the fs_list is ever visited: it is a singular point of evaluation that is thrown out and replaced by its next-in-line element (ahh, the bank queue and the teller). Due to cweave irregularities in formatting the C++ code of this grammar, please see the o2externs documentation, where the routine GEN_CALLED_THREADS_FS_OF_RULE is coded as an external to overcome this deficiency.

## 2 Fsm Cthread_fs_calc class.

### 3. Cthread_fs_calc op directive.
⟨ Cthread_fs_calc op directive 3 ⟩ ≡

    rule_def__ = 0;
    subrule_def__ = 0;
    elem_t__ = 0;
    ip_can__ = (tok_can<AST*>*) parser__->token_supplier__;

### 4. Cthread_fs_calc user-declaration directive.
⟨ Cthread_fs_calc user-declaration directive 4 ⟩ ≡

    public:
      FS_ELEM_LIST_type fs_list__;
      RULES_IN_FS_LIST_type rules_in_fs_list__;
      rule_def* rule_def__;
      T_subrule_def* subrule_def__;
      AST* elem_t__;
      tok_can<AST*>* ip_can__;

### 5. Cthread_fs_calc user-prefix-declaration directive.
⟨ Cthread_fs_calc user-prefix-declaration directive 5 ⟩ ≡

    #include "o2_externs.h"

## 6 Rthread_fs_calc rule.

Subrule: Rrules

## 8 Rrule rule.

Subrule: Rrule_def Rsubrules

⟨ Rrule subrule 1 op directive 8 ⟩ ≡

    Cthread_fs_calc* fsm = (Cthread_fs_calc*) rule_info__.parser__->fsm_tbl__;
    GEN_CALLED_THREADS_FS_OF_RULE(fsm->fs_list__, fsm->rules_in_fs_list__, fsm->rule_def__);

## 9 Rrule_def rule.

Initialize for its subrule findings.

⟨ Rrule_def subrule 1 op directive 9 ⟩ ≡

    Cthread_fs_calc* fsm = (Cthread_fs_calc*) rule_info__.parser__->fsm_tbl__;
    fsm->rule_def__ = sf->p1__;
    fsm->rules_in_fs_list__.clear();
    fsm->fs_list__.clear();

## 10 Rsubrules rule.

Subrule: Rsubrule

## 11 Rsubrule rule.

Subrule: Rsubrule_def

## 12 Rsubrule_def rule.

Create the entry within the fs_list__. Only the 1st element of each subrule is evaluated.

⟨ Rsubrule_def subrule 1 op directive 12 ⟩ ≡

    Cthread_fs_calc* fsm = (Cthread_fs_calc*) rule_info__.parser__->fsm_tbl__;
    fsm->subrule_def__ = sf->p1__;
    AST* sr_t = fsm->subrule_def__->subrule_s_tree();
    AST* et = AST::get_spec_child(*sr_t, 1);
    fsm->fs_list__.push_back(FS_ELEM_type(fsm->rule_def__, fsm->subrule_def__,

## 13 First Set Language for O2(linker).

    /* File: thread_fs_calc.fsc Date and Time: Sun Oct 30 13:39:24 2011 */
    transitive n
    grammar-name "thread_fs_calc"
    name-space "NS_thread_fs_calc"
    thread-name "Cthread_fs_calc"
    monolithic y
    file-name "thread_fs_calc.fsc"
    no-of-T 569
    list-of-native-first-set-terminals 1 rule_def end-list-of-native-first-set-terminals
    list-of-transitive-threads 0 end-list-of-transitive-threads
    list-of-used-threads end-list-of-used-threads
    fsm-comments "Determine first set of thread calls per rule."

## 15 Index.

AST: 3, 4, 12. clear: 9. Cthread_fs_calc: 8, 9, 12. cweave: 1. elem_t: 3, 4. eog: 6. et: 12. FS_ELEM_LIST_type: 4. FS_ELEM_type: 12. fs_list: 1, 4, 8, 9, 12. fsm: 8, 9, 12. fsm_tbl: 8, 9, 12. GEN_CALLED_THREADS_FS_OF_RULE: 1, 8. get_spec_child: 12. ip_can: 3, 4. o2externs: 1. parser: 3, 8, 9, 12. push_back: 12. p1: 9, 12. Rrule: 1, 7, 8. Rrule_def: 8, 9. Rrules: 6, 7. Rsubrule: 10, 11. Rsubrule_def: 1, 11, 12. Rsubrules: 10, 11. rule_def: 3, 4, 8, 9, 12. rule_info: 8, 9, 12. rules_in_fs_list: 4, 8, 9. RULES_IN_FS_LIST_type: 4. sf: 9, 12. sr_t: 12. subrule_def: 3, 4, 12. subrule_s_tree: 12. T_subrule_def: 4. thread_fs_calc: 1. tok_can: 3, 4. token_supplier: 3.

Named chunks: ⟨Cthread_fs_calc op directive 3⟩, ⟨Cthread_fs_calc user-declaration directive 4⟩, ⟨Cthread_fs_calc user-prefix-declaration directive 5⟩, ⟨Rrule subrule 1 op directive 8⟩, ⟨Rrule_def subrule 1 op directive 9⟩, ⟨Rsubrule_def subrule 1 op directive 12⟩.
@keystone-next/app-schema-router-legacy
npm
JavaScript
GraphQL Schema Router
===
A KeystoneJS App that routes requests to different GraphQL schemas.

The `SchemaRouterApp` allows you to define a `routerFn` which takes `(req, res)` and returns a `routerId`, which is used to pick between different GraphQL schemas which exist at the same `apiPath`.

Usage
---
```
const { Keystone } = require('@keystone-next/keystone-legacy');
const { GraphQLAppPlayground } = require('@keystone-next/app-graphql-playground-legacy');
const { SchemaRouterApp } = require('@keystone-next/app-schema-router-legacy');
const { GraphQLApp } = require('@keystone-next/app-graphql-legacy');
const { AdminUIApp } = require('@keystone-next/app-admin-ui-legacy');

module.exports = {
  keystone: new Keystone(),
  apps: [
    new GraphQLAppPlayground({ apiPath }),
    new SchemaRouterApp({
      apiPath,
      routerFn: (req) => req.session.keystoneItemId ? 'private' : 'public',
      apps: {
        public: new GraphQLApp({ apiPath, schemaName: 'public', graphiqlPath: undefined }),
        private: new GraphQLApp({ apiPath, schemaName: 'private', graphiqlPath: undefined }),
      },
    }),
    new AdminUIApp()
  ],
};
```

Config
---
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `apiPath` | `String` | `/admin/api` | The GraphQL API path |
| `routerFn` | `Function` | `() => {}` | A function which takes `(req, res)` and returns a `routerId` |
| `apps` | `Object` | `{}` | An object with `routerId`s as keys and `GraphQLApp`s as values |
github.com/minio/direct-csi-driver
go
Go
README
---

### Container Storage Interface (CSI) driver for Direct Volume Access

This repository provides tools and scripts for building and testing the DIRECT CSI provider.

#### Steps to run

```
# set the environment variables
$> cat << EOF > default.env
DIRECT_CSI_DRIVER_PATHS=/var/lib/direct-csi-driver/data{1...4}
DIRECT_CSI_DRIVER_COMMON_CONTAINER_ROOT=/var/lib/direct-csi-driver
DIRECT_CSI_DRIVER_COMMON_HOST_ROOT=/var/lib/direct-csi-driver
EOF
$> export $(cat default.env)

# create the namespace for the driver
$> kubectl apply -k github.com/minio/direct-csi-driver

# utilize the volume in your application
#
# ---
# volumeClaimTemplates: # This is the specification in which you reference the StorageClass
#   - metadata:
#       name: direct-csi-driver-min-io-volume
#     spec:
#       accessModes: [ "ReadWriteOnce" ]
#       resources:
#         requests:
#           storage: 10Gi
#       storageClassName: direct.csi.driver.min.io # This field references the existing StorageClass
# ---
#
# Example application in test-app.yaml
$> kubectl create -f test-app.yaml
```

#### License

Use of DIRECT CSI driver is governed by the AGPLv3 license that can be found in the [LICENSE](https://github.com/minio/direct-csi-driver/blob/v0.1.3/LICENSE) file.

Documentation
---

There is no documentation for this package.
lemmy_apub
rust
Rust
Struct lemmy_apub::VerifyUrlData === ``` pub struct VerifyUrlData(pub DbPool); ``` Tuple Fields --- `0: DbPool`Trait Implementations --- ### impl Clone for VerifyUrlData #### fn clone(&self) -> VerifyUrlData Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn verify<'life0, 'life1, 'async_trait>( &'life0 self, url: &'life1 Url ) -> Pin<Box<dyn Future<Output = Result<(), &'static str>> + Send + 'async_trait>>where Self: 'async_trait, 'life0: 'async_trait, 'life1: 'async_trait, Should return Ok iff the given url is valid for processing.Auto Trait Implementations --- ### impl !RefUnwindSafe for VerifyUrlData ### impl Send for VerifyUrlData ### impl Sync for VerifyUrlData ### impl Unpin for VerifyUrlData ### impl !UnwindSafe for VerifyUrlData Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. T: Any, #### fn into_any(self: Box<T, Global>) -> Box<dyn Any, GlobalConvert `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`. `Box<dyn Any>` can then be further `downcast` into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.#### fn into_any_rc(self: Rc<T>) -> Rc<dyn AnyConvert `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`. `Rc<Any>` can then be further `downcast` into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.#### fn as_any(&self) -> &(dyn Any + 'static) Convert `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.#### fn as_any_mut(&mut self) -> &mut (dyn Any + 'static) Convert `&mut Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.### impl<T> DowncastSync for Twhere T: Any + Send + Sync, #### fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + SyncConvert `Arc<Trait>` (where `Trait: Downcast`) to `Arc<Any>`. `Arc<Any>` can then be further `downcast` into `Arc<ConcreteType>` where `ConcreteType` implements `Trait`.### impl<T> DynClone for Twhere T: Clone, #### fn __clone_box(&self, _: Private) -> *mut() ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> IntoSql for T #### fn into_sql<T>(self) -> Self::Expressionwhere Self: AsExpression<T> + Sized, T: SqlType + TypedExpressionType, Convert `self` to an expression for Diesel’s query builder. &'a Self: AsExpression<T>, T: SqlType + TypedExpressionType, Convert `&self` to an expression for Diesel’s query builder. #### const ALIGN: usize = mem::align_of::<T>() The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. 
#### fn execute<'query, 'conn>( self, conn: &'conn mut Conn ) -> <Conn as AsyncConnection>::ExecuteFuture<'conn, 'query>where Conn: AsyncConnection + Send, Self: ExecuteDsl<Conn, <Conn as AsyncConnection>::Backend> + 'query, Executes the given command, returning the number of rows affected. self, conn: &'conn mut Conn ) -> AndThen<Self::LoadFuture<'conn>, TryCollect<Self::Stream<'conn>, Vec<U, Global>>, fn(_: Self::Stream<'conn>) -> TryCollect<Self::Stream<'conn>, Vec<U, Global>>>where U: Send, Conn: AsyncConnection, Self: LoadQuery<'query, Conn, U> + 'query, Executes the given query, returning a `Vec` with the returned rows. self, conn: &'conn mut Conn ) -> Self::LoadFuture<'conn>where Conn: AsyncConnection, U: 'conn, Self: LoadQuery<'query, Conn, U> + 'query, Executes the given query, returning a [`Stream`] with the returned rows. self, conn: &'conn mut Conn ) -> AndThen<Self::LoadFuture<'conn>, Map<StreamFuture<Pin<Box<Self::Stream<'conn>, Global>>>, fn(_: (Option<Result<U, Error>>, Pin<Box<Self::Stream<'conn>, Global>>)) -> Result<U, Error>>, fn(_: Self::Stream<'conn>) -> Map<StreamFuture<Pin<Box<Self::Stream<'conn>, Global>>>, fn(_: (Option<Result<U, Error>>, Pin<Box<Self::Stream<'conn>, Global>>)) -> Result<U, Error>>>where U: Send + 'conn, Conn: AsyncConnection, Self: LoadQuery<'query, Conn, U> + 'query, Runs the command, and returns the affected row. self, conn: &'conn mut Conn ) -> AndThen<Self::LoadFuture<'conn>, TryCollect<Self::Stream<'conn>, Vec<U, Global>>, fn(_: Self::Stream<'conn>) -> TryCollect<Self::Stream<'conn>, Vec<U, Global>>>where U: Send, Conn: AsyncConnection, Self: LoadQuery<'query, Conn, U> + 'query, Runs the command, returning an `Vec` with the affected rows. self, conn: &'conn mut Conn ) -> AndThen<<Self::Output as LoadQuery<'query, Conn, U>>::LoadFuture<'conn>, Map<StreamFuture<Pin<Box<<Self::Output as LoadQuery<'query, Conn, U>>::Stream<'conn>, Global>>>, fn(_: (Option<Result<U, Error>>, Pin<Box<<Self::Output as LoadQuery<'query, Conn, U>>::Stream<'conn>, Global>>)) -> Result<U, Error>>, fn(_: <Self::Output as LoadQuery<'query, Conn, U>>::Stream<'conn>) -> Map<StreamFuture<Pin<Box<<Self::Output as LoadQuery<'query, Conn, U>>::Stream<'conn>, Global>>>, fn(_: (Option<Result<U, Error>>, Pin<Box<<Self::Output as LoadQuery<'query, Conn, U>>::Stream<'conn>, Global>>)) -> Result<U, Error>>>where U: Send + 'conn, Conn: AsyncConnection, Self: LimitDsl, Self::Output: LoadQuery<'query, Conn, U> + Send + 'query, Attempts to load a single record. #### type Output = T Should always be `Self`### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more
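To make the documented contract concrete ("Should return Ok iff the given url is valid for processing"), here is a hedged, standalone sketch of the kind of check such a verifier performs. It does not use lemmy's `DbPool` or the federation crate's traits; the allow-list logic and names are illustrative only, and only the return shape matches the documented `verify` method.

```
// Hypothetical URL check with the same Result<(), &'static str> shape as the
// documented verify method; uses only the `url` crate.
fn verify_url(url: &url::Url, allowed_hosts: &[&str]) -> Result<(), &'static str> {
    match url.host_str() {
        Some(host) if allowed_hosts.iter().any(|a| *a == host) => Ok(()),
        Some(_) => Err("instance is not on the allow list"),
        None => Err("url has no host"),
    }
}
```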
@aws-cdk/aws-globalaccelerator
npm
JavaScript
AWS::GlobalAccelerator Construct Library === --- > AWS CDK v1 has reached End-of-Support on 2023-06-01. > This package is no longer being updated, and users should migrate to AWS CDK v2. > For more information on how to migrate, see the [*Migrating to AWS CDK v2* guide](https://docs.aws.amazon.com/cdk/v2/guide/migrating-v2.html). --- Introduction --- AWS Global Accelerator (AGA) is a service that improves the availability and performance of your applications with local or global users. It intercepts your user's network connection at an edge location close to them, and routes it to one of potentially multiple, redundant backends across the more reliable and less congested AWS global network. AGA can be used to route traffic to Application Load Balancers, Network Load Balancers, EC2 Instances and Elastic IP Addresses. For more information, see the [AWS Global Accelerator Developer Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_GlobalAccelerator.html). Example --- Here's an example that sets up a Global Accelerator for two Application Load Balancers in two different AWS Regions: ``` // Create an Accelerator const accelerator = new globalaccelerator.Accelerator(this, 'Accelerator'); // Create a Listener const listener = accelerator.addListener('Listener', { portRanges: [ { fromPort: 80 }, { fromPort: 443 }, ], }); // Import the Load Balancers const nlb1 = elbv2.NetworkLoadBalancer.fromNetworkLoadBalancerAttributes(this, 'NLB1', { loadBalancerArn: 'arn:aws:elasticloadbalancing:us-west-2:111111111111:loadbalancer/app/my-load-balancer1/e16bef66805b', }); const nlb2 = elbv2.NetworkLoadBalancer.fromNetworkLoadBalancerAttributes(this, 'NLB2', { loadBalancerArn: 'arn:aws:elasticloadbalancing:ap-south-1:111111111111:loadbalancer/app/my-load-balancer2/5513dc2ea8a1', }); // Add one EndpointGroup for each Region we are targeting listener.addEndpointGroup('Group1', { endpoints: [new ga_endpoints.NetworkLoadBalancerEndpoint(nlb1)], }); listener.addEndpointGroup('Group2', { // Imported load balancers automatically calculate their Region from the ARN. // If you are load balancing to other resources, you must also pass a `region` // parameter here. endpoints: [new ga_endpoints.NetworkLoadBalancerEndpoint(nlb2)], }); ``` Concepts --- The **Accelerator** construct defines a Global Accelerator resource. An Accelerator includes one or more **Listeners** that accepts inbound connections on one or more ports. Each Listener has one or more **Endpoint Groups**, representing multiple geographically distributed copies of your application. There is one Endpoint Group per Region, and user traffic is routed to the closest Region by default. An Endpoint Group consists of one or more **Endpoints**, which is where the user traffic coming in on the Listener is ultimately sent. The Endpoint port used is the same as the traffic came in on at the Listener, unless overridden. 
Types of Endpoints --- There are 4 types of Endpoints, and they can be found in the `@aws-cdk/aws-globalaccelerator-endpoints` package: * Application Load Balancers * Network Load Balancers * EC2 Instances * Elastic IP Addresses ### Application Load Balancers ``` declare const alb: elbv2.ApplicationLoadBalancer; declare const listener: globalaccelerator.Listener; listener.addEndpointGroup('Group', { endpoints: [ new ga_endpoints.ApplicationLoadBalancerEndpoint(alb, { weight: 128, preserveClientIp: true, }), ], }); ``` ### Network Load Balancers ``` declare const nlb: elbv2.NetworkLoadBalancer; declare const listener: globalaccelerator.Listener; listener.addEndpointGroup('Group', { endpoints: [ new ga_endpoints.NetworkLoadBalancerEndpoint(nlb, { weight: 128, }), ], }); ``` ### EC2 Instances ``` declare const listener: globalaccelerator.Listener; declare const instance: ec2.Instance; listener.addEndpointGroup('Group', { endpoints: [ new ga_endpoints.InstanceEndpoint(instance, { weight: 128, preserveClientIp: true, }), ], }); ``` ### Elastic IP Addresses ``` declare const listener: globalaccelerator.Listener; declare const eip: ec2.CfnEIP; listener.addEndpointGroup('Group', { endpoints: [ new ga_endpoints.CfnEipEndpoint(eip, { weight: 128, }), ], }); ``` Client IP Address Preservation and Security Groups --- When using the `preserveClientIp` feature, AGA creates **Elastic Network Interfaces** (ENIs) in your AWS account, that are associated with a Security Group AGA creates for you. You can use the security group created by AGA as a source group in other security groups (such as those for EC2 instances or Elastic Load Balancers), if you want to restrict incoming traffic to the AGA security group rules. AGA creates a specific security group called `GlobalAccelerator` for each VPC it has an ENI in (this behavior can not be changed). CloudFormation doesn't support referencing the security group created by AGA, but this construct library comes with a custom resource that enables you to reference the AGA security group. Call `endpointGroup.connectionsPeer()` to obtain a reference to the Security Group which you can use in connection rules. You must pass a reference to the VPC in whose context the security group will be looked up. Example: ``` declare const listener: globalaccelerator.Listener; // Non-open ALB declare const alb: elbv2.ApplicationLoadBalancer; const endpointGroup = listener.addEndpointGroup('Group', { endpoints: [ new ga_endpoints.ApplicationLoadBalancerEndpoint(alb, { preserveClientIp: true, }), ], }); // Remember that there is only one AGA security group per VPC. declare const vpc: ec2.Vpc; const agaSg = endpointGroup.connectionsPeer('GlobalAcceleratorSG', vpc); // Allow connections from the AGA to the ALB alb.connections.allowFrom(agaSg, ec2.Port.tcp(443)); ``` Readme --- ### Keywords * aws * cdk * constructs * AWS::GlobalAccelerator * aws-globalaccelerator
Splinets
cran
R
Package ‘Splinets’ March 6, 2023 Date 2023-02-28 Type Package Title Functional Data Analysis using Splines and Orthogonal Spline Bases Version 1.5.0 Author <NAME> [aut], <NAME> [aut], <NAME> [aut, cre, cph] Description Splines are efficiently represented through their Taylor expansion at the knots. The repre- sentation accounts for the support sets and is thus suitable for sparse func- tional data. Two cases of boundary conditions are considered: zero-boundary or periodic- boundary for all derivatives except the last. The periodical splines are represented graphically us- ing polar coordinates. The B-splines and orthogonal bases of splines that reside on small to- tal support are implemented. The orthogonal bases are referred to as 'splinets' and are uti- lized for functional data analysis. Random spline generator is implemented as well as all funda- mental algebraic and calculus operations on splines. The optimal, in the least square sense, func- tional fit by 'splinets' to data consisting of sampled values of func- tions as well as splines build over another set of knots is obtained and used for functional data anal- ysis. <arXiv:2102.00733>, <doi:10.1016/j.cam.2022.114444>, <arXiv:2302.07552>. Depends R (>= 3.5.0) License GPL (>= 2) Encoding UTF-8 LazyData true RoxygenNote 7.2.3 Imports methods, graphics NeedsCompilation no Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-03-06 19:40:02 UTC R topics documented: construc... 2 deriv... 4 dintegr... 6 evsplin... 8 exsup... 10 gathe... 13 gramia... 14 integr... 17 is.splinet... 20 is.splinets,Splinets-metho... 23 lincom... 24 lines,Splinets-metho... 27 plot,Splinets-metho... 28 projec... 30 refin... 37 rsplin... 40 seq2dya... 43 spline... 45 Splinets-clas... 51 subsampl... 53 sym2on... 55 tir... 57 truc... 58 win... 60 construct Construction of a Splinets object Description The function constructs a Splinets object correspond to a single spline (size=1) from a vector of knots and a matrix of proposed derivatives. The matrix is tested for its correctness like in is.splinets and adjusted using one of the implemented methods. Usage construct(knots, smorder, matder, supp = vector(), mthd = "RRM") Arguments knots n+2 vector, the knots over which the spline is built; There should be at least 2*smorder+4 of knots. smorder integer, the order of smoothness; matder (n+2)x(smorder+1) matrix, the matrix of derivatives; This matrix will be cor- rected if does not correspond to a proper spline. supp vector, either empty or two integers representing the single interval support; mthd string, one of the three methods for correction of the matrix of derivative: ’CRLC’ matching mostly the highest derivative, ’CRFC’ matching mostly the function values at the knots, ’RRM’ balanced matching between all derivatives; The default method is 'RRM', see the paper on the package for further details about the methods. Details The function constructs a Splinet-object only over a single interval support. Combining with the function lincom allows to introduce a multi-component support. Value A Splinets-object corresponding to a single spline. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. 
(2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also is.splinets for diagnostic of Splinets-objects; gather and subsample for combining and sub- sampling Splinets-objects, respectively, plot,Splinets-method for a plotting method for Splinets- objects; lincomb for combining splines with more complex than a single interval support sets; Examples #-------------------------------------------------------------# #---Building 'Splinets' using different derviative matching---# #-------------------------------------------------------------# n=17; k=4 set.seed(5) xi=sort(runif(n+2)); xi[1]=0; xi[n+1]=1 #Random matrix of derivatives -- the noise (wild) case to be corrected S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) #construction of an object, the order of knots is corrected is.splinets(spl)[[1]] #validation spl=construct(xi,k,S,mthd='CRFC') #another method of the derivative matching is.splinets(spl)[[1]] spl=construct(xi,k,S,mthd='CRLC') #one more method is.splinets(spl)[[1]] #-----------------------------------------------------# #---------Building not over the full support----------# #-----------------------------------------------------# set.seed(5) n=20; xi=sort(runif(n+2));xi[1]=0;xi[n+2]=1 spl=construct(xi,k,S) #construction of a spline as the 'Splinets' object over the entire range is.splinets(spl)[[1]] #verification of the conditions supp=c(3,17) #definition of the single interval support SS=matrix(rnorm((supp[2]-supp[1]+1)*(k+1)),ncol=(k+1)) #The matrix of derivatives #over the support range sspl=construct(xi,k,SS,supp=supp) #construction of a spline as the 'Splinets' object #with the given support range is.splinets(sspl)[[1]] #Verification sspl@knots sspl@supp sspl@der deriva Derivatives of splines Description The function generates a Splinets-object which contains the first order derivatives of all the splines from the input Splinets-object. The function also verifies the support set of the output to provide the accurate information about the support sets by excluding regions over which the original func- tion is constant. Usage deriva(object, epsilon = 1e-07) Arguments object Splinets object of the smoothness order k; epsilon positive number, controls removal of knots from the support; If the derivative is smaller than this number, it is considered to be zero and the corresponding knots are removed from the support.The default value is 1e-7. Value A Splinets-object of the order k-1 that also contains the updated information about the support set. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." 
<arXiv:2302.07552> See Also integra for generating the indefinite integral of a spline that can be viewed as the inverse operation to deriva; dintegra for the definite integral of a spline; Examples #-------------------------------------------------------# #--- Generating the deriviative functions of splines ---# #-------------------------------------------------------# n=13; k=4 set.seed(5) xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 spl=construct(xi,k,matrix(rnorm((n+2)*(k+1)),ncol=(k+1))) #constructing three splines spl=gather(spl, construct(xi,k,matrix(rnorm((n+2)*(k+1)),ncol=(k+1)))) spl=gather(spl, construct(xi,k,matrix(rnorm((n+2)*(k+1)),ncol=(k+1)))) # calculate the derivative of splines dspl = deriva(spl) plot(spl) plot(dspl) #----------------------------------------------# #--- Examples with different support ranges ---# #----------------------------------------------# n=25; k=3 xi=seq(0,1,by=1/(n+1)); set.seed(5) #Defining support ranges for three splines supp=matrix(c(2,12,4,20,6,25),byrow=TRUE,ncol=2) #Initial random matrices of the derivative for each spline SS1=matrix(rnorm((supp[1,2]-supp[1,1]+1)*(k+1)),ncol=(k+1)) SS2=matrix(rnorm((supp[2,2]-supp[2,1]+1)*(k+1)),ncol=(k+1)) SS3=matrix(rnorm((supp[3,2]-supp[3,1]+1)*(k+1)),ncol=(k+1)) spl=construct(xi,k,SS1,supp[1,]) #constructing the first correct spline nspl=construct(xi,k,SS2,supp[2,]) spl=gather(spl,nspl) #the second and the first ones nspl=construct(xi,k,SS3,supp[3,]) spl=gather(spl,nspl) #the third is added der_spl = deriva(spl) par(mar=c(1,1,1,1)) par(mfrow=c(2,1)) plot(der_spl) plot(spl) par(mfrow=c(1,1)) dintegra Calculating the definite integral of a spline. Description The function calculates the definite integrals of the splines in an input Splinets-object. Usage dintegra(object, sID = NULL) Arguments object Splinets-object; sID vector of integers, the indicies specifying for which splines in the Splinets- object the definite integral is to be evaluated; If sID=NULL, then the definite integral of all splines in the object are calculated. The default is NULL. Value A length(sID) x 2 matrix, with the first column holding the id of splines and the second column holding the corresponding definite integrals. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." 
<arXiv:2302.07552> See Also integra for generating the indefinite integral; deriva for generating derivative functions of splines; Examples #------------------------------------------# #--- Example with common support ranges ---# #------------------------------------------# n=23; k=4 set.seed(5) xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 # generate a random matrix S S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) # construct the spline spl=construct(xi,k,S) #constructing the first correct spline spl=gather(spl,construct(xi,k,S,mthd='CRFC')) #the second and the first ones spl=gather(spl,construct(xi,k,S,mthd='CRLC')) #the third is added plot(spl) dintegra(spl, sID = c(1,3)) dintegra(spl) plot(spl,sID=c(1,3)) #---------------------------------------------# #--- Examples with different support ranges---# #---------------------------------------------# n=25; k=2 xi=seq(0,1,by=1/(n+1)) #Defining support ranges for three splines supp=matrix(c(2,12,4,20,6,25),byrow=TRUE,ncol=2) #Initial random matrices of the derivative for each spline set.seed(5) SS1=matrix(rnorm((supp[1,2]-supp[1,1]+1)*(k+1)),ncol=(k+1)) SS2=matrix(rnorm((supp[2,2]-supp[2,1]+1)*(k+1)),ncol=(k+1)) SS3=matrix(rnorm((supp[3,2]-supp[3,1]+1)*(k+1)),ncol=(k+1)) spl=construct(xi,k,SS1,supp[1,]) #constructing the first correct spline nspl=construct(xi,k,SS2,supp[2,]) spl=gather(spl,nspl) #the second and the first ones nspl=construct(xi,k,SS3,supp[3,]) spl=gather(spl,nspl) #the third is added plot(spl) dintegra(spl, sID = 1) dintegra(spl) #The third order case n=40; xi=seq(0,1,by=1/(n+1)); k=3; support=list(matrix(c(2,12,15,27,30,40),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))); sp1 = is.splinets(sp)[[2]] support=list(matrix(c(2,13,17,30),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))); sp2 = is.splinets(sp)[[2]] sp = gather(sp1,sp2) dintegra(sp) plot(sp) lcsp=lincomb(sp,matrix(c(-1,1),ncol=2)) dintegra(lcsp) #linearity of the integral dintegra(sp2)-dintegra(sp1) evspline Evaluating splines at given arguments. Description For a Splinets-object S and a vector of arguments t, the function returns the matrix of values for the splines in S. The evaluations are done through the Taylor expansions, so on the ith interval, for t ∈ [ξ_i, ξ_{i+1}]: S(t) = \sum_{j=0}^{k} s_{ij} \frac{(t - ξ_i)^j}{j!}. For the zero order splines, which are discontinuous at the knots, the following convention is taken: at the LHS knots the value is taken as the RHS-limit, at the RHS knots as the LHS-limit. The value at the central knot for the zero order and an odd number of knots case is assumed to be zero. Usage evspline(object, sID = NULL, x = NULL, N = 250) Arguments object Splinets object; sID vector of integers, the indices specifying splines in the Splinets list to be evaluated; If sID=NULL, then all splines in the Splinets-object are evaluated. The default value is NULL. x vector, the arguments at which the splines are evaluated; If x is NULL, then the splines are evaluated over regular grids per each interval of the support. The default value is x=NULL. N integer, the number of points per interval between two consecutive knots at which the splines are evaluated.
The default value is N = 250; Value The length(x) x length(sID+1) matrix containing the argument values, in the first column, then, columnwise, values of the subsequent splines. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also is.splinets for diagnostic of Splinets-objects; plot,Splinets-method for plotting Splinets- objects; Examples #---------------------------------------------# #-- Example piecewise polynomial vs. spline --# #---------------------------------------------# n=20; k=3; xi=sort(runif(n+2)) sp=new("Splinets",knots=xi) #Randomly assigning the derivatives -- a very 'wild' function. S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) sp@supp=list(t(c(1,n+2))); sp@smorder=k; sp@der[[1]]=S y = evspline(sp) plot(y,type = 'l',col='red') #A correct spline object nsp=is.splinets(sp) sp2=nsp$robject y = evspline(sp2) lines(y,type='l') #---------------------------------------------# #-- Example piecewise polynomial vs. spline --# #---------------------------------------------# #Gathering three 'Splinets' objects using three different #method to correct the derivative matrix n=17; k=4; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) # generate a random matrix S spl=construct(xi,k,S) #constructing the first correct spline spl=gather(spl,construct(xi,k,S,mthd='CRFC')) #the second and the first ones spl=gather(spl,construct(xi,k,S,mthd='CRLC')) #the third is added y = evspline(spl, sID= 1) plot(y,type = 'l',col='red') y = evspline(spl, sID = c(1,3)) plot(y[,1:2],type = 'l',col='red') points(y[,c(1,3)],type = 'l',col='blue') #sID = NULL y = evspline(spl) plot(y[,1:2],type = 'l',col='red',ylim=range(y[,2:4])) points(y[,c(1,3)],type = 'l',col='blue') points(y[,c(1,4)],type = 'l',col='green') #---------------------------------------------# #--- Example with different support ranges ---# #---------------------------------------------# n=25; k=3; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 #Defining support ranges for three splines supp=matrix(c(2,12,4,20,6,25),byrow=TRUE,ncol=2) #Initial random matrices of the derivative for each spline SS1=matrix(rnorm((supp[1,2]-supp[1,1]+1)*(k+1)),ncol=(k+1)) SS2=matrix(rnorm((supp[2,2]-supp[2,1]+1)*(k+1)),ncol=(k+1)) SS3=matrix(rnorm((supp[3,2]-supp[3,1]+1)*(k+1)),ncol=(k+1)) spl=construct(xi,k,SS1,supp[1,]) #constructing the first correct spline nspl=construct(xi,k,SS2,supp[2,],'CRFC') spl=gather(spl,nspl) #the second and the first ones nspl=construct(xi,k,SS3,supp[3,],'CRLC') spl=gather(spl,nspl) #the third is added y = evspline(spl, sID= 1) plot(y,type = 'l',col='red') y = evspline(spl, sID = c(1,3)) plot(y[,1:2],type = 'l',col='red') points(y[,c(1,3)],type = 'l',col='blue') #sID = NULL -- all splines evaluated y = evspline(spl) plot(y[,c(1,3)],type = 'l',col='red',ylim=c(-1,1)) points(y[,1:2],type = 'l',col='blue') points(y[,c(1,4)],type = 'l',col='green') exsupp Correcting support sets and reshaping the matrix of derivatives at the knots. Description The function is adjusting for a potential reduction in the support sets due to negligibly small values of rows in the derivative matrix. 
If the derivative matrix has a row equal to zero (or smaller than a neglible positive value) in the one-sided representation of it (see the references and sym2one), then the corresponding knot should be removed from the support set. The function can be used to eliminate the neglible support components from a Splinets-object. Usage exsupp(S, supp = NULL, epsilon = 1e-07) Arguments S (m+2)x(k+1) matrix, the values of the derivatives at the knots over some input support set which has the cardinality m+2; The matrix is assumed to be in the symmetric around center form for each component of the support. supp NULL or Nsupp x2 matrix of integers, the endpoints indices for the input support intervals, where Nsupp is the number of the components in the support set; If the parameter is NULL, than the full support is assumed. epsilon small positive number, threshold value of the norm of rows of S; If the norm of a row of S is less than epsilon, then it will be viewed as a neglible and the knot is excluded from the inside of the support set. Details This function typically would be applied to an element in the list given by SLOT der of a Splinets- object. It eliminates from the support sets regions of negligible values of a corresponding spline and its derivatives. Value The list of two elements: exsupp$rS is the reduced derivative matrix from which the neglible rows, if any, have been removed and exsupp$rsupp is the corresponding reduced support. The output matrix has all the support components in the symmetric around the center form, which is how the derivatives are kept in the Splinets-objects. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." 
<arXiv:2302.07552> See Also Splinets-class for the description of the Splinets-class; sym2one for switching between the representations of a derivative matrix over a general support set; lincomb for evaluating a linear transformation of splines in a Splinets-object; is.splinets for a diagnostic tool of the Splinets- objects; Examples #----------------------------------------------------# #---Correcting support sets in a derivative matrix---# #----------------------------------------------------# n=20; k=3; xi=seq(0,1,by=1/(n+1)) #an even number of equally spaced knots set.seed(5) S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) #this spline will be used below to construct a 'sparse' spline is.splinets(spl) #verification plot(spl) xxi=seq(0,20,by=1/(n+1)) #large set of knots for construction of a sparse spline nn=length(xxi)-2 spspl=new('Splinets',knots=xxi,smorder=k) #generic object from the 'Splinets'-class spspl@der[[1]]=matrix(0,ncol=(k+1),nrow=(nn+2)) #starting with zeros everywhere spspl@der[[1]][1:(n+2),]=sym2one(spl@der[[1]]) #assigning local spline to a sparse spline at spspl@der[[1]][nn+3-(1:(n+2)),]=spspl@der[[1]][(n+2):1,] #the beginning and the same at the end spspl@der[[1]]=sym2one(spspl@der[[1]],inv=TRUE) #at this point the object does not account for the sparsity is.splinets(spspl) #a sparse spline on 421 knots with a non-zero terms at the first 22 #and at the last 22 knots, the actual support set is not yet reported plot(spspl) plot(spspl,xlim=c(0,1)) #the local part of the sparse spline exsupp(spspl@der[[1]]) #the actual support of the spline given the sparse derivative matrix #Expanding the previous spline by building a slightly more complex support set spspl@der[[1]][(n+1)+(1:(n+2)),]=sym2one(spl@der[[1]]) #double the first component of the #support because these are tangent supports spspl@der[[1]][(2*n+3)+(1:(n+2)),]=sym2one(spl@der[[1]]) #tdetect a single component of #the support with no internal knots removed is.splinets(spspl) plot(spspl) es=exsupp(spspl@der[[1]]) es[[2]] #the new support made of three components with the two first ones #separated by an interval with no knots in it spspl@der[[1]]=es[[1]] #defining the spline on the evaluated actual support spspl@supp[[1]]=es[[2]] #Example with reduction of not a full support. xi1=seq(0,14/(n+1),by=1/(n+1)); n1=13; #the odd number of equally spaced knots S1=matrix(rnorm((n1+2)*(k+1)),ncol=(k+1)) spl1=construct(xi1,k,S1) #construction of a local spline xi2=seq(16/(n+1),42/(n+1),by=1/(n+1)); n2=25; #the odd number of equally spaced knots S2=matrix(rnorm((n2+2)*(k+1)),ncol=(k+1)) spl2=construct(xi2,k,S2) #construction of a local spline spspl@der[[1]][1:15,]=sym2one(spl1@der[[1]]) spspl@der[[1]][16,]=rep(0,k+1) spspl@der[[1]][17:43,]=sym2one(spl2@der[[1]]) spspl@der[[1]][1:43,]=sym2one(spspl@der[[1]][1:43,],inv=TRUE) is.splinets(spspl) #three intervals in the support are repported exsupp(spspl@der[[1]],spspl@supp[[1]]) gather Combining two Splinets objects Description The function returns the Splinets-object that gathers two input Splinets-objects together. The input objects have to be of the same order and over the same knots. Usage gather(Sp1, Sp2) Arguments Sp1 Splinets object; Sp2 Splinets object; Value Splinets object, contains grouped splines from the input objects; References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. 
<NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also is.splinets for diagnostic of the Splinets-objects; construct for constructing such an object; subsample for subsampling Splinets-objects; plot,Splinets-method for plotting Splinets-objects; Examples #-------------------------------------------------------------# #-----------------Grouping into a 'Splinets' object-----------# #-------------------------------------------------------------# #Gathering three 'Splinets' objects using three different #methods to correct the derivative matrix set.seed(5) n=13;xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1; k=4 S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) #constructing the first correct spline spl=gather(spl,construct(xi,k,S,mthd='CRFC')) #the second and the first ones spl=gather(spl,construct(xi,k,S,mthd='CRLC')) #the third is added is.splinets(spl)[[1]] #diagnostic spl@supp #the entire range for the support #Example with different support ranges, the 3rd order set.seed(5) n=25; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1; k=3 supp=list(t(c(2,12)),t(c(4,20)),t(c(6,25))) #support ranges for three splines #Initial random matrices of the derivative for each spline SS1=matrix(rnorm((supp[[1]][1,2]-supp[[1]][1,1]+1)*(k+1)),ncol=(k+1)) SS2=matrix(rnorm((supp[[2]][1,2]-supp[[2]][1,1]+1)*(k+1)),ncol=(k+1)) SS3=matrix(rnorm((supp[[3]][1,2]-supp[[3]][1,1]+1)*(k+1)),ncol=(k+1)) spl=construct(xi,k,SS1,supp[[1]]) #constructing the first correct spline nspl=construct(xi,k,SS2,supp[[2]]) spl=gather(spl,nspl) #the second and the first ones nspl=construct(xi,k,SS3,supp[[3]]) spl=gather(spl,nspl) #the third is added is.splinets(spl)[[1]] spl@supp spl@der gramian Gramian matrix, norms, and inner products of splines Description The function performs evaluation of the matrix of the inner products \int S(t) · T(t) dt of all the pairs of splines S, T from the input object. The program utilizes the Taylor expansion of splines, see the reference for details. Usage gramian(Sp, norm_only = FALSE, sID = NULL, Sp2 = NULL, s2ID = NULL) Arguments Sp Splinets object; norm_only logical, indicates if only the square norm of the elements in the input object is calculated; The default is norm_only=FALSE; sID vector of integers, the indices specifying splines in the Splinets list Sp to be evaluated; If sID=NULL (default), then the inner products for all the pairs taken from the object are evaluated. Sp2 Splinets object, the optional second Splinets-object; The inner products between splines in Sp and in Sp2 are evaluated, i.e. the cross-gramian matrix. s2ID vector of integers, the indices specifying splines in Sp2 to be considered in the cross-gramian; Details If there is only one input Splinets-object, then the non-negative definite, symmetric matrix of the inner products of the splines in this object is returned. If there are two input Splinets-objects, then the m × r matrix of the cross-inner products is returned, where m is the number of splines in the first object and r is their number in the second one. If only the norms are evaluated (norm_only=TRUE), it is always the norms of the first object that are evaluated. In the case of two input Splinets-objects, they should be over the same set of knots and of the same smoothness order.
Value • norm_only=FALSE – the Gram matrix of inner products of the splines within the input Splinets-object is returned, • Sp2 = NULL – the non-negative definite matrix of the inner products of splines in Sp is returned, • both Sp and Sp2 are non-NULL and contain splines S_i and T_j, respectively – the cross-gramian matrix of the inner products for the pairs of splines (S_i, T_j) is returned, • norm_only=TRUE – the vector of the norms of Sp is returned. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also lincomb for evaluation of a linear combination of splines; project for projections to the spaces of Splines; Examples #---------------------------------------# #---- Simple three splines example -----# #---------------------------------------# n=25; k=3 xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 #Defining support ranges for three splines supp=matrix(c(2,12,4,20,6,25),byrow=TRUE,ncol=2) #Initial random matrices of the derivative for each spline SS1=matrix(rnorm((supp[1,2]-supp[1,1]+1)*(k+1)),ncol=(k+1)) SS2=matrix(rnorm((supp[2,2]-supp[2,1]+1)*(k+1)),ncol=(k+1)) SS3=matrix(rnorm((supp[3,2]-supp[3,1]+1)*(k+1)),ncol=(k+1)) spl=construct(xi,k,SS1,supp[1,]) #constructing the first correct spline nspl=construct(xi,k,SS2,supp[2,]) spl=gather(spl,nspl) #the second and the first ones nspl=construct(xi,k,SS3,supp[3,]) spl=gather(spl,nspl) #the third is added plot(spl) gramian(spl) gramian(spl, norm_only = TRUE) gramian(spl, sID = c(1,3)) gramian(spl,sID=c(2,3),Sp2=spl,s2ID=c(1)) #the cross-Gramian matrix #-----------------------------------------# #--- Example with varying support sets ---# #-----------------------------------------# n=40; xi=seq(0,1,by=1/(n+1)); k=2; support=list(matrix(c(2,9,15,24,30,37),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp1 = is.splinets(sp)[[2]] #the correction of 'der' matrices support=list(matrix(c(5,12,17,29),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp2 = is.splinets(sp)[[2]] spp = gather(sp1,sp2) support=list(matrix(c(3,10,14,21,27,34),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp3 = is.splinets(sp)[[2]] spp = gather(spp, sp3) plot(spp) gramian(spp) #the regular gramian matrix spp2=subsample(spp,sample(1:3,size=3,rep=TRUE)) gramian(Sp=spp,Sp2=spp2) #cross-Gramian matrix #-----------------------------------------# #--------- Gramian for B-splines ---------# #-----------------------------------------# n=25; xi=seq(0,1,by=1/(n+1)); k=2; Sp=splinet(xi) #B-splines and corresponding splinet gramian(Sp$bs) #band gramian matrix for B-splines gramian(Sp$os) #diagonal gramian matrix for the splinet
A=gramian(Sp=Sp$bs,Sp2=Sp$os) #cross-Gramian matrix, the coefficients of #the decomposition of the B-splines plot(Sp$bs) plot(lincomb(Sp$os,A)) integra Indefinite integrals of splines Description The function generates the indefinite integrals for given input splines. The integral is a function of the upper limit of the definite integral and is a spline of the higher order that does not satisfy the zero boundary conditions at the RHS endpoint, unless the definite integral over the whole range is equal to zero. Moreover, the support of the function is extended in the RHS up to the RHS end point unless the definite integral of the input is zero, in which the case the support is extracted from the obtained spline. Usage integra(object, epsilon = 1e-07) Arguments object a Splinets object of the smoothness order k; epsilon non-negative number indicating accuracy when close to zero value are detected; This accuracy is used in when the boundary conditions of the integral are checked. Details The value on the RHS is not zero, so the zero boundary condition typically is not satisfied and the support is is extended to the RHS end of the whole domain of splines. However, the function returns proper support if the original spline is a derivative of a spline that satisfies the boundary conditons. Value A Splinets-object with order k+1 that contains the indefinite integrals of the input object. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also deriva for computing derivatives of splines; dintegra for the definite integral; Examples #------------------------------------# #--- Generate indefinite integral ---# #------------------------------------# n=18; k=3; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 # generate a random matrix S set.seed(5) S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) #constructing a spline plot(spl) dspl = deriva(spl) #derivative plot(dspl) is.splinets(dspl) dintegra(dspl) #the definite integral is 0 (the boundary conditions for 'spl') ispl = integra(spl) #the integral of a spline plot(ispl) #the boundary condition on the rhs not satisfied (non-zero value) ispl@smorder is.splinets(ispl) #the object does not satisfy the boundary condition for the spline spll = integra(dspl) plot(spll) is.splinets(spll) #the boundary conditions of the integral of the derivative satisfied. 
#----------------------------------------------# #--- Examples with different support ranges ---# #----------------------------------------------# n=25; k=2; set.seed(5) xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 #Defining support ranges for three splines supp=matrix(c(2,12,4,20,6,25),byrow=TRUE,ncol=2) #Initial random matrices of the derivative for each spline SS1=matrix(rnorm((supp[1,2]-supp[1,1]+1)*(k+1)),ncol=(k+1)) SS2=matrix(rnorm((supp[2,2]-supp[2,1]+1)*(k+1)),ncol=(k+1)) SS3=matrix(rnorm((supp[3,2]-supp[3,1]+1)*(k+1)),ncol=(k+1)) spl=construct(xi,k,SS1,supp[1,]) #constructing the first proper spline nspl=construct(xi,k,SS2,supp[2,],'CRFC') spl=gather(spl,nspl) #the second and the first together nspl=construct(xi,k,SS3,supp[3,],'CRLC') spl=gather(spl,nspl) #the third is added plot(spl) spl@supp dspl = deriva(spl) #derivative of the splines plot(dspl) dintegra(dspl) #the definite integral over the entire range of knots is zero idspl = integra(dspl) #integral of the derivative returns the original splines plot(idspl) is.splinets(idspl) #and confirms that the object is a spline with boundary conditions #satified idspl@supp #Since integral is taken over a function that integrates to zero over spl@supp #each of the support interval, the support of all three objects are the same. dspl@supp ispl=integra(spl) plot(ispl) #the zero boundary condition at the RHS-end for the splines are not satisfied. is.splinets(ispl) #thus the object is reported as a non-spline plot(deriva(ispl)) displ=deriva(ispl) displ@supp #Comparison of the supports spl@supp #Here the integrals have extended support as it is taken from a function ispl@supp #that does not integrate to zero. #---------------------------------------# #---Example with complicated supports---# #---------------------------------------# n=40; xi=seq(0,1,by=1/(n+1)); k=3; support=list(matrix(c(2,12,15,27,30,40),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp1 = is.splinets(sp)[[2]] #Comparison of the corrected and the original 'der' matrices support=list(matrix(c(2,13,17,30),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp2 = is.splinets(sp)[[2]] sp = gather(sp1,sp2) #a group of two splines plot(sp) dsp = deriva(sp) #derivative plot(dsp) spl = integra(dsp) plot(spl) #the spline retrieved spl@supp #the supports are retrieved as well sp@supp is.splinets(spl) #the proper splinet object that satisfies the boundaries ispl = integra(sp) plot(ispl) ispl@supp #full support shown by empty list in SLOT 'supp' is.splinets(ispl) #diagnostic confirms no zeros at the boundaries spll = deriva(ispl) plot(spll) spll@supp is.splinets Diagnostics of splines and their generic correction Description The method performs verification of the properties of SLOTS of an object belonging to the Splinets– class. In the case when all the properties are satisfied the logical TRUE is returned. Otherwise, FALSE is returned together with suggested corrections. Usage is.splinets(object) Arguments object Splinets object, the object to be diagnosed; For this object to be corrected properly each support interval has to have at least 2*smorder+4 knots. 
Value A list made of: a logical value is, a Splinets object robject, and a numeric value Er. • The logical value is indicates if all the condtions for the elements of Splinets object to be a collection of valid splines are satisfied, additional diagnostic messages are printed out. • The object robject is a modified input object that has all SLOT fields modified so the condi- tions/restrictions to be a proper spline are satisfied. • The numeric value Er is giving the total squared error of deviation of the input matrix of derivative from the conditions required for a spline. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also Splinets-class for the definition of the Splinets-class; construct for constructing such an object from the class; gather and subsample for combining and subsampling Splinets-objects, respectively; plot,Splinets-method for plotting Splinets objects; Examples #-----------------------------------------------------# #--------Diagnostics of simple Splinets objects-------# #-----------------------------------------------------# #----------Full support equidistant cases-------------# #-----------------------------------------------------# #Zero order splines, equidistant case, full support n=20; xi=seq(0,1,by=1/(n+1)) sp=new("Splinets",knots=xi) sp@equid #equidistance flag #Diagnostic of 'Splinets' object 'sp' is.splinets(sp) IS=is.splinets(sp) IS[[1]] #informs if the object is a spline IS$is #equivalent to the above #Third order splines with a noisy matrix of the derivative set.seed(5) k=3; sp@smorder=k; sp@der[[1]]=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) IS=is.splinets(sp) IS[[2]]@taylor #corrections sp@taylor IS[[2]]@der #corrections sp@der is.splinets(IS[[2]]) #The output object is a valid splinet #-----------------------------------------------------# #--------Full support non-equidistant cases-----------# #-----------------------------------------------------# #Zero order splines, non-equidistant case, full support set.seed(5) n=17; xi=sort(runif(n+2)) xi[1]=0 ;xi[n+1]=1 #The last knot is not in the order. #It will be reported and corrected in the output. sp=new("Splinets",knots=xi) xi #original knots sp@knots #vs. corrected ones sp@taylor #Diagnostic of 'Splinets' object 'sp' is.splinets(sp) IS=is.splinets(sp) nsp=IS$robject #the output spline -- a corrected version of the input nsp@der sp@der #Third order splines nsp@smorder=3 IS=is.splinets(nsp) IS[[2]]@taylor #corrections nsp@taylor IS[[2]]@der #corrections nsp@der is.splinets(IS[[2]]) #verification that the correction is a valid object #Randomly assigning the derivative -- a very 'unstable' function. set.seed(5) k=nsp@smorder; S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)); nsp@der[[1]]=S IS=is.splinets(nsp) #the 2nd element of 'IS' is a spline obtained by correcting 'S' nsp=is.splinets(IS[[2]]) nsp$is #The 'Splinets' object is correct, alternatively use 'nsp$[[1]]'. nsp$robject #A correct spline object, alternatively use 'nsp$[[2]]'. 
#-----------------------------------------------------# #-----Splinets objects with varying support sets------# #-----------------------------------------------------# #------------------Equidistant cases------------------# #-----------------------------------------------------# #Zero order splines, equidistant case, support with three components n=20; xi=seq(0,1,by=1/(n+1)) support=list(matrix(c(2,5,6,8,12,18),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,supp=support) is.splinets(sp) IS=is.splinets(sp) sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support dim(IS[[2]]@der[[1]])[1] #the number of rows in the derivative matrix IS[[2]]@der[[1]] #the corrected object sp@der #the input derivative matrix #Third order splines n=40; xi=seq(0,1,by=1/(n+1)); k=3; support=list(matrix(c(2,12,15,27,30,40),ncol=2,byrow = TRUE)) m=sum(support[[1]][,2]-support[[1]][,1]+1) #the number of knots in the support SS=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp=new("Splinets",knots=xi,smorder=k,supp=support,der=SS) IS=is.splinets(sp) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random IS=is.splinets(sp) #Comparison of the corrected and the original 'der' matrices sp@der IS[[2]]@der is.splinets(IS[[2]]) #verification #-----------------------------------------------------# #----------------Non-equidistant cases----------------# #-----------------------------------------------------# #Zero order splines, non-equidistant case, support with three components set.seed(5) n=43; xi=seq(0,1,by=1/(n+1)); k=3; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1; support=list(matrix(c(2,14,17,30,32,43),ncol=2,byrow = TRUE)) ssp=new("Splinets",knots=xi,supp=support) #with partial support nssp=is.splinets(ssp)$robject nssp@supp nssp@der #Third order splines nssp@smorder=3 #changing the order of the 'Splinets' object set.seed(5) m=sum(nssp@supp[[1]][,2]-nssp@supp[[1]][,1]+1) #the number of knots in the support nssp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random IS=is.splinets(nssp) IS$robject@der is.splinets(IS$robject)$is #verification of the corrected output object is.splinets,Splinets-method Diagnostics of splines Description This short information is added to satisfy an R-package building requirement, see is.splinets for the actual information. Usage ## S4 method for signature 'Splinets' is.splinets(object) Arguments object Splinets object, the object to be diagnosed; lincomb Linear transformation of splines. Description A linear combination of the splines S_j in the input object is computed according to R_i = \sum_{j=0}^{d} a_{ij} S_j, i = 1, . . . , l, and returned as a Splinets-object. Usage lincomb(object, A, reduced = TRUE, SuppExtr = TRUE) Arguments object Splinets object containing d splines; A l x d matrix; coefficients of the linear transformation, reduced logical; If TRUE (default), then the linear combination is calculated accounting for the actual support sets (recommended for sparse splines), if FALSE, then the full support computations are used (can be faster for lower dimension or non-sparse cases). SuppExtr logical; If TRUE (default), the true support is extracted, otherwise, the full range is reported as the support. Applies only to the case when reduced=FALSE. Value A Splinets-object that contains l splines obtained as linear combinations using the coefficients in the rows of A. The SLOT type of the output splinet objects is sp.
References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also exsupp for extracting the correct support; construct for building a valid spline; rspline for ran- dom generation of splines; Examples #-------------------------------------------------------------# #------------Simple linear operations on Splinets-------------# #-------------------------------------------------------------# #Gathering three 'Splinets' objects n=53; k=4; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1;Nspl=10 set.seed(5) S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) #constructing the first proper spline spl@epsilon=1.0e-5 #to avoid FALSE in the next function due to inaccuracies is.splinets(spl) RS=rspline(spl,Nspl) #Random splines plot(RS) A = matrix(rnorm(5*Nspl, mean = 2, sd = 100), ncol = Nspl) new_sp1 = lincomb(RS, A) plot(new_sp1) new_sp2 = lincomb(RS, A, reduced = FALSE) plot(new_sp2) #---------------------------------------------# #--- Example with different support ranges ---# #---------------------------------------------# n=25; k=3 set.seed(5) xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 #Defining support ranges for three splines supp=matrix(c(2,12,4,20,6,25),byrow=TRUE,ncol=2) #Initial random matrices of the derivative for each spline SS1=matrix(rnorm((supp[1,2]-supp[1,1]+1)*(k+1)),ncol=(k+1)) SS2=matrix(rnorm((supp[2,2]-supp[2,1]+1)*(k+1)),ncol=(k+1)) SS3=matrix(rnorm((supp[3,2]-supp[3,1]+1)*(k+1)),ncol=(k+1)) spl=construct(xi,k,SS1,supp[1,]) #constructing the first correct spline nspl=construct(xi,k,SS2,supp[2,]) spl=gather(spl,nspl) #the second and the first ones nspl=construct(xi,k,SS3,supp[3,]) spl=gather(spl,nspl) #the third is added A = matrix(rnorm(3*2, mean = 2, sd = 100), ncol = 3) new_sp1 = lincomb(spl, A) # based on reduced supports plot(new_sp1) new_sp2 = lincomb(spl, A, reduced = FALSE) # based on full support plot(new_sp2) # new_sp1 and new_sp2 are same #-----------------------------------------# #--- Example with varying support sets ---# #-----------------------------------------# n=40; xi=seq(0,1,by=1/(n+1)); k=2; support=list(matrix(c(2,9,15,24,30,37),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp1 = is.splinets(sp)[[2]] #the corrected vs. 
the original 'der' matrices support=list(matrix(c(5,12,17,29),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp2 = is.splinets(sp)[[2]] #building a valid spline spp = gather(sp1,sp2) support=list(matrix(c(3,10,14,21,27,34),ncol=2,byrow = TRUE)) sp=new("Splinets",knots=xi,smorder=k,supp=support) m=sum(sp@supp[[1]][,2]-sp@supp[[1]][,1]+1) #the number of knots in the support sp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random sp3 = is.splinets(sp)[[2]] #building a valid spline spp = gather(spp, sp3) plot(spp) spp@supp #the supports set.seed(5) A = matrix(rnorm(3*4, mean = 2, sd = 100), ncol = 3) new_sp1 = lincomb(spp, A) # based on reduced supports plot(new_sp1) new_sp1@supp #the support of the output from 'lincomb' new_sp2 = lincomb(spp, A, reduced = FALSE) # based on full support plot(new_sp2) # new_sp1 and new_sp2 are same new_sp2@supp #the support of the output from 'lincomb' with full support computations #-------------------------------------# #--- Support needs some extra care ---# #-------------------------------------# set.seed(5) n=53; k=4; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 supp1 = matrix(c(1, ceiling(n/2)+1), ncol = 2) supp2 = matrix(c(ceiling(n/2)+1, n+2), ncol = 2) S = matrix(rnorm(5*(ceiling(n/2)+1)), ncol = k+1) a = construct(xi,k,S,supp = supp1) #constructing the first proper spline S = matrix(rnorm(5*(ceiling(n/2)+1)), ncol = k+1) b = construct(xi,k,S,supp = supp2) #constructing the first proper spline sp = gather(a,b) plot(sp) # create a+b and a-b s = lincomb(sp, matrix(c(1,1,1,-1), byrow = TRUE, nrow = 2)) plot(s) s@supp # Sum has smaller support than its terms s1 = lincomb(s, matrix(c(1,1), nrow = 1), reduced = TRUE) plot(s1) s1@supp # lincomb based on support, the full support is reported s2 = lincomb(s, matrix(c(1,1), nrow = 1), reduced = FALSE) plot(s2) s2@supp # lincomb using full der matrix s3=lincomb(s, matrix(c(1,1), nrow = 1), reduced = FALSE, SuppExtr=FALSE) s3@supp #the full range is reported as support ES=exsupp(s1@der[[1]]) #correcting the matrix and the support s1@der[[1]]=ES[[1]] s1@supp[[1]]=ES[[2]] plot(s1) s1@supp[[1]] lines,Splinets-method Adding graphs of splines to a plot Description A standard method of adding splines to an existing plot. Usage ## S4 method for signature 'Splinets' lines(x, sID = NULL, ...) Arguments x Splinets object; sID vector, specifying indices of splines in the splinet object to be plotted; ... other standard graphical parameters; References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." 
<arXiv:2302.07552>

See Also

plot,Splinets-method for graphical visualization of splines; evspline for evaluation of a
Splinets-object;

Examples

#-----------------------------------------------------#
#------Adding spline lines to an existing graph-------#
#-----------------------------------------------------#
n=17; k=4; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1
set.seed(5)
S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1))
spl=construct(xi,k,S)
plot(spl,main="Mean Spline",lty=2,lwd=2)
RS=rspline(spl,5)
plot(RS,main="Random splines around the mean spline")
lines(spl,col='red',lwd=4,lty=2)

plot,Splinets-method    Plotting splines

Description

The method provides graphical visualization of a Splinets-class object. The splines are plotted in
Cartesian coordinates if they are regular splines and in polar coordinates if they are periodic
splines.

Usage

## S4 method for signature 'Splinets'
plot(
  object,
  x = NULL,
  sID = NULL,
  vknots = TRUE,
  type = "stnd",
  mrgn = 2,
  lwd = 2,
  ...
)

Arguments

object    Splinets object;
x         vector, specifying where the splines will be evaluated for the plots;
sID       vector, specifying indices of the splines to be plotted;
vknots    logical, indicates if auxiliary vertical lines will be added to highlight the positions of
          knots; The default is TRUE.
type      string, controls the layout of graphs; The following options are available
          • "stnd" – if object@type="dspnt" or ="spnt", then the plots are over the dyadic net of
            supports, other types of the bases are on a single plot with information about the
            basis printed out,
          • "simple" – all the objects are plotted in a single plot,
          • "dyadic" – if object@type="sp" is not true (unstructured collection of splines), then
            the plot is over the dyadic net of supports.
mrgn      number, specifying the margin size in the dyadic structure plot;
lwd       positive integer, the line width;
...       other standard graphical parameters can be passed;

Details

The standard method of plotting splines in a Splinets-object. It plots a single graph with all
splines in the object, except if the field type of the object represents a splinet. In the latter
case, the default is the (dyadic) net plot of the basis. The string argument type can override this
to produce a plot that does not use the dyadic net. Most of the standard graphical parameters can
be passed to this function.

Value

A plot visualizing a Splinets object. The entire set of splines is displayed in the plot.

References

<NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient
B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022)
<https://doi.org/10.1016/j.cam.2022.114444>.
<NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal
bases." <arXiv:2102.00733>.
<NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets."
<arXiv:2302.07552> See Also evspline for manually evaluating splines in a Splinet-object; Splinets-class for the definition of the Splinet-class; lines,Splinets-method for adding graphs to existing plots; Examples #-----------------------------------------------------# #-------------------Ploting splinets------------------# #-----------------------------------------------------# #Constructed splines n=25; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1; k=3 supp=list(t(c(2,12)),t(c(4,20)),t(c(6,25))) #defining support ranges for three splines #Initial random matrices of the derivative for each spline SS1=matrix(rnorm((supp[[1]][1,2]-supp[[1]][1,1]+1)*(k+1)),ncol=(k+1)) SS2=matrix(rnorm((supp[[2]][1,2]-supp[[2]][1,1]+1)*(k+1)),ncol=(k+1)) SS3=matrix(rnorm((supp[[3]][1,2]-supp[[3]][1,1]+1)*(k+1)),ncol=(k+1)) spl=construct(xi,k,SS1,supp[[1]]) #constructing the first correct spline nspl=construct(xi,k,SS2,supp[[2]],'CRFC') spl=gather(spl,nspl) #the second and the first ones nspl=construct(xi,k,SS3,supp[[3]],'CRLC') spl=gather(spl,nspl) #the third is added plot(spl) plot(spl,sID=c(1,3)) plot(spl,sID=2) t = seq(0,0.5,length.out = 1000) plot(spl, t, sID = 1) #Random splines n=17; k=4; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) plot(spl,main="Mean Spline",lty=2,lwd=2,xlab='') RS=rspline(spl,5) plot(RS,main="Random splines around the mean spline",ylim=3*range(spl@der[[1]][,1]) ) lines(spl,col='red',lwd=4,lty=2) #Periodic splines xi = seq(0, 1, length.out = 25) so = splinet(xi, periodic = TRUE) plot(so$bs) plot(so$os) plot(so$bs,type= "dyadic") plot(so$bs, sID=c(4,6)) plot(so$os, type="simple",sID=c(4,6)) project Projecting into spline spaces Description The projection of splines or functional data into the linear spline space spanned over a given set of knots. Usage project( fdsp, knots = NULL, smorder = 3, periodic = FALSE, basis = NULL, type = "spnt", graph = FALSE ) Arguments fdsp Splinets-object or a n x (N+1) matrix, a representation of N functions to be pro- jected to the space spanned by a Splinets-basis over a specific set of knots; If the parameter is a Splinets-object containing N splines, then it is orthogonally projected or represented in the basis that is specified by other parameters. If the paramater is a matrix, then it is treated as N piecewise constant functions with the arguments in the first column and the corresponding values of the functions in the remaining N columns. knots vector, the knots of the projection space, together with smorder fully character- izes the projection space; This parameter is overridden by the SLOT basis@knots of the basis input if this one is not NULL. smorder integer, the order of smoothness of the projection space; This parameter is over- ridden by the SLOT basis@smorder of the basis input if this one is not NULL. periodic logical, a flag to indicate if B-splines will be of periodic type or not; In the case of periodic splines, the arguments of the input and the knots need to be within [0,1] or, otherwise, an error occurs and a message advising the recentering and rescaling data is shown. 
basis     Splinets-object, the basis used for the representation of the projection of the input
          fdsp;
type      string, the choice of the basis in the projection space, used only if the basis parameter
          is not given; The following choices are available
          • 'bs' for the unorthogonalized B-splines,
          • 'spnt' for the orthogonal splinet (the default),
          • 'gsob' for the Gram-Schmidt (one-sided) OB-splines,
          • 'twob' for the two-sided OB-splines.
          The default is 'spnt'.
graph     logical, indicator if the illustrative plots are to be produced:
          • the splinet used in the projection(s) on the dyadic grid,
          • the coefficients of the projection(s) on the dyadic grid,
          • the input function(s),
          • the projection(s).

Details

The obtained coefficients A = (a_ji) with respect to the basis allow to evaluate the splines S_j in
the projection according to

    S_j = \sum_{i=1}^{n-k-1} a_{ji} OB_i,   j = 1, ..., N,

where n is the number of the knots (including the endpoints), k is the spline smoothness order, N
is the number of the projected functions, and the OB_i's constitute the considered basis. The
coefficients for the splinet basis are always evaluated and thus, for example,
PFD=project(FD,knots); ProjDataSplines=lincomb(PFD$coeff,PFD$basis) creates a Splinets-object
made of the projections of the input functional data in FD. If the input parameter basis is given,
then the function utilizes this basis and does not need to build it. However, if basis is the
B-spline basis, then the B-spline orthogonalization is performed anyway, thus the computational
gain is smaller than in the case when basis is an orthogonal basis.

Value

The value of the function is a list made of four elements
• project$input – fdsp, when the input is a Splinets-object or a matrix with the first column in an
  increasing order, otherwise it is the input numeric matrix after ordering according to the first
  column,
• project$coeff – N x (n-k+1) matrix of the coefficients of representation of the projection of the
  input in the splinet basis,
• project$basis – the spline basis,
• project$sp – the Splinets-object containing the projected splines.

References

<NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient
B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022)
<https://doi.org/10.1016/j.cam.2022.114444>.
<NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal
bases." <arXiv:2102.00733>.
<NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets."
<arXiv:2302.07552> See Also refine for embeding a Splinets-object into the space of splines with an extended set of knots; lincomb for evaluation of a linear combination of splines; splinet for obtaining the spline bases given the set of knots and the smootheness order; Examples #-------------------------------------------------# #----Representing splines in the spline bases-----# #-------------------------------------------------# k=3 # order n = 10 # number of the internal knots (excluding the endpoints) xi = seq(0, 1, length.out = n+2) set.seed(5) S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) plot(spl) # plotting a spline spls=rspline(spl,5) # a random sample of splines Repr=project(spl) #decomposition of splines into the splinet coefficients Repr=project(spl, graph = TRUE) #decomposition of splines with the following graphs #that illustrate the decomposition: # 1) The orthogonal spine basis on the dyadic grid; # 2) The coefficients of the projections on the dyadic grid; # 3) The input splines; # 4) The projections of the input. Repr$coeff #the coefficients of the decomposition plot(Repr$sp) #plot of the reconstruction of the spline plot(spls) Reprs=project(spls,basis = Repr$basis) #decomposing splines using the available basis plot(Reprs$sp) Reprs2=project(spls,type = 'gsob') #using the Gram-Schmidt basis #The case of the regular non-normalized B-splines: Reprs3=project(spls,type = 'bs') plot(Reprs3$basis) gramian(Reprs3$basis,norm_only = TRUE) #the B-splines follow the classical definition and #thus are not normalized plot(spls) plot(Reprs3$basis) #Bsplines plot(Reprs3$sp) #reconstruction using the B-splines and the decomposition #a non-equidistant example n=10; k=3 set.seed(5) xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) plot(spl) spls=rspline(spl,5) # a random sample of splines plot(spls) Reprs=project(spls,type = 'twob') #decomposing using the two-sided orthogonalization plot(Reprs$basis) plot(Reprs$sp) #The case of the regular non-normalized B-splines: Reprs2=project(spls,basis=Reprs$basis) plot(Reprs2$sp) #reconstruction using the B-splines and the decomposition #-------------------------------------------------# #-----Projecting splines into a spline space------# #-------------------------------------------------# k=3 # order n = 10 # number of the internal knots (excluding the endpoints) xi = seq(0, 1, length.out = n+2) set.seed(5) S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) plot(spl) #the spline knots=runif(8) Prspl=project(spl,knots) plot(Prspl$sp) #the projection spline Rspl=refine(spl,newknots = knots) #embedding the spline to the common space plot(Rspl) RPspl=refine(Prspl$sp,newknots = xi) #embedding the projection spline to the common space plot(RPspl) All=gather(RPspl,Rspl) #creating the Splinets-object with the spline and its projection Rbasis=refine(Prspl$basis,newknots = xi) #embedding the basis to the common space plot(Rbasis) Res=lincomb(All,matrix(c(1,-1),ncol=2)) plot(Res) gramian(Res,Sp2 = Rbasis) #the zero valued innerproducts -- the orthogonality of the residual spline spls=rspline(spl,5) # a random sample of splines Prspls=project(spls,knots,type='bs') #projection in the B-spline representation plot(spls) lines(Prspls$sp) #presenting projections on the common plot with the original splines Prspls$sp@knots Prspls$sp@supp plot(Prspls$basis) #Bspline basis #An example with partial support Bases=splinet(xi,k) BS_Two=subsample(Bases$bs,c(2,length(Bases$bs@der))) plot(BS_Two) 
A=matrix(c(1,-2),ncol=2) spl=lincomb(BS_Two,A) plot(spl) knots=runif(13) Prspl=project(spl,knots) plot(Prspl$sp) Prspl$sp@knots Prspl$sp@supp #Using explicit bases k=3 # order n = 10 # number of the internal knots (excluding the endpoints) xi = seq(0, 1, length.out = n+2) set.seed(5) S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) spls=rspline(spl,5) # a random sample of splines plot(spls) knots=runif(20) base=splinet(knots,smorder=k) plot(base$os) Prsps=project(spls,basis=base$os) plot(Prsps$sp) #projection splines vs. the original splines lines(spls) #------------------------------------------------------# #---Projecting discretized data into a spline space----# #------------------------------------------------------# k=3; n = 10; xi = seq(0, 1, length.out = n+2) set.seed(5) S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S); spls=rspline(spl,10) # a random sample of splines x=runif(50) FData=evspline(spls,x=x) #discrete functional data matplot(FData[,1],FData[,-1],pch='.',cex=3) #adding small noise to the data noise=matrix(rnorm(length(x)*10,0,sqrt(var(FData[,2]/10))),ncol=10) FData[,-1]=FData[,-1]+noise matplot(FData[,1],FData[,-1],pch='.',cex=3) knots=runif(12) DatProj=project(FData,knots) lines(DatProj$sp) #the projections at the top of the original noised data plot(DatProj$basis) #the splinet in the problem #Adding knots to the projection space so that all data points are included #in the range of the knots for the splinet basis knots=c(-0.1,0.0,0.1,0.85, 0.9, 1.1,knots) bases=splinet(knots) DatProj1=project(FData,basis = bases$os) matplot(FData[,1],FData[,-1],pch='.',cex=3) lines(DatProj1$sp) #Using the B-spline basis knots=xi bases=splinet(knots,type='bs') DatProj3=project(FData,basis = bases$bs) matplot(FData[,1],FData[,-1],pch='.',cex=3) lines(DatProj3$sp) DatProj4=project(FData,knots,k,type='bs') #this includes building the base of order 4 matplot(FData[,1],FData[,-1],pch='.',cex=3) lines(DatProj4$sp) lines(spls) #overlying the functions that the original data were built from #Using two-sided orthonormal basis DatProj5=project(FData,knots,type='twob') matplot(FData[,1],FData[,-1],pch='.',cex=3) lines(DatProj5$sp) lines(spls) #--------------------------------------------------# #-----Projecting into a periodic spline space------# #--------------------------------------------------# #generating periodic splines n=1# number of samples k=3 N=3 n_knots=2^N*k-1 #the number of internal knots for the dyadic case xi = seq(0, 1, length.out = n_knots+2) so = splinet(xi,smorder = k, periodic = TRUE) #The splinet basis stwo = splinet(xi,smorder = k,type='twob', periodic = TRUE) #The two-sided orthogonal basis plot(so$bs,type='dyadic',main='B-Splines on dyadic structure') #B-splines on the dyadic graph plot(stwo$os,main='Symmetric OB-Splines') #The two-sided orthogonal basis plot(stwo$os,type='dyadic',main='Symmetric OB-Splines on dyadic structure') # generating a periodic spline as a linear combination of the periodic splines A1= as.matrix(c(1,0,0,0.7,0,0,0,0.8,0,0,0,0.4,0,0,0, 1, 0,0,0,0,0,1,0, .5),nrow= 1) circular_spline=lincomb(so$os,t(A1)) plot(circular_spline) #Graphical visualizations of the projections Pro_spline=project(circular_spline,basis = so$os,graph = TRUE) plot(Pro_spline$sp) #---------------------------------------------------------------# #---Projecting discretized data into a periodic spline space----# #---------------------------------------------------------------# nx=100 # number of discritization n=1# number of samples k=3 N=3 n_knots=2^N*k-1 
#the number of internal knots for the dyadic case xi = seq(0, 1, length.out = n_knots+2) so = splinet(xi,smorder = k, periodic = TRUE) hf=1/nx grid=seq (hf , 1, by=hf) #grid l=length(grid) BB = evspline(so$os, x =grid) fbases=matrix(c(BB[,2],BB[,5],BB[,9],BB[,13],BB[,17], BB[,23], BB[,25]), nrow = nx) #constructing periodic data f_circular=matrix(0,ncol=n+1,nrow=nx) lambda=c(1,0.7,0.8,0.4, 1,1,.5) f_circular[,1]= BB[,1] f_circular[,2]= fbases%*%lambda plot(f_circular[,1], f_circular[,2], type='l') Pro=project(f_circular,basis = so$os) plot(Pro$sp) refine Refining splines through adding knots Description Any spline of a given order remains a spline of the same order if one considers it on a bigger set of knots than the original one. However, this embedding changes the Splinets representation of the so-refined spline. The function evaluates the corresponding Splinets-object. Usage refine(object, mult = 2, newknots = NULL) Arguments object Splinets-object, the object to be represented as a Splinets-object over a re- fined set of knots; mult positive integer, refining rate; The number of the knots to be put equally spaced between the existing knots. newknots m vector, new knots; The knots do not need to be ordered and knots from the input Splinets-object knots are allowed since any ties are resolved. Details The function merges new knots with the ones from the input object. It utilizes deriva()-function to evaluate the derivative at the refined knots. It removes duplications of the refined knots, and account also for the not-fully supported case. In the case when the range of the additional knots extends beyond the knots of the input Splinets-object, the support sets of the output Splinets- object account for the smaller than the full support. Value A Splinet object with the new refined knots and the new matrix of derivatives is evaluated at the new knots combined with the original ones. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." 
<arXiv:2302.07552>

See Also

deriva for computing derivatives at selected points; project for an orthogonal projection into a
space of splines;

Examples

#-------------------------------------------------#
#----Refining splines - the full support case-----#
#-------------------------------------------------#
k=3 # order
n = 16 # number of the internal knots (excluding the endpoints)
xi = seq(0, 1, length.out = n+2)
set.seed(5)
S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1))
spl=construct(xi,k,S)
plot(spl) # plotting a spline
rspl=refine(spl) # refining the equidistant by doubling its knots
plot(rspl)
rspl@equid # the outcome is equidistant

#a non-equidistant case
n=17; k=4
xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1
S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1))
spl=construct(xi,k,S)
plot(spl)
mult=3 #adding two knots between each subsequent pair of the original knots
rspl=refine(spl,mult)
is.splinets(rspl)
plot(rspl)

#adding specific knots
rspl=refine(spl,newknots=c(0.5,0.75))
rspl@knots
is.splinets(rspl)
plot(rspl)

#----------------------------------------------------#
#----Refining splines - the partial support case-----#
#----------------------------------------------------#
Bases=splinet(xi,k)
plot(Bases$bs)
Base=Bases$bs
BS_Two=subsample(Bases$bs,c(1,length(Base@der)))
plot(BS_Two)
A=matrix(c(1,-1),ncol=2)
spl=lincomb(BS_Two,A)
rspl=refine(spl) #doubling the number of knots
plot(rspl)
is.splinets(rspl)
rspl@supp #the support is evaluated
spl@supp

#The case of adding knots explicitly
BS_Middle=subsample(Bases$bs,c(floor(length(Base@der)/2)))
spls=gather(spl,BS_Middle)
plot(spls)
rspls=refine(spls, newknots=c(0.2,0.5,0.85)) #two splines with partial support sets
                                             #by adding three knots to B-splines
plot(rspls)

#----------------------------------------------------#
#------Refining splines over the larger range--------#
#----------------------------------------------------#
k=4 # order
n = 25 # number of the internal knots (excluding the endpoints)
xi = seq(0, 1, length.out = n+2)
S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1))
spl=construct(xi,k,S)
plot(spl) # plotting a spline
newknots=c(-0.1,0.4,0.6,1.2) #the added knots create a larger range
rspl=refine(spl,newknots=newknots)
spl@supp #the original spline has the full support
rspl@supp #the embedded spline has partial support
spl@equid
rspl@equid
plot(rspl)

rspline                 Random splines

Description

The function simulates a random Splinets-object made of random splines centered at the input
spline, with a random error term added to the matrix of derivatives. The error term is built from
Z, an (n + 2) x (k + 1) matrix having iid standard normal variables as its entries, and the matrix
parameters Sigma and Theta. This matrix error term is then corrected by one of the methods so that
the result is a matrix of derivatives at the knots corresponding to a valid spline.

Usage

rspline(S, N = 1, Sigma = NULL, Theta = NULL, mthd = "RRM")

Arguments

S         Splinets-object with n+2 knots and of the order of smoothness k, representing the center
          of the randomly simulated splines; When the number of splines in the object is bigger
          than one, only the first spline in the object is used.
N         positive integer, size of the sample;
Sigma     matrix;
          • If a (n+2)x(n+2) matrix, it controls correlations between derivatives of the same order
            at different knots.
          • If a positive number, it represents a diagonal (n+2)x(n+2) matrix with this number on
            the diagonal.
          • If a n+2 vector, it represents a diagonal (n+2)x(n+2) matrix with the vector entries on
            the diagonal.
          • If NULL (default) represents the identity matrix.
Theta matrix; • If (k+1)x(k+1), this controls correlations between different derivatives at each knot. • If a positive number, it represents a diagonal matrix with this number on the diagonal. • If a k+1 vector, it represents a diagonal matrix with the vector entries on the diagonal. • If NULL (default), it represents the k+1 identity matrix; mthd string, one of the three methods: RCC, CR-LC, CR-FC, to adjust random error matrix so it corresponds to a valid spline; Value A Splinets-object that contains N generated splines constituting an iid sample of splines; References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also is.splinets for diagnostics of the Splinets-objects; construct for constructing a Splinets- object; gather for combining Splinets-objects into a bigger object; subsample for subsampling Splinets-objects; plot,Splinets-method for plotting Splinets-objects; Examples #-----------------------------------------------------# #-------Simulation of a standard random splinet-------# #-----------------------------------------------------# n=17; k=4; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1 S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) #Construction of the mean spline RS=rspline(spl) graphsp=evspline(RS) #Evaluating the random spline meansp=evspline(spl) RS=rspline(spl,5) #Five more samples graphsp5=evspline(RS) m=min(graphsp[,2],meansp[,2],graphsp5[,2:6]) M=max(graphsp[,2],meansp[,2],graphsp5[,2:6]) plot(graphsp,type='l',ylim=c(m,M)) lines(meansp,col='red',lwd=3,lty=2) #the mean spline for(i in 1:5){lines(graphsp5[,1],graphsp5[,i+1],col=i)} #-----------------------------------------------------# #------------Different construction method------------# #-----------------------------------------------------# RS=rspline(spl,8,mthd='CRLC'); graphsp8=evspline(RS) m=min(graphsp[,2],meansp[,2],graphsp8[,2:6]) M=max(graphsp[,2],meansp[,2],graphsp8[,2:6]) plot(meansp,col='red',type='l',lwd=3,lty=2,ylim=c(m,M)) #the mean spline for(i in 1:8){lines(graphsp8[,1],graphsp8[,i+1],col=i)} #-----------------------------------------------------# #-------Simulation of with different variances--------# #-----------------------------------------------------# Sigma=seq(0.1,1,n+2);Theta=seq(0.1,1,k+1) RS=rspline(spl,N=10,Sigma=Sigma) #Ten samples RS2=rspline(spl,N=10,Sigma=Sigma,Theta=Theta) #Ten samples graphsp10=evspline(RS); graphsp102=evspline(RS2) m=min(graphsp[,2],meansp[,2],graphsp10[,2:10]) M=max(graphsp[,2],meansp[,2],graphsp10[,2:10]) plot(meansp,type='l',ylim=c(m,M),col='red',lwd=3,lty=2) for(i in 1:10){lines(graphsp10[,1],graphsp10[,i+1],col=i)} m=min(graphsp[,2],meansp[,2],graphsp102[,2:10]) M=max(graphsp[,2],meansp[,2],graphsp102[,2:10]) plot(meansp,type='l',ylim=c(m,M),col='red',lwd=3,lty=2) for(i in 1:10){lines(graphsp102[,1],graphsp102[,i+1],col=i)} #-----------------------------------------------------# #-------Simulation for the mean spline to be----------# #------=----defined on incomplete supports------------# #-----------------------------------------------------# n=43; xi=seq(0,1,by=1/(n+1)); k=3; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1; 
support=list(matrix(c(2,14,25,43),ncol=2,byrow = TRUE)) ssp=new("Splinets",knots=xi,supp=support) #with partial support nssp=is.splinets(ssp)$robject nssp@smorder=3 #changing the order of the 'Splinets' object m=sum(nssp@supp[[1]][,2]-nssp@supp[[1]][,1]+1) #the number of knots in the support nssp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random spl=is.splinets(nssp)$robject RS=rspline(spl,Sigma=0.05,Theta=c(1,0.5,0.3,0.05)) graphsp=evspline(RS); meansp=evspline(spl) m=min(graphsp[,2],meansp[,2],graphsp5[,2:6]) M=max(graphsp[,2],meansp[,2],graphsp5[,2:6]) plot(graphsp,type='l',ylim=c(m,M)) lines(meansp,col='red',lwd=3,lty=2) #the mean spline seq2dyad Organizing indices in a spline basis in the net form Description This auxiliary function generates the map between the sequential order and the dyadic net structure of a spline basis. It works only with indices so it can be utilized to any basis in the space of splines with the zero-boundary conditions. The function is useful for creating the dyadic structure of the graphs and whenever a reference to the k-tuples and the levels of support is needed. Usage seq2dyad(n_sp, k) Arguments n_sp positive integer, the number of splines to be organized into the dyadic net; The dyadic net does not need to be fully dyadic, i.e. n_sp does not need to be equal to k2n − 1, where n is the number of the internal knots. See the references for more details. k the size of a tuple in the dyadic net; It naturally corresponds to the smoothness order of splines for which the net is build. Value The double indexed list of single row matrices of positive integers in the range 1:n_sp. Each vector has typically the length k and some of them may correspond to incomplete tuplets and thus can be shorter. The first index in the list points to the level in the dyadic structure, the second one to the the number of the tuplet at the given level. The integers in the vector pointed by the list correspond to the sequential index of the element belonging to this tuplet. References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also plot,Splinets-method for plotting splinets in the dydadic graphical representation; lincomb for evaluation of a linear combination of splines; refine for refinment of a spline to a larger number of knots; Examples #-------------------------------------------------------# #--The support layers of the dyadic structure of bases--# #-------------------------------------------------------# k=4 # order n = 36 # number of the internal knots (excluding the endpoints) xi = seq(0, 1, length.out = n+2) spnt=splinet(xi,k) plot(spnt$os) #standard plotting plot(spnt$bs,type='dyadic') #dyadic format of plots net=seq2dyad(n-k+1,k) #retrieving dyadic structure ind1=c(net[[4]][[1]],net[[4]][[2]]) plot(subsample(spnt$os,ind1)) ind2=c(net[[4]][[3]],net[[4]][[4]]) #the lowest support in the dyadic net lines(subsample(spnt$bs,ind2)) splinet B-splines, periodic B-splines and their orthogonalization Description The B-splines (periodic B-splines) are either given in the input or generated inside the routine. 
Then, given the B-splines and the argument type, the routine additionally generates a
Splinets-object representing an orthonormal spline basis obtained from a certain orthonormalization
of the B-splines. Orthonormal spline bases are obtained by one of the following methods: the
Gram-Schmidt method, the two-sided method, and/or the splinet algorithm, which is the default
method. All spline bases are kept in the format of Splinets-objects.

Usage

splinet(
  knots = NULL,
  smorder = 3,
  type = "spnt",
  Bsplines = NULL,
  periodic = FALSE,
  norm = F
)

Arguments

knots     n+2 vector, the knots (presented in the increasing order); It is not needed when the
          Bsplines argument is not NULL, in which case the knots from Bsplines are inherited.
smorder   integer, the order of the splines, the default is smorder=3; Again, it is inherited from
          the Bsplines argument if the latter is not NULL.
type      string, the type of the basis; The following choices are available
          • 'bs' for the unorthogonalized B-splines,
          • 'spnt' for the orthogonal splinet (the default),
          • 'gsob' for the Gram-Schmidt (one-sided) O-splines,
          • 'twob' for the two-sided O-splines.
Bsplines  Splinets-object, the basis of the B-splines (if not NULL); When this argument is not NULL
          the first two arguments are not needed since they will be inherited from Bsplines.
periodic  logical, a flag to indicate if B-splines will be of periodic type or not;
norm      logical, a flag to indicate if the output B-splines should be normalized;

Details

The B-spline basis, if not given in the input, is computed from the following recurrence (with
respect to the smoothness order of the B-splines)

    B^{\xi}_{l,k}(x) = \frac{x - \xi_l}{\xi_{l+k} - \xi_l} B^{\xi}_{l,k-1}(x)
                     + \frac{\xi_{l+1+k} - x}{\xi_{l+1+k} - \xi_{l+1}} B^{\xi}_{l+1,k-1}(x),
    l = 0, ..., n-k.

The dyadic algorithm that is implemented takes into account efficiencies due to the equally spaced
knots (exhibited in the Toeplitz form of the Gram matrix) only if the problem is fully dyadic, i.e.
if the number of the internal knots is smorder*2^N-1, for some integer N. To utilize this
efficiency it may be advantageous, for a large number of equally spaced knots, to choose them so
that their number follows the fully dyadic form. An additional advantage of the dyadic form is the
complete symmetry at all levels of the support. The algorithm works with both zero-boundary splines
and periodic splines.

Value

Either a list list("bs"=Bsplines) made of a single Splinets-object Bsplines when type=='bs', which
represents the B-splines (the B-splines are normalized or not, depending on the norm flag), or a
list of two Splinets-objects: list("bs"=Bsplines,"os"=Splinet), where Bsplines are either computed
(when the input Bsplines=NULL) or taken from the input Bsplines (this output will be normalized or
not depending on the norm flag), and Splinet is the B-spline orthogonalization determined by the
input argument type.

References

<NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient
B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022)
<https://doi.org/10.1016/j.cam.2022.114444>.
<NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal
bases." <arXiv:2102.00733>.
<NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets."
<arXiv:2302.07552> See Also project for projecting into the functional spaces spanned by the spline bases; lincomb for evalua- tion of a linear combination of splines; seq2dyad for building the dyadic structure for a splinet of a given smoothness order; plot,Splinets-method for visualisation of splinets; Examples #--------------------------------------# #----Splinet, equally spaced knots-----# #--------------------------------------# k=2 # order n_knots = 5 # number of knots xi = seq(0, 1, length.out = n_knots) so = splinet(xi, k) plot(so$bs) #Plotting B-splines plot(so$os) #Plotting Splinet #Verifying the orthogonalization gm = gramian(so$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) #An example of the dyadic structure with equally spaced knots k=3 N=3 n_knots=2^N*k-1 #the number of internal knots for the dyadic case xi = seq(0, 1, length.out = n_knots+2) so = splinet(xi) plot(so$bs,type="simple",vknots=FALSE,lwd=3) #Plotting B-splines in a single simple plot plot(so$os,type="simple",vknots=FALSE,lwd=3) plot(so$os,lwd=3,mrgn=2) #Plotting the splinet on the dyadic net of support intervals so=splinet(xi, Bsplines=so$bs, type='gsob') #Obtaining the Gram-Schmidt orthogonalization plot(so$os,type="simple",vknots=FALSE) #Without computing B-splines again so=splinet(xi, Bsplines=so$bs, type='twob') #Obtaining the symmetrize orthogonalization plot(so$os,type="simple",vknots=FALSE) #-------------------------------------# #---Splinet, unequally spaced knots---# #-------------------------------------# n_knots=25 xi = c(0, sort(runif(n_knots)), 1) sone = splinet(xi, k) plot(sone$bs, type='dyadic') #Plotting B-splines plot(sone$os) #Plotting Splinet #Verifying the orthogonalization gm = gramian(sone$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) #------------------------------------------# #---Dyadic splinet, equally spaced knots---# #------------------------------------------# k = 2 # order N = 3 # support level n_so = k*(2^N-1) # number of splines in a dyadic structure with N and k n_knots = n_so + k + 1 # number of knots xi = seq(0, 1, length.out = n_knots) sodyeq = splinet(xi, k) plot(sodyeq$bs) #Plotting B-splines plot(sodyeq$os) #Plotting Splinet #Verifying the orthogonalization gm = gramian(sodyeq$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) #--------------------------------------------# #---Dyadic splinet, unequally spaced knots---# #--------------------------------------------# xi = c(0, sort(runif(n_knots)), 1) sody = splinet(xi, k) plot(sody$bs) #Plotting B-splines plot(sody$os) #Plotting Splinet #Verifying the orthogonalization gm = gramian(sody$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) #-----------------------------------------# #---Bspline basis, equally spaced knots---# #-----------------------------------------# n = 15 xi = seq(0,1,length.out = n+2) order = 2 bs = splinet(xi, order, type = 'bs') plot(bs$bs) #---------------------------------------------# #---Bspline basis, non-equally spaced knots---# #---------------------------------------------# n = 6 xi = c(0,sort(runif(n)),1) order = 3 so = splinet(xi, order, type = 'bs') #unnormalized version plot(so$bs) so1 = splinet(type='bs',Bsplines=so$bs,norm=TRUE) #normalized version plot(so1$bs) #-------------------------------------------------# #---Gram-Schmidt osplines, equally spaced knots---# #-------------------------------------------------# so = splinet(xi, order, type = 'gsob') plot(so$bs) plot(so$os) #Using the previously 
generated B-splines and normalizing them so1 = splinet(Bsplines=so$bs, type = "gsob",norm=TRUE) plot(so1$bs) #normalized B-splines plot(so1$os) #the one sided osplines gm = gramian(so1$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) #verification of the orghonoalization of the matrix #-----------------------------------------------------# #---Gram-Schmidt osplines, non-equally spaced knots---# #-----------------------------------------------------# so = splinet(Bsplines=sody$bs, type = 'gsob') #previously genereted Bsplines plot(so$bs) plot(so$os) gm = gramian(so$os) diag(gm) sum(gm - diag(diag(gm))) #---------------------------------------------# #---Twosided osplines, equally spaced knots---# #---------------------------------------------# so = splinet(Bsplines=bs$bs, type = 'twob') plot(so$os) gm = gramian(so$os) #verification of the orthogonality diag(gm) sum(gm - diag(diag(gm))) #-------------------------------------------------# #---Twosided osplines, non equally spaced knots---# #-------------------------------------------------# so = splinet(Bsplines=sody$bs, type = 'twob') plot(so$os) gm = gramian(so$os) #verification of the orthogonality diag(gm) sum(gm - diag(diag(gm))) #--------------------------------------------# #---Periodic splinet, equally spaced knots---# #--------------------------------------------# k=2 # order n_knots = 12 # number of knots xi = seq(0, 1, length.out = n_knots) so = splinet(xi, k, periodic = TRUE) plot(so$bs) #Plotting B-splines plot(so$os) #Plotting Splinet #Verifying the orthogonalization gm = gramian(so$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) #An example of the dyadic structure with equally spaced knots k=3 N=3 n_knots=2^N*k-1 #the number of internal knots for the dyadic case xi = seq(0, 1, length.out = n_knots+2) so = splinet(xi, periodic = TRUE) plot(so$bs,type="simple") #Plotting B-splines in a single simple plot plot(so$os,type="simple") plot(so$os) #Plotting the splinet on the dyadic net of support intervals so=splinet(xi, Bsplines=so$bs, type='gsob') #Obtaining the Gram-Schmidt orthogonalization plot(so$os,type="simple") #Without computing B-splines again so=splinet(xi, Bsplines=so$bs , type='twob') #Obtaining the symmetrize orthogonalization plot(so$os,type="simple") #-------------------------------------# #---Splinet, unequally spaced knots---# #-------------------------------------# n_knots=25 xi = c(0, sort(runif(n_knots)), 1) sone = splinet(xi, k, periodic = TRUE) plot(sone$bs, type='dyadic') #Plotting B-splines plot(sone$os) #Plotting Splinet #Verifying the orthogonalization gm = gramian(sone$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) #------------------------------------------# #---Dyadic splinet, equally spaced knots---# #------------------------------------------# k = 2 # order N = 3 # support level n_so = k*(2^N-1) # number of splines in a dyadic structure with N and k n_knots = n_so + k + 1 # number of knots xi = seq(0, 1, length.out = n_knots) sodyeq = splinet(xi, k, periodic = TRUE) plot(sodyeq$bs) #Plotting B-splines plot(sodyeq$os) #Plotting Splinet #Verifying the orthogonalization gm = gramian(sodyeq$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) #--------------------------------------------# #---Dyadic splinet, unequally spaced knots---# #--------------------------------------------# xi = c(0, sort(runif(n_knots)), 1) sody = splinet(xi, k, periodic = TRUE) plot(sody$bs) #Plotting B-splines plot(sody$os) #Plotting 
Splinet #Verifying the orthogonalization gm = gramian(sody$os) #evaluation of the inner products diag(gm) sum(gm - diag(diag(gm))) Splinets-class The class to represent a collection of splines Description The main class in the splinets-package used for representing a collection of splines. Value running new("Splinets") return an object that belongs to the class Splinets, with the initializa- tion of the default values for the fields. Slots knots numeric n+2 vector, a vector of n+2 knot locations presented in the increasing order and without ties; smorder non-negative integer, the smoothnes order of the splines, i.e. the highest order of non-zero derivative; equid logical, indicates if the knots are equidistant; Some computations in the equidistant case are simpler so this information helps to account for it. supp list (of matrices), • length(supp)==0 – the full support set for all splines, • length(supp)==N – support sets for N splines; If non-empty, a list containing Nsupp x 2 matrices (of positive integers). If Nsupp is equal to one it should be a row matrix (not a vector). The rows in the matrices, supp[[i]][l,], l in 1:Nsupp represents the indices of the knots that are the endpoints of the intervals in the support sets. Each of the support set is represented as a union of disjoint Nsupp intervals, with knots as the endpoints. Outside the set (support), the spline vanishes. Each matrix in this list is ordered so the rows closer to the top correspond to the intervals closer to the LHS end of the support. der list (of matrices); a list of the length N containing sum(supp[[i]][,2]-supp[[i]][,1]+1) x (smorder+1) matrices, where i is the index running through the list. Each matrix in the list includes the values of the derivatives at the knots in the support of the corresponding spline. taylor (n+1) x (smorder+1), if equid=FALSE, or 1 x (smorder+1) if equid=TRUE, columnwise vectors of the Taylor expansion coefficients at the knots; Vectors instead of matrices are rec- ognized properly. The knot and order dependent matrix of rows of coefficients used in the Taylor expansion of splines. Once evaluated it can be used in computations for any spline of the given order over the given knots. The columns of this matrix are used for evaluation of the values of the splines in-between knots, see the references for further details. type string, one of the following character strings: bs,gsob,twob,dspnt,spnt,sp; The default is sp which indicates any unstructured collection of splines. The rest of the strings indicate different spline bases: • bs for B-splines, • gsob for Gram-Schmidt O-splines, • twob for two-sided O-splines, • dspnt for a fully dyadic splinet, • spnt for a non-dyadic splinet. periodic logical, indicates if the B-splines are periodic or not. epsilon numeric (positive), an accuracy used to detect a problem with the conditions required for the matrix of the derivatives (controls relative deviation from the conditions); References <NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022) <https://doi.org/10.1016/j.cam.2022.114444>. <NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." 
<arXiv:2302.07552>

See Also

is.splinets for evaluation of a Splinets-object; construct for constructing a Splinets-object;
plot,Splinets-method for plotting methods for Splinets-objects;

Examples

#-------------------------------------------------------------#
#-------Generating an object from the class 'Splinets'--------#
#-------------------------------------------------------------#
#The most generic generation of an object of class 'Splinets':
sp=new("Splinets") #a generic format for a 'Splinets' object
sp

#The most important SLOTs of 'Splinets' - the default values
sp@knots
sp@smorder
sp@der
sp@supp

set.seed(5); n=13; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1
sp@knots=xi #randomly assigned knots

#Changing the order of smoothness and initializing the Taylor coefficients
ssp=new("Splinets",knots=xi,smorder=2)
ssp@taylor

#Equidistant case
ssp=new("Splinets",knots=seq(0,1,1/(n+1)),smorder=3)
ssp@taylor
ssp@equid

subsample               Subsampling from a set of splines

Description

The function constructs a Splinets-object that is made of subsampled elements of the input
Splinets-object. The input objects have to be of the same order and over the same knots.

Usage

subsample(Sp, ss)

Arguments

Sp        Splinets-object, a collection of s splines;
ss        vector of integers, the coordinates from 1:s;

Details

The output Splinets-object made of the subsampled splines is always of the regular type, i.e. SLOT
type='sp'.

Value

A Splinets-object containing length(ss) splines that are selected from the input object.

References

<NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient
B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022)
<https://doi.org/10.1016/j.cam.2022.114444>.
<NAME>. (2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal
bases." <arXiv:2102.00733>.
<NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets."
<arXiv:2302.07552>

See Also

is.splinets for diagnostics of Splinets-objects; construct for constructing such a Splinets-object;
gather for combining Splinets-objects; refine for refinement of a spline to a larger number of
knots; plot,Splinets-method for plotting Splinets-objects;

Examples

#-----------------------------------------------------#
#---------------------Subsampling---------------------#
#-----------------------------------------------------#
#Example with different support ranges, the 3rd order
n=25; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1; k=3
supp=list(t(c(2,12)),t(c(4,20)),t(c(6,25))) #defining support ranges for three splines

#Initial random matrices of the derivative for each spline
set.seed(5)
SS1=matrix(rnorm((supp[[1]][1,2]-supp[[1]][1,1]+1)*(k+1)),ncol=(k+1))
SS2=matrix(rnorm((supp[[2]][1,2]-supp[[2]][1,1]+1)*(k+1)),ncol=(k+1))
SS3=matrix(rnorm((supp[[3]][1,2]-supp[[3]][1,1]+1)*(k+1)),ncol=(k+1))

spl=construct(xi,k,SS1,supp[[1]]) #constructing the first correct spline
nspl=construct(xi,k,SS2,supp[[2]],'CRFC')
#See 'gather' function for more details on what follows
spl=gather(spl,nspl) #the second and the first ones
nspl=construct(xi,k,SS3,supp[[3]],'CRLC')
spl=gather(spl,nspl) #the third is added

#Replicating by subsampling with replacement
sz=length(spl@der)
ss=sample(1:sz,size=10,rep=TRUE)
spl=subsample(spl,ss)
is.splinets(spl)[[1]]
spl@supp
spl@der

#Subsampling without replacement
ss=c(3,8,1)
sspl=subsample(spl,ss)
sspl@supp
sspl@der
is.splinets(sspl)[[1]]

#A single spline sampled from a 'Splinets' object
is.splinets(subsample(sspl,1))

sym2one                 Switching between representations of the matrices of derivatives

Description

A technical but useful transformation of the matrix of derivatives from the one-sided to the
symmetric representation, or the reverse one. It allows for switching between the standard
representation of the matrix of the derivatives for Splinets, which is symmetric around the central
knot(s), and the one-sided representation that yields the RHS limits at the knots, which is more
convenient for computations.

Usage

sym2one(S, supp = NULL, inv = FALSE)

Arguments

S         (m+2) x (k+1) numeric matrix, the derivatives in one of the two representations;
supp      (Nsupp x 2) or NULL matrix, row-wise the endpoint indices of the support intervals; If it
          is equal to NULL (which is also the default), then the full support is assumed.
inv       logical; If FALSE (default), then the function assumes that the input is in the symmetric
          format and transforms it to the left-to-right format. If TRUE, then the inverse
          transformation is applied.

Details

The transformation essentially changes only the last column in S, i.e. the highest (discontinuous)
derivatives, so that the one-sided representation yields the right-hand-side limit. It is expected
that the number of rows in S is the same as the total size of the support as indicated by supp,
i.e. if supp!=NULL, then sum(supp[,2]-supp[,1]+1)=m+2. If the latter is true, then all derivative
submatrices of the components in S will be reversed. However, this condition is not formally
checked in the code, which may lead to switching the representations only for parts of the matrix
S.

Value

A matrix that is the respective transformation of the input.

References

<NAME>., <NAME>., <NAME>. "Dyadic diagonalization of positive definite band matrices and efficient
B-spline orthogonalization." Journal of Computational and Applied Mathematics (2022)
<https://doi.org/10.1016/j.cam.2022.114444>.
<NAME>.
(2021) "Splinets – splines through the Taylor expansion, their support sets and orthogonal bases." <arXiv:2102.00733>. <NAME>., <NAME>. (2023) "Splinets 1.5.0 – Periodic Splinets." <arXiv:2302.07552> See Also Splinets-class for the description of the Splinets-class; is.splinets for diagnostic of Splinets- objects; Examples #-----------------------------------------------------# #-------Representations of derivatives at knots-------# #-----------------------------------------------------# n=10; k=3; xi=seq(0,1,by=1/(n+1)) #the even number of equally spaced knots set.seed(5) S=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl=construct(xi,k,S) #construction of a spline a=spl@der[[1]] b=sym2one(a) aa=sym2one(b,inv=TRUE) # matrix 'aa' is the same as 'a' n=11; xi2=seq(0,1,by=1/(n+1)) #the odd number of knots case S2=matrix(rnorm((n+2)*(k+1)),ncol=(k+1)) spl2=construct(xi2,k,S2) #construction of a spline a2=spl2@der[[1]] b2=sym2one(a2) aa2=sym2one(b2, inv=TRUE) # matrix 'aa2' is the same as 'a2' #-----------------------------------------------------# #--------------More complex support sets--------------# #-----------------------------------------------------# #Zero order splines, non-equidistant case, support with three components n=43; xi=seq(0,1,by=1/(n+1)); k=3; xi=sort(runif(n+2)); xi[1]=0; xi[n+2]=1; support=list(matrix(c(2,14,17,30,32,43),ncol=2,byrow = TRUE)) #Third order splines ssp=new("Splinets",knots=xi,supp=support,smorder=k) #with partial support m=sum(ssp@supp[[1]][,2]-ssp@supp[[1]][,1]+1) #the total number of knots in the support ssp@der=list(matrix(rnorm(m*(k+1)),ncol=(k+1))) #the derivative matrix at random IS=is.splinets(ssp) IS$robject@der IS$robject@supp b=sym2one(IS$robject@der[[1]],IS$robject@supp[[1]]) #the RHS limits at the knots a=sym2one(b,IS$robject@supp[[1]],inv=TRUE) #is the same as the SLOT supp in IS@robject tire Data on tire responses to a rough road profile Description These are simulated data of tire responses to a rough road at the high-transient event. The simu- lations have been made based on the fit of the so-called Slepian model to a non-Gaussian rough road profile. Further details can be found in the reference. The responses provided are measured at the wheel and thus describing the tire response. There are 100 functional measurments, kept column-wise in the matrix. Additionally, the time instants of the measurements are given as the first column in the matrix. Since the package uses the so-called "lazy load", the matrix is directly available without an explicit load of the data. This means that data(tire) does not need to be invoked. The data were saved using compress='xz' option, which requires 3.5 or higher version of R. The data are uploaded as a dataframe, thus as.matrix(tire) is needed if the matrix form is required. Usage data(tire) Format numerical 4095 x 101 dataframe: tire References Podgórski, K, <NAME>. and <NAME>. (2015) Slepian noise approach for gaussian and Laplace moving average processes. Extremes, 18(4):665–695, <doi:10.1007/s10687-015-0227-z>. 
See Also truck for a related dataset; Examples #-----------------------------------------------------# #----------- Plotting the trucktire data -------------# #-----------------------------------------------------# #Activating data: data(tire) data(truck) matplot(tire[,1],tire[,2:11],type='l',lty=1) #ploting the first 10 tire responses matplot(truck[,1],truck[,2:11],type='l',lty=1) #ploting the first 10 truck responses #Projecting truck data into splinet bases knots1=seq(0,50, by=2) Subtruck= truck[2048:3080,] # selecting the truck data that in the interval[0,50] TruckProj=project(as.matrix(Subtruck),knots1) MeanTruck=matrix(colMeans(TruckProj$coeff),ncol=dim(TruckProj$coeff)[2]) MeanTruckSp=lincomb(TruckProj$basis,MeanTruck) plot(MeanTruckSp) #the mean spline of the projections plot(TruckProj$sp,sID=1:10) #the first ten projections of the functional data Sigma=cov(TruckProj$coeff) Spect=eigen(Sigma,symmetric = TRUE) plot(Spect$values, type ='l',col='blue', lwd=4 ) #the eigenvalues EigenTruckSp=lincomb(TruckProj$basis,t(Spect$vec)) plot(EigenTruckSp,sID=1:5) #the first five largest eigenfunctions truck Data on truck responses to a rough road profile Description These are simulated data of truck responses to a rough road at the high transient event. The simula- tions have been made based on the fit of the so-called Slepian model to a non-Gaussian rough road profile. Details can be found in the reference. The responses provided are at the driver seat. There are 100 functional measurments, kept column-wise in the matrix. Additionally, the time instants of the measurements are given as the first column in the matrix. Since the package uses the so-called "lazy load", the matrix is directly available without an explicit load of the data, thus data(truck) does not need to be invoked. Data were saved using compress='xz' option, which requires 3.5 or higher version of R. The data are uploaded as a dataframe, thus as.matrix(tire) is needed if the matrix form is required. Usage data(truck) Format numerical 4095 x 101 dataframe: truck References <NAME>, <NAME>. and <NAME>. (2015) Slepian noise approach for gaussian and Laplace moving average processes. Extremes, 18(4):665–695, <doi:10.1007/s10687-015-0227-z>. See Also tire for a related dataset; Examples #-----------------------------------------------------# #----------- Plotting the trucktire data -------------# #-----------------------------------------------------# #Activating data: data(tire) data(truck) matplot(tire[,1],tire[,2:11],type='l',lty=1) #ploting the first 10 tire responses matplot(truck[,1],truck[,2:11],type='l',lty=1) #ploting the first 10 truck responses #Projecting truck data into splinet bases knots1=seq(0,50, by=2) Subtruck= truck[2048:3080,] # selecting the truck data that in the interval[0,50] TruckProj=project(as.matrix(Subtruck),knots1) MeanTruck=matrix(colMeans(TruckProj$coeff),ncol=dim(TruckProj$coeff)[2]) MeanTruckSp=lincomb(TruckProj$basis,MeanTruck) plot(MeanTruckSp) #the mean spline of the projections plot(TruckProj$sp,sID=1:10) #the first ten projections of the functional data Sigma=cov(TruckProj$coeff) Spect=eigen(Sigma,symmetric = TRUE) plot(Spect$values, type ='l',col='blue', lwd=4 ) #the eigenvalues EigenTruckSp=lincomb(TruckProj$basis,t(Spect$vec)) plot(EigenTruckSp,sID=1:5) #the first five largest eigenfunctions wind Data on wind direction and speed. 
Description NASA/POWER CERES/MERRA2 Native Resolution Hourly Data • Dates: 01/01/2015 through 03/05/2015 • Location: Latitude 25.7926 Longitude -80.3239 • Elevation from MERRA-2: Average for 0.5 x 0.625 degree lat/lon region = 5.4 meters Data frame fields: • YEAR – Year of a measurement • MO – Month of a measurement • DY – Day of a measurement • HR – Hour of a measurement • WD10M – MERRA-2 Wind Direction at 10 Meters (Degrees) • WS50M – MERRA-2 Wind Speed at 50 Meters (m/s) • WD50M – MERRA-2 Wind Direction at 50 Meters (Degrees) • WS10M – MERRA-2 Wind Speed at 10 Meters (m/s) Usage data(wind) Format numerical 1536 x 8 dataframe: wind References The data was obtained from the National Aeronautics and Space Administration (NASA) Lang- ley Research Center (LaRC) Prediction of Worldwide Energy Resource (POWER) Project funded through the NASA Earth Science/Applied Science Program. https://power.larc.nasa.gov/ data-access-viewer/ Examples #------------------------------------------------# #----------- Plotting the Wind data -------------# #------------------------------------------------# data(wind) #activating the data wind1=wind[,-1] #Removing YEAR as irrelevant #Transforming data to daily with the periodic form, i.e. the arguments in [0,1], #which is required in the periodic case. numbdays=length(wind1[,1])/24 Days=vector(mode='list', length=numbdays) for(i in 1:numbdays){ Days[[i]]=wind1[i*(1:24),] Days[[i]][,c(4,6)]=Days[[i]][,c(4,6)]/360 #the direction in [0,1] } #Raw discretized data for the first day par(mfrow=c(2,2)) hist(Days[[1]][,4],xlim=c(0,1),xlab='Wind direction',main='First day 10[m]') hist(Days[[1]][,6],xlim=c(0,1),xlab='Wind direction',main='First day 50[m]') plot(Days[[1]][,4],Days[[1]][,5],xlim=c(0,1),pch='.',cex=4,xlab='Wind direction',ylab='Wind speed') plot(Days[[1]][,6],Days[[1]][,7],xlim=c(0,1),pch='.',cex=4,xlab='Wind direction',ylab='Wind speed') #First Day Data: #Projections of the histograms to the periodic spline form FirstDayDataF1=cbind(hist(Days[[1]][,4],xlim=c(0,1),breaks=seq(0,1,by=0.1))$mids, hist(Days[[1]][,4],xlim=c(0,1),breaks=seq(0,1,by=0.1))$counts) k=3 N=2 n_knots=2^N*k-1 #the number of internal knots for the dyadic case xi = seq(0, 1, length.out = n_knots+2) #Note that the range of the argument is assumed to be between 0 and 1 PrF1=project(FirstDayDataF1,xi,periodic = TRUE, graph = TRUE) F1=PrF1$sp #The first day projection of the direction histogram at 10[m] #Projections of the scatterplots to the periodic spline form #The bivariate sampl FirstDayDataF1V1=as.matrix(Days[[1]][,4:5]) #we note that wind directions are scaled but not ordered #Padding the data with zeros as the sampling frequency is not sufficiently dense over [0,1] FirstDayDataF1V1=rbind(FirstDayDataF1V1,cbind(seq(0,1,by=1/24),rep(0,25))) #Another knot selection with more knots but still dyadic case k=4 N=3 n_knots2=2^N*k-1 #the number of internal knots for the dyadic case xi2 = seq(0, 1, length.out = n_knots2+2) #For illustration one can plot the B-splines and the corresponding splinet so = splinet(xi2,smorder = k, periodic = TRUE,norm = TRUE) plot(so$bs) plot(so$bs,type='dyadic') #To facilitate the comparison with the splinet better #one can choose the dyadic grapph plot(so$os) #Projecting direction/wind data onto splines PrS1=project(FirstDayDataF1V1,xi2,smorder=k,periodic = TRUE, graph = TRUE) S1=PrS1$sp #the next 7 days days= 7 #Transforming to the periodic data #The direction histogram for(i in 2:days){ DataF1=cbind(hist(Days[[i]][,4],plot=FALSE,breaks=seq(0,1,by=0.1))$mids, 
hist(Days[[i]][,4],plot=FALSE,breaks=seq(0,1,by=0.1))$counts) PrF1=project(DataF1,xi,periodic = TRUE) F1=gather(F1,PrF1$sp) #Collecting projections of daily wind-direction histograms at 10[m] } plot(F1) #plot of all daily functional data wind direction distributions #Wind direction vs speed data at 10[m] for(i in 2:days){ DataF1V1=as.matrix(Days[[i]][,4:5]) #we note that wind directions are scaled but not ordered #Padding the data with zeros as the sampling frequency is not sufficiently dense over [0,1] DataF1V1=rbind(DataF1V1,cbind(seq(0,1,by=1/24),rep(0,25))) PrS1=project(DataF1V1,xi2,smorder=k,periodic = TRUE) S1=gather(S1,PrS1$sp) #Collecting projections of daily wind-direction histograms at 10[m] } plot(S1) #plot of all daily functional data wind speed at wind direction #Computing means of the data A=matrix(rep(1/days,days),ncol=days) MeanF1=lincomb(F1,A) plot(MeanF1) MeanS1=lincomb(S1,A) plot(MeanS1)
README
---

### XPath

[![GoDoc](https://godoc.org/github.com/antchfx/xpath?status.svg)](https://godoc.org/github.com/antchfx/xpath) [![Coverage Status](https://coveralls.io/repos/github/antchfx/xpath/badge.svg?branch=master)](https://coveralls.io/github/antchfx/xpath?branch=master) [![Build Status](https://travis-ci.org/antchfx/xpath.svg?branch=master)](https://travis-ci.org/antchfx/xpath) [![Go Report Card](https://goreportcard.com/badge/github.com/antchfx/xpath)](https://goreportcard.com/report/github.com/antchfx/xpath)

XPath is a Go package that provides node selection from XML, HTML, or other documents using XPath expressions.

### Implementation

* [htmlquery](https://github.com/antchfx/htmlquery) - an XPath query package for HTML documents
* [xmlquery](https://github.com/antchfx/xmlquery) - an XPath query package for XML documents
* [jsonquery](https://github.com/antchfx/jsonquery) - an XPath query package for JSON documents

### Supported Features

###### The basic XPath patterns.

> The basic XPath patterns cover 90% of the cases that most stylesheets will need.

* `node` : Selects all child elements with the node name node.
* `*` : Selects all child elements.
* `@attr` : Selects the attribute attr.
* `@*` : Selects all attributes.
* `node()` : Matches a node of any type.
* `text()` : Matches a text node.
* `comment()` : Matches a comment node.
* `.` : Selects the current node.
* `..` : Selects the parent of the current node.
* `/` : Selects the document node.
* `a[expr]` : Selects only those nodes matching a which also satisfy the expression expr.
* `a[n]` : Selects the nth node matching a. When a filter's expression is a number, XPath selects based on position.
* `a/b` : For each node matching a, adds the nodes matching b to the result.
* `a//b` : For each node matching a, adds the descendant nodes matching b to the result.
* `//b` : Returns elements in the entire document matching b.
* `a|b` : All nodes matching a or b, a union operation (not boolean or).
* `(a, b, c)` : Evaluates each of its operands and concatenates the resulting sequences, in order, into a single result sequence.

###### Node Axes

* `child::*` : The child axis selects children of the current node.
* `descendant::*` : The descendant axis selects descendants of the current node. It is equivalent to '//'.
* `descendant-or-self::*` : Selects descendants including the current node.
* `attribute::*` : Selects attributes of the current element. It is equivalent to @*.
* `following-sibling::*` : Selects nodes after the current node.
* `preceding-sibling::*` : Selects nodes before the current node.
* `following::*` : Selects the first matching node following in document order, excluding descendants.
* `preceding::*` : Selects the first matching node preceding in document order, excluding ancestors.
* `parent::*` : Selects the parent if it matches. The '..' pattern from the core is equivalent to 'parent::node()'.
* `ancestor::*` : Selects matching ancestors.
* `ancestor-or-self::*` : Selects ancestors including the current node.
* `self::*` : Selects the current node. '.' is equivalent to 'self::node()'.

###### Expressions

The gxpath package supports three data types: number, boolean, and string.

* `path` : Selects nodes based on the path.
* `a = b` : Standard comparisons.
  + a = b True if a equals b.
  + a != b True if a is not equal to b.
  + a < b True if a is less than b.
  + a <= b True if a is less than or equal to b.
  + a > b True if a is greater than b.
  + a >= b True if a is greater than or equal to b.
* `a + b` : Arithmetic expressions. + `- a` Unary minus + a + b Add + a - b Substract + a * b Multiply + a div b Divide + a mod b Floating point mod, like Java. * `a or b` : Boolean `or` operation. * `a and b` : Boolean `and` operation. * `(expr)` : Parenthesized expressions. * `fun(arg1, ..., argn)` : Function calls: | Function | Supported | | --- | --- | | `boolean()` | ✓ | | `ceiling()` | ✓ | | `choose()` | ✗ | | `concat()` | ✓ | | `contains()` | ✓ | | `count()` | ✓ | | `current()` | ✗ | | `document()` | ✗ | | `element-available()` | ✗ | | `ends-with()` | ✓ | | `false()` | ✓ | | `floor()` | ✓ | | `format-number()` | ✗ | | `function-available()` | ✗ | | `generate-id()` | ✗ | | `id()` | ✗ | | `key()` | ✗ | | `lang()` | ✗ | | `last()` | ✓ | | `local-name()` | ✓ | | `name()` | ✓ | | `namespace-uri()` | ✓ | | `normalize-space()` | ✓ | | `not()` | ✓ | | `number()` | ✓ | | `position()` | ✓ | | `replace()` | ✓ | | `reverse()` | ✓ | | `round()` | ✓ | | `starts-with()` | ✓ | | `string()` | ✓ | | `string-length()` | ✓ | | `substring()` | ✓ | | `substring-after()` | ✓ | | `substring-before()` | ✓ | | `sum()` | ✓ | | `system-property()` | ✗ | | `translate()` | ✓ | | `true()` | ✓ | | `unparsed-entity-url()` | ✗ | ### Changelogs 2019-03-19 * optimize XPath `|` operation performance. [#33](https://github.com/antchfx/xpath/issues/33). Tips: suggest split into multiple subquery if you have a lot of `|` operations. 2019-01-29 * improvement `normalize-space` function. [#32](https://github.com/antchfx/xpath/issues/32) 2018-12-07 * supports XPath 2.0 Sequence expressions. [#30](https://github.com/antchfx/xpath/pull/30) by [@minherz](https://github.com/minherz). Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Example [¶](#example-package) XPath package example. ``` package main import ( "fmt" "github.com/antchfx/xpath" ) func main() { expr, err := xpath.Compile("count(//book)") if err != nil { panic(err) } var root xpath.NodeNavigator // using Evaluate() method val := expr.Evaluate(root) // it returns float64 type fmt.Println(val.(float64)) // using Evaluate() method expr = xpath.MustCompile("//book") val = expr.Evaluate(root) // it returns NodeIterator type. iter := val.(*xpath.NodeIterator) for iter.MoveNext() { fmt.Println(iter.Current().Value()) } // using Select() method iter = expr.Select(root) // it always returns NodeIterator object. for iter.MoveNext() { fmt.Println(iter.Current().Value()) } } ``` ``` Output: ``` Share Format Run ### Index [¶](#pkg-index) * [type Expr](#Expr) * + [func Compile(expr string) (*Expr, error)](#Compile) + [func MustCompile(expr string) *Expr](#MustCompile) * + [func (expr *Expr) Evaluate(root NodeNavigator) interface{}](#Expr.Evaluate) + [func (expr *Expr) Select(root NodeNavigator) *NodeIterator](#Expr.Select) + [func (expr *Expr) String() string](#Expr.String) * [type NodeIterator](#NodeIterator) * + [func Select(root NodeNavigator, expr string) *NodeIterator](#Select) * + [func (t *NodeIterator) Current() NodeNavigator](#NodeIterator.Current) + [func (t *NodeIterator) MoveNext() bool](#NodeIterator.MoveNext) * [type NodeNavigator](#NodeNavigator) * [type NodeType](#NodeType) #### Examples [¶](#pkg-examples) * [Package](#example-package) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) This section is empty. 
### Types [¶](#pkg-types) #### type [Expr](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L107) [¶](#Expr) ``` type Expr struct { // contains filtered or unexported fields } ``` Expr is an XPath expression for query. #### func [Compile](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L140) [¶](#Compile) ``` func Compile(expr [string](/builtin#string)) (*[Expr](#Expr), [error](/builtin#error)) ``` Compile compiles an XPath expression string. #### func [MustCompile](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L155) [¶](#MustCompile) ``` func MustCompile(expr [string](/builtin#string)) *[Expr](#Expr) ``` MustCompile compiles an XPath expression string and ignored error. #### func (*Expr) [Evaluate](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L120) [¶](#Expr.Evaluate) ``` func (expr *[Expr](#Expr)) Evaluate(root [NodeNavigator](#NodeNavigator)) interface{} ``` Evaluate returns the result of the expression. The result type of the expression is one of the follow: bool,float64,string,NodeIterator). #### func (*Expr) [Select](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L130) [¶](#Expr.Select) ``` func (expr *[Expr](#Expr)) Select(root [NodeNavigator](#NodeNavigator)) *[NodeIterator](#NodeIterator) ``` Select selects a node set using the specified XPath expression. #### func (*Expr) [String](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L135) [¶](#Expr.String) ``` func (expr *[Expr](#Expr)) String() [string](/builtin#string) ``` String returns XPath expression string. #### type [NodeIterator](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L74) [¶](#NodeIterator) ``` type NodeIterator struct { // contains filtered or unexported fields } ``` NodeIterator holds all matched Node object. #### func [Select](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L98) [¶](#Select) ``` func Select(root [NodeNavigator](#NodeNavigator), expr [string](/builtin#string)) *[NodeIterator](#NodeIterator) ``` Select selects a node set using the specified XPath expression. This method is deprecated, recommend using Expr.Select() method instead. #### func (*NodeIterator) [Current](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L80) [¶](#NodeIterator.Current) ``` func (t *[NodeIterator](#NodeIterator)) Current() [NodeNavigator](#NodeNavigator) ``` Current returns current node which matched. #### func (*NodeIterator) [MoveNext](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L85) [¶](#NodeIterator.MoveNext) ``` func (t *[NodeIterator](#NodeIterator)) MoveNext() [bool](/builtin#bool) ``` MoveNext moves Navigator to the next match node. #### type [NodeNavigator](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L32) [¶](#NodeNavigator) ``` type NodeNavigator interface { // NodeType returns the XPathNodeType of the current node. NodeType() [NodeType](#NodeType) // LocalName gets the Name of the current node. LocalName() [string](/builtin#string) // Prefix returns namespace prefix associated with the current node. Prefix() [string](/builtin#string) // Value gets the value of current node. Value() [string](/builtin#string) // Copy does a deep copy of the NodeNavigator and all its components. Copy() [NodeNavigator](#NodeNavigator) // MoveToRoot moves the NodeNavigator to the root node of the current node. MoveToRoot() // MoveToParent moves the NodeNavigator to the parent node of the current node. MoveToParent() [bool](/builtin#bool) // MoveToNextAttribute moves the NodeNavigator to the next attribute on current node. 
MoveToNextAttribute() [bool](/builtin#bool) // MoveToChild moves the NodeNavigator to the first child node of the current node. MoveToChild() [bool](/builtin#bool) // MoveToFirst moves the NodeNavigator to the first sibling node of the current node. MoveToFirst() [bool](/builtin#bool) // MoveToNext moves the NodeNavigator to the next sibling node of the current node. MoveToNext() [bool](/builtin#bool) // MoveToPrevious moves the NodeNavigator to the previous sibling node of the current node. MoveToPrevious() [bool](/builtin#bool) // MoveTo moves the NodeNavigator to the same position as the specified NodeNavigator. MoveTo([NodeNavigator](#NodeNavigator)) [bool](/builtin#bool) } ``` NodeNavigator provides cursor model for navigating XML data. #### type [NodeType](https://github.com/antchfx/gxpath/blob/v1.1.10/xpath.go#L9) [¶](#NodeType) ``` type NodeType [int](/builtin#int) ``` NodeType represents a type of XPath node. ``` const ( // RootNode is a root node of the XML document or node tree. RootNode [NodeType](#NodeType) = [iota](/builtin#iota) // ElementNode is an element, such as <element>. ElementNode // AttributeNode is an attribute, such as id='123'. AttributeNode // TextNode is the text content of a node. TextNode // CommentNode is a comment node, such as <!-- my comment --> CommentNode ) ```
# Learn Python the Right Way Learn Python the Right Way is a modern adaption of How to Think Like a Computer Scientist. ### More about the Book How to Think Like a Computer Scientist was already the best introduction to Python book available, but we have republished it to: * Use the online IDE Replit instead of showing students how to set up Python on Windows (a point where many aspiring programmers give up). * Modernize the presentation and convert the source to Markdown so readers can more easily contribute. * Use the online app PythonTutor.com for step-by-step visualisation and stepping through code. We've also changed the title to better indicate that we believe that this is the best book for beginners to learn Python, not only in the academic context of computer science. Part 2 of our Python course is codewithrepl.it, a series of Python projects in tutorial form that learners can work through and extend. We believe that working through first this book (to learn the fundamentals) and then the set of tutorials (to see what is possible and gain experience with various libaries) is the best way to learn Python in 2021. # Chapter 1: The way of the program Date: 2013-01-01 Categories: Tags: (Watch a video based on this chapter here on YouTube.) The goal of this book is to teach you to think like a computer scientist. This way of thinking combines some of the best features of mathematics, engineering, and natural science. Like mathematicians, computer scientists use formal languages to denote ideas (specifically computations). Like engineers, they design things, assembling components into systems and evaluating tradeoffs among alternatives. Like scientists, they observe the behavior of complex systems, form hypotheses, and test predictions. The single most important skill for a computer scientist is problem solving. Problem solving means the ability to formulate problems, think creatively about solutions, and express a solution clearly and accurately. As it turns out, the process of learning to program is an excellent opportunity to practice problem-solving skills. That’s why this chapter is called, The way of the program. On one level, you will be learning to program, a useful skill by itself. On another level, you will use programming as a means to an end. As we go along, that end will become clearer. The programming language you will be learning is Python. Python is an example of a high-level language; other high-level languages you might have heard of are C++, PHP, Pascal, C#, and Java. As you might infer from the name high-level language, there are also low-level languages, sometimes referred to as machine languages or assembly languages. Loosely speaking, computers can only execute programs written in low-level languages. Thus, programs written in a high-level language have to be translated into something more suitable before they can run. Almost all programs are written in high-level languages because of their advantages. It is much easier to program in a high-level language so programs take less time to write, they are shorter and easier to read, and they are more likely to be correct. Second, high-level languages are portable, meaning that they can run on different kinds of computers with few or no modifications. In this edition of the textbook, we use an online programming environment called Replit. To follow along with the examples and complete the exercises, all you need is a free account - just navigate to https://replit.com and complete the sign up process. 
Once you have an account, create a new repl and choose Python as the language from the dropdown. You’ll see it automatically creates a file called `main.py` . By convention, files that contain Python programs have names that end with `.py` . The engine that translates and runs Python is called the Python Interpreter: There are two ways to use it: immediate mode and script mode. In immediate mode, you type Python expressions into the Python Interpreter window, and the interpreter immediately shows the result: The `>>>` or `>` is called the Python prompt. The interpreter uses the prompt to indicate that it is ready for instructions. We typed `2 + 2` , and the interpreter evaluated our expression, and replied `4` , and on the next line it gave a new prompt, indicating that it is ready for more input. Working directly in the interpreter is convenient for testing short bits of code because you get immediate feedback. Think of it as scratch paper used to help you work out problems. Anything longer than a few lines should be put into a script. Scripts have the advantage that they can be saved to disk, printed, and so on. To create a script, you can enter the code into the middle pane, as shown below ``` print("My first program adds two numbers") print(2+3) ``` To execute the program, click the Run button in Replit. You’re now a computer programmer! Let’s take a look at some more theory before we start writing more advanced programs. A program is a sequence of instructions that specifies how to perform a computation. The computation might be something mathematical, such as solving a system of equations or finding the roots of a polynomial, but it can also be a symbolic computation, such as searching and replacing text in a document or (strangely enough) compiling a program. The details look different in different languages, but a few basic instructions appear in just about every language: input output math conditional execution repetition Believe it or not, that’s pretty much all there is to it. Every program you’ve ever used, no matter how complicated, is made up of instructions that look more or less like these. Thus, we can describe programming as the process of breaking a large, complex task into smaller and smaller subtasks until the subtasks are simple enough to be performed with sequences of these basic instructions. That may be a little vague, but we will come back to this topic later when we talk about algorithms. Programming is a complex process, and because it is done by human beings, it often leads to errors. Programming errors are called bugs and the process of tracking them down and correcting them is called debugging. Use of the term bug to describe small engineering difficulties dates back to at least 1889, when <NAME> had a bug with his phonograph. Three kinds of errors can occur in a program: syntax errors, runtime errors, and semantic errors. It is useful to distinguish between them in order to track them down more quickly. Python can only execute a program if the program is syntactically correct; otherwise, the process fails and returns an error message. Syntax refers to the structure of a program and the rules about that structure. For example, in English, a sentence must begin with a capital letter and end with a period. this sentence contains a syntax error. So does this one For most readers, a few syntax errors are not a significant problem, which is why we can read the poetry of E. E. Cummings without problems. Python is not so forgiving. 
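For example, a single stray closing parenthesis is enough to stop Python in its tracks (a sketch; the exact wording of the message varies between Python versions):

```
>>> print("Hello, World!"))
  File "<stdin>", line 1
    print("Hello, World!"))
                          ^
SyntaxError: unmatched ')'
```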
If there is a single syntax error anywhere in your program, Python will display an error message and quit, and you will not be able to run your program. During the first few weeks of your programming career, you will probably spend a lot of time tracking down syntax errors. As you gain experience, though, you will make fewer errors and find them faster. The second type of error is a runtime error, so called because the error does not appear until you run the program. These errors are also called exceptions because they usually indicate that something exceptional (and bad) has happened. Runtime errors are rare in the simple programs you will see in the first few chapters, so it might be a while before you encounter one. The third type of error is the semantic error. If there is a semantic error in your program, it will run successfully, in the sense that the computer will not generate any error messages, but it will not do the right thing. It will do something else. Specifically, it will do what you told it to do. The problem is that the program you wrote is not the program you wanted to write. The meaning of the program (its semantics) is wrong. Identifying semantic errors can be tricky because it requires you to work backward by looking at the output of the program and trying to figure out what it is doing. One of the most important skills you will acquire is debugging. Although it can be frustrating, debugging is one of the most intellectually rich, challenging, and interesting parts of programming. In some ways, debugging is like detective work. You are confronted with clues, and you have to infer the processes and events that led to the results you see. Debugging is also like an experimental science. Once you have an idea what is going wrong, you modify your program and try again. If your hypothesis was correct, then you can predict the result of the modification, and you take a step closer to a working program. If your hypothesis was wrong, you have to come up with a new one. As <NAME> pointed out, When you have eliminated the impossible, whatever remains, however improbable, must be the truth. (<NAME> Doyle, The Sign of Four) For some people, programming and debugging are the same thing. That is, programming is the process of gradually debugging a program until it does what you want. The idea is that you should start with a program that does something and make small modifications, debugging them as you go, so that you always have a working program. For example, Linux is an operating system kernel that contains millions of lines of code, but it started out as a simple program <NAME> used to explore the Intel 80386 chip. According to <NAME>, one of Linus’s earlier projects was a program that would switch between displaying AAAA and BBBB. This later evolved to Linux (The Linux Users’ Guide Beta Version 1). Later chapters will make more suggestions about debugging and other programming practices. Natural languages are the languages that people speak, such as English, Spanish, and French. They were not designed by people (although people try to impose some order on them); they evolved naturally. Formal languages are languages that are designed by people for specific applications. For example, the notation that mathematicians use is a formal language that is particularly good at denoting relationships among numbers and symbols. Chemists use a formal language to represent the chemical structure of molecules. 
And most importantly: Programming languages are formal languages that have been designed to express computations. Formal languages tend to have strict rules about syntax. For example, `3+3=6` is a syntactically correct mathematical statement, but `3=+6$` is not. `H2O` is a syntactically correct chemical name, but `2Zz` is not. Syntax rules come in two flavors, pertaining to tokens and structure. Tokens are the basic elements of the language, such as words, numbers, parentheses, commas, and so on. In Python, a statement like ``` print("Happy New Year for ",2013) ``` has 6 tokens: a function name, an open parenthesis (round bracket), a string, a comma, a number, and a close parenthesis. It is possible to make errors in the way one constructs tokens. One of the problems with `3=+6$` is that `$` is not a legal token in mathematics (at least as far as we know). Similarly, `2Zz` is not a legal token in chemistry notation because there is no element with the abbreviation `Zz` . The second type of syntax rule pertains to the structure of a statement— that is, the way the tokens are arranged. The statement 3=+6$ is structurally illegal because you can’t place a plus sign immediately after an equal sign. Similarly, molecular formulas have to have subscripts after the element name, not before. And in our Python example, if we omitted the comma, or if we changed the two parentheses around to say ``` print)"Happy New Year for ",2013( ``` our statement would still have six legal and valid tokens, but the structure is illegal. When you read a sentence in English or a statement in a formal language, you have to figure out what the structure of the sentence is (although in a natural language you do this subconsciously). This process is called parsing. For example, when you hear the sentence, “The other shoe fell”, you understand that the other shoe is the subject and fell is the verb. Once you have parsed a sentence, you can figure out what it means, or the semantics of the sentence. Assuming that you know what a shoe is and what it means to fall, you will understand the general implication of this sentence. Although formal and natural languages have many features in common — tokens, structure, syntax, and semantics — there are many differences: ambiguity redundancy literalness People who grow up speaking a natural language—everyone—often have a hard time adjusting to formal languages. In some ways, the difference between formal and natural language is like the difference between poetry and prose, but more so: poetry prose Here are some suggestions for reading programs (and other formal languages). First, remember that formal languages are much more dense than natural languages, so it takes longer to read them. Also, the structure is very important, so it is usually not a good idea to read from top to bottom, left to right. Instead, learn to parse the program in your head, identifying the tokens and interpreting the structure. Finally, the details matter. Little things like spelling errors and bad punctuation, which you can get away with in natural languages, can make a big difference in a formal language. Traditionally, the first program written in a new language is called Hello, World! because all it does is display the words, Hello, World! In Python, the script looks like this: (For scripts, we’ll show line numbers to the left of the Python statements.) ``` print("Hello, World!") ``` This is an example of using the print function, which doesn’t actually print anything on paper. 
It displays a value on the screen. In this case, the result shown is `Hello, World!` The quotation marks in the program mark the beginning and end of the value; they don’t appear in the result. Some people judge the quality of a programming language by the simplicity of the Hello, World! program. By this standard, Python does about as well as possible. As programs get bigger and more complicated, they get more difficult to read. Formal languages are dense, and it is often difficult to look at a piece of code and figure out what it is doing, or why. For this reason, it is a good idea to add notes to your programs to explain in natural language what the program is doing. A comment in a computer program is text that is intended only for the human reader — it is completely ignored by the interpreter. In Python, the `#` token starts a comment. The rest of the line is ignored. Here is a new version of Hello, World!. ``` #--------------------------------------------------- # This demo program shows off how elegant Python is! # Written by <NAME>, December 2010. # Anyone may freely copy or modify this program. #--------------------------------------------------- print("Hello, World!") # Isn't this easy! ``` You’ll also notice that we’ve left a blank line in the program. Blank lines are also ignored by the interpreter, but comments and blank lines can make your programs much easier for humans to parse. Use them liberally! A set of specific steps for solving a category of problems. bug An error in a program. comment Information in a program that is meant for other programmers (or anyone reading the source code) and has no effect on the execution of the program. debugging The process of finding and removing any of the three kinds of programming errors. exception Another name for a runtime error. formal language Any one of the languages that people have designed for specific purposes, such as representing mathematical ideas or computer programs; all programming languages are formal languages. high-level language A programming language like Python that is designed to be easy for humans to read and write. immediate mode A style of using Python where we type expressions at the command prompt, and the results are shown immediately. Contrast with script, and see the entry under Python shell. interpreter The engine that executes your Python scripts or expressions. low-level language A programming language that is designed to be easy for a computer to execute; also called machine language or assembly language. natural language Any one of the languages that people speak that evolved naturally. object code The output of the compiler after it translates the program. parse To examine a program and analyze the syntactic structure. portability A property of a program that can run on more than one kind of computer. print function A function used in a program or script that causes the Python interpreter to display a value on its output device. problem solving The process of formulating a problem, finding a solution, and expressing the solution. a sequence of instructions that specifies to a computer actions and computations to be performed. Python shell An interactive user interface to the Python interpreter. The user of a Python shell types commands at the prompt (>>>), and presses the return key to send these commands immediately to the interpreter for processing. The word shell comes from Unix. 
In the PyScripter used in this RLE version of the book, the Interpreter Window is where we’d do the immediate mode interaction. runtime error An error that does not occur until the program has started to execute but that prevents the program from continuing. script A program stored in a file (usually one that will be interpreted). semantic error An error in a program that makes it do something other than what the programmer intended. semantics The meaning of a program. source code A program in a high-level language before being compiled. syntax The structure of a program. syntax error An error in a program that makes it impossible to parse — and therefore impossible to interpret. token One of the basic elements of the syntactic structure of a program, analogous to a word in a natural language. Write an English sentence with understandable semantics but incorrect syntax. Write another English sentence which has correct syntax but has semantic errors. Using the Python interpreter, type `1 + 2` and then hit return. Python evaluates this expression, displays the result, and then shows another prompt. `*` is the multiplication operator, and `**` is the exponentiation operator. Experiment by entering different expressions and recording what is displayed by the Python interpreter. Type `1 2` and then hit return. Python tries to evaluate the expression, but it can’t because the expression is not syntactically legal. Instead, it shows the error message: ``` File "<interactive input>", line 1 1 2 ^ SyntaxError: invalid syntax ``` In many cases, Python indicates where the syntax error occurred, but it is not always right, and it doesn’t give you much information about what is wrong. So, for the most part, the burden is on you to learn the syntax rules. In this case, Python is complaining because there is no operator between the numbers. See if you can find a few more examples of things that will produce error messages when you enter them at the Python prompt. Write down what you enter at the prompt and the last line of the error message that Python reports back to you. Type `print("hello")` . Python executes this, which has the effect of printing the letters h-e-l-l-o. Notice that the quotation marks that you used to enclose the string are not part of the output. Now type `"hello"` and describe your result. Make notes of when you see the quotation marks and when you don’t. Type cheese without the quotation marks. The output will look something like this: ``` Traceback (most recent call last): File "<interactive input>", line 1, in ? NameError: name 'cheese' is not defined ``` This is a run-time error; specifically, it is a NameError, and even more specifically, it is an error because the name cheese is not defined. If you don’t know what that means yet, you will soon. Type 6 + 4 * 9 at the Python prompt and hit enter. Record what happens. Now create a Python script with the following contents: `6 + 4 * 9` What happens when you run this script? Now change the script contents to: `print(6 + 4 * 9)` and run it again. What happened this time? Whenever an expression is typed at the Python prompt, it is evaluated and the result is automatically shown on the line below. (Like on your calculator, if you type this expression you’ll get the result 42.) A script is different, however. Evaluations of expressions are not automatically displayed, so it is necessary to use the print function to make the answer show up. It is hardly ever necessary to use the print function in immediate mode at the command prompt. 
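As a minimal sketch of the difference, typing the expression at the prompt displays the result automatically:

```
>>> 6 + 4 * 9
42
```

whereas the same expression saved in a script displays nothing unless it is printed:

```
6 + 4 * 9            # evaluated, but the result is not displayed
print(6 + 4 * 9)     # displays 42
```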
# Chapter 2: Variables, expressions and statements (Watch a video based on this chapter here on YouTube.) A value is one of the fundamental things — like a letter or a number — that a program manipulates. The values we have seen so far are `4` (the result when we added `2 + 2` ), and `"Hello, World!"` . These values are classified into different classes, or data types: `4` is an integer, and `"Hello, World!"` is a string, so-called because it contains a string of letters. You (and the interpreter) can identify strings because they are enclosed in quotation marks. If you are not sure what class a value falls into, Python has a function called type which can tell you. ``` >>> type("Hello, World!") <class 'str'> >>> type(17) <class 'int'> ``` Not surprisingly, strings belong to the class str and integers belong to the class int. Less obviously, numbers with a decimal point belong to a class called float, because these numbers are represented in a format called floating-point. At this stage, you can treat the words class and type interchangeably. We’ll come back to a deeper understanding of what a class is in later chapters. ``` >>> type(3.2) <class 'float'> ``` What about values like `"17"` and `"3.2"` ? They look like numbers, but they are in quotation marks like strings. ``` >>> type("17") <class 'str'> >>> type("3.2") <class 'str'> ``` They’re strings! Strings in Python can be enclosed in either single quotes (’) or double quotes (“), or three of each (’’’ or”““) ``` >>> type('This is a string.') <class 'str'> >>> type("And so is this.") <class 'str'> >>> type("""and this.""") <class 'str'> >>> type('''and even this...''') <class 'str'> ``` Double quoted strings can contain single quotes inside them, as in `"Bruce's beard"` , and single quoted strings can have double quotes inside them, as in ``` 'The knights who say "Ni!"' ``` . Strings enclosed with three occurrences of either quote symbol are called triple quoted strings. They can contain either single or double quotes: ``` >>> print('''"Oh no", she exclaimed, "Ben's bike is broken!"''') "Oh no", she exclaimed, "Ben's bike is broken!" >>> ``` Triple quoted strings can even span multiple lines: ``` >>> message = """This message will ... span several ... lines.""" >>> print(message) This message will span several lines. >>> ``` Python doesn’t care whether you use single or double quotes or the three-of-a-kind quotes to surround your strings: once it has parsed the text of your program or command, the way it stores the value is identical in all cases, and the surrounding quotes are not part of the value. But when the interpreter wants to display a string, it has to decide which quotes to use to make it look like a string. ``` >>> 'This is a string.' 'This is a string.' >>> """And so is this.""" 'And so is this.' ``` So the Python language designers usually chose to surround their strings by single quotes. What do you think would happen if the string already contained single quotes? When you type a large integer, you might be tempted to use commas between groups of three digits, as in `42,000` . This is not a legal integer in Python, but it does mean something else, which is legal: ``` >>> 42000 42000 >>> 42,000 (42, 0) ``` Well, that’s not what we expected at all! Because of the comma, Python chose to treat this as a pair of values. We’ll come back to learn about pairs later. But, for the moment, remember not to put commas or spaces in your integers, no matter how big they are. 
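A short interpreter session makes the point; the underscore form on the last lines is an aside added here (it is not used in this book, but has been legal since Python 3.6) for writing large literals readably:

```
>>> x = 42,000          # a pair of values (42, 0), not forty-two thousand
>>> type(x)
<class 'tuple'>
>>> 42_000              # underscores may be used as digit separators
42000
```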
Also revisit what we said in the previous chapter: formal languages are strict, the notation is concise, and even the smallest change might mean something quite different from what you intended. One of the most powerful features of a programming language is the ability to manipulate variables. A variable is a name that refers to a value. The assignment statement gives a value to a variable: This example makes three assignments. The first assigns the string value `"What's up, Doc?"` to a variable named message. The second gives the integer `17` to `n` , and the third assigns the floating-point number `3.14159` to a variable called `pi` . The assignment token, `=` , should not be confused with equals, which uses the token `==` . The assignment statement binds a name, on the left-hand side of the operator, to a value, on the right-hand side. This is why you will get an error if you enter: ``` >>> 17 = n File "<interactive input>", line 1 SyntaxError: can't assign to literal ``` Tip: When reading or writing code, say to yourself “n is assigned 17” or “n gets the value 17”. Don’t say “n equals 17”. A common way to represent variables on paper is to write the name with an arrow pointing to the variable’s value. This kind of figure is called a state snapshot because it shows what state each of the variables is in at a particular instant in time. (Think of it as the variable’s state of mind). This diagram shows the result of executing the assignment statements: If you ask the interpreter to evaluate a variable, it will produce the value that is currently linked to the variable: We use variables in a program to “remember” things, perhaps the current score at the football game. But variables are variable. This means they can change over time, just like the scoreboard at a football game. You can assign a value to a variable, and later assign a different value to the same variable. (This is different from maths. In maths, if you give `x` the value `3` , it cannot change to link to a different value half-way through your calculations!) ``` >>> day = "Thursday" >>> day 'Thursday' >>> day = "Friday" >>> day 'Friday' >>> day = 21 >>> day 21 ``` You’ll notice we changed the value of `day` three times, and on the third assignment we even made it refer to a value that was of a different type. A great deal of programming is about having the computer remember things, e.g. The number of missed calls on your phone, and then arranging to update or change the variable when you miss another call. Variable names can be arbitrarily long. They can contain both letters and digits, but they have to begin with a letter or an underscore. Although it is legal to use uppercase letters, by convention we don’t. If you do, remember that case matters. `Bruce` and `bruce` are different variables. The underscore character ( _) can appear in a name. It is often used in names with multiple words, such as `my_name` or ``` price_of_tea_in_china ``` . There are some situations in which names beginning with an underscore have special meaning, so a safe rule for beginners is to start all names with a letter. If you give a variable an illegal name, you get a syntax error: ``` >>> 76trombones = "big parade" SyntaxError: invalid syntax >>> more$ = 1000000 SyntaxError: invalid syntax >>> class = "Computer Science 101" SyntaxError: invalid syntax ``` `76trombones` is illegal because it does not begin with a letter. `more$` is illegal because it contains an illegal character, the dollar sign. But what’s wrong with `class` ? 
It turns out that `class` is one of the Python keywords. Keywords define the language’s syntax rules and structure, and they cannot be used as variable names. Python has thirty-something keywords (and every now and again improvements to Python introduce or eliminate one or two): and | as | assert | break | class | continue | | --- | --- | --- | --- | --- | --- | def | del | elif | else | except | exec | finally | for | from | global | if | import | in | is | lambda | nonlocal | not | or | pass | raise | return | try | while | with | yield | True | False | None | You might want to keep this list handy. If the interpreter complains about one of your variable names and you don’t know why, see if it is on this list. Programmers generally choose names for their variables that are meaningful to the human readers of the program — they help the programmer document, or remember, what the variable is used for. Caution Beginners sometimes confuse “meaningful to the human readers” with “meaningful to the computer”. So they’ll wrongly think that because they’ve called some variable `average` or `pi` , it will somehow magically calculate an average, or magically know that the variable `pi` should have a value like `3.14159` . No! The computer doesn’t understand what you intend the variable to mean. So you’ll find some instructors who deliberately don’t choose meaningful names when they teach beginners — not because we don’t think it is a good habit, but because we’re trying to reinforce the message that you — the programmer — must write the program code to calculate the average, and you must write an assignment statement to give the variable `pi` the value you want it to have. A statement is an instruction that the Python interpreter can execute. We have only seen the assignment statement so far. Some other kinds of statements that we’ll see shortly are `while` statements, `for` statements, `if` statements, and `import` statements. (There are other kinds too!) When you type a statement on the command line, Python executes it. Statements don’t produce any result. An expression is a combination of values, variables, operators, and calls to functions. If you type an expression at the Python prompt, the interpreter evaluates it and displays the result: ``` >>> 1 + 1 2 >>> len("hello") 5 ``` In this example `len` is a built-in Python function that returns the number of characters in a `string` . We’ve previously seen the `type` functions, so this is our third example of a function! The evaluation of an expression produces a value, which is why expressions can appear on the right hand side of assignment statements. A value all by itself is a simple expression, and so is a variable. ``` >>> 17 17 >>> y = 3.14 >>> x = len("hello") >>> x 5 >>> y 3.14 ``` Operators are special tokens that represent computations like addition, multiplication and division. The values the operator uses are called operands. The following are all legal Python expressions whose meaning is more or less clear: ``` 20+32 hour-1 hour*60+minute minute/60 5**2 (5+9)*(15-7) ``` The tokens `+` , `-` , and `*` , and the use of parenthesis for grouping, mean in Python what they mean in mathematics. The asterisk ( `*` ) is the token for multiplication, and `**` is the token for exponentiation. ``` >>> 2 ** 3 8 >>> 3 ** 2 9 ``` When a variable name appears in the place of an operand, it is replaced with its value before the operation is performed. Addition, subtraction, multiplication, and exponentiation all do what you expect. 
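A quick sketch of that substitution at the prompt (the variable names here are arbitrary):

```
>>> hour = 14
>>> minute = 30
>>> hour * 60 + minute      # hour and minute are replaced by 14 and 30
870
```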
Example: so let us convert 645 minutes into hours: ``` >>> minutes = 645 >>> hours = minutes / 60 >>> hours 10.75 ``` Oops! In Python 3, the division operator `/` always yields a floating point result. What we might have wanted to know was how many whole hours there are, and how many minutes remain. Python gives us two different flavors of the division operator. The second, called floor division uses the token `//` . Its result is always a whole number — and if it has to adjust the number it always moves it to the left on the number line. So `6 // 4` yields `1` , but `-6 // 4` might surprise you! ``` >>> 7 / 4 1.75 >>> 7 // 4 1 >>> minutes = 645 >>> hours = minutes // 60 >>> hours 10 ``` Take care that you choose the correct flavor of the division operator. If you’re working with expressions where you need floating point values, use the division operator that does the division accurately. Here we’ll look at three more Python functions, `int` , `float` and `str` , which will (attempt to) convert their arguments into types `int` , `float` and `str` respectively. We call these type converter functions. The int function can take a floating point number or a string, and turn it into an int. For floating point numbers, it discards the decimal portion of the number — a process we call truncation towards zero on the number line. Let us see this in action: ``` >>> int(3.14) 3 >>> int(3.9999) # This doesn't round to the closest int! 3 >>> int(3.0) 3 >>> int(-3.999) # Note that the result is closer to zero -3 >>> int(minutes / 60) 10 >>> int("2345") # Parse a string to produce an int 2345 >>> int(17) # It even works if arg is already an int 17 >>> int("23 bottles") ``` This last case doesn’t look like a number — what do we expect? ``` Traceback (most recent call last): File "<interactive input>", line 1, in <module> ValueError: invalid literal for int() with base 10: '23 bottles' ``` The type converter `float` can turn an integer, a float, or a syntactically legal string into a float: ``` >>> float(17) 17.0 >>> float("123.45") 123.45 ``` The type converter `str` turns its argument into a string: ``` >>> str(17) '17' >>> str(123.45) '123.45' ``` When more than one operator appears in an expression, the order of evaluation depends on the rules of precedence. Python follows the same precedence rules for its mathematical operators that mathematics does. The acronym PEMDAS is a useful way to remember the order of operations: Parentheses have the highest precedence and can be used to force an expression to evaluate in the order you want. Since expressions in parentheses are evaluated first, `2 * (3-1)` is `4` , and `(1+1)**(5-2)` is `8` . You can also use parentheses to make an expression easier to read, as in `(minute * 100) / 60` , even though it doesn’t change the result. Exponentiation has the next highest precedence, so `2**1+1` is `3` and not `4` , and `3*1**3` is `3` and not `27` . Multiplication and both Division operators have the same precedence, which is higher than Addition and Subtraction, which also have the same precedence. So `2*3-1` yields `5` rather than `4` , and `5-2*2` is `1` , not `6` . Operators with the same precedence are evaluated from left-to-right. In algebra we say they are left-associative. So in the expression `6-3+2` , the subtraction happens first, yielding `3` . We then add `2` to get the result `5` . If the operations had been evaluated from right to left, the result would have been `6-(3+2)` , which is `1` . 
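You can check this at the prompt:

```
>>> 6 - 3 + 2        # left-associative: evaluated as (6 - 3) + 2
5
>>> 6 - (3 + 2)      # parentheses force the other grouping
1
```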
(The acronym PEDMAS could mislead you to thinking that division has higher precedence than multiplication, and addition is done ahead of subtraction - don’t be misled. Subtraction and addition are at the same precedence, and the left-to-right rule applies.) Due to some historical quirk, an exception to the left-to-right left-associative rule is the exponentiation operator `**` , so a useful hint is to always use parentheses to force exactly the order you want when exponentiation is involved: ``` >>> 2 ** 3 ** 2 # The right-most ** operator gets done first! 512 >>> (2 ** 3) ** 2 # Use parentheses to force the order you want! 64 ``` The immediate mode command prompt of Python is great for exploring and experimenting with expressions like this. In general, you cannot perform mathematical operations on strings, even if the strings look like numbers. The following are illegal (assuming that message has type string): ``` >>> message - 1 # Error >>> "Hello" / 123 # Error >>> message * "Hello" # Error >>> "15" + 2 # Error ``` Interestingly, the `+` operator does work with strings, but for strings, the `+` operator represents concatenation, not addition. Concatenation means joining the two operands by linking them end-to-end. For example: ``` fruit = "banana" baked_good = " nut bread" print(fruit + baked_good) ``` The output of this program is banana nut bread. The space before the word nut is part of the string, and is necessary to produce the space between the concatenated strings. The `*` operator also works on strings; it performs repetition. For example, `'Fun'*3` is `'FunFunFun'` . One of the operands has to be a string; the other has to be an integer. On one hand, this interpretation of `+` and `*` makes sense by analogy with addition and multiplication. Just as `4*3` is equivalent to `4+4+4` , we expect `"Fun"*3` to be the same as `"Fun"+"Fun"+"Fun"` , and it is. On the other hand, there is a significant way in which string concatenation and repetition are different from integer addition and multiplication. Can you think of a property that addition and multiplication have that string concatenation and repetition do not? There is a built-in function in Python for getting input from the user: ``` n = input("Please enter your name: ") ``` A sample run of this script in Replit would populate your input question in the console to the left like this: The user of the program can enter the name and press enter, and when this happens the text that has been entered is returned from the input function, and in this case assigned to the variable n. Even if you asked the user to enter their age, you would get back a string like `"17"` . It would be your job, as the programmer, to convert that string into a int or a float, using the `int` or `float` converter functions we saw earlier. So far, we have looked at the elements of a program — variables, expressions, statements, and function calls — in isolation, without talking about how to combine them. One of the most useful features of programming languages is their ability to take small building blocks and compose them into larger chunks. For example, we know how to get the user to enter some input, we know how to convert the string we get into a float, we know how to write a complex expression, and we know how to print values. 
Let’s put these together in a small four-step program that asks the user to input a value for the radius of a circle, and then computes the area of the circle from the formula Firstly, we’ll do the four steps one at a time: ``` response = input("What is your radius? ") r = float(response) area = 3.14159 * r**2 print("The area is ", area) ``` Now let’s compose the first two lines into a single line of code, and compose the second two lines into another line of code. ``` r = float( input("What is your radius? ") ) print("The area is ", 3.14159 * r**2) ``` If we really wanted to be tricky, we could write it all in one statement: ``` print("The area is ", 3.14159*float(input("What is your radius?"))**2) ``` Such compact code may not be most understandable for humans, but it does illustrate how we can compose bigger chunks from our building blocks. If you’re ever in doubt about whether to compose code or fragment it into smaller steps, try to make it as simple as you can for the human to follow. My choice would be the first case above, with four separate steps. The modulus operator works on integers (and integer expressions) and gives the remainder when the first number is divided by the second. In Python, the modulus operator is a percent sign ( `%` ). The syntax is the same as for other operators. It has the same precedence as the multiplication operator. ``` >>> q = 7 // 3 # This is integer division operator >>> print(q) 2 >>> r = 7 % 3 >>> print(r) 1 ``` So `7` divided by `3` is `2` with a remainder of `1` . The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another—if `x % y` is zero, then `x` is divisible by `y` . Also, you can extract the right-most digit or digits from a number. For example, `x % 10` yields the right-most digit of `x` (in base `10` ). Similarly `x % 100` yields the last two digits. It is also extremely useful for doing conversions, say from seconds, to hours, minutes and seconds. So let’s write a program to ask the user to enter some seconds, and we’ll convert them into hours, minutes, and remaining seconds. ``` total_secs = int(input("How many seconds, in total?")) hours = total_secs // 3600 secs_still_remaining = total_secs % 3600 minutes = secs_still_remaining // 60 secs_finally_remaining = secs_still_remaining % 60 print("Hrs=", hours, " mins=", minutes, "secs=", secs_finally_remaining) ``` assignment statement A statement that assigns a value to a name (variable). To the left of the assignment operator, `=` , is a name. To the right of the assignment token is an expression which is evaluated by the Python interpreter and then assigned to the name. The difference between the left and right hand sides of the assignment statement is often confusing to new programmers. In the following assignment: `n = n + 1` `n` plays a very different role on each side of the `=` . On the right it is a value and makes up part of the expression which will be evaluated by the Python interpreter before assigning it to the name on the left. assignment token `=` is Python’s assignment token. Do not confuse it with equals, which is an operator for comparing values. composition The ability to combine simple expressions and statements into compound statements and expressions in order to represent complex computations concisely. concatenate To join two strings end-to-end. data type A set of values. The type of a value determines how it can be used in expressions. 
So far, the types you have seen are integers ( `int` ), floating-point numbers ( `float` ), and strings ( `str` ). evaluate To simplify an expression by performing the operations in order to yield a single value. expression A combination of variables, operators, and values that represents a single result value. float A Python data type which stores floating-point numbers. Floating-point numbers are stored internally in two parts: a base and an exponent. When printed in the standard format, they look like decimal numbers. Beware of rounding errors when you use `floats` , and remember that they are only approximate values. floor division An operator (denoted by the token `//` ) that divides one number by another and yields an integer, or, if the result is not already an integer, it yields the next smallest integer. int A Python data type that holds positive and negative whole numbers. keyword A reserved word that is used by the compiler to parse programs; you cannot use keywords like `if` , `def` , and `while` as variable names. modulus operator An operator, denoted with a percent sign ( `%` ), that works on integers and yields the remainder when one number is divided by another. operand One of the values on which an operator operates. operator A special symbol that represents a simple computation like addition, multiplication, or string concatenation. rules of precedence The set of rules governing the order in which expressions involving multiple operators and operands are evaluated. state snapshot A graphical representation of a set of variables and the values to which they refer, taken at a particular instant during the program’s execution. statement An instruction that the Python interpreter can execute. So far we have only seen the assignment statement, but we will soon meet the `import` statement and the `for` statement. str A Python data type that holds a string of characters. value A number or string (or other things to be named later) that can be stored in a variable or computed in an expression. variable A name that refers to a value. variable name A name given to a variable. Variable names in Python consist of a sequence of letters ( `a..z` , `A..Z` , and `_` ) and digits (0..9) that begins with a letter. In best programming practice, variable names should be chosen so that they describe their use in the program, making the program self-documenting. Take the sentence: All work and no play makes Jack a dull boy. Store each word in a separate variable, then print out the sentence on one line using print. Add parentheses to the expression `6 * 1 - 2` to change its value from `4` to `-6` . Place a comment before a line of code that previously worked, and record what happens when you rerun the program. Start the Python interpreter and enter `bruce + 4` at the prompt. This will give you an error: ``` NameError: name 'bruce' is not defined ``` Assign a value to bruce so that `bruce + 4` evaluates to `10` . The formula for computing the final amount if one is earning compound interest is given on Wikipedia as A = P(1 + r/n)^(nt). Write a Python program that assigns the principal amount of $10000 to variable `P` , assigns the value `12` to `n` , and assigns the interest rate of 8% to `r` . Then have the program prompt the user for the number of years `t` that the money will be compounded for. Calculate and print the final amount after `t` years.
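If you want to check your own attempt at that last exercise, here is one possible sketch (certainly not the only way to write it) that follows the formula above:

```
P = 10000                  # principal amount
n = 12                     # compounded 12 times per year
r = 0.08                   # 8% annual interest rate

t = int(input("How many years will the money be compounded for? "))
final = P * (1 + r/n) ** (n * t)
print("The final amount after", t, "years is", final)
```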
Evaluate the following numerical expressions in your head, then use the Python interpreter to check your results: ``` >>> 5 % 2 >>> 9 % 5 >>> 15 % 12 >>> 12 % 15 >>> 6 % 6 >>> 0 % 7 >>> 7 % 0 ``` What happened with the last example? Why? If you were able to correctly anticipate the computer’s response in all but the last one, it is time to move on. If not, take time now to make up examples of your own. Explore the modulus operator until you are confident you understand how it works. You look at the clock and it is exactly 2pm. You set an alarm to go off in 51 hours. At what time does the alarm go off? (Hint: you could count on your fingers, but this is not what we’re after. If you are tempted to count on your fingers, change the 51 to 5100.) Write a Python program to solve the general version of the above problem. Ask the user for the time now (in hours), and ask for the number of hours to wait. Your program should output what the time will be on the clock when the alarm goes off. # Chapter 3: Hello, little turtles! There are many modules in Python that provide very powerful features that we can use in our own programs. Some of these can send email, or fetch web pages. The one we’ll look at in this chapter allows us to create turtles and get them to draw shapes and patterns. The turtles are fun, but the real purpose of the chapter is to teach ourselves a little more Python, and to develop our theme of computational thinking, or thinking like a computer scientist. Most of the Python covered here will be explored in more depth later. Let’s write a couple of lines of Python program to create a new turtle and start drawing a rectangle. (We’ll call the variable that refers to our first turtle `alex` , but we can choose another name if we follow the naming rules from the previous chapter). ``` import turtle # Allows us to use turtles wn = turtle.Screen() # Creates a playground for turtles alex = turtle.Turtle() # Create a turtle, assign to alex alex.forward(50) # Tell alex to move forward by 50 units alex.left(90) # Tell alex to turn by 90 degrees alex.forward(30) # Complete the second side of a rectangle wn.mainloop() # Wait for user to close window ``` When we run this program, a new window pops up. Here are a couple of things we’ll need to understand about this program. The first line tells Python to load a module named `turtle` . That module brings us two new types that we can use: the `Turtle` type, and the `Screen` type. The dot notation `turtle.Turtle` means “The Turtle type that is defined within the turtle module”. (Remember that Python is case sensitive, so the module name, with a lowercase `t` , is different from the type `Turtle` .) We then create and open what it calls a screen (we would prefer to call it a window), which we assign to variable `wn` . Every window contains a canvas, which is the area inside the window on which we can draw. In line 3 we create a turtle. The variable `alex` is made to refer to this turtle. So these first three lines have set things up; we’re ready to get our turtle to draw on our canvas. In lines 5-7, we instruct the object `alex` to move, and to turn. We do this by invoking, or activating, `alex’s` methods — these are the instructions that all turtles know how to respond to. The last line plays a part too: the `wn` variable refers to the window shown above. When we invoke its `mainloop` method, it enters a state where it waits for events (like keypresses, or mouse movement and clicks). The program will terminate when the user closes the window.
An object can have various methods — things it can do — and it can also have attributes — (sometimes called properties). For example, each turtle has a color attribute. The method invocation `alex.color("red")` will make `alex` red, and drawing will be red too. (Note the word color is spelled the American way!) The color of the turtle, the width of its pen, the position of the turtle within the window, which way it is facing, and so on are all part of its current state. Similarly, the window object has a background color, and some text in the title bar, and a size and position on the screen. These are all part of the state of the window object. Quite a number of methods exist that allow us to modify the turtle and the window objects. We’ll just show a couple. In this program we’ve only commented those lines that are different from the previous example (and we’ve used a different variable name for this turtle): ``` import turtle wn = turtle.Screen() wn.bgcolor("lightgreen") # Set the window background color wn.title("Hello, Tess!") # Set the window title tess = turtle.Turtle() tess.color("blue") # Tell tess to change her color tess.pensize(3) # Tell tess to set her pen width tess.forward(50) tess.left(120) tess.forward(50) wn.mainloop() ``` When we run this program, this new window pops up, and will remain on the screen until we close it. Extend this program … Modify this program so that before it creates the window, it prompts the user to enter the desired background color. It should store the user’s responses in a variable, and modify the color of the window according to the user’s wishes. (Hint: you can find a list of permitted color names at http://www.tcl.tk/man/tcl8.4/TkCmd/colors.htm. It includes some quite unusual ones, like “peach puff” and “HotPink”.) Do similar changes to allow the user, at runtime, to set tess’ color. Do the same for the width of `tess’` pen. Hint: your dialog with the user will return a string, but tess’ `pensize` method expects its argument to be an int. So you’ll need to convert the string to an int before you pass it to `pensize` . Just like we can have many different integers in a program, we can have many turtles. Each of them is called an instance. Each instance has its own attributes and methods — so alex might draw with a thin black pen and be at some position, while `tess` might be going in her own direction with a fat pink pen. ``` import turtle wn = turtle.Screen() # Set up the window and its attributes wn.bgcolor("lightgreen") wn.title("Tess & Alex") tess = turtle.Turtle() # Create tess and set some attributes tess.color("hotpink") tess.pensize(5) alex = turtle.Turtle() # Create alex tess.forward(80) # Make tess draw equilateral triangle tess.left(120) tess.forward(80) tess.left(120) tess.forward(80) tess.left(120) # Complete the triangle tess.right(180) # Turn tess around tess.forward(80) # Move her away from the origin alex.forward(50) # Make alex draw a square alex.left(90) alex.forward(50) alex.left(90) alex.forward(50) alex.left(90) alex.forward(50) alex.left(90) wn.mainloop() ``` Here is what happens when `alex` completes his rectangle, and `tess` completes her triangle. Here are some How to think like a computer scientist observations: There are 360 degrees in a full circle. If we add up all the turns that a turtle makes, no matter what steps occurred between the turns, we can easily figure out if they add up to some multiple of 360. This should convince us that alex is facing in exactly the same direction as he was when he was first created. (Geometry conventions have 0 degrees facing East, and that is the case here too!)
We could have left out the last turn for alex, but that would not have been as satisfying. If we’re asked to draw a closed shape like a square or a rectangle, it is a good idea to complete all the turns and to leave the turtle back where it started, facing the same direction as it started in. This makes reasoning about the program and composing chunks of code into bigger programs easier for us humans! We did the same with tess: she drew her triangle, and turned through a full 360 degrees. Then we turned her around and moved her aside. Even the blank line 18 is a hint about how the programmer’s mental chunking is working: in big terms, tess’ movements were chunked as “draw the triangle” (lines 12-17) and then “move away from the origin” (lines 19 and 20). One of the key uses for comments is to record our mental chunking, and big ideas. They’re not always explicit in the code. And, uh-huh, two turtles may not be enough for a herd. But the important idea is that the turtle module gives us a kind of factory that lets us create as many turtles as we need. Each instance has its own state and behaviour. When we drew the square, it was quite tedious. We had to explicitly repeat the steps of moving and turning four times. If we were drawing a hexagon, or an octagon, or a polygon with 42 sides, it would have been worse. So a basic building block of all programs is to be able to repeat some code, over and over again. Python’s `for` loop solves this for us. Let’s say we have some friends, and we’d like to send them each an email inviting them to our party. We don’t quite know how to send email yet, so for the moment we’ll just print a message for each friend: ``` for f in ["Joe","Zoe","Brad","Angelina","Zuki","Thandi","Paris"]: invite = "Hi " + f + ". Please come to my party on Saturday!" print(invite) # more code can follow here ... ``` When we run this, the output looks like this: ``` Hi Joe. Please come to my party on Saturday! Hi Zoe. Please come to my party on Saturday! Hi Brad. Please come to my party on Saturday! Hi Angelina. Please come to my party on Saturday! Hi Zuki. Please come to my party on Saturday! Hi Thandi. Please come to my party on Saturday! Hi Paris. Please come to my party on Saturday! ``` The variable `f` in the for statement at line `1` is called the loop variable. We could have chosen any other variable name instead. Lines `2` and `3` are the loop body. The loop body is always indented. The indentation determines exactly what statements are “in the body of the loop”. On each iteration or pass of the loop, first a check is done to see if there are still more items to be processed. If there are none left (this is called the terminating condition of the loop), the loop has finished. Program execution continues at the next statement after the loop body (e.g. in this case the next statement below the comment in line `4` ). If there are items still to be processed, the loop variable is updated to refer to the next item in the list. This means, in this case, that the loop body is executed here `7` times, and each time `f` will refer to a different friend. At the end of each execution of the body of the loop, Python returns to the for statement, to see if there are more items to be handled, and to assign the next one to `f` . As a program executes, the interpreter always keeps track of which statement is about to be executed. We call this the control flow, or the flow of execution, of the program.
When humans execute programs, they often use their finger to point to each statement in turn. So we could think of control flow as “Python’s moving finger”. Control flow until now has been strictly top to bottom, one statement at a time. The for loop changes this. Flowchart of a for loop Control flow is often easy to visualize and understand if we draw a flowchart. This shows the exact steps and logic of how the for statement executes. To draw a square we’d like to do the same thing four times — move the turtle, and turn. We previously used 8 lines to have alex draw the four sides of a square. This does exactly the same, but using just three lines: ``` for i in [0,1,2,3]: alex.forward(50) alex.left(90) ``` Some observations: While “saving some lines of code” might be convenient, it is not the big deal here. What is much more important is that we’ve found a “repeating pattern” of statements, and reorganized our program to repeat the pattern. Finding the chunks and somehow getting our programs arranged around those chunks is a vital skill in computational thinking. The values `[0,1,2,3]` were provided to make the loop body execute 4 times. We could have used any four values, but these are the conventional ones to use. In fact, they are so popular that Python gives us special built-in range objects: ``` for i in range(4): # Executes the body with i = 0, then 1, then 2, then 3 for x in range(10): # Sets x to each of ... [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` Computer scientists like to count from `0` ! `range` can deliver a sequence of values to the loop variable in the `for` loop. They start at `0` , and in these cases do not include the `4` or the `10` . Our little trick earlier to make sure that alex did the final turn to complete 360 degrees has paid off: if we had not done that, then we would not have been able to use a loop for the fourth side of the square. It would have become a “special case”, different from the other sides. When possible, we’d much prefer to make our code fit a general pattern, rather than have to create a special case. So to repeat something four times, a good Python programmer would do this: ``` for i in range(4): alex.forward(50) alex.left(90) ``` By now you should be able to see how to change our previous program so that tess can also use a `for` loop to draw her equilateral triangle. But now, what would happen if we made this change? ``` for c in ["yellow", "red", "purple", "blue"]: alex.color(c) alex.forward(50) alex.left(90) ``` A variable can also be assigned a value that is a list. So lists can also be used in more general situations, not only in the `for` loop. The code above could be rewritten like this: ``` # Assign a list to a variable clrs = ["yellow", "red", "purple", "blue"] for c in clrs: alex.color(c) alex.forward(50) alex.left(90) ``` Turtle methods can use negative angles or distances. So `tess.forward(-100)` will move tess backwards, and `tess.left(-30)` turns her to the right. Additionally, because there are 360 degrees in a circle, turning 30 to the left will get tess facing in the same direction as turning 330 to the right! (The on-screen animation will differ, though — you will be able to tell if tess is turning clockwise or counter-clockwise!) This suggests that we don’t need both a left and a right turn method — we could be minimalists, and just have one method. There is also a backward method. (If you are very nerdy, you might enjoy saying `alex.backward(-100)` to move `alex` forward!) 
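Here is a tiny sketch of those equivalences, using the same setup style as the earlier examples (the angles and distances are arbitrary); the comments note which calls are interchangeable:

```
import turtle

wn = turtle.Screen()
tess = turtle.Turtle()

tess.left(-30)       # turns tess 30 degrees to the right ...
tess.right(30)       # ... exactly like this does
tess.backward(-100)  # moves tess 100 units forward ...
tess.forward(100)    # ... exactly like this does

wn.mainloop()
```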
Part of thinking like a scientist is to understand more of the structure and rich relationships in our field. So revising a few basic facts about geometry and number lines, and spotting the relationships between left, right, backward, forward, negative and positive distances or angles values is a good start if we’re going to play with turtles. A turtle’s pen can be picked up or put down. This allows us to move a turtle to a different place without drawing a line. The methods are ``` alex.penup() alex.forward(100) # This moves alex, but no line is drawn alex.pendown() ``` Every turtle can have its own shape. The ones available “out of the box” are `arrow` , `blank` , `circle` , `classic` , `square` , `triangle` , `turtle` . `alex.shape("turtle")` We can speed up or slow down the turtle’s animation speed. (Animation controls how quickly the turtle turns and moves forward). Speed settings range from `1` (slowest) to `10` (fastest). But if we set the speed to `0` , it has a special meaning — turn off animation and go as fast as possible. `alex.speed(10)` A turtle can “stamp” its footprint onto the canvas, and this will remain after the turtle has moved somewhere else. Stamping works even when the pen is up. Let’s do an example that shows off some of these new features: ``` import turtle wn = turtle.Screen() wn.bgcolor("lightgreen") tess = turtle.Turtle() tess.shape("turtle") tess.color("blue") tess.penup() # This is new size = 20 for i in range(30): tess.stamp() # Leave an impression on the canvas size = size + 3 # Increase the size on every iteration tess.forward(size) # Move tess along tess.right(24) # ... and turn her ``` Be careful now! How many times was the body of the loop executed? How many turtle images do we see on the screen? All except one of the shapes we see on the screen here are footprints created by stamp. But the program still only has one turtle instance — can you figure out which one here is the real tess? (Hint: if you’re not sure, write a new line of code after the for loop to change tess’ color, or to put her pen down and draw a line, or to change her shape, etc.) attribute Some state or value that belongs to a particular object. For example, tess has a color. canvas A surface within a window where drawing takes place. control flow See flow of execution in the next chapter. for loop A statement in Python for convenient repetition of statements in the body of the loop. loop body Any number of statements nested inside a loop. The nesting is indicated by the fact that the statements are indented under the `for` loop statement. loop variable A variable used as part of a `for` loop. It is assigned a different value on each iteration of the loop. instance An object of a certain type, or class. tess and alex are different instances of the class Turtle. method A function that is attached to an object. Invoking or activating the method causes the object to respond in some way, e.g. forward is the method when we say `tess.forward(100)` . invoke An object has methods. We use the verb invoke to mean activate the method. Invoking a method is done by putting parentheses after the method name, with some possible arguments. So `tess.forward()` is an invocation of the forward method. module A file containing Python definitions and statements intended for use in other Python programs. The contents of a module are made available to the other program by using the import statement. object A “thing” to which a variable can refer.
This could be a screen window, or one of the turtles we have created. range A built-in function in Python for generating sequences of integers. It is especially useful when we need to write a for loop that executes a fixed number of times. terminating condition A condition that occurs which causes a loop to stop repeating its body. In the for loops we saw in this chapter, the terminating condition has been when there are no more elements to assign to the loop variable. Write a program that prints ``` We like Python's turtles! ``` 1000 times. Give three attributes of your cellphone object. Give three methods of your cellphone. Write a program that uses a `for` loop to print ``` One of the months of the year is January One of the months of the year is February ... ``` Suppose our turtle tess is at heading `0` — facing east. We execute the statement `tess.left(3645)` . What does tess do, and what is her final heading? Assume you have the assignment ``` xs = [12, 10, 32, 3, 66, 17, 42, 99, 20] ``` Write a loop that prints each of the numbers on a new line. Write a loop that prints each number and its square on a new line. Write a loop that adds all the numbers from the list into a variable called `total` . You should set the `total` variable to have the value `0` before you start adding them up, and print the value in total after the loop has completed. Print the product of all the numbers in the list. (product means all multiplied together) Use `for` loops to make a turtle draw these regular polygons (regular means all sides the same lengths, all angles the same): A drunk pirate makes a random turn and then takes `100` steps forward, makes another random turn, takes another `100` steps, turns another random amount, etc. A social science student records the angle of each turn before the next 100 steps are taken. Her experimental data is ``` [160, -43, 270, -97, -43, 200, -940, 17, -86] ``` . (Positive angles are counter-clockwise.) Use a turtle to draw the path taken by our drunk friend. Enhance your program above to also tell us what the drunk pirate’s heading is after he has finished stumbling around. (Assume he begins at heading `0` ). If you were going to draw a regular polygon with 18 sides, what angle would you need to turn the turtle at each corner? At the interactive prompt, anticipate what each of the following lines will do, and then record what happens. Score yourself, giving yourself one point for each one you anticipate correctly: ``` >>> import turtle >>> wn = turtle.Screen() >>> tess = turtle.Turtle() >>> tess.right(90) >>> tess.left(3600) >>> tess.right(-90) >>> tess.speed(10) >>> tess.left(3600) >>> tess.speed(0) >>> tess.left(3645) >>> tess.forward(-100) ``` Hints: Try this on a piece of paper, moving and turning your cellphone as if it was a turtle. Watch how many complete rotations your cellphone makes before you complete the star. Since each full rotation is 360 degrees, you can figure out the total number of degrees that your phone was rotated through. If you divide that by 5, because there are five points to the star, you’ll know how many degrees to turn the turtle at each point. You can hide a turtle behind its invisibility cloak if you don’t want it shown. It will still draw its lines if its pen is down. The method is invoked as `tess.hideturtle()` . To make the turtle visible again, use `tess.showturtle()` . Create a turtle, and assign it to a variable. When you ask for its type, what do you get? What is the collective noun for turtles? 
(Hint: they don’t come in herds.) What is the collective noun for pythons? Is a python a viper? Is a python venomous? # Chapter 4: Functions In Python, a function is a named sequence of statements that belong together. Their primary purpose is to help us organize programs into chunks that match how we think about the problem. The syntax for a function definition is: ``` def NAME( PARAMETERS ): STATEMENTS ``` We can make up any names we want for the functions we create, except that we can’t use a name that is a Python keyword, and the names must follow the rules for legal identifiers. There can be any number of statements inside the function, but they have to be indented from the `def` . In the examples in this book, we will use the standard indentation of four spaces. Function definitions are the second of several compound statements we will see, all of which have the same pattern: A header line which begins with a keyword and ends with a colon. A body consisting of one or more Python statements, each indented the same amount — the Python style guide recommends 4 spaces — from the header line. We’ve already seen the `for` loop which follows this pattern. So looking again at the function definition, the keyword in the header is `def` , which is followed by the name of the function and some parameters enclosed in parentheses. The parameter list may be empty, or it may contain any number of parameters separated from one another by commas. In either case, the parentheses are required. The parameters specify what information, if any, we have to provide in order to use the new function. Suppose we’re working with turtles, and a common operation we need is to draw squares. “Draw a square” is an abstraction, or a mental chunk, of a number of smaller steps. So let’s write a function to capture the pattern of this “building block”: ``` import turtle def draw_square(t, sz): """Make turtle t draw a square of sz.""" for i in range(4): t.forward(sz) t.left(90) wn = turtle.Screen() # Set up the window and its attributes wn.bgcolor("lightgreen") wn.title("Alex meets a function") alex = turtle.Turtle() # Create alex draw_square(alex, 50) # Call the function to draw the square wn.mainloop() ``` This function is named `draw_square` . It has two parameters: one to tell the function which turtle to move around, and the other to tell it the size of the square we want drawn. Make sure you know where the body of the function ends — it depends on the indentation, and the blank lines don’t count for this purpose! Docstrings for documentation If the first thing after the function header is a string, it is treated as a docstring and gets special treatment in Python and in some programming tools. For example, when we type a built-in function name with an unclosed parenthesis in Repl.it, a tooltip pops up, telling us what arguments the function takes, and it shows us any other text contained in the docstring. Docstrings are the key way to document our functions in Python and the documentation part is important. Because whoever calls our function shouldn’t need to know what is going on in the function or how it works; they just need to know what arguments our function takes, what it does, and what the expected result is. Enough to be able to use the function without having to look underneath. This goes back to the concept of abstraction, which we’ll talk more about later. Docstrings are usually formed using triple-quoted strings as they allow us to easily expand the docstring later on should we want to write more than a one-liner.
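As a small, hedged illustration (assuming the `draw_square` function above has been defined), the docstring can be retrieved at the interactive prompt; editors and tooltips generally read this same attribute:

```
>>> help(draw_square)
Help on function draw_square in module __main__:

draw_square(t, sz)
    Make turtle t draw a square of sz.

>>> draw_square.__doc__
'Make turtle t draw a square of sz.'
```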
Just to differentiate from comments, a string at the start of a function (a docstring) is retrievable by Python tools at runtime. By contrast, comments are completely eliminated when the program is parsed. Defining a new function does not make the function run. To do that we need a function call. We’ve already seen how to call some built-in functions like `range` and `int` . Function calls contain the name of the function being executed followed by a list of values, called arguments, which are assigned to the parameters in the function definition. So in the second last line of the program, we call the function, and pass alex as the turtle to be manipulated, and 50 as the size of the square we want. While the function is executing, then, the variable `sz` refers to the value `50` , and the variable t refers to the same turtle instance that the variable `alex` refers to. Once we’ve defined a function, we can call it as often as we like, and its statements will be executed each time we call it. And we could use it to get any of our turtles to draw a square. In the next example, we’ve changed the draw_square function a little, and we get tess to draw 15 squares, with some variations. ``` import turtle wn = turtle.Screen() tess = turtle.Turtle() # Create tess def draw_multicolor_square(t, sz): """Make turtle t draw a multi-color square of sz.""" for i in ["red", "purple", "hotpink", "blue"]: t.color(i) t.forward(sz) t.left(90) size = 20 # Size of the smallest square for i in range(15): draw_multicolor_square(tess, size) size = size + 10 # Increase the size for next time tess.forward(10) # Move tess along a little tess.right(18) # and give her some turn ``` Let’s assume now we want a function to draw a rectangle. We need to be able to call the function with different arguments for width and height. And, unlike the case of the square, we cannot repeat the same thing 4 times, because the four sides are not equal. So we eventually come up with this rather nice code that can draw a rectangle. ``` def draw_rectangle(t, w, h): """Get turtle t to draw a rectangle of width w and height h.""" for i in range(2): t.forward(w) t.left(90) t.forward(h) t.left(90) ``` The parameter names are deliberately chosen as single letters to ensure they’re not misunderstood. In real programs, once we’ve had more experience, we will insist on better variable names than this. But the point is that the program doesn’t “understand” that we’re drawing a rectangle, or that the parameters represent the width and the height. Concepts like rectangle, width, and height are the meaning we humans have, not concepts that the program or the computer understands. Thinking like a scientist involves looking for patterns and relationships. In the code above, we’ve done that to some extent. We did not just draw four sides. Instead, we spotted that we could draw the rectangle as two halves, and used a loop to repeat that pattern twice. But now we might spot that a square is a special kind of rectangle. We already have a function that draws a rectangle, so we can use that to draw our square. ``` def draw_square(tx, sz): # A new version of draw_square draw_rectangle(tx, sz, sz) ``` There are some points worth noting here: Functions can call other functions. Rewriting `draw_square` like this captures the relationship that we’ve spotted between squares and rectangles. A caller of this function might say ``` draw_square(tess, 50) ``` . The parameters of this function, `tx` and `sz` , are assigned the values of the tess object, and the int `50` respectively. In the body of the function they are just like any other variable.
When the call is made to `draw_rectangle` , the values in variables `tx` and `sz` are fetched first, then the call happens. So as we enter the top of function `draw_rectangle` , its variable `t` is assigned the `tess` object, and `w` and `h` in that function are both given the value `50` . So far, it may not be clear why it is worth the trouble to create all of these new functions. Actually, there are a lot of reasons, but this example demonstrates two: Creating a new function gives us an opportunity to name a group of statements. Functions can simplify a program by hiding a complex computation behind a single command. The function (including its name) can capture our mental chunking, or abstraction, of the problem. Creating a new function can make a program smaller by eliminating repetitive code. As we might expect, we have to create a function before we can execute it. In other words, the function definition has to be executed before the function is called. In order to ensure that a function is defined before its first use, we have to know the order in which statements are executed, which is called the flow of execution. We’ve already talked about this a little in the previous chapter. Execution always begins at the first statement of the program. Statements are executed one at a time, in order from top to bottom. Function definitions do not alter the flow of execution of the program, but remember that statements inside the function are not executed until the function is called. Although it is not common, we can define one function inside another. In this case, the inner definition isn’t executed until the outer function is called. Function calls are like a detour in the flow of execution. Instead of going to the next statement, the flow jumps to the first line of the called function, executes all the statements there, and then comes back to pick up where it left off. That sounds simple enough, until we remember that one function can call another. While in the middle of one function, the program might have to execute the statements in another function. But while executing that new function, the program might have to execute yet another function! Fortunately, Python is adept at keeping track of where it is, so each time a function completes, the program picks up where it left off in the function that called it. When it gets to the end of the program, it terminates. What’s the moral of this sordid tale? When we read a program, don’t read from top to bottom. Instead, follow the flow of execution. Watch the flow of execution in action Repl.it does not have “single-stepping” functionality. For this we would recommend a different IDE like PyScripter. In PyScripter, we can watch the flow of execution by “single-stepping” through any program. PyScripter will highlight each line of code just before it is about to be executed. PyScripter also lets us hover the mouse over any variable in the program, and it will pop up the current value of that variable. So this makes it easy to inspect the “state snapshot” of the program — the current values that are assigned to the program’s variables. This is a powerful mechanism for building a deep and thorough understanding of what is happening at each step of the way. 
Learn to use the single-stepping feature well, and be mentally proactive: as you work through the code, challenge yourself before each step: “What changes will this line make to any variables in the program?” and “Where will flow of execution go next?” Let us go back and see how this works with the program above that draws 15 multicolor squares. First, we’re going to add one line of magic below the import statement — not strictly necessary, but it will make our lives much simpler, because it prevents stepping into the module containing the turtle code. ``` import turtle __import__("turtle").__traceable__ = False ``` Now we’re ready to begin. Put the mouse cursor on the line of the program where we create the turtle screen, and press the F4 key. This will run the Python program up to, but not including, the line where we have the cursor. Our program will “break” now, and provide a highlight on the next line to be executed. At this point we can press the `F7` key (step into) repeatedly to single step through the code. Observe as we execute lines 10, 11, 12, … how the turtle window gets created, how its canvas color is changed, how the title gets changed, how the turtle is created on the canvas, and then how the flow of execution gets into the loop, and from there into the function, and into the function’s loop, and then repeatedly through the body of that loop. While we do this, we can also hover our mouse over some of the variables in the program, and confirm that their values match our conceptual model of what is happening. After a few loops, when we’re about to execute line 20 and we’re starting to get bored, we can use the key `F8` to “step over” the function we are calling. This executes all the statements in the function, but without having to step through each one. We always have the choice to either “go for the detail”, or to “take the high-level view” and execute the function as a single chunk. There are some other options, including one that allows us to resume execution without further stepping. Find them under the Run menu of PyScripter. Most functions require arguments: the arguments provide for generalization. For example, if we want to find the absolute value of a number, we have to indicate what the number is. Python has a built-in function for computing the absolute value: ``` >>> abs(5) 5 >>> abs(-5) 5 ``` In this example, the arguments to the `abs` function are `5` and `-5` . Some functions take more than one argument. For example the built-in function `pow` takes two arguments, the base and the exponent. Inside the function, the values that are passed get assigned to variables called parameters. ``` >>> pow(2, 3) 8 >>> pow(7, 4) 2401 ``` Another built-in function that takes more than one argument is `max` . ``` >>> max(7, 11) 11 >>> max(4, 1, 17, 2, 12) 17 >>> max(3 * 11, 5**3, 512 - 9, 1024**0) 503 ``` `max` can be passed any number of arguments, separated by commas, and will return the largest value passed. The arguments can be either simple values or expressions. In the last example, `503` is returned, since it is larger than `33` , `125` , and `1` . All the functions in the previous section return values. Furthermore, functions like `range` , `int` , `abs` all return values that can be used to build more complex expressions.
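For example (a quick, illustrative check at the interactive prompt), the values returned by these built-ins can be combined directly into larger expressions:

```
>>> abs(3 - 11) + max(2, 5)    # 8 + 5
13
>>> float(int("7") * 3)        # int("7") returns 7, so this is float(21)
21.0
```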
So an important difference between these functions and one like `draw_square` is that `draw_square` was not executed because we wanted it to compute a value — on the contrary, we wrote `draw_square` because we wanted it to execute a sequence of steps that caused the turtle to draw. A function that returns a value is called a fruitful function in this book. The opposite of a fruitful function is a void function — one that is not executed for its resulting value, but is executed because it does something useful. (Languages like Java, C#, C and C++ use the term “void function”, while other languages like Pascal call it a procedure.) Even though void functions are not executed for their resulting value, Python always wants to return something. So if the programmer doesn’t arrange to return a value, Python will automatically return the value `None` . How do we write our own fruitful function? In the exercises at the end of chapter 2 we saw the standard formula for compound interest, which we’ll now write as a fruitful function: ``` def final_amt(p, r, n, t): """ Apply the compound interest formula to p to produce the final amount. """ a = p * (1 + r/n) ** (n*t) return a # This is new, and makes the function fruitful. # now that we have the function above, let us call it. toInvest = float(input("How much do you want to invest?")) fnl = final_amt(toInvest, 0.08, 12, 5) print("At the end of the period you'll have", fnl) ``` The return statement is followed by an expression ( `a` in this case). This expression will be evaluated and returned to the caller as the “fruit” of calling this function. We prompted the user for the principal amount. The type of `toInvest` is a string, but we need a number before we can work with it. Because it is money, and could have decimal places, we’ve used the `float` type converter function to parse the string and return a float. Notice how we entered the arguments for 8% interest, compounded 12 times per year, for 5 years. When we run this, we get the output ``` At the end of the period you’ll have 14898.457083 ``` This is a bit messy with all these decimal places, but remember that Python doesn’t understand that we’re working with money: it just does the calculation to the best of its ability, without rounding. Later we’ll see how to format the string that is printed in such a way that it does get nicely rounded to two decimal places before printing. The line ``` toInvest = float(input("How much do you want to invest?")) ``` also shows yet another example of composition — we can call a function like `float` , and its arguments can be the results of other function calls (like input) that we’ve called along the way. Notice something else very important here. The name of the variable we pass as an argument — `toInvest` — has nothing to do with the name of the parameter — `p` . It is as if `p = toInvest` is executed when `final_amt` is called. It doesn’t matter what the value was named in the caller, in `final_amt` its name is `p` . These short variable names are getting quite tricky, so perhaps we’d prefer one of these versions instead: ``` def final_amt_v2(principalAmount, nominalPercentageRate, numTimesPerYear, years): a = principalAmount * (1 + nominalPercentageRate / numTimesPerYear) ** (numTimesPerYear*years) return a def final_amt_v3(amt, rate, compounded, years): a = amt * (1 + rate/compounded) ** (compounded*years) return a ``` They all do the same thing. Use your judgement to write code that can be best understood by other humans!
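One way to convince yourself that the three versions really are interchangeable (a quick sanity check, assuming all of them have been defined as above) is to call them with the same arguments and compare the results:

```
>>> final_amt(10000, 0.08, 12, 5) == final_amt_v2(10000, 0.08, 12, 5)
True
>>> final_amt_v2(10000, 0.08, 12, 5) == final_amt_v3(10000, 0.08, 12, 5)
True
```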
Short variable names are more economical and sometimes make code easier to read: E = mc² would not be nearly so memorable if Einstein had used longer variable names! If you do prefer short names, make sure you also have some comments to enlighten the reader about what the variables are used for. When we create a local variable inside a function, it only exists inside the function, and we cannot use it outside. For example, consider again this function: ``` def final_amt(p, r, n, t): a = p * (1 + r/n) ** (n*t) return a ``` If we try to use `a` , outside the function, we’ll get an error: ``` >>> a NameError: name 'a' is not defined ``` The variable `a` is local to `final_amt` , and is not visible outside the function. Additionally, `a` only exists while the function is being executed — we call this its lifetime. When the execution of the function terminates, the local variables are destroyed. Parameters are also local, and act like local variables. For example, the lifetimes of `p` , `r` , `n` , `t` begin when `final_amt` is called, and the lifetime ends when the function completes its execution. So it is not possible for a function to set some local variable to a value, complete its execution, and then when it is called again next time, recover the local variable. Each call of the function creates new local variables, and their lifetimes expire when the function returns to the caller. Now that we have fruitful functions, we can focus our attention on reorganizing our code so that it fits more nicely into our mental chunks. This process of rearrangement is called refactoring the code. Two things we’re always going to want to do when working with turtles are to create the window for the turtle, and to create one or more turtles. We could write some functions to make these tasks easier in future: ``` def make_window(colr, ttle): """ Set up the window with the given background color and title. Returns the new window. """ w = turtle.Screen() w.bgcolor(colr) w.title(ttle) return w def make_turtle(colr, sz): """ Set up a turtle with the given color and pensize. Returns the new turtle. """ t = turtle.Turtle() t.color(colr) t.pensize(sz) return t wn = make_window("lightgreen", "Tess and Alex dancing") tess = make_turtle("hotpink", 5) alex = make_turtle("black", 1) dave = make_turtle("yellow", 2) ``` The trick about refactoring code is to anticipate which things we are likely to want to change each time we call the function: these should become the parameters, or changeable parts, of the functions we write. argument A value provided to a function when the function is called. This value is assigned to the corresponding parameter in the function. The argument can be the result of an expression which may involve operators, operands and calls to other fruitful functions. body The second part of a compound statement. The body consists of a sequence of statements all indented the same amount from the beginning of the header. The standard amount of indentation used within the Python community is 4 spaces. compound statement A statement that consists of two parts: header - which begins with a keyword determining the statement type, and ends with a colon. body - containing one or more statements indented the same amount from the header. The syntax of a compound statement looks like this: ``` keyword ... : statement statement ... ``` docstring A special string that is attached to a function as its `__doc__` attribute. Tools like Repl.it can use docstrings to provide documentation or hints for the programmer.
When we get to modules, classes, and methods, we’ll see that docstrings can also be used there. flow of execution The order in which statements are executed during a program run. frame A box in a stack diagram that represents a function call. It contains the local variables and parameters of the function. function A named sequence of statements that performs some useful operation. Functions may or may not take parameters and may or may not produce a result. function call A statement that executes a function. It consists of the name of the function followed by a list of arguments enclosed in parentheses. function composition Using the output from one function call as the input to another. function definition A statement that creates a new function, specifying its name, parameters, and the statements it executes. fruitful function A function that returns a value when it is called. header line The first part of a compound statement. A header line begins with a keyword and ends with a colon (:). import statement A statement which permits functions and variables defined in another Python module to be brought into the environment of another script. To use the features of the turtle, we need to first import the turtle module. lifetime Variables and objects have lifetimes — they are created at some point during program execution, and will be destroyed at some time. local variable A variable defined inside a function. A local variable can only be used inside its function. Parameters of a function are also a special kind of local variable. parameter A name used inside a function to refer to the value which was passed to it as an argument. refactor A fancy word to describe reorganizing our program code, usually to make it more understandable. Typically, we have a program that is already working, then we go back to “tidy it up”. It often involves choosing better variable names, or spotting repeated patterns and moving that code into a function. stack diagram A graphical representation of a stack of functions, their variables, and the values to which they refer. traceback A list of the functions that are executing, printed when a runtime error occurs. A traceback is also commonly referred to as a stack trace, since it lists the functions in the order in which they are stored in the runtime stack. void function The opposite of a fruitful function: one that does not return a value. It is executed for the work it does, rather than for the value it returns. Write a void function `draw_poly(t, n, sz)` which makes a turtle draw a regular polygon. When called with ``` draw_poly(tess, 8, 50) ``` , it will draw a regular octagon with sides of 50 units. Write a void function ``` draw_equitriangle(t, sz) ``` which calls `draw_poly` from the previous question to have its turtle draw an equilateral triangle. Write a fruitful function `sum_to(n)` that returns the sum of all integer numbers up to and including `n` . So `sum_to(10)` would be `1+2+3…+10` which would return the value `55` . Write a function `area_of_circle(r)` which returns the area of a circle of radius `r` . Write a void function to draw a star, where the length of each side is 100 units. (Hint: You should turn the turtle by 144 degrees at each point.) What would it look like if you didn’t pick up the pen? # Chapter 5: Conditionals Programs get really interesting when we can test conditions and change the program behaviour depending on the outcome of the tests. That’s what this chapter is about. A Boolean value is either true or false.
It is named after the British mathematician George Boole, who first formulated Boolean algebra — some rules for reasoning about and combining these values. This is the basis of all modern computer logic. In Python, the two Boolean values are `True` and `False` (the capitalization must be exactly as shown), and the Python type is `bool` . ``` >>> type(True) <class 'bool'> >>> type(true) Traceback (most recent call last): File "<interactive input>", line 1, in <module> NameError: name 'true' is not defined ``` A Boolean expression is an expression that evaluates to produce a result which is a Boolean value. For example, the operator `==` tests if two values are equal. It produces (or yields) a Boolean value: ``` >>> 5 == (3 + 2) # Is 5 equal to the result of 3 + 2? True >>> 5 == 6 False >>> j = "hel" >>> j + "lo" == "hello" True ``` In the first statement, the two operands evaluate to equal values, so the expression evaluates to `True` ; in the second statement, `5` is not equal to `6` , so we get `False` . The `==` operator is one of six common comparison operators which all produce a bool result; here are all six: ``` x == y # Produce True if ... x is equal to y x != y # ... x is not equal to y x > y # ... x is greater than y x < y # ... x is less than y x >= y # ... x is greater than or equal to y x <= y # ... x is less than or equal to y ``` Although these operations are probably familiar, the Python symbols are different from the mathematical symbols. A common error is to use a single equal sign ( `=` ) instead of a double equal sign ( `==` ). Remember that `=` is an assignment operator and `==` is a comparison operator. Also, there is no such thing as `=<` or `=>` . Like any other types we’ve seen so far, Boolean values can be assigned to variables, printed, etc. ``` >>> age = 18 >>> old_enough_to_get_driving_licence = age >= 17 >>> print(old_enough_to_get_driving_licence) True >>> type(old_enough_to_get_driving_licence) <class 'bool'> ``` There are three logical operators, `and` , `or` , and `not` , that allow us to build more complex Boolean expressions from simpler Boolean expressions. The semantics (meaning) of these operators is similar to their meaning in English. For example, `x > 0 and x < 10` produces `True` only if `x` is greater than `0` and at the same time, `x` is less than `10` . ``` n % 2 == 0 or n % 3 == 0 ``` is `True` if either of the conditions is `True` , that is, if the number `n` is divisible by `2` or it is divisible by `3` . (What do you think happens if `n` is divisible by both `2` and by `3` at the same time? Will the expression yield `True` or `False` ? Try it in your Python interpreter.) Finally, the `not` operator negates a Boolean value, so `not (x > y)` is `True` if `(x > y)` is `False` , that is, if `x` is less than or equal to `y` . The expression on the left of the or operator is evaluated first: if the result is `True` , Python does not (and need not) evaluate the expression on the right — this is called short-circuit evaluation. Similarly, for the `and` operator, if the expression on the left yields `False` , Python does not evaluate the expression on the right. So there are no unnecessary evaluations. A truth table is a small table that allows us to list all the possible inputs, and to give the results for the logical operators. Because the `and` and `or` operators each have two operands, there are only four rows in a truth table that describes the semantics of `and` .
| a | b | a and b |
| --- | --- | --- |
| False | False | False |
| False | True | False |
| True | False | False |
| True | True | True |

In a Truth Table, we sometimes use `T` and `F` as shorthand for the two Boolean values: here is the truth table describing or:

| a | b | a or b |
| --- | --- | --- |
| F | F | F |
| F | T | T |
| T | F | T |
| T | T | T |

The third logical operator, `not` , only takes a single operand, so its truth table only has two rows:

| a | not a |
| --- | --- |
| F | T |
| T | F |

A set of rules for simplifying and rearranging expressions is called an algebra. For example, we are all familiar with school algebra rules, such as: `n * 0 == 0` Here we see a different algebra — the Boolean algebra — which provides rules for working with Boolean values. First, the `and` operator: ``` x and False == False False and x == False y and x == x and y x and True == x True and x == x x and x == x ``` Here are some corresponding rules for the `or` operator: ``` x or False == x False or x == x y or x == x or y x or True == True True or x == True x or x == x ``` Two not operators cancel each other: `not (not x) == x` In order to write useful programs, we almost always need the ability to check conditions and change the behavior of the program accordingly. Conditional statements give us this ability. The simplest form is the `if` statement: ``` if x % 2 == 0: print(x, " is even.") print("Did you know that 2 is the only even number that is prime?") else: print(x, " is odd.") print("Did you know that multiplying two odd numbers " + "always gives an odd result?") ``` The Boolean expression after the `if` statement is called the condition. If it is true, then all the indented statements get executed. If not, then all the statements indented under the `else` clause get executed. Flowchart of an `if` statement with an `else` clause The syntax for an if statement looks like this: ``` if BOOLEAN EXPRESSION: STATEMENTS_1 # Executed if condition evaluates to True else: STATEMENTS_2 # Executed if condition evaluates to False ``` As with the function definition from the last chapter and other compound statements like `for` , the `if` statement consists of a header line and a body. The header line begins with the keyword `if` followed by a Boolean expression and ends with a colon ( `:` ). The indented statements that follow are called a block. The first unindented statement marks the end of the block. Each of the statements inside the first block of statements are executed in order if the Boolean expression evaluates to `True` . The entire first block of statements is skipped if the Boolean expression evaluates to `False` , and instead all the statements indented under the `else` clause are executed. There is no limit on the number of statements that can appear under the two clauses of an `if` statement, but there has to be at least one statement in each block. Occasionally, it is useful to have a section with no statements (usually as a place keeper, or scaffolding, for code we haven’t written yet). In that case, we can use the `pass` statement, which does nothing except act as a placeholder. ``` if True: # This is always True, pass # so this is always executed, but it does nothing else: pass ``` Flowchart of an `if` statement with no `else` clause Another form of the `if` statement is one in which the `else` clause is omitted entirely. In this case, when the condition evaluates to `True` , the statements are executed, otherwise the flow of execution continues to the statement after the `if` .
``` if x < 0: print("The negative number ", x, " is not valid here.") x = 42 print("I've decided to use the number 42 instead.") print("The square root of ", x, "is", math.sqrt(x)) ``` In this case, the final print statement is always executed: it is not part of the `if` block, not because we left a blank line, but because of the way the code is indented. Note too that the function call `math.sqrt(x)` will give an error unless we have an `import math` statement, usually placed near the top of our script. Python terminology Python documentation sometimes uses the term suite of statements to mean what we have called a block here. They mean the same thing, and since most other languages and computer scientists use the word block, we’ll stick with that. Notice too that `else` is not a statement. The `if` statement has two clauses, one of which is the (optional) `else` clause. Sometimes there are more than two possibilities and we need more than two branches. One way to express a computation like that is a chained conditional: Flowchart of this chained conditional `elif` is an abbreviation of else if. Again, exactly one branch will be executed. There is no limit on the number of `elif` statements but only a single (and optional) final `else` statement is allowed and it must be the last branch in the statement: ``` if choice == "a": function_one() elif choice == "b": function_two() elif choice == "c": function_three() else: print("Invalid choice.") ``` Each condition is checked in order. If the first is false, the next is checked, and so on. If one of them is true, the corresponding branch executes, and the statement ends. Even if more than one condition is true, only the first true branch executes. One conditional can also be nested within another. (It is the same theme of composability, again!) We could have written the previous example as follows: Flowchart of this nested conditional The outer conditional contains two branches. The second branch contains another `if` statement, which has two branches of its own. Those two branches could contain conditional statements as well. Although the indentation of the statements makes the structure apparent, nested conditionals very quickly become difficult to read. In general, it is a good idea to avoid them when we can. Logical operators often provide a way to simplify nested conditional statements. For example, we can rewrite the following code using a single conditional: ``` if 0 < x: # Assume x is an int here if x < 10: print("x is a positive single digit.") ``` The `if` statements each have a simple condition, but we can combine them into a more complex condition using the `and` operator. Now we only need a single `if` statement: ``` if 0 < x and x < 10: print("x is a positive single digit.") ``` The `return` statement, with or without a value, depending on whether the function is fruitful or void, allows us to terminate the execution of a function before (or when) we reach the end. One reason to use an early return is if we detect an error condition: ``` def print_square_root(x): if x <= 0: print("Positive numbers only, please.") return result = x**0.5 print("The square root of", x, "is", result) ``` The function `print_square_root` has a parameter named `x` . The first thing it does is check whether `x` is less than or equal to `0` , in which case it displays an error message and then uses `return` to exit the function. The flow of execution immediately returns to the caller, and the remaining lines of the function are not executed.
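A quick check at the interactive prompt (assuming `print_square_root` has been defined as above) shows the early return in action:

```
>>> print_square_root(-3)
Positive numbers only, please.
>>> print_square_root(16)
The square root of 16 is 4.0
```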
Each of the six relational operators has a logical opposite: for example, suppose we can get a driving licence when our age is greater than or equal to 17; we cannot get the driving licence when we are less than 17. Notice that the opposite of `>=` is `<`.

| operator | logical opposite |
| --- | --- |
| `==` | `!=` |
| `!=` | `==` |
| `<` | `>=` |
| `<=` | `>` |
| `>` | `<=` |
| `>=` | `<` |

Understanding these logical opposites allows us to sometimes get rid of `not` operators. `not` operators are often quite difficult to read in computer code, and our intentions will usually be clearer if we can eliminate them.

For example, if we wrote this Python:

```
if not (age >= 17):
    print("Hey, you're too young to get a driving licence!")
```

it would probably be clearer to use the simplification laws, and to write instead:

```
if age < 17:
    print("Hey, you're too young to get a driving licence!")
```

Two powerful simplification laws (called de Morgan's laws) that are often helpful when dealing with complicated Boolean expressions are:

```
not (x and y)  ==  (not x) or (not y)
not (x or y)   ==  (not x) and (not y)
```

For example, suppose we can slay the dragon only if our magic lightsabre sword is charged to 90% or higher, and we have 100 or more energy units in our protective shield. We find a fragment of Python code in the game that tests the negated compound condition directly. De Morgan's laws together with the logical opposites would let us rework that condition in a (perhaps) easier to understand way. We could also get rid of the `not` by swapping around the then and else parts of the conditional, giving a third, equivalent version (a sketch of all three versions appears a little further below). This third version is probably the best of the three, because it very closely matches the initial English statement.

Clarity of our code (for other humans), and making it easy to see that the code does what was expected should always be a high priority. As our programming skills develop we'll find we have more than one way to solve any problem. So good programs are designed. We make choices that favour clarity, simplicity, and elegance. The job title software architect says a lot about what we do — we are architects who engineer our products to balance beauty, functionality, simplicity and clarity in our creations.

Tip

Once our program works, we should play around a bit trying to polish it up. Write good comments. Think about whether the code would be clearer with different variable names. Could we have done it more elegantly? Should we rather use a function? Can we simplify the conditionals? We think of our code as our creation, our work of art! We make it great.

We've had a first look at this in an earlier chapter. Seeing it again won't hurt! Many Python types come with a built-in function that attempts to convert values of another type into its own type. The `int` function, for example, takes any value and converts it to an integer, if possible, or complains otherwise:

```
>>> int("32")
32
>>> int("Hello")
ValueError: invalid literal for int() with base 10: 'Hello'
```

`int` can also convert floating-point values to integers, but remember that it truncates the fractional part:

```
>>> int(-2.3)
-2
>>> int(3.99999)
3
>>> int("42")
42
>>> int(1.0)
1
```

The float function converts integers and strings to floating-point numbers:

```
>>> float(32)
32.0
>>> float("3.14159")
3.14159
>>> float(1)
1.0
```

It may seem odd that Python distinguishes the integer value `1` from the floating-point value `1.0`. They may represent the same number, but they belong to different types. The reason is that they are represented differently inside the computer.
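Here is one possible sketch of the three dragon-slaying fragments discussed above. The original listings are not reproduced in this text, so the variable names `sword_charge` and `shield_energy`, their starting values, and the printed messages are assumptions chosen only to make the sketch runnable:

```python
sword_charge = 0.95   # fraction of full charge (assumed value)
shield_energy = 120   # energy units in the shield (assumed value)

# Version 1: the condition as we might first find it, with a `not`.
if not (sword_charge >= 0.90 and shield_energy >= 100):
    print("Your attack has no effect; the dragon fries you to a crisp!")
else:
    print("The dragon crumples in a heap. You rescue the princess!")

# Version 2: reworked using de Morgan's laws and the logical opposites.
if sword_charge < 0.90 or shield_energy < 100:
    print("Your attack has no effect; the dragon fries you to a crisp!")
else:
    print("The dragon crumples in a heap. You rescue the princess!")

# Version 3: the `not` removed by swapping the then and else parts.
if sword_charge >= 0.90 and shield_energy >= 100:
    print("The dragon crumples in a heap. You rescue the princess!")
else:
    print("Your attack has no effect; the dragon fries you to a crisp!")
```

All three versions always choose the same branch; version 3 states the success condition positively, which is why it reads closest to the English description of the rule.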
The `str` function converts any argument given to it to type string:

```
>>> str(32)
'32'
>>> str(3.14149)
'3.14149'
>>> str(True)
'True'
>>> str(true)
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
NameError: name 'true' is not defined
```

`str` will work with any value and convert it into a string. As mentioned earlier, `True` is a Boolean value; true is just an ordinary variable name, and is not defined here, so we get an error.

The turtle has a lot more power than we've seen so far. The full documentation can be found at http://docs.python.org/py3k/library/turtle.html.

Here are a couple of new tricks for our turtles:

We can get a turtle to display text on the canvas at the turtle's current position. The method to do that is `alex.write("Hello")`.

We can fill a shape (circle, semicircle, triangle, etc.) with a color. It is a two-step process. First we call the method `alex.begin_fill()`, then we draw the shape, then we call `alex.end_fill()`.

We've previously set the color of our turtle — we can now also set its fill color, which need not be the same as the turtle and the pen color. We use

```
alex.color("blue", "red")
```

to set the turtle to draw in blue, and fill in red.

Ok, so can we get tess to draw a bar chart? Let us start with some data to be charted,

```
xs = [48, 117, 200, 240, 160, 260, 220]
```

Corresponding to each data measurement, we'll draw a simple rectangle of that height, with a fixed width.

```
def draw_bar(t, height):
    """ Get turtle t to draw one bar, of height. """
    t.left(90)
    t.forward(height)     # Draw up the left side
    t.right(90)
    t.forward(40)         # Width of bar, along the top
    t.right(90)
    t.forward(height)     # And down again!
    t.left(90)            # Put the turtle facing the way we found it.
    t.forward(10)         # Leave small gap after each bar

...

for v in xs:              # Assume xs and tess are ready
    draw_bar(tess, v)
```

Ok, not fantastically impressive, but it is a nice start! The important thing here was the mental chunking, or how we broke the problem into smaller pieces. Our chunk is to draw one bar, and we wrote a function to do that. Then, for the whole chart, we repeatedly called our function.

Next, at the top of each bar, we'll print the value of the data. We'll do this in the body of `draw_bar`, by adding

```
t.write(' ' + str(height))
```

as the new third line of the body. We've put a little space in front of the number, and turned the number into a string. Without this extra space we tend to cramp our text awkwardly against the bar to the left. The result looks a lot better now:

And now we'll add two lines to fill each bar. Our final program now looks like this:

```
import turtle                 # Needed so that turtle.Turtle() is available

def draw_bar(t, height):
    """ Get turtle t to draw one bar, of height. """
    t.begin_fill()            # Added this line
    t.left(90)
    t.forward(height)
    t.write(" " + str(height))
    t.right(90)
    t.forward(40)
    t.right(90)
    t.forward(height)
    t.left(90)
    t.end_fill()              # Added this line
    t.forward(10)

tess = turtle.Turtle()        # Create tess and set some attributes
tess.color("blue", "red")
tess.pensize(3)

xs = [48, 117, 200, 240, 160, 260, 220]

for a in xs:
    draw_bar(tess, a)
```

It produces the following, which is more satisfying:

Mmm. Perhaps the bars should not be joined to each other at the bottom. We'll need to pick up the pen while making the gap between the bars. We'll leave that (and a few more tweaks) as exercises for you!

block
A group of consecutive statements with the same indentation.

body
The block of statements in a compound statement that follows the header.
Boolean algebra
Some rules for rearranging and reasoning about Boolean expressions.

Boolean expression
An expression that is either true or false.

Boolean value
There are exactly two Boolean values: `True` and `False`. Boolean values result when a Boolean expression is evaluated by the Python interpreter. They have type `bool`.

branch
One of the possible paths of the flow of execution determined by conditional execution.

chained conditional
A conditional branch with more than two possible flows of execution. In Python chained conditionals are written with `if` … `elif` … `else` statements.

comparison operator
One of the six operators that compares two values: `==`, `!=`, `>`, `<`, `>=`, and `<=`.

condition
The Boolean expression in a conditional statement that determines which branch is executed.

conditional statement
A statement that controls the flow of execution depending on some condition. In Python the keywords `if`, `elif`, and `else` are used for conditional statements.

logical operator
One of the operators that combines Boolean expressions: `and`, `or`, and `not`.

nesting
One program structure within another, such as a conditional statement inside a branch of another conditional statement.

prompt
A visual cue that tells the user that the system is ready to accept input data.

truth table
A concise table of Boolean values that can describe the semantics of an operator.

type conversion
An explicit function call that takes a value of one type and computes a corresponding value of another type.

wrapping code in a function
The process of adding a function header and parameters to a sequence of program statements is often referred to as "wrapping the code in a function". This process is very useful whenever the program statements in question are going to be used multiple times. It is even more useful when it allows the programmer to express their mental chunking, and how they've broken a complex problem into pieces.

Assume the days of the week are numbered `0,1,2,3,4,5,6` from Sunday to Saturday. Write a function which is given the day number, and it returns the day name (a string).

You go on a wonderful holiday (perhaps to jail, if you don't like happy exercises) leaving on day number 3 (a Wednesday). You return home after 137 sleeps. Write a general version of the program which asks for the starting day number, and the length of your stay, and it will tell you the name of day of the week you will return on.

Give the logical opposites of these conditions

```
a > b
a >= b
a >= 18 and day == 3
a >= 18 and day != 3
```

What do these expressions evaluate to?

```
3 == 3
3 != 3
3 >= 4
not (3 < 4)
```

Complete this truth table:

| p | q | r | (not (p and q)) or r |
| --- | --- | --- | --- |
| F | F | F | ? |
| F | F | T | ? |
| F | T | F | ? |
| F | T | T | ? |
| T | F | F | ? |
| T | F | T | ? |
| T | T | F | ? |
| T | T | T | ? |

Write a function which is given an exam mark, and it returns a string — the grade for that mark — according to this scheme:

| Mark | Grade |
| --- | --- |
| >= 75 | First |
| [70-75) | Upper Second |
| [60-70) | Second |
| [50-60) | Third |
| [45-50) | F1 Supp |
| [40-45) | F2 |
| < 40 | F3 |

The square and round brackets denote closed and open intervals. A closed interval includes the number, and open interval excludes it. So `39.99999` gets grade `F3`, but `40` gets grade `F2`. Assume

```
xs = [83, 75, 74.9, 70, 69.9, 65, 60, 59.9, 55, 50, 49.9, 45, 44.9, 40, 39.9, 2, 0]
```

Test your function by printing the mark and the grade for all the elements in this list.
Modify the turtle bar chart program so that the pen is up for the small gaps between each bar. Modify the turtle bar chart program so that the bar for any value of 200 or more is filled with red, values between [100 and 200) are filled with yellow, and bars representing values less than 100 are filled with green. In the turtle bar chart program, what do you expect to happen if one or more of the data values in the list is negative? Try it out. Change the program so that when it prints the text value for the negative bars, it puts the text below the bottom of the bar. Write a function `find_hypot` which, given the length of two sides of a right-angled triangle, returns the length of the hypotenuse. (Hint: `x ** 0.5` will return the square root.) Write a function `is_rightangled` which, given the length of three sides of a triangle, will determine whether the triangle is right-angled. Assume that the third argument to the function is always the longest side. It will return `True` if the triangle is right-angled, or `False` otherwise. Hint: Floating point arithmetic is not always exactly accurate, so it is not safe to test floating point numbers for equality. If a good programmer wants to know whether `x` is equal or close enough to `y` , they would probably code it up as: ``` if abs(x-y) < 0.000001: # If x is approximately equal to y ... ``` Extend the above program so that the sides can be given to the function in any order. If you’re intrigued by why floating point arithmetic is sometimes inaccurate, on a piece of paper, divide 10 by 3 and write down the decimal result. You’ll find it does not terminate, so you’ll need an infinitely long sheet of paper. The representation of numbers in computer memory or on your calculator has similar problems: memory is finite, and some digits may have to be discarded. So small inaccuracies creep in. Try this script: ``` import math a = math.sqrt(2.0) print(a, a*a) print(a*a == 2.0) ``` # Chapter 6: Fruitful functions The built-in functions we have used, such as `abs` , `pow` , `int` , `max` , and `range` , have produced results. Calling each of these functions generates a value, which we usually assign to a variable or use as part of an expression. ``` biggest = max(3, 7, 2, 5) x = abs(3 - 11) + 10 ``` We also wrote our own function to return the final amount for a compound interest calculation. In this chapter, we are going to write more functions that return values, which we will call fruitful functions, for want of a better name. The first example is area, which returns the area of a circle with the given radius: ``` def area(radius): b = 3.14159 * radius**2 return b ``` We have seen the return statement before, but in a fruitful function the `return` statement includes a return value. This statement means: evaluate the return expression, and then return it immediately as the result (the fruit) of this function. The expression provided can be arbitrarily complicated, so we could have written this function like this: ``` def area(radius): return 3.14159 * radius * radius ``` On the other hand, temporary variables like `b` above often make debugging easier. Sometimes it is useful to have multiple `return` statements, one in each branch of a conditional. We have already seen the built-in `abs` , now we see how to write our own: Another way to write the above function is to leave out the `else` and just follow the `if` condition by the second `return` statement. Think about this version and convince yourself it works the same as the first one. 
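The listing for our own absolute value function is not reproduced above, so here is a minimal sketch of the two versions being discussed — one with an explicit `else`, and one that relies on the early `return`. The name `absolute_value` matches the later references in this chapter; the second name is just for this sketch:

```python
def absolute_value(x):
    """ Return the absolute value of x, using an if ... else. """
    if x < 0:
        return -x
    else:
        return x

def absolute_value_v2(x):
    """ Equivalent version: return early when x < 0, otherwise fall through. """
    if x < 0:
        return -x
    return x

print(absolute_value(-5), absolute_value_v2(-5))   # 5 5
print(absolute_value(3), absolute_value_v2(3))     # 3 3
```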
Code that appears after a return statement, or any other place the flow of execution can never reach, is called dead code, or unreachable code. In a fruitful function, it is a good idea to ensure that every possible path through the program hits a `return` statement. The following version of `absolute_value` fails to do this: ``` def bad_absolute_value(x): if x < 0: return -x elif x > 0: return x ``` This version is not correct because if `x` happens to be `0` , neither condition is true, and the function ends without hitting a `return` statement. In this case, the `return` value is a special value called `None` : ``` >>> print(bad_absolute_value(0)) None ``` All Python functions return `None` whenever they do not return another value. It is also possible to use a `return` statement in the middle of a `for` loop, in which case control immediately returns from the function. Let us assume that we want a function which looks through a list of words. It should return the first 2-letter word. If there is not one, it should return the empty string: ``` def find_first_2_letter_word(xs): for wd in xs: if len(wd) == 2: return wd return "" ``` ``` >>> find_first_2_letter_word(["This", "is", "a", "dead", "parrot"]) 'is' >>> find_first_2_letter_word(["I", "like", "cheese"]) '' ``` Single-step through this code and convince yourself that in the first test case that we’ve provided, the function returns while processing the second element in the list: it does not have to traverse the whole list. At this point, you should be able to look at complete functions and tell what they do. Also, if you have been doing the exercises, you have written some small functions. As you write larger functions, you might start to have more difficulty, especially with runtime and semantic errors. To deal with increasingly complex programs, we are going to suggest a technique called incremental development. The goal of incremental development is to avoid long debugging sessions by adding and testing only a small amount of code at a time. As an example, suppose we want to find the distance between two points, given by the coordinates `(x1, y1)` and `(x2, y2)` . By the Pythagorean theorem, the distance is: The first step is to consider what a `distance` function should look like in Python. In other words, what are the inputs (parameters) and what is the output (return value)? In this case, the two points are the inputs, which we can represent using four parameters. The return value is the distance, which is a floating-point value. Already we can write an outline of the function that captures our thinking so far: ``` def distance(x1, y1, x2, y2): return 0.0 ``` Obviously, this version of the function doesn’t compute distances; it always returns zero. But it is syntactically correct, and it will run, which means that we can test it before we make it more complicated. To test the new function, we call it with sample values: ``` >>> distance(1, 2, 4, 6) 0.0 ``` We chose these values so that the horizontal distance equals `3` and the vertical distance equals `4` ; that way, the result is `5` (the hypotenuse of a `3` - `4` - `5` triangle). When testing a function, it is useful to know the right answer. At this point we have confirmed that the function is syntactically correct, and we can start adding lines of code. After each incremental change, we test the function again. If an error occurs at any point, we know where it must be — in the last line we added. 
A logical first step in the computation is to find the differences `x2- x1` and `y2- y1` . We will refer to those values using temporary variables named `dx` and `dy` . ``` def distance(x1, y1, x2, y2): dx = x2 - x1 dy = y2 - y1 return 0.0 ``` If we call the function with the arguments shown above, when the flow of execution gets to the return statement, `dx` should be `3` and `dy` should be `4` . We can check that this is the case in PyScripter by putting the cursor on the return statement, and running the program to break execution when it gets to the cursor (using the `F4` key). Then we inspect the variables `dx` and `dy` by hovering the mouse above them, to confirm that the function is getting the right parameters and performing the first computation correctly. If not, there are only a few lines to check. Next we compute the sum of squares of `dx` and `dy` : Again, we could run the program at this stage and check the value of `dsquared` (which should be `25` ). Finally, using the fractional exponent `0.5` to find the square root, we compute and return the result: If that works correctly, you are done. Otherwise, you might want to inspect the value of `result` before the return statement. When you start out, you might add only a line or two of code at a time. As you gain more experience, you might find yourself writing and debugging bigger conceptual chunks. Either way, stepping through your code one line at a time and verifying that each step matches your expectations can save you a lot of debugging time. As you improve your programming skills you should find yourself managing bigger and bigger chunks: this is very similar to the way we learned to read letters, syllables, words, phrases, sentences, paragraphs, etc., or the way we learn to chunk music — from individual notes to chords, bars, phrases, and so on. The key aspects of the process are: Start with a working skeleton program and make small incremental changes. At any point, if there is an error, you will know exactly where it is. Use temporary variables to refer to intermediate values so that you can easily inspect and check them. Once the program is working, relax, sit back, and play around with your options. (There is interesting research that links “playfulness” to better understanding, better learning, more enjoyment, and a more positive mindset about what you can achieve — so spend some time fiddling around!) You might want to consolidate multiple statements into one bigger compound expression, or rename the variables you’ve used, or see if you can make the function shorter. A good guideline is to aim for making code as easy as possible for others to read. Here is another version of the function. It makes use of a square root function that is in the `math` module (we’ll learn about modules shortly). Which do you prefer? Which looks “closer” to the Pythagorean formula we started out with? ``` import math def distance(x1, y1, x2, y2): return math.sqrt( (x2-x1)**2 + (y2-y1)**2 ) ``` ``` >>> distance(1, 2, 4, 6) 5.0 ``` Another powerful technique for debugging (an alternative to single-stepping and inspection of program variables), is to insert extra print functions in carefully selected places in your code. Then, by inspecting the output of the program, you can check whether the algorithm is doing what you expect it to. Be clear about the following, however: You must have a clear solution to the problem, and must know what should happen before you can debug a program. 
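The two intermediate listings described in the incremental development above are not shown in this text. Filling them in, here is a sketch of how the `distance` function (implementing the Pythagorean formula distance = sqrt((x2 - x1)**2 + (y2 - y1)**2)) might evolve; the variable names `dsquared` and `result` follow the ones used in the prose:

```python
# Step 2: compute the sum of the squares of dx and dy,
# while still returning a placeholder result.
def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    dsquared = dx**2 + dy**2
    return 0.0

# Step 3: take the square root (using the fractional exponent 0.5)
# and return the real answer.  This definition replaces the one above.
def distance(x1, y1, x2, y2):
    dx = x2 - x1
    dy = y2 - y1
    dsquared = dx**2 + dy**2
    result = dsquared**0.5
    return result

print(distance(1, 2, 4, 6))   # 5.0, the hypotenuse of a 3-4-5 triangle
```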
Work on solving the problem on a piece of paper (perhaps using a flowchart to record the steps you take) before you concern yourself with writing code. Writing a program doesn't solve the problem — it simply automates the manual steps you would take. So first make sure you have a pen-and-paper manual solution that works. Programming then is about making those manual steps happen automatically.

Do not write chatterbox functions. A chatterbox is a fruitful function that, in addition to its primary task, also asks the user for input, or prints output, when it would be more useful if it simply shut up and did its work quietly. For example, we've seen built-in functions like `range`, `max` and `abs`. None of these would be useful building blocks for other programs if they prompted the user for input, or printed their results while they performed their tasks. So a good tip is to avoid calling `input` functions inside fruitful functions, unless the primary purpose of your function is to perform input and output. The one exception to this rule might be to temporarily sprinkle some calls to `print` into a function while we are debugging it, to help us see what is happening, and then to remove them again once the function works.

As you should expect by now, you can call one function from within another. This ability is called composition.

As an example, we'll write a function that takes two points, the center of the circle and a point on the perimeter, and computes the area of the circle. Assume that the center point is stored in the variables `xc` and `yc`, and the perimeter point is in `xp` and `yp`. The first step is to find the radius of the circle, which is the distance between the two points. Fortunately, we've just written a function, `distance`, that does just that, so now all we have to do is use it:

```
radius = distance(xc, yc, xp, yp)
```

The second step is to find the area of a circle with that radius and return it. Again we will use one of our earlier functions:

```
result = area(radius)
return result
```

Wrapping that up in a function, we get:

```
def area2(xc, yc, xp, yp):
    radius = distance(xc, yc, xp, yp)
    result = area(radius)
    return result
```

We called this function `area2` to distinguish it from the `area` function defined earlier. The temporary variables `radius` and `result` are useful for development, debugging, and single-stepping through the code to inspect what is happening, but once the program is working, we can make it more concise by composing the function calls:

```
def area2(xc, yc, xp, yp):
    return area(distance(xc, yc, xp, yp))
```

Functions can return Boolean values, which is often convenient for hiding complicated tests inside functions. For example:

```
def is_divisible(x, y):
    """ Test if x is exactly divisible by y """
    if x % y == 0:
        return True
    else:
        return False
```

It is common to give Boolean functions names that sound like yes/no questions. `is_divisible` returns either `True` or `False` to indicate whether `x` is or is not divisible by `y`.

We can make the function more concise by taking advantage of the fact that the condition of the `if` statement is itself a Boolean expression. We can return it directly, avoiding the `if` statement altogether:

```
def is_divisible(x, y):
    return x % y == 0
```

This session shows the new function in action:

```
>>> is_divisible(6, 4)
False
>>> is_divisible(6, 3)
True
```

Boolean functions are often used in conditional statements:

```
if is_divisible(x, y):
    ...    # Do something ...
else:
    ...    # Do something else ...
```

It might be tempting to write something like:

```
if is_divisible(x, y) == True:
```

but the extra comparison is unnecessary.
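Here is one more small sketch combining these ideas: a Boolean function that returns its condition directly and is then used in an `if` statement. The function itself is not part of this chapter's examples; the leap-year rule it encodes is the standard Gregorian one:

```python
def is_leap_year(year):
    """ Return True if year is a leap year in the Gregorian calendar. """
    return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0

if is_leap_year(2024):
    print("2024 has 366 days.")
else:
    print("2024 has 365 days.")
```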
Readability is very important to programmers, since in practice programs are read and modified far more often than they are written. But, like most rules, we occasionally break them. Most of the code examples in this book will be consistent with the Python Enhancement Proposal 8 (PEP 8), a style guide developed by the Python community. We'll have more to say about style as our programs become more complex, but a few pointers will be helpful already: use four spaces (not tabs) for each level of indentation, keep lines to a reasonable length, and choose meaningful names for your variables and functions.

It is a common best practice in software development to include automatic unit testing of source code. Unit testing provides a way to automatically verify that individual pieces of code, such as functions, are working properly. This makes it possible to change the implementation of a function at a later time and quickly test that it still does what it was intended to do.

Some years back organizations had the view that their valuable asset was the program code and documentation; these days the test suite is seen as just as valuable, and organizations will now spend a large portion of their software budgets on crafting (and preserving) their tests.

Unit testing also forces the programmer to think about the different cases that the function needs to handle. You also only have to type the tests once into the script, rather than having to keep entering the same test data over and over as you develop your code.

Extra code in your program which is there because it makes debugging or testing easier is called scaffolding. A collection of tests for some code is called its test suite.

There are a few different ways to do unit testing in Python — but at this stage we're going to ignore what the Python community usually does, and we're going to start with two functions that we'll write ourselves. We'll use these for writing our unit tests.

Let's start with the `absolute_value` function that we wrote earlier in this chapter. Recall that we wrote a few different versions, the last of which was incorrect, and had a bug. Would tests have caught this bug?

First we plan our tests. We'd like to know if the function returns the correct value when its argument is negative, or when its argument is positive, or when its argument is zero. When planning your tests, you'll always want to think carefully about the "edge" cases — here, an argument of `0` to `absolute_value` is on the edge of where the function behaviour changes, and as we saw at the beginning of the chapter, it is an easy spot for the programmer to make a mistake! So it is a good case to include in our test suite.

We're going to write a helper function for checking the results of one test. It takes a boolean argument and will either print a message telling us that the test passed, or it will print a message to inform us that the test failed. The first line of the body (after the function's docstring) magically determines the line number in the script where the call was made from. This allows us to print the line number of the test, which will help when we want to identify which tests have passed or failed.

```
import sys

def test(did_pass):
    """ Print the result of a test. """
    linenum = sys._getframe(1).f_lineno   # Get the caller's line number.
    if did_pass:
        msg = "Test at line {0} ok.".format(linenum)
    else:
        msg = "Test at line {0} FAILED.".format(linenum)
    print(msg)
```

There is also some slightly tricky string formatting using the format method which we will gloss over for the moment, and cover in detail in a future chapter.
But with this function written, we can proceed to construct our test suite:

```
def test_suite():
    """ Run the suite of tests for code in this module (this file). """
    test(absolute_value(17) == 17)
    test(absolute_value(-17) == 17)
    test(absolute_value(0) == 0)
    test(absolute_value(3.14) == 3.14)
    test(absolute_value(-3.14) == 3.14)

test_suite()        # Here is the call to run the tests
```

Here you'll see that we've constructed five tests in our test suite. We could run this against the first or second versions (the correct versions) of `absolute_value`, and all five tests would report that they passed, each printing a line like "Test at line ... ok.".

But let's say you change the function to an incorrect version like this:

```
def absolute_value(n):   # Buggy version
    """ Compute the absolute value of n """
    if n < 0:
        return 1
    elif n > 0:
        return n
```

Can you find at least two mistakes in this code? Our test suite can! Running it against this buggy version produces three failing tests: the two with negative arguments (the function wrongly returns `1`) and the one with an argument of `0` (no `return` statement is reached, so the function returns `None`).

There is a built-in Python statement called assert that does almost the same as our `test` function (except the program stops when the first assertion fails). You may want to read about it, and use it instead of our `test` function.

Boolean function
A function that returns a Boolean value. The only possible values of the bool type are `False` and `True`.

chatterbox function
A function which interacts with the user (using `input` or `print`) when it should not. Quiet functions that simply compute and return their results are usually more useful building blocks.

composition (of functions)
Calling one function from within the body of another, or using the `return` value of one function as an argument to the call of another.

dead code
Part of a program that can never be executed, often because it appears after a `return` statement.

fruitful function
A function that yields a `return` value instead of `None`.

incremental development
A program development plan intended to simplify debugging by adding and testing only a small amount of code at a time.

None
A special Python value. One use in Python is that it is returned by functions that do not execute a `return` statement with a `return` argument.

return value
The value provided as the result of a function call.

scaffolding
Code that is used during program development to assist with development and debugging. The unit test code that we added in this chapter is an example of scaffolding.

temporary variable
A variable used to store an intermediate value in a complex calculation.

test suite
A collection of tests for some code you have written.

unit testing
An automatic procedure used to validate that individual units of code are working properly. Having a test suite is extremely useful when somebody modifies or extends the code: it provides a safety net against going backwards by putting new bugs into previously working code. The term regression testing is often used to capture this idea that we don't want to go backwards!

All of the exercises below should be added to a single file. In that file, you should also add the `test` and `test_suite` scaffolding functions shown above, and then, as you work through the exercises, add the new tests to your test suite. (If you open the online version of the textbook, you can easily copy and paste the tests and the fragments of code into your Python editor.) After completing each exercise, confirm that all the tests pass.

The four compass points can be abbreviated by single-letter strings as `“N”`, `“E”`, `“S”`, and `“W”`. Write a function `turn_clockwise` that takes one of these four compass points as its parameter, and returns the next compass point in the clockwise direction.
Here are some tests that should pass: ``` test(turn_clockwise("N") == "E") test(turn_clockwise("W") == "N") ``` You might ask “What if the argument to the function is some other value?” For all other cases, the function should return the value `None` : ``` test(turn_clockwise(42) == None) test(turn_clockwise("rubbish") == None) ``` Write a function `day_name` that converts an integer number `0` to `6` into the name of a day. Assume day `0` is `“Sunday”` . Once again, return `None` if the arguments to the function are not valid. Here are some tests that should pass: ``` test(day_name(3) == "Wednesday") test(day_name(6) == "Saturday") test(day_name(42) == None) ``` Write the inverse function `day_num` which is given a day name, and returns its number: ``` test(day_num("Friday") == 5) test(day_num("Sunday") == 0) test(day_num(day_name(3)) == 3) test(day_name(day_num("Thursday")) == "Thursday") ``` Once again, if this function is given an invalid argument, it should return `None` : ``` test(day_num("Halloween") == None) ``` Write a function that helps answer questions like ‘“Today is Wednesday. I leave on holiday in 19 days time. What day will that be?”’ So the function must take a day name and a delta argument — the number of days to add — and should return the resulting day name: ``` test(day_add("Monday", 4) == "Friday") test(day_add("Tuesday", 0) == "Tuesday") test(day_add("Tuesday", 14) == "Tuesday") test(day_add("Sunday", 100) == "Tuesday") ``` Hint: use the first two functions written above to help you write this one. Can your `day_add` function already work with negative deltas? For example, `-1` would be yesterday, or `-7` would be a week ago: ``` test(day_add("Sunday", -1) == "Saturday") test(day_add("Sunday", -7) == "Sunday") test(day_add("Tuesday", -100) == "Sunday") ``` If your function already works, explain why. If it does not work, make it work. Hint: Play with some cases of using the modulus function `%` (introduced at the beginning of the previous chapter). Specifically, explore what happens to `x % 7` when `x` is negative. Write a function `days_in_month` which takes the name of a month, and returns the number of days in the month. Ignore leap years: ``` test(days_in_month("February") == 28) test(days_in_month("December") == 31) ``` If the function is given invalid arguments, it should return `None` . Write a function `to_secs` that converts hours, minutes and seconds to a total number of seconds. Here are some tests that should pass: ``` test(to_secs(2, 30, 10) == 9010) test(to_secs(2, 0, 0) == 7200) test(to_secs(0, 2, 0) == 120) test(to_secs(0, 0, 42) == 42) test(to_secs(0, -10, 10) == -590) ``` Extend `to_secs` so that it can cope with real values as inputs. It should always return an integer number of seconds (truncated towards zero): ``` test(to_secs(2.5, 0, 10.71) == 9010) test(to_secs(2.433,0,0) == 8758) ``` Write three functions that are the “inverses” of `to_secs` : `hours_in` returns the whole integer number of hours represented by a total number of seconds. `minutes_in` returns the whole integer number of left over minutes in a total number of seconds, once the hours have been taken out. `seconds_in` returns the left over seconds represented by a total number of seconds. You may assume that the total number of seconds passed to these functions is an integer. 
Here are some test cases: ``` test(hours_in(9010) == 2) test(minutes_in(9010) == 30) test(seconds_in(9010) == 10) ``` It won’t always be obvious what is wanted … In the third case above, the requirement seems quite ambiguous and fuzzy. But the test clarifies what we actually need to do. Unit tests often have this secondary benefit of clarifying the specifications. If you write your own test suites, consider it part of the problem-solving process as you ask questions about what you really expect to happen, and whether you’ve considered all the possible cases. Given our emphasis on thinking like a computer scientist, you might enjoy reading at least one reference about thinking, and about fun ideas like fluid intelligence, a key ingredient in problem solving. See, for example, http://psychology.about.com/od/cognitivepsychology/a/fluid-crystal.htm. Learning Computer Science requires a good mix of both fluid and crystallized kinds of intelligence. Which of these tests fail? Explain why. ``` test(3 % 4 == 0) test(3 % 4 == 3) test(3 / 4 == 0) test(3 // 4 == 0) test(3+4 * 2 == 14) test(4-2+2 == 0) test(len("hello, world!") == 13) ``` Write a compare function that returns `1` if `a > b` , `0` if `a == b` , and `-1` if `a < b` ``` test(compare(5, 4) == 1) test(compare(7, 7) == 0) test(compare(2, 3) == -1) test(compare(42, 1) == 1) ``` Write a function called hypotenuse that returns the length of the hypotenuse of a right triangle given the lengths of the two legs as parameters: ``` test(hypotenuse(3, 4) == 5.0) test(hypotenuse(12, 5) == 13.0) test(hypotenuse(24, 7) == 25.0) test(hypotenuse(9, 12) == 15.0) ``` Write a function ``` slope(x1, y1, x2, y2) ``` that returns the slope of the line through the points `(x1, y1)` and `(x2, y2)` . Be sure your implementation of slope can pass the following tests: ``` test(slope(5, 3, 4, 2) == 1.0) test(slope(1, 2, 3, 2) == 0.0) test(slope(1, 2, 3, 3) == 0.5) test(slope(2, 4, 1, 2) == 2.0) ``` Then use a call to `slope` in a new function named ``` intercept(x1, y1, x2, y2) ``` that returns the y-intercept of the line through the points `(x1, y1)` and `(x2, y2)` ``` test(intercept(1, 6, 3, 12) == 3.0) test(intercept(6, 1, 1, 6) == 7.0) test(intercept(4, 6, 12, 8) == 5.0) ``` Write a function called `is_even(n)` that takes an integer as an argument and returns `True` if the argument is an even number and `False` if it is odd. Add your own tests to the test suite. Now write the function `is_odd(n)` that returns `True` when `n` is odd and `False` otherwise. Include unit tests for this function too. Finally, modify it so that it uses a call to `is_even` to determine if its argument is an odd integer, and ensure that its test still pass. Write a function `is_factor(f, n)` that passes these tests: ``` test(is_factor(3, 12)) test(not is_factor(5, 12)) test(is_factor(7, 14)) test(not is_factor(7, 15)) test(is_factor(1, 15)) test(is_factor(15, 15)) test(not is_factor(25, 15)) ``` An important role of unit tests is that they can also act as unambiguous “specifications” of what is expected. These test cases answer the question “Do we treat 1 and 15 as factors of 15”? Write `is_multiple` to satisfy these unit tests: ``` test(is_multiple(12, 3)) test(is_multiple(12, 4)) test(not is_multiple(12, 5)) test(is_multiple(12, 6)) test(not is_multiple(12, 7)) ``` Can you find a way to use `is_factor` in your definition of `is_multiple` ? Write the function `f2c(t)` designed to return the integer value of the nearest degree Celsius for given temperature in Fahrenheit. 
(hint: you may want to make use of the built-in function, `round`. Try printing `round.__doc__` in a Python shell or looking up help for the round function, and experimenting with it until you are comfortable with how it works.)

```
test(f2c(212) == 100)     # Boiling point of water
test(f2c(32) == 0)        # Freezing point of water
test(f2c(-40) == -40)     # Wow, what an interesting case!
test(f2c(36) == 2)
test(f2c(37) == 3)
test(f2c(38) == 3)
test(f2c(39) == 4)
```

Now do the opposite: write the function `c2f` which converts Celsius to Fahrenheit:

```
test(c2f(0) == 32)
test(c2f(100) == 212)
test(c2f(-40) == -40)
test(c2f(12) == 54)
test(c2f(18) == 64)
test(c2f(-48) == -54)
```

# Chapter 7: Iteration

Computers are often used to automate repetitive tasks. Repeating identical or similar tasks without making errors is something that computers do well and people do poorly.

Repeated execution of a set of statements is called iteration. Because iteration is so common, Python provides several language features to make it easier. We've already seen the `for` statement in chapter 3. This is the form of iteration you'll likely be using most often. But in this chapter we're going to look at the `while` statement — another way to have your program do iteration, useful in slightly different circumstances.

Before we do that, let's just review a few ideas…

As we have mentioned previously, it is legal to make more than one assignment to the same variable. A new assignment makes an existing variable refer to a new value (and stop referring to the old value).

```
airtime_remaining = 15
print(airtime_remaining)
airtime_remaining = 7
print(airtime_remaining)
```

The output of this program is:

```
15
7
```

because the first time `airtime_remaining` is printed, its value is `15`, and the second time, its value is `7`.

It is especially important to distinguish between an assignment statement and a Boolean expression that tests for equality. Because Python uses the equal token ( `=` ) for assignment, it is tempting to interpret a statement like `a = b` as a Boolean test. Unlike mathematics, it is not! Remember that the Python token for the equality operator is `==`.

Note too that an equality test is symmetric, but assignment is not. For example, if `a == 7` then `7 == a`. But in Python, the statement `a = 7` is legal and `7 = a` is not.

In Python, an assignment statement can make two variables equal, but because further assignments can change either of them, they don't have to stay that way:

```
a = 5
b = a    # After executing this line, a and b are now equal
a = 3    # After executing this line, a and b are no longer equal
```

The third line changes the value of `a` but does not change the value of `b`, so they are no longer equal.

(In some programming languages, a different symbol is used for assignment, such as `<-` or `:=`, to avoid confusion. Some people also think that variable was an unfortunate word to choose, and instead we should have called them assignables. Python chooses to follow common terminology and token usage, also found in languages like C, C++, Java, and C#, so we use the tokens `=` for assignment, `==` for equality, and we talk of variables.)

When an assignment statement is executed, the right-hand side expression (i.e. the expression that comes after the assignment token) is evaluated first. This produces a value. Then the assignment is made, so that the variable on the left-hand side now refers to the new value.
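A short interactive session makes the difference between the two tokens concrete. The exact wording of the final error message varies between Python versions, so treat it as indicative only:

```python
>>> a = 7        # assignment: make a refer to the value 7
>>> a == 7       # equality test: evaluates to a Boolean
True
>>> 7 == a       # equality is symmetric ...
True
>>> 7 = a        # ... but assignment is not: the left-hand side must be a variable
SyntaxError: cannot assign to literal
```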
One of the most common forms of assignment is an update, where the new value of the variable depends on its old value. Deduct 40 cents from my airtime balance, or add one run to the scoreboard. ``` n = 5 n = 3 * n + 1 ``` Line 2 means get the current value of `n` , multiply it by three and add one, and assign the answer to `n` , thus making `n` refer to the value. So after executing the two lines above, `n` will point/refer to the integer `16` . If you try to get the value of a variable that has never been assigned to, you’ll get an error: ``` >>> w = x + 1 Traceback (most recent call last): File "<interactive input>", line 1, in NameError: name 'x' is not defined ``` Before you can update a variable, you have to initialize it to some starting value, usually with a simple assignment: ``` runs_scored = 0 ... runs_scored = runs_scored + 1 ``` Line 3 — updating a variable by adding `1` to it — is very common. It is called an increment of the variable; subtracting `1` is called a decrement. Sometimes programmers also talk about bumping a variable, which means the same as incrementing it by `1` . `for` loop revisited Recall that the for loop processes each item in a list. Each item in turn is (re-)assigned to the loop variable, and the body of the loop is executed. We saw this example in an earlier chapter: ``` for f in ["Joe", "Zoe", "Brad", "Angelina", "Zuki", "Thandi", "Paris"]: invitation = "Hi " + f + ". Please come to my party on Saturday!" print(invitation) ``` Running through all the items in a list is called traversing the list, or traversal. Let us write a function now to sum up all the elements in a list of numbers. Do this by hand first, and try to isolate exactly what steps you take. You’ll find you need to keep some “running total” of the sum so far, either on a piece of paper, in your head, or in your calculator. Remembering things from one step to the next is precisely why we have variables in a program: so we’ll need some variable to remember the “running total”. It should be initialized with a value of zero, and then we need to traverse the items in the list. For each item, we’ll want to update the running total by adding the next number to it. ``` def mysum(xs): """ Sum all the numbers in the list xs, and return the total. """ running_total = 0 for x in xs: running_total = running_total + x return running_total # Add tests like these to your test suite ... test(mysum([1, 2, 3, 4]) == 10) test(mysum([1.25, 2.5, 1.75]) == 5.5) test(mysum([1, -2, 3]) == 2) test(mysum([ ]) == 0) test(mysum(range(11)) == 55) # 11 is not included in the list. ``` `while` statement Here is a fragment of code that demonstrates the use of the while statement: ``` def sum_to(n): """ Return the sum of 1+2+3 ... n """ ss = 0 v = 1 while v <= n: ss = ss + v v = v + 1 return ss # For your test suite test(sum_to(4) == 10) test(sum_to(1000) == 500500) ``` You can almost read the `while` statement as if it were English. It means, while `v` is less than or equal to `n` , continue executing the body of the loop. Within the body, each time, increment `v` . When `v` passes n, return your accumulated sum. More formally, here is precise flow of execution for a `while` statement: Evaluate the condition at line 5, yielding a value which is either False or True. If the value is False, exit the while statement and continue execution at the next statement (line 8 in this case). If the value is True, execute each of the statements in the body (lines 6 and 7) and then go back to the `while` statement at line 5. 
The body consists of all of the statements indented below the while keyword. Notice that if the loop condition is False the first time we get to the loop, the statements in the body of the loop are never executed.

The body of the loop should change the value of one or more variables so that eventually the condition becomes false and the loop terminates. Otherwise the loop will repeat forever, which is called an infinite loop. An endless source of amusement for computer scientists is the observation that the directions on shampoo, "lather, rinse, repeat", are an infinite loop.

In the case here, we can prove that the loop terminates because we know that the value of `n` is finite, and we can see that the value of `v` increments each time through the loop, so eventually it will have to exceed `n`. In other cases, it is not so easy, even impossible in some cases, to tell if the loop will ever terminate.

What you will notice here is that the `while` loop is more work for you — the programmer — than the equivalent `for` loop. When using a `while` loop you have to manage the loop variable yourself: give it an initial value, test for completion, and then make sure you change something in the body so that the loop terminates. By comparison, here is an equivalent function that uses `for` instead:

```
def sum_to(n):
    """ Return the sum of 1+2+3 ... n """
    ss = 0
    for v in range(n+1):
        ss = ss + v
    return ss
```

Notice the slightly tricky call to the `range` function — we had to add one onto `n`, because `range` generates its list up to but excluding the value you give it. It would be easy to make a programming mistake and overlook this, but because we've made the investment of writing some unit tests, our test suite would have caught our error.

So why have two kinds of loop if `for` looks easier? This next example shows a case where we need the extra power that we get from the `while` loop.

Let's look at a simple sequence that has fascinated and foxed mathematicians for many years. They still cannot answer even quite simple questions about this.

The "computational rule" for creating the sequence is to start from some given `n`, and to generate the next term of the sequence from `n`, either by halving `n` (whenever `n` is even), or else by multiplying it by three and adding 1. The sequence terminates when `n` reaches `1`.

This Python function captures that algorithm:

```
def seq3np1(n):
    """ Print the 3n+1 sequence from n, terminating when it reaches 1. """
    while n != 1:
        print(n, end=", ")
        if n % 2 == 0:        # n is even
            n = n // 2
        else:                 # n is odd
            n = n * 3 + 1
    print(n, end=".\n")
```

Notice first that the print function inside the loop has an extra argument, `end=", "`. This tells print to follow the printed value with a comma and a space instead of starting a new line, so the numbers appear on one line, separated by commas. The call `print(n, end=".\n")` after the loop terminates will then print the final value of `n` followed by a period and a newline character. (You'll cover `\n`, the newline character, in the next chapter.)

The condition for continuing with this loop is `n != 1`, so the loop will continue running until it reaches its termination condition (i.e. `n == 1`).

Each time through the loop, the program outputs the value of `n` and then checks whether it is even or odd. If it is even, the value of `n` is divided by `2` using integer division. If it is odd, the value is replaced by `n * 3 + 1`. Here are some examples:

```
>>> seq3np1(3)
3, 10, 5, 16, 8, 4, 2, 1.
>>> seq3np1(19)
19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1.
>>> seq3np1(21)
21, 64, 32, 16, 8, 4, 2, 1.
>>> seq3np1(16)
16, 8, 4, 2, 1.
>>>
```
Since `n` sometimes increases and sometimes decreases, there is no obvious proof that `n` will ever reach `1`, or that the program terminates. For some particular values of `n`, we can prove termination. For example, if the starting value is a power of two, then the value of `n` will be even each time through the loop until it reaches 1. The previous example ends with such a sequence, starting with `16`.

See if you can find a small starting number that needs more than a hundred steps before it terminates.

Particular values aside, the interesting question was first posed by a German mathematician called Lothar Collatz: the Collatz conjecture (also known as the 3n + 1 conjecture) is that this sequence terminates for all positive values of `n`. So far, no one has been able to prove it or disprove it! (A conjecture is a statement that might be true, but nobody knows for sure.)

Think carefully about what would be needed for a proof or disproof of the conjecture "All positive integers will eventually converge to `1` using the Collatz rules". With fast computers we have been able to test every integer up to very large values, and so far, they have all eventually ended up at `1`. But who knows? Perhaps there is some as-yet untested number which does not reduce to `1`.

You'll notice that if you don't stop when you reach `1`, the sequence gets into its own cyclic loop: 1, 4, 2, 1, 4, 2, 1, 4 … So one possibility is that there might be other cycles that we just haven't found yet.

Wikipedia has an informative article about the Collatz conjecture. The sequence also goes under other names (Hailstone sequence, Wondrous numbers, etc.), and you'll find out just how many integers have already been tested by computer, and found to converge!

Choosing between for and while

Use a `for` loop if you know, before you start looping, the maximum number of times that you'll need to execute the body. For example, if you're traversing a list of elements, you know that the maximum number of loop iterations you can possibly need is "all the elements in the list". Or if you need to print the 12 times table, we know right away how many times the loop will need to run.

So any problem like "iterate this weather model for 1000 cycles", or "search this list of words", or "find all prime numbers up to 10000" suggests that a `for` loop is best.

By contrast, if you are required to repeat some computation until some condition is met, and you cannot calculate in advance when (or if) this will happen, as we did in this `3n + 1` problem, you'll need a `while` loop.

We call the first case definite iteration — we know ahead of time some definite bounds for what is needed. The latter case is called indefinite iteration — we're not sure how many iterations we'll need — we cannot even establish an upper bound!

To write effective computer programs, and to build a good conceptual model of program execution, a programmer needs to develop the ability to trace the execution of a computer program. Tracing involves becoming the computer and following the flow of execution through a sample program run, recording the state of all variables and any output the program generates after each instruction is executed.

To understand this process, let's trace the call to `seq3np1(3)` from the previous section. At the start of the trace, we have a variable, `n` (the parameter), with an initial value of `3`. Since `3` is not equal to `1`, the `while` loop body is executed. `3` is printed and `3 % 2 == 0` is evaluated.
Since it evaluates to `False`, the `else` branch is executed and `3 * 3 + 1` is evaluated and assigned to `n`.

To keep track of all this as you hand trace a program, make a column heading on a piece of paper for each variable created as the program runs and another one for output. Our trace so far would look something like this:

```
n        output printed so far
--       ---------------------
3        3,
10
```

Since `10 != 1` evaluates to `True`, the loop body is again executed, and `10` is printed. `10 % 2 == 0` is `True`, so the `if` branch is executed and `n` becomes `5`. By the end of the trace we have:

```
n        output printed so far
--       ---------------------
3        3,
10       3, 10,
5        3, 10, 5,
16       3, 10, 5, 16,
8        3, 10, 5, 16, 8,
4        3, 10, 5, 16, 8, 4,
2        3, 10, 5, 16, 8, 4, 2,
1        3, 10, 5, 16, 8, 4, 2, 1.
```

Tracing can be a bit tedious and error prone (that's why we get computers to do this stuff in the first place!), but it is an essential skill for a programmer to have. From this trace we can learn a lot about the way our code works. We can observe that as soon as `n` becomes a power of `2`, for example, the program will require `log2(n)` executions of the loop body to complete. We can also see that the final `1` will not be printed as output within the body of the loop, which is why we put the special call to `print` — the one that prints the period and the newline — after the loop rather than inside it.

Tracing a program is, of course, related to single-stepping through your code and being able to inspect the variables. Using the computer to single-step for you is less error prone and more convenient. Also, as your programs get more complex, they might execute many millions of steps before they get to the code that you're really interested in, so manual tracing becomes impossible. Being able to set a breakpoint where you need one is far more powerful. So we strongly encourage you to invest time in learning to use your programming environment to full effect.

There are also some great visualization tools becoming available to help you trace and understand small fragments of Python code. The one we recommend is at http://pythontutor.com/

We've cautioned against chatterbox functions, but used them here. As we learn a bit more Python, we'll be able to show you how to generate a list of values to hold the sequence, rather than having the function print them. Doing this would remove the need to have all these pesky print functions in the middle of our logic, and will make the function more useful.

The following function counts the number of decimal digits in a positive integer:

```
def num_digits(n):
    count = 0
    while n != 0:
        count = count + 1
        n = n // 10
    return count
```

A call to

```
print(num_digits(710))
```

will print `3`. Trace the execution of this function call (perhaps using the single step function in PyScripter, or the Python visualizer, or on some paper) to convince yourself that it works.

This function demonstrates an important pattern of computation called a counter. The variable `count` is initialized to `0` and then incremented each time the loop body is executed. When the loop exits, `count` contains the result — the total number of times the loop body was executed, which is the same as the number of digits.

If we wanted to only count digits that are either 0 or 5, adding a conditional before incrementing the counter will do the trick:

```
def num_zero_and_five_digits(n):
    count = 0
    while n > 0:
        digit = n % 10
        if digit == 0 or digit == 5:
            count = count + 1
        n = n // 10
    return count
```

Confirm that

```
test(num_zero_and_five_digits(1055030250) == 7)
```

passes.
Notice, however, that

```
test(num_digits(0) == 1)
```

fails. Explain why. Do you think this is a bug in the code, or a bug in the specifications, or our expectations, or the tests?

Incrementing a variable is so common that Python provides an abbreviated syntax for it:

```
>>> count = 0
>>> count += 1
>>> count
1
>>> count += 1
>>> count
2
```

`count += 1` is an abbreviation for `count = count + 1`. We pronounce the operator as "plus-equals". The increment value does not have to be `1`:

```
>>> n = 2
>>> n += 5
>>> n
7
```

There are similar abbreviations for `-=`, `*=`, `/=`, `//=` and `%=`:

```
>>> n = 2
>>> n *= 5
>>> n
10
>>> n -= 4
>>> n
6
>>> n //= 2
>>> n
3
>>> n %= 2
>>> n
1
```

Python comes with extensive documentation for all its built-in functions, and its libraries. Different systems have different ways of accessing this help. In PyScripter, click on the Help menu item, and select Python Manuals. Then search for help on the built-in function `range`. You'll get a description whose first line looks something like `range([start,] stop [, step])`.

Notice the square brackets in the description of the arguments. These are examples of meta-notation — notation that describes Python syntax, but is not part of it. The square brackets in this documentation mean that the argument is optional — the programmer can omit it. So what this first line of help tells us is that `range` must always have a `stop` argument, but it may have an optional `start` argument (which must be followed by a comma if it is present), and it can also have an optional `step` argument, preceded by a comma if it is present.

The examples from help show that `range` can have either 1, 2 or 3 arguments. The list can start at any starting value, and go up or down in increments other than 1. The documentation here also says that the arguments must be integers.

Other meta-notation you'll frequently encounter is the use of bold and italics. The bold means that these are tokens — keywords or symbols — typed into your Python code exactly as they are, whereas the italic terms stand for "something of this type". So a syntax description like `for` *variable* `in` *list* `:` means you can substitute any legal variable and any legal list when you write your Python code. Similarly, the (simplified) description `print([object, …])` tells us that the arguments to print are optional, and that it can be given any number of them. Meta-notation gives us a concise and powerful way to describe the pattern of some syntax or feature.

One of the things loops are good for is generating tables. Before computers were readily available, people had to calculate logarithms, sines and cosines, and other mathematical functions by hand. To make that easier, mathematics books contained long tables listing the values of these functions. Creating the tables was slow and boring, and they tended to be full of errors.

When computers appeared on the scene, one of the initial reactions was, "This is great! We can use the computers to generate the tables, so there will be no errors." That turned out to be true (mostly) but shortsighted. Soon thereafter, computers and calculators were so pervasive that the tables became obsolete.

Well, almost. For some operations, computers use tables of values to get an approximate answer and then perform computations to improve the approximation. In some cases, there have been errors in the underlying tables, most famously in the table the Intel Pentium processor chip used to perform floating-point division.

Although a log table is not as useful as it once was, it still makes a good example of iteration.
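Before looking at that example, it is worth seeing the three argument forms of `range` from the meta-notation discussion above in action. Wrapping each call in `list` makes the generated values visible; the specific numbers are arbitrary:

```python
print(list(range(5)))            # [0, 1, 2, 3, 4]       stop only
print(list(range(2, 8)))         # [2, 3, 4, 5, 6, 7]    start and stop
print(list(range(20, 4, -5)))    # [20, 15, 10, 5]       start, stop and a negative step
```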
The following program outputs a sequence of values in the left column and 2 raised to the power of that value in the right column:

```
for x in range(13):   # Generate numbers 0 to 12
    print(x, "\t", 2**x)
```

The string `"\t"` represents a tab character. The backslash character in `"\t"` indicates the beginning of an escape sequence. Escape sequences are used to represent invisible characters like tabs and newlines. The sequence `\n` represents a newline. An escape sequence can appear anywhere in a string; in this example, the tab escape sequence is the only thing in the string. How do you think you represent a backslash in a string?

As characters and strings are displayed on the screen, an invisible marker called the cursor keeps track of where the next character will go. After a print function, the cursor normally moves to the beginning of the next line. The tab character shifts the cursor to the right until it reaches one of the tab stops. Tabs are useful for making columns of text line up, as in the output of the previous program:

```
0       1
1       2
2       4
3       8
4       16
5       32
6       64
7       128
8       256
9       512
10      1024
11      2048
12      4096
```

Because of the tab characters between the columns, the position of the second column does not depend on the number of digits in the first column.

A two-dimensional table is a table where you read the value at the intersection of a row and a column. A multiplication table is a good example. Let's say you want to print a multiplication table for the values from `1` to `6`. A good way to start is to write a loop that prints the multiples of `2`, all on one line:

```
for i in range(1, 7):
    print(2 * i, end=" ")
print()
```

Here we've used the `range` function, but made it start its sequence at `1`. As the loop executes, the value of `i` changes from `1` to `6`. When all the elements of the range have been assigned to `i`, the loop terminates. Each time through the loop, it displays the value of `2 * i`, followed by a space. Again, the extra `end=" "` argument in the print function suppresses the usual newline and puts a space after each value instead, so all the values appear on one line. The output of the program is:

```
2 4 6 8 10 12
```

So far, so good. The next step is to encapsulate and generalize. Encapsulation is the process of wrapping a piece of code in a function, allowing you to take advantage of all the things functions are good for. You have already seen some examples of encapsulation, including `is_divisible` in a previous chapter. Generalization means taking something specific, such as printing the multiples of `2`, and making it more general, such as printing the multiples of any integer. This function encapsulates the previous loop and generalizes it to print multiples of `n`:

```
def print_multiples(n):
    for i in range(1, 7):
        print(n * i, end=" ")
    print()
```

To encapsulate, all we had to do was add the first line, which declares the name of the function and the parameter list. To generalize, all we had to do was replace the value 2 with the parameter `n`. If we call this function with the argument `2`, we get the same output as before. With the argument `3`, the output is:

```
3 6 9 12 15 18
```

With the argument `4`, the output is:

```
4 8 12 16 20 24
```

By now you can probably guess how to print a multiplication table — by calling `print_multiples` repeatedly with different arguments. In fact, we can use another loop, shown in the sketch just below. Notice how similar that loop is to the one inside `print_multiples`. All we did was replace the print function with a function call.
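The original listing of that outer loop is not reproduced here, but based on the surrounding description it is presumably something like this sketch:

```
for i in range(1, 7):
    print_multiples(i)
```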
The output of this program is a multiplication table: ``` 1 2 3 4 5 6 2 4 6 8 10 12 3 6 9 12 15 18 4 8 12 16 20 24 5 10 15 20 25 30 6 12 18 24 30 36 ``` To demonstrate encapsulation again, let’s take the code from the last section and wrap it up in a function: This process is a common development plan. We develop code by writing lines of code outside any function, or typing them in to the interpreter. When we get the code working, we extract it and wrap it up in a function. This development plan is particularly useful if you don’t know how to divide the program into functions when you start writing. This approach lets you design as you go along. You might be wondering how we can use the same variable, `i` , in both `print_multiples` and `print_mult_table` . Doesn’t it cause problems when one of the functions changes the value of the variable? The answer is no, because the `i` in `print_multiples` and the `i` in `print_mult_table` are not the same variable. Variables created inside a function definition are local; you can’t access a local variable from outside its home function. That means you are free to have multiple variables with the same name as long as they are not in the same function. Python examines all the statements in a function — if any of them assign a value to a variable, that is the clue that Python uses to make the variable a local variable. The stack diagram for this program shows that the two variables named `i` are not the same variable. They can refer to different values, and changing one does not affect the other. The value of `i` in `print_mult_table` goes from `1` to `6` . In the diagram it happens to be `3` . The next time through the loop it will be `4` . Each time through the loop, `print_mult_table` calls `print_multiples` with the current value of `i` as an argument. That value gets assigned to the parameter `n` . Inside `print_multiples` , the value of `i` goes from `1` to `6` . In the diagram, it happens to be `2` . Changing this variable has no effect on the value of `i` in `print_mult_table` . It is common and perfectly legal to have different local variables with the same name. In particular, names like `i` and `j` are used frequently as loop variables. If you avoid using them in one function just because you used them somewhere else, you will probably make the program harder to read. The visualizer at http://pythontutor.com/ shows very clearly how the two variables `i` are distinct variables, and how they have independent values. The break statement is used to immediately leave the body of its loop. The next statement to be executed is the first one after the body: ``` for i in [12, 16, 17, 24, 29]: if i % 2 == 1: # If the number is odd break # ... immediately exit the loop print(i) print("done") ``` ``` 12 16 done ``` The pre-test loop — standard loop behaviour `for` and `while` loops do their tests at the start, before executing any part of the body. They’re called pre-test loops, because the test happens before (pre) the body. `break` and `return` are our tools for adapting this standard behaviour. Sometimes we’d like to have the middle-test loop with the exit test in the middle of the body, rather than at the beginning or at the end. Or a post-test loop that puts its exit test as the last thing in the body. Other languages have different syntax and keywords for these different flavours, but Python just uses a combination of `while` and `if condition: break` to get the job done. 
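As a small, self-contained illustration of the middle-test shape (invented here, not an example from the text), consider rolling a die until a six appears:

```
import random

rng = random.Random()
rolls = 0
while True:
    value = rng.randrange(1, 7)   # work before the test: roll the die
    rolls += 1
    if value == 6:                # the exit test sits in the middle of the body
        break
    print("Rolled", value, "- not a six, rolling again")
print("Got a six after", rolls, "roll(s)")
```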
A typical example is a problem where the user has to input numbers to be summed. To indicate that there are no more inputs, the user enters a special value, often the value `-1` , or the empty string. This needs a middle-exit loop pattern: input the next number, then test whether to exit, or else process the number: The middle-test loop flowchart ``` total = 0 while True: response = input("Enter the next number. (Leave blank to end)") if response == "": break total += int(response) print("The total of the numbers you entered is ", total) ``` Convince yourself that this fits the middle-exit loop flowchart: line 3 does some useful work, lines 4 and 5 can exit the loop, and if they don’t line 6 does more useful work before the next iteration starts. The `while bool-expr` : uses the Boolean expression to determine whether to iterate again. `True` is a trivial Boolean expression, so `while True:` means always do the loop body again. This is a language idiom — a convention that most programmers will recognize immediately. Since the expression on line 2 will never terminate the loop, (it is a dummy test) the programmer must arrange to break (or return) out of the loop body elsewhere, in some other way (i.e. in lines 4 and 5 in this sample). A clever compiler or interpreter will understand that line 2 is a fake test that must always succeed, so it won’t even generate a test, and our flowchart never even put the diamond-shape dummy test box at the top of the loop! Similarly, by just moving the `if condition: break` to the end of the loop body we create a pattern for a post-test loop. Post-test loops are used when you want to be sure that the loop body always executes at least once (because the first test only happens at the end of the execution of the first loop body). This is useful, for example, if we want to play an interactive game against the user — we always want to play at least one game: ``` while True: play_the_game_once() response = input("Play again? (yes or no)") if response != "yes": break print("Goodbye!") ``` Hint: Think about where you want the exit test to happen. Once you’ve recognized that you need a loop to repeat something, think about its terminating condition — when will I want to stop iterating? Then figure out whether you need to do the test before starting the first (and every other) iteration, or at the end of the first (and every other) iteration, or perhaps in the middle of each iteration. Interactive programs that require input from the user or read from files often need to exit their loops in the middle or at the end of an iteration, when it becomes clear that there is no more data to process, or the user doesn’t want to play our game anymore. The following program implements a simple guessing game: ``` import random # We cover random numbers in the rng = random.Random() # modules chapter, so peek ahead. number = rng.randrange(1, 1000) # Get random number between [1 and 1000). guesses = 0 msg = "" while True: guess = int(input(msg + "\nGuess my number between 1 and 1000: ")) guesses += 1 if guess > number: msg += str(guess) + " is too high.\n" elif guess < number: msg += str(guess) + " is too low.\n" else: break input("\n\nGreat, you got it in {0} guesses!\n\n".format(guesses)) ``` This program makes use of the mathematical law of trichotomy (given real numbers `a` and `b` , exactly one of these three must be true: `a > b` , `a < b` , or `a == b` ). 
At line 18 there is a call to the `input` function, but we don't do anything with the result, not even assign it to a variable. This is legal in Python. Here it has the effect of popping up the input dialog window and waiting for the user to respond before the program terminates. Programmers often use the trick of doing some extra input at the end of a script, just to keep the window open.

Also notice the use of the `msg` variable, initially an empty string, on lines 6, 12 and 14. Each time through the loop we extend the message being displayed: this allows us to display the program's feedback right at the same place as we're asking for the next guess.

The `continue` statement is a control flow statement that causes the program to immediately skip the processing of the rest of the body of the loop, for the current iteration. But the loop still carries on running for its remaining iterations:

```
for i in [12, 16, 17, 24, 29, 30]:
    if i % 2 == 1:      # If the number is odd
        continue        # Don't process it
    print(i)
print("done")
```

```
12
16
24
30
done
```

As another example of generalization, imagine you wanted a program that would print a multiplication table of any size, not just the six-by-six table. You could add a parameter to `print_mult_table`:

```
def print_mult_table(high):
    for i in range(1, high+1):
        print_multiples(i)
```

We replaced the value 7 with the expression high+1. If we call `print_mult_table` with the argument 7, it displays a table with seven rows, each row still six columns wide. This is fine, except that we probably want the table to be square — with the same number of rows and columns. To do that, we add another parameter to `print_multiples` to specify how many columns the table should have. Just to be annoying, we call this parameter high, demonstrating that different functions can have parameters with the same name (just like local variables). Here's the whole program:

```
def print_multiples(n, high):
    for i in range(1, high+1):
        print(n * i, end=" ")
    print()

def print_mult_table(high):
    for i in range(1, high+1):
        print_multiples(i, high)
```

Notice that when we added a new parameter, we had to change the first line of the function (the function heading), and we also had to change the place where the function is called in `print_mult_table`. Now, when we call `print_mult_table(7)`, we get a square seven-by-seven table.

When you generalize a function appropriately, you often get a program with capabilities you didn't plan. For example, you might notice that, because `ab = ba`, all the entries in the table appear twice. You could save ink by printing only half the table. To do that, you only have to change one line of `print_mult_table`. Change `print_multiples(i, high)` to `print_multiples(i, i)` and you get a triangular table in which each row stops at the diagonal: row 1 contains just 1, row 2 contains 2 and 4, row 3 contains 3, 6 and 9, and so on.

A few times now, we have mentioned all the things functions are good for. By now, you might be wondering what exactly those things are. Here are some of them. One important benefit is chunking: a function gives a name to a group of statements, so we can think about what the group achieves rather than how it works. Think back to `play_the_game_once` in the play-again loop earlier. This chunking allowed us to put aside the details of the particular game — is it a card game, or noughts and crosses, or a role playing game — and simply focus on one isolated part of our program logic — letting the player choose whether they want to play again.

We've already seen lists of names and lists of numbers in Python. We're going to peek ahead in the textbook a little, and show a more advanced way of representing our data.
Making a pair of things in Python is as simple as putting them into parentheses, like this: We can put many pairs into a list of pairs: ``` celebs = [("<NAME>", 1963), ("<NAME>", 1937), ("<NAME>", 1994)] ``` Here is a quick sample of things we can do with structured data like this. First, print all the celebs: ``` print(celebs) print(len(celebs)) [("<NAME>", 1963), ("<NAME>", 1937), ("<NAME>", 1994)] 3 ``` Notice that the celebs list has just 3 elements, each of them pairs. Now we print the names of those celebrities born before 1980: ``` for (nm, yr) in celebs: if yr < 1980: print(nm) ``` ``` <NAME> <NAME> ``` This demonstrates something we have not seen yet in the `for` loop: instead of using a single loop control variable, we’ve used a pair of variable names, `(nm, yr)` , instead. The loop is executed three times — once for each pair in the list, and on each iteration both the variables are assigned values from the pair of data that is being handled. Now we’ll come up with an even more adventurous list of structured data. In this case, we have a list of students. Each student has a name which is paired up with another list of subjects that they are enrolled for: Here we’ve assigned a list of five elements to the variable `students` . Let’s print out each student name, and the number of subjects they are enrolled for: ``` # Print all students with a count of their courses. for (name, subjects) in students: print(name, "takes", len(subjects), "courses") ``` Python agreeably responds with the following output: ``` John takes 2 courses Vusi takes 3 courses Jess takes 4 courses Sarah takes 4 courses Zuki takes 5 courses ``` Now we’d like to ask how many students are taking CompSci. This needs a counter, and for each student we need a second loop that tests each of the subjects in turn: ``` # Count how many students are taking CompSci counter = 0 for (name, subjects) in students: for s in subjects: # A nested loop! if s == "CompSci": counter += 1 ``` The number of students taking CompSci is 3 ``` You should set up a list of your own data that interests you — perhaps a list of your CDs, each containing a list of song titles on the CD, or a list of movie titles, each with a list of movie stars who acted in the movie. You could then ask questions like “Which movies starred <NAME>?” Loops are often used in programs that compute numerical results by starting with an approximate answer and iteratively improving it. For example, before we had calculators or computers, people needed to calculate square roots manually. Newton used a particularly good method (there is some evidence that this method was known many years before). Suppose that you want to know the square root of `n` . If you start with almost any approximation, you can compute a better approximation (closer to the actual answer) with the following formula: ``` better = (approx + n/approx)/2 ``` Repeat this calculation a few times using your calculator. Can you see why each iteration brings your estimate a little closer? One of the amazing properties of this particular algorithm is how quickly it converges to an accurate answer — a great advantage for doing it manually. By using a loop and repeating this formula until the better approximation gets close enough to the previous one, we can write a function for computing the square root. (In fact, this is how your calculator finds square roots — it may have a slightly different formula and method, but it is also based on repeatedly improving its guesses.) 
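To watch the improvement happen before we wrap the formula in a loop, here is a short sketch (written for illustration, not taken from the text) that applies the update a few times to a deliberately poor first guess for the square root of 25:

```
n = 25.0
approx = 1.0                          # a deliberately poor first guess
better = (approx + n / approx) / 2    # 13.0
print(better)
better = (better + n / better) / 2    # about 7.46
print(better)
better = (better + n / better) / 2    # about 5.41
print(better)
better = (better + n / better) / 2    # about 5.015, closing in on 5.0
print(better)
```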
This is an example of an indefinite iteration problem: we cannot predict in advance how many times we’ll want to improve our guess — we just want to keep getting closer and closer. Our stopping condition for the loop will be when our old guess and our improved guess are “close enough” to each other. Ideally, we’d like the old and new guess to be exactly equal to each other when we stop. But exact equality is a tricky notion in computer arithmetic when real numbers are involved. Because real numbers are not represented absolutely accurately (after all, a number like `pi` or the square root of two has an infinite number of decimal places because it is irrational), we need to formulate the stopping test for the loop by asking “is `a` close enough to `b”` ? This stopping condition can be coded like this: ``` if abs(a-b) < 0.001: # Make this smaller for better accuracy break ``` Notice that we take the absolute value of the difference between `a` and `b` ! This problem is also a good example of when a middle-exit loop is appropriate: ``` def sqrt(n): approx = n/2.0 # Start with some or other guess at the answer while True: better = (approx + n/approx)/2.0 if abs(approx - better) < 0.001: return better approx = better # Test cases print(sqrt(25.0)) print(sqrt(49.0)) print(sqrt(81.0)) ``` The output is: ``` 5.00000000002 7.0 9.0 ``` See if you can improve the approximations by changing the stopping condition. Also, step through the algorithm (perhaps by hand, using your calculator) to see how many iterations were needed before it achieved this level of accuracy for sqrt(25). Newton’s method is an example of an algorithm: it is a mechanical process for solving a category of problems (in this case, computing square roots). Some kinds of knowledge are not algorithmic. For example, learning dates from history or your multiplication tables involves memorization of specific solutions. But the techniques you learned for addition with carrying, subtraction with borrowing, and long division are all algorithms. Or if you are an avid Sudoku puzzle solver, you might have some specific set of steps that you always follow. One of the characteristics of algorithms is that they do not require any intelligence to carry out. They are mechanical processes in which each step follows from the last according to a simple set of rules. And they’re designed to solve a general class or category of problems, not just a single problem. Understanding that hard problems can be solved by step-by-step algorithmic processes (and having technology to execute these algorithms for us) is one of the major breakthroughs that has had enormous benefits. So while the execution of the algorithm may be boring and may require no intelligence, algorithmic or computational thinking — i.e. using algorithms and automation as the basis for approaching problems — is rapidly transforming our society. Some claim that this shift towards algorithmic thinking and processes is going to have even more impact on our society than the invention of the printing press. And the process of designing algorithms is interesting, intellectually challenging, and a central part of what we call programming. Some of the things that people do naturally, without difficulty or conscious thought, are the hardest to express algorithmically. Understanding natural language is a good example. We all do it, but so far no one has been able to explain how we do it, at least not in the form of a step-by-step mechanical algorithm. 
A step-by-step process for solving a category of problems. The statements inside a loop. breakpoint A place in your program code where program execution will pause (or break), allowing you to inspect the state of the program’s variables, or single-step through individual statements, executing them one at a time. bump Programmer slang. Synonym for increment. continue statement A statement that causes the remainder of the current iteration of a loop to be skipped. The flow of execution goes back to the top of the loop, evaluates the condition, and if this is true the next iteration of the loop will begin. counter A variable used to count something, usually initialized to zero and incremented in the body of a loop. cursor An invisible marker that keeps track of where the next character will be printed. decrement Decrease by 1. definite iteration A loop where we have an upper bound on the number of times the body will be executed. Definite iteration is usually best coded as a `for` loop. development plan A process for developing a program. In this chapter, we demonstrated a style of development based on developing code to do simple, specific things and then encapsulating and generalizing. encapsulate To divide a large complex program into components (like functions) and isolate the components from each other (by using local variables, for example). escape sequence An escape character, `\` , followed by one or more printable characters used to designate a non printable character. generalize To replace something unnecessarily specific (like a constant value) with something appropriately general (like a variable or parameter). Generalization makes code more versatile, more likely to be reused, and sometimes even easier to write. increment Both as a noun and as a verb, increment means to increase by `1` . infinite loop A loop in which the terminating condition is never satisfied. indefinite iteration A loop where we just need to keep going until some condition is met. A `while` statement is used for this case. initialization (of a variable) To initialize a variable is to give it an initial value. Since in Python, variables don’t exist until they are assigned values, they are initialized when they are created. In other programming languages this is not the case, and variables can be created without being initialized, in which case they have either default or garbage values. iteration Repeated execution of a set of programming statements. loop The construct that allows allows us to repeatedly execute a statement or a group of statements until a terminating condition is satisfied. loop variable A variable used as part of the terminating condition of a loop. meta-notation Extra symbols or notation that helps describe other notation. Here we introduced square brackets, ellipses, italics, and bold as meta-notation to help describe optional, repeatable, substitutable and fixed parts of the Python syntax. middle-test loop A loop that executes some of the body, then tests for the exit condition, and then may execute some more of the body. We don’t have a special Python construct for this case, but can use `while` and `break` together. nested loop A loop inside the body of another loop. newline A special character that causes the cursor to move to the beginning of the next line. post-test loop A loop that executes the body, then tests for the exit condition. We don’t have a special Python construct for this, but can use `while` and `break` together. 
pre-test loop A loop that tests before deciding whether the execute its body. `for` and `while` are both pre-test loops. single-step A mode of interpreter execution where you are able to execute your program one step at a time, and inspect the consequences of that step. Useful for debugging and building your internal mental model of what is going on. tab A special character that causes the cursor to move to the next tab stop on the current line. trichotomy Given any real numbers `a` and `b` , exactly one of the following relations holds: `a < b` , `a > b` , or `a == b` . Thus when you can establish that two of the relations are false, you can assume the remaining one is true. trace To follow the flow of execution of a program by hand, recording the change of state of the variables and any output produced. This chapter showed us how to sum a list of items, and how to count items. The counting example also had an `if` statement that let us only count some selected items. In the previous chapter we also showed a function ``` find_first_2_letter_word ``` that allowed us an “early exit” from inside a loop by using `return` when some condition occurred. We now also have `break` to exit a loop but not the enclosing function, and `continue` to abandon the current iteration of the loop without ending the loop. Composition of list traversal, summing, counting, testing conditions and early exit is a rich collection of building blocks that can be combined in powerful ways to create many functions that are all slightly different. The first six questions are typical functions you should be able to write using only these building blocks. Write a function to count how many odd numbers are in a list. Sum up all the even numbers in a list. Sum up all the negative numbers in a list. Count how many words in a list have length 5. Sum all the elements in a list up to but not including the first even number. (Write your unit tests. What if there is no even number?) Count how many words occur in a list up to and including the first occurrence of the word “sam”. (Write your unit tests for this case too. What if “sam” does not occur?) Add a print function to Newton’s sqrt function that prints out better each time it is calculated. Call your modified function with 25 as an argument and record the results. Trace the execution of the last version of print_mult_table and figure out how it works. Write a function print_triangular_numbers(n) that prints out the first n triangular numbers. A call to print_triangular_numbers(5) would produce the following output: ``` 1 1 2 3 3 6 4 10 5 15 ``` (hint: use a web search to find out what a triangular number is.) Write a function, is_prime, which takes a single integer argument and returns True when the argument is a prime number and False otherwise. Add tests for cases like this: ``` test(is_prime(11)) test(not is_prime(35)) test(is_prime(19911121)) ``` The last case could represent your birth date. Were you born on a prime day? In a class of 100 students, how many do you think would have prime birth dates? Revisit the drunk pirate problem from the exercises in chapter 3. This time, the drunk pirate makes a turn, and then takes some steps forward, and repeats this. Our social science student now records pairs of data: the angle of each turn, and the number of steps taken after the turn. Her experimental data is [(160, 20), (-43, 10), (270, 8), (-43, 12)]. Use a turtle to draw the path taken by our drunk friend. 
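One possible way to start the drunk-pirate exercise with the standard `turtle` module is sketched below; treat it as a rough starting point rather than the official solution (the window setup and the variable names are choices made here, not given in the text):

```
import turtle

track = [(160, 20), (-43, 10), (270, 8), (-43, 12)]   # (angle, steps) pairs from the exercise

window = turtle.Screen()
pirate = turtle.Turtle()
for (angle, steps) in track:
    pirate.left(angle)        # make the turn...
    pirate.forward(steps)     # ...then stagger forward
window.mainloop()             # keep the window open until it is closed
```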
Many interesting shapes can be drawn by the turtle by giving a list of pairs like we did above, where the first item of the pair is the angle to turn, and the second item is the distance to move forward. Set up a list of pairs so that the turtle draws a house with a cross through the centre, as show here. This should be done without going over any of the lines / edges more than once, and without lifting your pen. Now read Wikipedia’s article (http://en.wikipedia.org/wiki/Eulerian_path) about Eulerian paths. Learn how to tell immediately by inspection whether it is possible to find a solution or not. If the path is possible, you’ll also know where to put your pen to start drawing, and where you should end up! What will `num_digits(0)` return? Modify it to return 1 for this case. Why does a call to `num_digits(-24)` result in an infinite loop? (hint: -1//10 evaluates to -1) Modify `num_digits` so that it works correctly with any integer value. Add these tests: ``` test(num_digits(0) == 1) test(num_digits(-12345) == 5) ``` Write a function `num_even_digits(n)` that counts the number of even digits in n. These tests should pass: ``` test(num_even_digits(123456) == 3) test(num_even_digits(2468) == 4) test(num_even_digits(1357) == 0) test(num_even_digits(0) == 1) ``` Write a function `sum_of_squares(xs)` that computes the sum of the squares of the numbers in the list xs. For example, ``` sum_of_squares([2, 3, 4]) ``` should return 4+9+16 which is 29: ``` test(sum_of_squares([2, 3, 4]) == 29) test(sum_of_squares([ ]) == 0) test(sum_of_squares([2, -3, 4]) == 29) ``` You and your friend are in a team to write a two-player game, human against computer, such as Tic-Tac-Toe / Noughts and Crosses. Your friend will write the logic to play one round of the game, while you will write the logic to allow many rounds of play, keep score, decide who plays, first, etc. The two of you negotiate on how the two parts of the program will fit together, and you come up with this simple scaffolding (which your friend will improve later): ``` # Your friend will complete this function def play_once(human_plays_first): """ Must play one round of the game. If the parameter is True, the human gets to play first, else the computer gets to play first. When the round ends, the return value of the function is one of -1 (human wins), 0 (game drawn), 1 (computer wins). """ # This is all dummy scaffolding code right at the moment... import random # See Modules chapter ... rng = random.Random() # Pick a random result between -1 and 1. result = rng.randrange(-1,2) print("Human plays first={0}, winner={1} " .format(human_plays_first, result)) return result ``` Write the main program which repeatedly calls this function to play the game, and after each round it announces the outcome as “I win!”, “You win!”, or “Game drawn!”. It then asks the player “Do you want to play again?” and either plays again, or says “Goodbye”, and terminates. Keep score of how many wins each player has had, and how many draws there have been. After each round of play, also announce the scores. Add logic so that the players take turns to play first. Compute the percentage of wins for the human, out of all games played. Also announce this at the end of each round. Draw a flowchart of your logic. # Chapter 8: Strings So far we have seen built-in types like `int` , `float` , `bool` , `str` and we’ve seen lists and pairs. Strings, lists, and pairs are qualitatively different from the others because they are made up of smaller pieces. 
In the case of strings, they’re made up of smaller strings each containing one character Types that comprise smaller pieces are called compound data types. Depending on what we are doing, we may want to treat a compound data type as a single thing, or we may want to access its parts. This ambiguity is useful. We previously saw that each turtle instance has its own attributes and a number of methods that can be applied to the instance. For example, we could set the turtle’s color, and we wrote `tess.turn(90)` . Just like a turtle, a string is also an object. So each string instance has its own attributes and methods. ``` >>> ss = "Hello, World!" >>> tt = ss.upper() >>> tt 'HELLO, WORLD!' ``` `upper` is a method that can be invoked on any string object to create a new string, in which all the characters are in uppercase. (The original string `ss` remains unchanged.) There are also methods such as `lower` , `capitalize` , and `swapcase` that do other interesting stuff. To learn what methods are available, you can consult the Help documentation, look for string methods, and read the documentation. Or, if you’re a bit lazier, simply type the following into a PyScripter script: When you type the period to select one of the methods of `ss` , PyScripter will pop up a selection window showing all the methods (there are around 70 of them — thank goodness we’ll only use a few of those!) that could be used on your string. When you type the name of the method, some further help about its parameter and return type, and its docstring, will be displayed. This is a good example of a tool — PyScripter — using the meta-information — the docstrings — provided by the module programmers. The indexing operator (Python uses square brackets to enclose the index) selects a single character substring from a string: ``` >>> fruit = "banana" >>> m = fruit[1] >>> print(m) ``` The expression `fruit[1]` selects character number 1 from `fruit` , and creates a new string containing just this one character. The variable `m` refers to the result. When we display `m` , we could get a surprise: `a` Computer scientists always start counting from zero! The letter at subscript position zero of `"banana"` is `b` . So at position `[1]` we have the letter `a` . If we want to access the zero-eth letter of a string, we just place 0, or any expression that evaluates to 0, inbetween the brackets: ``` >>> m = fruit[0] >>> print(m) b ``` The expression in brackets is called an index. An index specifies a member of an ordered collection, in this case the collection of characters in the string. The index indicates which one you want, hence the name. It can be any integer expression. We can use `enumerate` to visualize the indices: ``` >>> fruit = "banana" >>> list(enumerate(fruit)) [(0, 'b'), (1, 'a'), (2, 'n'), (3, 'a'), (4, 'n'), (5, 'a')] ``` Do not worry about `enumerate` at this point, we will see more of it in the chapter on lists. Note that indexing returns a string — Python has no special type for a single character. It is just a string of length 1. We’ve also seen lists previously. 
The same indexing notation works to extract elements from a list: ``` >>> prime_nums = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31] >>> prime_nums[4] 11 >>> friends = ["Joe", "Zoe", "Brad", "Angelina", "Zuki", "Thandi", "Paris"] >>> friends[3] 'Angelina' ``` The `len` function, when applied to a string, returns the number of characters in a string: ``` >>> fruit = "banana" >>> len(fruit) 6 ``` To get the last letter of a string, you might be tempted to try something like this: That won’t work. It causes the runtime error ``` IndexError: string index out of range ``` . The reason is that there is no character at index position 6 in `"banana"` . Because we start counting at zero, the six indexes are numbered 0 to 5. To get the last character, we have to subtract 1 from the length of `fruit` : Alternatively, we can use negative indices, which count backward from the end of the string. The expression `fruit[-1]` yields the last letter, `fruit[-2]` yields the second to last, and so on. As you might have guessed, indexing with a negative index also works like this for lists. We won’t use negative indexes in the rest of these notes — not many computer languages use this idiom, and you’ll probably be better off avoiding it. But there is plenty of Python code out on the Internet that will use this trick, so it is best to know that it exists. `for` loop A lot of computations involve processing a string one character at a time. Often they start at the beginning, select each character in turn, do something to it, and continue until the end. This pattern of processing is called a traversal. One way to encode a traversal is with a `while` statement: This loop traverses the string and displays each letter on a line by itself. The loop condition is `ix < len(fruit)` , so when `ix` is equal to the length of the string, the condition is false, and the body of the loop is not executed. The last character accessed is the one with the index `len(fruit)-1` , which is the last character in the string. But we’ve previously seen how the `for` loop can easily iterate over the elements in a list and it can do so for strings as well: Each time through the loop, the next character in the string is assigned to the variable `c` . The loop continues until no characters are left. Here we can see the expressive power the `for` loop gives us compared to the `while` loop when traversing a string. The following example shows how to use concatenation and a `for` loop to generate an abecedarian series. Abecedarian refers to a series or list in which the elements appear in alphabetical order. For example, in <NAME>’s book Make Way for Ducklings, the names of the ducklings are Jack, Kack, Lack, Mack, Nack, Ouack, Pack, and Quack. This loop outputs these names in order: The output of this program is: ``` Jack Kack Lack Mack Nack Oack Pack Qack ``` Of course, that’s not quite right because Ouack and Quack are misspelled. You’ll fix this as an exercise below. A substring of a string is obtained by taking a slice. Similarly, we can slice a list to refer to some sublist of the items in the list: ``` >>> s = "Pirates of the Caribbean" >>> print(s[0:7]) Pirates >>> print(s[11:14]) the >>> print(s[15:24]) Caribbean >>> friends = ["Joe", "Zoe", "Brad", "Angelina", "Zuki", "Thandi", "Paris"] >>> print(friends[2:4]) ['Brad', 'Angelina'] ``` The operator `[n:m]` returns the part of the string from the n’th character to the m’th character, including the first but excluding the last. 
This behavior makes sense if you imagine the indices pointing between the characters, as in the following diagram: If you imagine this as a piece of paper, the slice operator `[n:m]` copies out the part of the paper between the `n` and `m` positions. Provided `m` and `n` are both within the bounds of the string, your result will be of length (m-n). Three tricks are added to this: if you omit the first index (before the colon), the slice starts at the beginning of the string (or list). If you omit the second index, the slice extends to the end of the string (or list). Similarly, if you provide value for `n` that is bigger than the length of the string (or list), the slice will take all the values up to the end. (It won’t give an “out of range” error like the normal indexing operation does.) Thus: ``` >>> fruit = "banana" >>> fruit[:3] 'ban' >>> fruit[3:] 'ana' >>> fruit[3:999] 'ana' ``` What do you think `s[:]` means? What about `friends[4:]` ? The comparison operators work on strings. To see if two strings are equal: Other comparison operations are useful for putting words in lexicographical order: This is similar to the alphabetical order you would use with a dictionary, except that all the uppercase letters come before all the lowercase letters. As a result: ``` Your word, Zebra, comes before banana. ``` A common way to address this problem is to convert strings to a standard format, such as all lowercase, before performing the comparison. A more difficult problem is making the program realize that zebras are not fruit. It is tempting to use the `[]` operator on the left side of an assignment, with the intention of changing a character in a string. For example: Instead of producing the output `Jello, world!` , this code produces the runtime error ``` TypeError: 'str' object does not support item assignment ``` . Strings are immutable, which means you can’t change an existing string. The best you can do is create a new string that is a variation on the original: The solution here is to concatenate a new first letter onto a slice of `greeting` . This operation has no effect on the original string. `in` and `not in` operators The `in` operator tests for membership. When both of the arguments to `in` are strings, `in` checks whether the left argument is a substring of the right argument. ``` >>> "p" in "apple" True >>> "i" in "apple" False >>> "ap" in "apple" True >>> "pa" in "apple" False ``` Note that a string is a substring of itself, and the empty string is a substring of any other string. (Also note that computer scientists like to think about these edge cases quite carefully!) ``` >>> "a" in "a" True >>> "apple" in "apple" True >>> "" in "a" True >>> "" in "apple" True ``` The `not in` operator returns the logical opposite results of `in` : ``` >>> "x" not in "apple" True ``` Combining the `in` operator with string concatenation using `+` , we can write a function that removes all the vowels from a string: ``` def remove_vowels(s): vowels = "aeiouAEIOU" s_sans_vowels = "" for x in s: if x not in vowels: s_sans_vowels += x return s_sans_vowels test(remove_vowels("compsci") == "cmpsc") test(remove_vowels("aAbEefIijOopUus") == "bfjps") ``` `find` function What does the following function do? ``` def find(strng, ch): """ Find and return the index of ch in strng. Return -1 if ch does not occur in strng. 
""" ix = 0 while ix < len(strng): if strng[ix] == ch: return ix ix += 1 return -1 test(find("Compsci", "p") == 3) test(find("Compsci", "C") == 0) test(find("Compsci", "i") == 6) test(find("Compsci", "x") == -1) ``` In a sense, `find` is the opposite of the indexing operator. Instead of taking an index and extracting the corresponding character, it takes a character and finds the index where that character appears. If the character is not found, the function returns `-1` . This is another example where we see a `return` statement inside a loop. If `strng[ix] == ch` , the function returns immediately, breaking out of the loop prematurely. If the character doesn’t appear in the string, then the program exits the loop normally and returns `-1` . This pattern of computation is sometimes called a eureka traversal or short-circuit evaluation, because as soon as we find what we are looking for, we can cry “Eureka!”, take the short-circuit, and stop looking. The following program counts the number of times the letter `a` appears in a string, and is another example of the counter pattern introduced in counting. ``` def count_a(text): count = 0 for c in text: if c == "a": count += 1 return(count) test(count_a("banana") == 3) ``` To find the locations of the second or third occurrence of a character in a string, we can modify the `find` function, adding a third parameter for the starting position in the search string: ``` def find2(strng, ch, start): ix = start while ix < len(strng): if strng[ix] == ch: return ix ix += 1 return -1 test(find2("banana", "a", 2) == 3) ``` The call ``` find2("banana", "a", 2) ``` now returns `3` , the index of the first occurrence of “a” in “banana” starting the search at index 2. What does ``` find2("banana", "n", 3) ``` return? If you said, 4, there is a good chance you understand how `find2` works. Better still, we can combine `find` and `find2` using an optional parameter: ``` def find(strng, ch, start=0): ix = start while ix < len(strng): if strng[ix] == ch: return ix ix += 1 return -1 ``` When a function has an optional parameter, the caller may provide a matching argument. If the third argument is provided to `find` , it gets assigned to `start` . But if the caller leaves the argument out, then start is given a default value indicated by the assignment `start=0` in the function definition. So the call ``` find("banana", "a", 2) ``` to this version of `find` behaves just like `find2` , while in the call `find("banana", "a")` , `start` will be set to the default value of `0` . Adding another optional parameter to `find` makes it search from a starting position, up to but not including the end position: ``` def find(strng, ch, start=0, end=None): ix = start if end is None: end = len(strng) while ix < end: if strng[ix] == ch: return ix ix += 1 return -1 ``` The optional value for `end` is interesting: we give it a default value `None` if the caller does not supply any argument. In the body of the function we test what `end` is, and if the caller did not supply any argument, we reassign `end` to be the length of the string. If the caller has supplied an argument for `end` , however, the caller’s value will be used in the loop. The semantics of `start` and `end` in this function are precisely the same as they are in the `range` function. Here are some test cases that should pass: ``` ss = "Python strings have some interesting methods." 
test(find(ss, "s") == 7) test(find(ss, "s", 7) == 7) test(find(ss, "s", 8) == 13) test(find(ss, "s", 8, 13) == -1) test(find(ss, ".") == len(ss)-1) ``` `find` method Now that we’ve done all this work to write a powerful `find` function, we can reveal that strings already have their own built-in `find` method. It can do everything that our code can do, and more! The built-in `find` method is more general than our version. It can find substrings, not just single characters: ``` >>> "banana".find("nan") >>> "banana".find("na", 3) 4 ``` Usually we’d prefer to use the methods that Python provides rather than reinvent our own equivalents. But many of the built-in functions and methods make good teaching exercises, and the underlying techniques you learn are your building blocks to becoming a proficient programmer. `split` method One of the most useful methods on strings is the `split` method: it splits a single multi-word string into a list of individual words, removing all the whitespace between them. (Whitespace means any tabs, newlines, or spaces.) This allows us to read input as a single string, and split it into words. ``` >>> ss = "Well I never did said Alice" >>> wds = ss.split() >>> wds ['Well', 'I', 'never', 'did', 'said', 'Alice'] ``` We’ll often work with strings that contain punctuation, or tab and newline characters, especially, as we’ll see in a future chapter, when we read our text from files or from the Internet. But if we’re writing a program, say, to count word frequencies or check the spelling of each word, we’d prefer to strip off these unwanted characters. We’ll show just one example of how to strip punctuation from a string. Remember that strings are immutable, so we cannot change the string with the punctuation — we need to traverse the original string and create a new string, omitting any punctuation: ``` punctuation = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~" def remove_punctuation(s): s_sans_punct = "" for letter in s: if letter not in punctuation: s_sans_punct += letter return s_sans_punct ``` Setting up that first assignment is messy and error-prone. Fortunately, the Python `string` module already does it for us. So we will make a slight improvement to this program — we’ll import the `string` module and use its definition: ``` import string def remove_punctuation(s): s_without_punct = "" for letter in s: if letter not in string.punctuation: s_without_punct += letter return s_without_punct test(remove_punctuation('"Well, I never did!", said Alice.') == "Well I never did said Alice") test(remove_punctuation("Are you very, very, sure?") == "Are you very very sure") ``` Composing together this function and the `split` method from the previous section makes a useful combination — we’ll clean out the punctuation, and `split` will clean out the newlines and tabs while turning the string into a list of words: ``` my_story = """ Pythons are constrictors, which means that they will 'squeeze' the life out of their prey. They coil themselves around their prey and with each breath the creature takes the snake will squeeze a little tighter until they stop breathing completely. Once the heart stops the prey is swallowed whole. The entire animal is digested in the snake's stomach except for fur or feathers. What do you think happens to the fur, feathers, beaks, and eggshells? The 'extra stuff' gets passed out as --- you guessed it --- snake POOP! """ wds = remove_punctuation(my_story).split() print(wds) ``` The output: ``` ['Pythons', 'are', 'constrictors', ... 
, 'it', 'snake', 'POOP'] ``` There are other useful string methods, but this book isn’t intended to be a reference manual. On the other hand, the Python Library Reference is. Along with a wealth of other documentation, it is available at the Python website. The easiest and most powerful way to format a string in Python 3 is to use the `format` method. To see how this works, let’s start with a few examples: ``` s1 = "His name is {0}!".format("Arthur") print(s1) name = "Alice" age = 10 s2 = "I am {1} and I am {0} years old.".format(age, name) print(s2) n1 = 4 n2 = 5 s3 = "2**10 = {0} and {1} * {2} = {3:f}".format(2**10, n1, n2, n1 * n2) print(s3) ``` Running the script produces: ``` His name is Arthur! I am Alice and I am 10 years old. 2**10 = 1024 and 4 * 5 = 20.000000 ``` The template string contains place holders, ``` ... {0} ... {1} ... {2} ... ``` etc. The `format` method substitutes its arguments into the place holders. The numbers in the place holders are indexes that determine which argument gets substituted — make sure you understand line 6 above! But there’s more! Each of the replacement fields can also contain a format specification — it is always introduced by the `:` symbol (Line 11 above uses one.) This modifies how the substitutions are made into the template, and can control things like: `<` , center `^` , or right `>` `10` ) `f` , as we did in line 11 of the code above, or perhaps we’ll ask integer numbers to be converted to hexadecimal using `x` ) `.2f` is useful for working with currencies to two decimal places.) Let’s do a few simple and common examples that should be enough for most needs. If you need to do anything more esoteric, use help and read all the powerful, gory details. ``` n1 = "Paris" n2 = "Whitney" n3 = "Hilton" print("Pi to three decimal places is {0:.3f}".format(3.1415926)) print("123456789 123456789 123456789 123456789 123456789 123456789") print("|||{0:<15}|||{1:^15}|||{2:>15}|||Born in {3}|||" .format(n1,n2,n3,1981)) print("The decimal value {0} converts to hex value {0:x}" .format(123456)) ``` This script produces the output: ``` Pi to three decimal places is 3.142 123456789 123456789 123456789 123456789 123456789 123456789 |||Paris ||| Whitney ||| Hilton|||Born in 1981||| The decimal value 123456 converts to hex value 1e240 ``` You can have multiple placeholders indexing the same argument, or perhaps even have extra arguments that are not referenced at all: ``` letter = """ Dear {0} {2}. {0}, I have an interesting money-making proposition for you! If you deposit $10 million into my bank account, I can double your money ... """ print(letter.format("Paris", "Whitney", "Hilton")) print(letter.format("Bill", "Henry", "Gates")) ``` This produces the following: ``` Dear <NAME>. Paris, I have an interesting money-making proposition for you! If you deposit $10 million into my bank account, I can double your money ... Dear <NAME>. Bill, I have an interesting money-making proposition for you! If you deposit $10 million into my bank account I can double your money ... ``` As you might expect, you’ll get an index error if your placeholders refer to arguments that you do not provide: ``` >>> "hello {3}".format("Dave") Traceback (most recent call last): File "<interactive input>", line 1, in <module> IndexError: tuple index out of range ``` The following example illustrates the real utility of string formatting. 
First, we’ll try to print a table without using string formatting: ``` print("i\ti**2\ti**3\ti**5\ti**10\ti**20") for i in range(1, 11): print(i, "\t", i**2, "\t", i**3, "\t", i**5, "\t", i**10, "\t", i**20) ``` This program prints out a table of various powers of the numbers from 1 to 10. (This assumes that the tab width is 8. You might see something even worse than this if you tab width is set to 4.) In its current form it relies on the tab character ( `\t` ) to align the columns of values, but this breaks down when the values in the table get larger than the tab width: One possible solution would be to change the tab width, but the first column already has more space than it needs. The best solution would be to set the width of each column independently. As you may have guessed by now, string formatting provides a much nicer solution. We can also right-justify each field: ``` layout = "{0:>4}{1:>6}{2:>6}{3:>8}{4:>13}{5:>24}" print(layout.format("i", "i**2", "i**3", "i**5", "i**10", "i**20")) for i in range(1, 11): print(layout.format(i, i**2, i**3, i**5, i**10, i**20)) ``` Running this version produces the following (much more satisfying) output: This chapter introduced a lot of new ideas. The following summary may prove helpful in remembering what you learned. indexing ( `[]` ) Access a single character in a string using its position (starting from 0). Example: `"This"[2]` evaluates to `"i"` . length function ( `len` ) Returns the number of characters in a string. Example: `len("happy")` evaluates to `5` . for loop traversal ( `for` ) Traversing a string means accessing each character in the string, one at a time. For example, the following for loop: ``` for ch in "Example": ... ``` executes the body of the loop 7 times with different values of `ch` each time. slicing ( `[:]` ) A slice is a substring of a string. Example: ``` 'bananas and cream'[3:6] ``` evaluates to `ana` (so does ``` 'bananas and cream'[1:4] ``` ). string comparison ( `>, <, >=, <=, ==, !=` ) The six common comparison operators work with strings, evaluating according to `lexicographical` order. Examples: `"apple" < "banana"` evaluates to `True` . `"Zeta" < "Appricot"` evaluates to `False` . ``` "Zebra" <= "aardvark" ``` evaluates to `True` because all upper case letters precede lower case letters. in and not in operator ( `in` , `not in` ) The `in` operator tests for membership. In the case of strings, it tests whether one string is contained inside another string. Examples: ``` "heck" in "I'll be checking for you." ``` evaluates to `True` . ``` "cheese" in "I'll be checking for you." ``` evaluates to `False` . compound data type A data type in which the values are made up of components, or elements, that are themselves values. default value The value given to an optional parameter if no argument for it is provided in the function call. docstring A string constant on the first line of a function or module definition (and as we will see later, in class and method definitions as well). Docstrings provide a convenient way to associate documentation with code. Docstrings are also used by programming tools to provide interactive help. dot notation Use of the dot operator, `.` , to access methods and attributes of an object. immutable data value A data value which cannot be modified. Assignments to elements or slices (sub-parts) of immutable values cause a runtime error. index A variable or value used to select a member of an ordered collection, such as a character from a string, or an element from a list. 
optional parameter A parameter written in a function header with an assignment to a default value which it will receive if no corresponding argument is given for it in the function call. short-circuit evaluation A style of programming that shortcuts extra work as soon as the outcome is know with certainty. In this chapter our `find` function returned as soon as it found what it was looking for; it didn’t traverse all the rest of the items in the string. slice A part of a string (substring) specified by a range of indices. More generally, a subsequence of any sequence type in Python can be created using the slice operator ( `sequence[start:stop]` ). traverse To iterate through the elements of a collection, performing a similar operation on each. whitespace Any of the characters that move the cursor without printing visible characters. The constant `string.whitespace` contains all the white-space characters. We suggest you create a single file containing the test scaffolding from our previous chapters, and put all functions that require tests into that file. ``` >>> "Python"[1] >>> "Strings are sequences of characters."[5] >>> len("wonderful") >>> "Mystery"[:4] >>> "p" in "Pineapple" >>> "apple" in "Pineapple" >>> "pear" not in "Pineapple" >>> "apple" > "pineapple" >>> "pineapple" < "Peach" ``` ``` prefixes = "JKLMNOPQ" suffix = "ack" for letter in prefixes: print(letter + suffix) ``` so that `Ouack` and `Quack` are spelled correctly. ``` fruit = "banana" count = 0 for char in fruit: if char == "a": count += 1 print(count) ``` in a function named `count_letters` , and generalize it so that it accepts the string and the letter as arguments. Make the function return the number of characters, rather than print the answer. The caller should do the printing. Now rewrite the count_letters function so that instead of traversing the string, it repeatedly calls the find method, with the optional third parameter to locate new occurrences of the letter being counted. Assign to a variable in your program a triple-quoted string that contains your favourite paragraph of text — perhaps a poem, a speech, instructions to bake a cake, some inspirational verses, etc. Write a function which removes all punctuation from the string, breaks the string into a list of words, and counts the number of words in your text that contain the letter “e”. Your program should print an analysis of the text like this: ``` Your text contains 243 words, of which 109 (44.8%) contain an "e". 
```

```
     1   2   3   4   5   6   7   8   9  10  11  12
  :--------------------------------------------------
 1:  1   2   3   4   5   6   7   8   9  10  11  12
 2:  2   4   6   8  10  12  14  16  18  20  22  24
 3:  3   6   9  12  15  18  21  24  27  30  33  36
 4:  4   8  12  16  20  24  28  32  36  40  44  48
 5:  5  10  15  20  25  30  35  40  45  50  55  60
 6:  6  12  18  24  30  36  42  48  54  60  66  72
 7:  7  14  21  28  35  42  49  56  63  70  77  84
 8:  8  16  24  32  40  48  56  64  72  80  88  96
 9:  9  18  27  36  45  54  63  72  81  90  99 108
10: 10  20  30  40  50  60  70  80  90 100 110 120
11: 11  22  33  44  55  66  77  88  99 110 121 132
12: 12  24  36  48  60  72  84  96 108 120 132 144
```

```
test(reverse("happy") == "yppah")
test(reverse("Python") == "nohtyP")
test(reverse("") == "")
test(reverse("a") == "a")
```

```
test(mirror("good") == "gooddoog")
test(mirror("Python") == "PythonnohtyP")
test(mirror("") == "")
test(mirror("a") == "aa")
```

```
test(remove_letter("a", "apple") == "pple")
test(remove_letter("a", "banana") == "bnn")
test(remove_letter("z", "banana") == "banana")
test(remove_letter("i", "Mississippi") == "Msssspp")
test(remove_letter("b", "") == "")
test(remove_letter("b", "c") == "c")
```

(Hint: use your `reverse` function to make this easy!):

```
test(is_palindrome("abba"))
test(not is_palindrome("abab"))
test(is_palindrome("tenet"))
test(not is_palindrome("banana"))
test(is_palindrome("straw warts"))
test(is_palindrome("a"))
# test(is_palindrome(""))   # Is an empty string a palindrome?
```

```
test(count("is", "Mississippi") == 2)
test(count("an", "banana") == 2)
test(count("ana", "banana") == 2)
test(count("nana", "banana") == 1)
test(count("nanan", "banana") == 0)
test(count("aaa", "aaaaaa") == 4)
```

```
test(remove("an", "banana") == "bana")
test(remove("cyc", "bicycle") == "bile")
test(remove("iss", "Mississippi") == "Missippi")
test(remove("eggs", "bicycle") == "bicycle")
```

```
test(remove_all("an", "banana") == "ba")
test(remove_all("cyc", "bicycle") == "bile")
test(remove_all("iss", "Mississippi") == "Mippi")
test(remove_all("eggs", "bicycle") == "bicycle")
```

# Chapter 9: Tuples

We saw earlier that we could group together pairs of values by surrounding with parentheses. Recall this example: This is an example of a data structure --- a mechanism for grouping and organizing data to make it easier to use. The pair is an example of a tuple. Generalizing this, a tuple can be used to group any number of items into a single compound value. Syntactically, a tuple is a comma-separated sequence of values. Although it is not necessary, it is conventional to enclose tuples in parentheses:

```
>>> julia = ("Julia", "Roberts", 1967, "Duplicity", 2009, "Actress", "Atlanta, Georgia")
```

Tuples are useful for representing what other languages often call records --- some related information that belongs together, like your student record. There is no description of what each of these fields means, but we can guess. A tuple lets us "chunk" together related information and use it as a single thing. Tuples support the same sequence operations as strings. The index operator selects an element from a tuple.

```
>>> julia[2]
1967
```

But if we try to use item assignment to modify one of the elements of the tuple, we get an error:

```
>>> julia[0] = "X"
TypeError: 'tuple' object does not support item assignment
```

So like strings, tuples are immutable. Once Python has created a tuple in memory, it cannot be changed. Of course, even if we can't modify the elements of a tuple, we can always make the `julia` variable reference a new tuple holding different information.
To construct the new tuple, it is convenient that we can slice parts of the old tuple and join up the bits to make the new tuple. So if `julia` has a new recent film, we could change her variable to reference a new tuple that used some information from the old one:

```
>>> julia = julia[:3] + ("Eat Pray Love", 2010) + julia[5:]
>>> julia
("Julia", "Roberts", 1967, "Eat Pray Love", 2010, "Actress", "Atlanta, Georgia")
```

To create a tuple with a single element (but you're probably not likely to do that too often), we have to include the final comma, because without the final comma, Python treats the `(5)` below as an integer in parentheses:

```
>>> tup = (5,)
>>> type(tup)
<class 'tuple'>
>>> x = (5)
>>> type(x)
<class 'int'>
```

Python has a very powerful tuple assignment feature that allows a tuple of variables on the left of an assignment to be assigned values from a tuple on the right of the assignment. (We already saw this used for pairs, but it generalizes.)

```
(name, surname, b_year, movie, m_year, profession, b_place) = julia
```

This does the equivalent of seven assignment statements, all on one easy line. One requirement is that the number of variables on the left must match the number of elements in the tuple. One way to think of tuple assignment is as tuple packing/unpacking. In tuple packing, the values on the right of the assignment are 'packed' together into a tuple:

```
>>> b = ("Bob", 19, "CS")    # tuple packing
```

In tuple unpacking, the values in a tuple on the right are 'unpacked' into the variables/names on the left:

```
>>> b = ("Bob", 19, "CS")
>>> (name, age, studies) = b    # tuple unpacking
>>> name
'Bob'
>>> age
19
>>> studies
'CS'
```

Once in a while, it is useful to swap the values of two variables. With conventional assignment statements, we have to use a temporary variable. For example, to swap `a` and `b`:

```
temp = a
a = b
b = temp
```

Tuple assignment solves this problem neatly: `(a, b) = (b, a)` The left side is a tuple of variables; the right side is a tuple of values. Each value is assigned to its respective variable. All the expressions on the right side are evaluated before any of the assignments. This feature makes tuple assignment quite versatile. Naturally, the number of variables on the left and the number of values on the right have to be the same:

```
>>> (a, b, c, d) = (1, 2, 3)
ValueError: need more than 3 values to unpack
```

A function can only return a single value, but by making that value a tuple, we can effectively group together as many values as we like, and return them together. This is very useful --- we often want to know some batsman's highest and lowest score, or we want to find the mean and the standard deviation, or we want to know the year, the month, and the day, or if we're doing some ecological modelling we may want to know the number of rabbits and the number of wolves on an island at a given time. For example, we could write a function that returns both the area and the circumference of a circle of radius r:

```
def f(r):
    """ Return (circumference, area) of a circle of radius r """
    c = 2 * math.pi * r
    a = math.pi * r * r
    return (c, a)
```

We saw in an earlier chapter that we could make a list of pairs, and we had an example where one of the items in the tuple was itself a list. Tuple items can themselves be other tuples.
For example, we could improve the information about our movie stars to hold the full date of birth rather than just the year, and we could have a list of some of her movies and dates that they were made, and so on:

```
julia_more_info = ( ("Julia", "Roberts"), (8, "October", 1967), "Actress",
                    ("Atlanta", "Georgia"),
                    [ ("Duplicity", 2009),
                      ("<NAME>", 1999),
                      ("Pretty Woman", 1990),
                      ("<NAME>", 2000),
                      ("<NAME>", 2010),
                      ("<NAME>", 2003),
                      ("<NAME>", 2004) ])
```

Notice in this case that the tuple has just five elements --- but each of those in turn can be another tuple, a list, a string, or any other kind of Python value. This property is known as being heterogeneous, meaning that it can be composed of elements of different types.

data structure An organization of data for the purpose of making it easier to use. tuple An immutable data value that contains related elements. Tuples are used to group together related data, such as a person's name, their age, and their gender. tuple assignment An assignment to all of the elements in a tuple using a single assignment statement. Tuple assignment occurs simultaneously rather than in sequence, making it useful for swapping values.

# Chapter 10: Event handling

Most programs and devices like a cellphone respond to events – things that happen. For example, you might move your mouse, and the computer responds. Or you click a button, and the program does something interesting. In this chapter we'll touch very briefly on how event-driven programming works. Here's a program with some new features. Copy it into your workspace, run it. When the turtle window opens, press the arrow keys and make tess move about!

```
import turtle

turtle.setup(400,500)              # Determine the window size
wn = turtle.Screen()               # Get a reference to the window
wn.title("Handling keypresses!")   # Change the window title
wn.bgcolor("lightgreen")           # Set the background color
tess = turtle.Turtle()             # Create our favorite turtle

# The next four functions are our "event handlers".
def h1():
    tess.forward(30)

def h2():
    tess.left(45)

def h3():
    tess.right(45)

def h4():
    wn.bye()                       # Close down the turtle window

# These lines "wire up" keypresses to the handlers we've defined.
wn.onkey(h1, "Up")
wn.onkey(h2, "Left")
wn.onkey(h3, "Right")
wn.onkey(h4, "q")

# Now we need to tell the window to start listening for events,
# If any of the keys that we're monitoring is pressed, its
# handler will be called.
wn.listen()
wn.mainloop()
```

Here are some points to note: We must call the window's `listen` method at line 31, otherwise it won't notice our keypresses. The event handlers are named `h1`, `h2` and so on, but we can choose better names. The handlers can be arbitrarily complex functions that call other functions, etc. Pressing the `q` key on the keyboard calls function `h4` (because we bound the `q` key to `h4` on line 26). While executing `h4`, the window's `bye` method (line 20) closes the turtle window, which causes the window's mainloop call (line 32) to end its execution. Since we did not write any more statements after line 32, this means that our program has completed everything, so it too will terminate. A mouse event is a bit different from a keypress event because its handler needs two parameters to receive x,y coordinate information telling us where the mouse was when the event occurred.
turtle.setup(400,500) wn = turtle.Screen() wn.title("How to handle mouse clicks on the window!") wn.bgcolor("lightgreen") tess = turtle.Turtle() tess.color("purple") tess.pensize(3) tess.shape("circle") def h1(x, y): tess.goto(x, y) wn.onclick(h1) # Wire up a click on the window. wn.mainloop() ``` There is a new turtle method used at line 14 – this allows us to move the turtle to an absolute coordinate position. (Most of the examples that we’ve seen so far move the turtle relative to where it currently is). So what this program does is move the turtle (and draw a line) to wherever the mouse is clicked. Try it out! If we add this line before line 14, we’ll learn a useful debugging trick too: ``` wn.title("Got click at coords {0}, {1}".format(x, y)) ``` Because we can easily change the text in the window’s title bar, it is a useful place to display occasional debugging or status information. (Of course, this is not the real purpose of the window title!) But there is more! Not only can the window receive mouse events: individual turtles can also have their own handlers for mouse clicks. The turtle that “receives” the click event will be the one under the mouse. So we’ll create two turtles. Each will bind a handler to its own `onclick` event. And the two handlers can do different things for their turtles. turtle.setup(400,500) # Determine the window size wn = turtle.Screen() # Get a reference to the window wn.title("Handling mouse clicks!") # Change the window title wn.bgcolor("lightgreen") # Set the background color tess = turtle.Turtle() # Create two turtles tess.color("purple") alex = turtle.Turtle() # Move them apart alex.color("blue") alex.forward(100) def handler_for_tess(x, y): wn.title("Tess clicked at {0}, {1}".format(x, y)) tess.left(42) tess.forward(30) def handler_for_alex(x, y): wn.title("Alex clicked at {0}, {1}".format(x, y)) alex.right(84) alex.forward(50) tess.onclick(handler_for_tess) alex.onclick(handler_for_alex) Run this, click on the turtles, see what happens! Alarm clocks, kitchen timers, and thermonuclear bombs in James Bond movies are set to create an “automatic” event after a certain interval. The turtle module in Python has a timer that can cause an event when its time is up. turtle.setup(400,500) wn = turtle.Screen() wn.title("Using a timer") wn.bgcolor("lightgreen") tess = turtle.Turtle() tess.color("purple") tess.pensize(3) def h1(): tess.forward(100) tess.left(56) wn.ontimer(h1, 2000) wn.mainloop() ``` On line 16 the timer is started and set to explode in 2000 milliseconds (2 seconds). When the event does occur, the handler is called, and tess springs into action. Unfortunately, when one sets a timer, it only goes off once. So a common idiom, or style, is to restart the timer inside the handler. In this way the timer will keep on giving new events. Try this program: turtle.setup(400,500) wn = turtle.Screen() wn.title("Using a timer to get events!") wn.bgcolor("lightgreen") tess = turtle.Turtle() tess.color("purple") def h1(): tess.forward(100) tess.left(56) wn.ontimer(h1, 60) h1() wn.mainloop() ``` A state machine is a system that can be in one of a few different states. We draw a state diagram to represent the machine, where each state is drawn as a circle or an ellipse. Certain events occur which cause the system to leave one state and transition into a different state. These state transitions are usually drawn as an arrow on the diagram. This idea is not new: when first turning on a cellphone, it goes into a state which we could call “Awaiting PIN”. 
When the correct PIN is entered, it transitions into a different state – say “Ready”. Then we could lock the phone, and it would enter a “Locked” state, and so on. A simple state machine that we encounter often is a traffic light. Here is a state diagram which shows that the machine continually cycles through three different states, which we’ve numbered 0, 1 and 2. We’re going to build a program that uses a turtle to simulate the traffic lights. There are three lessons here. The first shows off some different ways to use our turtles. The second demonstrates how we would program a state machine in Python, by using a variable to keep track of the current state, and a number of different `if` statements to inspect the current state, and take the actions as we change to a different state. The third lesson is to use events from the keyboard to trigger the state changes. Copy and run this program. Make sure you understand what each line does, consulting the documentation as you need to. ``` import turtle # Tess becomes a traffic light. turtle.setup(400,500) wn = turtle.Screen() wn.title("Tess becomes a traffic light!") wn.bgcolor("lightgreen") tess = turtle.Turtle() def draw_housing(): """ Draw a nice housing to hold the traffic lights """ tess.pensize(3) tess.color("black", "darkgrey") tess.begin_fill() tess.forward(80) tess.left(90) tess.forward(200) tess.circle(40, 180) tess.forward(200) tess.left(90) tess.end_fill() draw_housing() tess.penup() # Position tess onto the place where the green light should be tess.forward(40) tess.left(90) tess.forward(50) # Turn tess into a big green circle tess.shape("circle") tess.shapesize(3) tess.fillcolor("green") # A traffic light is a kind of state machine with three states, # Green, Orange, Red. We number these states 0, 1, 2 # When the machine changes state, we change tess' position and # her fillcolor. # This variable holds the current state of the machine state_num = 0 def advance_state_machine(): global state_num if state_num == 0: # Transition from state 0 to state 1 tess.forward(70) tess.fillcolor("orange") state_num = 1 elif state_num == 1: # Transition from state 1 to state 2 tess.forward(70) tess.fillcolor("red") state_num = 2 else: # Transition from state 2 to state 0 tess.back(140) tess.fillcolor("green") state_num = 0 # Bind the event handler to the space key. wn.onkey(advance_state_machine, "space") wn.listen() # Listen for events wn.mainloop() ``` The new Python statement is at line 46. The `global` keyword tells Python not to create a new local variable for `state_num` (in spite of the fact that the function assigns to this variable at lines 50, 54, and 58). Instead, in this function, `state_num` always refers to the variable that was created at line 42. What the code in ``` advance_state_machine ``` does is advance from whatever the current state is, to the next state. On the state change we move tess to her new position, change her color, and, of course, we assign to `state_num` the number of the new state we’ve just entered. Each time the space bar is pressed, the event handler causes the traffic light machine to move to its new state. bind We bind a function (or associate it) with an event, meaning that when the event occurs, the function is called to handle it. event Something that happens “outside” the normal control flow of our program, usually from some user action. Typical events are mouse operations and keypresses. We’ve also seen that a timer can be primed to create an event. 
handler A function that is called in response to an event. Add some new key bindings to the first sample program: Change the traffic light program so that changes occur automatically, driven by a timer. In an earlier chapter we saw two turtle methods, `hideturtle` and `showturtle` that can hide or show a turtle. This suggests that we could take a different approach to the traffic lights program. Add to your program above as follows: draw a second housing for another set of traffic lights. Create three separate turtles to represent each of the green, orange and red lights, and position them appropriately within your new housing. As your state changes occur, just make one of the three turtles visible at any time. Once you’ve made the changes, sit back and ponder some deep thoughts: you’ve now got two different ways to use turtles to simulate the traffic lights, and both seem to work. Is one approach somehow preferable to the other? Which one more closely resembles reality – i.e. the traffic lights in your town? Now that you’ve got a traffic light program with different turtles for each light, perhaps the visibility / invisibility trick wasn’t such a great idea. If we watch traffic lights, they turn on and off – but when they’re off they are still there, perhaps just a darker color. Modify the program now so that the lights don’t disappear: they are either on, or off. But when they’re off, they’re still visible. Your traffic light controller program has been patented, and you’re about to become seriously rich. But your new client needs a change. They want four states in their state machine: Green, then Green and Orange together, then Orange only, and then Red. Additionally, they want different times spent in each state. The machine should spend 3 seconds in the Green state, followed by one second in the Green+Orange state, then one second in the Orange state, and then 2 seconds in the Red state. Change the logic in the state machine. If you don’t know how tennis scoring works, ask a friend or consult Wikipedia. A single game in tennis between player A and player B always has a score. We want to think about the “state of the score” as a state machine. The game starts in state (0, 0), meaning neither player has any score yet. We’ll assume the first element in this pair is the score for player A. If player A wins the first point, the score becomes (15, 0). If B wins the first point, the state becomes (0, 15). Below are the first few states and transitions for a state diagram. In this diagram, each state has two possible outcomes (A wins the next point, or B does), and the uppermost arrow is always the transition that happens when A wins the point. Complete the diagram, showing all transitions and all states. (Hint: there are twenty states, if you include the duece state, the advantage states, and the “A wins” and “B wins” states in your diagram.) # Chapter 11: Lists A list is an ordered collection of values. The values that make up a list are called its elements, or its items. We will use the term element or item to mean the same thing. Lists are similar to strings, which are ordered collections of characters, except that the elements of a list can be of any type. Lists and strings — and other collections that maintain the order of their items — are called sequences. There are several ways to create a new list; the simplest is to enclose the elements in square brackets ( `[` and `]` ): ``` ps = [10, 20, 30, 40] qs = ["spam", "bungee", "swallow"] ``` The first example is a list of four integers. 
The second is a list of three strings. The elements of a list don’t have to be the same type. The following list contains a string, a float, an integer, and (amazingly) another list: ``` zs = ["hello", 2.0, 5, [10, 20]] ``` A list within another list is said to be nested. Finally, a list with no elements is called an empty list, and is denoted `[]` . We have already seen that we can assign list values to variables or pass lists as parameters to functions: ``` >>> vocabulary = ["apple", "cheese", "dog"] >>> numbers = [17, 123] >>> an_empty_list = [] >>> print(vocabulary, numbers, an_empty_list) ["apple", "cheese", "dog"] [17, 123] [] ``` The syntax for accessing the elements of a list is the same as the syntax for accessing the characters of a string — the index operator: `[]` (not to be confused with an empty list). The expression inside the brackets specifies the index. Remember that the indices start at 0: ``` >>> numbers[0] 17 ``` Any expression evaluating to an integer can be used as an index: ``` >>> numbers[9-8] 5 >>> numbers[1.0] Traceback (most recent call last): File "<interactive input>", line 1, in <module> TypeError: list indices must be integers, not float ``` If you try to access or assign to an element that does not exist, you get a runtime error: ``` >>> numbers[2] Traceback (most recent call last): File "<interactive input>", line 1, in <module> IndexError: list index out of range ``` It is common to use a loop variable as a list index. for i in [0, 1, 2, 3]: print(horsemen[i]) ``` Each time through the loop, the variable `i` is used as an index into the list, printing the `i` ’th element. This pattern of computation is called a list traversal. The above sample doesn’t need or use the index `i` for anything besides getting the items from the list, so this more direct version — where the `for` loop gets the items — might be preferred: for h in horsemen: print(h) ``` The function `len` returns the length of a list, which is equal to the number of its elements. If you are going to use an integer index to access the list, it is a good idea to use this value as the upper bound of a loop instead of a constant. That way, if the size of the list changes, you won’t have to go through the program changing all the loops; they will work correctly for any size list: for i in range(len(horsemen)): print(horsemen[i]) ``` The last time the body of the loop is executed, `i` is `len(horsemen) - 1` , which is the index of the last element. (But the version without the index looks even better now!) Although a list can contain another list, the nested list still counts as a single element in its parent list. The length of this list is 4: ``` >>> len(["car makers", 1, ["Ford", "Toyota", "BMW"], [1, 2, 3]]) 4 ``` `in` and `not in` are Boolean operators that test membership in a sequence. 
We used them previously with strings, but they also work with lists and other sequences: ``` >>> horsemen = ["war", "famine", "pestilence", "death"] >>> "pestilence" in horsemen True >>> "debauchery" in horsemen False >>> "debauchery" not in horsemen True ``` Using this produces a more elegant version of the nested loop program we previously used to count the number of students doing Computer Science in the section `nested_data` : # Count how many students are taking CompSci counter = 0 for (name, subjects) in students: if "CompSci" in subjects: counter += 1 The `+` operator concatenates lists: ``` >>> a = [1, 2, 3] >>> b = [4, 5, 6] >>> c = a + b >>> c [1, 2, 3, 4, 5, 6] ``` Similarly, the `*` operator repeats a list a given number of times: ``` >>> [0] * 4 [0, 0, 0, 0] >>> [1, 2, 3] * 3 [1, 2, 3, 1, 2, 3, 1, 2, 3] ``` The first example repeats `[0]` four times. The second example repeats the list `[1, 2, 3]` three times. The slice operations we saw previously with strings let us work with sublists: ``` >>> a_list = ["a", "b", "c", "d", "e", "f"] >>> a_list[1:3] ['b', 'c'] >>> a_list[:4] ['a', 'b', 'c', 'd'] >>> a_list[3:] ['d', 'e', 'f'] >>> a_list[:] ['a', 'b', 'c', 'd', 'e', 'f'] ``` Unlike strings, lists are mutable, which means we can change their elements. Using the index operator on the left side of an assignment, we can update one of the elements: ``` >>> fruit = ["banana", "apple", "quince"] >>> fruit[0] = "pear" >>> fruit[2] = "orange" >>> fruit ['pear', 'apple', 'orange'] ``` The bracket operator applied to a list can appear anywhere in an expression. When it appears on the left side of an assignment, it changes one of the elements in the list, so the first element of `fruit` has been changed from `"banana"` to `"pear"` , and the last from `"quince"` to `"orange"` . An assignment to an element of a list is called item assignment. Item assignment does not work for strings: ``` >>> my_string = "TEST" >>> my_string[2] = "X" Traceback (most recent call last): File "<interactive input>", line 1, in <module> TypeError: 'str' object does not support item assignment ``` but it does for lists: ``` >>> my_list = ["T", "E", "S", "T"] >>> my_list[2] = "X" >>> my_list ['T', 'E', 'X', 'T'] ``` With the slice operator we can update a whole sublist at once: ``` >>> a_list = ["a", "b", "c", "d", "e", "f"] >>> a_list[1:3] = ["x", "y"] >>> a_list ['a', 'x', 'y', 'd', 'e', 'f'] ``` We can also remove elements from a list by assigning an empty list to them: ``` >>> a_list = ["a", "b", "c", "d", "e", "f"] >>> a_list[1:3] = [] >>> a_list ['a', 'd', 'e', 'f'] ``` And we can add elements to a list by squeezing them into an empty slice at the desired location: ``` >>> a_list = ["a", "d", "f"] >>> a_list[1:1] = ["b", "c"] >>> a_list ['a', 'b', 'c', 'd', 'f'] >>> a_list[4:4] = ["e"] >>> a_list ['a', 'b', 'c', 'd', 'e', 'f'] ``` Using slices to delete list elements can be error-prone. Python provides an alternative that is more readable. The `del` statement removes an element from a list: ``` >>> a = ["one", "two", "three"] >>> del a[1] >>> a ['one', 'three'] ``` As you might expect, `del` causes a runtime error if the index is out of range. You can also use `del` with a slice to delete a sublist: ``` >>> a_list = ["a", "b", "c", "d", "e", "f"] >>> del a_list[1:5] >>> a_list ['a', 'f'] ``` As usual, the sublist selected by slice contains all the elements up to, but not including, the second index. 
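As a quick, self-contained recap of the operations described above (item assignment, slice assignment, insertion through an empty slice, and `del`), here is a small sketch; the list `menu` and its values are illustrative choices, not taken from the text:

```
# A small sketch recapping the list-surgery operations described above.
menu = ["soup", "bread", "fish", "rice", "cake"]

menu[0] = "salad"               # item assignment replaces one element
menu[1:3] = ["pasta"]           # slice assignment can change the list's length
print(menu)                     # ['salad', 'pasta', 'rice', 'cake']

menu[2:2] = ["beans", "peas"]   # an empty slice squeezes new elements in
print(menu)                     # ['salad', 'pasta', 'beans', 'peas', 'rice', 'cake']

del menu[1]                     # del removes a single element ...
del menu[2:4]                   # ... or a whole sublist
print(menu)                     # ['salad', 'beans', 'cake']
```

Running the sketch prints the three intermediate lists shown in the comments.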
After we execute these assignment statements ``` a = "banana" b = "banana" ``` we know that `a` and `b` will refer to a string object with the letters `"banana"` . But we don’t know yet whether they point to the same string object. There are two possible ways the Python interpreter could arrange its memory: In one case, `a` and `b` refer to two different objects that have the same value. In the second case, they refer to the same object. We can test whether two names refer to the same object using the `is` operator: ``` >>> a is b True ``` This tells us that both `a` and `b` refer to the same object, and that it is the second of the two state snapshots that accurately describes the relationship. Since strings are immutable, Python optimizes resources by making two names that refer to the same string value refer to the same object. This is not the case with lists: The state snapshot here looks like this: `a` and `b` have the same value but do not refer to the same object. Since variables refer to objects, if we assign one variable to another, both variables refer to the same object: In this case, the state snapshot looks like this: Because the same list has two different names, `a` and `b` , we say that it is aliased. Changes made with one alias affect the other: ``` >>> b[0] = 5 >>> a [5, 2, 3] ``` Although this behavior can be useful, it is sometimes unexpected or undesirable. In general, it is safer to avoid aliasing when you are working with mutable objects (i.e. lists at this point in our textbook, but we’ll meet more mutable objects as we cover classes and objects, dictionaries and sets). Of course, for immutable objects (i.e. strings, tuples), there’s no problem — it is just not possible to change something and get a surprise when you access an alias name. That’s why Python is free to alias strings (and any other immutable kinds of data) when it sees an opportunity to economize. If we want to modify a list and also keep a copy of the original, we need to be able to make a copy of the list itself, not just the reference. This process is sometimes called cloning, to avoid the ambiguity of the word copy. The easiest way to clone a list is to use the slice operator: ``` >>> a = [1, 2, 3] >>> b = a[:] >>> b [1, 2, 3] ``` Taking any slice of `a` creates a new list. In this case the slice happens to consist of the whole list. So now the relationship is like this: Now we are free to make changes to `b` without worrying that we’ll inadvertently be changing `a` : ``` >>> b[0] = 5 >>> a [1, 2, 3] ``` `for` loops The `for` loop also works with lists, as we’ve already seen. The generalized syntax of a `for` loop is: ``` for VARIABLE in LIST: BODY ``` So, as we’ve seen ``` friends = ["Joe", "Zoe", "Brad", "Angelina", "Zuki", "Thandi", "Paris"] for friend in friends: print(friend) ``` It almost reads like English: For (every) friend in (the list of) friends, print (the name of the) friend. Any list expression can be used in a `for` loop: ``` for number in range(20): if number % 3 == 0: print(number) for fruit in ["banana", "apple", "quince"]: print("I like to eat " + fruit + "s!") ``` The first example prints all the multiples of 3 between 0 and 19. The second example expresses enthusiasm for various fruits. Since lists are mutable, we often want to traverse a list, changing each of its elements. The following squares all the numbers in the list `xs` : for i in range(len(xs)): xs[i] = xs[i]**2 ``` Take a moment to think about `range(len(xs))` until you understand how it works. 
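The squaring loop above is shown as a fragment; a minimal runnable version of the same pattern, with an illustrative list assumed for `xs`, looks like this:

```
xs = [1, 2, 3, 4, 5]          # an illustrative list, not from the text
for i in range(len(xs)):      # range(len(xs)) produces the indices 0 .. len(xs)-1
    xs[i] = xs[i]**2          # overwrite each element with its square
print(xs)                     # [1, 4, 9, 16, 25]
```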
In this example we are interested in both the value of an item, (we want to square that value), and its index (so that we can assign the new value to that position). This pattern is common enough that Python provides a nicer way to implement it: for (i, val) in enumerate(xs): xs[i] = val**2 ``` `enumerate` generates pairs of both (index, value) during the list traversal. Try this next example to see more clearly how `enumerate` works: ``` for (i, v) in enumerate(["banana", "apple", "pear", "lemon"]): print(i, v) ``` ``` 0 banana 1 apple 2 pear 3 lemon ``` Passing a list as an argument actually passes a reference to the list, not a copy or clone of the list. So parameter passing creates an alias for you: the caller has one variable referencing the list, and the called function has an alias, but there is only one underlying list object. For example, the function below takes a list as an argument and multiplies each element in the list by 2: ``` def double_stuff(a_list): """ Overwrite each element in a_list with double its value. """ for (idx, val) in enumerate(a_list): a_list[idx] = 2 * val ``` If we add the following onto our script: ``` things = [2, 5, 9] double_stuff(things) print(things) ``` When we run it we’ll get: `[4, 10, 18]` In the function above, the parameter `a_list` and the variable `things` are aliases for the same object. So before any changes to the elements in the list, the state snapshot looks like this: Since the list object is shared by two frames, we drew it between them. If a function modifies the items of a list parameter, the caller sees the change. Use the Python visualizer! We’ve already mentioned the Python visualizer at http://pythontutor.com. It is a very useful tool for building a good understanding of references, aliases, assignments, and passing arguments to functions. Pay special attention to cases where you clone a list or have two separate lists, and cases where there is only one underlying list, but more than one variable is aliased to reference the list. The dot operator can also be used to access built-in methods of list objects. We’ll start with the most useful method for adding something onto the end of an existing list: ``` >>> mylist = [] >>> mylist.append(5) >>> mylist.append(27) >>> mylist.append(3) >>> mylist.append(12) >>> mylist [5, 27, 3, 12] ``` `append` is a list method which adds the argument passed to it to the end of the list. We’ll use it heavily when we’re creating new lists. Continuing with this example, we show several other list methods: ``` >>> mylist.insert(1, 12) # Insert 12 at pos 1, shift other items up >>> mylist [5, 12, 27, 3, 12] >>> mylist.count(12) # How many times is 12 in mylist? 2 >>> mylist.extend([5, 9, 5, 11]) # Put whole list onto end of mylist >>> mylist [5, 12, 27, 3, 12, 5, 9, 5, 11]) >>> mylist.index(9) # Find index of first 9 in mylist 6 >>> mylist.reverse() >>> mylist [11, 5, 9, 5, 12, 3, 27, 12, 5] >>> mylist.sort() >>> mylist [3, 5, 5, 5, 9, 11, 12, 12, 27] >>> mylist.remove(12) # Remove the first 12 in the list >>> mylist [3, 5, 5, 5, 9, 11, 12, 27] ``` Experiment and play with the list methods shown here, and read their documentation until you feel confident that you understand how they work. Functions which take lists as arguments and change them during execution are called modifiers and the changes they make are called side effects. A pure function does not produce side effects. It communicates with the calling program only through parameters, which it does not modify, and a return value. 
Here is `double_stuff` written as a pure function: ``` def double_stuff(a_list): """ Return a new list which contains doubles of the elements in a_list. """ new_list = [] for value in a_list: new_elem = 2 * value new_list.append(new_elem) return new_list ``` This version of `double_stuff` does not change its arguments: An early rule we saw for assignment said “first evaluate the right hand side, then assign the resulting value to the variable”. So it is quite safe to assign the function result to the same variable that was passed to the function: Which style is better? Anything that can be done with modifiers can also be done with pure functions. In fact, some programming languages only allow pure functions. There is some evidence that programs that use pure functions are faster to develop and less error-prone than programs that use modifiers. Nevertheless, modifiers are convenient at times, and in some cases, functional programs are less efficient. In general, we recommend that you write pure functions whenever it is reasonable to do so and resort to modifiers only if there is a compelling advantage. This approach might be called a functional programming style. The pure version of `double_stuff` above made use of an important pattern for your toolbox. Whenever you need to write a function that creates and returns a list, the pattern is usually: ``` initialize a result variable to be an empty list loop create a new element append it to result return the result ``` Let us show another use of this pattern. Assume you already have a function `is_prime(x)` that can test if x is prime. Write a function to return a list of all prime numbers less than n: ``` def primes_lessthan(n): """ Return a list of all prime numbers less than n. """ result = [] for i in range(2, n): if is_prime(i): result.append(i) return result ``` Two of the most useful methods on strings involve conversion to and from lists of substrings. The `split` method (which we’ve already seen) breaks a string into a list of words. By default, any number of whitespace characters is considered a word boundary: ``` >>> song = "The rain in Spain..." >>> wds = song.split() >>> wds ['The', 'rain', 'in', 'Spain...'] ``` An optional argument called a delimiter can be used to specify which string to use as the boundary marker between substrings. The following example uses the string `ai` as the delimiter: ``` >>> song.split("ai") ['The r', 'n in Sp', 'n...'] ``` Notice that the delimiter doesn’t appear in the result. The inverse of the `split` method is `join` . You choose a desired separator string, (often called the glue) and join the list with the glue between each of the elements: ``` >>> glue = ";" >>> s = glue.join(wds) >>> s 'The;rain;in;Spain...' ``` The list that you glue together ( `wds` in this example) is not modified. Also, as these next examples show, you can use empty glue or multi-character strings as glue: ``` >>> " --- ".join(wds) 'The --- rain --- in --- Spain...' >>> "".join(wds) 'TheraininSpain...' ``` `list` and `range` Python has a built-in type conversion function called `list` that tries to turn whatever you give it into a list. ``` >>> xs = list("<NAME>") >>> xs ["C", "r", "u", "n", "c", "h", "y", " ", "F", "r", "o", "g"] >>> "".join(xs) 'C<NAME>' ``` One particular feature of `range` is that it doesn’t instantly compute all its values: it “puts off” the computation, and does it on demand, or “lazily”. We’ll say that it gives a promise to produce the values when they are needed. 
This is very convenient if your computation short-circuits a search and returns early, as in this case: ``` def f(n): """ Find the first positive integer between 101 and less than n that is divisible by 21 """ for i in range(101, n): if (i % 21 == 0): return i test(f(110) == 105) test(f(1000000000) == 105) ``` In the second test, if range were to eagerly go about building a list with all those elements, you would soon exhaust your computer’s available memory and crash the program. But it is cleverer than that! This computation works just fine, because the `range` object is just a promise to produce the elements if and when they are needed. Once the condition in the `if` becomes true, no further elements are generated, and the function returns. (Note: Before Python 3, `range` was not lazy. If you use an earlier versions of Python, YMMV!) YMMV: Your Mileage May Vary The acronym YMMV stands for your mileage may vary. American car advertisements often quoted fuel consumption figures for cars, e.g. that they would get 28 miles per gallon. But this always had to be accompanied by legal small-print warning the reader that they might not get the same. The term YMMV is now used idiomatically to mean “your results may differ”, e.g. The battery life on this phone is 3 days, but YMMV. You’ll sometimes find the lazy `range` wrapped in a call to `list` . This forces Python to turn the lazy promise into an actual list: ``` >>> range(10) # Create a lazy promise range(0, 10) >>> list(range(10)) # Call in the promise, to produce a list. [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` A nested list is a list that appears as an element in another list. In this list, the element with index 3 is a nested list: ``` >>> nested = ["hello", 2.0, 5, [10, 20]] ``` If we output the element at index 3, we get: ``` >>> print(nested[3]) [10, 20] ``` To extract an element from the nested list, we can proceed in two steps: ``` >>> elem = nested[3] >>> elem[0] 10 ``` Or we can combine them: ``` >>> nested[3][1] 20 ``` Bracket operators evaluate from left to right, so this expression gets the 3’th element of `nested` and extracts the 1’th element from it. Nested lists are often used to represent matrices. For example, the matrix: might be represented as: ``` >>> mx = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] ``` `mx` is a list with three elements, where each element is a row of the matrix. We can select an entire row from the matrix in the usual way: ``` >>> mx[1] [4, 5, 6] ``` Or we can extract a single element from the matrix using the double-index form: ``` >>> mx[1][2] 6 ``` The first index selects the row, and the second index selects the column. Although this way of representing matrices is common, it is not the only possibility. A small variation is to use a list of columns instead of a list of rows. Later we will see a more radical alternative using a dictionary. aliases Multiple variables that contain references to the same object. clone To create a new object that has the same value as an existing object. Copying a reference to an object creates an alias but doesn’t clone the object. delimiter A character or string used to indicate where a string should be split. element One of the values in a list (or other sequence). The bracket operator selects elements of a list. Also called item. index An integer value that indicates the position of an item in a list. Indexes start from 0. item See element. list A collection of values, each in a fixed position within the list. Like other types `str` , `int` , `float` , etc. 
there is also a `list` type-converter function that tries to turn whatever argument you give it into a list. list traversal The sequential accessing of each element in a list. modifier A function which changes its arguments inside the function body. Only mutable types can be changed by modifiers. nested list A list that is an element of another list. object A thing to which a variable can refer. pattern A sequence of statements, or a style of coding something that has general applicability in a number of different situations. Part of becoming a mature Computer Scientist is to learn and establish the patterns and algorithms that form your toolkit. Patterns often correspond to your “mental chunking”. promise An object that promises to do some work or deliver some values if they’re eventually needed, but it lazily puts off doing the work immediately. Calling `range` produces a promise. pure function A function which has no side effects. Pure functions only make changes to the calling program through their return values. sequence Any of the data types that consist of an ordered collection of elements, with each element identified by an index. side effect A change in the state of a program made by calling a function. Side effects can only be produced by modifiers. step size The interval between successive elements of a linear sequence. The third (and optional argument) to the `range` function is called the step size. If not specified, it defaults to 1. ``` >>> list(range(10, 0, -2)) ``` ``` The three arguments to the *range* function are *start*, *stop*, and *step*, respectively. In this example, `start` is greater than `stop`. What happens if `start < stop` and `step < 0`? Write a rule for the relationships among `start`, `stop`, and `step`. ``` tess = turtle.Turtle() alex = tess alex.color("hotpink") ``` ``` Does this fragment create one or two turtle instances? Does setting the color of `alex` also change the color of `tess`? Explain in detail. ``` `a` and `b` before and after the third line of the following Python code is ``` a = [1, 2, 3] b = a[:] b[0] = 5 ``` ``` this = ["I", "am", "not", "a", "crook"] that = ["I", "am", "not", "a", "crook"] print("Test 1: {0}".format(this is that)) that = this print("Test 2: {0}".format(this is that)) ``` Provide a detailed explanation of the results. Lists can be used to represent mathematical vectors. In this exercise and several that follow you will write functions to perform standard operations on vectors. Create a script named `vectors.py` and write Python code to pass the tests in each case. Write a function `add_vectors(u, v)` that takes two lists of numbers of the same length, and returns a new list containing the sums of the corresponding elements of each: ``` test(add_vectors([1, 1], [1, 1]) == [2, 2]) test(add_vectors([1, 2], [1, 4]) == [2, 6]) test(add_vectors([1, 2, 1], [1, 4, 3]) == [2, 6, 4]) ``` `scalar_mult(s, v)` that takes a number, `s` , and a list, `v` and returns the scalar multiple of `v` by `s` : ``` test(scalar_mult(5, [1, 2]) == [5, 10]) test(scalar_mult(3, [1, 0, -1]) == [3, 0, -3]) test(scalar_mult(7, [3, 0, 5, 11, 2]) == [21, 0, 35, 77, 14]) ``` `dot_product(u, v)` that takes two lists of numbers of the same length, and returns the sum of the products of the corresponding elements of each (the dot_product). 
``` test(dot_product([1, 1], [1, 1]) == 2) test(dot_product([1, 2], [1, 4]) == 9) test(dot_product([1, 2, 1], [1, 4, 3]) == 12) ``` Extra challenge for the mathematically inclined: Write a function `cross_product(u, v)` that takes two lists of numbers of length 3 and returns their cross product. You should write your own tests. Describe the relationship between ``` " ".join(song.split()) ``` and `song` in the fragment of code below. Are they the same for all strings assigned to `song` ? When would they be different? ``` song = "The rain in Spain..." ``` `replace(s, old, new)` that replaces all occurrences of `old` with `new` in a string `s` : ``` test(replace("Mississippi", "i", "I") == "MIssIssIppI") s = "I love spom! Spom is my favorite food. Spom, spom, yum!" test(replace(s, "om", "am") == "I love spam! Spam is my favorite food. Spam, spam, yum!") test(replace(s, "o", "a") == "I lave spam! Spam is my favarite faad. Spam, spam, yum!") ``` Hint: use the `split` and `join` methods. ``` def swap(x, y): # Incorrect version print("before swap statement: x:", x, "y:", y) (x, y) = (y, x) print("after swap statement: x:", x, "y:", y) a = ["This", "is", "fun"] b = [2,3,4] print("before swap function call: a:", a, "b:", b) swap(a, b) print("after swap function call: a:", a, "b:", b) ``` Run this program and describe the results. Oops! So it didn’t do what you intended! Explain why not. Using a Python visualizer like the one at http://pythontutor.com may help you build a good conceptual model of what is going on. What will be the values of `a` and `b` after the call to `swap` ? # Chapter 12: Modules A module is a file containing Python definitions and statements intended for use in other Python programs. There are many Python modules that come with Python as part of the standard library. We have seen at least two of these already, the `turtle` module and the `string` module. We have also shown you how to access help. The help system contains a listing of all the standard modules that are available with Python. Play with help! We often want to use random numbers in programs, here are a few typical uses: Python provides a module `random` that helps with tasks like this. You can look it up using help, but here are the key things we’ll do with it: # Create a black box object that generates random numbers rng = random.Random() dice_throw = rng.randrange(1,7) # Return an int, one of 1,2,3,4,5,6 delay_in_seconds = rng.random() * 5.0 ``` The `randrange` method call generates an integer between its lower and upper argument, using the same semantics as `range` — so the lower bound is included, but the upper bound is excluded. All the values have an equal probability of occurring (i.e. the results are uniformly distributed). Like `range` , `randrange` can also take an optional step argument. So let’s assume we needed a random odd number less than 100, we could say: ``` r_odd = rng.randrange(1, 100, 2) ``` Other methods can also generate other distributions e.g. a bell-shaped, or “normal” distribution might be more appropriate for estimating seasonal rainfall, or the concentration of a compound in the body after taking a dose of medicine. The `random` method returns a floating point number in the interval [0.0, 1.0) — the square bracket means “closed interval on the left” and the round parenthesis means “open interval on the right”. In other words, 0.0 is possible, but all returned numbers will be strictly less than 1.0. 
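To make the half-open interval concrete, here is a small sketch (the numbers printed will differ on every run, because the generator is not seeded):

```
import random

rng = random.Random()
for _ in range(3):
    x = rng.random()              # a float satisfying 0.0 <= x < 1.0
    print(x, 0.0 <= x < 1.0)      # the check always prints True
```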
It is usual to scale the results after calling this method, to get them into an interval suitable for your application. In the case shown here, we’ve converted the result of the method call to a number in the interval [0.0, 5.0). Once more, these are uniformly distributed numbers — numbers close to 0 are just as likely to occur as numbers close to 0.5, or numbers close to 1.0. This example shows how to shuffle a list. ( `shuffle` cannot work directly with a lazy promise, so notice that we had to convert the range object using the `list` type converter first.) ``` cards = list(range(52)) # Generate ints [0 .. 51] # representing a pack of cards. rng.shuffle(cards) # Shuffle the pack ``` Random number generators are based on a deterministic algorithm — repeatable and predictable. So they’re called pseudo-random generators — they are not genuinely random. They start with a seed value. Each time you ask for another random number, you’ll get one based on the current seed attribute, and the state of the seed (which is one of the attributes of the generator) will be updated. For debugging and for writing unit tests, it is convenient to have repeatability — programs that do the same thing every time they are run. We can arrange this by forcing the random number generator to be initialized with a known seed every time. (Often this is only wanted during testing — playing a game of cards where the shuffled deck was always in the same order as last time you played would get boring very rapidly!) ``` drng = random.Random(123) # Create generator with known starting state ``` This alternative way of creating a random number generator gives an explicit seed value to the object. Without this argument, the system probably uses something based on the time. So grabbing some random numbers from `drng` today will give you precisely the same random sequence as it will tomorrow! Here is an example to generate a list containing n random ints between a lower and an upper bound: def make_random_ints(num, lower_bound, upper_bound): """ Generate a list containing num random ints between lower_bound and upper_bound. upper_bound is an open bound. """ rng = random.Random() # Create a random number generator result = [] for i in range(num): result.append(rng.randrange(lower_bound, upper_bound)) return result ``` ``` >>> make_random_ints(5, 1, 13) # Pick 5 random month numbers [8, 1, 8, 5, 6] ``` Notice that we got a duplicate in the result. Often this is wanted, e.g. if we throw a die five times, we would expect some duplicates. But what if you don’t want duplicates? If you wanted 5 distinct months, then this algorithm is wrong. In this case a good algorithm is to generate the list of possibilities, shuffle it, and slice off the number of elements you want: ``` xs = list(range(1,13)) # Make list 1..12 (there are no duplicates) rng = random.Random() # Make a random number generator rng.shuffle(xs) # Shuffle the list result = xs[:5] # Take the first five elements ``` In statistics courses, the first case — allowing duplicates — is usually described as pulling balls out of a bag with replacement — you put the drawn ball back in each time, so it can occur again. The latter case, with no duplicates, is usually described as pulling balls out of the bag without replacement. Once the ball is drawn, it doesn’t go back to be drawn again. TV lotto games work like this. The second “shuffle and slice” algorithm would not be so great if you only wanted a few elements, but from a very large domain. 
Suppose I wanted five numbers between one and ten million, without duplicates. Generating a list of ten million items, shuffling it, and then slicing off the first five would be a performance disaster! So let us have another try: def make_random_ints_no_dups(num, lower_bound, upper_bound): """ Generate a list containing num random ints between lower_bound and upper_bound. upper_bound is an open bound. The result list cannot contain duplicates. """ result = [] rng = random.Random() for i in range(num): while True: candidate = rng.randrange(lower_bound, upper_bound) if candidate not in result: break result.append(candidate) return result xs = make_random_ints_no_dups(5, 1, 10000000) print(xs) ``` This agreeably produces 5 random numbers, without duplicates: ``` [3344629, 1735163, 9433892, 1081511, 4923270] ``` Even this function has its pitfalls. Can you spot what is going to happen in this case? ``` xs = make_random_ints_no_dups(10, 1, 6) ``` `time` module As we start to work with more sophisticated algorithms and bigger programs, a natural concern is “is our code efficient?” One way to experiment is to time how long various operations take. The `time` module has a function called `process_time` that is recommended for this purpose. Whenever `process_time` is called, it returns a floating point number representing how many seconds have elapsed since your program started running. The way to use it is to call `process_time` assign the result to a variable, say `t0` , just before you start executing the code you want to measure. Then after execution, call `process_time` again, (this time we’ll save the result in variable `t1` ). The difference `t1-t0` is the time elapsed, and is a measure of how fast your program is running. Let’s try a small example. Python has a built-in `sum` function that can sum the elements in a list. We can also write our own. How do we think they would compare for speed? We’ll try to do the summation of a list [0, 1, 2 …] in both cases, and compare the results: def do_my_sum(xs): sum = 0 for v in xs: sum += v return sum sz = 10000000 # Lets have 10 million elements in the list testdata = range(sz) t0 = time.process_time() my_result = do_my_sum(testdata) t1 = time.process_time() print("my_result = {0} (time taken = {1:.4f} seconds)" .format(my_result, t1-t0)) t2 = time.process_time() their_result = sum(testdata) t3 = time.process_time() print("their_result = {0} (time taken = {1:.4f} seconds)" .format(their_result, t3-t2)) ``` On a reasonably modest laptop, we get these results: ``` my_sum = 49999995000000 (time taken = 1.5567 seconds) their_sum = 49999995000000 (time taken = 0.9897 seconds) ``` So our function runs about 57% slower than the built-in one. Generating and summing up ten million elements in under a second is not too shabby! `math` module The `math` module contains the kinds of mathematical functions you’d typically find on your calculator ( `sin` , `cos` , `sqrt` , `asin` , `log` , `log10` ) and some mathematical constants like `pi` and `e` : ``` >>> import math >>> math.pi # Constant pi 3.141592653589793 >>> math.e # Constant natural log base 2.718281828459045 >>> math.sqrt(2.0) # Square root function 1.4142135623730951 >>> math.radians(90) # Convert 90 degrees to radians 1.5707963267948966 >>> math.sin(math.radians(90)) # Find sin of 90 degrees 1.0 >>> math.asin(1.0) * 2 # Double the arcsin of 1.0 to get pi 3.141592653589793 ``` Like almost all other programming languages, angles are expressed in radians rather than degrees. 
There are two functions `radians` and `degrees` to convert between these two popular ways of measuring angles. Notice another difference between this module and our use of `random` and `turtle` : in `random` and `turtle` we create objects and we call methods on the object. This is because objects have state — a turtle has a color, a position, a heading, etc., and every random number generator has a seed value that determines its next result. Mathematical functions are “pure” and don’t have any state — calculating the square root of 2.0 doesn’t depend on any kind of state or history about what happened in the past. So the functions are not methods of an object — they are simply functions that are grouped together in a module called `math` . All we need to do to create our own modules is to save our script as a file with a `.py` extension. Suppose, for example, this script is saved as a file named `seqtools.py` : ``` def remove_at(pos, seq): return seq[:pos] + seq[pos+1:] ``` We can now use our module, both in scripts we write, or in the interactive Python interpreter. To do so, we must first `import` the module. ``` >>> import seqtools >>> s = "A string!" >>> seqtools.remove_at(4, s) 'A sting!' ``` We do not include the `.py` file extension when importing. Python expects the file names of Python modules to end in `.py` , so the file extension is not included in the import statement. The use of modules makes it possible to break up very large programs into manageable sized parts, and to keep related parts together. A namespace is a collection of identifiers that belong to a module, or to a function, (and as we will see soon, in classes too). Generally, we like a namespace to hold “related” things, e.g. all the math functions, or all the typical things we’d do with random numbers. Each module has its own namespace, so we can use the same identifier name in multiple modules without causing an identification problem. ``` # Module1.py question = "What is the meaning of Life, the Universe, and Everything?" answer = 42 ``` ``` # Module2.py question = "What is your quest?" answer = "To seek the holy grail." ``` We can now import both modules and access `question` and `answer` in each: ``` import module1 import module2 print(module1.question) print(module2.question) print(module1.answer) print(module2.answer) ``` will output the following: ``` What is the meaning of Life, the Universe, and Everything? What is your quest? 42 To seek the holy grail. ``` Functions also have their own namespaces: ``` def f(): n = 7 print("printing n inside of f:", n) def g(): n = 42 print("printing n inside of g:", n) n = 11 print("printing n before calling f:", n) f() print("printing n after calling f:", n) g() print("printing n after calling g:", n) ``` Running this program produces the following output: ``` printing n before calling f: 11 printing n inside of f: 7 printing n after calling f: 11 printing n inside of g: 42 printing n after calling g: 11 ``` The three `n` ’s here do not collide since they are each in a different namespace — they are three names for three different variables, just like there might be three different instances of people, all called “Bruce”. Namespaces permit several programmers to work on the same project without having naming collisions. How are namespaces, files and modules related? Python has a convenient and simplifying one-to-one mapping, one module per file, giving rise to one namespace. 
Also, Python takes the module name from the file name, and this becomes the name of the namespace: `math.py` is a filename, the module is called `math`, and its namespace is `math`. So in Python the concepts are more or less interchangeable. But you will encounter other languages (e.g. C#) that allow one module to span multiple files, or one file to have multiple namespaces, or many files to all share the same namespace. So the name of the file doesn't need to be the same as the namespace.

So a good idea is to try to keep the concepts distinct in your mind. Files and directories organize where things are stored in our computer. On the other hand, namespaces and modules are a programming concept: they help us organize how we want to group related functions and attributes. They are not about "where" to store things, and should not have to coincide with the file and directory structures.

So in Python, if you rename the file `math.py`, its module name also changes, your `import` statements would need to change, and your code that refers to functions or attributes inside that namespace would also need to change. In other languages this is not necessarily the case. So don't blur the concepts, just because Python blurs them!

The scope of an identifier is the region of program code in which the identifier can be accessed, or used. There are three important scopes in Python:

- Local scope refers to identifiers declared within a function. These identifiers are kept in the namespace that belongs to the function, and each function has its own namespace.
- Global scope refers to all the identifiers declared within the current module, or file.
- Built-in scope refers to the names that are built into Python, such as `range` and `min`, that can be used without having to import anything, and are (almost) always available.

Python (like most other computer languages) uses precedence rules: the same name could occur in more than one of these scopes, but the innermost, or local scope, will always take precedence over the global scope, and the global scope always gets used in preference to the built-in scope. Let's start with a simple example:

```
def range(n):
    return 123*n

print(range(10))
```

What gets printed? We've defined our own function called `range`, so there is now a potential ambiguity. When we use `range`, do we mean our own one, or the built-in one? Using the scope lookup rules determines this: our own `range` function, not the built-in one, is called, because our function `range` is in the global namespace, which takes precedence over the built-in names.

So although names like `range` and `min` are built-in, they can be "hidden" from your use if you choose to define your own variables or functions that reuse those names. (It is a confusing practice to redefine built-in names — so to be a good programmer you need to understand the scope rules and understand that you can do nasty things that will cause confusion, and then you avoid doing them!)

Now, a slightly more complex example:

```
n = 10
m = 3
def f(n):
    m = 7
    return 2*n+m

print(f(5), n, m)
```

This prints 17 10 3. The reason is that the two variables `m` and `n` in lines 1 and 2 are outside the function in the global namespace. Inside the function, new variables called `n` and `m` are created just for the duration of the execution of `f`. These are created in the local namespace of function `f`. Within the body of `f`, the scope lookup rules determine that we use the local variables `m` and `n`. By contrast, after we've returned from `f`, those local variables no longer exist: the `n` and `m` printed on line 7 are the global variables, which were never changed by the call. Notice too that the `def` puts name `f` into the global namespace here. So it can be called on line 7.

What is the scope of the variable `n` on line 1? Its scope — the region in which it is visible — is lines 1, 2, 6, 7. It is hidden from view in lines 3, 4, 5 because of the local variable `n`.
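Before we leave scope, one more remark about the built-in scope: even while a built-in name such as `range` is shadowed, the built-ins are still there, living in a module called `builtins` that can be imported like any other. This small sketch (ours, not part of the original example) shows both lookups side by side:

```
import builtins

def range(n):                      # Shadows the built-in range in our global namespace
    return 123 * n

print(range(10))                   # 1230 -- the scope rules pick our global range
print(list(builtins.range(3)))     # [0, 1, 2] -- the built-in is still reachable
```

Deleting our definition with `del range` would make the unqualified name fall back to the built-in scope again.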
Variables defined inside a module are called attributes of the module. We’ve seen that objects have attributes too: for example, most objects have a `__doc__` attribute, some functions have a `__annotations__` attribute. Attributes are accessed using the dot operator ( `.` ). The `question` attribute of `module1` and `module2` is accessed using `module1.question` and `module2.question` . Modules contain functions as well as attributes, and the dot operator is used to access them in the same way. `seqtools.remove_at` refers to the `remove_at` function in the `seqtools` module. When we use a dotted name, we often refer to it as a fully qualified name, because we’re saying exactly which `question` attribute we mean. `import` statement variants Here are three different ways to import names into the current namespace, and to use them: ``` import math x = math.sqrt(10) ``` Here just the single identifier `math` is added to the current namespace. If you want to access one of the functions in the module, you need to use the dot notation to get to it. Here is a different arrangement: ``` from math import cos, sin, sqrt x = sqrt(10) ``` The names are added directly to the current namespace, and can be used without qualification. The name `math` is not itself imported, so trying to use the qualified form `math.sqrt` would give an error. Then we have a convenient shorthand: ``` from math import * # Import all the identifiers from math, # adding them to the current namespace. x = sqrt(10) # Use them without qualification. ``` Of these three, the first method is generally preferred, even though it means a little more typing each time. Although, we can make things shorter by importing a module under a different name: ``` >>> import math as m >>> m.pi 3.141592653589793 ``` But hey, with nice editors that do auto-completion, and fast fingers, that’s a small price! Finally, observe this case: ``` def area(radius): import math return math.pi * radius * radius x = math.sqrt(10) # This gives an error ``` Here we imported `math` , but we imported it into the local namespace of `area` . So the name is usable within the function body, but not in the enclosing script, because it is not in the global namespace. Near the end of Chapter 6 (Fruitful functions) we introduced unit testing, and our own `test` function, and you’ve had to copy this into each module for which you wrote tests. Now we can put that definition into a module of its own, say `unit_tester.py` , and simply use one line in each new script instead: ``` from unit_tester import test ``` attribute A variable defined inside a module (or class or instance – as we will see later). Module attributes are accessed by using the dot operator ( `.` ). dot operator The dot operator ( `.` ) permits access to attributes and functions of a module (or attributes and methods of a class or instance – as we have seen elsewhere). fully qualified name A name that is prefixed by some namespace identifier and the dot operator, or by an instance object, e.g. `math.sqrt` or `tess.forward(10)` . import statement A statement which makes the objects contained in a module available for use within another module. There are two forms for the import statement. 
Using hypothetical modules named `mymod1` and `mymod2` each containing functions `f1` and `f2` , and variables `v1` and `v2` , examples of these two forms include: ``` import mymod1 from mymod2 import f1, f2, v1, v2 ``` The second form brings the imported objects into the namespace of the importing module, while the first form preserves a separate namespace for the imported module, requiring `mymod1.v1` to access the `v1` variable from that module. method Function-like attribute of an object. Methods are invoked (called) on an object using the dot operator. For example: ``` >>> s = "this is a string." >>> s.upper() 'THIS IS A STRING.' >>> ``` We say that the method, `upper` is invoked on the string, `s` . `s` is implicitely the first argument to `upper` . module A file containing Python definitions and statements intended for use in other Python programs. The contents of a module are made available to the other program by using the `import` statement. namespace A syntactic container providing a context for names so that the same name can reside in different namespaces without ambiguity. In Python, modules, classes, functions and methods all form namespaces. naming collision A situation in which two or more names in a given namespace cannot be unambiguously resolved. Using `import string` instead of `from string import *` prevents naming collisions. standard library A library is a collection of software used as tools in the development of other software. The standard library of a programming language is the set of such tools that are distributed with the core programming language. Python comes with an extensive standard library. Open help for the `calendar` module. ``` import calendar cal = calendar.TextCalendar() # Create an instance cal.pryear(2012) # What happens here? ``` ``` d = calendar.LocaleTextCalendar(6, "SPANISH") d.pryear(2012) Try a few other languages, including one that doesn't work, and see what happens. ``` `calendar.isleap` . What does it expect as an argument? What does it return as a result? What kind of a function is this? Make detailed notes about what you learned from these exercises. Open help for the `math` module. `math` module? `math.ceil` do? What about `math.floor` ? (hint: both `floor` and `ceil` expect floating point arguments.) `math.sqrt` without using the `math` module. `math` module? Record detailed notes of your investigation in this exercise. Investigate the `copy` module. What does `deepcopy` do? In which exercises from last chapter would `deepcopy` have come in handy? Create a module named `mymodule1.py` . Add attributes `myage` set to your current age, and `year` set to the current year. Create another module named `mymodule2.py` . Add attributes `myage` set to 0, and `year` set to the year you were born. Now create a file named `namespace_test.py` . Import both of the modules above and write the following statement: ``` print( (mymodule2.myage - mymodule1.myage) == (mymodule2.year - mymodule1.year) ) ``` When you will run `namespace_test.py` you will see either `True` or `False` as output depending on whether or not you’ve already had your birthday this year. What this example illustrates is that out different modules can both have attributes named `myage` and `year` . Because they’re in different namespaces, they don’t clash with one another. When we write `namespace_test.py` , we fully qualify exactly which variable `year` or `myage` we are referring to. 
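If you want something concrete to start from, the two modules and the test script described above could look like the sketch below. The particular ages and years are only placeholders; substitute your own:

```
# mymodule1.py
myage = 24           # Your current age (placeholder)
year = 2024          # The current year (placeholder)
```

```
# mymodule2.py
myage = 0            # Your age in the year you were born
year = 2000          # The year you were born (placeholder)
```

```
# namespace_test.py
import mymodule1
import mymodule2

print((mymodule2.myage - mymodule1.myage) ==
      (mymodule2.year - mymodule1.year))
```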
Add the following statement to `mymodule1.py` , `mymodule2.py` , and `namespace_test.py` from the previous exercise: ``` print("My name is", __name__) ``` Run `namespace_test.py` . What happens? Why? Now add the following to the bottom of `mymodule1.py` : ``` if __name__ == "__main__": print("This won't run if I'm imported.") ``` Run `mymodule1.py` and `namespace_test.py` again. In which case do you see the new print statement? In a Python shell / interactive interpreter, try the following: `>>> import this` What does <NAME> have to say about namespaces? ``` >>> s = "If we took the bones out, it wouldn't be crunchy, would it?" >>> s.split() >>> type(s.split()) >>> s.split("o") >>> s.split("i") >>> "0".join(s.split("o")) ``` ``` def myreplace(old, new, s): """ Replace all occurrences of old with new in s. """ ... test(myreplace(",", ";", "this, that, and some other thing") == "this; that; and some other thing") test(myreplace(" ", "**", "Words will now be separated by stars.") == "Words**will**now**be**separated**by**stars.") ``` Your solution should pass the tests. Create a module named `wordtools.py` with our test scaffolding in place. Now add functions to these tests pass: ``` test(cleanword("what?") == "what") test(cleanword("'now!'") == "now") test(cleanword("?+='w-o-r-d!,@$()'") == "word") test(has_dashdash("distance--but")) test(not has_dashdash("several")) test(has_dashdash("spoke--")) test(has_dashdash("distance--but")) test(not has_dashdash("-yo-yo-")) test(extract_words("Now is the time! 'Now', is the time? Yes, now.") == ['now','is','the','time','now','is','the','time','yes','now']) test(extract_words("she tried to curtsey as she spoke--fancy") == ['she','tried','to','curtsey','as','she','spoke','fancy']) test(wordcount("now", ["now","is","time","is","now","is","is"]) == 2) test(wordcount("is", ["now","is","time","is","now","the","is"]) == 3) test(wordcount("time", ["now","is","time","is","now","is","is"]) == 1) test(wordcount("frog", ["now","is","time","is","now","is","is"]) == 0) test(wordset(["now", "is", "time", "is", "now", "is", "is"]) == ["is", "now", "time"]) test(wordset(["I", "a", "a", "is", "a", "is", "I", "am"]) == ["I", "a", "am", "is"]) test(wordset(["or", "a", "am", "is", "are", "be", "but", "am"]) == ["a", "am", "are", "be", "but", "is", "or"]) test(longestword(["a", "apple", "pear", "grape"]) == 5) test(longestword(["a", "am", "I", "be"]) == 2) test(longestword(["this","supercalifragilisticexpialidocious"]) == 34) test(longestword([ ]) == 0) ``` Save this module so you can use the tools it contains in future programs. # Chapter 13: Files While a program is running, its data is stored in random access memory (RAM). RAM is fast and inexpensive, but it is also volatile, which means that when the program ends, or the computer shuts down, data in RAM disappears. To make data available the next time the computer is turned on and the program is started, it has to be written to a non-volatile storage medium, such a hard drive, usb drive, or CD-RW. Data on non-volatile storage media is stored in named locations on the media called files. By reading and writing files, programs can save information between program runs. Working with files is a lot like working with a notebook. To use a notebook, it has to be opened. When done, it has to be closed. While the notebook is open, it can either be read from or written to. In either case, the notebook holder knows where they are. They can read the whole notebook in its natural order or they can skip around. 
All of this applies to files as well. To open a file, we specify its name and indicate whether we want to read or write. Let’s begin with a simple program that writes three lines of text into a file: ``` myfile = open("test.txt", "w") myfile.write("My first file written from Python\n") myfile.write("---------------------------------\n") myfile.write("Hello, world!\n") myfile.close() ``` Opening a file creates what we call a file handle. In this example, the variable `myfile` refers to the new handle object. Our program calls methods on the handle, and this makes changes to the actual file which is usually located on our disk. On line 1, the open function takes two arguments. The first is the name of the file, and the second is the mode. Mode `"w"` means that we are opening the file for writing. With mode `"w"` , if there is no file named `test.txt` on the disk, it will be created. If there already is one, it will be replaced by the file we are writing. To put data in the file we invoke the `write` method on the handle, shown in lines 2, 3 and 4 above. In bigger programs, lines 2–4 will usually be replaced by a loop that writes many more lines into the file. Closing the file handle (line 5) tells the system that we are done writing and makes the disk file available for reading by other programs (or by our own program). A handle is somewhat like a TV remote control We’re all familiar with a remote control for a TV. We perform operations on the remote control — switch channels, change the volume, etc. But the real action happens on the TV. So, by simple analogy, we’d call the remote control our handle to the underlying TV. Sometimes we want to emphasize the difference — the file handle is not the same as the file, and the remote control is not the same as the TV. But at other times we prefer to treat them as a single mental chunk, or abstraction, and we’ll just say “close the file”, or “flip the TV channel”. Now that the file exists on our disk, we can open it, this time for reading, and read all the lines in the file, one at a time. This time, the mode argument is `"r"` for reading: ``` mynewhandle = open("test.txt", "r") while True: # Keep reading forever theline = mynewhandle.readline() # Try to read next line if len(theline) == 0: # If there are no more lines break # leave the loop # Now process the line we've just read print(theline, end="") mynewhandle.close() ``` This is a handy pattern for our toolbox. In bigger programs, we’d squeeze more extensive logic into the body of the loop at line 8 —for example, if each line of the file contained the name and email address of one of our friends, perhaps we’d split the line into some pieces and call a function to send the friend a party invitation. On line 8 we suppress the newline character that `readline` method in line 3 returns everything up to and including the newline character. This also explains the end-of-file detection logic: when there are no more lines to be read from the file, `readline` returns an empty string — one that does not even have a newline at the end, hence its length is 0. Fail first … In our sample case here, we have three lines in the file, yet we enter the loop four times. In Python, you only learn that the file has no more lines by failure to read another line. In some other programming languages (e.g. Pascal), things are different: there you read three lines, but you have what is called look ahead — after reading the third line you already know that there are no more lines in the file. 
You’re not even allowed to try to read the fourth line. So the templates for working line-at-a-time in Pascal and Python are subtly different! When you transfer your Python skills to your next computer language, be sure to ask how you’ll know when the file has ended: is the style in the language “try, and after you fail you’ll know”, or is it “look ahead”? If we try to open a file that doesn’t exist, we get an error: ``` >>> mynewhandle = open("wharrah.txt", "r") IOError: [Errno 2] No such file or directory: "wharrah.txt" ``` It is often useful to fetch data from a disk file and turn it into a list of lines. Suppose we have a file containing our friends and their email addresses, one per line in the file. But we’d like the lines sorted into alphabetical order. A good plan is to read everything into a list of lines, then sort the list, and then write the sorted list back to another file: ``` f = open("friends.txt", "r") xs = f.readlines() f.close() xs.sort() g = open("sortedfriends.txt", "w") for v in xs: g.write(v) g.close() ``` The `readlines` method in line 2 reads all the lines and returns a list of the strings. We could have used the template from the previous section to read each line one-at-a-time, and to build up the list ourselves, but it is a lot easier to use the method that the Python implementors gave us! Another way of working with text files is to read the complete contents of the file into a string, and then to use our string-processing skills to work with the contents. We’d normally use this method of processing files if we were not interested in the line structure of the file. For example, we’ve seen the `split` method on strings which can break a string into words. So here is how we might count the number of words in a file: ``` f = open("somefile.txt") content = f.read() f.close() words = content.split() print("There are {0} words in the file.".format(len(words))) ``` Notice here that we left out the `"r"` mode in line 1. By default, if we don’t supply the mode, Python opens the file for reading. Your file paths may need to be explicitly named. In the above example, we’re assuming that the file `somefile.txt` is in the same directory as your Python source code. If this is not the case, you may need to provide a full or a relative path to the file. On Windows, a full path could look like ``` "C:\\temp\\somefile.txt" ``` , while on a Unix system the full path could be ``` "/home/jimmy/somefile.txt" ``` . We’ll return to this later in this chapter. Files that hold photographs, videos, zip files, executable programs, etc. are called binary files: they’re not organized into lines, and cannot be opened with a normal text editor. Python works just as easily with binary files, but when we read from the file we’re going to get bytes back rather than a string. Here we’ll copy one binary file to another: ``` f = open("somefile.zip", "rb") g = open("thecopy.zip", "wb") while True: buf = f.read(1024) if len(buf) == 0: break g.write(buf) f.close() g.close() ``` There are a few new things here. In lines 1 and 2 we added a `"b"` to the mode to tell Python that the files are binary rather than text files. In line 5, we see `read` can take an argument which tells it how many bytes to attempt to read from the file. Here we chose to read and write up to 1024 bytes on each iteration of the loop. When we get back an empty buffer from our attempt to read, we know we can break out of the loop and close both the files. 
If we set a breakpoint at line 6, (or print `type(buf)` there) we’ll see that the type of `buf` is `bytes` . We don’t do any detailed work with `bytes` objects in this textbook. Many useful line-processing programs will read a text file line-at-a-time and do some minor processing as they write the lines to an output file. They might number the lines in the output file, or insert extra blank lines after every 60 lines to make it convenient for printing on sheets of paper, or extract some specific columns only from each line in the source file, or only print lines that contain a specific substring. We call this kind of program a filter. Here is a filter that copies one file to another, omitting any lines that begin with `#` : ``` def filter(oldfile, newfile): infile = open(oldfile, "r") outfile = open(newfile, "w") while True: text = infile.readline() if len(text) == 0: break if text[0] == "#": continue # Put any more processing logic here outfile.write(text) infile.close() outfile.close() ``` The `continue` statement at line 9 skips over the remaining lines in the current iteration of the loop, but the loop will still iterate. This style looks a bit contrived here, but it is often useful to say “get the lines we’re not concerned with out of the way early, so that we have cleaner more focused logic in the meaty part of the loop that might be written around line 11.” Thus, if `text` is the empty string, the loop exits. If the first character of `text` is a hash mark, the flow of execution goes to the top of the loop, ready to start processing the next line. Only if both conditions fail do we fall through to do the processing at line 11, in this example, writing the line into the new file. Let’s consider one more case: suppose our original file contained empty lines. At line 6 above, would this program find the first empty line in the file, and terminate immediately? No! Recall that `readline` always includes the newline character in the string it returns. It is only when we try to read beyond the end of the file that we get back the empty string of length 0. Files on non-volatile storage media are organized by a set of rules known as a file system. File systems are made up of files and directories, which are containers for both files and other directories. When we create a new file by opening it and writing, the new file goes in the current directory (wherever we were when we ran the program). Similarly, when we open a file for reading, Python looks for it in the current directory. If we want to open a file somewhere else, we have to specify the path to the file, which is the name of the directory (or folder) where the file is located: ``` >>> wordsfile = open("/usr/share/dict/words", "r") >>> wordlist = wordsfile.readlines() >>> print(wordlist[:6]) ['\n', 'A\n', "A's\n", 'AOL\n', "AOL's\n", 'Aachen\n'] ``` This (Unix) example opens a file named `words` that resides in a directory named `dict` , which resides in `share` , which resides in `usr` , which resides in the top-level directory of the system, called `/` . It then reads in each line into a list using `readlines` , and prints out the first 5 elements from that list. A Windows path might be `"c:/temp/words.txt"` or ``` "c:\\temp\\words.txt" ``` . Because backslashes are used to escape things like newlines and tabs, we need to write two backslashes in a literal string to get one! So the length of these two strings is the same! 
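If you want to convince yourself of that last claim, a two-line check of our own makes the point:

```
p1 = "c:/temp/words.txt"
p2 = "c:\\temp\\words.txt"    # Each \\ in the source code is a single \ in the string

print(p2)                     # c:\temp\words.txt
print(len(p1), len(p2))       # 17 17 -- the two string values have the same length
```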
We cannot use `/` or `\` as part of a filename; they are reserved as a delimiter between directory and filenames. The file ``` /usr/share/dict/words ``` should exist on Unix-based systems, and contains a list of words in alphabetical order. The Python libraries are pretty messy in places. But here is a very simple example that copies the contents at some web URL to a local file. url = "https://www.ietf.org/rfc/rfc793.txt" destination_filename = "rfc793.txt" urllib.request.urlretrieve(url, destination_filename) ``` The `urlretrieve` function — just one call — could be used to download any kind of content from the Internet. We’ll need to get a few things right before this works:- The resource we’re trying to fetch must exist! Check this using a browser. - We’ll need permission to write to the destination filename, and the file will be created in the “current directory” - i.e. the same folder that the Python program is saved in. - If we are behind a proxy server that requires authentication, (as some students are), this may require some more special handling to work around our proxy. Use a local resource for the purpose of this demonstration! Here is a slightly different example. Rather than save the web resource to our local disk, we read it directly into a string, and return it: def retrieve_page(url): """ Retrieve the contents of a web page. The contents is converted to a string before returning it. """ my_socket = urllib.request.urlopen(url) dta = str(my_socket.read()) my_socket.close() return dta the_text = retrieve_page("https://www.ietf.org/rfc/rfc793.txt") print(the_text) ``` Opening the remote url returns what we call a socket. This is a handle to our end of the connection between our program and the remote web server. We can call read, write, and close methods on the socket object in much the same way as we can work with a file handle. delimiter A sequence of one or more characters used to specify the boundary between separate parts of text. directory A named collection of files, also called a folder. Directories can contain files and other directories, which are referred to as subdirectories of the directory that contains them. file A named entity, usually stored on a hard drive, floppy disk, or CD-ROM, that contains a stream of characters. file system A method for naming, accessing, and organizing files and the data they contain. handle An object in our program that is connected to an underlying resource (e.g. a file). The file handle lets our program manipulate/read/write/close the actual file that is on our disk. mode A distinct method of operation within a computer program. Files in Python can be opened in one of four modes: read ( `"r"` ), write ( `"w"` ), append ( `"a"` ), and read and write ( `"+"` ). non-volatile memory Memory that can maintain its state without power. Hard drives, flash drives, and rewritable compact disks (CD-RW) are each examples of non-volatile memory. path A sequence of directory names that specifies the exact location of a file. text file A file that contains printable characters organized into lines separated by newline characters. socket One end of a connection allowing one to read and write information to or from another computer. volatile memory Memory which requires an electrical current to maintain state. The main memory or RAM of a computer is volatile. Information stored in RAM is lost when the computer is turned off. `snake` . 
# Chapter 14: List Algorithms This chapter is a bit different from what we’ve done so far: rather than introduce more new Python syntax and features, we’re going to focus on the program development process, and some algorithms that work with lists. As in all parts of this book, our expectation is that you, the reader, will copy our code into your Python environment, play and experiment, and work along with us. Part of this chapter works with the book Alice in Wonderland and a vocabulary file. Download these two files to your local machine at the following links. https://learnpythontherightway.com/_downloads/alice_in_wonderland.txt and https://learnpythontherightway.com/_downloads/vocab.txt. Early in our Fruitful functions chapter we introduced the idea of incremental development, where we added small fragments of code to slowly build up the whole, so that we could easily find problems early. Later in that same chapter we introduced unit testing and gave code for our testing framework so that we could capture, in code, appropriate tests for the functions we were writing. Test-driven development (TDD) is a software development practice which takes these practices one step further. The key idea is that automated tests should be written first. This technique is called test-driven because — if we are to believe the extremists — non-testing code should only be written when there is a failing test to make pass. We can still retain our mode of working in small incremental steps, but now we’ll define and express those steps in terms of a sequence of increasingly sophisticated unit tests that demand more from our code at each stage. We’ll turn our attention to some standard algorithms that process lists now, but as we proceed through this chapter we’ll attempt to do so in the spirit envisaged by TDD. We’d like to know the index where a specific item occurs within in a list of items. Specifically, we’ll return the index of the item if it is found, or we’ll return -1 if the item doesn’t occur in the list. Let us start with some tests: ``` friends = ["Joe", "Zoe", "Brad", "Angelina", "Zuki", "Thandi", "Paris"] test(search_linear(friends, "Zoe") == 1) test(search_linear(friends, "Joe") == 0) test(search_linear(friends, "Paris") == 6) test(search_linear(friends, "Bill") == -1) ``` Motivated by the fact that our tests don’t even run, let alone pass, we now write the function: ``` def search_linear(xs, target): """ Find and return the index of target in sequence xs """ for (i, v) in enumerate(xs): if v == target: return i return -1 ``` There are a some points to learn here: We’ve seen a similar algorithm in section 8.10 when we searched for a character in a string. There we used a `while` loop, here we’ve used a `for` loop, coupled with `enumerate` to extract the `(i, v)` pair on each iteration. There are other variants — for example, we could have used `range` and made the loop run only over the indexes, or we could have used the idiom of returning `None` when the item was not found in the list. But the essential similarity in all these variations is that we test every item in the list in turn, from first to last, using the pattern of the short-circuit eureka traversal that we introduced earlier —that we return from the function as soon as we find the target that we’re looking for. Searching all items of a sequence from first to last is called a linear search. Each time we check whether `v == target` we’ll call it a probe. 
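Since the next paragraphs reason about the number of probes, it can be instructive to instrument the search and count them. Here is a small sketch of our own (the function name and the tuple result are our choices, not part of the chapter's code), using the `friends` list from the tests above:

```
def search_linear_count_probes(xs, target):
    """ Like search_linear, but also report how many probes were made.
        Returns a (index, probes) tuple, with index == -1 if target is absent.
    """
    probes = 0
    for (i, v) in enumerate(xs):
        probes += 1
        if v == target:
            return (i, probes)
    return (-1, probes)

print(search_linear_count_probes(friends, "Zuki"))   # (4, 5)
print(search_linear_count_probes(friends, "Bill"))   # (-1, 7)
```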
We like to count probes as a measure of how efficient our algorithm is, and this will be a good enough indication of how long our algorithm will take to execute. Linear searching is characterized by the fact that the number of probes needed to find some target depends directly on the length of the list. So if the list becomes ten times bigger, we can expect to wait ten times longer when searching for things. Notice too, that if we’re searching for a target that is not present in the list, we’ll have to go all the way to the end before we can return the negative value. So this case needs N probes, where N is the length of the list. However, if we’re searching for a target that does exist in the list, we could be lucky and find it immediately in position 0, or we might have to look further, perhaps even all the way to the last item. On average, when the target is present, we’re going to need to go about halfway through the list, or N/2 probes. We say that this search has linear performance (linear meaning straight line) because, if we were to measure the average search times for different sizes of lists (N), and then plot a graph of time-to-search against N, we’d get a more-or-less straight line graph. Analysis like this is pretty meaningless for small lists — the computer is quick enough not to bother if the list only has a handful of items. So generally, we’re interested in the scalability of our algorithms — how do they perform if we throw bigger problems at them. Would this search be a sensible one to use if we had a million or ten million items (perhaps the catalog of books in your local library) in our list? What happens for really large datasets, e.g. how does Google search so brilliantly well? As children learn to read, there are expectations that their vocabulary will grow. So a child of age 14 is expected to know more words than a child of age 8. When prescribing reading books for a grade, an important question might be “which words in this book are not in the expected vocabulary at this level?” Let us assume we can read a vocabulary of words into our program, and read the text of a book, and split it into words. Let us write some tests for what we need to do next. Test data can usually be very small, even if we intend to finally use our program for larger cases: ``` vocab = ["apple", "boy", "dog", "down", "fell", "girl", "grass", "the", "tree"] book_words = "the apple fell from the tree to the grass".split() test(find_unknown_words(vocab, book_words) == ["from", "to"]) test(find_unknown_words([], book_words) == book_words) test(find_unknown_words(vocab, ["the", "boy", "fell"]) == []) ``` Notice we were a bit lazy, and used `split` to create our list of words —it is easier than typing out the list, and very convenient if you want to input a sentence into the program and turn it into a list of words. We now need to implement the function for which we’ve written tests, and we’ll make use of our linear search. The basic strategy is to run through each of the words in the book, look it up in the vocabulary, and if it is not in the vocabulary, save it into a new resulting list which we return from the function: ``` def find_unknown_words(vocab, wds): """ Return a list of words in wds that do not occur in vocab """ result = [] for w in wds: if (search_linear(vocab, w) < 0): result.append(w) return result ``` We can happily report now that the tests all pass. Now let us look at the scalability. 
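The timings reported later in this chapter can be collected with the `process_time` technique from the modules chapter. A sketch of such a harness (our own wrapper, shown here on the small test data; the same pattern applies to the bigger data loaded next):

```
import time

t0 = time.process_time()
missing = find_unknown_words(vocab, book_words)
t1 = time.process_time()

print("There are {0} unknown words.".format(len(missing)))
print("That took {0:.4f} seconds.".format(t1 - t0))
```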
We have more realistic vocabulary in the text file that could be downloaded at the beginning of this chapter. Upload the `vocab.txt` file to a new repl so that you can access it from your code. Now let us read in the file (as a single string) and split it into a list of words. For convenience, we’ll create a function to do this for us, and test it on the vocab file. ``` def load_words_from_file(filename): """ Read words from filename, return list of words. """ f = open(filename, "r") file_content = f.read() f.close() wds = file_content.split() return wds bigger_vocab = load_words_from_file("vocab.txt") print("There are {0} words in the vocab, starting with\n {1} " .format(len(bigger_vocab), bigger_vocab[:6])) ``` Python responds with: ``` There are 19469 words in the vocab, starting with ['a', 'aback', 'abacus', 'abandon', 'abandoned', 'abandonment'] ``` So we’ve got a more sensible size vocabulary. Now let us load up a book, once again we’ll use the one we downloaded at the beginning of this chapter. Loading a book is much like loading words from a file, but we’re going to do a little extra black magic. Books are full of punctuation, and have mixtures of lowercase and uppercase letters. We need to clean up the contents of the book. This will involve removing punctuation, and converting everything to the same case (lowercase, because our vocabulary is all in lowercase). So we’ll want a more sophisticated way of converting text to words. ``` test(text_to_words("My name is Earl!") == ["my", "name", "is", "earl"]) test(text_to_words('"Well, I never!", said Alice.') == ["well", "i", "never", "said", "alice"]) ``` There is a powerful `translate` method available for strings. The idea is that one sets up desired substitutions — for every character, we can give a corresponding replacement character. The `translate` method will apply these replacements throughout the whole string. So here we go: ``` def text_to_words(the_text): """ return a list of words with all punctuation removed, and all in lowercase. """ my_substitutions = the_text.maketrans( # If you find any of these "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!\"#$%&()*+,-./:;<=>?@[]^_`{|}~'\\", # Replace them by these "abcdefghijklmnopqrstuvwxyz ") # Translate the text now. cleaned_text = the_text.translate(my_substitutions) wds = cleaned_text.split() return wds ``` The translation turns all uppercase characters into lowercase, and all punctuation characters and digits into spaces. Then, of course, `split` will get rid of the spaces as it breaks the text into a list of words. The tests pass. Now we’re ready to read in our book: ``` def get_words_in_book(filename): """ Read a book from filename, and return a list of its words. """ f = open(filename, "r") content = f.read() f.close() wds = text_to_words(content) return wds book_words = get_words_in_book("alice_in_wonderland.txt") print("There are {0} words in the book, the first 100 are\n{1}". 
format(len(book_words), book_words[:100])) ``` Python prints the following (all on one line, we’ve cheated a bit for the textbook): ``` There are 27336 words in the book, the first 100 are ['alice', 's', 'adventures', 'in', 'wonderland', 'lewis', 'carroll', 'chapter', 'i', 'down', 'the', 'rabbit', 'hole', 'alice', 'was', 'beginning', 'to', 'get', 'very', 'tired', 'of', 'sitting', 'by', 'her', 'sister', 'on', 'the', 'bank', 'and', 'of', 'having', 'nothing', 'to', 'do', 'once', 'or', 'twice', 'she', 'had', 'peeped', 'into', 'the', 'book', 'her', 'sister', 'was', 'reading', 'but', 'it', 'had', 'no', 'pictures', 'or', 'conversations', 'in', 'it', 'and', 'what', 'is', 'the', 'use', 'of', 'a', 'book', 'thought', 'alice', 'without', 'pictures', 'or', 'conversation', 'so', 'she', 'was', 'considering', 'in', 'her', 'own', 'mind', 'as', 'well', 'as', 'she', 'could', 'for', 'the', 'hot', 'day', 'made', 'her', 'feel', 'very', 'sleepy', 'and', 'stupid', 'whether', 'the', 'pleasure', 'of', 'making', 'a'] ``` Well now we have all the pieces ready. Let us see what words in this book are not in the vocabulary: ``` >>> missing_words = find_unknown_words(bigger_vocab, book_words) ``` We wait a considerable time now, something like a minute, before Python finally works its way through this, and prints a list of 3398 words in the book that are not in the vocabulary. Mmm… This is not particularly scaleable. For a vocabulary that is twenty times larger (you’ll often find school dictionaries with 300 000 words, for example), and longer books, this is going to be slow. So let us make some timing measurements while we think about how we can improve this in the next section. We get the results and some timing that we can refer back to later: ``` There are 3398 unknown words. That took 49.8014 seconds. ``` If you think about what we’ve just done, it is not how we work in real life. If you were given a vocabulary and asked to tell if some word was present, you’d probably start in the middle. You can do this because the vocabulary is ordered — so you can probe some word in the middle, and immediately realize that your target was before (or perhaps after) the one you had probed. Applying this principle repeatedly leads us to a very much better algorithm for searching in a list of items that are already ordered. (Note that if the items are not ordered, you have little choice other than to look through all of them. But, if we know the items are in order, we can improve our searching technique). Lets start with some tests. Remember, the list needs to be sorted: ``` xs = [2,3,5,7,11,13,17,23,29,31,37,43,47,53] test(search_binary(xs, 20) == -1) test(search_binary(xs, 99) == -1) test(search_binary(xs, 1) == -1) for (i, v) in enumerate(xs): test(search_binary(xs, v) == i) ``` Even our test cases are interesting this time: notice that we start with items not in the list and look at boundary conditions — in the middle of the list, less than all items in the list, bigger than the biggest. Then we use a loop to use every list item as a target, and to confirm that our binary search returns the corresponding index of that item in the list. It is useful to think about having a region-of-interest (ROI) within the list being searched. This ROI will be the portion of the list in which it is still possible that our target might be found. Our algorithm will start with the ROI set to all the items in the list. 
On the first probe in the middle of the ROI, there are three possible outcomes: either we find the target, or we learn that we can discard the top half of the ROI, or we learn that we can discard the bottom half of the ROI. And we keep doing this repeatedly, until we find our target, or until we end up with no more items in our region of interest. We can code this as follows: ``` def search_binary(xs, target): """ Find and return the index of key in sequence xs """ lb = 0 ub = len(xs) while True: if lb == ub: # If region of interest (ROI) becomes empty return -1 # Next probe should be in the middle of the ROI mid_index = (lb + ub) // 2 # Fetch the item at that position item_at_mid = xs[mid_index] # print("ROI[{0}:{1}](size={2}), probed='{3}', target='{4}'" # .format(lb, ub, ub-lb, item_at_mid, target)) # How does the probed item compare to the target? if item_at_mid == target: return mid_index # Found it! if item_at_mid < target: lb = mid_index + 1 # Use upper half of ROI next time else: ub = mid_index # Use lower half of ROI next time ``` The region of interest is represented by two variables, a lower bound `lb` and an upper bound `ub` . It is important to be precise about what values these indexes have. We’ll make `lb` hold the index of the first item in the ROI, and make `ub` hold the index just beyond the last item of interest. So these semantics are similar to a Python slice semantics: the region of interest is exactly the slice `xs[lb:ub]` . (The algorithm never actually takes any array slices!) With this code in place, our tests pass. Great. Now if we substitute a call to this search algorithm instead of calling the `search_linear` in `find_unknown_words` , can we improve our performance? Let’s do that, and again run this test: What a spectacular difference! More than 200 times faster! ``` There are 3398 unknown words. That took 0.2262 seconds. ``` Why is this binary search so much faster than the linear search? If we uncomment the print statement on lines 15 and 16, we’ll get a trace of the probes done during a search. Let’s go ahead, and try that: ``` >>> search_binary(bigger_vocab, "magic") ROI[0:19469](size=19469), probed='known', target='magic' ROI[9735:19469](size=9734), probed='retailer', target='magic' ROI[9735:14602](size=4867), probed='overthrow', target='magic' ROI[9735:12168](size=2433), probed='mission', target='magic' ROI[9735:10951](size=1216), probed='magnificent', target='magic' ROI[9735:10343](size=608), probed='liken', target='magic' ROI[10040:10343](size=303), probed='looks', target='magic' ROI[10192:10343](size=151), probed='lump', target='magic' ROI[10268:10343](size=75), probed='machete', target='magic' ROI[10306:10343](size=37), probed='mafia', target='magic' ROI[10325:10343](size=18), probed='magnanimous', target='magic' ROI[10325:10334](size=9), probed='magical', target='magic' ROI[10325:10329](size=4), probed= maggot', target='magic' ROI[10328:10329](size=1), probed='magic', target='magic' 10328 ``` Here we see that finding the target word “magic” needed just 14 probes before it was found at index 10328. The important thing is that each probe halves (with some truncation) the remaining region of interest. By contrast, the linear search would have needed 10329 probes to find the same target word. The word binary means two. Binary search gets its name from the fact that each probe splits the list into two pieces and discards the one half from the region of interest. 
The beauty of the algorithm is that we could double the size of the vocabulary, and it would only need one more probe! And after another doubling, just another one probe. So as the vocabulary gets bigger, this algorithm's performance becomes even more impressive.

Can we put a formula to this? If our list size is N, what is the biggest number of probes k we could need? The maths is a bit easier if we turn the question around: how big a list N could we deal with, given that we were only allowed to make k probes? With 1 probe, we can only search a list of size 1. With two probes we could cope with lists up to size 3 (test the middle item with the first probe, then test either the left or right sublist with the second probe). With one more probe, we could cope with 7 items (the middle item, and two sublists of size 3). With four probes, we can search 15 items, and 5 probes lets us search up to 31 items. So the general relationship is given by the formula `N = 2 ** k - 1` where k is the number of probes we're allowed to make, and N is the maximum size of the list that can be searched in that many probes. This function is exponential in k (because k occurs in the exponent part).

If we wanted to turn the formula around and solve for k in terms of N, we need to move the constant 1 to the other side, and take a log (base 2) on each side. (The log is the inverse of an exponent.) So the formula for k in terms of N is now:

`k = ⌈ log2(N + 1) ⌉`

The square-only-on-top brackets are called ceiling brackets: this means that you must round the number up to the next whole integer. Let us try this on a calculator, or in Python, which is the mother of all calculators: suppose I have 1000 elements to be searched, what is the maximum number of probes I'll need? (There is a pesky +1 in the formula, so let us not forget to add it on…):

```
>>> from math import log
>>> log(1000 + 1, 2)
9.967226258835993
```

Telling us that we'll need at most 9.97 probes to search 1000 items is not quite what we want. We forgot to take the ceiling. The `ceil` function in the `math` module does exactly this. So more accurately, now:

```
>>> from math import log, ceil
>>> ceil(log(1000 + 1, 2))
10
>>> ceil(log(1000000 + 1, 2))
20
>>> ceil(log(1000000000 + 1, 2))
30
```

This tells us that searching 1000 items needs 10 probes. (Well technically, with 10 probes we can search exactly 1023 items, but the easy and useful stuff to remember here is that "1000 items needs 10 probes, a million needs 20 probes, and a billion items only needs 30 probes").

You will rarely encounter algorithms that scale to large datasets as beautifully as binary search does!

We often want to get the unique elements in a list, i.e. produce a new list in which each different element occurs just once. Consider our case of looking for words in Alice in Wonderland that are not in our vocabulary. We had a report that there are 3398 such words, but there are duplicates in that list. In fact, the word "alice" occurs 398 times in the book, and it is not in our vocabulary! How should we remove these duplicates?

A good approach is to sort the list, then remove all adjacent duplicates. Let us start with removing adjacent duplicates:

```
test(remove_adjacent_dups([1,2,3,3,3,3,5,6,9,9]) == [1,2,3,5,6,9])
test(remove_adjacent_dups([]) == [])
test(remove_adjacent_dups(["a", "big", "big", "bite", "dog"]) ==
                          ["a", "big", "bite", "dog"])
```

The algorithm is easy and efficient.
We simply have to remember the most recent item that was inserted into the result, and avoid inserting it again: ``` def remove_adjacent_dups(xs): """ Return a new list in which all adjacent duplicates from xs have been removed. """ result = [] most_recent_elem = None for e in xs: if e != most_recent_elem: result.append(e) most_recent_elem = e return result ``` The amount of work done in this algorithm is linear — each item in `xs` causes the loop to execute exactly once, and there are no nested loops. So doubling the number of elements in `xs` should cause this function to run twice as long: the relationship between the size of the list and the time to run will be graphed as a straight (linear) line. Let us go back now to our analysis of Alice in Wonderland. Before checking the words in the book against the vocabulary, we’ll sort those words into order, and eliminate duplicates. So our new code looks like this: ``` all_words = get_words_in_book("alice_in_wonderland.txt") all_words.sort() book_words = remove_adjacent_dups(all_words) print("There are {0} words in the book. Only {1} are unique.". format(len(all_words), len(book_words))) print("The first 100 words are\n{0}". format(book_words[:100])) ``` Almost magically, we get the following output: ``` There are 27336 words in the book. Only 2570 are unique. The first 100 words are ['_i_', 'a', 'abide', 'able', 'about', 'above', 'absence', 'absurd', 'acceptance', 'accident', 'accidentally', 'account', 'accounting', 'accounts', 'accusation', 'accustomed', 'ache', 'across', 'act', 'actually', 'ada', 'added', 'adding', 'addressed', 'addressing', 'adjourn', 'adoption', 'advance', 'advantage', 'adventures', 'advice', 'advisable', 'advise', 'affair', 'affectionately', 'afford', 'afore', 'afraid', 'after', 'afterwards', 'again', 'against', 'age', 'ago', 'agony', 'agree', 'ah', 'ahem', 'air', 'airs', 'alarm', 'alarmed', 'alas', 'alice', 'alive', 'all', 'allow', 'almost', 'alone', 'along', 'aloud', 'already', 'also', 'altered', 'alternately', 'altogether', 'always', 'am', 'ambition', 'among', 'an', 'ancient', 'and', 'anger', 'angrily', 'angry', 'animal', 'animals', 'ann', 'annoy', 'annoyed', 'another', 'answer', 'answered', 'answers', 'antipathies', 'anxious', 'anxiously', 'any', 'anything', 'anywhere', 'appealed', 'appear', 'appearance', 'appeared', 'appearing', 'applause', 'apple', 'apples', 'arch'] ``` <NAME> was able to write a classic piece of literature using only 2570 different words! Suppose we have two sorted lists. Devise an algorithm to merge them together into a single sorted list. A simple but inefficient algorithm could be to simply append the two lists together, and sort the result: ``` newlist = (xs + ys) newlist.sort() ``` But this doesn’t take advantage of the fact that the two lists are already sorted, and is going to have poor scalability and performance for very large lists. Lets get some tests together first: ``` xs = [1,3,5,7,9,11,13,15,17,19] ys = [4,8,12,16,20,24] zs = xs+ys zs.sort() test(merge(xs, []) == xs) test(merge([], ys) == ys) test(merge([], []) == []) test(merge(xs, ys) == zs) test(merge([1,2,3], [3,4,5]) == [1,2,3,3,4,5]) test(merge(["a", "big", "cat"], ["big", "bite", "dog"]) == ["a", "big", "big", "bite", "cat", "dog"]) ``` Here is our merge algorithm: ``` def merge(xs, ys): """ merge sorted lists xs and ys. Return a sorted result """ result = [] xi = 0 yi = 0 while True: if xi >= len(xs): # If xs list is finished, result.extend(ys[yi:]) # Add remaining items from ys return result # And we're done. 
if yi >= len(ys): # Same again, but swap roles result.extend(xs[xi:]) return result # Both lists still have items, copy smaller item to result. if xs[xi] <= ys[yi]: result.append(xs[xi]) xi += 1 else: result.append(ys[yi]) yi += 1 ``` The algorithm works as follows: we create a result list, and keep two indexes, one into each list (lines 3-5). On each iteration of the loop, whichever list item is smaller is copied to the result list, and that list’s index is advanced. As soon as either index reaches the end of its list, we copy all the remaining items from the other list into the result, which we return. Underlying the algorithm for merging sorted lists is a deep pattern of computation that is widely reusable. The pattern essence is “Run through the lists always processing the smallest remaining items from each, with these cases to consider:” Lets assume we have two sorted lists. Exercise your algorithmic skills by adapting the merging algorithm pattern for each of these cases: would return `[5,11,11,12,13]` In the previous section we sorted the words from the book, and eliminated duplicates. Our vocabulary is also sorted. So third case above — find all items in the second list that are not in the first list, would be another way to implement `find_unknown_words` . Instead of searching for every word in the dictionary (either by linear or binary search), why not use a variant of the merge to return the words that occur in the book, but not in the vocabulary. ``` def find_unknowns_merge_pattern(vocab, wds): """ Both the vocab and wds must be sorted. Return a new list of words from wds that do not occur in vocab. """ result = [] xi = 0 yi = 0 while True: if xi >= len(vocab): result.extend(wds[yi:]) return result if yi >= len(wds): return result if vocab[xi] == wds[yi]: # Good, word exists in vocab yi += 1 elif vocab[xi] < wds[yi]: # Move past this vocab word, xi += 1 else: # Got word that is not in vocab result.append(wds[yi]) yi += 1 ``` Now we put it all together: Even more stunning performance here: ``` There are 828 unknown words. That took 0.0410 seconds. ``` Let’s review what we’ve done. We started with a word-by-word linear lookup in the vocabulary that ran in about 50 seconds. We implemented a clever binary search, and got that down to 0.22 seconds, more than 200 times faster. But then we did something even better: we sorted the words from the book, eliminated duplicates, and used a merging pattern to find words from the book that were not in the dictionary. This was about five times faster than even the binary lookup algorithm. At the end of the chapter our algorithm is more than a 1000 times faster than our first attempt! That is what we can call a good day at the office! As told by Wikipedia, “The eight queens puzzle is the problem of placing eight chess queens on an 8x8 chessboard so that no two queens attack each other. Thus, a solution requires that no two queens share the same row, column, or diagonal.” Please try this yourself, and find a few more solutions by hand. We’d like to write a program to find solutions to this puzzle. In fact, the puzzle generalizes to placing N queens on an NxN board, so we’re going to think about the general case, not just the 8x8 case. Perhaps we can find solutions for 12 queens on a 12x12 board, or 20 queens on a 20x20 board. How do we approach a complex problem like this? A good starting point is to think about our data structures — how exactly do we plan to represent the state of the chessboard and its queens in our program? 
Once we have some handle on what our puzzle is going to look like in memory, we can begin to think about the functions and logic we’ll need to solve the puzzle, i.e. how do we put another queen onto the board somewhere, and to check whether it clashes with any of the queens already on the board. The steps of finding a good representation, and then finding a good algorithm to operate on the data cannot always be done independently of each other. As you think about the operations you require, you may want to change or reorganize the data somewhat to make it easier to do the operations you need. This relationship between algorithms and data was elegantly expressed in the title of a book Algorithms + Data Structures = Programs, written by one of the pioneers in Computer Science, <NAME>, the inventor of Pascal. Let’s brainstorm some ideas about how a chessboard and queens could be represented in memory. A two dimensional matrix (a list of 8 lists, each containing 8 squares) is one possibility. At each square of the board would like to know whether it contains a queen or not — just two possible states for each square — so perhaps each element in the lists could be True or False, or, more simply, 0 or 1. Our state for the solution above could then have this data representation: ``` bd1 = [[0,0,0,1,0,0,0,0], [0,0,0,0,0,0,1,0], [0,0,1,0,0,0,0,0], [0,0,0,0,0,0,0,1], [0,1,0,0,0,0,0,0], [0,0,0,0,1,0,0,0], [1,0,0,0,0,0,0,0], [0,0,0,0,0,1,0,0]] ``` You should also be able to see how the empty board would be represented, and you should start to imagine what operations or changes you’d need to make to the data to place another queen somewhere on the board. Another idea might be to keep a list of coordinates of where the queens are. Using the notation in the illustration, for example, we could represent the state of that solution as: ``` bd2 = [ "a6", "b4", "c2", "d0", "e5", "f7", "g1", "h3" ] ``` We could make other tweaks to this — perhaps each element in this list should rather be a tuple, with integer coordinates for both axes. And being good computer scientists, we’d probably start numbering each axis from 0 instead of at 1. Now our representation could be: ``` bd3 = [(0,6), (1,4), (2,2), (3,0), (4,5), (5,7), (6,1), (7,3)] ``` Looking at this representation, we can’t help but notice that the first coordinates are `0,1,2,3,4,5,6,7` and they correspond exactly to the index position of the pairs in the list. So we could discard them, and come up with this really compact alternative representation of the solution: ``` bd4 = [6, 4, 2, 0, 5, 7, 1, 3] ``` This will be what we’ll use, let’s see where that takes us. Let us now take some grand insight into the problem. Do you think it is a coincidence that there are no repeated numbers in the solution? The solution `[6,4,2,0,5,7,1,3]` contains the numbers `0,1,2,3,4,5,6,7` , but none are duplicated! Could other solutions contain duplicate numbers, or not? A little thinking should convince you that there can never be duplicate numbers in a solution: the numbers represent the row on which the queen is placed, and because we are never permitted to put two queens in the same row, no solution will ever have duplicate row numbers in it. Our key insight In our representation, any solution to the N queens problem musttherefore be a permutation of the numbers [0 .. N-1]. Note that not all permutations are solutions. For example, `[0,1,2,3,4,5,6,7]` has all queens on the same diagonal. 
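As a quick sanity check of that insight: a board in the compact representation uses each row exactly once precisely when sorting it gives back [0 .. N-1]. Here is a tiny helper of our own (the chapter itself never needs it, because it only ever generates permutations):

```
def uses_each_row_once(bd):
    """ True if bd places exactly one queen in every row 0 .. N-1. """
    return sorted(bd) == list(range(len(bd)))

print(uses_each_row_once([6, 4, 2, 0, 5, 7, 1, 3]))   # True  -- the solution above
print(uses_each_row_once([6, 4, 2, 0, 5, 7, 1, 1]))   # False -- row 1 used twice, row 3 unused
```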
Wow, we seem to be making progress on this problem merely by thinking, rather than coding! Our algorithm should start taking shape now. We can start with the list [0..N-1], generate various permutations of that list, and check each permutation to see if it has any clashes (queens that are on the same diagonal). If it has no clashes, it is a solution, and we can print it.

Let us be precise and clear on this issue: if we only use permutations of the rows, and we're using our compact representation, no queens can clash on either rows or columns, and we don't even have to concern ourselves with those cases. So the only clashes we need to test for are clashes on the diagonals.

It sounds like a useful function will be one that can test if two queens share a diagonal. Each queen is on some (x,y) position. So does the queen at (5,2) share a diagonal with the one at (2,0)? Does (5,2) clash with (3,0)?

```
test(not share_diagonal(5,2,2,0))
test(share_diagonal(5,2,3,0))
test(share_diagonal(5,2,4,3))
test(share_diagonal(5,2,4,1))
```

A little geometry will help us here. A diagonal has a slope of either 1 or -1. The question we really want to ask is: is the distance between the two queens the same in the x direction as in the y direction? If it is, they share a diagonal. Because diagonals can be to the left or right, it will make sense for this program to use the absolute distance in each direction:

```
def share_diagonal(x0, y0, x1, y1):
    """ Is (x0, y0) on a shared diagonal with (x1, y1)? """
    dy = abs(y1 - y0)        # Calc the absolute y distance
    dx = abs(x1 - x0)        # Calc the absolute x distance
    return dx == dy          # They clash if dx == dy
```

If you copy the code and run it, you'll be happy to learn that the tests pass!

Now let's consider how we construct a solution by hand. We'll put a queen somewhere in the first column, then place one in the second column, only if it does not clash with the one already on the board. And then we'll put a third one on, checking it against the two queens already to its left. When we consider the queen on column 6, we'll need to check for clashes against those in all the columns to its left, i.e. in columns 0,1,2,3,4,5.

So the next building block is a function that, given a partially completed puzzle, can check whether the queen at column `c` clashes with any of the queens to its left, at columns 0,1,2,..c-1:

```
# Solution cases that should not have any clashes
test(not col_clashes([6,4,2,0,5], 4))
test(not col_clashes([6,4,2,0,5,7,1,3], 7))
# More test cases that should mostly clash
test(col_clashes([0,1], 1))
test(col_clashes([5,6], 1))
test(col_clashes([6,5], 1))
test(col_clashes([0,6,4,3], 3))
test(col_clashes([5,0,7], 2))
test(not col_clashes([2,0,1,3], 1))
test(col_clashes([2,0,1,3], 2))
```

Here is our function that makes them all pass:

```
def col_clashes(bs, c):
    """ Return True if the queen at column c clashes
        with any queen to its left.
    """
    for i in range(c):     # Look at all columns to the left of c
        if share_diagonal(i, bs[i], c, bs[c]):
            return True
    return False           # No clashes - col c has a safe placement.
```

Finally, we're going to give our program one of our permutations — i.e. all queens placed somewhere, one on each row, one on each column. But does the permutation have any diagonal clashes?
```
test(not has_clashes([6,4,2,0,5,7,1,3]))   # Solution from above
test(has_clashes([4,6,2,0,5,7,1,3]))       # Swap rows of first two
test(has_clashes([0,1,2,3]))               # Try small 4x4 board
test(not has_clashes([2,0,3,1]))           # Solution to 4x4 case
```

And the code to make the tests pass:

```
def has_clashes(the_board):
    """ Determine whether we have any queens clashing on the diagonals.
        We're assuming here that the_board is a permutation of column
        numbers, so we're not explicitly checking row or column clashes.
    """
    for col in range(1, len(the_board)):
        if col_clashes(the_board, col):
            return True
    return False
```

Summary of what we've done so far: we now have a powerful function called `has_clashes` that can tell if a configuration is a solution to the queens puzzle. Let's get on now with generating lots of permutations and finding solutions!

This is the fun, easy part. We could try to find all permutations of `[0,1,2,3,4,5,6,7]` — that might be algorithmically challenging, and would be a brute force way of tackling the problem. We just try everything, and find all possible solutions. Of course we know there are N! permutations of N things, so we can get an early idea of how long it would take to search all of them for all solutions. Not too long at all, actually: 8! is only 40320 different cases to check out. This is vastly better than starting with 64 places to put eight queens. If you do the sums for how many ways you can choose 8 of the 64 squares for your queens, the formula (called N choose k, where you're choosing k=8 squares of the available N=64) yields a whopping 4426165368, obtained from (64! / (8! x 56!)). So our earlier key insight — that we only need to consider permutations — has reduced what we call the problem space from about 4.4 billion cases to just 40320!

We're not even going to explore all those, however. When we introduced the random number module, we learned that it had a `shuffle` method that randomly permuted a list of items. So we're going to write a "random" algorithm to find solutions to the N queens problem. We'll begin with the permutation [0,1,2,3,4,5,6,7] and we'll repeatedly shuffle the list, and test each to see if it works! Along the way we'll count how many attempts we need before we find each solution, and we'll find 10 solutions (we could hit the same solution more than once, because shuffle is random!):

```
def main():
    import random
    rng = random.Random()      # Instantiate a generator
    # ... the shuffle-and-test loop goes here (see the sketch below) ...
```

Almost magically, and at great speed, we get this:

```
Found solution [3, 6, 2, 7, 1, 4, 0, 5] in 693 tries.
Found solution [5, 7, 1, 3, 0, 6, 4, 2] in 82 tries.
Found solution [3, 0, 4, 7, 1, 6, 2, 5] in 747 tries.
Found solution [1, 6, 4, 7, 0, 3, 5, 2] in 428 tries.
Found solution [6, 1, 3, 0, 7, 4, 2, 5] in 376 tries.
Found solution [3, 0, 4, 7, 5, 2, 6, 1] in 204 tries.
Found solution [4, 1, 7, 0, 3, 6, 2, 5] in 98 tries.
Found solution [3, 5, 0, 4, 1, 7, 2, 6] in 64 tries.
Found solution [5, 1, 6, 0, 3, 7, 4, 2] in 177 tries.
Found solution [1, 6, 2, 5, 7, 4, 0, 3] in 478 tries.
```

Here is an interesting fact. On an 8x8 board, there are known to be 92 different solutions to this puzzle. We are randomly picking one of 40320 possible permutations of our representation. So our chances of picking a solution on each try are 92/40320. Put another way, on average we'll need 40320/92 tries — about 438.26 — before we stumble across a solution. The number of tries we printed looks like our experimental data agrees quite nicely with our theory! Save this code for later.
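Only the first few lines of that program survived in this copy, so here is a sketch of what the missing search loop might look like, pieced together from the description and the output shown above (the names `bd`, `num_found` and `tries` are our own; `has_clashes` is the function we wrote earlier):

```
def main():
    import random
    rng = random.Random()          # Instantiate a generator

    bd = list(range(8))            # A permutation of 0..7 that we'll shuffle
    num_found = 0
    tries = 0
    while num_found < 10:
        rng.shuffle(bd)            # Make a random permutation of the rows
        tries += 1
        if not has_clashes(bd):    # It is a solution!
            print("Found solution {0} in {1} tries.".format(bd, tries))
            tries = 0
            num_found += 1

main()
```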
In the chapter on PyGame we plan to write a module to draw the board with its queens, and integrate that module with this code. binary search A famous algorithm that searches for a target in a sorted list. Each probe in the list allows us to discard half the remaining items, so the algorithm is very efficient. linear Relating to a straight line. Here, we talk about graphing how the time taken by an algorithm depends on the size of the data it is processing. Linear algorithms have straight-line graphs that can describe this relationship. linear search A search that probes each item in a list or sequence, from first, until it finds what it is looking for. It is used for searching for a target in unordered lists of items. Merge algorithm An efficient algorithm that merges two already sorted lists, to produce a sorted list result. The merge algorithm is really a pattern of computation that can be adapted and reused for various other scenarios, such as finding words that are in a book, but not in a vocabulary. probe Each time we take a look when searching for an item is called a probe. In our chapter on Iteration we also played a guessing game where the computer tried to guess the user’s secret number. Each of those tries would also be called a probe. test-driven development (TDD) A software development practice which arrives at a desired feature through a series of small, iterative steps motivated by automated tests which are written first that express increasing refinements of the desired feature. (see the Wikipedia article on Test-driven development for more information.) The section in this chapter called Alice in Wonderland, again started with the observation that the merge algorithm uses a pattern that can be reused in other situations. Adapt the merge algorithm to write each of these functions, as was suggested there: would return `[5,11,11,12,13]` Modify the queens program to solve some boards of size 4, 12, and 16. What is the maximum size puzzle you can usually solve in under a minute? Adapt the queens program so that we keep a list of solutions that have already printed, so that we don’t print the same solution more than once. Chess boards are symmetric: if we have a solution to the queens problem, its mirror solution — either flipping the board on the X or in the Y axis, is also a solution. And giving the board a 90 degree, 180 degree, or 270 degree rotation is also a solution. In some sense, solutions that are just mirror images or rotations of other solutions — in the same family —are less interesting than the unique “core cases”. Of the 92 solutions for the 8 queens problem, there are only 12 unique families if you take rotations and mirror images into account. Wikipedia has some fascinating stuff about this. Write a function to rotate a solution by 90 degrees anti-clockwise, and use this to provide 180 and 270 degree rotations too. Write a function which is given a solution, and it generates the family of symmetries for that solution. For example, the symmetries of `[0,4,7,5,2,6,1,3]` are : ``` [[0,4,7,5,2,6,1,3],[7,1,3,0,6,4,2,5], [4,6,1,5,2,0,3,7],[2,5,3,1,7,4,6,0], [3,1,6,2,5,7,4,0],[0,6,4,7,1,3,5,2], [7,3,0,2,5,1,6,4],[5,2,4,6,0,3,1,7]] ``` Now adapt the queens program so it won’t list solutions that are in the same family. It only prints solutions from unique families. Every week a computer scientist buys four lotto tickets. 
She always chooses the same prime numbers, with the hope that if she ever hits the jackpot, she will be able to go onto TV and Facebook and tell everyone her secret. This will suddenly create widespread public interest in prime numbers, and will be the trigger event that ushers in a new age of enlightenment. She represents her weekly tickets in Python as a list of lists: ``` my_tickets = [ [ 7, 17, 37, 19, 23, 43], [ 7, 2, 13, 41, 31, 43], [ 2, 5, 7, 11, 13, 17], [13, 17, 37, 19, 23, 43] ] ``` Complete these exercises. Each lotto draw takes six random balls, numbered from 1 to 49. Write a function to return a lotto draw. Write a function that compares a single ticket and a draw, and returns the number of correct picks on that ticket: ``` test(lotto_match([42,4,7,11,1,13], [2,5,7,11,13,17]) == 3) ``` Write a function that takes a list of tickets and a draw, and returns a list telling how many picks were correct on each ticket: ``` test(lotto_matches([42,4,7,11,1,13], my_tickets) == [1,2,3,1]) ``` Write a function that takes a list of integers, and returns the number of primes in the list: ``` test(primes_in([42, 4, 7, 11, 1, 13]) == 3) ``` Write a function to discover whether the computer scientist has missed any prime numbers in her selection of the four tickets. Return a list of all primes that she has missed: ``` test(prime_misses(my_tickets) == [3, 29, 47]) ``` Write a function that repeatedly makes a new draw, and compares the draw to the four tickets. Notice that we have difficulty constructing test cases here, because our random numbers are not deterministic. Automated testing only really works if you already know what the answer should be! Read Alice in Wonderland. You can read the plain text version we have with this textbook, or if youhave e-book reader software on your PC, or a Kindle, iPhone, Android, etc. you’ll be able to find a suitable version for your device at http://www.gutenberg.org. They also have html and pdf versions, with pictures, and thousands of other classic books! # Chapter 15: Classes and Objects — the Basics Python is an object-oriented programming language, which means that it provides features that support object-oriented programming (OOP). Object-oriented programming has its roots in the 1960s, but it wasn’t until the mid 1980s that it became the main programming paradigm used in the creation of new software. It was developed as a way to handle the rapidly increasing size and complexity of software systems, and to make it easier to modify these large and complex systems over time. Up to now, most of the programs we have been writing use a procedural programming paradigm. In procedural programming the focus is on writing functions or procedures which operate on data. In object-oriented programming the focus is on the creation of objects which contain both data and functionality together. (We have seen turtle objects, string objects, and random number generators, to name a few places where we’ve already worked with objects.) Usually, each object definition corresponds to some object or concept in the real world, and the functions that operate on that object correspond to the ways real-world objects interact. We’ve already seen classes like `str` , `int` , `float` and `Turtle` . We are now ready to create our own user-defined class: the `Point` . Consider the concept of a mathematical point. In two dimensions, a point is two numbers (coordinates) that are treated collectively as a single object. 
Points are often written in parentheses with a comma separating the coordinates. For example, `(0, 0)` represents the origin, and `(x, y)` represents the point `x` units to the right and `y` units up from the origin.

Some of the typical operations that one associates with points might be calculating the distance of a point from the origin, or from another point, or finding a midpoint of two points, or asking if a point falls within a given rectangle or circle. We'll shortly see how we can organize these together with the data.

A natural way to represent a point in Python is with two numeric values. The question, then, is how to group these two values into a compound object. The quick and dirty solution is to use a tuple, and for some applications that might be a good choice. An alternative is to define a new class. This approach involves a bit more effort, but it has advantages that will be apparent soon. We'll want our points to each have an `x` and a `y` attribute, so our first class definition looks like this:

```
class Point:
    """ Point class for representing and manipulating x,y coordinates. """

    def __init__(self):
        """ Create a new point at the origin """
        self.x = 0
        self.y = 0
```

Class definitions can appear anywhere in a program, but they are usually near the beginning (after the `import` statements). Some programmers and languages prefer to put every class in a module of its own — we won't do that here. The syntax rules for a class definition are the same as for other compound statements. There is a header which begins with the keyword `class`, followed by the name of the class, and ending with a colon. Indentation levels tell us where the class ends.

If the first line after the class header is a string, it becomes the docstring of the class, and will be recognized by various tools. (This is also the way docstrings work in functions.)

Every class should have a method with the special name `__init__`. This initializer method is automatically called whenever a new instance of `Point` is created. It gives the programmer the opportunity to set up the attributes required within the new instance by giving them their initial state/values. The `self` parameter (we could choose any other name, but `self` is the convention) is automatically set to reference the newly created object that needs to be initialized.

So let's use our new `Point` class now:

```
p = Point()         # Instantiate an object of type Point
q = Point()         # Make a second point

print(p.x, p.y, q.x, q.y)   # Each point object has its own x and y
```

This program prints: `0 0 0 0` because during the initialization of the objects, we created two attributes called `x` and `y` for each, and gave them both the value 0.

This should look familiar — we've used classes before to create more than one object:

```
from turtle import Turtle

tess = Turtle()     # Instantiate objects of type Turtle
alex = Turtle()
```

The variables `p` and `q` are assigned references to two new `Point` objects. A function like `Turtle` or `Point` that creates a new object instance is called a constructor, and every class automatically provides a constructor function which is named the same as the class.

It may be helpful to think of a class as a factory for making objects. The class itself isn't an instance of a point, but it contains the machinery to make point instances. Every time we call the constructor, we're asking the factory to make us a new object. As the object comes off the production line, its initialization method is executed to get the object properly set up with its factory default settings.
The combined process of "make me a new object" and "get its settings initialized to the factory default settings" is called instantiation.

Like real world objects, object instances have both attributes and methods. We can modify the attributes in an instance using dot notation:

```
>>> p.x = 3
>>> p.y = 4
```

Both modules and instances create their own namespaces, and the syntax for accessing names contained in each, called attributes, is the same. In this case the attribute we are selecting is a data item from an instance.

The following state diagram shows the result of these assignments: the variable `p` refers to a `Point` object, which contains two attributes. Each attribute refers to a number.

We can access the value of an attribute using the same syntax:

```
>>> print(p.y)
4
>>> x = p.x
>>> print(x)
3
```

The expression `p.x` means, "Go to the object `p` refers to and get the value of `x`". In this case, we assign that value to a variable named `x`. There is no conflict between the variable `x` (in the global namespace here) and the attribute `x` (in the namespace belonging to the instance). The purpose of dot notation is to fully qualify which variable we are referring to unambiguously.

We can use dot notation as part of any expression, so the following statements are legal:

```
print("(x={0}, y={1})".format(p.x, p.y))
distance_squared_from_origin = p.x * p.x + p.y * p.y
```

The first line outputs `(x=3, y=4)`. The second line calculates the value 25.

To create a point at position (7, 6) currently needs three lines of code:

```
p = Point()
p.x = 7
p.y = 6
```

We can make our class constructor more general by placing extra parameters into the `__init__` method, as shown in this example:

```
class Point:
    """ Point class for representing and manipulating x,y coordinates. """

    def __init__(self, x=0, y=0):
        """ Create a new point at x, y """
        self.x = x
        self.y = y

# Other statements outside the class continue below here.
```

The `x` and `y` parameters here are both optional. If the caller does not supply arguments, they'll get the default values of 0. Here is our improved class in action:

```
>>> p = Point(4, 2)
>>> q = Point(6, 3)
>>> r = Point()       # r represents the origin (0, 0)
>>> print(p.x, q.y, r.x)
4 3 0
```

Technically speaking … If we are really fussy, we would argue that the `__init__` method's docstring is inaccurate. `__init__` doesn't create the object (i.e. set aside memory for it) — it just initializes the object to its factory-default settings after its creation. But tools like PyScripter understand that instantiation — creation and initialization — happen together, and they choose to display the initializer's docstring as the tooltip to guide the programmer that calls the class constructor. So we're writing the docstring so that it makes the most sense when it pops up to help the programmer who is using our `Point` class.

The key advantage of using a class like `Point` rather than a simple tuple `(6, 7)` now becomes apparent. We can add methods to the `Point` class that are sensible operations for points, but which may not be appropriate for other tuples like `(25, 12)` which might represent, say, a day and a month, e.g. Christmas day. So being able to calculate the distance from the origin is sensible for points, but not for (day, month) data. For (day, month) data, we'd like different operations, perhaps to find what day of the week it will fall on in 2020.

Creating a class like `Point` brings an exceptional amount of "organizational power" to our programs, and to our thinking. We can group together the sensible operations, and the kinds of data they apply to, and each instance of the class can have its own state.
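As a tiny aside (our own illustration, not code from the chapter), the kind of operation that suits (day, month) data is calendar arithmetic, which Python's standard `datetime` module already provides:

```
import datetime

# For (day, month) data such as (25, 12) we'd ask calendar-style questions,
# e.g. which weekday Christmas falls on in 2020, not its distance from the origin.
print(datetime.date(2020, 12, 25).strftime("%A"))   # Prints the weekday name
```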
A method behaves like a function but it is invoked on a specific instance, e.g. `tess.right(90)` . Like a data attribute, methods are accessed using dot notation. Let’s add another method, `distance_from_origin` , to see better how methods work: ``` class Point: """ Create a new Point, at coordinates x, y """ def distance_from_origin(self): """ Compute my distance from the origin """ return ((self.x ** 2) + (self.y ** 2)) ** 0.5 ``` Let’s create a few point instances, look at their attributes, and call our new method on them: (We must run our program first, to make our `Point` class available to the interpreter.) ``` >>> p = Point(3, 4) >>> p.x 3 >>> p.y 4 >>> p.distance_from_origin() 5.0 >>> q = Point(5, 12) >>> q.x 5 >>> q.y 12 >>> q.distance_from_origin() 13.0 >>> r = Point() >>> r.x 0 >>> r.y 0 >>> r.distance_from_origin() 0.0 ``` When defining a method, the first parameter refers to the instance being manipulated. As already noted, it is customary to name this parameter `self` . Notice that the caller of `distance_from_origin` does not explicitly supply an argument to match the `self` parameter — this is done for us, behind our back. We can pass an object as an argument in the usual way. We’ve already seen this in some of the turtle examples, where we passed the turtle to some function like `draw_bar` in the chapter titled Conditionals, so that the function could control and use whatever turtle instance we passed to it. Be aware that our variable only holds a reference to an object, so passing `tess` into a function creates an alias: both the caller and the called function now have a reference, but there is only one turtle! Here is a simple function involving our new `Point` objects: ``` def print_point(pt): print("({0}, {1})".format(pt.x, pt.y)) ``` `print_point` takes a point as an argument and formats the output in whichever way we choose. If we call `print_point(p)` with point `p` as defined previously, the output is `(3, 4)` . Most object-oriented programmers probably would not do what we’ve just done in `print_point` . When we’re working with classes and objects, a preferred alternative is to add a new method to the class. And we don’t like chatterbox methods that call `to_string` : def to_string(self): return "({0}, {1})".format(self.x, self.y) ``` Now we can say: ``` >>> p = Point(3, 4) >>> print(p.to_string()) (3, 4) ``` But don’t we already have a `str` type converter that can turn our object into a string? Yes! And doesn’t ``` >>> str(p) '<__main__.Point object at 0x01F9AA10>' >>> print(p) '<__main__.Point object at 0x01F9AA10>' ``` Python has a clever trick up its sleeve to fix this. If we call our new method `__str__` instead of `to_string` , the Python interpreter will use our code whenever it needs to convert a `Point` to a string. Let’s re-do this again, now: def __str__(self): # All we have done is renamed the method return "({0}, {1})".format(self.x, self.y) ``` and now things are looking great! ``` >>> str(p) # Python now uses the __str__ method that we wrote. (3, 4) >>> print(p) (3, 4) ``` Functions and methods can return instances. For example, given two `Point` objects, find their midpoint. First we’ll write this as a regular function: ``` def midpoint(p1, p2): """ Return the midpoint of points p1 and p2 """ mx = (p1.x + p2.x)/2 my = (p1.y + p2.y)/2 return Point(mx, my) ``` The function creates and returns a new `Point` object: Now let us do this as a method instead. 
Suppose we have a point object, and wish to find the midpoint halfway between it and some other target point:

```
def halfway(self, target):
    """ Return the halfway point between myself and the target """
    mx = (self.x + target.x)/2
    my = (self.y + target.y)/2
    return Point(mx, my)
```

This method is identical to the function, aside from some renaming. Its usage might be like this: create two points, ask one of them for the halfway point to the other, and print the result. While that way of doing it assigns each point to a variable, this need not be done. Just as function calls are composable, method calls and object instantiation are also composable, leading to this alternative that uses no variables:

```
>>> print(Point(3, 4).halfway(Point(5, 12)))
(4.0, 8.0)
```

The original syntax for a function call, `print_time(current_time)`, suggests that the function is the active agent. It says something like, "Hey, print_time! Here's an object for you to print."

In object-oriented programming, the objects are considered the active agents. An invocation like `current_time.print_time()` says "Hey current_time! Please print yourself!" In our early introduction to turtles, we used an object-oriented style, so that we said `tess.forward(100)`, which asks the turtle to move itself forward by the given number of steps.

This change in perspective might be more polite, but it may not initially be obvious that it is useful. But sometimes shifting responsibility from the functions onto the objects makes it possible to write more versatile functions, and makes it easier to maintain and reuse code.

The most important advantage of the object-oriented style is that it fits our mental chunking and real-life experience more accurately. In real life our `cook` method is part of our microwave oven — we don't have a `cook` function sitting in the corner of the kitchen, into which we pass the microwave! Similarly, we use the cellphone's own methods to send an SMS, or to change its state to silent. The functionality of real-world objects tends to be tightly bound up inside the objects themselves. OOP allows us to accurately mirror this when we organize our programs.

Objects are most useful when we also need to keep some state that is updated from time to time. Consider a turtle object. Its state consists of things like its position, its heading, its color, and its shape. A method like `left(90)` updates the turtle's heading, `forward` changes its position, and so on.

For a bank account object, a main component of the state would be the current balance, and perhaps a log of all transactions. The methods would allow us to query the current balance, deposit new funds, or make a payment. Making a payment would include an amount, and a description, so that this could be added to the transaction log. We'd also want a method to show the transaction log.
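To make that concrete, here is a minimal sketch of such a class (our own illustration, not code from the chapter), keeping a balance and a transaction log as its state:

```
class BankAccount:
    """ A minimal account: keeps a balance and a log of transactions. """

    def __init__(self, opening_balance=0):
        self.balance = opening_balance
        self.log = []                          # Each entry is (amount, description)

    def deposit(self, amount, description="deposit"):
        self.balance += amount
        self.log.append((amount, description))

    def make_payment(self, amount, description):
        self.balance -= amount
        self.log.append((-amount, description))

    def show_log(self):
        for (amount, description) in self.log:
            print("{0:>10.2f}  {1}".format(amount, description))

acct = BankAccount(100)
acct.deposit(50, "birthday money")
acct.make_payment(30, "movie tickets")
print(acct.balance)                            # 120
acct.show_log()
```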
Instance and object are used interchangeably. instantiate To create an instance of a class, and to run its initializer. method A function that is defined inside a class definition and is invoked on instances of that class. object A compound data type that is often used to model a thing or concept in the real world. It bundles together the data and the operations that are relevant for that kind of data. Instance and object are used interchangeably. object-oriented programming A powerful style of programming in which data and the operations that manipulate it are organized into objects. object-oriented language A language that provides features, such as user-defined classes and inheritance, that facilitate object-oriented programming. Rewrite the `distance` function from the chapter titled Fruitful functions so that it takes two `Point` s as parameters instead of four numbers. Add a method `reflect_x` to `Point` which returns a new `Point` , one which is the reflection of the point about the x-axis. For example, ``` Point(3, 5).reflect_x() ``` is (3, -5) Add a method `slope_from_origin` which returns the slope of the line joining the origin to the point. For example: ``` >>> Point(4, 10).slope_from_origin() 2.5 ``` What cases will cause this method to fail? `Point` class so that if a point instance is given another point, it will compute the equation of the straight ine joining the two points. It must return the two coefficients as a tuple of two values. For example, : ``` >>> print(Point(4, 11).get_line_to(Point(6, 15))) >>> (2, 3) ``` This tells us that the equation of the line joining the two points is “y = 2x + 3”. When will this method fail? Given four points that fall on the circumference of a circle, find the midpoint of the circle. When will this function fail? Hint: You must know how to solve the geometry problem before you think of going anywhere near programming. You cannot program a solution to a problem if you don’t understand what you want the computer to do! Create a new class, SMS_store. The class will instantiate SMS_store objects, similar to an inbox or outbox on a cellphone: ``` my_inbox = SMS_store() ``` This store can hold multiple SMS messages (i.e. its internal state will just be a list of messages). Each message will be represented as a tuple: ``` (has_been_viewed, from_number, time_arrived, text_of_SMS) ``` The inbox object should provide these methods: ``` my_inbox.add_new_arrival(from_number, time_arrived, text_of_SMS) # Makes new SMS tuple, inserts it after other messages # in the store. When creating this message, its # has_been_viewed status is set False. my_inbox.message_count() # Returns the number of sms messages in my_inbox my_inbox.get_unread_indexes() # Returns list of indexes of all not-yet-viewed SMS messages my_inbox.get_message(i) # Return (from_number, time_arrived, text_of_sms) for message[i] # Also change its state to "has been viewed". # If there is no message at position i, return None my_inbox.delete(i) # Delete the message at index i my_inbox.clear() # Delete all messages from inbox ``` Write the class, create a message store object, write tests for these methods, and implement the methods. # Chapter 16: Classes and Objects — Digging a little deeper Let’s say that we want a class to represent a rectangle which is located somewhere in the XY plane. The question is, what information do we have to provide in order to specify such a rectangle? 
To keep things simple, assume that the rectangle is oriented either vertically or horizontally, never at an angle. There are a few possibilities: we could specify the center of the rectangle (two coordinates) and its size (width and height); or we could specify one of the corners and the size; or we could specify two opposing corners. A conventional choice is to specify the upper-left corner of the rectangle, and the size. Again, we’ll define a new class, and provide it with an initializer and a string converter method: ``` class Rectangle: """ A class to manufacture rectangle objects """ def __init__(self, posn, w, h): """ Initialize rectangle at posn, with width w, height h """ self.corner = posn self.width = w self.height = h def __str__(self): return "({0}, {1}, {2})" .format(self.corner, self.width, self.height) box = Rectangle(Point(0, 0), 100, 200) bomb = Rectangle(Point(100, 80), 5, 10) # In my video game print("box: ", box) print("bomb: ", bomb) ``` To specify the upper-left corner, we have embedded a `Point` object (as we used it in the previous chapter) within our new `Rectangle` object! We create two new `Rectangle` objects, and then print them producing: ``` box: ((0, 0), 100, 200) bomb: ((100, 80), 5, 10) ``` The dot operator composes. The expression `box.corner.x` means, “Go to the object that `box` refers to and select its attribute named `corner` , then go to that object and select its attribute named `x` ”. The figure shows the state of this object: We can change the state of an object by making an assignment to one of its attributes. For example, to grow the size of a rectangle without changing its position, we could modify the values of `width` and `height` : ``` box.width += 50 box.height += 100 ``` Of course, we’d probably like to provide a method to encapsulate this inside the class. We will also provide another method to move the position of the rectangle elsewhere: ``` class Rectangle: # ... def grow(self, delta_width, delta_height): """ Grow (or shrink) this object by the deltas """ self.width += delta_width self.height += delta_height def move(self, dx, dy): """ Move this object by the deltas """ self.corner.x += dx self.corner.y += dy ``` Let us try this: ``` >>> r = Rectangle(Point(10,5), 100, 50) >>> print(r) ((10, 5), 100, 50) >>> r.grow(25, -10) >>> print(r) ((10, 5), 125, 40) >>> r.move(-10, 10) print(r) ((0, 15), 125, 40) ``` The meaning of the word “same” seems perfectly clear until we give it some thought, and then we realize there is more to it than we initially expected. For example, if we say, “Alice and Bob have the same car”, we mean that her car and his are the same make and model, but that they are two different cars. If we say, “Alice and Bob have the same mother”, we mean that her mother and his are the same person. When we talk about objects, there is a similar ambiguity. For example, if two `Point` s are the same, does that mean they contain the same data (coordinates) or that they are actually the same object? We’ve already seen the `is` operator in the chapter on lists, where we talked about aliases: it allows us to find out if two references refer to the same object: ``` >>> p1 = Point(3, 4) >>> p2 = Point(3, 4) >>> p1 is p2 False ``` Even though `p1` and `p2` contain the same coordinates, they are not the same object. 
If we assign `p1` to `p3`, then the two variables are aliases of the same object:

```
>>> p3 = p1
>>> p1 is p3
True
```

This type of equality is called shallow equality because it compares only the references, not the contents of the objects.

To compare the contents of the objects — deep equality — we can write a function called `same_coordinates`:

```
def same_coordinates(p1, p2):
    return (p1.x == p2.x) and (p1.y == p2.y)
```

Now if we create two different objects that contain the same data, we can use `same_coordinates` to find out if they represent points with the same coordinates.

```
>>> p1 = Point(3, 4)
>>> p2 = Point(3, 4)
>>> same_coordinates(p1, p2)
True
```

Of course, if the two variables refer to the same object, they have both shallow and deep equality.

Beware of ==

"When I use a word," Humpty Dumpty said, in a rather scornful tone, "it means just what I choose it to mean — neither more nor less." (Alice in Wonderland)

Python has a powerful feature that allows a designer of a class to decide what an operation like `==` or `<` should mean. (We've just shown how we can control how our own objects are converted to strings, so we've already made a start!) We'll cover more detail later. But sometimes the implementors will attach shallow equality semantics, and sometimes deep equality, as shown in this little experiment:

```
p = Point(4, 2)
s = Point(4, 2)
print("== on Points returns", p == s)  # By default, == on Point objects does a shallow equality test

a = [2,3]
b = [2,3]
print("== on lists returns",  a == b)  # But by default, == does a deep equality test on lists
```

This outputs:

```
== on Points returns False
== on lists returns True
```

So we conclude that even though the two lists (or tuples, etc.) are distinct objects with different memory addresses, for lists the `==` operator tests for deep equality, while in the case of points it makes a shallow test.

Aliasing can make a program difficult to read because changes made in one place might have unexpected effects in another place. It is hard to keep track of all the variables that might refer to a given object. Copying an object is often an alternative to aliasing. The `copy` module contains a function called `copy` that can duplicate any object:

```
>>> import copy
>>> p1 = Point(3, 4)
>>> p2 = copy.copy(p1)
>>> p1 is p2
False
>>> same_coordinates(p1, p2)
True
```

Once we import the `copy` module, we can use the `copy` function to make a new `Point`. `p1` and `p2` are not the same point, but they contain the same data.

To copy a simple object like a `Point`, which doesn't contain any embedded objects, `copy` is sufficient. This is called shallow copying.

For something like a `Rectangle`, which contains a reference to a `Point`, `copy` doesn't do quite the right thing. It copies the reference to the `Point` object, so both the old `Rectangle` and the new one refer to a single `Point`. If we create a box, `b1`, in the usual way and then make a copy, `b2`, using `copy`, the resulting state diagram looks like this. This is almost certainly not what we want. In this case, invoking `grow` on one of the `Rectangle` objects would not affect the other, but invoking `move` on either would affect both! This behavior is confusing and error-prone. The shallow copy has created an alias to the `Point` that represents the corner.

Fortunately, the `copy` module contains a function named `deepcopy` that copies not only the object but also any embedded objects. It won't be surprising to learn that this operation is called a deep copy.
``` >>> b2 = copy.deepcopy(b1) ``` Now `b1` and `b2` are completely separate objects. deep copy To copy the contents of an object as well as any embedded objects, and any objects embedded in them, and so on; implemented by the `deepcopy` function in the `copy` module. deep equality Equality of values, or two references that point to objects that have the same value. shallow copy To copy the contents of an object, including any references to embedded objects; implemented by the `copy` function in the `copy` module. shallow equality Equality of references, or two references that point to the same object. `area` to the `Rectangle` class that returns the area of any instance: ``` r = Rectangle(Point(0, 0), 10, 5) test(r.area() == 50) ``` `perimeter` method in the `Rectangle` class so that we can find the perimeter of any rectangle instance: ``` r = Rectangle(Point(0, 0), 10, 5) test(r.perimeter() == 30) ``` `flip` method in the `Rectangle` class that swaps the width and the height of any rectangle instance: ``` r = Rectangle(Point(100, 50), 10, 5) test(r.width == 10 and r.height == 5) r.flip() test(r.width == 5 and r.height == 10) ``` `Rectangle` class to test if a `Point` falls within the rectangle. For this exercise, assume that a rectangle at (0,0) with width 10 and height 5 has open upper bounds on the width and height, i.e. it stretches in the x direction from [0 to 10), where 0 is included but 10 is excluded, and from [0 to 5) in the y direction. So it does not contain the point (10,2). These tests should pass: ``` r = Rectangle(Point(0, 0), 10, 5) test(r.contains(Point(0, 0))) test(r.contains(Point(3, 3))) test(not r.contains(Point(3, 7))) test(not r.contains(Point(3, 5))) test(r.contains(Point(3, 4.99999))) test(not r.contains(Point(-3, -3))) ``` Write a function to determine whether two rectangles collide. Hint: this might be quite a tough exercise! Think carefully about all the cases before you code. # Chapter 17: PyGame PyGame is a package that is not part of the standard Python distribution, so if you do not already have it installed (i.e. `import pygame` fails), download and install a suitable version from http://pygame.org/download.shtml. These notes are based on PyGame 1.9.1, the most recent version at the time of writing. PyGame comes with a substantial set of tutorials, examples, and help, so there is ample opportunity to stretch yourself on the code. You may need to look around a bit to find these resources, though: if you’ve installed PyGame on a Windows machine, for example, they’ll end up in a folder like C:\Python31\Lib\site-packages\pygame\ where you will find directories for docs and examples. The structure of the games we’ll consider always follows this fixed pattern: In every game, in the setup section we’ll create a window, load and prepare some content, and then enter the game loop. The game loop continuously does four main things: def main(): """ Set up the game and run the main game loop """ pygame.init() # Prepare the pygame module for use surface_sz = 480 # Desired physical surface size, in pixels. # Create surface of (width, height), and its window. main_surface = pygame.display.set_mode((surface_sz, surface_sz)) # Set up some data to describe a small rectangle and its color small_rect = (300, 200, 150, 90) some_color = (255, 0, 0) # A color is a mix of (Red, Green, Blue) while True: ev = pygame.event.poll() # Look for any event if ev.type == pygame.QUIT: # Window close button clicked? break # ... 
leave game loop

        # Update your game objects and data structures here...

        # We draw everything from scratch on each frame.
        # So first fill everything with the background color
        main_surface.fill((0, 200, 255))

        # Overpaint a smaller rectangle on the main surface
        main_surface.fill(some_color, small_rect)

        # Now the surface is ready, tell pygame to display it!
        pygame.display.flip()

    pygame.quit()     # Once we leave the loop, close the window.

This program pops up a window which stays there until we close it.

PyGame does all its drawing onto rectangular surfaces. After initializing PyGame at line 5, we create a window holding our main surface. The main loop of the game extends from line 15 to 30, with the following key bits of logic:

- We poll for an event. If the user has clicked the window's close button, we break out of the game loop, and that ends `main`. Your program could go on to do other things, or reinitialize pygame and create another window, but it will usually just end too.
- Next is the place to update our game objects and data structures. There is nothing to update yet; we simply set up `some_color` and `small_rect` here.
- Then we redraw everything from scratch: we fill the whole surface with the background color, and overpaint a smaller rectangle in `some_color`. The `fill` method of a surface takes two arguments — the color to use for filling, and the rectangle to be filled. But the second argument is optional, and if it is left out the entire surface is filled. The placement and size of the rectangle are given by the tuple `small_rect`, a 4-element tuple `(x, y, width, height)`.
- Finally, we `flip` the buffers, on line 30, so that the newly drawn surface is put on display.

To draw an image on the main surface, we load the image, say a beach ball, into its own new surface. The main surface has a `blit` method that copies pixels from the beach ball surface into its own surface. When we call `blit`, we can specify where the beach ball should be placed on the main surface. The term blit is widely used in computer graphics, and means to make a fast copy of pixels from one area of memory to another.

So in the setup section, before we enter the game loop, we'd load the image, like this:

```
ball = pygame.image.load("ball.png")
```

and after line 28 in the program above, we'd add this code to display our image at position (50, 70):

```
main_surface.blit(ball, (50, 70))
```

To display text, we need to do three things. Before we enter the game loop, we instantiate a `font` object:

```
# Instantiate 16 point Courier font to draw text.
my_font = pygame.font.SysFont("Courier", 16)
```

and after line 28, again, we use the font's `render` method to create a new surface containing the pixels of the drawn text, and then, as in the case for images, we blit our new surface onto the main surface. Notice that `render` takes two extra parameters — the second tells it whether to carefully smooth the edges of the text while drawing (this process is called anti-aliasing), and the third is the color that we want the text to be. Here we've used `(0,0,0)` which is black:

```
the_text = my_font.render("Hello, world!", True, (0,0,0))
main_surface.blit(the_text, (10, 10))
```

We'll demonstrate these two new features by counting the frames — the iterations of the game loop — and keeping some timing information. On each frame, we'll display the frame count, and the frame rate. We will only update the frame rate after every 500 frames, when we'll look at the timing interval and can do the calculations.

First download an image of a beach ball. You can find one at https://learnpythontherightway.com/_downloads/ball.png. Upload it to your repl by using the "Upload file" menu. Now you can run the following code.

```
import pygame
import time

def main():

    pygame.init()    # Prepare the PyGame module for use
    main_surface = pygame.display.set_mode((480, 240))

    # Load an image to draw. Substitute your own.
    # PyGame handles gif, jpg, png, etc.
image types. ball = pygame.image.load("ball.png") # Create a font for rendering text my_font = pygame.font.SysFont("Courier", 16) frame_count = 0 frame_rate = 0 t0 = time.process_time() # Look for an event from keyboard, mouse, joystick, etc. ev = pygame.event.poll() if ev.type == pygame.QUIT: # Window close button clicked? break # Leave game loop # Do other bits of logic for the game here frame_count += 1 if frame_count % 500 == 0: t1 = time.process_time() frame_rate = 500 / (t1-t0) t0 = t1 # Completely redraw the surface, starting with background main_surface.fill((0, 200, 255)) # Put a red rectangle somewhere on the surface main_surface.fill((255,0,0), (300, 100, 150, 90)) # Copy our image to the surface, at this (x,y) posn main_surface.blit(ball, (50, 70)) # Make a new surface with an image of the text the_text = my_font.render("Frame = {0}, rate = {1:.2f} fps" .format(frame_count, frame_rate), True, (0,0,0)) # Copy the text surface to the main surface main_surface.blit(the_text, (10, 10)) # Now that everything is drawn, put it on display! pygame.display.flip() The frame rate is close to ridiculous — a lot faster than one’s eye can process frames. (Commercial video games usually plan their action for 60 frames per second (fps).) Of course, our rate will drop once we start doing something a little more strenuous inside our game loop. We previously solved our N queens puzzle. For the 8x8 board, one of the solutions was the list `[6,4,2,0,5,7,1,3]` . Let’s use that solution as testdata, and now use PyGame to draw that chessboard with its queens. We’ll create a new module for the drawing code, called `draw_queens.py` . When we have our test case(s) working, we can go back to our solver, import this new module, and add a call to our new function to draw a board each time a solution is discovered. We begin with a background of black and red squares for the board. Perhaps we could create an image that we could load and draw, but that approach would need different background images for different size boards. Just drawing our own red and black rectangles of the appropriate size sounds like much more fun! ``` def draw_board(the_board): """ Draw a chess board with queens, from the_board. """ Here we precompute `sq_sz` , the integer size that each square will be, so that we can fit the squares nicely into the available window. So if we’d like the board to be 480x480, and we’re drawing an 8x8 chessboard, then each square will need to have a size of 60 units. But we notice that a 7x7 board cannot fit nicely into 480 — we’re going to get some ugly border that our squares don’t fill exactly. So we recompute the surface size to exactly fit our squares before we create the window. Now let’s draw the squares, in the game loop. We’ll need a nested loop: the outer loop will run over the rows of the chessboard, the inner loop over the columns: There are two important ideas in this code: firstly, we compute the rectangle to be filled from the `row` and `col` loop variables, multiplying them by the size of the square to get their position. And, of course, each square is a fixed width and height. So `the_square` represents the rectangle to be filled on the current iteration of the loop. The second idea is that we have to alternate colors on every square. 
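The nested-loop code itself did not survive in this copy, so here is a sketch of what it might look like, consistent with the description above and with the later `draw_board` listing. It assumes the `surface`, the square size `sq_sz`, the board size `n`, and the two-element `colors` list from the setup code:

```
# Draw a fresh checkerboard background from the two colors.
for row in range(n):                    # Outer loop runs over the rows
    c_indx = row % 2                    # Alternate the starting color on each row
    for col in range(n):                # Inner loop runs over the columns
        the_square = (col * sq_sz, row * sq_sz, sq_sz, sq_sz)
        surface.fill(colors[c_indx], the_square)
        c_indx = (c_indx + 1) % 2       # Switch colors for the next square
```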
In the earlier setup code we created a list containing two colors; here we manipulate `c_indx` (which will always either have the value 0 or 1) to start each row on a color that is different from the previous row's starting color, and to switch colors each time a square is filled. This (together with the other fragments not shown to flip the surface onto the display) leads to pleasing backgrounds for different size boards.

Now, on to drawing the queens! Recall that our solution `[6,4,2,0,5,7,1,3]` means that in column 0 of the board we want a queen at row 6, at column 1 we want a queen at row 4, and so on. So we need a loop running over each queen:

```
for (col, row) in enumerate(the_board):
    # draw a queen at col, row...
```

In this chapter we already have a beach ball image, so we'll use that for our queens. In the setup code before our game loop, we load the ball image (as we did before), and in the body of the loop, we add the line:

```
surface.blit(ball, (col * sq_sz, row * sq_sz))
```

We're getting there, but those queens need to be centred in their squares! Our problem arises from the fact that both the ball and the rectangle have their upper left corner as their reference points. If we're going to centre this ball in the square, we need to give it an extra offset in both the x and y direction. (Since the ball is round and the square is square, the offset in the two directions will be the same, so we'll just compute a single offset value, and use it in both directions.) The offset we need is half the (size of the square less the size of the ball). So we'll precompute this in the game's setup section, after we've loaded the ball and determined the square size:

```
ball_offset = (sq_sz - ball.get_width()) // 2
```

Now we touch up the drawing code for the ball and we're done:

```
surface.blit(ball, (col * sq_sz + ball_offset, row * sq_sz + ball_offset))
```

We might just want to think about what would happen if the ball was bigger than the square. In that case, `ball_offset` would become negative. So it would still be centered in the square — it would just spill over the boundaries, or perhaps obscure the square entirely!

Here is the complete program:

```
def draw_board(the_board):
    """ Draw a chess board with queens, as determined by the_board. """

    # ... (the window set-up and the square-drawing loop developed above go here) ...

    ball = pygame.image.load("ball.png")

    # Use an extra offset to centre the ball in its square.
    # If the square is too small, offset becomes negative,
    # but it will still be centered :-)
    ball_offset = (sq_sz - ball.get_width()) // 2

    # Now that squares are drawn, draw the queens.
    for (col, row) in enumerate(the_board):
        surface.blit(ball, (col*sq_sz+ball_offset, row*sq_sz+ball_offset))

    pygame.display.flip()


if __name__ == "__main__":
    draw_board([0, 5, 3, 1, 6, 4, 2])    # 7 x 7 to test window size
    draw_board([6, 4, 2, 0, 5, 7, 1, 3])
    draw_board([9, 6, 0, 3, 10, 7, 2, 4, 12, 8, 11, 5, 1])  # 13 x 13
    draw_board([11, 4, 8, 12, 2, 7, 3, 15, 0, 14, 10, 6, 13, 1, 5, 9])
```

There is one more thing worth reviewing here. The conditional statement on line 50 tests whether the name of the currently executing program is `__main__`. This allows us to distinguish whether this module is being run as a main program, or whether it has been imported elsewhere, and used as a module. If we run this module in Python, the test cases in lines 51-54 will be executed. However, if we import this module into another program (i.e. our N queens solver from earlier) the condition at line 50 will be false, and the statements on lines 51-54 won't run.
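As a stand-alone illustration of that idiom (our own toy example, not part of the queens code), imagine a file saved as `mymodule.py`:

```
# mymodule.py
def greet():
    print("Hello from mymodule")

print("__name__ is", __name__)

if __name__ == "__main__":
    # Runs when we execute "python mymodule.py" directly,
    # but not when another program does "import mymodule".
    greet()
```

Run directly, it reports that `__name__` is `__main__` and then greets; imported from another program, `__name__` is `"mymodule"`, so the guarded call is skipped.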
In Chapter 14, (14.9. Eight Queens puzzle, part 2) our main program looked like this: ``` def main(): Now we just need two changes. At the top of that program, we import the module that we’ve been working on here (assume we called it `draw_queens` ). (You’ll have to ensure that the two modules are saved in the same folder.) Then after line 10 here we add a call to draw the solution that we’ve just discovered: ``` draw_queens.draw_board(bd) ``` And that gives a very satisfying combination of program that can search for solutions to the N queens problem, and when it finds each, it pops up the board showing the solution. A sprite is an object that can move about in a game, and has internal behaviour and state of its own. For example, a spaceship would be a sprite, the player would be a sprite, and bullets and bombs would all be sprites. Object oriented programming (OOP) is ideally suited to a situation like this: each object can have its own attributes and internal state, and a couple of methods. Let’s have some fun with our N queens board. Instead of placing the queen in her final position, we’d like to drop her in from the top of the board, and let her fall into position, perhaps bouncing along the way. The first encapsulation we need is to turn each of our queens into an object. We’ll keep a list of all the active sprites (i.e. a list of queen objects), and arrange two new things in our game loop: `update` method on every sprite. This will give each sprite a chance to modify its internal state in some way — perhaps change its image, or change its position, or rotate itself, or make itself grow a bit bigger or a bit smaller. `draw` method on each sprite in turn, and delegate (hand off) the task of drawing to the object itself. This is in line with the OOP idea that we don’t say “Hey, draw, show this queen!”, but we prefer to say “Hey, queen, draw yourself!”. We start with a simple object, no movement or animation yet, just scaffolding, to see how to fit all the pieces together: ``` class QueenSprite: def __init__(self, img, target_posn): """ Create and initialize a queen for this target position on the board """ self.image = img self.target_posn = target_posn self.posn = target_posn def update(self): return # Do nothing for the moment. def draw(self, target_surface): target_surface.blit(self.image, self.posn) ``` We’ve given the sprite three attributes: an image to be drawn, a target position, and a current position. If we’re going to move the spite about, the current position may need to be different from the target, which is where we want the queen finally to end up. In this code at this time we’ve done nothing in the `update` method, and our `draw` method (which can probably remain this simple in future) simply draws itself at its current position on the surface that is provided by the caller. With its class definition in place, we now instantiate our N queens, put them into a list of sprites, and arrange for the game loop to call the `update` and `draw` methods on each frame. The new bits of code, and the revised game loop look like this: ``` all_sprites = [] # Keep a list of all sprites in the game # Create a sprite object for each queen, and populate our list. for (col, row) in enumerate(the_board): a_queen = QueenSprite(ball, (col*sq_sz+ball_offset, row*sq_sz+ball_offset)) all_sprites.append(a_queen) # Ask every sprite to update itself. for sprite in all_sprites: sprite.update() # Draw a fresh background (a blank chess board) # ... same as before ... 
# Ask every sprite to draw itself. for sprite in all_sprites: sprite.draw(surface) pygame.display.flip() ``` This works just like it did before, but our extra work in making objects for the queens has prepared the way for some more ambitious extensions. Let us begin with a falling queen object. At any instant, it will have a velocity i.e. a speed, in a certain direction. (We are only working with movement in the y direction, but use your imagination!) So in the object’s `update` method, we want to change its current position by its velocity. If our N queens board is floating in space, velocity would stay constant, but hey, here on Earth we have gravity! Gravity changes the velocity on each time interval, so we’ll want a ball that speeds up as it falls further. Gravity will be constant for all queens, so we won’t keep it in the instances — we’ll just make it a variable in our module. We’ll make one other change too: we will start every queen at the top of the board, so that it can fall towards its target position. With these changes, we now get the following: ``` gravity = 0.0001 class QueenSprite: def __init__(self, img, target_posn): self.image = img self.target_posn = target_posn (x, y) = target_posn self.posn = (x, 0) # Start ball at top of its column self.y_velocity = 0 # with zero initial velocity def update(self): self.y_velocity += gravity # Gravity changes velocity (x, y) = self.posn new_y_pos = y + self.y_velocity # Velocity moves the ball self.posn = (x, new_y_pos) # to this new position. def draw(self, target_surface): # Same as before. target_surface.blit(self.image, self.posn) ``` Making these changes gives us a new chessboard in which each queen starts at the top of its column, and speeds up, until it drops off the bottom of the board and disappears forever. A good start — we have movement! The next step is to get the ball to bounce when it reaches its own target position. It is pretty easy to bounce something — you just change the sign of its velocity, and it will move at the same speed in the opposite direction. Of course, if it is travelling up towards the top of the board it will be slowed down by gravity. (Gravity always sucks down!) And you’ll find it bounces all the way up to where it began from, reaches zero velocity, and starts falling all over again. So we’ll have bouncing balls that never settle. A realistic way to settle the object is to lose some energy (probably to friction) each time it bounces — so instead of simply reversing the sign of the velocity, we multiply it by some fractional factor — say -0.65. This means the ball only retains 65% of its energy on each bounce, so it will, as in real life, stop bouncing after a short while, and settle on its “ground”. The only changes are in the `update` method, which now looks like this: ``` def update(self): self.y_velocity += gravity (x, y) = self.posn new_y_pos = y + self.y_velocity (target_x, target_y) = self.target_posn # Unpack the position dist_to_go = target_y - new_y_pos # How far to our floor? if dist_to_go < 0: # Are we under floor? self.y_velocity = -0.65 * self.y_velocity # Bounce new_y_pos = target_y + dist_to_go # Move back above floor self.posn = (x, new_y_pos) # Set our new position. ``` Heh, heh, heh! We’re not going to show animated screenshots, so copy the code into your Python environment and see for yourself. The only kind of event we’re handled so far has been the QUIT event. But we can also detect keydown and keyup events, mouse motion, and mousebutton down or up events. 
Consult the PyGame documentation and follow the link to Event.

When your program polls for and receives an event object from PyGame, its event type will determine what secondary information is available. Each event object carries a dictionary (a data type we only get to in a later chapter of these notes). The dictionary holds certain keys and values that make sense for the type of event. For example, if the type of event is MOUSEMOTION, we'll be able to find the mouse position and information about the state of the mouse buttons in the dictionary attached to the event. Similarly, if the event is KEYDOWN, we can learn from the dictionary which key went down, and whether any modifier keys (shift, control, alt, etc.) are also down.

You also get events when the game window becomes active (i.e. gets focus) or loses focus. The event object with type NOEVENT is returned if there are no events waiting. Events can be printed, allowing you to experiment and play around. So dropping these lines of code into the game loop directly after polling for any event is quite informative:

```
if ev.type != pygame.NOEVENT:    # Only print if it is interesting!
    print(ev)
```

With this in place, hit the space bar and the escape key, and watch the events you get. Click your three mouse buttons. Move your mouse over the window. (This causes a vast cascade of events, so you may also need to filter those out of the printing.) You'll get output that looks something like this:

```
<Event(17-VideoExpose {})>
<Event(1-ActiveEvent {'state': 1, 'gain': 0})>
<Event(2-KeyDown {'scancode': 57, 'key': 32, 'unicode': ' ', 'mod': 0})>
<Event(3-KeyUp {'scancode': 57, 'key': 32, 'mod': 0})>
<Event(2-KeyDown {'scancode': 1, 'key': 27, 'unicode': '\x1b', 'mod': 0})>
<Event(3-KeyUp {'scancode': 1, 'key': 27, 'mod': 0})>
...
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (323, 194), 'rel': (-3, -1)})>
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (322, 193), 'rel': (-1, -1)})>
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (321, 192), 'rel': (-1, -1)})>
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (319, 192), 'rel': (-2, 0)})>
<Event(5-MouseButtonDown {'button': 1, 'pos': (319, 192)})>
<Event(6-MouseButtonUp {'button': 1, 'pos': (319, 192)})>
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (319, 191), 'rel': (0, -1)})>
<Event(5-MouseButtonDown {'button': 2, 'pos': (319, 191)})>
<Event(5-MouseButtonDown {'button': 5, 'pos': (319, 191)})>
<Event(6-MouseButtonUp {'button': 5, 'pos': (319, 191)})>
<Event(6-MouseButtonUp {'button': 2, 'pos': (319, 191)})>
<Event(5-MouseButtonDown {'button': 3, 'pos': (319, 191)})>
<Event(6-MouseButtonUp {'button': 3, 'pos': (319, 191)})>
...
<Event(1-ActiveEvent {'state': 1, 'gain': 0})>
<Event(12-Quit {})>
```

So let us now make these changes to the code near the top of our game loop:

```
# Look for an event from keyboard, mouse, etc.
ev = pygame.event.poll()
if ev.type == pygame.QUIT:
    break

if ev.type == pygame.KEYDOWN:
    key = ev.dict["key"]
    if key == 27:                    # On Escape key ...
        break                        # leave the game loop.
    if key == ord("r"):
        colors[0] = (255, 0, 0)      # Change to red + black.
    elif key == ord("g"):
        colors[0] = (0, 255, 0)      # Change to green + black.
    elif key == ord("b"):
        colors[0] = (0, 0, 255)      # Change to blue + black.

if ev.type == pygame.MOUSEBUTTONDOWN:      # Mouse gone down?
    posn_of_click = ev.dict["pos"]          # Get the coordinates.
    print(posn_of_click)                    # Just print them.
```

Lines 7-16 show typical processing for a KEYDOWN event — if a key has gone down, we test which key it is, and take some action.
With this in place, we have another way to quit our queens program —by hitting the escape key. Also, we can use keys to change the color of the board that is drawn. Finally, at line 20, we respond (pretty lamely) to the mouse button going down. As a final exercise in this section, we’ll write a better response handler to mouse clicks. What we will do is figure out if the user has clicked the mouse on one of our sprites. If there is a sprite under the mouse when the click occurs, we’ll send the click to the sprite and let it respond in some sensible way. We’ll begin with some code that finds out which sprite is under the clicked position, perhaps none! We add a method to the class, `contains_point` , which returns True if the point is within the rectangle of the sprite: Now in the game loop, once we’ve seen the mouse event, we determine which queen, if any, should be told to respond to the event: ``` if ev.type == pygame.MOUSEBUTTONDOWN: posn_of_click = ev.dict["pos"] for sprite in all_sprites: if sprite.contains_point(posn_of_click): sprite.handle_click() break ``` And the final thing is to write a new method called `handle_click` in the `QueenSprite` class. When a sprite is clicked, we’ll just add some velocity in the up direction, i.e. kick it back into the air. ``` def handle_click(self): self.y_velocity += -0.3 # Kick it up ``` With these changes we have a playable game! See if you can keep all the balls on the move, not allowing any one to settle! Many games have sprites that are animated: they crouch, jump and shoot. How do they do that? Consider this sequence of 10 images: if we display them in quick succession, Duke will wave at us. (Duke is a friendly visitor from the kingdom of Javaland.) A compound image containing smaller patches which are intended for animation is called a sprite sheet. Download this sprite sheet by right-clicking in your browser and saving it in your working directory with the name `duke_spritesheet.png` . The sprite sheet has been quite carefully prepared: each of the 10 patches are spaced exactly 50 pixels apart. So, assuming we want to draw patch number 4 (numbering from 0), we want to draw only the rectangle that starts at x position 200, and is 50 pixels wide, within the sprite sheet. Here we’ve shown the patches and highlighted the patch we want to draw. The `blit` method we’ve been using — for copying pixels from one surface to another —can copy a sub-rectangle of the source surface. So the grand idea here is that each time we draw Duke, we won’t blit the whole sprite sheet. Instead we’ll provide an extra rectangle argument that determines which portion of the sprite sheet will be blitted. We’re going to add new code in this section to our existing N queens drawing game. What we want is to put some instances of Duke on the chessboard somewhere. If the user clicks on one of them, we’ll get him to respond by waving back, for one cycle of his animation. But before we do that, we need another change. Up until now, our game loop has been running at really fast frame rates that are unpredictable. So we’ve chosen some magic numbers for gravity and for bouncing and kicking the ball on the basis of trial-and-error. If we’re going to start animating more sprites, we need to tame our game loop to operate at a fixed, known frame rate. This will allow us to plan our animation better. PyGame gives us the tools to do this in just two lines of code. 
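The body of `contains_point` is not shown above, so before we wire up the sprite sheet, here is a minimal sketch of what that method could look like. It assumes, as elsewhere in this chapter, that `self.image` is a PyGame surface and that `self.posn` holds the sprite's top-left corner:

```
def contains_point(self, pt):
    """ Return True if my sprite rectangle contains point pt """
    (my_x, my_y) = self.posn
    my_width = self.image.get_width()
    my_height = self.image.get_height()
    (x, y) = pt
    return (x >= my_x and x < my_x + my_width and
            y >= my_y and y < my_y + my_height)
```

Any point-in-rectangle test will do here; this one simply compares the click coordinates against the sprite's bounding box.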
In the setup section of the game, we instantiate a new `Clock` object:

```
my_clock = pygame.time.Clock()
```

and right at the bottom of the game loop, we call a method on this object that limits the frame rate to whatever we specify. So let's plan our game and animation for 60 frames per second, by adding this line at the bottom of our game loop:

```
my_clock.tick(60)  # Waste time so that frame rate becomes 60 fps
```

You'll find that you have to go back and adjust the numbers for gravity and kicking the ball now, to match this much slower frame rate. When we plan an animation so that it only works sensibly at a fixed frame rate, we say that we've baked the animation. In this case we're baking our animations for 60 frames per second.

To fit into the existing framework that we already have for our queens board, we want to create a `DukeSprite` class that has all the same methods as the `QueenSprite` class. Then we can add one or more Duke instances onto our list of `all_sprites`, and our existing game loop will then call methods of the Duke instance. Let us start with skeleton scaffolding for the new class:

```
class DukeSprite:

    def __init__(self, img, target_posn):
        self.image = img
        self.posn = target_posn

    def update(self):
        return

    def draw(self, target_surface):
        return

    def handle_click(self):
        return

    def contains_point(self, pt):
        # Use code from QueenSprite here
        return
```

The only changes we'll need to the existing game are all in the setup section. We load up the new sprite sheet and instantiate a couple of instances of Duke, at the positions we want on the chessboard. So before entering the game loop, we add this code:

```
# Load the sprite sheet
duke_sprite_sheet = pygame.image.load("duke_spritesheet.png")

# Instantiate two duke instances, put them on the chessboard
duke1 = DukeSprite(duke_sprite_sheet, (sq_sz*2, 0))
duke2 = DukeSprite(duke_sprite_sheet, (sq_sz*5, sq_sz))

# Add them to the list of sprites which our game loop manages
all_sprites.append(duke1)
all_sprites.append(duke2)
```

Now the game loop will test whether each instance has been clicked, and will call the click handler for that instance. It will also call `update` and `draw` for all sprites. All the remaining changes we need to make will be made in the methods of the `DukeSprite` class.

Let's begin with drawing one of the patches. We'll introduce a new attribute `curr_patch_num` into the class. It holds a value between 0 and 9, and determines which patch to draw. So the job of the `draw` method is to compute the sub-rectangle of the patch to be drawn, and to blit only that portion of the spritesheet (see the sketch below).

Now on to getting the animation to work. We need to arrange logic in `update` so that if we're busy animating, we change the `curr_patch_num` every so often, and we also decide when to bring Duke back to his rest position, and stop the animation.

An important issue is that the game loop frame rate — in our case 60 fps — is not the same as the animation rate — the rate at which we want to change Duke's animation patches. So we'll plan Duke's wave animation cycle for a duration of 1 second. In other words, we want to play out Duke's 10 animation patches over 60 calls to `update`. (This is how the baking of the animation takes place!) So we'll keep another animation frame counter in the class, which will be zero when we're not animating, and each call to `update` will increment the counter up to 59, and then back to 0. We can then divide that animation counter by 6 (integer division), to set the `curr_patch_num` variable to select the patch we want to show. Notice that if `anim_frame_count` is zero, i.e. Duke is at rest, nothing happens here.
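The bodies of `draw` and `update` are not shown above, so here is a minimal sketch of both, assuming the 50-pixel patch spacing described earlier and two new attributes, `curr_patch_num` and `anim_frame_count`, that both start out at zero:

```
def draw(self, target_surface):
    # Blit only the 50-pixel-wide patch selected by curr_patch_num.
    patch_rect = (self.curr_patch_num * 50, 0,
                  50, self.image.get_height())
    target_surface.blit(self.image, self.posn, patch_rect)

def update(self):
    # If we're at rest (counter is zero) this does nothing; otherwise
    # advance the counter and pick the patch for this frame.
    if self.anim_frame_count > 0:
        self.anim_frame_count = (self.anim_frame_count + 1) % 60
        self.curr_patch_num = self.anim_frame_count // 6
```

The wrap-around at 60 is what eventually sets the counter back to zero and brings Duke to rest on patch 0.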
But if we start the counter running, it will count up to 59 before settling back to zero. Notice also, that because `anim_frame_count` can only be a value between 0 and 59, the `curr_patch_num` will always stay between 0 and 9. Just what we require! Now how do we trigger the animation, and start it running? On the mouse click. Two things of interest here. We only start the animation if Duke is at rest. Clicks on Duke while he is already waving get ignored. And when we do start the animation, we set the counter to 5 — this means that on the very next call to `update` the counter becomes 6, and the image changes. If we had set the counter to 1, we would have needed to wait for 5 more calls to `update` before anything happened — a slight lag, but enough to make things feel sluggish. The final touch-up is to initialize our two new attributes when we instantiate the class. Here is the code for the whole class now: Now we have two extra Duke instances on our chessboard, and clicking on either causes that instance to wave. Find the example games with the PyGame package, (On a windows system, something like C:\Python3\Lib\site-packages\pygame\examples) and play the Aliens game. Then read the code, in an editor or Python environment that shows line numbers. It does a number of much more advanced things than we do, and relies on the PyGame framework for more of its logic. Here are some of the points to notice: Object oriented programming is a good organizational tool for software. In the examples in this chapter, we’ve started to use (and hopefully appreciate) these benefits. Here we had N queens each with its own state, falling to its own floor level, bouncing, getting kicked, etc. We might have managed without the organizational power of objects — perhaps we could have kept lists of velocities for each queen, and lists of target positions, and so on — our code would likely have been much more complicated, ugly, and a lot poorer! animation rate The rate at which we play back successive patches to create the illusion of movement. In the sample we considered in this chapter, we played Duke’s 10 patches over the duration of one second. Not the same as the frame rate. baked animation An animation that is designed to look good at a predetermined fixed frame rate. This reduces the amount of computation that needs to be done when the game is running. High-end commercial games usually bake their animations. blit A verb used in computer graphics, meaning to make a fast copy of an image or pixels from a sub-rectangle of one image or surface to another surface or image. frame rate The rate at which the game loop executes and updates the display. game loop A loop that drives the logic of a game. It will usually poll for events, then update each of the objects in the game, then get everything drawn, and then put the newly drawn frame on display. pixel A single picture element, or dot, from which images are made. poll To ask whether something like a keypress or mouse movement has happened. Game loops usually poll to discover what events have occurred. This is different from event-driven programs like the ones seen in the chapter titled “Events”. In those cases, the button click or keypress event triggers the call of a handler function in your program, but this happens behind your back. sprite An active agent or element in a game, with its own state, position and behaviour. surface This is PyGame’s term for what the Turtle module calls a canvas. 
A surface is a rectangle of pixels used for displaying shapes and images. # Chapter 18: Recursion Recursion means “defining something in terms of itself” usually at some smaller scale, perhaps multiple times, to achieve your objective. For example, we might say “A human being is someone whose mother is a human being”, or “a directory is a structure that holds files and (smaller) directories”, or “a family tree starts with a couple who have children, each with their own family sub-trees”. Programming languages generally support recursion, which means that, in order to solve a problem,functions can call themselves to solve smaller subproblems. For our purposes, a fractal is a drawing which also has self-similar structure, where it can be defined in terms of itself. Let us start by looking at the famous Koch fractal. An order 0 Koch fractal is simply a straight line of a given size. An order 1 Koch fractal is obtained like this: instead of drawing just one line, draw instead four smaller segments, in the pattern shown here: Now what would happen if we repeated this Koch pattern again on each of the order 1 segments? We’d get this order 2 Koch fractal: Repeating our pattern again gets us an order 3 Koch fractal: Now let us think about it the other way around. To draw a Koch fractal of order 3, we can simply draw four order 2 Koch fractals. But each of these in turn needs four order 1 Koch fractals, and each of those in turn needs four order 0 fractals. Ultimately, the only drawing that will take place is at order 0. This is very simple to code up in Python: ``` def koch(t, order, size): """ Make turtle t draw a Koch fractal of 'order' and 'size'. Leave the turtle facing the same direction. """ if order == 0: # The base case is just a straight line t.forward(size) else: koch(t, order-1, size/3) # Go 1/3 of the way t.left(60) koch(t, order-1, size/3) t.right(120) koch(t, order-1, size/3) t.left(60) koch(t, order-1, size/3) ``` The key thing that is new here is that if order is not zero, `koch` calls itself recursively to get its job done. Let’s make a simple observation and tighten up this code. Remember that turning right by 120 is the same as turning left by -120. So with a bit of clever rearrangement, we can use a loop instead of lines 10-16: ``` def koch(t, order, size): if order == 0: t.forward(size) else: for angle in [60, -120, 60, 0]: koch(t, order-1, size/3) t.left(angle) ``` The final turn is 0 degrees — so it has no effect. But it has allowed us to find a pattern and reduce seven lines of code to three, which will make things easier for our next observations. Recursion, the high-level view One way to think about this is to convince yourself that the function works correctly when you call it for an order 0 fractal. Then do a mental leap of faith, saying “the fairy godmother (or Python, if you can think of Python as your fairy godmother) knows how to handle the recursive level 0 calls for me on lines 11, 13, 15, and 17, so I don’t need to think about that detail!“ All I need to focus on is how to draw an order 1 fractal if I can assume the order 0 one is already working. You’re practicing mental abstraction — ignoring the subproblem while you solve the big problem. If this mode of thinking works (and you should practice it!), then take it to the next level. Aha! now can I see that it will work when called for order 2 under the assumption that it is already working for level 1. And, in general, if I can assume the order n-1 case works, can I just solve the level n problem? 
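As a quick usage sketch (the chapter does not show the calling code, so the window setup and starting position here are just assumptions), an order 3 Koch fractal could be drawn like this:

```
import turtle

def koch(t, order, size):
    # The loop version of koch from above.
    if order == 0:
        t.forward(size)
    else:
        for angle in [60, -120, 60, 0]:
            koch(t, order-1, size/3)
            t.left(angle)

wn = turtle.Screen()
tess = turtle.Turtle()
tess.penup()
tess.goto(-150, -30)     # An arbitrary starting position
tess.pendown()
koch(tess, 3, 300)
wn.mainloop()
```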
Students of mathematics who have played with proofs of induction should see some very strong similarities here.

Recursion, the low-level operational view

Another way of trying to understand recursion is to get rid of it! If we had separate functions to draw a level 3 fractal, a level 2 fractal, a level 1 fractal and a level 0 fractal, we could simplify the above code, quite mechanically, to a situation where there was no longer any recursion, like this:

```
def koch_0(t, size):
    t.forward(size)

def koch_1(t, size):
    for angle in [60, -120, 60, 0]:
        koch_0(t, size/3)
        t.left(angle)

def koch_2(t, size):
    for angle in [60, -120, 60, 0]:
        koch_1(t, size/3)
        t.left(angle)

def koch_3(t, size):
    for angle in [60, -120, 60, 0]:
        koch_2(t, size/3)
        t.left(angle)
```

This trick of "unrolling" the recursion gives us an operational view of what happens. You can trace the program into `koch_3`, and from there, into `koch_2`, and then into `koch_1`, etc., all the way down the different layers of the recursion. This might be a useful hint to build your understanding. The mental goal is, however, to be able to do the abstraction!

All of the Python data types we have seen can be grouped inside lists and tuples in a variety of ways. Lists and tuples can also be nested, providing many possibilities for organizing data. The organization of data for the purpose of making it easier to use is called a data structure.

It's election time and we are helping to compute the votes as they come in. Votes arriving from individual wards, precincts, municipalities, counties, and states are sometimes reported as a sum total of votes and sometimes as a list of subtotals of votes. After considering how best to store the tallies, we decide to use a nested number list, which we define as follows:

A nested number list is a list whose elements are either:

* numbers, or
* (smaller) nested number lists.

Notice that the term nested number list is used in its own definition. Recursive definitions like this are quite common in mathematics and computer science. They provide a concise and powerful way to describe recursive data structures that are partially composed of smaller and simpler instances of themselves. The definition is not circular, since at some point we will reach a list that does not have any lists as elements.

Now suppose our job is to write a function that will sum all of the values in a nested number list. Python has a built-in function which finds the sum of a sequence of numbers:

```
>>> sum([1, 2, 8])
11
```

For our nested number list, however, `sum` will not work:

```
>>> sum([1, 2, [11, 13], 8])
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'list'
>>>
```

The problem is that the third element of this list, `[11, 13]`, is itself a list, so it cannot just be added to `1`, `2`, and `8`. To sum all the numbers in our recursive nested number list we need to traverse the list, visiting each of the elements within its nested structure, adding any numeric elements to our sum, and recursively repeating the summing process with any elements which are themselves sub-lists.

Thanks to recursion, the Python code needed to sum the values of a nested number list is surprisingly short:

```
def r_sum(nested_num_list):
    tot = 0
    for element in nested_num_list:
        if type(element) == type([]):
            tot += r_sum(element)
        else:
            tot += element
    return tot
```

The body of `r_sum` consists mainly of a `for` loop that traverses `nested_num_list`. If `element` is a numerical value (the `else` branch), it is simply added to `tot`. If `element` is a list, then `r_sum` is called again, with the element as an argument.
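With `r_sum` in place, the nested list that defeated the built-in `sum` now works (checking by hand: 1 + 2 + 11 + 13 + 8 = 35):

```
>>> r_sum([1, 2, [11, 13], 8])
35
```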
The statement inside the function definition in which the function calls itself is known as the recursive call. The example above has a base case (on line 7) which does not lead to a recursive call: the case where the element is not a (sub-) list. Without a base case, you'll have infinite recursion, and your program will not work.

Recursion is truly one of the most beautiful and elegant tools in computer science.

A slightly more complicated problem is finding the largest value in our nested number list:

```
def r_max(nxs):
    """
      Find the maximum in a recursive structure of lists
      within other lists.
      Precondition: No lists or sublists are empty.
    """
    largest = None
    first_time = True
    for e in nxs:
        if type(e) == type([]):
            val = r_max(e)
        else:
            val = e

        if first_time or val > largest:
            largest = val
            first_time = False

    return largest

test(r_max([2, 9, [1, 13], 8, 6]) == 13)
test(r_max([2, [[100, 7], 90], [1, 13], 8, 6]) == 100)
test(r_max([[[13, 7], 90], 2, [1, 100], 8, 6]) == 100)
test(r_max(["joe", ["sam", "ben"]]) == "sam")
```

Tests are included to provide examples of `r_max` at work.

The added twist to this problem is finding a value for initializing `largest`. We can't just use `nxs[0]`, since that could be either an element or a list. To solve this problem (at every recursive call) we initialize a Boolean flag (at line 8). When we've found the value of interest (at line 15), we check to see whether this is the initializing (first) value for `largest`, or a value that could potentially change `largest`.

Again here we have a base case at line 13. If we don't supply a base case, Python stops after reaching a maximum recursion depth and returns a runtime error. See how this happens, by running this little script which we will call `infinite_recursion.py`:

```
def recursion_depth(number):
    print("{0}, ".format(number), end="")
    recursion_depth(number + 1)

recursion_depth(0)
```

After watching the messages flash by, you will be presented with the end of a long traceback that ends with a message like the following:

```
RuntimeError: maximum recursion depth exceeded ...
```

We would certainly never want something like this to happen to a user of one of our programs, so in the next chapter we'll see how errors, any kinds of errors, are handled in Python.

The famous Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, … was devised by Fibonacci (1170-1250), who used this to model the breeding of (pairs of) rabbits. If, in generation 7 you had 21 pairs in total, of which 13 were adults, then next generation the adults will all have bred new children, and the previous children will have grown up to become adults. So in generation 8 you'll have 13+21=34, of which 21 are adults.

This model to explain rabbit breeding made the simplifying assumption that rabbits never died. Scientists often make (unrealistic) simplifying assumptions and restrictions to make some headway with the problem.

If we number the terms of the sequence from 0, we can describe each term recursively as the sum of the previous two terms:

```
fib(0) = 0
fib(1) = 1
fib(n) = fib(n-1) + fib(n-2)  for n >= 2
```

This translates very directly into some Python:

```
def fib(n):
    if n <= 1:
        return n
    t = fib(n-1) + fib(n-2)
    return t
```

This is a particularly inefficient algorithm, and we'll show one way of fixing it when we learn about dictionaries:

```
import time
t0 = time.clock()
n = 35
result = fib(n)
t1 = time.clock()

print("fib({0}) = {1}, ({2:.2f} secs)".format(n, result, t1-t0))
```

We get the correct result, but an exploding amount of work!
```
fib(35) = 9227465, (10.54 secs)
```

The following program lists the contents of a directory and all its subdirectories.

```
import os

def get_dirlist(path):
    """
      Return a sorted list of all entries in path.
      This returns just the names, not the full path to the names.
    """
    dirlist = os.listdir(path)
    dirlist.sort()
    return dirlist

def print_files(path, prefix = ""):
    """ Print recursive listing of contents of path """
    if prefix == "":  # Detect outermost call, print a heading
        print("Folder listing for", path)
        prefix = "| "

    dirlist = get_dirlist(path)
    for f in dirlist:
        print(prefix+f)                    # Print the line
        fullname = os.path.join(path, f)   # Turn name into full pathname
        if os.path.isdir(fullname):        # If a directory, recurse.
            print_files(fullname, prefix + "| ")
```

Calling the function `print_files` with some folder name will produce output similar to this:

```
Folder listing for c:\python31\Lib\site-packages\pygame\examples
| __init__.py
| aacircle.py
| aliens.py
| arraydemo.py
| blend_fill.py
| blit_blends.py
| camera.py
| chimp.py
| cursors.py
| data
| | alien1.png
| | alien2.png
| | alien3.png
...
```

Here we have a tree fractal pattern of order 8. We've labelled some of the edges, showing the depth of the recursion at which each edge was drawn.

In the tree above, the angle of deviation from the trunk is 30 degrees. Varying that angle gives other interesting shapes, for example, with the angle at 90 degrees we get this:

An interesting animation occurs if we generate and draw trees very rapidly, each time varying the angle a little. Although the Turtle module can draw trees like this quite elegantly, we could struggle for good frame rates. So we'll use PyGame instead, with a few embellishments and observations. (Once again, we suggest you cut and paste this code into your Python environment.)

```
import pygame, math
pygame.init()           # prepare the pygame module for use

# Create a new surface and window.
surface_size = 1024
main_surface = pygame.display.set_mode((surface_size, surface_size))
my_clock = pygame.time.Clock()


def draw_tree(order, theta, sz, posn, heading, color=(0,0,0), depth=0):

    trunk_ratio = 0.29       # How big is the trunk relative to whole tree?
    trunk = sz * trunk_ratio # length of trunk
    delta_x = trunk * math.cos(heading)
    delta_y = trunk * math.sin(heading)
    (u, v) = posn
    newpos = (u + delta_x, v + delta_y)
    pygame.draw.line(main_surface, color, posn, newpos)

    if order > 0:   # Draw another layer of subtrees

        # These next six lines are a simple hack to make the two major halves
        # of the recursion different colors. Fiddle here to change colors
        # at other depths, or when depth is even, or odd, etc.
        if depth == 0:
            color1 = (255, 0, 0)
            color2 = (0, 0, 255)
        else:
            color1 = color
            color2 = color

        # make the recursive calls to draw the two subtrees
        newsz = sz*(1 - trunk_ratio)
        draw_tree(order-1, theta, newsz, newpos, heading-theta, color1, depth+1)
        draw_tree(order-1, theta, newsz, newpos, heading+theta, color2, depth+1)


def gameloop():

    theta = 0
    while True:

        # Handle events from keyboard, mouse, etc.
        ev = pygame.event.poll()
        if ev.type == pygame.QUIT:
            break

        # Updates - change the angle
        theta += 0.01

        # Draw everything
        main_surface.fill((255, 255, 0))
        draw_tree(9, theta, surface_size*0.9,
                  (surface_size//2, surface_size-50), -math.pi/2)

        pygame.display.flip()
        my_clock.tick(120)


gameloop()
pygame.quit()
```

The `math` library works with angles in radians rather than degrees. Lines 14 and 15 use some high school trigonometry.
From the length of the desired line (`trunk`), and its desired angle, `cos` and `sin` help us to calculate the `x` and `y` distances we need to move.

Lines 22-30 are unnecessary, except if we want a colorful tree.

In the main game loop at line 49 we change the angle on every frame, and redraw the new tree.

Line 18 shows that PyGame can also draw lines, and plenty more. Check out the documentation. For example, drawing a small circle at each branch point of the tree can be accomplished by adding this line directly below line 18:

```
pygame.draw.circle(main_surface, color, (int(posn[0]), int(posn[1])), 3)
```

Another interesting effect — instructive too, if you wish to reinforce the idea of different instances of the function being called at different depths of recursion — is to create a list of colors, and let each recursive depth use a different color for drawing. (Use the depth of the recursion to index the list of colors.)

base case A branch of the conditional statement in a recursive function that does not give rise to further recursive calls.

infinite recursion A function that calls itself recursively without ever reaching any base case. Eventually, infinite recursion causes a runtime error.

recursion The process of calling a function that is already executing.

recursive call The statement that calls an already executing function. Recursion can also be indirect — function f can call g which calls h, and h could make a call back to f.

recursive definition A definition which defines something in terms of itself. To be useful it must include base cases which are not recursive. In this way it differs from a circular definition. Recursive definitions often provide an elegant way to express complex data structures, like a directory that can contain other directories, or a menu that can contain other menus.

Modify the Koch fractal program so that it draws a Koch snowflake, like this:

A Sierpinski triangle of order 0 is an equilateral triangle. An order 1 triangle can be drawn by drawing 3 smaller triangles (shown slightly disconnected here, just to help our understanding). Higher order 2 and 3 triangles are also shown. Draw Sierpinski triangles of any order input by the user.

Adapt the above program to change the color of its three sub-triangles at some depth of recursion. The illustration below shows two cases: on the left, the color is changed at depth 0 (the outermost level of recursion), on the right, at depth 2. If the user supplies a negative depth, the color never changes. (Hint: add a new optional parameter `colorChangeDepth` (which defaults to -1), and make this one smaller on each recursive subcall. Then, in the section of code before you recurse, test whether the parameter is zero, and change color.)

Write a function, `recursive_min`, that returns the smallest value in a nested number list.
Assume there are no empty lists or sublists: ``` test(recursive_min([2, 9, [1, 13], 8, 6]) == 1) test(recursive_min([2, [[100, 1], 90], [10, 13], 8, 6]) == 1) test(recursive_min([2, [[13, -7], 90], [1, 100], 8, 6]) == -7) test(recursive_min([[[-13, 7], 90], 2, [1, 100], 8, 6]) == -13) ``` `count` that returns the number of occurrences of `target` in a nested list: ``` test(count(2, []), 0) test(count(2, [2, 9, [2, 1, 13, 2], 8, [2, 6]]) == 4) test(count(7, [[9, [7, 1, 13, 2], 8], [7, 6]]) == 2) test(count(15, [[9, [7, 1, 13, 2], 8], [2, 6]]) == 0) test(count(5, [[5, [5, [1, 5], 5], 5], [5, 6]]) == 6) test(count("a", [["this",["a",["thing","a"],"a"],"is"], ["a","easy"]]) == 4) ``` `flatten` that returns a simple list containing all the values in a nested list: ``` test(flatten([2,9,[2,1,13,2],8,[2,6]]) == [2,9,2,1,13,2,8,2,6]) test(flatten([[9,[7,1,13,2],8],[7,6]]) == [9,7,1,13,2,8,7,6]) test(flatten([[9,[7,1,13,2],8],[2,6]]) == [9,7,1,13,2,8,2,6]) test(flatten([["this",["a",["thing"],"a"],"is"],["a","easy"]]) == ["this","a","thing","a","is","a","easy"]) test(flatten([]) == []) ``` Rewrite the fibonacci algorithm without using recursion. Can you find bigger terms of the sequence? Can you find `fib(200)` ? Use help to find out what ``` sys.getrecursionlimit() ``` ``` sys.setrecursionlimit(n) ``` do. Create several experiments similar to what was done in infinite_recursion.py to test your understanding of how these module functions work. Write a program that walks a directory structure (as in the last section of this chapter), but instead of printing filenames, it returns a list of all the full paths of files in the directory or the subdirectories. (Don’t include directories in this list — just files.) For example, the output list might have elements like this: ``` ["C:\Python31\Lib\site-packages\pygame\docs\ref\mask.html", "C:\Python31\Lib\site-packages\pygame\docs\ref\midi.html", ... "C:\Python31\Lib\site-packages\pygame\examples\aliens.py", ... "C:\Python31\Lib\site-packages\pygame\examples\data\boom.wav", ... ] ``` Write a program named `litter.py` that creates an empty file named `trash.txt` in each subdirectory of a directory tree given the root of the tree as an argument (or the current directory as a default). Now write a program named `cleanup.py` that removes all these files. Hint #1: Use the program from the example in the last section of this chapter as a basis for these two recursive programs. Because you’re going to destroy files on your disks, you better get this right, or you risk losing files you care about. So excellent advice is that initially you should fake the deletion of the files — just print the full path names of each file that you intend to delete. Once you’re happy that your logic is correct, and you can see that you’re not deleting the wrong things, you can replace the print statement with the real thing. Hint #2: Look in the `os` module for a function that removes files. # Chapter 19: Exceptions Whenever a runtime error occurs, it creates an exception object. The program stops running at this point and Python prints out the traceback, which ends with a message describing the exception that occurred. 
For example, dividing by zero creates an exception:

```
>>> print(55/0)
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
ZeroDivisionError: division by zero
```

So does accessing a non-existent list item:

```
>>> a = []
>>> print(a[5])
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
IndexError: list index out of range
```

Or trying to make an item assignment on a tuple:

```
>>> tup = ("a", "b", "d", "d")
>>> tup[2] = "c"
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
```

In each case, the error message on the last line has two parts: the type of error before the colon, and specifics about the error after the colon.

Sometimes we want to execute an operation that might cause an exception, but we don't want the program to stop. We can handle the exception using the `try` statement to "wrap" a region of code. For example, we might prompt the user for the name of a file and then try to open it. If the file doesn't exist, we don't want the program to crash; we want to handle the exception:

```
filename = input("Enter a file name: ")
try:
    f = open(filename, "r")
except:
    print("There is no file named", filename)
```

The `try` statement has three separate clauses, or parts, introduced by the keywords `try` … `except` … `finally`. Either the `except` or the `finally` clauses can be omitted, so the above code considers the most common version of the `try` statement first.

The `try` statement executes and monitors the statements in the first block. If no exceptions occur, it skips the block under the `except` clause. If any exception occurs, it executes the statements in the `except` clause and then continues.

We could encapsulate this capability in a function: `exists` which takes a filename and returns true if the file exists, false if it doesn't:

```
def exists(filename):
    try:
        f = open(filename)
        f.close()
        return True
    except:
        return False
```

A template to test if a file exists, without using exceptions

The function we've just shown is not one we'd recommend. It opens and closes the file, which is semantically different from asking "does it exist?". How? Firstly, it might update some timestamps on the file. Secondly, it might tell us that there is no such file if some other program already happens to have the file open, or if our permission settings don't allow us to open the file.

Python provides a module called `os.path` within the `os` module. It provides a number of useful functions to work with paths, files and directories, so you should check out the help.

```
import os

# This is the preferred way to check if a file exists.
if os.path.isfile("c:/temp/testdata.txt"):
    ...
```
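As a small aside (these two calls are not used again in this chapter, so treat this as an illustration only), `os.path.exists` answers the same question for files or directories, and `os.path.join` builds pathnames portably:

```
import os

print(os.path.exists("c:/temp"))                  # True if the file or folder exists
print(os.path.join("c:/temp", "testdata.txt"))    # e.g. c:/temp\testdata.txt on Windows
```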
We can use multiple `except` clauses to handle different kinds of exceptions (see the Errors and Exceptions lesson from Python creator Guido van Rossum's Python Tutorial for a more complete discussion of exceptions). So the program could do one thing if the file does not exist, but do something else if the file was in use by another program.

Can our program deliberately cause its own exceptions? If our program detects an error condition, we can raise an exception. Here is an example that gets input from the user and checks that the number is non-negative:

Line 5 creates an exception object, in this case, a `ValueError` object, which encapsulates specific information about the error. Assume that in this case function `A` called `B` which called `C` which called `D` which called `get_age`. The `raise` statement on line 6 carries this object out as a kind of "return value", and immediately exits from `get_age()` to its caller `D`. Then `D` again exits to its caller `C`, and `C` exits to `B` and so on, each returning the exception object to their caller, until it encounters a `try ... except` that can handle the exception. We call this "unwinding the call stack".

`ValueError` is one of the built-in exception types which most closely matches the kind of error we want to raise. The complete listing of built-in exceptions can be found at the Built-in Exceptions section of the Python Library Reference, again by Python's creator, Guido van Rossum.

If the function that called `get_age` (or its caller, or their caller, …) handles the error, then the program can carry on running; otherwise, Python prints the traceback and exits:

```
>>> get_age()
Please enter your age: 42
42
>>> get_age()
Please enter your age: -2
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
  File "learn_exceptions.py", line 4, in get_age
    raise ValueError("{0} is not a valid age".format(age))
ValueError: -2 is not a valid age
```

The error message includes the exception type and the additional information that was provided when the exception object was first created.

It is often the case that lines 5 and 6 (creating the exception object, then raising the exception) are combined into a single statement, but there are really two different and independent things happening, so perhaps it makes sense to keep the two steps separate when we first learn to work with exceptions. Here we show it all in a single statement:

```
raise ValueError("{0} is not a valid age".format(age))
```

Using exception handling, we can now modify our `recursion_depth` example from the previous chapter so that it stops when it reaches the maximum recursion depth allowed:

```
def recursion_depth(number):
    print("Recursion depth number", number)
    try:
        recursion_depth(number + 1)
    except:
        print("I cannot go any deeper into this wormhole.")

recursion_depth(0)
```

Run this version and observe the results.

The `finally` clause of the `try` statement

A common programming pattern is to grab a resource of some kind — e.g. we create a window for turtles to draw on, or we dial up a connection to our internet service provider, or we may open a file for writing. Then we perform some computation which may raise an exception, or may work without any problems. Whatever happens, we want to "clean up" the resources we grabbed — e.g. close the window, disconnect our dial-up connection, or close the file. The `finally` clause of the `try` statement is the way to do just this.
Consider this (somewhat contrived) example: ``` import turtle import time def show_poly(): try: win = turtle.Screen() # Grab/create a resource, e.g. a window tess = turtle.Turtle() # This dialog could be cancelled, # or the conversion to int might fail, or n might be zero. n = int(input("How many sides do you want in your polygon?")) angle = 360 / n for i in range(n): # Draw the polygon tess.forward(10) tess.left(angle) time.sleep(3) # Make program wait a few seconds finally: win.bye() # Close the turtle's window show_poly() show_poly() show_poly() ``` In lines 20–22, `show_poly` is called three times. Each one creates a new window for its turtle, and draws a polygon with the number of sides input by the user. But what if the user enters a string that cannot be converted to an `int` ? What if they close the dialog? We’ll get an exception, but even though we’ve had an exception, we still want to close the turtle’s window. Lines 17–18 does this for us. Whether we complete the statements in the `try` clause successfully or not, the `finally` block will always be executed. Notice that the exception is still unhandled — only an `except` clause can handle an exception, so our program will still crash. But at least its turtle window will be closed before it crashes! ``` exception An error that occurs at runtime. handle an exception To prevent an exception from causing our program to crash, by wrapping the block of code in a `try` ... `except` construct. raise To create a deliberate exception by using the `raise` statement. ``` `readposint` that uses the `input` dialog to prompt the user for a positive integer and then checks the input to confirm that it meets the requirements. It should be able to handle inputs that cannot be converted to `int` , as well as negative `int` s, and edge cases (e.g. when the user closes the dialog, or does not enter anything at all.) # Chapter 20: Dictionaries All of the compound data types we have studied in detail so far — strings, lists, and tuples — are sequence types, which use integers as indices to access the values they contain within them. Dictionaries are yet another kind of compound type. They are Python’s built-in mapping type. They map keys, which can be any immutable type, to values, which can be any type (heterogeneous), just like the elements of a list or tuple. In other languages, they are called associative arrays since they associate a key with a value. As an example, we will create a dictionary to translate English words into Spanish. For this dictionary, the keys are strings. One way to create a dictionary is to start with the empty dictionary and add key:value pairs. The empty dictionary is denoted `{}` : ``` >>> eng2sp = {} >>> eng2sp["one"] = "uno" >>> eng2sp["two"] = "dos" ``` The first assignment creates a dictionary named `eng2sp` ; the other assignments add new key:value pairs to the dictionary. We can print the current value of the dictionary in the usual way: ``` >>> print(eng2sp) {"two": "dos", "one": "uno"} ``` The key:value pairs of the dictionary are separated by commas. Each pair contains a key and a value separated by a colon. Hashing The order of the pairs may not be what was expected. Python uses complex algorithms, designed for very fast access, to determine where the key:value pairs are stored in a dictionary. For our purposes we can think of this ordering as unpredictable. 
You also might wonder why we use dictionaries at all when the same concept of mapping a key to a value could be implemented using a list of tuples: ``` >>> {"apples": 430, "bananas": 312, "oranges": 525, "pears": 217} {'pears': 217, 'apples': 430, 'oranges': 525, 'bananas': 312} >>> [('apples', 430), ('bananas', 312), ('oranges', 525), ('pears', 217)] [('apples', 430), ('bananas', 312), ('oranges', 525), ('pears', 217)] ``` The reason is dictionaries are very fast, implemented using a technique called hashing, which allows us to access a value very quickly. By contrast, the list of tuples implementation is slow. If we wanted to find a value associated with a key, we would have to iterate over every tuple, checking the 0th element. What if the key wasn’t even in the list? We would have to get to the end of it to find out. ``` Another way to create a dictionary is to provide a list of key:value pairs using the same syntax as the previous output: ```python >>> eng2sp = {"one": "uno", "two": "dos", "three": "tres"} ``` It doesn’t matter what order we write the pairs. The values in a dictionary are accessed with keys, not with indices, so there is no need to care about ordering. Here is how we use a key to look up the corresponding value: ``` >>> print(eng2sp["two"]) 'dos' ``` The key `"two"` yields the value `"dos"` . Lists, tuples, and strings have been called sequences, because their items occur in order. The dictionary is the first compound type that we’ve seen that is not a sequence, so we can’t index or slice a dictionary. The `del` statement removes a key:value pair from a dictionary. For example, the following dictionary contains the names of various fruits and the number of each fruit in stock: ``` >>> inventory = {"apples": 430, "bananas": 312, "oranges": 525, "pears": 217} >>> print(inventory) {'pears': 217, 'apples': 430, 'oranges': 525, 'bananas': 312} ``` If someone buys all of the pears, we can remove the entry from the dictionary: ``` >>> del inventory["pears"] >>> print(inventory) {'apples': 430, 'oranges': 525, 'bananas': 312} ``` Or if we’re expecting more pears soon, we might just change the value associated with pears: ``` >>> inventory["pears"] = 0 >>> print(inventory) {'pears': 0, 'apples': 430, 'oranges': 525, 'bananas': 312} ``` A new shipment of bananas arriving could be handled like this: ``` >>> inventory["bananas"] += 200 >>> print(inventory) {'pears': 0, 'apples': 430, 'oranges': 525, 'bananas': 512} ``` The `len` function also works on dictionaries; it returns the number of key:value pairs: ``` >>> len(inventory) 4 ``` Dictionaries have a number of useful built-in methods. The `keys` method returns what Python 3 calls a view of its underlying keys. A view object has some similarities to the `range` object we saw earlier —it is a lazy promise, to deliver its elements when they’re needed by the rest of the program. 
We can iterate over the view, or turn the view into a list like this: ``` for k in eng2sp.keys(): # The order of the k's is not defined print("Got key", k, "which maps to value", eng2sp[k]) ks = list(eng2sp.keys()) print(ks) ``` This produces this output: ``` Got key three which maps to value tres Got key two which maps to value dos Got key one which maps to value uno ['three', 'two', 'one'] ``` It is so common to iterate over the keys in a dictionary that we can omit the `keys` method call in the `for` loop — iterating over a dictionary implicitly iterates over its keys: ``` for k in eng2sp: print("Got key", k) ``` The `values` method is similar; it returns a view object which can be turned into a list: ``` >>> list(eng2sp.values()) ['tres', 'dos', 'uno'] ``` The `items` method also returns a view, which promises a list of tuples — one tuple for each key:value pair: ``` >>> list(eng2sp.items()) [('three', 'tres'), ('two', 'dos'), ('one', 'uno')] ``` Tuples are often useful for getting both the key and the value at the same time while we are looping: ``` for (k,v) in eng2sp.items(): print("Got",k,"that maps to",v) ``` This produces: ``` Got three that maps to tres Got two that maps to dos Got one that maps to uno ``` The `in` and `not in` operators can test if a key is in the dictionary: ``` >>> "one" in eng2sp True >>> "six" in eng2sp False >>> "tres" in eng2sp # Note that 'in' tests keys, not values. False ``` This method can be very useful, since looking up a non-existent key in a dictionary causes a runtime error: ``` >>> eng2esp["dog"] Traceback (most recent call last): ... KeyError: 'dog' ``` As in the case of lists, because dictionaries are mutable, we need to be aware of aliasing. Whenever two variables refer to the same object, changes to one affect the other. If we want to modify a dictionary and keep a copy of the original, use the `copy` method. For example, `opposites` is a dictionary that contains pairs of opposites: ``` >>> opposites = {"up": "down", "right": "wrong", "yes": "no"} >>> alias = opposites >>> copy = opposites.copy() # Shallow copy ``` `alias` and `opposites` refer to the same object; `copy` refers to a fresh copy of the same dictionary. If we modify `alias` , `opposites` is also changed: ``` >>> alias["right"] = "left" >>> opposites["right"] 'left' ``` If we modify `copy` , `opposites` is unchanged: ``` >>> copy["right"] = "privilege" >>> opposites["right"] 'left' ``` We previously used a list of lists to represent a matrix. That is a good choice for a matrix with mostly nonzero values, but consider a sparse matrix like this one: The list representation contains a lot of zeroes: ``` matrix = [[0, 0, 0, 1, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 3, 0]] ``` An alternative is to use a dictionary. For the keys, we can use tuples that contain the row and column numbers. Here is the dictionary representation of the same matrix: ``` >>> matrix = {(0, 3): 1, (2, 1): 2, (4, 3): 3} ``` We only need three key:value pairs, one for each nonzero element of the matrix. Each key is a tuple, and each value is an integer. To access an element of the matrix, we could use the `[]` operator: ``` >>> matrix[(0, 3)] 1 ``` Notice that the syntax for the dictionary representation is not the same as the syntax for the nested list representation. Instead of two integer indices, we use one index, which is a tuple of integers. There is one problem. 
If we specify an element that is zero, we get an error, because there is no entry in the dictionary with that key: ``` >>> matrix[(1, 3)] KeyError: (1, 3) ``` The `get` method solves this problem: ``` >>> matrix.get((0, 3), 0) 1 ``` The first argument is the key; the second argument is the value `get` should return if the key is not in the dictionary: ``` >>> matrix.get((1, 3), 0) 0 ``` `get` definitely improves the semantics of accessing a sparse matrix. Shame about the syntax. If you played around with the `fibo` function from the chapter on recursion, you might have noticed that the bigger the argument you provide, the longer the function takes to run. Furthermore, the run time increases very quickly. On one of our machines, `fib(20)` finishes instantly, `fib(30)` takes about a second, and `fib(40)` takes roughly forever. To understand why, consider this call graph for `fib` with `n = 4` : A call graph shows some function frames (instances when the function has been invoked), with lines connecting each frame to the frames of the functions it calls. At the top of the graph, `fib` with `n = 4` calls `fib` with `n = 3` and `n = 2` . In turn, `fib` with `n = 3` calls `fib` with `n = 2` and `n = 1` . And so on. Count how many times `fib(0)` and `fib(1)` are called. This is an inefficient solution to the problem, and it gets far worse as the argument gets bigger. A good solution is to keep track of values that have already been computed by storing them in a dictionary. A previously computed value that is stored for later use is called a memo. Here is an implementation of `fib` using memos: ``` alreadyknown = {0: 0, 1: 1} def fib(n): if n not in alreadyknown: new_value = fib(n-1) + fib(n-2) alreadyknown[n] = new_value return alreadyknown[n] ``` The dictionary named `alreadyknown` keeps track of the Fibonacci numbers we already know. We start with only two pairs: 0 maps to 1; and 1 maps to 1. Whenever `fib` is called, it checks the dictionary to determine if it contains the result. If it’s there, the function can return immediately without making any more recursive calls. If not, it has to compute the new value. The new value is added to the dictionary before the function returns. Using this version of `fib` , our machines can compute `fib(100)` in an eyeblink. ``` >>> fib(100) 354224848179261915075 ``` In the exercises in Chapter 8 (Strings) we wrote a function that counted the number of occurrences of a letter in a string. A more general version of this problem is to form a frequency table of the letters in the string, that is, how many times each letter appears. Such a frequency table might be useful for compressing a text file. Because different letters appear with different frequencies, we can compress a file by using shorter codes for common letters and longer codes for letters that appear less frequently. Dictionaries provide an elegant way to generate a frequency table: ``` >>> letter_counts = {} >>> for letter in "Mississippi": ... letter_counts[letter] = letter_counts.get(letter, 0) + 1 ... >>> letter_counts {'M': 1, 's': 4, 'p': 2, 'i': 4} ``` We start with an empty dictionary. For each letter in the string, we find the current count (possibly zero) and increment it. At the end, the dictionary contains pairs of letters and their frequencies. It might be more appealing to display the frequency table in alphabetical order. 
We can do that with the `items` and `sort` methods: ``` >>> letter_items = list(letter_counts.items()) >>> letter_items.sort() >>> print(letter_items) [('M', 1), ('i', 4), ('p', 2), ('s', 4)] ``` Notice in the first line we had to call the type conversion function `list` . That turns the promise we get from `items` into a list, a step that is needed before we can use the list’s `sort` method. ``` call graph A graph consisting of nodes which represent function frames (or invocations), and directed edges (lines with arrows) showing which frames gave rise to other frames. dictionary A collection of key:value pairs that maps from keys to values. The keys can be any immutable value, and the associated value can be of any type. key A data item that is *mapped to* a value in a dictionary. Keys are used to look up values in a dictionary. Each key must be unique across the dictionary. key:value pair One of the pairs of items in a dictionary. Values are looked up in a dictionary by key. mapping type A mapping type is a data type comprised of a collection of keys and associated values. Python's only built-in mapping type is the dictionary. Dictionaries implement the [associative array](http://en.wikipedia.org/wiki/Associative_array) abstract data type. memo Temporary storage of precomputed values to avoid duplicating the same computation. Write a program that reads a string and returns a table of the letters of the alphabet in alphabetical order which occur in the string together with the number of times each letter occurs. Case should be ignored. A sample output of the program when the user enters the data “ThiS is String with Upper and lower case Letters”, would look this this: ``` a 2 c 1 d 1 e 5 g 1 h 2 i 4 l 2 n 2 o 1 p 2 r 4 s 5 t 5 u 1 w 2 ``` ``` >>> d = {"apples": 15, "bananas": 35, "grapes": 12} >>> d["bananas"] ``` ``` >>> d["oranges"] = 20 >>> len(d) ``` `>>> "grapes" in d` `>>> d["pears"]` ``` >>> d.get("pears", 0) ``` ``` >>> fruits = list(d.keys()) >>> fruits.sort() >>> print(fruits) ``` ``` >>> del d["apples"] >>> "apples" in d ``` ``` def add_fruit(inventory, fruit, quantity=0): return # Make these tests work... new_inventory = {} add_fruit(new_inventory, "strawberries", 10) test("strawberries" in new_inventory) test(new_inventory["strawberries"] == 10) add_fruit(new_inventory, "strawberries", 25) test(new_inventory["strawberries"] == 35) ``` Write a program called `alice_words.py` that creates a text file named `alice_words.txt` containing an alphabetical listing of all the words, and the number of times each occurs, in the text version of Alice’s Adventures in Wonderland. (You can obtain a free plain text version of the book, along with many others, from http://www.gutenberg.org.) The first 10 lines of your output file should look something like this: a 631 a-piece 1 abide 1 able 1 about 94 above 3 absence 1 absurd 2 How many times does the word `alice` occur in the book? What is the longest word in Alice in Wonderland? How many characters does it have? # Chapter 21: A Case Study: Indexing your files We present a small case study that ties together modules, recursion, files, dictionaries and introduces simple serialization and deserialization. In this chapter we’re going to use a dictionary to help us find a file rapidly. The case study has two components: Near the end of the chapter on recursion we showed an example of how to recursively list all files under a given path of our filesystem. We’ll borrow (and change) that code somewhat to provide the skeleton of our crawler. 
Its function is to recursively traverse every file in a given path. (We'll figure out what to do with each file soon: for the moment we'll just print its short name, and its full path.)

```
# Crawler crawls the filesystem and builds a dictionary
import os

def crawl_files(path):
    """ Recursively visit all files in path """
    # Fetch all the entries in the current folder.
    dirlist = os.listdir(path)
    for f in dirlist:
        # Turn each name into full pathname
        fullname = os.path.join(path, f)
        # If it is a directory, recurse.
        if os.path.isdir(fullname):
            crawl_files(fullname)
        else:
            # Do something useful with the file
            print("{0:30} {1}".format(f, fullname))

crawl_files("C:\\Python32")
```

We get output similar to this:

```
CherryPy-wininst.log           C:\Python32\CherryPy-wininst.log
bz2.pyd                        C:\Python32\DLLs\bz2.pyd
py.ico                         C:\Python32\DLLs\py.ico
pyc.ico                        C:\Python32\DLLs\pyc.ico
pyexpat.pyd                    C:\Python32\DLLs\pyexpat.pyd
python3.dll                    C:\Python32\DLLs\python3.dll
select.pyd                     C:\Python32\DLLs\select.pyd
sqlite3.dll                    C:\Python32\DLLs\sqlite3.dll
tcl85.dll                      C:\Python32\DLLs\tcl85.dll
tclpip85.dll                   C:\Python32\DLLs\tclpip85.dll
tk85.dll                       C:\Python32\DLLs\tk85.dll
...
```

We'll adapt this now to store the short name and the full path of the file as a key:value pair in a dictionary. But first, two observations: the same short filename can occur in more than one place on the disk, so each value in the dictionary will be a list of full pathnames; and we'll normalize the filenames to lowercase, so that lookups are not case-sensitive.

We'll change the code above by setting up a global dictionary, initially empty: the statement `thedict = {}` near the top of the program, just after the import, will do this. Then, instead of printing the information in the `else` branch, we'll add the filename and path to the dictionary. The code will need to check whether the key already exists:

```
key = f.lower()        # Normalize the filename
if key in thedict:
    thedict[key].append(fullname)
else:
    # insert the key and a list of one pathname
    thedict[key] = [fullname]
```

After running for a while the program terminates. We can interactively confirm that the dictionary seems to have been built correctly:

```
>>> len(thedict)
14861
>>> thedict["python.exe"]
['C:\\Python32\\python.exe']
>>> thedict["logo.png"]
['C:\\Python32\\Lib\\site-packages\\PyQt4\\doc\\html\\_static\\logo.png',
 'C:\\Python32\\Lib\\site-packages\\PyQt4\\doc\\sphinx\\static\\logo.png',
 'C:\\Python32\\Lib\\site-packages\\PyQt4\\examples\\demos\\textedit\\images\\logo.png',
 'C:\\Python32\\Lib\\site-packages\\sphinx-1.1.3-py3.2.egg\\sphinx\\themes\\scrolls\\static\\logo.png']
>>>
```

It would be nice to add a progress bar while the crawler is running: a typical technique is to print dots to show progress. We'll introduce a count of how many files have been indexed (this can be a global variable), and after we've handled the current file, we'll add this code:

```
filecount += 1
if filecount % 100 == 0:
    print(".", end="")
if filecount % 5000 == 0:
    print()
```

As we complete each 100 files we print a dot. After every 50 dots we start a new line. You'll need to also create the global variable, initialize it to zero, and remember to declare the variable as global in the crawler.

The main calling code can now print some statistics for us. It becomes

```
crawl_files("C:\\Python32")
print()   # End the last line of dots ...
print("Indexed {0} files, {1} entries in the dictionary."
                       .format(filecount, len(thedict)))
```

We'll now get something like

```
..................................................
..................................................
..................................................
....................................
Indexed 18635 files, 14861 entries in the dictionary.
```
It is reassuring to look at the properties of the folder in our operating system, and note that it counts exactly the same number of files as our program does!

The dictionary we've built is an object. To save it we're going to turn it into a string, and write the string to a file on our disk. The string must be in a format that allows another program to unambiguously reconstruct another dictionary with the same key-value entries. The process of turning an object into a string representation is called serialization, and the inverse operation — reconstructing a new object from a string — is called deserialization. There are a few ways to do this: some use binary formats, some use text formats, and the way different types of data are encoded differs. A popular, lightweight technique used extensively in web servers and web pages is to use JSON (JavaScript Object Notation) encoding.

Amazingly, we need just four new lines of code to save our dictionary to our disk:

```
import json

f = open("C:\\temp\\mydict.txt", "w")
json.dump(thedict, f)
f.close()
```

You can find the file on your disk and open it with a text editor to see what the JSON encoding looks like.

The query program needs to reconstruct the dictionary from the disk file, and then provide a lookup function:

```
import json

f = open("C:\\temp\\mydict.txt", "r")
dict = json.load(f)
f.close()
print("Loaded {0} filenames for querying.".format(len(dict)))

def query(filename):
    f = filename.lower()
    if f not in dict:
        print("No hits for {0}".format(filename))
    else:
        print("{0} is at ".format(filename))
        for p in dict[f]:
            print("...", p)
```

And here is a sample run:

```
Loaded 14861 filenames for querying.
>>> query('python.exe')
python.exe is at
... C:\Python32\python.exe
>>> query('java.exe')
No hits for java.exe
>>> query('INDEX.HtMl')
INDEX.HtMl is at
... C:\Python32\Lib\site-packages\cherrypy\test\static\index.html
... C:\Python32\Lib\site-packages\eric5\Documentation\Source\index.html
... C:\Python32\Lib\site-packages\IPython\frontend\html\notebook\static\codemirror\mode\css\index.html
... C:\Python32\Lib\site-packages\IPython\frontend\html\notebook\static\codemirror\mode\htmlmixed\index.html
... C:\Python32\Lib\site-packages\IPython\frontend\html\notebook\static\codemirror\mode\javascript\index.html
... C:\Python32\Lib\site-packages\IPython\frontend\html\notebook\static\codemirror\mode\markdown\index.html
... C:\Python32\Lib\site-packages\IPython\frontend\html\notebook\static\codemirror\mode\python\index.html
... C:\Python32\Lib\site-packages\IPython\frontend\html\notebook\static\codemirror\mode\rst\index.html
... C:\Python32\Lib\site-packages\IPython\frontend\html\notebook\static\codemirror\mode\xml\index.html
... C:\Python32\Lib\site-packages\pygame\docs\index.html
... C:\Python32\Lib\site-packages\pygame\docs\ref\index.html
... C:\Python32\Lib\site-packages\PyQt4\doc\html\index.html
>>>
```

The JSON file might get quite big. Gzip compression is available in Python, so let's take advantage of it…

When we saved the dictionary to disk we opened a text file for writing. We simply have to change that one line of the program (and import the correct modules) to create a gzip file instead of a normal text file. The replacement code is

```
import io, gzip      # json was already imported earlier

## f = open("C:\\temp\\mydict.txt", "w")
f = io.TextIOWrapper(gzip.open("C:\\temp\\mydict.gz", mode="wb"))
json.dump(thedict, f)
f.close()
```

Magically, we now get a zipped file that is about 7 times smaller than the text version. (Compression/decompression like this is often done by web servers and browsers for significantly faster downloads.)
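If you are curious about the exact saving on your machine, one quick check (a sketch; it simply reuses the two hypothetical paths from above) is to compare the two files with `os.path.getsize`:

```
import os

plain = os.path.getsize("C:\\temp\\mydict.txt")
zipped = os.path.getsize("C:\\temp\\mydict.gz")
# prints something like "mydict.txt: ... bytes, mydict.gz: ... bytes (7.0x smaller)"
print("mydict.txt: {0} bytes, mydict.gz: {1} bytes ({2:.1f}x smaller)"
                       .format(plain, zipped, plain / zipped))
```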
Now, of course, our query program needs to uncompress the data:

```
import io, gzip, json

## f = open("C:\\temp\\mydict.txt", "r")
f = io.TextIOWrapper(gzip.open("C:\\temp\\mydict.gz", mode="r"))
dict = json.load(f)
f.close()
print("Loaded {0} filenames for querying.".format(len(dict)))
```

Composability is the key… In the earliest chapters of the book we talked about composability: the ability to join together or compose different fragments of code and functionality to build more powerful constructs. This case study has shown an excellent example of this. Our JSON serializer and deserializer can link with our file mechanisms. The gzip compressor / decompressor can also present itself to our program as if it were just a specialized stream of data, as one might get from reading a file. The end result is a very elegant composition of powerful tools. Instead of requiring separate steps for serializing the dictionary to a string, compressing the string, writing the resulting bytes to a file, etc., the composability has let us do it all very easily!

deserialization: Reconstructing an in-memory object from some external text representation.

gzip: A lossless compression technique that reduces the storage size of data. (Lossless means you can recover the original data exactly.)

JSON: JavaScript Object Notation is a format for serializing and transporting objects, often used between web servers and web browsers that run JavaScript. Python contains a `json` module to provide this capability.

serialization: Turning an object into a string (or bytes) so that it can be sent over the internet, or saved in a file. The recipient can reconstruct a new object from the data.

# Chapter 22: Even more OOP

As another example of a user-defined type, we'll define a class called `MyTime` that records the time of day. We'll provide an `__init__` method to ensure that every instance is created with appropriate attributes and initialization. The class definition looks like this:

```
class MyTime:

    def __init__(self, hrs=0, mins=0, secs=0):
        """ Create a MyTime object initialized to hrs, mins, secs """
        self.hours = hrs
        self.minutes = mins
        self.seconds = secs
```

We can instantiate a new `MyTime` object:

```
tim1 = MyTime(11, 59, 30)
```

The state diagram for the object looks like this:

We'll leave it as an exercise for the readers to add a `__str__` method so that MyTime objects can print themselves decently.

In the next few sections, we'll write two versions of a function called `add_time`, which calculates the sum of two `MyTime` objects. They will demonstrate two kinds of functions: pure functions and modifiers.

The following is a rough version of `add_time`:

```
def add_time(t1, t2):
    h = t1.hours + t2.hours
    m = t1.minutes + t2.minutes
    s = t1.seconds + t2.seconds
    sum_t = MyTime(h, m, s)
    return sum_t
```

The function creates a new `MyTime` object and returns a reference to the new object. This is called a pure function because it does not modify any of the objects passed to it as parameters and it has no side effects, such as updating global variables, displaying a value, or getting user input.

Here is an example of how to use this function. We'll create two `MyTime` objects: `current_time`, which contains the current time; and `bread_time`, which contains the amount of time it takes for a breadmaker to make bread. Then we'll use `add_time` to figure out when the bread will be done.
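The session below also relies on the `__str__` method we just left as an exercise. A minimal sketch (one possible solution only; the zero-padded `hh:mm:ss` format is simply chosen to match the output shown) is a method like this inside `MyTime`:

```
    def __str__(self):
        # Zero-padded hh:mm:ss, e.g. 12:49:30
        return "{0:02}:{1:02}:{2:02}".format(
                   self.hours, self.minutes, self.seconds)
```

With something like that in place, the example looks like this: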
```
>>> current_time = MyTime(9, 14, 30)
>>> bread_time = MyTime(3, 35, 0)
>>> done_time = add_time(current_time, bread_time)
>>> print(done_time)
12:49:30
```

The output of this program is `12:49:30`, which is correct. On the other hand, there are cases where the result is not correct. Can you think of one?

The problem is that this function does not deal with cases where the number of seconds or minutes adds up to more than sixty. When that happens, we have to carry the extra seconds into the minutes column or the extra minutes into the hours column. Here's a better version of the function:

```
def add_time(t1, t2):
    h = t1.hours + t2.hours
    m = t1.minutes + t2.minutes
    s = t1.seconds + t2.seconds
    if s >= 60:
        s -= 60
        m += 1
    if m >= 60:
        m -= 60
        h += 1
    sum_t = MyTime(h, m, s)
    return sum_t
```

This function is starting to get bigger, and still doesn't work for all possible cases. Later we will suggest an alternative approach that yields better code.

There are times when it is useful for a function to modify one or more of the objects it gets as parameters. Usually, the caller keeps a reference to the objects it passes, so any changes the function makes are visible to the caller. Functions that work this way are called modifiers.

`increment`, which adds a given number of seconds to a `MyTime` object, would be written most naturally as a modifier. A rough draft of the function looks like this:

```
def increment(t, seconds):
    t.seconds += seconds
    if t.seconds >= 60:
        t.seconds -= 60
        t.minutes += 1
    if t.minutes >= 60:
        t.minutes -= 60
        t.hours += 1
```

The first line performs the basic operation; the remainder deals with the special cases we saw before.

Is this function correct? What happens if the parameter `seconds` is much greater than sixty? In that case, it is not enough to carry once; we have to keep doing it until `seconds` is less than sixty. One solution is to replace the `if` statements with `while` statements:

```
def increment(t, seconds):
    t.seconds += seconds
    while t.seconds >= 60:
        t.seconds -= 60
        t.minutes += 1
    while t.minutes >= 60:
        t.minutes -= 60
        t.hours += 1
```

This function is now correct when seconds is not negative, and when hours does not exceed 23, but it is not a particularly good solution.

Once again, OOP programmers would prefer to put functions that work with `MyTime` objects into the `MyTime` class, so let's convert `increment` to a method. To save space, we will leave out previously defined methods, but you should keep them in your version:

```
    def increment(self, seconds):
        self.seconds += seconds

        while self.seconds >= 60:
            self.seconds -= 60
            self.minutes += 1

        while self.minutes >= 60:
            self.minutes -= 60
            self.hours += 1
```

The transformation is purely mechanical — we move the definition into the class definition and (optionally) change the name of the first parameter to `self`, to fit with Python style conventions.

Now we can invoke `increment` using the syntax for invoking a method.

```
current_time.increment(500)
```

Again, the object on which the method is invoked gets assigned to the first parameter, `self`. The second parameter, `seconds`, gets the value `500`.

Often a high-level insight into the problem can make the programming much easier. In this case, the insight is that a `MyTime` object is really a three-digit number in base 60! The `second` component is the ones column, the `minute` component is the sixties column, and the `hour` component is the thirty-six hundreds column.

When we wrote `add_time` and `increment`, we were effectively doing addition in base 60, which is why we had to carry from one column to the next.
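For example, reading 1 hour, 20 minutes and 30 seconds as a base-60 numeral gives `1*3600 + 20*60 + 30 = 4830` seconds; dividing 4830 by 3600, and then taking remainders and dividing by 60, recovers the 1, 20 and 30 again.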
This observation suggests another approach to the whole problem — we can convert a `MyTime` object into a single number and take advantage of the fact that the computer knows how to do arithmetic with numbers. The following method is added to the `MyTime` class to convert any instance into a corresponding number of seconds:

```
    def to_seconds(self):
        """ Return the number of seconds represented
            by this instance
        """
        return self.hours * 3600 + self.minutes * 60 + self.seconds
```

Now, all we need is a way to convert from an integer back to a `MyTime` object. Supposing we have `tsecs` seconds, some integer division and mod operators can do this for us:

```
hrs = tsecs // 3600
leftoversecs = tsecs % 3600
mins = leftoversecs // 60
secs = leftoversecs % 60
```

You might have to think a bit to convince yourself that this technique to convert from one base to another is correct.

In OOP we're really trying to wrap together the data and the operations that apply to it. So we'd like to have this logic inside the `MyTime` class. A good solution is to rewrite the class initializer so that it can cope with initial values of seconds or minutes that are outside the normalized values. (A normalized time would be something like 3 hours 12 minutes and 20 seconds. The same time, but unnormalized could be 2 hours 70 minutes and 140 seconds.)

Let's rewrite a more powerful initializer for `MyTime`:

```
    def __init__(self, hrs=0, mins=0, secs=0):
        """ Create a new MyTime object initialized to hrs, mins, secs.
            The values of mins and secs may be outside the range 0-59,
            but the resulting MyTime object will be normalized.
        """
        # Calculate total seconds to represent
        totalsecs = hrs*3600 + mins*60 + secs
        self.hours = totalsecs // 3600        # Split in h, m, s
        leftoversecs = totalsecs % 3600
        self.minutes = leftoversecs // 60
        self.seconds = leftoversecs % 60
```

Now we can rewrite `add_time` like this:

```
def add_time(t1, t2):
    secs = t1.to_seconds() + t2.to_seconds()
    return MyTime(0, 0, secs)
```

This version is much shorter than the original, and it is much easier to demonstrate or reason that it is correct.

In some ways, converting from base 60 to base 10 and back is harder than just dealing with times. Base conversion is more abstract; our intuition for dealing with times is better. But if we have the insight to treat times as base 60 numbers and make the investment of writing the conversions, we get a program that is shorter, easier to read and debug, and more reliable.

It is also easier to add features later. For example, imagine subtracting two `MyTime` objects to find the duration between them. The naive approach would be to implement subtraction with borrowing. Using the conversion functions would be easier and more likely to be correct.

Ironically, sometimes making a problem harder (or more general) makes the programming easier, because there are fewer special cases and fewer opportunities for error.

Specialization versus Generalization

Computer Scientists are generally fond of specializing their types, while mathematicians often take the opposite approach, and generalize everything.

What do we mean by this?

If we ask a mathematician to solve a problem involving weekdays, days of the century, playing cards, time, or dominoes, their most likely response is to observe that all these objects can be represented by integers. Playing cards, for example, can be numbered from 0 to 51. Days within the century can be numbered.
Mathematicians will say “These things are enumerable — the elements can be uniquely numbered (and we can reverse this numbering to get back to the original concept). So let’s number them, and confine our thinking to integers. Luckily, we have powerful techniques and a good understanding of integers, and so our abstractions — the way we tackle and simplify these problems — is to try to reduce them to problems about integers.”

Computer Scientists tend to do the opposite. We will argue that there are many integer operations that are simply not meaningful for dominoes, or for days of the century. So we’ll often define new specialized types, like `MyTime`, because we can restrict, control, and specialize the operations that are possible. Object-oriented programming is particularly popular because it gives us a good way to bundle methods and specialized data into a new type.

Both approaches are powerful problem-solving techniques. Often it may help to try to think about the problem from both points of view — “What would happen if I tried to reduce everything to very few primitive types?”, versus “What would happen if this thing had its own specialized type?”

The `after` function should compare two times, and tell us whether the first time is strictly after the second, e.g.

```
>>> t1 = MyTime(10, 55, 12)
>>> t2 = MyTime(10, 48, 22)
>>> after(t1, t2)             # Is t1 after t2?
True
```

This is slightly more complicated because it operates on two `MyTime` objects, not just one. But we'd prefer to write it as a method anyway — in this case, a method on the first argument:

```
    def after(self, time2):
        """ Return True if I am strictly greater than time2 """
        if self.hours > time2.hours:
            return True
        if self.hours < time2.hours:
            return False

        if self.minutes > time2.minutes:
            return True
        if self.minutes < time2.minutes:
            return False

        if self.seconds > time2.seconds:
            return True
        return False
```

We invoke this method on one object and pass the other as an argument:

```
if current_time.after(done_time):
    print("The bread will be done before it starts!")
```

We can almost read the invocation like English: If the current time is after the done time, then…

The logic of the `if` statements deserves special attention here. The tests on the minutes will only be reached if the two hour fields are the same. Similarly, the final test on the seconds is only executed if both times have the same hours and the same minutes.

Could we make this easier by using our “Aha!” insight and extra work from earlier, and reducing both times to integers? Yes, with spectacular results!

```
    def after(self, time2):
        """ Return True if I am strictly greater than time2 """
        return self.to_seconds() > time2.to_seconds()
```

This is a great way to code this: if we want to tell if the first time is after the second time, turn them both into integers and compare the integers.

Some languages, including Python, make it possible to have different meanings for the same operator when applied to different types. For example, `+` in Python means quite different things for integers and for strings. This feature is called operator overloading.

It is especially useful when programmers can also overload the operators for their own user-defined types. For example, to override the addition operator `+`, we can provide a method named `__add__`:

```
class MyTime:
    # Previously defined methods here...

    def __add__(self, other):
        return MyTime(0, 0, self.to_seconds() + other.to_seconds())
```

As usual, the first parameter is the object on which the method is invoked.
The second parameter is conveniently named `other` to distinguish it from `self` . To add two `MyTime` objects, we create and return a new `MyTime` object that contains their sum. Now, when we apply the `+` operator to `MyTime` objects, Python invokes the `__add__` method that we have written: ``` >>> t1 = MyTime(1, 15, 42) >>> t2 = MyTime(3, 50, 30) >>> t3 = t1 + t2 >>> print(t3) 05:06:12 ``` The expression `t1 + t2` is equivalent to `t1.__add__(t2)` , but obviously more elegant. As an exercise, add a method `__sub__(self, other)` that overloads the subtraction operator, and try it out. For the next couple of exercises we’ll go back to the `Point` class defined in our first chapter about objects, and overload some of its operators. Firstly, adding two points adds their respective (x, y) coordinates: ``` class Point: # Previously defined methods here... def __add__(self, other): return Point(self.x + other.x, self.y + other.y) ``` There are several ways to override the behavior of the multiplication operator: by defining a method named `__mul__` , or `__rmul__` , or both. If the left operand of `*` is a `Point` , Python invokes `__mul__` , which assumes that the other operand is also a `Point` . It computes the dot product of the two Points, defined according to the rules of linear algebra: ``` def __mul__(self, other): return self.x * other.x + self.y * other.y ``` If the left operand of `*` is a primitive type and the right operand is a `Point` , Python invokes `__rmul__` , which performs scalar multiplication: ``` def __rmul__(self, other): return Point(other * self.x, other * self.y) ``` The result is a new `Point` whose coordinates are a multiple of the original coordinates. If `other` is a type that cannot be multiplied by a floating-point number, then `__rmul__` will yield an error. This example demonstrates both kinds of multiplication: ``` >>> p1 = Point(3, 4) >>> p2 = Point(5, 7) >>> print(p1 * p2) 43 >>> print(2 * p2) (10, 14) ``` What happens if we try to evaluate `p2 * 2` ? Since the first parameter is a `Point` , Python invokes `__mul__` with `2` as the second argument. Inside `__mul__` , the program tries to access the `x` coordinate of `other` , which fails because an integer has no attributes: ``` >>> print(p2 * 2) AttributeError: 'int' object has no attribute 'x' ``` Unfortunately, the error message is a bit opaque. This example demonstrates some of the difficulties of object-oriented programming. Sometimes it is hard enough just to figure out what code is running. Most of the methods we have written only work for a specific type. When we create a new object, we write methods that operate on that type. But there are certain operations that we will want to apply to many types, such as the arithmetic operations in the previous sections. If many types support the same set of operations, we can write functions that work on any of those types. For example, the `multadd` operation (which is common in linear algebra) takes three parameters; it multiplies the first two and then adds the third. We can write it in Python like this: ``` def multadd (x, y, z): return x * y + z ``` This function will work for any values of `x` and `y` that can be multiplied and for any value of `z` that can be added to the product. 
We can invoke it with numeric values:

```
>>> multadd (3, 2, 1)
7
```

Or with `Point`s:

```
>>> p1 = Point(3, 4)
>>> p2 = Point(5, 7)
>>> print(multadd (2, p1, p2))
(11, 15)
>>> print(multadd (p1, p2, 1))
44
```

In the first case, the `Point` is multiplied by a scalar and then added to another `Point`. In the second case, the dot product yields a numeric value, so the third parameter also has to be a numeric value.

A function like this that can take arguments with different types is called polymorphic.

As another example, consider the function `front_and_back`, which prints a list twice, forward and backward:

```
def front_and_back(front):
    import copy
    back = copy.copy(front)
    back.reverse()
    print(str(front) + str(back))
```

Because the `reverse` method is a modifier, we make a copy of the list before reversing it. That way, this function doesn't modify the list it gets as a parameter.

Here's an example that applies `front_and_back` to a list:

```
>>> my_list = [1, 2, 3, 4]
>>> front_and_back(my_list)
[1, 2, 3, 4][4, 3, 2, 1]
```

Of course, we intended to apply this function to lists, so it is not surprising that it works. What would be surprising is if we could apply it to a `Point`.

To determine whether a function can be applied to a new type, we apply Python's fundamental rule of polymorphism, called the duck typing rule: If all of the operations inside the function can be applied to the type, the function can be applied to the type. The operations in the `front_and_back` function include `copy`, `reverse`, and `print` (via `str`).

Not all programming languages define polymorphism in this way. Look up duck typing, and see if you can figure out why it has this name.

`copy` works on any object, and we have already written a `__str__` method for `Point` objects, so all we need is a `reverse` method in the `Point` class:

```
def reverse(self):
    (self.x , self.y) = (self.y, self.x)
```

Then we can pass `Point`s to `front_and_back`:

```
>>> p = Point(3, 4)
>>> front_and_back(p)
(3, 4)(4, 3)
```

The most interesting polymorphism is the unintentional kind, where we discover that a function we have already written can be applied to a type for which we never planned.

```
dot product An operation defined in linear algebra that multiplies two `Point`s and yields a numeric value.
functional programming style A style of program design in which the majority of functions are pure.
modifier A function or method that changes one or more of the objects it receives as parameters. Most modifier functions are void (do not return a value).
normalized Data is said to be normalized if it fits into some reduced range or set of rules. We usually normalize our angles to values in the range [0..360). We normalize minutes and seconds to be values in the range [0..60). And we'd be surprised if the local store advertised its cold drinks at "One dollar, two hundred and fifty cents".
operator overloading Extending built-in operators ( `+`, `-`, `*`, `>`, `<`, etc.) so that they do different things for different types of arguments. We've seen early in the book how `+` is overloaded for numbers and strings, and here we've shown how to further overload it for user-defined types.
polymorphic A function that can operate on more than one type. Notice the subtle distinction: overloading has different functions (all with the same name) for different types, whereas a polymorphic function is a single function that can work for a range of types.
pure function A function that does not modify any of the objects it receives as parameters.
Most pure functions are fruitful rather than void. scalar multiplication An operation defined in linear algebra that multiplies each of the coordinates of a `Point` by a numeric value. ``` Write a Boolean function `between` that takes two `MyTime` objects, `t1` and `t2` , as arguments, and returns `True` if the invoking object falls between the two times. Assume `t1 <= t2` , and make the test closed at the lower bound and open at the upper bound, i.e. return True if `t1 <= obj < t2` . Turn the above function into a method in the `MyTime` class. Overload the necessary operator(s) so that instead of having to write : `if t1.after(t2): ...` we can use the more convenient : `if t1 > t2: ...` Rewrite `increment` as a method that uses our “Aha” insight. Create some test cases for the `increment` method. Consider specifically the case where the number of seconds to add to the time is negative. Fix up `increment` so that it handles this case if it does not do so already. (You may assume that you will never subtract more seconds than are in the time object.) Can physical time be negative, or must time always move in the forward direction? Some serious physicists think this is not such a dumb question. See what you can find on the Internet about this. # Chapter 23: Collections of objects By now, we have seen several examples of composition. One of the first examples was using a method invocation as part of an expression. Another example is the nested structure of statements; we can put an `if` statement within a `while` loop, within another `if` statement, and so on. Having seen this pattern, and having learned about lists and objects, we should not be surprised to learn that we can create lists of objects. We can also create objects that contain lists (as attributes); we can create lists that contain lists; we can create objects that contain objects; and so on. In this chapter and the next, we will look at some examples of these combinations, using `Card` objects as an example. `Card` objects If you are not familiar with common playing cards, now would be a good time to get a deck, or else this chapter might not make much sense. There are fifty-two cards in a deck, each of which belongs to one of four suits and one of thirteen ranks. The suits are Spades, Hearts, Diamonds, and Clubs (in descending order in bridge). The ranks are Ace, 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, and King. Depending on the game that we are playing, the rank of Ace may be higher than King or lower than 2. The rank is sometimes called the face-value of the card. If we want to define a new object to represent a playing card, it is obvious what the attributes should be: `rank` and `suit` . It is not as obvious what type the attributes should be. One possibility is to use strings containing words like `"Spade"` for suits and `"Queen"` for ranks. One problem with this implementation is that it would not be easy to compare cards to see which had a higher rank or suit. An alternative is to use integers to encode the ranks and suits. By encode, we do not mean what some people think, which is to encrypt or translate into a secret code. What a computer scientist means by encode is to define a mapping between a sequence of numbers and the items I want to represent. For example: ``` Spades --> 3 Hearts --> 2 Diamonds --> 1 Clubs --> 0 ``` An obvious feature of this mapping is that the suits map to integers in order, so we can compare suits by comparing integers. 
The mapping for ranks is fairly obvious; each of the numerical ranks maps to the corresponding integer, and for face cards:

```
Jack   --> 11
Queen  --> 12
King   --> 13
```

The reason we are using mathematical notation for these mappings is that they are not part of the Python program. They are part of the program design, but they never appear explicitly in the code.

The class definition for the `Card` type provides, as usual, an initialization method that takes an optional parameter for each attribute. (The complete class definition, including this initializer, is shown a little further down together with the `__str__` method.) To create some objects, representing say the 3 of Clubs and the Jack of Diamonds, use these commands:

```
three_of_clubs = Card(0, 3)
card1 = Card(1, 11)
```

In the first case above, for example, the first argument, `0`, represents the suit Clubs.

Save this code for later use … In the next chapter we assume that we have saved the `Card` class, and the upcoming `Deck` class, in a file called `Cards.py`.

The `__str__` method

In order to print `Card` objects in a way that people can easily read, we want to map the integer codes onto words. A natural way to do that is with lists of strings. We assign these lists to class attributes at the top of the class definition:

```
class Card:
    suits = ["Clubs", "Diamonds", "Hearts", "Spades"]
    ranks = ["narf", "Ace", "2", "3", "4", "5", "6", "7",
             "8", "9", "10", "Jack", "Queen", "King"]

    def __init__(self, suit=0, rank=0):
        # An optional parameter for each attribute, as described above.
        self.suit = suit
        self.rank = rank

    def __str__(self):
        return (self.ranks[self.rank] + " of " + self.suits[self.suit])
```

A class attribute is defined outside of any method, and it can be accessed from any of the methods in the class.

Inside `__str__`, we can use `suits` and `ranks` to map the numerical values of `suit` and `rank` to strings. For example, the expression

```
self.suits[self.suit]
```

means use the attribute `suit` from the object `self` as an index into the class attribute named `suits`, and select the appropriate string.

The reason for the `"narf"` in the first element in `ranks` is to act as a place keeper for the zero-eth element of the list, which will never be used. The only valid ranks are 1 to 13. This wasted item is not entirely necessary. We could have started at 0, as usual, but it is less confusing to encode the rank 2 as integer 2, 3 as 3, and so on.

With the methods we have so far, we can create and print cards:

```
>>> card1 = Card(1, 11)
>>> print(card1)
Jack of Diamonds
```

Class attributes like `suits` are shared by all `Card` objects. The advantage of this is that we can use any `Card` object to access the class attributes:

```
>>> card2 = Card(1, 3)
>>> print(card2)
3 of Diamonds
>>> print(card2.suits[1])
Diamonds
```

Because every `Card` instance references the same class attribute, we have an aliasing situation. The disadvantage is that if we modify a class attribute, it affects every instance of the class. For example, if we decide that Jack of Diamonds should really be called Jack of Swirly Whales, we could do this:

```
>>> card1.suits[1] = "Swirly Whales"
>>> print(card1)
Jack of Swirly Whales
```

The problem is that all of the Diamonds just became Swirly Whales:

```
>>> print(card2)
3 of Swirly Whales
```

It is usually not a good idea to modify class attributes.

For primitive types, there are six relational operators ( `<`, `>`, `==`, etc.) that compare values and determine when one is greater than, less than, or equal to another. If we want our own types to be comparable using the syntax of these relational operators, we need to define six corresponding special methods in our class.
We’d like to start with a single method named `cmp` that houses the logic of ordering. By convention, a comparison method takes two parameters, `self` and `other` , and returns 1 if the first object is greater, -1 if the second object is greater, and 0 if they are equal to each other. Some types are completely ordered, which means that we can compare any two elements and tell which is bigger. For example, the integers and the floating-point numbers are completely ordered. Some types are unordered, which means that there is no meaningful way to say that one element is bigger than another. For example, the fruits are unordered, which is why we cannot compare apples and oranges, and we cannot meaningfully order a collection of images, or a collection of cellphones. Playing cards are partially ordered, which means that sometimes we can compare cards and sometimes not. For example, we know that the 3 of Clubs is higher than the 2 of Clubs, and the 3 of Diamonds is higher than the 3 of Clubs. But which is better, the 3 of Clubs or the 2 of Diamonds? One has a higher rank, but the other has a higher suit. In order to make cards comparable, we have to decide which is more important, rank or suit. To be honest, the choice is arbitrary. For the sake of choosing, we will say that suit is more important, because a new deck of cards comes sorted with all the Clubs together, followed by all the Diamonds, and so on. With that decided, we can write `cmp` : ``` def cmp(self, other): # Check the suits if self.suit > other.suit: return 1 if self.suit < other.suit: return -1 # Suits are the same... check ranks if self.rank > other.rank: return 1 if self.rank < other.rank: return -1 # Ranks are the same... it's a tie return 0 ``` In this ordering, Aces appear lower than Deuces (2s). Now, we can define the six special methods that do the overloading of each of the relational operators for us: ``` def __eq__(self, other): return self.cmp(other) == 0 def __le__(self, other): return self.cmp(other) <= 0 def __ge__(self, other): return self.cmp(other) >= 0 def __gt__(self, other): return self.cmp(other) > 0 def __lt__(self, other): return self.cmp(other) < 0 def __ne__(self, other): return self.cmp(other) != 0 ``` With this machinery in place, the relational operators now work as we’d like them to: ``` >>> card1 = Card(1, 11) >>> card2 = Card(1, 3) >>> card3 = Card(1, 11) >>> card1 < card2 False >>> card1 == card3 True ``` Now that we have objects to represent `Card` s, the next logical step is to define a class to represent a `Deck` . Of course, a deck is made up of cards, so each `Deck` object will contain a list of cards as an attribute. Many card games will need at least two different decks — a red deck and a blue deck. The following is a class definition for the `Deck` class. The initialization method creates the attribute `cards` and generates the standard pack of fifty-two cards: ``` class Deck: def __init__(self): self.cards = [] for suit in range(4): for rank in range(1, 14): self.cards.append(Card(suit, rank)) ``` The easiest way to populate the deck is with a nested loop. The outer loop enumerates the suits from 0 to 3. The inner loop enumerates the ranks from 1 to 13. Since the outer loop iterates four times, and the inner loop iterates thirteen times, the total number of times the body is executed is fifty-two (thirteen times four). Each iteration creates a new instance of `Card` with the current suit and rank, and appends that card to the `cards` list. 
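Before moving on, a quick interactive check (a sketch, assuming the `Card` and `Deck` classes defined above have been loaded) confirms that the nested loop really did build all fifty-two cards, in the order we expect:

```
>>> deck = Deck()
>>> len(deck.cards)
52
>>> print(deck.cards[0])
Ace of Clubs
>>> print(deck.cards[51])
King of Spades
```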
With this in place, we can instantiate some decks:

```
red_deck = Deck()
blue_deck = Deck()
```

As usual, when we define a new type we want a method that prints the contents of an instance. To print a `Deck`, we traverse the list and print each `Card`:

```
class Deck:
    ...
    def print_deck(self):
        for card in self.cards:
            print(card)
```

Here, and from now on, the ellipsis ( `...` ) indicates that we have omitted the other methods in the class.

As an alternative to `print_deck`, we could write a `__str__` method for the `Deck` class. The advantage of `__str__` is that it is more flexible. Rather than just printing the contents of the object, it generates a string representation that other parts of the program can manipulate before printing, or store for later use.

Here is a version of `__str__` that returns a string representation of a `Deck`. To add a bit of pizzazz, it arranges the cards in a cascade where each card is indented one space more than the previous card:

```
class Deck:
    ...
    def __str__(self):
        s = ""
        for i in range(len(self.cards)):
            s = s + " " * i + str(self.cards[i]) + "\n"
        return s
```

This example demonstrates several features. First, instead of traversing `self.cards` and assigning each card to a variable, we are using `i` as a loop variable and an index into the list of cards.

Second, we are using the string multiplication operator to indent each card by one more space than the last. The expression `" " * i` yields a number of spaces equal to the current value of `i`.

Third, instead of printing each card directly, we use the `str` function to build up one long string. Passing an object as an argument to `str` is equivalent to invoking the `__str__` method on the object.

Finally, we are using the variable `s` as an accumulator. Initially, `s` is the empty string. Each time through the loop, a new string is generated and concatenated with the old value of `s` to get the new value. When the loop ends, `s` contains the complete string representation of the `Deck`, which looks like this:

```
>>> red_deck = Deck()
>>> print(red_deck)
Ace of Clubs
 2 of Clubs
  3 of Clubs
   4 of Clubs
    5 of Clubs
     6 of Clubs
      7 of Clubs
       8 of Clubs
        9 of Clubs
         10 of Clubs
          Jack of Clubs
           Queen of Clubs
            King of Clubs
             Ace of Diamonds
              2 of Diamonds
...
```

And so on. Even though the result appears on 52 lines, it is one long string that contains newlines.

If a deck is perfectly shuffled, then any card is equally likely to appear anywhere in the deck, and any location in the deck is equally likely to contain any card.

To shuffle the deck, we will use the `randrange` function from the `random` module. With two integer arguments, `a` and `b`, `randrange` chooses a random integer in the range `a <= x < b`. Since the upper bound is strictly less than `b`, we can use the length of a list as the second parameter, and we are guaranteed to get a legal index. For example, if `rng` has already been instantiated as a random number source, this expression chooses the index of a random card in a deck:

```
rng.randrange(0, len(self.cards))
```

An easy way to shuffle the deck is by traversing the cards and swapping each card with a randomly chosen one. It is possible that the card will be swapped with itself, but that is fine. In fact, if we precluded that possibility, the order of the cards would be less than entirely random:

```
class Deck:
    ...
    def shuffle(self):
        import random
        rng = random.Random()            # Create a random generator
        num_cards = len(self.cards)
        for i in range(num_cards):
            j = rng.randrange(i, num_cards)
            (self.cards[i], self.cards[j]) = (self.cards[j], self.cards[i])
```

Rather than assume that there are fifty-two cards in the deck, we get the actual length of the list and store it in `num_cards`.

For each card in the deck, we choose a random card from among the cards that haven't been shuffled yet. Then we swap the current card ( `i` ) with the selected card ( `j` ). To swap the cards we use a tuple assignment:

```
(self.cards[i], self.cards[j]) = (self.cards[j], self.cards[i])
```

While this is a good shuffling method, a random number generator object also has a `shuffle` method that can shuffle elements in a list, in place. So we could rewrite this function to use the one provided for us:

```
class Deck:
    ...
    def shuffle(self):
        import random
        rng = random.Random()        # Create a random generator
        rng.shuffle(self.cards)      # Use its shuffle method
```

Another method that would be useful for the `Deck` class is `remove`, which takes a card as a parameter, removes it, and returns `True` if the card was in the deck and `False` otherwise:

```
class Deck:
    ...
    def remove(self, card):
        if card in self.cards:
            self.cards.remove(card)
            return True
        else:
            return False
```

The `in` operator returns `True` if the first operand is in the second. If the first operand is an object, Python uses the object's `__eq__` method to determine equality with items in the list. Since the `__eq__` we provided in the `Card` class checks for deep equality, the `remove` method checks for deep equality.

To deal cards, we want to remove and return the top card. The list method `pop` provides a convenient way to do that:

```
class Deck:
    ...
    def pop(self):
        return self.cards.pop()
```

Actually, `pop` removes the last card in the list, so we are in effect dealing from the bottom of the deck.

One more operation that we are likely to want is the Boolean function `is_empty`, which returns `True` if the deck contains no cards:

```
class Deck:
    ...
    def is_empty(self):
        return self.cards == []
```

```
encode
    To represent one type of value using another type of value by constructing
    a mapping between them.
class attribute
    A variable that is defined inside a class definition but outside any method.
    Class attributes are accessible from any method in the class and are shared
    by all instances of the class.
accumulator
    A variable used in a loop to accumulate a series of values, such as by
    concatenating them onto a string or adding them to a running sum.
```

Rewrite `cmp` so that Aces are ranked higher than Kings.

# Chapter 24: Inheritance

The language feature most often associated with object-oriented programming is inheritance. Inheritance is the ability to define a new class that is a modified version of an existing class.

The primary advantage of this feature is that you can add new methods to a class without modifying the existing class. It is called inheritance because the new class inherits all of the methods of the existing class. Extending this metaphor, the existing class is sometimes called the parent class. The new class may be called the child class or sometimes subclass.

Inheritance is a powerful feature. Some programs that would be complicated without inheritance can be written concisely and simply with it. Also, inheritance can facilitate code reuse, since you can customize the behavior of parent classes without having to modify them.
In some cases, the inheritance structure reflects the natural structure of the problem, which makes the program easier to understand. On the other hand, inheritance can make programs difficult to read. When a method is invoked, it is sometimes not clear where to find its definition. The relevant code may be scattered among several modules. Also, many of the things that can be done using inheritance can be done as elegantly (or more so) without it. If the natural structure of the problem does not lend itself to inheritance, this style of programming can do more harm than good. In this chapter we will demonstrate the use of inheritance as part of a program that plays the card game Old Maid. One of our goals is to write code that could be reused to implement other card games. For almost any card game, we need to represent a hand of cards. A hand is similar to a deck, of course. Both are made up of a set of cards, and both require operations like adding and removing cards. Also, we might like the ability to shuffle both decks and hands. A hand is also different from a deck. Depending on the game being played, we might want to perform some operations on hands that don’t make sense for a deck. For example, in poker we might classify a hand(straight, flush, etc.) or compare it with another hand. In bridge, we might want to compute a score for a hand in order to make a bid. This situation suggests the use of inheritance. If `Hand` is a subclass of `Deck` , it will have all the methods of `Deck` , and new methods can be added. We add the code in this chapter to our `Cards.py` file from the previous chapter. In the class definition, the name of the parent class appears in parentheses: ``` class Hand(Deck): pass ``` This statement indicates that the new `Hand` class inherits from the existing `Deck` class. The `Hand` constructor initializes the attributes for the hand, which are `name` and `cards` . The string `name` identifies this hand, probably by the name of the player that holds it. The name is an optional parameter with the empty string as a default value. `cards` is the list of cards in the hand, initialized to the empty list: ``` class Hand(Deck): def __init__(self, name=""): self.cards = [] self.name = name ``` For just about any card game, it is necessary to add and remove cards from the deck. Removing cards is already taken care of, since `Hand` inherits `remove` from `Deck` . But we have to write `add` : ``` class Hand(Deck): ... def add(self, card): self.cards.append(card) ``` Again, the ellipsis indicates that we have omitted other methods. The list `append` method adds the new card to the end of the list of cards. Now that we have a `Hand` class, we want to deal cards from the `Deck` into hands. It is not immediately obvious whether this method should go in the `Hand` class or in the `Deck` class, but since it operates on a single deck and (possibly) several hands, it is more natural to put it in `Deck` . `deal` should be fairly general, since different games will have different requirements. We may want to deal out the entire deck at once or add one card to each hand. `deal` takes two parameters, a list (or tuple) of hands and the total number of cards to deal. If there are not enough cards in the deck, the method deals out all of the cards and stops: ``` class Deck: ... 
    def deal(self, hands, num_cards=999):
        num_hands = len(hands)
        for i in range(num_cards):
            if self.is_empty():
                break                    # Break if out of cards
            card = self.pop()            # Take the top card
            hand = hands[i % num_hands]  # Whose turn is next?
            hand.add(card)               # Add the card to the hand
```

The second parameter, `num_cards`, is optional; the default is a large number, which effectively means that all of the cards in the deck will get dealt.

The loop variable `i` goes from 0 to `num_cards-1`. Each time through the loop, a card is removed from the deck using the list method `pop`, which removes and returns the last item in the list.

The modulus operator ( `%` ) allows us to deal cards in a round robin (one card at a time to each hand). When `i` is equal to the number of hands in the list, the expression `i % num_hands` wraps around to the beginning of the list (index 0).

To print the contents of a hand, we can take advantage of the `__str__` method inherited from `Deck`. For example:

```
>>> deck = Deck()
>>> deck.shuffle()
>>> hand = Hand("frank")
>>> deck.deal([hand], 5)
>>> print(hand)
Hand frank contains
2 of Spades
 3 of Spades
  4 of Spades
   Ace of Hearts
    9 of Clubs
```

It's not a great hand, but it has the makings of a straight flush.

Although it is convenient to inherit the existing methods, there is additional information in a `Hand` object we might want to include when we print one. To do that, we can provide a `__str__` method in the `Hand` class that overrides the one in the `Deck` class:

```
class Hand(Deck):
    def __str__(self):
        s = "Hand " + self.name
        if self.is_empty():
            s += " is empty\n"
        else:
            s += " contains\n"
        return s + Deck.__str__(self)
```

Initially, `s` is a string that identifies the hand. If the hand is empty, the program appends the words `is empty` and returns `s`.

Otherwise, the program appends the word `contains` and the string representation of the `Deck`, computed by invoking the `__str__` method in the `Deck` class on `self`.

It may seem odd to send `self`, which refers to the current `Hand`, to a `Deck` method, until you remember that a `Hand` is a kind of `Deck`. `Hand` objects can do everything `Deck` objects can, so it is legal to send a `Hand` to a `Deck` method. In general, it is always legal to use an instance of a subclass in place of an instance of a parent class.

The `CardGame` class

The `CardGame` class takes care of some basic chores common to all games, such as creating the deck and shuffling it:

```
class CardGame:
    def __init__(self):
        self.deck = Deck()
        self.deck.shuffle()
```

This is the first case we have seen where the initialization method performs a significant computation, beyond initializing attributes.

To implement specific games, we can inherit from `CardGame` and add features for the new game. As an example, we'll write a simulation of Old Maid.

The object of Old Maid is to get rid of cards in your hand. You do this by matching cards by rank and color. For example, the 4 of Clubs matches the 4 of Spades since both suits are black. The Jack of Hearts matches the Jack of Diamonds since both are red.

To begin the game, the Queen of Clubs is removed from the deck so that the Queen of Spades has no match. The fifty-one remaining cards are dealt to the players in a round robin. After the deal, all players match and discard as many cards as possible.

When no more matches can be made, play begins. In turn, each player picks a card (without looking) from the closest neighbor to the left who still has cards.
If the chosen card matches a card in the player’s hand, the pair is removed. Otherwise, the card is added to the player’s hand. Eventually all possible matches are made, leaving only the Queen of Spades in the loser’s hand. In our computer simulation of the game, the computer plays all hands. Unfortunately, some nuances of the real game are lost. In a real game, the player with the Old Maid goes to some effort to get their neighbor to pick that card, by displaying it a little more prominently, or perhaps failing to display it more prominently, or even failing to fail to display that card more prominently. The computer simply picks a neighbor’s card at random. `OldMaidHand` class A hand for playing Old Maid requires some abilities beyond the general abilities of a `Hand` . We will define a new class, `OldMaidHand` , that inherits from `Hand` and provides an additional method called `remove_matches` : ``` class OldMaidHand(Hand): def remove_matches(self): count = 0 original_cards = self.cards[:] for card in original_cards: match = Card(3 - card.suit, card.rank) if match in self.cards: self.cards.remove(card) self.cards.remove(match) print("Hand {0}: {1} matches {2}" .format(self.name, card, match)) count += 1 return count ``` We start by making a copy of the list of cards, so that we can traverse the copy while removing cards from the original. Since `self.cards` is modified in the loop, we don’t want to use it to control the traversal. Python can get quite confused if it is traversing a list that is changing! For each card in the hand, we figure out what the matching card is and go looking for it. The match card has the same rank and the other suit of the same color. The expression `3 - card.suit` turns a Club (suit 0) into a Spade (suit 3) and a Diamond (suit 1) into a Heart (suit 2). You should satisfy yourself that the opposite operations also work. If the match card is also in the hand, both cards are removed. The following example demonstrates how to use `remove_matches` : ``` >>> game = CardGame() >>> hand = OldMaidHand("frank") >>> game.deck.deal([hand], 13) >>> print(hand) Hand frank contains Ace of Spades 2 of Diamonds 7 of Spades 8 of Clubs 6 of Hearts 8 of Spades 7 of Clubs Queen of Clubs 7 of Diamonds 5 of Clubs Jack of Diamonds 10 of Diamonds 10 of Hearts >>> hand.remove_matches() Hand frank: 7 of Spades matches 7 of Clubs Hand frank: 8 of Spades matches 8 of Clubs Hand frank: 10 of Diamonds matches 10 of Hearts >>> print(hand) Hand frank contains Ace of Spades 2 of Diamonds 6 of Hearts Queen of Clubs 7 of Diamonds 5 of Clubs Jack of Diamonds ``` Notice that there is no `__init__` method for the `OldMaidHand` class. We inherit it from `Hand` . `OldMaidGame` class Now we can turn our attention to the game itself. `OldMaidGame` is a subclass of `CardGame` with a new method called `play` that takes a list of players as a parameter. 
Since `__init__` is inherited from `CardGame` , a new `OldMaidGame` object contains a new shuffled deck: ``` class OldMaidGame(CardGame): def play(self, names): # Remove Queen of Clubs self.deck.remove(Card(0,12)) # Make a hand for each player self.hands = [] for name in names: self.hands.append(OldMaidHand(name)) # Deal the cards self.deck.deal(self.hands) print("---------- Cards have been dealt") self.print_hands() # Remove initial matches matches = self.remove_all_matches() print("---------- Matches discarded, play begins") self.print_hands() # Play until all 50 cards are matched turn = 0 num_hands = len(self.hands) while matches < 25: matches += self.play_one_turn(turn) turn = (turn + 1) % num_hands print("---------- Game is Over") self.print_hands() ``` The writing of `print_hands` has been left as an exercise. Some of the steps of the game have been separated into methods. `remove_all_matches` traverses the list of hands and invokes `remove_matches` on each: ``` class OldMaidGame(CardGame): ... def remove_all_matches(self): count = 0 for hand in self.hands: count += hand.remove_matches() return count ``` `count` is an accumulator that adds up the number of matches in each hand. When we’ve gone through every hand, the total is returned ( `count` ). When the total number of matches reaches twenty-five, fifty cards have been removed from the hands, which means that only one card is left and the game is over. The variable `turn` keeps track of which player’s turn it is. It starts at 0 and increases by one each time; when it reaches `num_hands` , the modulus operator wraps it back around to 0. The method `play_one_turn` takes a parameter that indicates whose turn it is. The return value is the number of matches made during this turn: ``` class OldMaidGame(CardGame): ... def play_one_turn(self, i): if self.hands[i].is_empty(): return 0 neighbor = self.find_neighbor(i) picked_card = self.hands[neighbor].pop() self.hands[i].add(picked_card) print("Hand", self.hands[i].name, "picked", picked_card) count = self.hands[i].remove_matches() self.hands[i].shuffle() return count ``` If a player’s hand is empty, that player is out of the game, so he or she does nothing and returns 0. Otherwise, a turn consists of finding the first player on the left that has cards, taking one card from the neighbor, and checking for matches. Before returning, the cards in the hand are shuffled so that the next player’s choice is random. The method `find_neighbor` starts with the player to the immediate left and continues around the circle until it finds a player that still has cards: ``` class OldMaidGame(CardGame): ... def find_neighbor(self, i): num_hands = len(self.hands) for next in range(1,num_hands): neighbor = (i + next) % num_hands if not self.hands[neighbor].is_empty(): return neighbor ``` If `find_neighbor` ever went all the way around the circle without finding cards, it would return `None` and cause an error elsewhere in the program. Fortunately, we can prove that that will never happen (as long as the end of the game is detected correctly). We have omitted the `print_hands` method. You can write that one yourself. The following output is from a truncated form of the game where only the top fifteen cards (tens and higher) were dealt to three players. With this small deck, play stops after seven matches instead of twenty-five. 
```
>>> import cards
>>> game = cards.OldMaidGame()
>>> game.play(["Allen","Jeff","Chris"])
---------- Cards have been dealt
Hand Allen contains
King of Hearts
Jack of Clubs
Queen of Spades
King of Spades
10 of Diamonds
Hand Jeff: Queen of Hearts matches Queen of Diamonds
Hand Chris: 10 of Spades matches 10 of Clubs
---------- Matches discarded, play begins
Hand Allen contains
King of Hearts
Jack of Clubs
Queen of Spades
King of Spades
10 of Diamonds
Hand Allen picked King of Diamonds
Hand Allen: King of Hearts matches King of Diamonds
Hand Jeff picked 10 of Hearts
Hand Chris picked Jack of Clubs
Hand Allen picked Jack of Hearts
Hand Jeff picked Jack of Diamonds
Hand Chris picked Queen of Spades
Hand Allen picked Jack of Diamonds
Hand Allen: Jack of Hearts matches Jack of Diamonds
Hand Jeff picked King of Clubs
Hand Chris picked King of Spades
Hand Allen picked 10 of Hearts
Hand Allen: 10 of Diamonds matches 10 of Hearts
Hand Jeff picked Queen of Spades
Hand Chris picked Jack of Spades
Hand Chris: Jack of Clubs matches Jack of Spades
Hand Jeff picked King of Spades
Hand Jeff: King of Clubs matches King of Spades
---------- Game is Over
Hand Allen is empty
Hand Jeff contains
Queen of Spades
Hand Chris is empty
```

So Jeff loses.

```
inheritance
    The ability to define a new class that is a modified version of a previously defined class.

parent class
    The class from which a child class inherits.

child class
    A new class created by inheriting from an existing class; also called a subclass.
```

Exercises:

- Add a method, `print_hands`, to the `OldMaidGame` class which traverses `self.hands` and prints each hand.
- Define a new kind of turtle, `TurtleGTX`, that comes with some extra features: it can jump forward a given distance, and it has an odometer that keeps track of how far the turtle has travelled since it came off the production line. (The parent class has a number of synonyms like `fd`, `forward`, `back`, `backward`, and `bk`: for this exercise, just focus on putting this functionality into the `forward` method.) Think carefully about how to count the distance if the turtle is asked to move forward by a negative amount. (We would not want to buy a second-hand turtle whose odometer reading was faked because its previous owner drove it backwards around the block too often. Try this in a car near you, and see if the car's odometer counts up or down when you reverse.)
- Extend your `TurtleGTX` so that a tyre can go flat after some amount of travel, and an error is raised whenever `forward` is called. Also provide a `change_tyre` method that can fix the flat.

# Chapter 25: Linked lists

We have seen examples of attributes that refer to other objects, which we called embedded references. A common data structure, the linked list, takes advantage of this feature.

Linked lists are made up of nodes, where each node contains a reference to the next node in the list. In addition, each node contains a unit of data called the cargo.

A linked list is considered a recursive data structure because it has a recursive definition. A linked list is either:

- the empty list, represented by `None`, or
- a node that contains a cargo object and a reference to a linked list.

Recursive data structures lend themselves to recursive methods.

`Node` class

As usual when writing a new class, we'll start with the initialization and `__str__` methods so that we can test the basic mechanism of creating and displaying the new type:

```
class Node:
    def __init__(self, cargo=None, next=None):
        self.cargo = cargo
        self.next = next

    def __str__(self):
        return str(self.cargo)
```

As usual, the parameters for the initialization method are optional. By default, both the cargo and the link, `next`, are set to `None`. The string representation of a node is just the string representation of the cargo.
Since any value can be passed to the `str` function, we can store any value in a list. To test the implementation so far, we can create a `Node` and print it:

```
>>> node = Node("test")
>>> print(node)
test
```

To make it interesting, we need a list with more than one node:

```
>>> node1 = Node(1)
>>> node2 = Node(2)
>>> node3 = Node(3)
```

This code creates three nodes, but we don't have a list yet because the nodes are not linked. The state diagram looks like this:

To link the nodes, we have to make the first node refer to the second and the second node refer to the third:

```
>>> node1.next = node2
>>> node2.next = node3
```

The reference of the third node is `None`, which indicates that it is the end of the list. Now the state diagram looks like this:

Now you know how to create nodes and link them into lists. What might be less clear at this point is why.

Lists are useful because they provide a way to assemble multiple objects into a single entity, sometimes called a collection. In the example, the first node of the list serves as a reference to the entire list. To pass the list as a parameter, we only have to pass a reference to the first node. For example, the function `print_list` takes a single node as an argument. Starting with the head of the list, it prints each node until it gets to the end:

```
def print_list(node):
    while node is not None:
        print(node, end=" ")
        node = node.next
    print()
```

To invoke this method, we pass a reference to the first node:

```
>>> print_list(node1)
1 2 3
```

Inside `print_list` we have a reference to the first node of the list, but there is no variable that refers to the other nodes. We have to use the `next` value from each node to get to the next node.

To traverse a linked list, it is common to use a loop variable like `node` to refer to each of the nodes in succession. This diagram shows the value of `list` and the values that `node` takes on:

It is natural to express many list operations using recursive methods. For example, the following is a recursive algorithm for printing a list backwards:

1. Separate the list into two pieces: the first node (called the head) and the rest (called the tail).
2. Print the tail backward.
3. Print the head.

Of course, Step 2, the recursive call, assumes that we have a way of printing a list backward. But if we assume that the recursive call works — the leap of faith — then we can convince ourselves that this algorithm works. All we need are a base case and a way of proving that for any list, we will eventually get to the base case. Given the recursive definition of a list, a natural base case is the empty list, represented by `None`:

```
def print_backward(list):
    if list is None:
        return
    head = list
    tail = list.next
    print_backward(tail)
    print(head, end=" ")
```

The first line handles the base case by doing nothing. The next two lines split the list into `head` and `tail`. The last two lines print the list. The `end` argument of the print statement keeps Python from printing a newline after each node.

We invoke this method as we invoked `print_list`:

```
>>> print_backward(node1)
3 2 1
```

The result is a backward list.

You might wonder why `print_list` and `print_backward` are functions and not methods in the `Node` class. The reason is that we want to use `None` to represent the empty list and it is not legal to invoke a method on `None`. This limitation makes it awkward to write list-manipulating code in a clean object-oriented style.

Can we prove that `print_backward` will always terminate? In other words, will it always reach the base case? In fact, the answer is no. Some lists will make this method crash.
Revisit the Recursion chapter

In our earlier chapter on recursion we distinguished between the high-level view that requires a leap of faith, and the low-level operational view. In terms of mental chunking, we want to encourage the more abstract high-level view. But if you'd like to see the detail, you should use your single-stepping debugging tools to step into the recursive levels and to examine the execution stack frames at every call to `print_backward`.

There is nothing to prevent a node from referring back to an earlier node in the list, including itself. For example, this figure shows a list with two nodes, one of which refers to itself:

If we invoke `print_list` on this list, it will loop forever. If we invoke `print_backward`, it will recurse infinitely. This sort of behavior makes infinite lists difficult to work with.

Nevertheless, they are occasionally useful. For example, we might represent a number as a list of digits and use an infinite list to represent a repeating fraction.

Regardless, it is problematic that we cannot prove that `print_list` and `print_backward` terminate. The best we can do is the hypothetical statement, "If the list contains no loops, then these methods will terminate." This sort of claim is called a precondition. It imposes a constraint on one of the parameters and describes the behavior of the method if the constraint is satisfied. You will see more examples soon.

One part of `print_backward` might have raised an eyebrow:

```
head = list
tail = list.next
```

After the first assignment, `head` and `list` have the same type and the same value. So why did we create a new variable? The reason is that the two variables play different roles. We think of `head` as a reference to a single node, and we think of `list` as a reference to the first node of a list. These roles are not part of the program; they are in the mind of the programmer. In general we can't tell by looking at a program what role a variable plays.

This ambiguity can be useful, but it can also make programs difficult to read. We often use variable names like `node` and `list` to document how we intend to use a variable and sometimes create additional variables to disambiguate.

We could have written `print_backward` without `head` and `tail`, which makes it more concise but possibly less clear:

```
def print_backward(list):
    if list is None:
        return
    print_backward(list.next)
    print(list, end=" ")
```

Looking at the two function calls, we have to remember that `print_backward` treats its argument as a collection and `print` treats its argument as a single object.

The fundamental ambiguity theorem describes the ambiguity that is inherent in a reference to a node: A variable that refers to a node might treat the node as a single object or as the first in a list of nodes.

There are two ways to modify a linked list. Obviously, we can change the cargo of one of the nodes, but the more interesting operations are the ones that add, remove, or reorder the nodes.

As an example, let's write a method that removes the second node in the list and returns a reference to the removed node:

```
def remove_second(list):
    if list is None:
        return
    first = list
    second = list.next
    # Make the first node refer to the third
    first.next = second.next
    # Separate the second node from the rest of the list
    second.next = None
    return second
```

Again, we are using temporary variables to make the code more readable.
Here is how to use this method:

```
>>> print_list(node1)
1 2 3
>>> removed = remove_second(node1)
>>> print_list(removed)
2
>>> print_list(node1)
1 3
```

This state diagram shows the effect of the operation:

What happens if you invoke this method and pass a list with only one element (a singleton)? What happens if you pass the empty list as an argument? Is there a precondition for this method? If so, fix the method to handle a violation of the precondition in a reasonable way.

It is often useful to divide a list operation into two methods. For example, to print a list backward in the conventional list format `[3, 2, 1]` we can use the `print_backward` method to print `3, 2,` but we need a separate method to print the brackets and the first node. Let's call it `print_backward_nicely`:

```
def print_backward_nicely(list):
    print("[", end=" ")
    print_backward(list)
    print("]")
```

Again, it is a good idea to check methods like this to see if they work with special cases like an empty list or a singleton.

When we use this method elsewhere in the program, we invoke `print_backward_nicely` directly, and it invokes `print_backward` on our behalf. In that sense, `print_backward_nicely` acts as a wrapper, and it uses `print_backward` as a helper.

`LinkedList` class

There are some subtle problems with the way we have been implementing lists. In a reversal of cause and effect, we'll propose an alternative implementation first and then explain what problems it solves.

First, we'll create a new class called `LinkedList`. Its attributes are an integer that contains the length of the list and a reference to the first node. `LinkedList` objects serve as handles for manipulating lists of `Node` objects:

```
class LinkedList:
    def __init__(self):
        self.length = 0
        self.head = None
```

One nice thing about the `LinkedList` class is that it provides a natural place to put wrapper functions like `print_backward_nicely`, which we can make a method of the `LinkedList` class:

```
class LinkedList:
    ...
    def print_backward(self):
        print("[", end=" ")
        if self.head is not None:
            self.head.print_backward()
        print("]")

class Node:
    ...
    def print_backward(self):
        if self.next is not None:
            tail = self.next
            tail.print_backward()
        print(self.cargo, end=" ")
```

Just to make things confusing, we renamed `print_backward_nicely`. Now there are two methods named `print_backward`: one in the `Node` class (the helper); and one in the `LinkedList` class (the wrapper). When the wrapper invokes `self.head.print_backward`, it is invoking the helper, because `self.head` is a `Node` object.

Another benefit of the `LinkedList` class is that it makes it easier to add or remove the first element of a list. For example, `add_first` is a method for `LinkedList`s; it takes an item of cargo as an argument and puts it at the beginning of the list:

```
class LinkedList:
    ...
    def add_first(self, cargo):
        node = Node(cargo)
        node.next = self.head
        self.head = node
        self.length += 1
```

As usual, you should check code like this to see if it handles the special cases. For example, what happens if the list is initially empty?

Some lists are well formed; others are not. For example, if a list contains a loop, it will cause many of our methods to crash, so we might want to require that lists contain no loops. Another requirement is that the `length` value in the `LinkedList` object should be equal to the actual number of nodes in the list.

Requirements like these are called invariants because, ideally, they should be true of every object all the time.
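As a concrete illustration of these two requirements, here is a small consistency checker. This is a sketch of ours, not part of the original class: the method name `is_consistent` is assumed, and it relies only on the `length` and `head` attributes defined above.

```
class LinkedList:
    ...
    def is_consistent(self):
        # Walk at most self.length + 1 nodes. A well-formed list reaches
        # None after exactly self.length steps, which also rules out a
        # loop within that many nodes.
        node = self.head
        count = 0
        while node is not None and count <= self.length:
            node = node.next
            count += 1
        return node is None and count == self.length
```

A call such as `my_list.is_consistent()` should return `True` between method calls on a well-formed list.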
Specifying invariants for objects is a useful programming practice because it makes it easier to prove the correctness of code, check the integrity of data structures, and detect errors.

One thing that is sometimes confusing about invariants is that there are times when they are violated. For example, in the middle of `add_first`, after we have added the node but before we have incremented `length`, the invariant is violated. This kind of violation is acceptable; in fact, it is often impossible to modify an object without violating an invariant for at least a little while. Normally, we require that every method that violates an invariant must restore the invariant.

If there is any significant stretch of code in which the invariant is violated, it is important for the comments to make that clear, so that no operations are performed that depend on the invariant.

```
embedded reference
    A reference stored in an attribute of an object.

linked list
    A data structure that implements a collection using a sequence of linked nodes.

node
    An element of a list, usually implemented as an object that contains a reference to another object of the same type.

cargo
    An item of data contained in a node.

link
    An embedded reference used to link one object to another.

precondition
    An assertion that must be true in order for a method to work correctly.

fundamental ambiguity theorem
    A reference to a list node can be treated as a single object or as the first in a list of nodes.

singleton
    A linked list with a single node.

wrapper
    A method that acts as a middleman between a caller and a helper method, often making the method easier or less error-prone to invoke.

helper
    A method that is not invoked directly by a caller but is used by another method to perform part of an operation.

invariant
    An assertion that should be true of an object at all times (except perhaps while the object is being modified).
```

Exercise: Python conventionally displays lists in the format `[1, 2, 3]`. Modify `print_list` so that it generates output in this format.

# Chapter 26: Stacks

The data types you have seen so far are all concrete, in the sense that we have completely specified how they are implemented. For example, the `Card` class represents a card using two integers. As we discussed at the time, that is not the only way to represent a card; there are many alternative implementations.

An abstract data type, or ADT, specifies a set of operations (or methods) and the semantics of the operations (what they do), but it does not specify the implementation of the operations. That's what makes it abstract.

Why is that useful? When we talk about ADTs, we often distinguish the code that uses the ADT, called the client code, from the code that implements the ADT, called the provider or implementor code.

In this chapter, we will look at one common ADT, the stack. A stack is a collection, meaning that it is a data structure that contains multiple elements. Other collections we have seen include dictionaries and lists.

An ADT is defined by the operations that can be performed on it, which is called an interface. The interface for a stack consists of these operations:

- `__init__`: Initialize a new empty stack.
- `push`: Add a new item to the stack.
- `pop`: Remove and return an item from the stack. The item that is returned is always the last one that was added.
- `is_empty`: Check whether the stack is empty.

A stack is sometimes called a "Last in, First out" or LIFO data structure, because the last item added is the first to be removed.
The list operations that Python provides are similar to the operations that define a stack. The interface isn't exactly what it is supposed to be, but we can write code to translate from the Stack ADT to the built-in operations. This code is called an implementation of the Stack ADT. In general, an implementation is a set of methods that satisfy the syntactic and semantic requirements of an interface.

Here is an implementation of the Stack ADT that uses a Python list:

```
class Stack:
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

    def pop(self):
        return self.items.pop()

    def is_empty(self):
        return (self.items == [])
```

A `Stack` object contains an attribute named `items` that is a list of items in the stack. The initialization method sets `items` to the empty list. To push a new item onto the stack, `push` appends it onto `items`. To pop an item off the stack, `pop` uses the homonymous (same-named) list method to remove and return the last item on the list. Finally, to check if the stack is empty, `is_empty` compares `items` to the empty list.

An implementation like this, in which the methods consist of simple invocations of existing methods, is called a veneer. In real life, veneer is a thin coating of good quality wood used in furniture-making to hide lower quality wood underneath. Computer scientists use this metaphor to describe a small piece of code that hides the details of an implementation and provides a simpler, or more standard, interface.

A stack is a generic data structure, which means that we can add any type of item to it. The following example pushes two integers and a string onto the stack:

```
>>> s = Stack()
>>> s.push(54)
>>> s.push(45)
>>> s.push("+")
```

We can use `is_empty` and `pop` to remove and print all of the items on the stack:

```
while not s.is_empty():
    print(s.pop(), end=" ")
```

The output is `+ 45 54`. In other words, we just used a stack to print the items backward! Granted, it's not the standard format for printing a list, but by using a stack, it was remarkably easy to do.

You should compare this bit of code to the implementation of `print_backward` in the last chapter. There is a natural parallel between the recursive version of `print_backward` and the stack algorithm here. The difference is that `print_backward` uses the runtime stack to keep track of the nodes while it traverses the list, and then prints them on the way back from the recursion. The stack algorithm does the same thing, except that it uses a `Stack` object instead of the runtime stack.

In most programming languages, mathematical expressions are written with the operator between the two operands, as in `1 + 2`. This format is called infix. An alternative used by some calculators is called postfix. In postfix, the operator follows the operands, as in `1 2 +`.

The reason postfix is sometimes useful is that there is a natural way to evaluate a postfix expression using a stack:

1. Starting at the beginning of the expression, get one term (operator or operand) at a time. If the term is an operand, push it on the stack. If the term is an operator, pop two operands off the stack, perform the operation on them, and push the result back on the stack.
2. When you get to the end of the expression, there should be exactly one operand left on the stack. That operand is the result.

To implement the previous algorithm, we need to be able to traverse a string and break it into operands and operators. This process is an example of parsing, and the results (the individual chunks of the string) are called tokens. You might remember these words from Chapter 1.

Python provides a `split` method in both string objects and the `re` (regular expression) module. A string's `split` method splits it into a list using a single character as a delimiter.
For example:

```
>>> "Now is the time".split(" ")
['Now', 'is', 'the', 'time']
```

In this case, the delimiter is the space character, so the string is split at each space.

The function `re.split` is more powerful, allowing us to provide a regular expression instead of a delimiter. A regular expression is a way of specifying a set of strings. For example, `[A-Za-z]` is the set of all letters and `[0-9]` is the set of all numbers. The `^` operator negates a set, so `[^0-9]` is the set of everything that is not a number, which is exactly the set we want to use to split up postfix expressions:

```
>>> import re
>>> re.split("([^0-9])", "123+456*/")
['123', '+', '456', '*', '', '/', '']
```

The resulting list includes the operands `123` and `456` and the operators `+`, `*` and `/`. It also includes two empty strings, which are inserted wherever a delimiter is not followed by an operand: between `*` and `/`, and at the end of the string.

To evaluate a postfix expression, we will use the parser from the previous section and the algorithm from the section before that. To keep things simple, we'll start with an evaluator that only implements the operators `+` and `*`:

```
def eval_postfix(expr):
    import re
    token_list = re.split("([^0-9])", expr)
    stack = Stack()
    for token in token_list:
        if token == "" or token == " ":
            continue
        if token == "+":
            sum = stack.pop() + stack.pop()
            stack.push(sum)
        elif token == "*":
            product = stack.pop() * stack.pop()
            stack.push(product)
        else:
            stack.push(int(token))
    return stack.pop()
```

The first condition takes care of spaces and empty strings. The next two conditions handle operators. We assume, for now, that anything else must be an operand. Of course, it would be better to check for erroneous input and report an error message, but we'll get to that later.

Let's test it by evaluating the postfix form of `(56 + 47) * 2`:

```
>>> eval_postfix("56 47 + 2 *")
206
```

That's close enough.

One of the fundamental goals of an ADT is to separate the interests of the provider, who writes the code that implements the ADT, and the client, who uses the ADT. The provider only has to worry about whether the implementation is correct (in accord with the specification of the ADT) and not how it will be used.

Conversely, the client assumes that the implementation of the ADT is correct and doesn't worry about the details. When you are using one of Python's built-in types, you have the luxury of thinking exclusively as a client.

Of course, when you implement an ADT, you also have to write client code to test it. In that case, you play both roles, which can be confusing. You should make some effort to keep track of which role you are playing at any moment.

```
abstract data type (ADT)
    A data type (usually a collection of objects) that is defined by a set of operations but that can be implemented in a variety of ways.

client
    A program (or the person who wrote it) that uses an ADT.

delimiter
    A character that is used to separate tokens, such as punctuation in a natural language.

generic data structure
    A kind of data structure that can contain data of any type.

implementation
    Code that satisfies the syntactic and semantic requirements of an interface.

interface
    The set of operations that define an ADT.

infix
    A way of writing mathematical expressions with the operators between the operands.

parse
    To read a string of characters or tokens and analyze its grammatical structure.

postfix
    A way of writing mathematical expressions with the operators after the operands.

provider
    The code (or the person who wrote it) that implements an ADT.
token
    A set of characters that are treated as a unit for purposes of parsing, such as the words in a natural language.

veneer
    A class definition that implements an ADT with method definitions that are invocations of other methods, sometimes with simple transformations. The veneer does no significant work, but it improves or standardizes the interface seen by the client.
```

Exercises:

- Evaluate the postfix expression `1 2 + 3 *` with `eval_postfix`. This example demonstrates one of the advantages of postfix: there is no need to use parentheses to control the order of operations. To get the same result in infix, we would have to write `(1 + 2) * 3`.
- Write a postfix expression that is equivalent to `1 + 2 * 3`.

# Chapter 27: Queues

This chapter presents two ADTs: the Queue and the Priority Queue. In real life, a queue is a line of people waiting for something. In most cases, the first person in line is the next one to be served. There are exceptions, though. At airports, people whose flights are leaving soon are sometimes taken from the middle of the queue. At supermarkets, a polite person might let someone with only a few items go in front of them.

The rule that determines who goes next is called the queueing policy. The simplest queueing policy is called "First in, First out", FIFO for short. The most general queueing policy is priority queueing, in which each person is assigned a priority and the person with the highest priority goes first, regardless of the order of arrival. We say this is the most general policy because the priority can be based on anything: what time a flight leaves; how many groceries the person has; or how important the person is. Of course, not all queueing policies are fair, but fairness is in the eye of the beholder.

The Queue ADT and the Priority Queue ADT have the same set of operations. The difference is in the semantics of the operations: a queue uses the FIFO policy; and a priority queue (as the name suggests) uses the priority queueing policy.

The Queue ADT is defined by the following operations:

- `__init__`: Initialize a new empty queue.
- `insert`: Add a new item to the queue.
- `remove`: Remove and return an item from the queue. The item that is returned is the first one that was added.
- `is_empty`: Check whether the queue is empty.

The first implementation of the Queue ADT we will look at is called a linked queue because it is made up of linked `Node` objects. Here is the class definition:

```
class Queue:
    def __init__(self):
        self.length = 0
        self.head = None

    def insert(self, cargo):
        node = Node(cargo)
        if self.head is None:
            # If list is empty the new node goes first
            self.head = node
        else:
            # Find the last node in the list
            last = self.head
            while last.next:
                last = last.next
            # Append the new node
            last.next = node
        self.length += 1

    def remove(self):
        cargo = self.head.cargo
        self.head = self.head.next
        self.length -= 1
        return cargo
```

The methods `is_empty` and `remove` are identical to the `LinkedList` methods `is_empty` and `remove_first`. The `insert` method is new and a bit more complicated.

We want to insert new items at the end of the list. If the queue is empty, we just set `head` to refer to the new node. Otherwise, we traverse the list to the last node and tack the new node on the end. We can identify the last node because its `next` attribute is `None`.

There are two invariants for a properly formed `Queue` object. The value of `length` should be the number of nodes in the queue, and the last node should have `next` equal to `None`. Convince yourself that this method preserves both invariants.
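A quick interactive check of the linked queue, assuming the `Node` class from the previous chapter is available (this session is ours, not from the original text; it uses `length` directly since `is_empty` is not shown above):

```
>>> q = Queue()
>>> for cargo in ["a", "b", "c"]:
...     q.insert(cargo)
...
>>> q.length
3
>>> while q.length > 0:
...     print(q.remove(), end=" ")
...
a b c
```

Items come out in the order they went in, which is exactly the FIFO behaviour the Queue ADT promises.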
Normally when we invoke a method, we are not concerned with the details of its implementation. But there is one detail we might want to know: the performance characteristics of the method. How long does it take, and how does the run time change as the number of items in the collection increases?

First look at `remove`. There are no loops or function calls here, suggesting that the runtime of this method is the same every time. Such a method is called a constant time operation. In reality, the method might be slightly faster when the list is empty since it skips the body of the conditional, but that difference is not significant.

The performance of `insert` is very different. In the general case, we have to traverse the list to find the last element. This traversal takes time proportional to the length of the list. Since the runtime is a linear function of the length, this method is called linear time. Compared to constant time, that's very bad.

We would like an implementation of the Queue ADT that can perform all operations in constant time. One way to do that is to modify the `Queue` class so that it maintains a reference to both the first and the last node, as shown in the figure:

The `ImprovedQueue` implementation looks like this:

```
class ImprovedQueue:
    def __init__(self):
        self.length = 0
        self.head = None
        self.last = None
```

So far, the only change is the attribute `last`. It is used in `insert` and `remove` methods:

```
class ImprovedQueue:
    ...
    def insert(self, cargo):
        node = Node(cargo)
        if self.length == 0:
            # If list is empty, the new node is head and last
            self.head = self.last = node
        else:
            # Find the last node
            last = self.last
            # Append the new node
            last.next = node
            self.last = node
        self.length += 1
```

Since `last` keeps track of the last node, we don't have to search for it. As a result, this method is constant time.

There is a price to pay for that speed. We have to add a special case to `remove` to set `last` to `None` when the last node is removed:

```
class ImprovedQueue:
    ...
    def remove(self):
        cargo = self.head.cargo
        self.head = self.head.next
        self.length -= 1
        if self.length == 0:
            self.last = None
        return cargo
```

This implementation is more complicated than the Linked Queue implementation, and it is more difficult to demonstrate that it is correct. The advantage is that we have achieved the goal: both `insert` and `remove` are constant time operations.

The Priority Queue ADT has the same interface as the Queue ADT, but different semantics. Again, the interface is:

- `__init__`: Initialize a new empty queue.
- `insert`: Add a new item to the queue.
- `remove`: Remove and return an item from the queue. The item that is returned is the one with the highest priority.
- `is_empty`: Check whether the queue is empty.

The semantic difference is that the item that is removed from the queue is not necessarily the first one that was added. Rather, it is the item in the queue that has the highest priority. What the priorities are and how they compare to each other are not specified by the Priority Queue implementation. It depends on which items are in the queue.

For example, if the items in the queue have names, we might choose them in alphabetical order. If they are bowling scores, we might go from highest to lowest, but if they are golf scores, we would go from lowest to highest. As long as we can compare the items in the queue, we can find and remove the one with the highest priority.

This implementation of Priority Queue has as an attribute a Python list that contains the items in the queue.
```
class PriorityQueue:
    def __init__(self):
        self.items = []

    def is_empty(self):
        return not self.items

    def insert(self, item):
        self.items.append(item)
```

The initialization method, `is_empty`, and `insert` are all veneers on list operations. The only interesting method is `remove`:

```
class PriorityQueue:
    ...
    def remove(self):
        maxi = 0
        for i in range(1, len(self.items)):
            if self.items[i] > self.items[maxi]:
                maxi = i
        item = self.items[maxi]
        del self.items[maxi]
        return item
```

At the beginning of each iteration, `maxi` holds the index of the biggest item (highest priority) we have seen so far. Each time through the loop, the program compares the `i`'th item to the champion. If the new item is bigger, the value of `maxi` is set to `i`. When the `for` statement completes, `maxi` is the index of the biggest item. This item is removed from the list and returned.

Let's test the implementation:

```
>>> q = PriorityQueue()
>>> for num in [11, 12, 14, 13]:
...     q.insert(num)
...
>>> while not q.is_empty():
...     print(q.remove())
...
14
13
12
11
```

If the queue contains simple numbers or strings, they are removed in numerical or alphabetical order, from highest to lowest. Python can find the biggest integer or string because it can compare them using the built-in comparison operators. If the queue contains an object type, it has to provide a `__gt__` method. When `remove` uses the `>` operator to compare items, it invokes the `__gt__` method for one of the items and passes the other as a parameter. As long as the `__gt__` method works correctly, the Priority Queue will work.

`Golfer` class

As an example of an object with an unusual definition of priority, let's implement a class called `Golfer` that keeps track of the names and scores of golfers. As usual, we start by defining `__init__` and `__str__`:

```
class Golfer:
    def __init__(self, name, score):
        self.name = name
        self.score = score

    def __str__(self):
        return "{0:16}: {1}".format(self.name, self.score)
```

`__str__` uses the format method to put the names and scores in neat columns.

Next we define a version of `__gt__` where the lowest score gets highest priority. As always, `__gt__` returns `True` if `self` is greater than `other`, and `False` otherwise.

```
class Golfer:
    ...
    def __gt__(self, other):
        return self.score < other.score    # Less is more
```

Now we are ready to test the priority queue with the `Golfer` class:

```
>>> tiger = Golfer("<NAME>", 61)
>>> phil = Golfer("<NAME>", 72)
>>> hal = Golfer("<NAME>", 69)
>>>
>>> pq = PriorityQueue()
>>> for g in [tiger, phil, hal]:
...     pq.insert(g)
...
>>> while not pq.is_empty():
...     print(pq.remove())
...
<NAME> : 61
<NAME> : 69
<NAME> : 72
```

```
constant time
    An operation whose runtime does not depend on the size of the data structure.

FIFO (First In, First Out)
    A queueing policy in which the first member to arrive is the first to be removed.

linear time
    An operation whose runtime is a linear function of the size of the data structure.

linked queue
    An implementation of a queue using a linked list.

priority queue
    A queueing policy in which each member has a priority determined by external factors. The member with the highest priority is the first to be removed.

Priority Queue
    An ADT that defines the operations one might perform on a priority queue.

queue
    An ordered set of objects waiting for a service of some kind.

Queue
    An ADT that performs the operations one might perform on a queue.

queueing policy
    The rules that determine which member of a queue is removed next.
```

Exercise: compare the performance of `insert` for the linked `Queue` and the `ImprovedQueue` for a range of queue lengths.
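One way to approach that exercise is to time both classes for increasing lengths. The following is a sketch of ours (not from the original text); it assumes the `Queue` and `ImprovedQueue` classes above and uses only the standard `time` module:

```
import time

def time_inserts(queue_class, n):
    # Measure how long it takes to insert n items into an empty queue.
    q = queue_class()
    start = time.time()
    for i in range(n):
        q.insert(i)
    return time.time() - start

for n in [1000, 2000, 4000, 8000]:
    print(n, time_inserts(Queue, n), time_inserts(ImprovedQueue, n))
```

Because each `Queue.insert` walks the whole list, the total time for the linked `Queue` should grow roughly quadratically with `n`, while the total time for `ImprovedQueue` should grow roughly linearly.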
# Chapter 28: Trees

Like linked lists, trees are made up of nodes. A common kind of tree is a binary tree, in which each node contains a reference to two other nodes (possibly `None`). These references are referred to as the left and right subtrees. Like list nodes, tree nodes also contain cargo. A state diagram for a tree looks like this:

To avoid cluttering up the picture, we often omit the `None`s.

The top of the tree (the node `tree` refers to) is called the root. In keeping with the tree metaphor, the other nodes are called branches and the nodes at the tips with null references are called leaves. It may seem odd that we draw the picture with the root at the top and the leaves at the bottom, but that is not the strangest thing.

To make things worse, computer scientists mix in another metaphor: the family tree. The top node is sometimes called a parent and the nodes it refers to are its children. Nodes with the same parent are called siblings.

Finally, there is a geometric vocabulary for talking about trees. We already mentioned left and right, but there is also up (toward the parent/root) and down (toward the children/leaves). Also, all of the nodes that are the same distance from the root comprise a level of the tree.

We probably don't need three metaphors for talking about trees, but there they are.

Like linked lists, trees are recursive data structures because they are defined recursively. A tree is either:

- the empty tree, represented by `None`, or
- a node that contains a cargo object and two references to trees, a left subtree and a right subtree.

The process of assembling a tree is similar to the process of assembling a linked list. Each constructor invocation builds a single node.

```
class Tree:
    def __init__(self, cargo, left=None, right=None):
        self.cargo = cargo
        self.left = left
        self.right = right

    def __str__(self):
        return str(self.cargo)
```

The `cargo` can be any type, but the `left` and `right` parameters should be tree nodes. `left` and `right` are optional; the default value is `None`. To print a node, we just print the cargo.

One way to build a tree is from the bottom up. Allocate the child nodes first:

```
left = Tree(2)
right = Tree(3)
```

Then create the parent node and link it to the children:

```
tree = Tree(1, left, right)
```

We can write this code more concisely by nesting constructor invocations:

```
>>> tree = Tree(1, Tree(2), Tree(3))
```

Either way, the result is the tree at the beginning of the chapter.

Any time you see a new data structure, your first question should be, "How do I traverse it?" The most natural way to traverse a tree is recursively. For example, if the tree contains integers as cargo, this function returns their sum:

```
def total(tree):
    if tree is None:
        return 0
    return total(tree.left) + total(tree.right) + tree.cargo
```

The base case is the empty tree, which contains no cargo, so the sum is 0. The recursive step makes two recursive calls to find the sum of the child trees. When the recursive calls complete, we add the cargo of the parent and return the total.

A tree is a natural way to represent the structure of an expression. Unlike other notations, it can represent the computation unambiguously. For example, the infix expression `1 + 2 * 3` is ambiguous unless we know that the multiplication happens before the addition. This expression tree represents the same computation:

The nodes of an expression tree can be operands like `1` and `2` or operators like `+` and `*`. Operands are leaf nodes; operator nodes contain references to their operands. (All of these operators are binary, meaning they have exactly two operands.)
We can build this tree like this: ``` >>> tree = Tree("+", Tree(1), Tree("*", Tree(2), Tree(3))) ``` Looking at the figure, there is no question what the order of operations is; the multiplication happens first in order to compute the second operand of the addition. Expression trees have many uses. The example in this chapter uses trees to translate expressions to postfix, prefix, and infix. Similar trees are used inside compilers to parse, optimize, and translate programs. We can traverse an expression tree and print the contents like this: ``` def print_tree(tree): if tree is None: return print(tree.cargo, end=" ") print_tree(tree.left) print_tree(tree.right) ``` In other words, to print a tree, first print the contents of the root, then print the entire left subtree, and then print the entire right subtree. This way of traversing a tree is called a preorder, because the contents of the root appear before the contents of the children. For the previous example, the output is: ``` >>> tree = Tree("+", Tree(1), Tree("*", Tree(2), Tree(3))) >>> print_tree(tree) + 1 * 2 3 ``` This format is different from both postfix and infix; it is another notation called prefix, in which the operators appear before their operands. You might suspect that if you traverse the tree in a different order, you will get the expression in a different notation. For example, if you print the subtrees first and then the root node, you get: ``` def print_tree_postorder(tree): if tree is None: return print_tree_postorder(tree.left) print_tree_postorder(tree.right) print(tree.cargo, end=" ") ``` The result, `1 2 3 * +` , is in postfix! This order of traversal is called postorder. Finally, to traverse a tree inorder, you print the left tree, then the root, and then the right tree: ``` def print_tree_inorder(tree): if tree is None: return print_tree_inorder(tree.left) print(tree.cargo, end=" ") print_tree_inorder(tree.right) ``` The result is `1 + 2 * 3` , which is the expression in infix. To be fair, we should point out that we have omitted an important complication. Sometimes when we write an expression in infix, we have to use parentheses to preserve the order of operations. So an inorder traversal is not quite sufficient to generate an infix expression. Nevertheless, with a few improvements, the expression tree and the three recursive traversals provide a general way to translate expressions from one format to another. If we do an inorder traversal and keep track of what level in the tree we are on, we can generate a graphical representation of a tree: ``` def print_tree_indented(tree, level=0): if tree is None: return print_tree_indented(tree.right, level+1) print(" " * level + str(tree.cargo)) print_tree_indented(tree.left, level+1) ``` The parameter `level` keeps track of where we are in the tree. By default, it is initially 0. Each time we make a recursive call, we pass `level+1` because the child’s level is always one greater than the parent’s. Each item is indented by two spaces per level. The result for the example tree is: ``` >>> print_tree_indented(tree) 3 * 2 + 1 ``` If you look at the output sideways, you see a simplified version of the original figure. In this section, we parse infix expressions and build the corresponding expression trees. For example, the expression `(3 + 7) * 9` yields the following tree: Notice that we have simplified the diagram by leaving out the names of the attributes. The parser we will write handles expressions that include numbers, parentheses, and the operators `+` and `*` . 
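Before turning to the parser, here is a sketch of the "few improvements" mentioned above. It is ours, not the book's solution: an inorder traversal that emits parentheses around every operator and its pair of operands, which removes the ambiguity of plain infix output.

```
def print_tree_inorder_parens(tree):
    if tree is None:
        return
    if tree.left is None and tree.right is None:
        # Operands are leaves; print them as they are
        print(tree.cargo, end=" ")
        return
    # Operators get a parenthesized left operand, operator, right operand
    print("(", end=" ")
    print_tree_inorder_parens(tree.left)
    print(tree.cargo, end=" ")
    print_tree_inorder_parens(tree.right)
    print(")", end=" ")
```

For the example tree `Tree("+", Tree(1), Tree("*", Tree(2), Tree(3)))` this prints `( 1 + ( 2 * 3 ) )`, which no longer depends on knowing the rules of precedence.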
We assume that the input string has already been tokenized into a Python list (producing this list is left as an exercise). The token list for `(3 + 7) * 9` is:

```
["(", 3, "+", 7, ")", "*", 9, "end"]
```

The `end` token is useful for preventing the parser from reading past the end of the list.

The first function we'll write is `get_token`, which takes a token list and an expected token as parameters. It compares the expected token to the first token on the list: if they match, it removes the token from the list and returns `True`; otherwise, it returns `False`:

```
def get_token(token_list, expected):
    if token_list[0] == expected:
        del token_list[0]
        return True
    return False
```

Since `token_list` refers to a mutable object, the changes made here are visible to any other variable that refers to the same object.

The next function, `get_number`, handles operands. If the next token in `token_list` is a number, `get_number` removes it and returns a leaf node containing the number; otherwise, it returns `None`.

```
def get_number(token_list):
    x = token_list[0]
    if type(x) != type(0):
        return None
    del token_list[0]
    return Tree(x, None, None)
```

Before continuing, we should test `get_number` in isolation. We assign a list of numbers to `token_list`, extract the first, print the result, and print what remains of the token list:

```
>>> token_list = [9, 11, "end"]
>>> x = get_number(token_list)
>>> print_tree_postorder(x)
9
>>> print(token_list)
[11, 'end']
```

The next method we need is `get_product`, which builds an expression tree for products. A simple product has two numbers as operands, like `3 * 7`. Here is a version of `get_product` that handles simple products.

```
def get_product(token_list):
    a = get_number(token_list)
    if get_token(token_list, "*"):
        b = get_number(token_list)
        return Tree("*", a, b)
    return a
```

Assuming that `get_number` succeeds and returns a singleton tree, we assign the first operand to `a`. If the next character is `*`, we get the second number and build an expression tree with `a`, `b`, and the operator. If the next character is anything else, then we just return the leaf node with `a`. Here are two examples:

```
>>> token_list = [9, "*", 11, "end"]
>>> tree = get_product(token_list)
>>> print_tree_postorder(tree)
9 11 *
```

```
>>> token_list = [9, "+", 11, "end"]
>>> tree = get_product(token_list)
>>> print_tree_postorder(tree)
9
```

The second example implies that we consider a single operand to be a kind of product. This definition of product is counter-intuitive, but it turns out to be useful.

Now we have to deal with compound products, like `3 * 5 * 13`. We treat this expression as a product of products, namely `3 * (5 * 13)`. The resulting tree is:

With a small change in `get_product`, we can handle an arbitrarily long product:

```
def get_product(token_list):
    a = get_number(token_list)
    if get_token(token_list, "*"):
        b = get_product(token_list)   # This line changed
        return Tree("*", a, b)
    return a
```

In other words, a product can be either a singleton or a tree with `*` at the root, a number on the left, and a product on the right. This kind of recursive definition should be starting to feel familiar.

Let's test the new version with a compound product:

```
>>> token_list = [2, "*", 3, "*", 5, "*", 7, "end"]
>>> tree = get_product(token_list)
>>> print_tree_postorder(tree)
2 3 5 7 * * *
```

Next we will add the ability to parse sums. Again, we use a slightly counter-intuitive definition of sum.
For us, a sum can be a tree with `+` at the root, a product on the left, and a sum on the right. Or, a sum can be just a product.

If you are willing to play along with this definition, it has a nice property: we can represent any expression (without parentheses) as a sum of products. This property is the basis of our parsing algorithm.

`get_sum` tries to build a tree with a product on the left and a sum on the right. But if it doesn't find a `+`, it just builds a product.

```
def get_sum(token_list):
    a = get_product(token_list)
    if get_token(token_list, "+"):
        b = get_sum(token_list)
        return Tree("+", a, b)
    return a
```

Testing it with `9 * 11 + 5 * 7` produces the postfix output `9 11 * 5 7 * +`.

We are almost done, but we still have to handle parentheses. Anywhere in an expression where there can be a number, there can also be an entire sum enclosed in parentheses. We just need to modify `get_number` to handle subexpressions:

```
def get_number(token_list):
    if get_token(token_list, "("):
        x = get_sum(token_list)       # Get the subexpression
        get_token(token_list, ")")    # Remove the closing parenthesis
        return x
    else:
        x = token_list[0]
        if type(x) != type(0):
            return None
        del token_list[0]
        return Tree(x, None, None)
```

Testing this code with `9 * (11 + 5) * 7` produces `9 11 5 + 7 * *`. The parser handled the parentheses correctly; the addition happens before the multiplication.

In the final version of the program, it would be a good idea to give `get_number` a name more descriptive of its new role.

Throughout the parser, we've been assuming that expressions are well-formed. For example, when we reach the end of a subexpression, we assume that the next character is a close parenthesis. If there is an error and the next character is something else, we should deal with it.

```
def get_number(token_list):
    if get_token(token_list, "("):
        x = get_sum(token_list)
        if not get_token(token_list, ")"):
            raise ValueError("Missing close parenthesis")
        return x
    else:
        # The rest of the function omitted
```

The `raise` statement throws the exception object which we create. In this case we simply used the most appropriate type of built-in exception that we could find, but you should be aware that you can create your own more specific user-defined exceptions if you need to.

If the function that called `get_number`, or one of the other functions in the traceback, handles the exception, then the program can continue. Otherwise, Python will print an error message and quit.

In this section, we develop a small program that uses a tree to represent a knowledge base. The program interacts with the user to create a tree of questions and animal names. Here is a sample run:

```
Are you thinking of an animal? y
Is it a bird? n
What is the animal's name? dog
What question would distinguish a dog from a bird? Can it fly
If the animal were dog the answer would be? n

Are you thinking of an animal? y
Can it fly? n
Is it a dog? n
What is the animal's name? cat
What question would distinguish a cat from a dog? Does it bark
If the animal were cat the answer would be? n

Are you thinking of an animal? y
Can it fly? n
Does it bark? y
Is it a dog? y
I rule!

Are you thinking of an animal? n
```

Here is the tree this dialog builds:

At the beginning of each round, the program starts at the top of the tree and asks the first question. Depending on the answer, it moves to the left or right child and continues until it gets to a leaf node. At that point, it makes a guess.
If the guess is not correct, it asks the user for the name of the new animal and a question that distinguishes the (bad) guess from the new animal. Then it adds a node to the tree with the new question and the new animal.

Here is the code:

```
def yes(ques):
    ans = input(ques).lower()
    return ans[0] == "y"

def animal():
    # Start with a singleton
    root = Tree("bird")

    # Loop until the user quits
    while True:
        print()
        if not yes("Are you thinking of an animal? "):
            break

        # Walk the tree
        tree = root
        while tree.left is not None:
            prompt = tree.cargo + "? "
            if yes(prompt):
                tree = tree.right
            else:
                tree = tree.left

        # Make a guess
        guess = tree.cargo
        prompt = "Is it a " + guess + "? "
        if yes(prompt):
            print("I rule!")
            continue

        # Get new information
        prompt = "What is the animal's name? "
        animal = input(prompt)
        prompt = "What question would distinguish a {0} from a {1}? "
        question = input(prompt.format(animal, guess))

        # Add new information to the tree
        tree.cargo = question
        prompt = "If the animal were {0} the answer would be? "
        if yes(prompt.format(animal)):
            tree.left = Tree(guess)
            tree.right = Tree(animal)
        else:
            tree.left = Tree(animal)
            tree.right = Tree(guess)
```

The function `yes` is a helper; it prints a prompt and then takes input from the user. If the response begins with y or Y, the function returns `True`.

The condition of the outer loop of `animal` is `True`, which means it will continue until the `break` statement executes, which happens when the user is not thinking of an animal.

The inner `while` loop walks the tree from top to bottom, guided by the user's responses.

When a new node is added to the tree, the new question replaces the cargo, and the two children are the new animal and the original cargo.

One shortcoming of the program is that when it exits, it forgets everything you carefully taught it! Fixing this problem is left as an exercise.

```
binary operator
    An operator that takes two operands.

binary tree
    A tree in which each node refers to zero, one, or two dependent nodes.

child
    One of the nodes referred to by a node.

leaf
    A bottom-most node in a tree, with no children.

level
    The set of nodes equidistant from the root.

parent
    The node that refers to a given node.

postorder
    A way to traverse a tree, visiting the children of each node before the node itself.

prefix notation
    A way of writing a mathematical expression with each operator appearing before its operands.

preorder
    A way to traverse a tree, visiting each node before its children.

root
    The topmost node in a tree, with no parent.

siblings
    Nodes that share a common parent.

subexpression
    An expression in parentheses that acts as a single operand in a larger expression.
```

Exercises:

- Modify `print_tree_inorder` so that it puts parentheses around every operator and pair of operands. Is the output correct and unambiguous?
- Find other places in the parser where errors can occur and add appropriate `raise` statements. Test your code with improperly formed expressions.

# Appendix A: Debugging

Different kinds of errors can occur in a program, and it is useful to distinguish among them in order to track them down more quickly: syntax errors, runtime errors, and semantic errors. For example, omitting the colon at the end of a `def` statement yields the somewhat redundant message `SyntaxError: invalid syntax`.

The first step in debugging is to figure out which kind of error you are dealing with. Although the following sections are organized by error type, some techniques are applicable in more than one situation.

Syntax errors are usually easy to fix once you figure out what they are. Unfortunately, the error messages are often not helpful. The most common messages are `SyntaxError: invalid syntax` and `SyntaxError: invalid token`, neither of which is very informative.
On the other hand, the message does tell you where in the program the problem occurred. Actually, it tells you where Python noticed a problem, which is not necessarily where the error is. Sometimes the error is prior to the location of the error message, often on the preceding line.

If you are building the program incrementally, you should have a good idea about where the error is. It will be in the last line you added.

If you are copying code from a book, start by comparing your code to the book's code very carefully. Check every character. At the same time, remember that the book might be wrong, so if you see something that looks like a syntax error, it might be.

Here are some ways to avoid the most common syntax errors:

- Check that there is a colon at the end of the header of every compound statement, including `for`, `while`, `if`, and `def` statements.
- Check that your strings are properly quoted. An unterminated string may cause an `invalid token` error at the end of your program, or it may treat the following part of the program as a string until it comes to the next string. In the second case, it might not produce an error message at all!
- Make sure you have not used `=` instead of `==` inside a conditional.

If nothing works, move on to the next section…

If the compiler says there is an error and you don't see it, that might be because you and the compiler are not looking at the same code. Check your programming environment to make sure that the program you are editing is the one Python is trying to run. If you are not sure, try putting an obvious and deliberate syntax error at the beginning of the program. Now run (or import) it again. If the compiler doesn't find the new error, there is probably something wrong with the way your environment is set up.

If this happens, one approach is to start again with a new program like Hello, World!, and make sure you can get a known program to run. Then gradually add the pieces of the new program to the working one.

Once your program is syntactically correct, Python can import it and at least start running it. What could possibly go wrong?

One common problem is that the program appears to do nothing at all. This is most common when your file consists of functions and classes but does not actually invoke anything to start execution. This may be intentional if you only plan to import this module to supply classes and functions. If it is not intentional, make sure that you are invoking a function to start execution, or execute one from the interactive prompt. Also see the Flow of Execution section below.

If a program stops and seems to be doing nothing, we say it is hanging. Often that means that it is caught in an infinite loop or an infinite recursion.

If you suspect that a particular loop is the problem, add a `print` statement immediately before the loop that says entering the loop and another immediately after that says exiting the loop. Run the program: if you see the first message and not the second, the loop is not terminating.

If you think you have an infinite loop and you think you know what loop is causing the problem, add a `print` statement at the end of the loop that prints the values of the variables in the condition and the value of the condition. For example:

```
while x > 0 and y < 0:
    # Do something to x
    # Do something to y
    print("x: ", x)
    print("y: ", y)
    print("condition: ", (x > 0 and y < 0))
```

Now when you run the program, you will see three lines of output for each time through the loop. The last time through the loop, the condition should be `False`. If the loop keeps going, you will be able to see the values of `x` and `y`, and you might figure out why they are not being updated correctly.

In a development environment like PyScripter, one can also set a breakpoint at the start of the loop, and single-step through the loop. While you do this, inspect the values of `x` and `y` by hovering your cursor over them.
Of course, all programming and debugging require that you have a good mental model of what the algorithm ought to be doing: if you don't understand what ought to happen to `x` and `y`, printing or inspecting their values is of little use. Probably the best place to debug the code is away from your computer, working on your understanding of what should be happening.

Most of the time, an infinite recursion will cause the program to run for a while and then produce a `Maximum recursion depth exceeded` error.

If you suspect that a function or method is causing an infinite recursion, start by checking to make sure that there is a base case. In other words, there should be some condition that will cause the function or method to return without making a recursive invocation. If not, then you need to rethink the algorithm and identify a base case.

If there is a base case but the program doesn't seem to be reaching it, add a `print` statement at the beginning of the function or method that prints the parameters. Now you will see a few lines of output every time the function or method is invoked, and you can check whether the parameters are moving toward the base case.

Once again, if you have an environment that supports easy single-stepping, breakpoints, and inspection, learn to use them well. It is our opinion that walking through code step-by-step builds the best and most accurate mental model of how computation happens. Use it if you have it!

If you are not sure how the flow of execution is moving through your program, add a `print` statement to the beginning of each function with a message like entering function `foo`, where `foo` is the name of the function. Now when you run the program, it will print a trace of each function as it is invoked. If you're not sure, step through the program with your debugger.

If something goes wrong during runtime, Python prints a message that includes the name of the exception, the line of the program where the problem occurred, and a traceback. Put a breakpoint on the line causing the exception, and look around!

The traceback identifies the function that is currently running, and then the function that invoked it, and then the function that invoked that, and so on. In other words, it traces the path of function invocations that got you to where you are. It also includes the line number in your file where each of these calls occurs.

The first step is to examine the place in the program where the error occurred and see if you can figure out what happened. These are some of the most common runtime errors:

- NameError: You are trying to use a variable that doesn't exist in the current environment. Remember that local variables are local. You cannot refer to them from outside the function where they are defined.
- TypeError: There are several possible causes, such as using a value improperly or passing the wrong number of arguments to a function or method. For a method, check that the definition's first parameter is `self`. Then look at the method invocation; make sure you are invoking the method on an object with the right type and providing the other arguments correctly.
- KeyError: You are trying to access an element of a dictionary using a key value that the dictionary does not contain.
- AttributeError: You are trying to access an attribute or method that does not exist.
- IndexError: The index you are using to access a list, string, or tuple is greater than its length minus one.

Immediately before the site of the error, add a `print` statement that displays the values of the variables involved.

One of the problems with using `print` statements for debugging is that you can end up inundated with output. To simplify the output, you can remove or comment out the `print` statements that aren't helping, or combine them. Another option is to simplify the program.

To simplify the program, there are several things you can do. First, scale down the problem the program is working on. For example, if you are sorting a sequence, sort a small sequence. If the program takes input from the user, give it the simplest input that causes the problem.

Second, clean up the program. Remove dead code and reorganize the program to make it as easy to read as possible.
For example, if you suspect that the problem is in a deeply nested part of the program, try rewriting that part with simpler structure. If you suspect a large function, try splitting it into smaller functions and testing them separately. Often the process of finding the minimal test case leads you to the bug. If you find that a program works in one situation but not in another, that gives you a clue about what is going on. Similarly, rewriting a piece of code can help you find subtle bugs. If you make a change that you think doesn’t affect the program, and it does, that can tip you off. You can also wrap your debugging print statements in some condition, so that you suppress much of the output. For example, if you are trying to find an element using a binary search, and it is not working, you might code up a debugging print statement inside a conditional: if the range of candidate elements is less that 6, then print debugging information, otherwise don’t print. Similarly, breakpoints can be made conditional: you can set a breakpoint on a statement, then edit the breakpoint to say “only break if this expression becomes true”. In some ways, semantic errors are the hardest to debug, because the compiler and the runtime system provide no information about what is wrong. Only you know what the program is supposed to do, and only you know that it isn’t doing it. The first step is to make a connection between the program text and the behavior you are seeing. You need a hypothesis about what the program is actually doing. One of the things that makes that hard is that computers run so fast. You will often wish that you could slow the program down to human speed, and with some debuggers you can. But the time it takes to insert a few well-placed You should ask yourself these questions: In order to program, you need to have a mental model of how programs work. If you write a program that doesn’t do what you expect, very often the problem is not in the program; it’s in your mental model. The best way to correct your mental model is to break the program into its components (usually the functions and methods) and test each component independently. Once you find the discrepancy between your model and reality, you can solve the problem. Of course, you should be building and testing components as you develop the program. If you encounter a problem, there should be only a small amount of new code that is not known to be correct. Writing complex expressions is fine as long as they are readable, but they can be hard to debug. It is often a good idea to break a complex expression into a series of assignments to temporary variables. ``` self.hands[i].add_card(self.hands[self.find_neighbor(i)].pop_card()) ``` This can be rewritten as: ``` neighbor = self.find_neighbor (i) picked_card = self.hands[neighbor].pop_card() self.hands[i].add_card(picked_card) ``` The explicit version is easier to read because the variable names provide additional documentation, and it is easier to debug because you can check the types of the intermediate variables and display or inspect their values. Another problem that can occur with big expressions is that the order of evaluation may not be what you expect. For example, if you are translating the expression `x/2pi` into Python, you might write: `y = x / 2 * math.pi` That is not correct because multiplication and division have the same precedence and are evaluated from left to right. So this expression computes `(x/2)pi` . 
A good way to debug expressions is to add parentheses to make the order of evaluation explicit: ``` y = x / (2 * math.pi) ``` Whenever you are not sure of the order of evaluation, use parentheses. Not only will the program be correct (in the sense of doing what you intended), it will also be more readable for other people who haven’t memorized the rules of precedence. If you have a `return` statement with a complex expression, you don’t have a chance to print the `return` value before returning. Again, you can use a temporary variable. For example, instead of: ``` return self.hands[i].remove_matches() ``` you could write: ``` count = self.hands[i].remove_matches() return count ``` Now you have the opportunity to display or inspect the value of `count` before returning. First, try getting away from the computer for a few minutes. Computers emit waves that affect the brain, causing these effects: If you find yourself suffering from any of these symptoms, get up and go for a walk. When you are calm, think about the program. What is it doing? What are some possible causes of that behavior? When was the last time you had a working program, and what did you do next? Sometimes it just takes time to find a bug. We often find bugs when we are away from the computer and let our minds wander. Some of the best places to find bugs are trains, showers, and in bed, just before you fall asleep. It happens. Even the best programmers occasionally get stuck. Sometimes you work on a program so long that you can’t see the error. A fresh pair of eyes is just the thing. Before you bring someone else in, make sure you have exhausted the techniques described here. Your program should be as simple as possible, and you should be working on the smallest input that causes the error. You should have When you bring someone in to help, be sure to give them the information they need: Good instructors and helpers will also do something that should not offend you: they won’t believe when you tell them “I’m sure all the input routines are working just fine, and that I’ve set up the data correctly!”. They will want to validate and check things for themselves. After all, your program has a bug. Your understanding and inspection of the code have not found it yet. So you should expect to have your assumptions challenged. And as you gain skills and help others, you’ll need to do the same for them. When you find the bug, take a second to think about what you could have done to find it faster. Next time you see something similar, you will be able to find the bug more quickly. Remember, the goal is not just to make the program work. The goal is to learn how to make the program work. # Appendix B: An odds-and-ends Workbook This workbook / cookbook of recipes is still very much under construction. This was an important study commissioned by the President in the USA. It looked at what was needed for students to become proficient in maths. But it is also an amazingly accurate fit for what we need for proficiency in Computer Science, or even for proficiency in playing Jazz! Procedural Fluency: Learn the syntax. Learn to type. Learn your way around your tools. Learn and practice your scales. Learn to rearrange formulae. Conceptual Understanding: Understand why the bits fit together like they do. Strategic Competence: Can you see what to do next? Can you formulate this word problem into your notation? Can you take the music where you want it to go? Adaptive Reasoning: Can you see how to change what you’ve learned for this new problem? 
A Productive Disposition: We need that Can Do! attitude! Check out http://mason.gmu.edu/~jsuh4/teaching/strands.htm, or Kilpatrick’s book at http://www.nap.edu/openbook.php?isbn=0309069955 Sometimes it is fun to do powerful things with Python — remember that part of the “productive disposition” we saw under the five threads of proficiency included efficacy — the sense of being able to accomplish something useful. Here is a Python example of how you can send email to someone. ``` import smtplib, email.mime.text me = "<EMAIL>" # Put your own email here fred = "<EMAIL>" # And fred's email address here your_mail_server = "<EMAIL>" # Ask your system administrator # Create a text message containing the body of the email. # You could read this from a file, of course. msg = email.mime.text.MIMEText("""Hey Fred, I'm having a party, please come at 8pm. Bring a plate of snacks and your own drinks. Joe""" ) msg["From"] = me # Add headers to the message object msg["To"] = fred msg["Subject"] = "Party on Saturday 23rd" # Create a connection to your mail server svr = smtplib.SMTP(your_mail_server) response = svr.sendmail(me, fred, msg.as_string()) # Send message if response != {}: print("Sending failed for ", response) else: print("Message sent.") svr.quit() # Close the connection ``` In the context of the course, notice how we use the two objects in this program: we create a message object on line 9, and set some attributes at lines 16-18. We then create a connection object at line 21, and ask it to send our message. Python is gaining in popularity as a tool for writing web applications. Although one will probably use Python to process requests behind a web server like Apache, there are powerful libraries which allow you to write your own stand-alone web server in a couple of lines. This simpler approach means that you can have a test web server running on your own desktop machine in a couple of minutes, without having to install any extra software. In this cookbook example we use the `wsgi` (“wizz-gee”) protocol: a modern way of connecting web servers to code that runs to provide the services. See http://en.wikipedia.org/wiki/Web_Server_Gateway_Interface for more on `wsgi` . ``` from codecs import latin_1_encode from wsgiref.simple_server import make_server def my_handler(environ, start_response): path_info = environ.get("PATH_INFO", None) query_string = environ.get("QUERY_STRING", None) response_body = "You asked for {0} with query {1}".format( path_info, query_string) response_headers = [("Content-Type", "text/plain"), ("Content-Length", str(len(response_body)))] start_response("200 OK", response_headers) response = latin_1_encode(response_body)[0] return [response] httpd = make_server("127.0.0.1", 8000, my_handler) httpd.serve_forever() # Start the server listening for requests ``` When you run this, your machine will listen on port 8000 for requests. (You may have to tell your firewall software to be kind to your new application!) In a web browser, navigate to http://127.0.0.1:8000/catalogue?category=guitars. Your browser should get the response ``` You asked for /catalogue with query category=guitars ``` Your web server will keep running until you interrupt it (Ctrl+F2 if you are using PyScripter). The important lines 15 and 16 create a web server on the local machine, listening at port 8000. Each incoming html request causes the server to call `my_handler` which processes the request and returns the appropriate response. 
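If you would rather exercise the server from code than from a browser, the standard library's `urllib.request` can issue the same request. This is only a convenience sketch and assumes the server above is already running on port 8000 of the local machine:

```
import urllib.request

# Fetch the same URL we tried in the browser and show the server's reply.
url = "http://127.0.0.1:8000/catalogue?category=guitars"
with urllib.request.urlopen(url) as response:
    body = response.read().decode("latin-1")   # the server encoded the body with latin-1

print(body)   # Expect: You asked for /catalogue with query category=guitars
```

Run it in a second Python session while the server is up; it is a handy way to test handlers repeatedly without clicking around in a browser.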
We modify the above example below: `my_handler` now interrogates the `path_info` , and calls specialist functions to deal with each different kind of incoming request. (We say that `my_handler` dispatches the request to the appropriate function.) We can easily add other more request cases: def my_handler(environ, start_response): path_info = environ.get("PATH_INFO", None) if path_info == "/gettime": response_body = gettime(environ, start_response) elif path_info == "/classlist": response_body = classlist(environ, start_response) else: response_body = "" start_response("404 Not Found", [("Content-Type", "text/plain")]) response = latin_1_encode(response_body)[0] return [response] def gettime(env, resp): html_template = """<html> <body bgcolor='lightblue'> <h2>The time on the server is {0}</h2> <body> </html> """ response_body = html_template.format(time.ctime()) response_headers = [("Content-Type", "text/html"), ("Content-Length", str(len(response_body)))] resp("200 OK", response_headers) return response_body def classlist(env, resp): return # Will be written in the next section! ``` Notice how `gettime` returns an (admittedly simple) html document which is built on the fly by using `format` to substitute content into a predefined template. Python has a library for using the popular and lightweight sqlite database. Learn more about this self-contained, embeddable, zero-configuration SQL database engine at http://www.sqlite.org. Firstly, we have a script that creates a new database, creates a table, and stores some rows of test data into the table: (Copy and paste this code into your Python system.) We get this output: ``` Database table StudentSubjects has been created. StudentSubjects table now has 18 rows of data. ``` Our next recipe adds to our web browser from the previous section. We’ll allow a query like ``` classlist?subject=CompSci&year=2012 ``` and show how our server can extract the arguments from the query string, query the database, and send the rows back to the browser as a formatted table within an html page. We’ll start with two new imports to get access to `sqlite3` and `cgi` , a library which helps us parse forms and query strings that are sent to the server: ``` import sqlite3 import cgi ``` Now we replace the stub function `classlist` with a handler that can do what we need: ``` classlistTemplate = """<html> <body bgcolor='lightgreen'> <h2>Students taking {0} during {1}:</h2> <table border=3 cellspacing=2 cellpadding=2> {2} </table> <body> </html> """ def classlist(env, resp): # Parse the field value from the query string (or from a submitted form) # In a real server you'd want to check thay they were present! the_fields = cgi.FieldStorage(environ = env) subj = the_fields["subject"].value year = the_fields["year"].value # Attach to the database, build a query, fetch the rows. connection = sqlite3.connect("c:\studentRecords.db") cursor = connection.cursor() cursor.execute("SELECT * FROM StudentSubjects WHERE subject=? AND year=?", (subj, year)) result = cursor.fetchall() # Build the html rows for the table table_rows = "" for (sn, yr, subj) in result: table_rows += " <tr><td>{0}<td>{1}<td>{2}\n". 
format(sn, yr, subj) # Now plug the headings and data into the template, and complete the response response_body = classlistTemplate.format(subj, year, table_rows) response_headers = [("Content-Type", "text/html"), ("Content-Length", str(len(response_body)))] resp("200 OK", response_headers) return response_body ``` When we run this and navigate to http://127.0.0.1:8000/classlist?subject=CompSci&year=2012 with a browser, we’ll get output like this: It is unlikely that we would write our own web server from scratch. But the beauty of this approach is that it creates a great test environment for working with server-side applications that use the `wsgi` protocols. Once our code is ready, we can deploy it behind a web server like Apache which can interact with our handlers using `wsgi` . # Appendix C: Configuring Ubuntu for Python Development Note: the following instructions assume that you are connected to the Internet and that you have both the `main` and `universe` package repositories enabled. All unix shell commands are assumed to be running from your home directory ($HOME). Finally, any command that begins with `sudo` assumes that you have administrative rights on your machine. If you do not — please ask your system administrator about installing the software you need. What follows are instructions for setting up an Ubuntu 9.10 (Karmic) home environment for use with this book. I use Ubuntu GNU/Linux for both development and testing of the book, so it is the only system about which I can personally answer setup and configuration questions. In the spirit of software freedom and open collaboration, please contact me if you would like to maintain a similar appendix for your own favorite system. I’d be more than happy to link to it or put it on the Open Book Project site, provided you agree to answer user feedback concerning it. Thanks! <NAME>Governor’s Career and Technical Academy in Arlington Arlington, Virginia Vim can be used very effectively for Python development, but Ubuntu only comes with the vim-tiny package installed by default, so it doesn’t support color syntax highlighting or auto-indenting. To use Vim, do the following: From the unix command prompt, run: ``` $ sudo apt-get install vim-gnome ``` Create a file in your home directory named .vimrc that contains the following: ``` syntax enable filetype indent on set et set sw=4 set smarttab map <f2> :w\|!python % ``` When you edit a file with a .py extension, you should now have color syntax highlighting and auto indenting. Pressing the key should run your program, and bring you back to the editor when the program completes. To learn to use vim, run the following command at a unix command prompt: `$ vimtutor` The following creates a useful environment in your home directory for adding your own Python libraries and executable scripts: From the command prompt in your home directory, create bin and lib/python subdirectories by running the following commands: ``` $ mkdir bin lib $ mkdir lib/python ``` Add the following lines to the bottom of your .bashrc in your home directory: ``` PYTHONPATH=$HOME/lib/python EDITOR=vim export PYTHONPATH EDITOR ``` This will set your prefered editor to Vim, add your own lib/python subdirectory for your Python libraries to your Python path, and add your own bin directory as a place to put executable scripts. You need to logout and log back in before your local bin directory will be in your search path. 
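As a quick check that the `PYTHONPATH` setting works, you can drop a tiny module into the new `lib/python` directory and import it from anywhere. The module name `greet.py` and its contents below are purely illustrative:

```
# Save this as ~/lib/python/greet.py  (file and function names are made up)
def hello(name):
    # Exists only to prove that modules in ~/lib/python can be imported from anywhere.
    return "Hello, " + name + "!"
```

After logging back in, running `python3 -c "import greet; print(greet.hello('world'))"` from any directory should print the greeting, confirming that your personal library directory is on the Python path.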
On unix systems, Python scripts can be made executable using the following process: Add this line as the first line in the script: ``` #!/usr/bin/env python3 ``` At the unix command prompt, type the following to make myscript.py executable: ``` $ chmod +x myscript.py ``` Move myscript.py into your bin directory, and it will be runnable from anywhere. # Appendix D: Customizing and Contributing to the Book Note: the following instructions assume that you are connected to the Internet and that you have both the `main` and `universe` package repositories enabled. All unix shell commands are assumed to be running from your home directory ($HOME). Finally, any command that begins with `sudo` assumes that you have administrative rights on your machine. If you do not — please ask your system administrator about installing the software you need. This book is free as in freedom, which means you have the right to modify it to suite your needs, and to redistribute your modifications so that our whole community can benefit. That freedom lacks meaning, however, if you the tools needed to make a custom version or to contribute corrections and additions are not within your reach. This appendix attempts to put those tools in your hands. Thanks! <NAME>Governor’s Career and Technical Academy in Arlington Arlington, Virginia This book is marked up in ReStructuredText using a document generation system called Sphinx. The source code is located at https://code.launchpad.net/~thinkcspy-rle-team/thinkcspy/thinkcspy3-rle. The easiest way to get the source code on an Ubuntu computer is: on your system to install bzr. ``` bzr branch lp:thinkcspy ``` . The last command above will download the book source from Launchpad into a directory named `thinkcspy` which contains the Sphinx source and configuration information needed to build the book. To generate the html version of the book: to install the Sphinx documentation system. `cd thinkcspy` - change into the `thinkcspy` directory containing the book source. `make html` . The last command will run sphinx and create a directory named `build` containing the html version of the text. Note: Sphinx supports building other output types as well, such as PDF. This requires that LaTeX be present on your system. Since I only personally use the html version, I will not attempt to document that process here. # Appendix E: Some Tips, Tricks, and Common Errors These are small summaries of ideas, tips, and commonly seen errors that might be helpful to those beginning Python. Functions help us with our mental chunking: they allow us to group together statements for a high-level purpose, e.g. a function to sort a list of items, a function to make the turtle draw a spiral, or a function to compute the mean and standard deviation of some measurements. There are two kinds of functions: fruitful, or value-returning functions, which calculate and return a value, and we use them because we’re primarily interested in the value they’ll return. Void (non-fruitful) functions are used because they perform actions that we want done — e.g. make a turtle draw a rectangle, or print the first thousand prime numbers. They always return `None` — a special dummy value. Tip: `None` is not a string Values like `None` , `True` and `False` are not strings: they are special values in Python, and are in the list of keywords we gave in chapter 2 (Variables, expressions, and statements). Keywords are special in the language: they are part of the syntax. 
So we cannot create our own variable or function with a name `True` — we’ll get a syntax error. (Built-in functions are not privileged like keywords: we can define our own variable or function called `len` , but we’d be silly to do so!) Along with the fruitful/void families of functions, there are two flavors of the `return` statement in Python: one that returns a useful value, and the other that returns nothing, or `None` . And if we get to the end of any function and we have not explicitly executed any `return` statement, Python automatically returns the value `None` . Tip: Understand what the function needs to return Perhaps nothing — some functions exists purely to perform actions rather than to calculate and return a result. But if the function should return a value, make sure all execution paths do return the value. To make functions more useful, they are given parameters. So a function to make a turtle draw a square might have two parameters — one for the turtle that needs to do the drawing, and another for the size of the square. See the first example in Chapter 4 (Functions) — that function can be used with any turtle, and for any size square. So it is much more general than a function that always uses a specific turtle, say `tess` to draw a square of a specific size, say 30. Tip: Use parameters to generalize functions Understand which parts of the function will be hard-coded and unchangeable, and which parts should become parameters so that they can be customized by the caller of the function. Tip: Try to relate Python functions to ideas we already know In math, we’re familiar with functions like `f(x) = 3x + 5` . We already understand that when we call the function `f(3)` we make some association between the parameter x and the argument 3. Try to draw parallels to argument passing in Python. Quiz: Is the function `f(z) = 3z + 5` the same as function `f` above? We often want to know if some condition holds for any item in a list, e.g. “does the list have any odd numbers?” This is a common mistake: ``` def any_odd(xs): # Buggy version """ Return True if there is an odd number in xs, a list of integers. """ for v in xs: if v % 2 == 1: return True else: return False ``` Can we spot two problems here? As soon as we execute a `return` , we’ll leave the function. So the logic of saying “If I find an odd number I can return `True` ” is fine. However, we cannot return `False` after only looking at one item — we can only return `False` if we’ve been through all the items, and none of them are odd. So line 6 should not be there, and line 7 has to be outside the loop. To find the second problem above, consider what happens if you call this function with an argument that is an empty list. Here is a corrected version: ``` def any_odd(xs): """ Return True if there is an odd number in xs, a list of integers. """ for v in xs: if v % 2 == 1: return True return False ``` This “eureka”, or “short-circuit” style of returning from a function as soon as we are certain what the outcome will be was first seen in Section 8.10, in the chapter on strings. It is preferred over this one, which also works correctly: ``` def any_odd(xs): """ Return True if there is an odd number in xs, a list of integers. """ count = 0 for v in xs: if v % 2 == 1: count += 1 # Count the odd numbers if count > 0: return True else: return False ``` The performance disadvantage of this one is that it traverses the whole list, even if it knows the outcome very early on. 
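That full traversal is easy to measure. The snippet below assumes the short-circuit version is defined as `any_odd` above, and that the counting version has been renamed `any_odd_count` (a name used here only so both can exist at once); on a long list whose very first element is odd, the short-circuit version returns almost immediately while the counting version walks the whole list every time.

```
import timeit

xs = [1] + [2] * 1000000    # the very first element is already odd

# any_odd is the short-circuit version from above; any_odd_count is the
# counting version, renamed here for the comparison.
print(timeit.timeit(lambda: any_odd(xs), number=100))
print(timeit.timeit(lambda: any_odd_count(xs), number=100))
```

The exact timings will depend on your machine, but the gap makes the point: returning as soon as the outcome is known can save a lot of wasted work.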
Tip: Think about the return conditions of the function Do I need to look at all elements in all cases? Can I shortcut and take an early exit? Under what conditions? When will I have to examine all the items in the list? The code in lines 7-10 can also be tightened up. The expression `count > 0` evaluates to a Boolean value, either `True` or `False` . The value can be used directly in the `return` statement. So we could cut out that code and simply have the following: ``` def any_odd(xs): """ Return True if there is an odd number in xs, a list of integers. """ count = 0 for v in xs: if v % 2 == 1: count += 1 # Count the odd numbers return count > 0 # Aha! a programmer who understands that Boolean # expressions are not just used in if statements! ``` Although this code is tighter, it is not as nice as the one that did the short-circuit return as soon as the first odd number was found. Tip: Generalize your use of Booleans Mature programmers won’t write ``` if is_prime(n) == True: ``` when they could say instead `if is_prime(n):` Think more generally about Boolean values, not just in the context of `if` or `while` statements. Like arithmetic expressions, they have their own set of operators ( `and` , `or` , `not` ) and values ( `True` , `False` ) and can be assigned to variables, put into lists, etc. A good resource for improving your use of Booleans is http://en.wikibooks.org/wiki/Non-Programmer%27s_Tutorial_for_Python_3/Boolean_Expressions Exercise time: `True` if all the numbers are odd? Can you still use a short-circuit style? `True` if at least three of the numbers are odd? Short-circuit the traversal when the third odd number is found — don’t traverse the whole list unless we have to. Functions are called, or activated, and while they’re busy they create their own stack frame which holds local variables. A local variable is one that belongs to the current activation. As soon as the function returns (whether from an explicit return statement or because Python reached the last statement), the stack frame and its local variables are all destroyed. The important consequence of this is that a function cannot use its own variables to remember any kind of state between different activations. It cannot count how many times it has been called, or remember to switch colors between red and blue UNLESS it makes use of variables that are global. Global variables will survive even after our function has exited, so they are the correct way to maintain information between calls. ``` sz = 2 def h2(): """ Draw the next step of a spiral on each call. """ global sz tess.turn(42) tess.forward(sz) sz += 1 ``` This fragment assumes our turtle is `tess` . Each time we call `h2()` it turns, draws, and increases the global variable `sz` . Python always assumes that an assignment to a variable (as in line 7) means that we want a new local variable, unless we’ve provided a `global` declaration (on line 4). So leaving out the global declaration means this does not work. Tip: Local variables do not survive when you exit the function Use a Python visualizer like the one at http://pythontutor.com/ to build a strong understanding of function calls, stack frames, local variables, and function returns. Tip: Assignment in a function creates a local variable Any assignment to a variable within a function means Python will make a local variable, unless we override with `global` . Our chapter on event handling showed three different kinds of events that we could handle. 
They each have their own subtle points that can trip us up. `x` and `y` ). This is how the handler knows where the mouse click occurred. `wn.listen()` before our program will receive any keypresses. But if the user presses the key 10 times, the handler will be called ten times. `wn.ontimer(....)` to set up the next event. There are only four really important operations on strings, and we’ll be able to do just about anything. There are many more nice-to-have methods (we’ll call them sugar coating) that can make life easier, but if we can work with the basic four operations smoothly, we’ll have a great grounding. So if we need to know if “snake” occurs as a substring within `s` , we could write ``` if s.find("snake") >= 0: ... if "snake" in s: ... # Also works, nice-to-know sugar coating! ``` It would be wrong to split the string into words unless we were asked whether the word “snake” occurred in the string. Suppose we’re asked to read some lines of data and find function definitions, e.g.: ``` def some_function_name(x, y): ``` , and we are further asked to isolate and work with the name of the function. (Let’s say, print it.) ``` s = "..." # Get the next line from somewhere def_pos = s.find("def ") # Look for "def " in the line if def_pos == 0: # If it occurs at the left margin op_index = s.find("(") # Find the index of the open parenthesis fnname = s[4:op_index] # Slice out the function name print(fnname) # ... and work with it. ``` One can extend these ideas: `def_pos` position were spaces. We would not want to do the wrong thing on data like this: ``` # I def initely like Python! ``` `def` and the start of the function name. It will not work nicely for `def f(x)` As we’ve already mentioned, there are many more “sugar-coated” methods that let us work more easily with strings. There is an `rfind` method, like `find` , that searches from the end of the string backwards. It is useful if we want to find the last occurrence of something. The `lower` and `upper` methods can do case conversion. And the `split` method is great for breaking a string into a list of words, or into a list of lines. We’ve also made extensive use in this book of the `format` method. In fact, if we want to practice reading the Python documentation and learning some new methods on our own, the string methods are an excellent resource. Exercises: `find` . It takes some extra arguments, so you can set a starting point from which it will search.) Computers are useful because they can repeat computation, accurately and fast. So loops are going to be a central feature of almost all programs you encounter. Tip: Don’t create unnecessary lists Lists are useful if you need to keep data for later computation. But if you don’t need lists, it is probably better not to generate them. Here are two functions that both generate ten million random numbers, and return the sum of the numbers. They both work. ``` import random joe = random.Random() def sum1(): """ Build a list of random numbers, then sum them """ xs = [] for i in range(10000000): num = joe.randrange(1000) # Generate one random number xs.append(num) # Save it in our list tot = sum(xs) return tot def sum2(): """ Sum the random numbers as we generate them """ tot = 0 for i in range(10000000): num = joe.randrange(1000) tot += num return tot print(sum1()) print(sum2()) ``` What reasons are there for preferring the second version here? (Hint: open a tool like the Performance Monitor on your computer, and watch the memory usage. 
How big can you make the list before you get a fatal memory error in `sum1` ?) In a similar way, when working with files, we often have an option to read the whole file contents into a single string, or we can read one line at a time and process each line as we read it. Line-at-a-time is the more traditional and perhaps safer way to do things — you’ll be able to work comfortably no matter how large the file is. (And, of course, this mode of processing the files was essential in the old days when computer memories were much smaller.) But you may find whole-file-at-once is sometimes more convenient!
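As a concrete sketch of the two styles (the filename below is made up), both of these count the lines in a text file; the first holds the whole file in memory at once, the second only ever holds one line at a time:

```
filename = "somefile.txt"   # illustrative name, not from the text

# Whole-file-at-once: convenient, but memory use grows with the file size.
with open(filename) as f:
    contents = f.read()
print(len(contents.splitlines()))

# Line-at-a-time: works comfortably no matter how large the file is.
count = 0
with open(filename) as f:
    for line in f:
        count += 1
print(count)
```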
RAMPART 0.11.0 documentation [RAMPART](index.html#document-index) --- RAMPART is a *de novo* assembly pipeline that makes use of third party-tools and High Performance Computing resources. It can be used as a single interface to several popular assemblers, and can perform automated comparison and analysis of any generated assemblies. RAMPART was created by The Genome Analysis Centre (TGAC) in Norwich, UK by: * <NAME> * <NAME> * <NAME> Useful links: * Project Website: <http://www.tgac.ac.uk/rampart/> * Online Documentation (this manual): <http://rampart.readthedocs.org/en/latest/index.html> * Source code and distributable tarball: <https://github.com/TGAC/RAMPARTTable of Contents[¶](#table-of-contents) === Introduction[¶](#introduction) --- RAMPART is a configurable pipeline for *de novo* assembly of DNA sequence data. RAMPART is not a *de novo* assembler. There are already many very good freely available assembly tools, however, few will produce a good assembly, suitable for annotation and downstream analysis, first time around. The reason for this is that genome assembly of non-model organisms are often complex and involve tuning of parameters and potentially, pre and post processing of the assembly. There are many combinations of tools that could be tried and no clear way of knowing *a priori*, which will work best. RAMPART makes use of tried and tested tools for read pre-processing, assembly and assembly improvement, and allows the user to configure these tools and specify how they should be executed in a single configuration file. RAMPART also provides options for comparing and analysing sequence data and assemblies. This functionality means that RAMPART can be used for at least 4 different purposes: * Analysing sequencing data and understanding novel genomes. * Comparing and testing different assemblers and related tools on known datasets. * An automated pipeline for *de novo* assembly projects. * Provides a single common interface for a number of different assembly tools. The intention is that RAMPART gives the user the possibility of producing a decent assembly that is suitable for distribution and downstream analysis. Of course, in practice not every assembly project is so straight forward, the actual quality of assembly is always going to be a function of at least these sequencing variables: * sequencing quality * sequencing depth * read length * read insert size ... and the genome properties such as: * genome size * genome ploidy * genome repetitiveness RAMPART enables a bioinformatician to get a reasonable assembly, given the constraints just mentioned, with minimal effort. In many cases, particularly for organisms with haploid genomes or relatively simple (i.e. not too heterozygous and not too repeaty) diploid genomes, where appropriate sequenceing has been conducted, RAMPART can produce an assembly that suitable for annotation and downstream analysis. RAMPART is designed with High Performance Computing (HPC) resources in mind. Currently, LSF and PBS schedulers are supported and RAMPART can execute jobs in parallel over many nodes if requested. Having said this RAMPART can be told to run all parts of the pipeline in sequence on a regular server provided enough memory is available for the job in question. This documentation is designed to help end users install, configure and run RAMPART. ### Comparison to other systems[¶](#comparison-to-other-systems) **Roll-your-own Make files** This method probably offers the most flexibility. 
It allows you to define exactly how you want your tools to run in whatever order you wish. However, you will need to define all the inputs and outputs to each tool. And in some cases write scripts to manage interoperability between some otherwise incompatible tools. RAMPART takes all this complication away from the user as all input and output between each tool is managed automatically. In addition, RAMPART offers more support for HPC environments, making it easier to parallelize steps in the pipeline. Managing this manually is difficult and time consuming. **Galaxy** This is a platform for chaining together tools in such a way as to promote reproducible analyses. It also has support for HPC environments. However, it is a heavy weight solution, and is not trivial to install and configure locally. RAMPART itself is lightweight in comparison, and ignoring dependencies, much easier to install. In addition, galaxy is not designed with *de novo* genome assembly specifically in mind, whereas RAMPART is. RAMPART places more constraints in the workflow design process as well as more checks initially before the workflow is started. In addition, as mentioned above RAMPART will automatically manage interoperability between tools, which will likely save the user time debugging workflows and writing their own scripts to manage specific tool interaction issues. **A5-miseq** and **BugBuilder** Both are domain specific pipeline for automating assembly of microbial organisms. They are designed specifically with microbial genomes in mind and keep their interfaces simple and easy to use. RAMPART, while more complex to use, is far more configurable as a result. RAMPART also allows users to tackle eukaryote assembly projects. **iMetAMOS** This is a configurable pipeline for isolate genome assembly and annotation. One distinct advantage of iMetAMOS is that it offers the ability to annotate your genome. It also supports some assemblers that RAMPART currently does not. Both systems are highly configurable, allowing the user to create bespoke pipelines and compare and validate the results of multiple assemblers. However, in it’s current form, iMetAMOS doesn’t have as much provision for automating or managing assembly scaffolding or gap filling steps in the assembly workflow. In addition, we would argue that RAMPART is more configurable, easier to use and has more support for HPC environments. Dependencies[¶](#dependencies) --- In order to do useful work RAMPART can call out to a number of third party tools during execution. The current list of dependencies is shown below. For full functionality, all these tools should be installed on your environment, however they are not mandatory so you only need to install those which you wish to use. 
Assemblers (RAMPART is not an assembler itself so you should have at least one of these installed to do useful work): * Abyss V1.5 * ALLPATHS-LG V50xxx * Platanus V1.2 * SPAdes 3.1 * SOAPdenovo V2 * Velvet V1.2 * Discovar V51xxx Dataset improvement tools: * Sickle V1.2 * Quake V0.3 * Musket V1.0 Assembly improvement tools: * Platanus V1.2 (for scaffolding and gap closing) * SSPACE Basic V2.0 * SOAP de novo V2 (for scaffolding) * SOAP GapCloser V1.12 * Reapr V1.0 Assembly Analysis Tools: * Quast V2.3 - for contiguity analysis * CEGMA V2.4 - for assembly completeness analysis * KAT V1.0 - for kmer analysis Miscellaneous Tools: * TGAC Subsampler V1.0 - for reducing coverage of reads * Jellyfish V1.1.10 - for kmer counting * Kmer Genie V1.6 - for determining optimal kmer values for assembly To save time finding all these tools on the internet RAMPART provides two options. The first and recommended approach is to download a compressed tarball of all supported versions of the tools, which is available on the github releases page: `https://github.com/TGAC/RAMPART/releases`. The second option is to download them all to a directory of your choice. The one exception to this is SSPACE, which requires you to fill out a form prior to download. RAMPART can help with this. After the core RAMPART pipeline is compiled, type: `rampart-download-deps <dir>`. The tool will place all downloaded packages in a sub-directory called “rampart_dependencies” off of the specified directory. Note that executing this command does not try to install the tools, as this can be a complex process and you may wish to run a custom installation in order to compile and configure the tools in a way that is optimal for your environment. In case the specific tool versions requested are no longer available to download the project URLs are specified below. It’s possible that alternative (preferably newer) versions of the software may still work if the interfaces have not changed significantly. If you find that a tool does not work in the RAMPART pipeline please contact [<EMAIL>](mailto:<EMAIL>), or raise a job ticket via the github issues page: <https://github.com/TGAC/RAMPART/issues>. Project URLs: * Abyss - <http://www.bcgsc.ca/platform/bioinfo/software/abyss> * ALLPATHS-LG - <http://www.broadinstitute.org/software/allpaths-lg/blog/?page_id=12> * Cegma - <http://korflab.ucdavis.edu/datasets/cegma/> * Discovar - <http://www.broadinstitute.org/software/discovar/blog/> * KAT - <http://www.tgac.ac.uk/kat/> * Kmer Genie - <http://kmergenie.bx.psu.edu/> * Jellyfish - <http://www.cbcb.umd.edu/software/jellyfish/> * Musket - <http://musket.sourceforge.net/homepage.htm#latest> * Quake - <http://www.cbcb.umd.edu/software/quake/> * Quast - <http://bioinf.spbau.ru/quast> * Platanus - <http://http://platanus.bio.titech.ac.jp/platanus-assembler/> * Reapr - <https://www.sanger.ac.uk/resources/software/reapr/#t_2> * Sickle - <https://github.com/najoshi/sickle> * SoapDeNovo - <http://soap.genomics.org.cn/soapdenovo.html> * SOAP_GapCloser - <http://soap.genomics.org.cn/soapdenovo.html> * SPAdes - <http://spades.bioinf.spbau.ru> * SSPACE_Basic - <http://www.baseclear.com/landingpages/basetools-a-wide-range-of-bioinformatics-solutions/sspacev12/> * Subsampler - <https://github.com/homonecloco/subsampler> * Velvet - <https://www.ebi.ac.uk/~zerbino/velvet/Installation[¶](#installation) --- Before installing RAMPART please ensure any dependencies listed above are installed. 
In addition, the following dependencies are required to install and run RAMPART: * Java Runtime Environment (JRE) V1.7+ RAMPART can be installed either from a distributable tarball, from source via a `git clone`, or via homebrew. These steps are described below. Before that however, here are a few things to keep in mind during the installation process: ### Quick start[¶](#quick-start) To get a bare bones version of RAMPART up and running quickly, we recommend installation via Homebrew. This requires you to first install homebrew and tap homebrew/science. On Mac you can access homebrew from `http://brew.sh` and linuxbrew for linux from `https://github.com/Homebrew/linuxbrew`. Once installed make sure to tap homebrew science with `brew tap homebrew/science`. Then, as discussed above, please ensure you have JRE V1.7+ installed. Finally, to install RAMPART simply type `brew install rampart`. This will install RAMPART into your homebrew cellar, with the bare minimum of dependencies: Quake, Kmergenie, ABySS, Velvet, Quast, KAT. ### From tarball[¶](#from-tarball) RAMPART is available as a distributable tarball. The installation process is simply involves unpacking the compressed tarball, available from the RAMPART github page: `https://github.com/TGAC/RAMPART/releases`, to a directory of your choice: `tar -xvf <name_of_tarball>`. This will create a directory called `rampart-<version>` and in there should be the following sub-directories: * bin - contains the main rampart script and other utility scripts * doc - a html and pdf copy of this manual * etc - contains default environment configuration files, and some example job configuration files * man - contains a copy of the manual in man form * repo - contains the java classes used to program the pipeline * support_jars - contains source and javadocs for the main rampart codebase Should you want to run the tools without referring to their paths, you should ensure the ‘bin’ sub-directory is on your PATH environment variable. Also please note that this method does not install any dependencies automatically. You must do this before trying to run RAMPART. ### From source[¶](#from-source) RAMPART is a java 1.7 / maven project. Before compiling the source code, please make sure the following dependencies are installed: * GIT * Maven 3 * JDK v1.7+ * Sphinx and texlive (If you would like to compile this documentation. If these are not installed you must comment out the `create-manual` execution element from the pom.xml file.) You also need to make sure that the system to are compiling on has internet access, as it will try to automatically incorporate any required java dependencies via maven. Now type the following: ``` git clone https://github.com/TGAC/RAMPART.git cd RAMPART mvn clean install ``` Note: If you cannot clone the git repositories using “https”, please try “ssh” instead. Consult github to obtain the specific URLs. Assuming there were no compilation errors. The build, hopefully the same as that described in the previous section, can now be found in `./build/rampart-<version>`. There should also be a `dist` sub directory which will contain a tarball suitable for installing RAMPART on other systems. Some common errors the user may encounter, and steps necessary to fix the, during the installation procedure follow: 1. 
Old Java Runtime Environment (JRE) installed: `Exception in thread "main" java.lang.UnsupportedClassVersionError: uk/ac/tgac/rampart/RampartCLI : Unsupported major.minor version 51.0` This occurs When trying to run RAMPART, or any associated tools, with an old JRE version. If you see this, install or load JRE V1.7 or later and try again. Note that if you are trying to compile RAMPART you will need JDK (Java Development Kit) V1.7 or higher as well. 2. Incorrect sphinx configuration: If you are compiling RAMPART from source, if you encounter a problem when creating the documention with sphinx it maybe that your sphinx version is different from what is expected. Specifically please check that your sphinx configuration provides a tool called `sphinx_build` and that tool is available on the PATH. Some sphinx configurations may have an executable named like this `sphinx_build_<version>`. If this is the case you can try making a copy of this executable and remove the version suffix from it. Alternatively if you do not wish to compile the documentation just remove it by commenting out the `create-manual` element from the pom.xml file. 3. Texlive not installed: `“[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-manual) on project rampart: An Ant BuildException has occured: Warning: Could not find file RAMPART.pdf to copy.” Creating an empty RAMPART.pdf file in the specified directory fixed this issue, allowing RAMPART to successfully build` This error occurs because the RAMPART.pdf file was not created when trying to compile the documentation. RAMPART.pdf is created from the documentation via sphinx and texlive. If you see this error then probably sphinx is working fine but texlive is not installed. Properly installing and configuring texlive so it’s available on your path should fix this issue. Alternatively if you do not wish to compile the documentation just remove it by commenting out the `create-manual` element from the pom.xml file. Environment configuration[¶](#environment-configuration) --- RAMPART is designed to utilise a scheduled environment in order to exploit the large-scale parallelism high performance computing environments typically offer. Currently, LSF and PBS schedulers are supported, although it is also possible to run RAMPART on a regular server in an unscheduled fashion. In order to use a scheduled environment for executing RAMPART child jobs, some details of your specific environment are required. These details will be requested when installing the software, however, they can be overwritten later. By default the settings are stored in `etc` folder within the project’s installation/build directory, and these are the files that will be used by RAMPART by default. However, they can be overridden by either keeping a copy in `~/.tgac/rampart/` or by explicity specifying the location of the files when running RAMPART. Priority is as follows: * custom configuration file specified at runtime via the command line - `--env_config=<path_to_env_config_file>` * user config directory - `~/.tgac/rampart/conan.properties` * installation directory - `<installation dir>/etc/conan.properties` ### Conan - scheduler configuration[¶](#conan-scheduler-configuration) RAMPART’s execution context is specified by default in a file called “conan.properties”. In this file it is possible to describe the type of scheduling system to use and if so, what queue to run on. 
Valid properties: * `executionContext.scheduler =` Valid options {“”,”LSF”,”PBS”,”SLURM”} * `executionContext.scheduler.queue =` The queue/partition to execute child jobs on. * `executionContext.locality = LOCAL` Always use this for now! In the future it may be possible to execute child jobs at a remote location. * `externalProcessConfigFile = <location to external process loading file>` See next section for details of how to setup this file. Lines starting with `#` are treated as comments. ### External process configuration[¶](#external-process-configuration) RAMPART can utilise a number of dependencies, each of which may require modification of environment variables in order for it to run successfully. This can be problematic if multiple versions of the same piece of software need to be available on the same environment. At TGAC we execute python scripts for configure the environment for a tool, although other institutes may use an alternative system like “modules”. Instead of configuring all the tools in one go, RAMPART can execute commands specific to each dependency just prior to it’s execution. Currently known process keys are described below. In general the versions indicated have been tested and will work with RAMPART, however other versions may work if their command line interface has not changed significantly from the listed versions. Note that these keys are hard coded, please keep the exact wording as below, even if you are using a different version of the software. The format for each entry is as follows: `<key>=<command_to_load_tool>`. Valid keys: ``` # Assemblers Abyss_V1.5 AllpathsLg_V50 Platanus_Assemble_V1.2 SOAP_Assemble_V2.4 Spades_V3.1 Velvet_V1.2 # Dataset improving tools Sickle_V1.2 Quake_V0.3 Musket_V1.0 # Assembly improving tools Platanus_Gapclose_V1.2 Platanus_Scaffold_V1.2 SSPACE_Basic_v2.0 SOAP_GapCloser_V1.12 SOAP_Scaffold_V2.4 Reapr_V1 FastXRC_V0013 # Assembly analysis tools Quast_V2.3 Cegma_V2.4 KAT_Comp_V1.0 KAT_GCP_V1.0 KAT_Plot_Density_V1.0 KAT_Plot_Spectra-CN_V1.0 # Misc tools Jellyfish_Count_V1.1 Jellyfish_Merge_V1.1 Jellyfish_Stats_V1.1 Subsampler_V1.0 KmerGenie_V1.6 ``` By default RAMPART assumes the tools are all available and properly configured. So if this applies to your environment then you do not need to setup this file. ### Logging[¶](#logging) In addition, RAMPART uses SLF4J as a logging facade and is currently configured to use LOG4J. If you which to alter to logging configuration then modify the “log4j.properties” file. For details please consult: “<http://logging.apache.org/log4j/2.x/>“ Running RAMPART[¶](#running-rampart) --- Running RAMPART itself from the command line is relatively straight forward. The syntax is simply: `rampart [options] <path_to_job_configuration_file>`. To get a list and description of available options type: `rampart --help`. Upon starting RAMPART will search for a environment configuration file, a logging configuration file and a job configuration file. RAMPART will configure the execution environment as appropriate and then execute the steps specified in the job configuration file. Setting up a suitable configuration file for your assembly project is more complex however, and we expect a suitable level of understanding and experience of *de novo* genome assembly, NGS and genome analysis. From a high level the definition of a job involves supplying information about 3 topics: the organism’s genome; the input data; and how the pipeline should execute. 
In addition, we recommend the user specifies some metadata about this job for posterity and for reporting reasons. The job configuration file must be specified in XML format. Creating a configuration file from scratch can be daunting, particularly if the user isn’t familiar with XML or other markup languages, so to make this process easier for the user we provide a number of example configuration files which can be modified and extended as appropriate. These can be found in the `etc/example_job_configs` directory. Specifically the file named `ecoli_full_job.xml` provides a working example configuration file once you download the raw reads from: `http://www.ebi.ac.uk/ena/data/view/DRR015910`. ### The genome to assemble[¶](#the-genome-to-assemble) Different genomes have different properties. The more information provided the better the analysis of the assembly and the more features are enabled in the RAMPART pipeline. As a minimum the user must enter the name of the genome to assemble this is used for reporting and logging purposes and if a prefix for name standardisation isn’t provided initials taken from the name are used in the filename and headers of the final assembly. In addition it is recommended that the user provides the genome ploidy as this is required if you choose to run the ALLPATHS-LG assembler. It is also useful for calculating kmer counting hash sizes, analysing the assemblies. If you set to “2”, i.e. diploid, then assembly enhancement tools may make use of bubble files if they are available and if the tools are capable. If you plan to assemble polyploid genomes please check that the third-party tools also support this. ALLPATHs-LG for example cannot currently assembly polyploid genomes for example. An example XML snippet containing this basic information is shown below: ``` <organism name="Escherichia coli" ploidy="1"/> ``` If you plan to assemble a variant of an organism that already has a good quality assembly then the user should also provide the fasta file for that assembly. This allows RAMPART to make better assessments of assemblies produced as part of the pipeline and enables input read subsampling (coverage reduction). It also is used to make better estimates of required memory usage for various tools. An example XML snippet containing a genome reference is shown below: ``` <organism name="Escherichia coli" ploidy="1"> <reference name="EcoliK12" path="EcoliK12.fasta"/> </organism> ``` If you are assembling a non-model organism an suitable existing reference may not be available. In this case it is beneficial if you have any expectations of genomic properties that you provide them in order to make better assembly assessments and enable input read subsampling an memory requirement estimation. Estimated input values can be entered as follows. Note that these are optional, so the user can specify any or all of the properties as known, although the estimated genome size is particularly useful: ``` <organism name="Escherichia coli" ploidy="1"> <estimated est_genome_size="4600000" est_gc_percentage="50.8" est_nb_genes="4300"/> </organism> ``` ### Defining datasets[¶](#defining-datasets) Before an assembly can be made some sequencing data is required. Sometimes an modern assembly project might involve a single set of sequencing data, othertimes it can involve a number of sequencing projects using different protocols and different data types. 
In order to instruct the assemblers and other tools to use the data in the right way, the user must describe each dataset and how to interpret it. Each dataset description must contain the following information: * Attribute “name” - an identifier - so we can point tools to use a specific dataset later. * Element “files” - Must contain one of more “path” elements containing file paths to the actual sequencing data. For paired end and mate pair datasets this will involve pointers to two separate files. Ideally you should specify the following information as well if you want RAMPART to execute all tools with the best settings: * Attribute “read_length” -The length of each read in base pairs. * Attribute “avg_insert_size” - An estimate of the average insert size in base pairs used for sequencing if this is a paired end or mate pair library. * Attribute “insert_err_tolerance” - How tolerant tools should be when interpreting the average insert size specified above. This figure should be a percentage, e.g. the tool should accept inserts sizes with a 30% tolerance either side of the average insert size. * Attribute “orientation” - If this is a paired end or mate pair library, the orientation of the reads. For example, paired end libraries are often created using “forward reverse” orientation, and often long mate pairs use “reverse forward” orientation. The user should specify either “FR” or “RF” for this property. * Attribute “type” - The kind of library this is. Valid options: “SE” - single end; “OPE” - overlapping paired end; “PE” - paired end; “MP” - mate pair. * Attribute “phred” - The ascii offset to apply to the quality scores in the library. Valid options: “PHRED_33” (Sanger / Illumina 1.8+); “PHRED_64” (Illumina 1.3 - 1.7). * Attribute “uniform” - Whether or not the reads have uniform length. This is set to true by default. This property is used to work out the fastest way to calculate the number of bases present in the library for downsampling, should that be requested. An example XML snippet of a set of NGS datasets for an assembly project are shown below: ``` <libraries> <library name="pe1" read_length="101" avg_insert_size="500" insert_err_tolerance="0.3" orientation="FR" type="PE" phred="PHRED_64"> <files> <path>lib1_R1.fastq</path> <path>lib1_R2.fastq</path> </files> </library> <library name="mp1" read_length="150" avg_insert_size="4000" insert_err_tolerance="0.3" orientation="RF" type="MP" uniform="false" phred="PHRED_64"> <files> <path>lib2_R1.fastq</path> <path>lib2_R2.fastq</path> </files> </library> <library name="ope1" read_length="101" avg_insert_size="180" insert_err_tolerance="0.3" orientation="FR" type="OPE" phred="PHRED_33"> <files> <path>lib3_R1.fastq</path> <path>lib3_R2.fastq</path> </files> </library> </libraries> ``` In the future we plan to interrogate the libraries to work out many of the settings automatically. However, for the time being we request that you enter all these details manually. ### The pipeline[¶](#the-pipeline) The RAMPART pipeline can be separated into a number of stages, all of which are optional customisable. The pipeline can be controlled in two ways. The first way is by definition in the job configuration file. If a pipeline stage is not defined it will not be executed. The second way is via a command line option: `-s`. By specifying which stages you wish to execute here you can run specific stage of the pipeline in isolation, or as a group. 
For example by typing: `rampart -s MECQ,MASS job.cfg`, you instruct RAMPART to run only the MECQ and MASS stages described in the job.cfg file. A word of caution here: requesting stages not defined in the configuration file does not work. Also you must ensure that each stage has its pre-requisites fulfilled before starting. For example, you cannot run the AMP stage without a selected assembly to work with. #### MECQ - Multiple Error Correction and Quality Trimming Tool[¶](#mecq-multiple-error-correction-and-quality-trimming-tool) The purpose of this step is to try to improve the input datasets. The user can select from a number of separate tools which can be executed on one or more of the input datasets provided. The user can also request whether or not the tools should be run linearly or in parallel. Attempting to improve the dataset is a slightly controversial topic. Although it is true that having good quality data is critical to creating a good assembly, the benefits from trimming and correcting input data are debatable. It is certainly true that error correction tools in particular can boost assembly contiguity, however this can occasionally come at the expense of misassemblies. In addition, trimming reads can alter the kmer-coverage statistics for the dataset and in turn confuse assemblers into making incorrect choices. It is also worth noting that some assemblers perform their own error correction, for example, ALLPATHS-LG and SPAdes, meaning that additional error correction via tools such as Quake would not be significantly beneficial. It is a complicated topic and the decision to quality trim and error correct reads is left to the user. RAMPART makes it simpler to incorporate this kind of process into the assembly pipeline for assemblers which don't already have this option built in. The authors' advice is to learn the error correction and assembly tools well, understand how they work and understand your data. Finally, RAMPART offers a suitable platform for testing these combinations out, so if you have the time and computational resources, it might be worth experimenting with different permutations. Whilst new tools will be added as and when they are needed, currently MECQ supports the following tools: * Sickle V1.1 * Quake V0.3 * Musket V1.0 * TrimGalore V0.4 An example XML snippet demonstrating how to run two different tools in parallel, one on two datasets, the other on a single dataset: ``` <mecq parallel="false"> <ecq name="sickle_agressive" tool="SICKLE_V1.2" libs="lib1896-pe,lib1897-mp"/> <ecq name="quake" tool="QUAKE_V0.3" libs="lib1896-pe1" threads="4" memory="8000"/> </mecq> ``` MECQ produces output in the `mecq` directory created in the specified job output directory. The directory will contain sub-directories relating to each `ecq` element described in the XML snippet, then further sub-directories relating to the specified libraries used for that `ecq`. The next steps in the pipeline (KMER and MASS) know how to read this directory structure to get their input automatically. ##### Adding other command line arguments to the error corrector[¶](#adding-other-command-line-arguments-to-the-error-corrector) MECQ offers two ways to add command line arguments to the error corrector. The first is via a POSIX format string containing command line options/arguments that should be checked/validated as soon as the configuration file is parsed. 
Checked arguments undergo a limited amount of validation to check the argument name is recognized and that the argument values (if required) are plausible. The second method is to add a string containing unchecked arguments which are passed directly to the tool verbatim. This second method is not recommended in general because any syntax error in the options will only register once the tool starts running, which may be well into the workflow. However, it is useful for working around problems that can't be easily fixed in any other way. For example, checked args only work if the developer has properly implemented handling of the argument in the error corrector wrapper. If this has not been implemented then the only way to work around the problem is to use unchecked arguments. The following example demonstrates how to set some checked and unchecked arguments for Quake: ``` <mecq> <ecq name="quake" tool="QUAKE_V0.3" threads="16" memory="16000" libs="lib1896-pe1" checked_args="-k 19 -q 30 --hash_size=10000000" unchecked_args="-l --log"/> </mecq> ``` Note that we use POSIX format for the checked arguments, regardless of what the underlying tool typically would expect. Unchecked arguments are passed verbatim to the tool. You should also ensure that care is taken not to override variables, otherwise unpredictable behaviour will occur. In general, options related to input libraries, threads/cpus and memory values are set separately. Also take care not to override the same options in both checked_args and unchecked_args. #### Analysing reads[¶](#analysing-reads) This stage analyses all datasets, both the raw datasets and those, if any, which have been produced by the MECQ stage. Currently, the only analysis option provided involves a kmer analysis, using tools called jellyfish and KAT. This process will produce GC vs kmer frequency plots, which can highlight potential contamination and indicate whether you have sufficient coverage in your datasets for assembling your genome. The user has the option to control the number of threads and amount of memory to request per process, and whether or not the kmer counting for each dataset should take place in parallel. An example of this is shown below: ``` <analyse_reads kmer="true" parallel="true" threads="16" memory="4000"/> ``` Note: This step is required if you wish to count kmers in the assemblies and compare the kmer content of reads to assemblies. See the MASS section for more details. #### MASS - Multiple Assembly Creation[¶](#mass-multiple-assembly-creation) This tool enables the user to try different assemblers with different settings. Currently, the following assemblers are supported by RAMPART (brackets below indicate the tool name to use in the config file - case insensitive): * Abyss V1.5 (ABYSS_V1.5) * ALLPATHS-LG V50xxx (ALLPATHSLG_V50) * Platanus V1.2 (Platanus_Assemble_V1.2) * SOAP denovo V2.04 (SOAP_Assemble_V2.4) * SPAdes V3.1 (Spades_V3.1) * Velvet V1.2 (Velvet_V1.2) * Discovar V51xxx (Discovar_V51XXX) A simple MASS job might be configured as follows: ``` <kmer_calc threads="32" memory="20000"/> <mass> <job name="abyss-raw-kmer" tool="ABYSS_V1.5" threads="16" memory="4000" exp_walltime="60"> <inputs> <input ecq="raw" lib="pe1"/> </inputs> </job> </mass> ``` This instructs RAMPART to run a single Abyss assembly using 16 threads, requesting 4GB RAM, expecting to run for 60mins, using the optimal kmer value determined by kmergenie on the raw pe1 dataset. The kmer_calc stage looks ahead and runs on the dataset configurations used by each MASS job. 
In the job element there are two required attributes: “name” and “tool”. The name attribute is primarily used as the name of the output directory for this job, but it also provides a way of referring to this job from other parts of the pipeline. The tool attribute must represent one of the supported assemblers, and take one of the assembler values defined at the start of this chapter, or in the environment config section of the documentation. There are also several optional attributes: “threads”, “memory”, “exp_walltime”, “checked_args”, “unchecked_args”. The value entered for threads will be passed to the tool and the scheduler to define the number of threads required for this job. memory may get passed to the tool, depending on whether the tool requires it, but will always get passed to the scheduler. exp_walltime will just go to the scheduler. It's important to understand how your scheduler works before entering these values. The intention is that these figures represent guidelines to help schedulers such as LSF organise their workload fairly. However, other schedulers may treat these as hard limits. For example, on PBS there is no notion of “expected” walltime, only a hard limited walltime, so we double the value entered here in order to create a conservative hard limit instead. Checked and unchecked args are described later in this section. ##### Varying kmers for De Bruijn Graph assemblers[¶](#varying-kmers-for-de-bruijn-graph-assemblers) Many De Bruijn graph assemblers require you to specify a parameter that defines the kmer size to use in the graph. It is not obvious before running the assembly which kmer value will work best, so a common approach to the problem is to try many kmers to “optimise” the kmer parameter. RAMPART allows the user to do this in two different ways. First, RAMPART supports kmergenie. If the user enters the kmergenie element in the mass element then kmergenie is used to determine the best kmer values to use for each distinct mass configuration. For example, if the same single dataset is used for each mass job then kmergenie is run once, and that optimal kmer value is passed on to each mass job. If different datasets are used for building contigs in different mass jobs then RAMPART will automatically work out which combinations kmergenie needs to be run on to drive the pipeline. The alternative way is to manually specify which kmer to use or to request a kmer spread, i.e. to define the range of kmer values that should be tried. This may be necessary if, for example, you would like to do your own analysis of the resultant assemblies, or if kmergenie fails on your dataset. If the user specifies both kmergenie and a manual kmer spread, then the manual kmer spread will override the kmergenie recommendation. The snippet below shows how to run Abyss using a spread of kmer values: ``` <mass> <job name="abyss-raw-kmer" tool="ABYSS_V1.5" threads="16" memory="4000"> <kmer min="61" max="101" step="COARSE"/> <inputs> <input ecq="raw" lib="pe1"/> </inputs> </job> </mass> ``` As you can see the XML element starting `<kmer` has been modified to specify a min, max and step value. Min and max obviously set the limits of the kmer range. You can omit the min and/or max values. If so, the default min value is set to 35 and the default max value will be automatically determined by the provided input libraries. Specifically, the default max K will be 1 less than the read length of the library with the smallest read length. 
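For illustration (this snippet is not taken from the original manual), a kmer element that specifies only the step and relies on the default minimum and automatically derived maximum described above might look like this: ``` <kmer step="MEDIUM"/> ```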
The step value controls how large the step should be between each assembly. The valid options include any integer between 2 and 100. We also provide some special keywords to define the step size: `FINE,MEDIUM,COARSE`, which correspond to steps of `4,10,20` respectively. Alternatively, you can simply specify a list of kmer values to test. The following examples all represent the same kmer range (61,71,81,91,101): ``` <kmer min="61" max="101" step="10"/> <kmer min="61" max="101" step="MEDIUM"/> <kmer list="61,71,81,91,101"/> ``` Note: Depending on the assembler used and the values specified for the kmer range, the actual assemblies generated may be executed with slightly different values. For example some assemblers do not allow you to use kmers of even value. Others may try to optimise the k parameter themselves. We therefore make a best effort to match the requested RAMPART kmer range to the actual kmer range executed by the assembler. Assemblers such as SPAdes and Platanus have their own K optimisation strategies. In these cases, instead of running multiple instances of these assemblers, RAMPART will run a single instance, and translate the kmer range information into the parameters suitable for these assemblers. Some De Bruijn graph assemblers, such as ALLPATHS-LG, recommend that you do not modify the kmer value. In these cases RAMPART lets the assembler manage the k value. If the selected assembler does require you to specify a k value, and you omit the kmer element from the config, then RAMPART specifies a default kmer spread for you. This will have a min value of 35, the max is automatically determined from the provided libraries as described above, and the step is 20 (COARSE). ##### Varying coverage[¶](#varying-coverage) In the past, sequencing was expensive and slow, which meant sequencing coverage of a genome was relatively low. In those days, you would typically use all the data you could get in your assembly. These days, sequencing is relatively cheap and it is often possible to over-sequence, to the point where the gains in terms of separating signal from noise become irrelevant. Typically, a sequencing depth of 100X is more than sufficient for most purposes. Furthermore, over-sequencing doesn't just present problems in terms of data storage, RAM usage and runtime, it can also degrade the quality of some assemblies. One common reason for failed assemblies with high coverage occurs when trying to assemble DNA sequenced from populations rather than a single individual. The natural variation in the data can make it impossible to construct unambiguous seed sequences to start a *De Bruijn* graph. Therefore RAMPART offers the ability to randomly subsample the reads to a desired level of coverage. It does this either by using the assembler's own subsampling functionality if present (ALLPATHS-LG does have this functionality), or by using an external tool developed by TGAC if the assembler doesn't have this functionality. In both cases the user's interface to this is identical, and an example is shown below: ``` <mass> <job name="abyss-raw-cvg" tool="ABYSS_V1.5" threads="16" memory="4000"> <coverage min="50" max="100" step="MEDIUM" all="true"/> <inputs> <input ecq="raw" lib="pe1"/> </inputs> </job> </mass> ``` This snippet says to run Abyss varying the coverage between 50X and 100X using a medium step. It also says to run an Abyss assembly using all the reads. 
The step option has the following valid values: `FINE, MEDIUM, COARSE`, which correspond to steps of `10X, 25X, 50X` respectively. If the user does not wish to run an assembly with all the reads, then they should set the all option to false. ##### Varying other variables[¶](#varying-other-variables) MASS provides a mechanism to vary most parameters of any assembler. This is done with the `var` element, and there can be only one `var` element per MASS job. The parameter name should be specified by an attribute called `name` in that element and the values to test should be put in a single comma separated string under an attribute called `values`. For example, should you wish to alter the coverage cutoff parameter in the Velvet assembler you might write something like this: ``` <mass> <job name="velvet-cc" tool="VELVET_V1.2" threads="16" memory="8000"> <kmer list="75"/> <var name="cov_cutoff" values="2,5,10,auto"/> <inputs> <input ecq="raw" lib="pe1"/> </inputs> </job> </mass> ``` Note that in this example we set the kmer value to 75 for all tests. If the kmer value is not specified then the default for the assembler will be used. ##### Using multiple input libraries[¶](#using-multiple-input-libraries) You can add more than one input library for most assemblers. You can specify additional libraries to the MASS job by simply adding additional `input` elements inside the `inputs` element. MASS supports the ALLPATHS-LG assembler, which has particular requirements for its input: a so-called fragment library and a jumping library. In RAMPART nomenclature, we would refer to a fragment library as an overlapping paired end library, and a jumping library as either a paired end or mate pair library. ALLPATHS-LG also has the concept of a long jump library and a long library. RAMPART will translate mate pair libraries with an insert size > 20KBP as long jump libraries and single end reads longer than 500BP as long libraries. A simple example of an ALLPATHS-LG run, using a single fragment and jumping library, is shown below: ``` <mass> <job name="allpaths-raw" tool="ALLPATHSLG_V50" threads="16" memory="16000"> <inputs> <input ecq="raw" lib="ope1"/> <input ecq="raw" lib="mp1"/> </inputs> </job> </mass> ``` ##### Multiple MASS runs[¶](#multiple-mass-runs) It is possible to ask MASS to conduct several MASS runs. You may wish to do this for several reasons. The first might be to compare different assemblers; another might be to vary the input data being provided to a single assembler. The example below shows how to run a spread of Abyss assemblies and a single ALLPATHS assembly on the same data: ``` <mass parallel="true"> <job name="abyss-raw-kmer" tool="ABYSS_V1.5" threads="16" memory="4000"> <kmer min="65" max="85" step="MEDIUM"/> <inputs> <input ecq="raw" lib="ope1"/> <input ecq="raw" lib="mp1"/> </inputs> </job> <job name="allpaths-raw" tool="ALLPATHSLG_V50" threads="16" memory="16000"> <inputs> <input ecq="raw" lib="ope1"/> <input ecq="raw" lib="mp1"/> </inputs> </job> </mass> ``` Note that the attribute in MASS called `parallel` has been added and set to true. This says to run the Abyss and ALLPATHS assemblies in parallel in your environment. Typically, you would be running on a cluster or some other HPC architecture when doing this. 
The next example shows running two sets of Abyss assemblies (not in parallel this time), each varying kmer values in the same way, but with one set running on error corrected data and the other on raw data: ``` <mass parallel="false"> <job name="abyss-raw-kmer" tool="ABYSS_V1.5" threads="16" memory="4000"> <kmer min="65" max="85" step="MEDIUM"/> <inputs> <input ecq="raw" lib="pe1"/> </inputs> </job> <job name="abyss-quake-kmer" tool="ABYSS_V1.5" threads="16" memory="4000"> <kmer min="65" max="85" step="MEDIUM"/> <inputs> <input ecq="quake" lib="pe1"/> </inputs> </job> </mass> ``` ##### Adding other command line arguments to the assembler[¶](#adding-other-command-line-arguments-to-the-assembler) MASS offers two ways to add command line arguments to the assembler. The first is via a POSIX format string containing command line options/arguments that should be checked/validated as soon as the configuration file is parsed. Checked arguments undergo a limited amount of validation to check the argument name is recognized and that the argument values (if required) are plausible. The second method is to add a string containing unchecked arguments which are passed directly to the assembler verbatim. This second method is not recommended in general because any syntax error in the options will only register once the assembler starts running, which may be well into the workflow. However, it is useful for working around problems that can't be easily fixed in any other way. For example, checked args only work if the developer has properly implemented handling of the argument in the assembler wrapper script. If this has not been implemented then the only way to work around the problem is to use unchecked arguments. The following example demonstrates how to set some checked and unchecked arguments for Abyss: ``` <mass> <job name="abyss" tool="ABYSS_V1.5" threads="16" memory="16000" checked_args="-n 20 -t 250" unchecked_args="p=0.8 q=5 s=300 S=350"> <kmer list="83"/> <inputs> <input ecq="raw" lib="ope1"/> <input ecq="raw" lib="mp1"/> </inputs> </job> </mass> ``` Note that we use POSIX format for the checked arguments, regardless of what the underlying tool typically would expect. Unchecked arguments are passed verbatim to the tool. You should also ensure that care is taken not to override variables, otherwise unpredictable behaviour will occur. In general, options related to input libraries, threads/cpus, memory and kmer values are set separately. Also remember not to override arguments that you may be varying using a `var` element. ##### Navigating the directory structure[¶](#navigating-the-directory-structure) Once MASS starts it will create a directory within the job's output directory called `mass`. Inside this directory you might expect to see something like this: ``` - <Job output directory> -- mass --- <mass_job_name> --- <assembly> (contains output from the assembler for this assembly) --- ... --- unitigs (contains links to unitigs for each assembly and analysis of unitigs) --- contigs (contains links to contigs for each assembly and analysis of contigs) --- scaffolds (contains links to scaffolds for each assembly and analysis of scaffolds) --- ... ``` The directory structure is created as the assemblers run, so the full file structure may not be visible straight after MASS starts. Also, we create the symbolic links to unitigs, contigs and scaffolds on an as-needed basis. Some assemblers may not produce certain types of assembled sequences and in those cases we do not create the associated links directory. 
##### Troubleshooting[¶](#troubleshooting) Here are some issues that you might run into during the MASS stage: 1. ABySS installed but without MPI support. RAMPART requires ABySS to be configured with openmpi in order to use parallelisation in ABySS. If you encounter the following error message please reinstall ABySS and specify the `--with-mpi` option during configuration: ``` mpirun was unable to find the specified executable file, and therefore did not launch the job. This error was first reported for process rank 0; it may have occurred for other processes as well. NOTE: A common cause for this error is misspelling a mpirun command line parameter option (remember that mpirun interprets the first unrecognized command line token as the executable). ``` #### Analyse assemblies[¶](#analyse-assemblies) RAMPART currently offers 3 assembly analysis options: * Contiguity * Kmer read-assembly comparison * Completeness These types of analyses can be executed in either the `analyse_mass` or `analyse_amp` pipeline element. The available tool options for the analyses are: QUAST, KAT, CEGMA. QUAST compares the assemblies from a contiguity perspective. This tool runs very fast and produces statistics such as the N50, assembly size and maximum sequence length. It also produces a nice html report showing cumulative length distribution curves for each assembly and GC content curves. KAT performs a kmer count on the assembly using Jellyfish and, assuming kmer counting was requested on the reads previously, will use the Kmer Analysis Toolkit (KAT) to create a comparison matrix comparing kmer counts in the reads to those in the assembly. This can be visualised later using KAT to show how much of the content in the reads has been assembled and how repetitive the assembly is. Repetition could be due to heterozygosity in diploid genomes, so please read the KAT manual and walkthrough guide to get a better understanding of how to interpret this data. Note that information from KAT is not automatically used for selecting the best assembly at present. See the next section for more information about automatic assembly selection. CEGMA aligns highly conserved eukaryotic genes to the assembly. CEGMA produces a statistic which represents an estimate of gene completeness in the assembly, i.e. if CEGMA maps 95% of the conserved genes to the assembly we can assume that the assembly is approximately 95% complete. This is a very rough guide and shouldn't be taken literally, but it can be useful when comparing other assemblies made from the same data. CEGMA has a couple of other disadvantages however: first, it is quite slow; second, it only works on eukaryotic organisms, so it is useless for bacteria. An example snippet for a simple and fast contiguity based analysis is as follows: ``` <analyse_mass> <tool name="QUAST" threads="16" memory="4000"/> </analyse_mass> ``` In fact we strongly recommend you use QUAST for all your analyses. Most of RAMPART's system for scoring assemblies (see below) is derived from QUAST metrics and it also runs very fast. The runtime is insignificant when compared to the time taken to create assemblies. For a more complete analysis, which will take a significant amount of time for each assembly, you can request KAT and CEGMA. 
An example showing the use of all analysis tools is as follows: ``` <analyse_mass parallel="false"> <tool name="QUAST" threads="16" memory="4000"/> <tool name="KAT" threads="16" memory="50000" parallel="true"/> <tool name="CEGMA" threads="16" memory="20000"/> </analyse_mass> ``` Note that you can apply `parallel` attributes to both the `analyse_mass` element and individual tool elements. This enables you to select those processes to be run in parallel where possible. Setting `parallel="true"` for the `analyse_mass` element will override `parallel` attribute values for specific tools. ##### Selecting the best assembly[¶](#selecting-the-best-assembly) Assuming at least one analysis option is selected, RAMPART will produce a summary file and a tab separated value file listing metrics for each assembly, along with scores relating to the contiguity, conservation and problem metrics, and a final overall score for each assembly. Each score is given a value between 0.0 and 1.0, where higher values represent better assemblies. The assembly with the highest score is then automatically selected as the **best** assembly to be used downstream. The group scores and the final scores are derived from underlying metrics and can be adjusted to have different weightings applied to them. This is done by specifying a weighting file to use in the RAMPART pipeline. By default RAMPART applies its own weightings, which can be found at `<rampart_dir>/etc/weightings.tab`, so to run the assembly selection stage with default settings the user simply needs to add the following element to the pipeline: ``` <select_mass/> ``` Should the user wish to override the default weights that are assigned to each assembly metric, they can do so by setting the `weightings_file` attribute. For example, using an absolute path to a custom weightings file the XML snippet may look like this: ``` <select_mass weightings_file="~/.tgac/rampart/custom_weightings.tab"/> ``` The weightings file is a key value pair file, with keys and values separated by the '=' character. Comment lines can start with the '#' character. Most metrics are derived from QUAST results, except for the core eukaryote genes detection score which is gathered from CEGMA. Note that some metrics from QUAST will only be used in certain circumstances. For example, the na50 and nb_ma_ref metrics are only used if a reference is supplied in the organism element of the configuration file. Additionally, the nb_bases, nb_bases_gt_1k and gc% metrics are used only if the user has supplied either a reference, or has provided an estimated size and / or estimated gc% for the organism respectively. TODO: Currently the kmer metric is not included. In the future this will offer an alternative means of assessing assembly completeness. The file best.fa is particularly important as this is the assembly that will be taken forward to the second half of the pipeline (from the AMP stage). Although we have found the scoring system to be generally quite useful, we strongly recommend that users make their own assessment as to which assembly to take forward, as we acknowledge that the scoring system is biased by outlier assemblies. For example, consider three assemblies with an N50 of 1000, 1100 and 1200 bp, with scaled scores of 0, 0.5 and 1. We add a fourth assembly, which does poorly and is disregarded by the user, with an N50 of 200 bp. Now the scaled N50 scores of the assemblies are 0, 0.8, 0.9 and 1. 
Even though the user has no intention of using that poor assembly, the effective weight of the N50 metric among the three good assemblies has decreased drastically, by a factor of (1 - 0) / (1 - 0.8) = 5. It is also possible that the assembly selected as the best would change by adding an irrelevant assembly. For example, consider two metrics, a and b, with even weights of 0.5, first for three assemblies and then again for four assemblies after adding a fourth irrelevant assembly which performs worst in both metrics. Three assemblies: a = {1000, 1100, 1200}, b = {0, 10, 8}; sa = {0, 0.5, 1}, sb = {0, 1, 0.8}; fa = {0, 0.75, 0.9}; best = 0.9, the assembly with a = 1200. Four assemblies: a = {200, 1000, 1100, 1200}, b = {0, 0, 10, 8}; sa = {0, 0.8, 0.9, 1}, sb = {0, 0, 1, 0.8}; fa = {0, 0.4, 0.95, 0.9}; best = 0.95, the assembly with a = 1100. By adding a fourth irrelevant assembly, the choice of the best assembly has changed. To reiterate, we recommend that the user double-check the results provided by RAMPART and if necessary overrule the choice of assembly selected for further processing. This can be done, i.e. starting from the AMP stage with a user selected assembly, by using the following command: `rampart -2 -a <path_to_assembly> <path_to_job_config>`. ##### Analysing assemblies produced by AMP[¶](#analysing-assemblies-produced-by-amp) In addition to analysing assemblies produced by the MASS stage, the same set of analyses can also be applied to assemblies produced by the AMP stage. The same set of attributes that can be applied to `analyse_mass` can be applied to `analyse_amp`. In addition, it is also possible to specify an additional attribute: `analyse_all`, which instructs RAMPART to analyse assemblies produced at every stage of the AMP pipeline. By default only the final assembly is analysed. Also note that there is no need to select assemblies from AMP, so there is no corresponding `select_amp` element. #### AMP - Assembly Improver[¶](#amp-assembly-improver) This stage takes a single assembly as input and tries to improve it. For example, additional scaffolding and gap filling can be performed at this stage. Currently, AMP supports the following tools: * Platanus_Scaffold_V1.2 * SSPACE_Basic_V2.0 * SOAP_Scaffold_V2.4 * Platanus_Gapclose_V1.2 * SOAP_GapCloser_V1.12 * Reapr_V1 AMP stages accept `threads` and `memory` attributes just like MASS and MECQ. A simple XML snippet describing a scaffolding and gap closing process is shown below: ``` <amp> <stage tool="SSPACE_Basic_V2.0" threads="8" memory="16000"> <inputs> <input name="mp1" ecq="raw"/> </inputs> </stage> <stage tool="SOAP_GapCloser_V1.12" threads="6" memory="8000"> <inputs> <input name="pe1" ecq="raw"/> </inputs> </stage> </amp> ``` Each stage in the AMP pipeline must necessarily run linearly as each stage requires the output from the previous stage. The user can specify additional arguments to each tool by adding the `checked_args` attribute to each stage. For example, to specify that SSPACE should use PE reads to extend into gaps and to cap the minimum contig length to scaffold at 1KB: ``` <amp> <stage tool="SSPACE_Basic_V2.0" threads="16" memory="32000" checked_args="-x 1 -z 1000"> <inputs> <input name="mp1" ecq="raw"/> </inputs> </stage> <stage tool="SOAP_GapCloser_V1.12" threads="6" memory="8000"> <inputs> <input name="pe1" ecq="raw"/> </inputs> </stage> </amp> ``` Output from AMP will be placed in a directory called `amp` within the job's output directory. 
Output from each stage will be placed in a sub-directory within this, and a link will be created to the final assembly produced at the end of the AMP pipeline. This assembly will be used in the next stage. Some assembly enhancement tools, such as the Platanus scaffolder, can make use of bubble files in certain situations to provide better scaffolding performance. When you are assembling a diploid organism (i.e. the ploidy attribute of your organism element is set to “2”) and the assembler used in the MASS step produces bubble files, then these are automatically passed on to the relevant AMP stage. Some assembly enhancement tools require the input contigs file to have the fasta headers formatted in a particular way, and sometimes with special information embedded within it. For example, the SOAP scaffolder cannot process input files that contain gaps, and input for the Platanus scaffolder must contain the kmer coverage value of each contig in its header. Where possible RAMPART tries to automatically reformat these files so they are suitable for the assembly enhancement tool. However, not all permutations are catered for, and some combinations are probably not possible. If you are aware of any compatibility issues between contig assemblers and assembly enhancement tools that RAMPART is not currently addressing correctly, then please raise a ticket on the RAMPART github page: <https://github.com/TGAC/RAMPART/issues>, with details and we will try to fix the issue in a future version. In the future we plan to make the AMP stage more flexible so that it can handle parameter optimisation like the MASS stage. ##### Troubleshooting[¶](#troubleshooting) If you encounter an error message relating to “finalFusion” not being available during initial checks, it is likely that you need to find the “finalFusion” executable and install this along with the SOAP scaffolder. This can be difficult to find, and note this is not the same as SOAPFusion. If you cannot find this online, please contact [<EMAIL>](mailto:daniel.mapleson%40tgac.ac.uk). #### Finaliser[¶](#finaliser) The final step in the RAMPART pipeline is to finalise the assembly. Currently, this simply involves standardising the assembly filename and headers in the fasta file. The finaliser can be invoked like this to use the specified prefix: ``` <finalise prefix="E.coli_Sample1_V1.0"/> ``` If a prefix is not specified RAMPART will build its own prefix based on the information provided in the job configuration file, particularly from the organism details. The finalising process will take scaffolds and artificially break them into contigs where there are large gaps present. The size of the gap required to trigger a break into contigs is defined by the `min_n` attribute. By default this is set to 10. An example that applies a prefix and breaks scaffolds into contigs only on gaps larger than 20 is shown below: ``` <finalise prefix="E.coli_Sample1_V1.0" min_n="20"/> ``` The input to this stage will either be the best assembly selected from MASS, or the final assembly produced by AMP, depending on how you've set up your job. 
The output from this stage will be as follows: * `<prefix>.scaffolds.fa` (the final assembly which can be used for annotation and downstream analysis) * `<prefix>.contigs.fa` (the final set of scaffolds broken up where stretches of N's exceed a certain limit) * `<prefix>.agp` (a description of where the contigs fit into the scaffolds) * `<prefix>.translate` (how the fasta header names translate back to the input assembly) All the output files are compressed into a single tarball by default. You can turn this functionality off by adding the `compress="false"` attribute to the finaliser. ### Potential runtime problems[¶](#potential-runtime-problems) There are a few issues that can occur during execution of RAMPART which may prevent your jobs from completing successfully. This part of the documentation attempts to list common problems and suggests workarounds or solutions: * Quake fails In this case, if you have set the Quake k value high you should try reducing it, probably to the default value unless you know what you are doing. Also Quake can only work successfully if you have sufficient sequencing depth in your dataset. If this is not the case then you should either obtain a new dataset or remove Quake error correction from your RAMPART configuration and try again. * Kmergenie fails Often this occurs for the same reasons as Quake, i.e. inadequate coverage. Check that you have correctly set the ploidy value for your organism in the configuration file (kmergenie only supports haploid or diploid genomes, i.e. ploidy of 1 or 2; for polyploid genomes you are on your own!). Also keep in mind that should you remove kmergenie from your pipeline and manually set a kmer value for an assembler, it is unlikely that your assembly will be very contiguous, but RAMPART allows you to try things out and you may be able to assemble some useful data. * Pipeline failed at a random point during execution of one of the external tools In this case check your system. Ensure that the computing systems are all up and running, that there have been no power outages and that you have plenty of spare disk space. RAMPART can produce a lot of data for non-trivial genomes so please ensure you have plenty of spare disk space before starting a job. Multi-sample projects[¶](#multi-sample-projects) --- Due to the falling cost and increasing capacity of sequencing devices, as well as improvements in the automation of library preparation, it is now possible to sequence many strains of the same species. Assembling these strains in isolation can be time consuming and error prone, even with pipelines such as RAMPART. We therefore provide some additional tools to help the bioinformatician manage these projects and produce reasonable results in short time frames. The logic we use to approach multi-sample projects is as follows: 1. Use jellyswarm to assess all samples based on their distinct kmer count after rare kmers (sequencing errors) are excluded. If there is little agreement between samples then more analysis is required. If there is good agreement then carry on to 2. 2. Exclude outlier samples. These must be assembled and analysed separately. 3. Use RAMPART to help develop an assembly recipe for a few of the more typical strains (strains where the distinct kmer count is close to the mean) to attempt to identify optimal parameters, particularly the k parameter. If there is little agreement in parameters between samples then all strains must be looked at in isolation. If there is good agreement carry on to 4. 4. 
Use RAMPART in multi-sample configuration to execute the recipe for all strains. Note: This is by no means the only way to approach these projects, nor will it necessarily give the best results, but it should allow a single bioinformatician to produce reasonable assemblies for a project with hundreds or thousands of samples within a reasonable timeframe given appropriate computational resources. ### Jellyswarm[¶](#jellyswarm) Jellyswarm uses the jellyfish K-mer counting program to count K-mers found across multiple samples. Jellyswarm will attempt to exclude K-mers that are the result of sequencing errors from the results. It then analyses the number of distinct K-mers found across all samples and records basic statistics such as the mean and standard deviation of the distribution. The syntax for running Jellyswarm from the command line is: `jellyswarm [options] <directory_containing_samples>`. To get a list and description of available options type: `jellyswarm --help`. Upon starting, jellyswarm will search for an environment configuration file and a logging configuration file, exactly like RAMPART. Jellyswarm will then configure the execution environment as appropriate and then run the pipeline. Jellyswarm finds the fastq files associated with the samples by interrogating a directory containing all the files. It then sorts by name the files found in that directory with an ".fq" or ".fastq" extension. By default we assume paired end sequencing was used and group each pair of fastq files together. If you have interleaved fastq files or have single end data then activate the single end mode by using the `-1` command line option. Jellyswarm can also interrogate all subdirectories in the parent directory for fastq files by using the `-r` command line option. You can control the amount of resources jellyswarm uses on your HPC environment by using a few other options. The `-m` option allows you to specify a memory limit for each jellyfish count instance that is run. The amount of memory required will be determined by the size of your hash and the size of the genome in question. Depending on the particular environment used this limit may either represent a hard limit, i.e. if exceeded the job will fail (this is the case on PBS), or it may represent a resource reservation whereby this amount of memory is reserved for the running job (this is the case on LSF). For LSF, your job may or may not fail if the memory limit is exceeded, depending on the availability of memory on the node on which your job is running. The number of threads per process is controlled using the `-t` option. Finally, you can k-mer count all samples in parallel by using the appropriate command line option (see `jellyswarm --help` for details). ### RAMPART multi-sample mode[¶](#rampart-multi-sample-mode) RAMPART can be executed in multi-sample mode by removing the `libraries` element from the configuration file and replacing it with a `samples` element containing a `file` attribute describing the path to a file containing a list of sample libraries to process. For example: ``` <samples file="reads.lst"/> ``` The file containing the sample libraries should be a tab separated file with columns describing the following: 1. Sample name 2. Phred 3. Path to R1 file 4. 
Path to R2 file For example: ``` PRO461_S10_B20 PHRED_33 S10_B20_R1.fastq S10_B20_R2.fastq PRO461_S10_D20 PHRED_33 S10_D20_R1.fastq S10_D20_R2.fastq PRO461_S10_F20 PHRED_33 S10_F20_R1.fastq S10_F20_R2.fastq PRO461_S11_H2 PHRED_33 S11_H2_R1.fastq S11_H2_R2.fastq ``` We may extend this format to include additional columns describing library options in the future. In addition to replacing the libraries element with the samples element, you should also add a `collect` element inside the `pipeline` element: ``` <collect threads="2" memory="5000"/> ``` A complete multi-sample configuration file might look like this: ``` <?xml version="1.0" encoding="UTF-8"?> <rampart author="dan" collaborator="someone" institution="tgac" title="Set of C.Coli assemblies"> <organism name="ccoli_nextera" ploidy="1"> <reference name="C.Coli_15-537360" path="Campylobacter_coli_15_537360.GCA_000494775.1.25.dna.genome.fa"/> </organism> <samples file="reads.lst"/> <pipeline parallel="true"> <mecq> <ecq name="tg" tool="TrimGalore_V0.4" threads="2" memory="5000"/> </mecq> <mass> <job name="spades_auto" tool="Spades_V3.1" threads="2" memory="10000"> <kmer min="71" max="111" step="COARSE"/> <coverage list="50"/> <inputs> <input ecq="tg"/> </inputs> </job> </mass> <mass_analysis> <tool name="QUAST" threads="2" memory="5000"/> </mass_analysis> <mass_select threads="2" memory="5000"/> <finalise prefix="Ccoli_Nextera"/> <collect threads="2" memory="5000"/> </pipeline> </rampart> ``` You can then start RAMPART in the normal way. RAMPART will output the stage directories as normal, but as subdirectories within a sample directory. Currently, there are also a number of supplementary scripts to aid the analysis of data across all samples and the annotation of each sample using PROKKA (<http://www.vicbioinformatics.com/software.prokka.shtml>). Note that PROKKA is only relevant for prokaryotic genomes. The scripts were designed for execution on LSF environments, so some modification of the scripts may be necessary should you wish to execute in other scheduled environments or on unscheduled systems. Whilst each script comes with its own help message and man page, we do not provide extensive documentation for these and leave it to the bioinformatician to tweak or reuse the scripts as they see fit. We plan to incorporate a mechanism into RAMPART to enable it to properly handle prokaryotic genome annotation in the future. Citing RAMPART[¶](#citing-rampart) --- To cite RAMPART please use the following reference: RAMPART: a workflow management system for de novo genome assembly <NAME>; <NAME>; <NAME> Bioinformatics 2015; doi: 10.1093/bioinformatics/btv056 Additional information: PMID: 25637556 Github page: <https://github.com/TGAC/RAMPART> Project page: <http://www.tgac.ac.uk/rampart/> Latest manual: <http://rampart.readthedocs.org/en/latest/index.html> License[¶](#license) --- RAMPART is available under the GNU GPL V3: <http://www.gnu.org/licenses/gpl.txt> For licensing details of other RAMPART dependencies please consult their own documentation. Contact[¶](#contact) --- <NAME> - Analysis Pipelines Project Leader at The Genome Analysis Centre (TGAC) Website: <http://www.tgac.ac.uk/bioinformatics/genome-analysis/daniel-mapleson/> Email: [<EMAIL>](mailto:<EMAIL>.ac.uk) Acknowledgements[¶](#acknowledgements) --- * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * And everyone who contributed to making the tools RAMPART depends on!
svenssonm
cran
R
Package ‘svenssonm’ October 14, 2022 Type Package Title Svensson's Method Version 0.1.0 Description Obtain parameters of Svensson's Method, including percentage agreement, systematic change and individual change. Also, the contingency table can be generated. Svensson's Method is a rank-invariant nonparametric method for the analysis of ordered scales which measures the level of change both from systematic and individual aspects. For the details, please refer to Svensson E. Analysis of systematic and random differences between paired ordinal categorical data [dissertation]. Stockholm: Almqvist & Wiksell International; 1993. License GPL-3 Encoding UTF-8 LazyData true RoxygenNote 6.0.1 NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2017-08-03 14:32:01 UTC R topics documented: con_t... 2 indichang... 2 p... 3 sresul... 4 syschang... 5 con_ta Contingency Table Generation Description Generate a contingency table for Svensson's Method Usage con_ta(x, y, level = 5) Arguments x a numeric vector of data values, each element ranges from 1 to level. y a numeric vector of data values, must have the same length as x. level the dimension of the contingency table, the default is 5. Value A contingency table based on x and y. See Also sresult for summary of Svensson's method analysis. Examples x <- c (1:5,5:1) y <- c(1:5,1,1,5,4,1) con_ta(x,y,) indichange Individual Change Description In Svensson's method, the individual change is described by the relative rank variance (RV), the observable part, and the internal rank variance (IV), the unobservable part, together. A measure of the closeness of observations to the rank transformable pattern of change is defined as the augmented correlation coefficient (ralpha) and its p-value. Usage rv(t) rvse(t) iv(t) ralpha(t) pralpha(t) Arguments t The contingency table for Svensson's method, a two-dimensional matrix. Value rv and iv give the RV and IV value. rvse gives the standard error of RV. ralpha and pralpha give the augmented correlation coefficient and the corresponding p-value. See Also con_ta for generating the contingency table. syschange for systematic change. sresult for summary of Svensson's method analysis. Examples x <- c (1:5,5:1) y <- c(1:5,1,1,5,4,1) z <- con_ta(x,y,) rv(z) rvse(z) iv(z) ralpha(z) pralpha(z) pa Percentage Agreement Description The percentage agreement (PA) shows the proportion of the subjects who did not change their choices. Usage pa(t) Arguments t The contingency table for Svensson's method, a two-dimensional matrix. Value pa gives the PA value; multiply by 100 to get a percentage number. See Also con_ta for generating the contingency table. sresult for summary of Svensson's method analysis. Examples x <- c (1:5,5:1) y <- c(1:5,1,1,5,4,1) z <- con_ta(x,y,) pa(z) sresult Summary for Svensson's Method Description List all the results for Svensson's Method, including percentage agreement, systematic change and individual change. Usage sresult(t) Arguments t The contingency table for Svensson's method, a two-dimensional matrix. Value sresult lists the results for Svensson's method. PA for percentage agreement, RP for relative position, RC for relative concentration, RV for relative rank variance, SE(RP), SE(RC), SE(RV) for the corresponding standard error and CI(RP), CI(RC), CI(RV) for the 95% confidence interval. IV for internal rank variance, R.Alpha for augmented correlation coefficient, P.R.Alpha for the corresponding p-value (significance level 0.05). 
See Also con_ta for generating the contingency table. Examples x <- c (1:5,5:1) y <- c(1:5,1,1,5,4,1) z <- con_ta(x,y,) sresult(z) syschange Systematic Change Description The value and the standard error of the relative position (RP), the systematic change in position between the two ordered categorical classifications. Also, the value and the standard error of the relative concentration (RC), a comprehensive evaluation of the systematic change. Usage rp(t) rpse(t) rc(t) rcse(t) Arguments t The contingency table for Svensson's method, a two-dimensional matrix. Value rp and rc give the RP and RC value. rpse and rcse give the standard error of RP and RC. See Also con_ta for generating the contingency table. indichange for individual change. sresult for summary of Svensson's method analysis. Examples x <- c (1:5,5:1) y <- c(1:5,1,1,5,4,1) z <- con_ta(x,y,) rp(z) rpse(z) rc(z) rcse(z)
@types/react-native-qrcode
npm
JavaScript
[Installation](#installation) === > `npm install --save @types/react-native-qrcode` [Summary](#summary) === This package contains type definitions for react-native-qrcode (<https://github.com/cssivision/react-native-qrcode>). [Details](#details) === Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/react-native-qrcode>. [index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/react-native-qrcode/index.d.ts) --- ``` import * as React from "react"; export default class QRCode extends React.Component<QRCodeProperties> {} export interface QRCodeProperties { value?: string | undefined; size?: number | undefined; bgColor?: string | undefined; fgColor?: string | undefined; } ``` ### [Additional Details](#additional-details) * Last updated: Wed, 18 Oct 2023 11:45:05 GMT * Dependencies: [@types/react](https://npmjs.com/package/@types/react) [Credits](#credits) === These definitions were written by [York Yao](https://github.com/plantain-00). Readme --- ### Keywords none
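For illustration only (this sketch is not part of the DefinitelyTyped package files), a minimal usage example of the typed component in a React Native project might look like the following; the component name `ProfileQR` and the encoded URL are hypothetical, and the file would need a `.tsx` extension:

```
import * as React from "react";
import QRCode from "react-native-qrcode";

// Renders a QR code using the optional props declared in QRCodeProperties:
// value (string to encode), size (pixels), bgColor and fgColor (colours).
export const ProfileQR = () => (
    <QRCode value="https://example.com/profile/42" size={200} bgColor="#ffffff" fgColor="#000000" />
);
```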
anki_connect
hex
Erlang
API Reference === [modules](#modules) Modules --- [AnkiConnect](AnkiConnect.html) This module delegates all functions to `AnkiConnect.Actions.*` and `AnkiConnect.Services.*` modules. [AnkiConnect.Actions.Deck](AnkiConnect.Actions.Deck.html) Deck actions. [AnkiConnect.Actions.Graphical](AnkiConnect.Actions.Graphical.html) Graphical actions. [AnkiConnect.Actions.Media](AnkiConnect.Actions.Media.html) Media actions. [AnkiConnect.Actions.Miscellaneous](AnkiConnect.Actions.Miscellaneous.html) Miscellaneous actions. [AnkiConnect.Actions.Model](AnkiConnect.Actions.Model.html) Model actions. [AnkiConnect.Actions.Note](AnkiConnect.Actions.Note.html) Note actions. [AnkiConnect.Actions.Statistic](AnkiConnect.Actions.Statistic.html) Statistic actions. [AnkiConnect.Services.AddNotesFromFile](AnkiConnect.Services.AddNotesFromFile.html) Provides functionality to upload notes from a given text file to Anki. [AnkiConnect.Specs.FileSpec](AnkiConnect.Specs.FileSpec.html) Includes type specs for file. [AnkiConnect.Specs.NoteSpec](AnkiConnect.Specs.NoteSpec.html) Includes type specs for note. [mix-tasks](#mix-tasks) Mix Tasks --- [mix anki_connect](Mix.Tasks.AnkiConnect.html) Provides functionality to interact via a command-line interface (CLI) with AnkiConnect, a plugin for the Anki flashcard application. AnkiConnect === This module delegates all functions to `AnkiConnect.Actions.*` and `AnkiConnect.Services.*` modules. Functions can be called from your Elixir application as follows: ``` AnkiConnect.add_note(%{ note: %{ deck_name: "TEST DECK", model_name: "Basic", fields: %{ Front: "front content", Back: "back content" } } }) ``` or from the command line using `mix anki_connect` task: ``` > mix anki_connect add_note --note='{"deck_name": "TEST DECK", "model_name": "Basic", "fields": {"Front": "front content", "Back": "back content"}}' 1684946786121 ``` another example: ``` > mix anki_connect add_notes_from_file --file="words.md" --deck="TEST DECK" [1684955655336, 1684955655337, ...] ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [add_note(param)](#add_note/1) See [`AnkiConnect.Actions.Note.add_note/1`](AnkiConnect.Actions.Note.html#add_note/1). [add_notes(param)](#add_notes/1) See [`AnkiConnect.Actions.Note.add_notes/1`](AnkiConnect.Actions.Note.html#add_notes/1). [add_notes_from_file(param)](#add_notes_from_file/1) See [`AnkiConnect.Services.AddNotesFromFile.add_notes_from_file/1`](AnkiConnect.Services.AddNotesFromFile.html#add_notes_from_file/1). [add_tags(param)](#add_tags/1) See [`AnkiConnect.Actions.Note.add_tags/1`](AnkiConnect.Actions.Note.html#add_tags/1). [api_reflect(param)](#api_reflect/1) See [`AnkiConnect.Actions.Miscellaneous.api_reflect/1`](AnkiConnect.Actions.Miscellaneous.html#api_reflect/1). [can_add_notes(param)](#can_add_notes/1) See [`AnkiConnect.Actions.Note.can_add_notes/1`](AnkiConnect.Actions.Note.html#can_add_notes/1). [card_reviews(param)](#card_reviews/1) See [`AnkiConnect.Actions.Statistic.card_reviews/1`](AnkiConnect.Actions.Statistic.html#card_reviews/1). [change_deck(param)](#change_deck/1) See [`AnkiConnect.Actions.Deck.change_deck/1`](AnkiConnect.Actions.Deck.html#change_deck/1). [clear_unused_tags()](#clear_unused_tags/0) See [`AnkiConnect.Actions.Note.clear_unused_tags/0`](AnkiConnect.Actions.Note.html#clear_unused_tags/0). [clone_deck_config_id(param)](#clone_deck_config_id/1) See [`AnkiConnect.Actions.Deck.clone_deck_config_id/1`](AnkiConnect.Actions.Deck.html#clone_deck_config_id/1). 
[create_deck(param)](#create_deck/1) See [`AnkiConnect.Actions.Deck.create_deck/1`](AnkiConnect.Actions.Deck.html#create_deck/1). [create_model(param)](#create_model/1) See [`AnkiConnect.Actions.Model.create_model/1`](AnkiConnect.Actions.Model.html#create_model/1). [deck_names()](#deck_names/0) See [`AnkiConnect.Actions.Deck.deck_names/0`](AnkiConnect.Actions.Deck.html#deck_names/0). [deck_names_and_ids()](#deck_names_and_ids/0) See [`AnkiConnect.Actions.Deck.deck_names_and_ids/0`](AnkiConnect.Actions.Deck.html#deck_names_and_ids/0). [delete_decks(param)](#delete_decks/1) See [`AnkiConnect.Actions.Deck.delete_decks/1`](AnkiConnect.Actions.Deck.html#delete_decks/1). [delete_media_file(param)](#delete_media_file/1) See [`AnkiConnect.Actions.Media.delete_media_file/1`](AnkiConnect.Actions.Media.html#delete_media_file/1). [delete_notes(param)](#delete_notes/1) See [`AnkiConnect.Actions.Note.delete_notes/1`](AnkiConnect.Actions.Note.html#delete_notes/1). [export_package(param)](#export_package/1) See [`AnkiConnect.Actions.Miscellaneous.export_package/1`](AnkiConnect.Actions.Miscellaneous.html#export_package/1). [find_and_replace_in_models(param)](#find_and_replace_in_models/1) See [`AnkiConnect.Actions.Model.find_and_replace_in_models/1`](AnkiConnect.Actions.Model.html#find_and_replace_in_models/1). [find_notes(param)](#find_notes/1) See [`AnkiConnect.Actions.Note.find_notes/1`](AnkiConnect.Actions.Note.html#find_notes/1). [get_collection_stats_html(param)](#get_collection_stats_html/1) See [`AnkiConnect.Actions.Statistic.get_collection_stats_html/1`](AnkiConnect.Actions.Statistic.html#get_collection_stats_html/1). [get_deck_config(param)](#get_deck_config/1) See [`AnkiConnect.Actions.Deck.get_deck_config/1`](AnkiConnect.Actions.Deck.html#get_deck_config/1). [get_deck_stats(param)](#get_deck_stats/1) See [`AnkiConnect.Actions.Deck.get_deck_stats/1`](AnkiConnect.Actions.Deck.html#get_deck_stats/1). [get_decks(param)](#get_decks/1) See [`AnkiConnect.Actions.Deck.get_decks/1`](AnkiConnect.Actions.Deck.html#get_decks/1). [get_latest_review_id(param)](#get_latest_review_id/1) See [`AnkiConnect.Actions.Statistic.get_latest_review_id/1`](AnkiConnect.Actions.Statistic.html#get_latest_review_id/1). [get_media_dir_path()](#get_media_dir_path/0) See [`AnkiConnect.Actions.Media.get_media_dir_path/0`](AnkiConnect.Actions.Media.html#get_media_dir_path/0). [get_media_files_names(param)](#get_media_files_names/1) See [`AnkiConnect.Actions.Media.get_media_files_names/1`](AnkiConnect.Actions.Media.html#get_media_files_names/1). [get_note_tags(param)](#get_note_tags/1) See [`AnkiConnect.Actions.Note.get_note_tags/1`](AnkiConnect.Actions.Note.html#get_note_tags/1). [get_num_cards_reviewed_by_day()](#get_num_cards_reviewed_by_day/0) See [`AnkiConnect.Actions.Statistic.get_num_cards_reviewed_by_day/0`](AnkiConnect.Actions.Statistic.html#get_num_cards_reviewed_by_day/0). [get_num_cards_reviewed_today()](#get_num_cards_reviewed_today/0) See [`AnkiConnect.Actions.Statistic.get_num_cards_reviewed_today/0`](AnkiConnect.Actions.Statistic.html#get_num_cards_reviewed_today/0). [get_profiles()](#get_profiles/0) See [`AnkiConnect.Actions.Miscellaneous.get_profiles/0`](AnkiConnect.Actions.Miscellaneous.html#get_profiles/0). [get_reviews_of_cards(param)](#get_reviews_of_cards/1) See [`AnkiConnect.Actions.Statistic.get_reviews_of_cards/1`](AnkiConnect.Actions.Statistic.html#get_reviews_of_cards/1). [get_tags()](#get_tags/0) See [`AnkiConnect.Actions.Note.get_tags/0`](AnkiConnect.Actions.Note.html#get_tags/0). 
[gui_add_cards(param)](#gui_add_cards/1) See [`AnkiConnect.Actions.Graphical.gui_add_cards/1`](AnkiConnect.Actions.Graphical.html#gui_add_cards/1). [gui_answer_card(param)](#gui_answer_card/1) See [`AnkiConnect.Actions.Graphical.gui_answer_card/1`](AnkiConnect.Actions.Graphical.html#gui_answer_card/1). [gui_browse(param)](#gui_browse/1) See [`AnkiConnect.Actions.Graphical.gui_browse/1`](AnkiConnect.Actions.Graphical.html#gui_browse/1). [gui_check_database()](#gui_check_database/0) See [`AnkiConnect.Actions.Graphical.gui_check_database/0`](AnkiConnect.Actions.Graphical.html#gui_check_database/0). [gui_current_card()](#gui_current_card/0) See [`AnkiConnect.Actions.Graphical.gui_current_card/0`](AnkiConnect.Actions.Graphical.html#gui_current_card/0). [gui_deck_browser()](#gui_deck_browser/0) See [`AnkiConnect.Actions.Graphical.gui_deck_browser/0`](AnkiConnect.Actions.Graphical.html#gui_deck_browser/0). [gui_deck_overview(param)](#gui_deck_overview/1) See [`AnkiConnect.Actions.Graphical.gui_deck_overview/1`](AnkiConnect.Actions.Graphical.html#gui_deck_overview/1). [gui_deck_review(param)](#gui_deck_review/1) See [`AnkiConnect.Actions.Graphical.gui_deck_review/1`](AnkiConnect.Actions.Graphical.html#gui_deck_review/1). [gui_edit_note(param)](#gui_edit_note/1) See [`AnkiConnect.Actions.Graphical.gui_edit_note/1`](AnkiConnect.Actions.Graphical.html#gui_edit_note/1). [gui_exit_anki()](#gui_exit_anki/0) See [`AnkiConnect.Actions.Graphical.gui_exit_anki/0`](AnkiConnect.Actions.Graphical.html#gui_exit_anki/0). [gui_selected_notes()](#gui_selected_notes/0) See [`AnkiConnect.Actions.Graphical.gui_selected_notes/0`](AnkiConnect.Actions.Graphical.html#gui_selected_notes/0). [gui_show_answer()](#gui_show_answer/0) See [`AnkiConnect.Actions.Graphical.gui_show_answer/0`](AnkiConnect.Actions.Graphical.html#gui_show_answer/0). [gui_show_question()](#gui_show_question/0) See [`AnkiConnect.Actions.Graphical.gui_show_question/0`](AnkiConnect.Actions.Graphical.html#gui_show_question/0). [gui_start_card_timer()](#gui_start_card_timer/0) See [`AnkiConnect.Actions.Graphical.gui_start_card_timer/0`](AnkiConnect.Actions.Graphical.html#gui_start_card_timer/0). [import_package(param)](#import_package/1) See [`AnkiConnect.Actions.Miscellaneous.import_package/1`](AnkiConnect.Actions.Miscellaneous.html#import_package/1). [insert_reviews(param)](#insert_reviews/1) See [`AnkiConnect.Actions.Statistic.insert_reviews/1`](AnkiConnect.Actions.Statistic.html#insert_reviews/1). [load_profile(param)](#load_profile/1) See [`AnkiConnect.Actions.Miscellaneous.load_profile/1`](AnkiConnect.Actions.Miscellaneous.html#load_profile/1). [model_field_descriptions(param)](#model_field_descriptions/1) See [`AnkiConnect.Actions.Model.model_field_descriptions/1`](AnkiConnect.Actions.Model.html#model_field_descriptions/1). [model_field_fonts(param)](#model_field_fonts/1) See [`AnkiConnect.Actions.Model.model_field_fonts/1`](AnkiConnect.Actions.Model.html#model_field_fonts/1). [model_field_names(param)](#model_field_names/1) See [`AnkiConnect.Actions.Model.model_field_names/1`](AnkiConnect.Actions.Model.html#model_field_names/1). [model_fields_on_templates(param)](#model_fields_on_templates/1) See [`AnkiConnect.Actions.Model.model_fields_on_templates/1`](AnkiConnect.Actions.Model.html#model_fields_on_templates/1). [model_names()](#model_names/0) See [`AnkiConnect.Actions.Model.model_names/0`](AnkiConnect.Actions.Model.html#model_names/0). 
* [model_names_and_ids()](#model_names_and_ids/0) See [`AnkiConnect.Actions.Model.model_names_and_ids/0`](AnkiConnect.Actions.Model.html#model_names_and_ids/0).
* [model_styling(param)](#model_styling/1) See [`AnkiConnect.Actions.Model.model_styling/1`](AnkiConnect.Actions.Model.html#model_styling/1).
* [model_template_add(param)](#model_template_add/1) See [`AnkiConnect.Actions.Model.model_template_add/1`](AnkiConnect.Actions.Model.html#model_template_add/1).
* [model_template_rename(param)](#model_template_rename/1) See [`AnkiConnect.Actions.Model.model_template_rename/1`](AnkiConnect.Actions.Model.html#model_template_rename/1).
* [model_template_reposition(param)](#model_template_reposition/1) See [`AnkiConnect.Actions.Model.model_template_reposition/1`](AnkiConnect.Actions.Model.html#model_template_reposition/1).
* [model_templates(param)](#model_templates/1) See [`AnkiConnect.Actions.Model.model_templates/1`](AnkiConnect.Actions.Model.html#model_templates/1).
* [multi(param)](#multi/1) See [`AnkiConnect.Actions.Miscellaneous.multi/1`](AnkiConnect.Actions.Miscellaneous.html#multi/1).
* [notes_info(param)](#notes_info/1) See [`AnkiConnect.Actions.Note.notes_info/1`](AnkiConnect.Actions.Note.html#notes_info/1).
* [reload_collection()](#reload_collection/0) See [`AnkiConnect.Actions.Miscellaneous.reload_collection/0`](AnkiConnect.Actions.Miscellaneous.html#reload_collection/0).
* [remove_deck_config_id(param)](#remove_deck_config_id/1) See [`AnkiConnect.Actions.Deck.remove_deck_config_id/1`](AnkiConnect.Actions.Deck.html#remove_deck_config_id/1).
* [remove_empty_notes(param)](#remove_empty_notes/1) See [`AnkiConnect.Actions.Note.remove_empty_notes/1`](AnkiConnect.Actions.Note.html#remove_empty_notes/1).
* [remove_tags(param)](#remove_tags/1) See [`AnkiConnect.Actions.Note.remove_tags/1`](AnkiConnect.Actions.Note.html#remove_tags/1).
* [replace_tags(param)](#replace_tags/1) See [`AnkiConnect.Actions.Note.replace_tags/1`](AnkiConnect.Actions.Note.html#replace_tags/1).
* [replace_tags_in_all_notes(param)](#replace_tags_in_all_notes/1) See [`AnkiConnect.Actions.Note.replace_tags_in_all_notes/1`](AnkiConnect.Actions.Note.html#replace_tags_in_all_notes/1).
* [request_permission()](#request_permission/0) See [`AnkiConnect.Actions.Miscellaneous.request_permission/0`](AnkiConnect.Actions.Miscellaneous.html#request_permission/0).
* [retrieve_media_file(param)](#retrieve_media_file/1) See [`AnkiConnect.Actions.Media.retrieve_media_file/1`](AnkiConnect.Actions.Media.html#retrieve_media_file/1).
* [save_deck_config(param)](#save_deck_config/1) See [`AnkiConnect.Actions.Deck.save_deck_config/1`](AnkiConnect.Actions.Deck.html#save_deck_config/1).
* [set_deck_config_id(param)](#set_deck_config_id/1) See [`AnkiConnect.Actions.Deck.set_deck_config_id/1`](AnkiConnect.Actions.Deck.html#set_deck_config_id/1).
* [store_media_file(param)](#store_media_file/1) See [`AnkiConnect.Actions.Media.store_media_file/1`](AnkiConnect.Actions.Media.html#store_media_file/1).
* [sync()](#sync/0) See [`AnkiConnect.Actions.Miscellaneous.sync/0`](AnkiConnect.Actions.Miscellaneous.html#sync/0).
* [update_model_styling(param)](#update_model_styling/1) See [`AnkiConnect.Actions.Model.update_model_styling/1`](AnkiConnect.Actions.Model.html#update_model_styling/1).
* [update_model_templates(param)](#update_model_templates/1) See [`AnkiConnect.Actions.Model.update_model_templates/1`](AnkiConnect.Actions.Model.html#update_model_templates/1).
* [update_note(param)](#update_note/1) See [`AnkiConnect.Actions.Note.update_note/1`](AnkiConnect.Actions.Note.html#update_note/1).
* [update_note_fields(param)](#update_note_fields/1) See [`AnkiConnect.Actions.Note.update_note_fields/1`](AnkiConnect.Actions.Note.html#update_note_fields/1).
* [update_note_tags(param)](#update_note_tags/1) See [`AnkiConnect.Actions.Note.update_note_tags/1`](AnkiConnect.Actions.Note.html#update_note_tags/1).
* [version()](#version/0) See [`AnkiConnect.Actions.Miscellaneous.version/0`](AnkiConnect.Actions.Miscellaneous.html#version/0).

AnkiConnect.Actions.Deck
===
Deck actions. All functions are delegated inside [`AnkiConnect`](AnkiConnect.html) module, so you should import them from there.
Summary
===
Functions
---
* [change_deck(map)](#change_deck/1) Moves cards with the given IDs to a different deck, creating the deck if it doesn’t exist yet.
* [clone_deck_config_id(map)](#clone_deck_config_id/1) Creates a new configuration group with the given name.
* [create_deck(map)](#create_deck/1) Creates a new empty deck. Will not overwrite a deck that exists with the same name.
* [deck_names()](#deck_names/0) Gets the complete list of deck names for the current user.
* [deck_names_and_ids()](#deck_names_and_ids/0) Gets the complete list of deck names and their respective IDs for the current user.
* [delete_decks(map)](#delete_decks/1) Deletes decks with the given names.
* [get_deck_config(map)](#get_deck_config/1) Gets the configuration group object for the given deck.
* [get_deck_stats(map)](#get_deck_stats/1) Gets statistics such as total cards and cards due for the given decks.
* [get_decks(map)](#get_decks/1) Accepts an array of card IDs and returns an object with each deck name as a key, and its value an array of the given cards which belong to it.
* [remove_deck_config_id(map)](#remove_deck_config_id/1) Removes the configuration group with the given ID.
* [save_deck_config(deck_config)](#save_deck_config/1) Saves the given configuration group.
* [set_deck_config_id(map)](#set_deck_config_id/1) Changes the configuration group for the given decks to the one with the given ID.

AnkiConnect.Actions.Graphical
===
Graphical actions. All functions are delegated inside [`AnkiConnect`](AnkiConnect.html) module, so you should import them from there.
Summary
===
Functions
---
* [gui_add_cards(map)](#gui_add_cards/1) Invokes the *Add Cards* dialog, presets the note using the given deck and model, with the provided field values and tags.
* [gui_answer_card(map)](#gui_answer_card/1) Answers the current card.
* [gui_browse(map)](#gui_browse/1) Invokes the *Card Browser* dialog and searches for a given query. Returns an array of identifiers of the cards that were found.
* [gui_check_database()](#gui_check_database/0) Requests a database check.
* [gui_current_card()](#gui_current_card/0) Returns information about the current card or error if not in review mode.
* [gui_deck_browser()](#gui_deck_browser/0) Opens the *Deck Browser* dialog.
* [gui_deck_overview(map)](#gui_deck_overview/1) Opens the *Deck Overview* dialog for the deck with the given name.
* [gui_deck_review(map)](#gui_deck_review/1) Starts review for the deck with the given name. Returns error if failed.
* [gui_edit_note(map)](#gui_edit_note/1) Opens the *Edit* dialog with a note corresponding to given note ID.
* [gui_exit_anki()](#gui_exit_anki/0) Schedules a request to gracefully close Anki.
* [gui_selected_notes()](#gui_selected_notes/0) Finds the open instance of the *Card Browser* dialog and returns an array of identifiers of the notes that are selected.
* [gui_show_answer()](#gui_show_answer/0) Shows answer text for the current card. Returns error if not in review mode.
* [gui_show_question()](#gui_show_question/0) Shows question text for the current card. Returns error if not in review mode.
* [gui_start_card_timer()](#gui_start_card_timer/0) Starts or resets the timerStarted value for the current card.

AnkiConnect.Actions.Media
===
Media actions. All functions are delegated inside [`AnkiConnect`](AnkiConnect.html) module, so you should import them from there.
Summary
===
Functions
---
* [delete_media_file(map)](#delete_media_file/1) Deletes the specified file inside the media folder.
* [get_media_dir_path()](#get_media_dir_path/0) Gets the full path to the `collection.media` folder of the currently opened profile.
* [get_media_files_names(map)](#get_media_files_names/1) Gets the names of media files that match the pattern. Returns all names by default.
* [retrieve_media_file(map)](#retrieve_media_file/1) Retrieves the base64-encoded contents of the specified file, returning error if the file does not exist.
* [store_media_file(data)](#store_media_file/1) Stores a file with the specified base64-encoded contents inside the media folder.

AnkiConnect.Actions.Miscellaneous
===
Miscellaneous actions. All functions are delegated inside [`AnkiConnect`](AnkiConnect.html) module, so you should import them from there.
Summary
===
Functions
---
* [api_reflect(param)](#api_reflect/1) Gets information about the AnkiConnect APIs available.
* [export_package(param)](#export_package/1) Exports a given deck in `.apkg` format.
* [get_profiles()](#get_profiles/0) Retrieves the list of profiles.
* [import_package(map)](#import_package/1) Imports a file in `.apkg` format into the collection.
* [load_profile(map)](#load_profile/1) Selects the profile specified in the request.
* [multi(map)](#multi/1) Performs multiple actions in one request, returning an array with the response of each action (in the given order).
* [reload_collection()](#reload_collection/0) Tells Anki to reload all data from the database.
* [request_permission()](#request_permission/0) Requests permission to use the API exposed by AnkiConnect.
* [sync()](#sync/0) Synchronizes the local Anki collections with AnkiWeb.
* [version()](#version/0) Gets the version of the API exposed by AnkiConnect.

AnkiConnect.Actions.Model
===
Model actions. All functions are delegated inside [`AnkiConnect`](AnkiConnect.html) module, so you should import them from there.
Summary
===
Functions
---
* [create_model(param)](#create_model/1) Creates a new model to be used in Anki.
* [find_and_replace_in_models(param)](#find_and_replace_in_models/1) Find and replace string in existing model by model name.
* [model_field_add(map)](#model_field_add/1) Creates a new field within a given model.
* [model_field_descriptions(map)](#model_field_descriptions/1) Gets the complete list of field descriptions (the text seen in the gui editor when a field is empty) for the provided model name.
* [model_field_fonts(map)](#model_field_fonts/1) Gets the complete list of fonts along with their font sizes.
* [model_field_names(map)](#model_field_names/1) Gets the complete list of field names for the provided model name.
* [model_field_remove(map)](#model_field_remove/1) Deletes a field within a given model.
* [model_field_rename(map)](#model_field_rename/1) Rename the field name of a given model.
* [model_field_reposition(map)](#model_field_reposition/1) Reposition the field within the field list of a given model.
* [model_field_set_description(map)](#model_field_set_description/1) Sets the description (the text seen in the gui editor when a field is empty) for a field within a given model.
* [model_field_set_font(map)](#model_field_set_font/1) Sets the font for a field within a given model.
* [model_field_set_font_size(map)](#model_field_set_font_size/1) Sets the font size for a field within a given model.
* [model_fields_on_templates(map)](#model_fields_on_templates/1) Returns an object indicating the fields on the question and answer side of each card template for the given model name.
* [model_names()](#model_names/0) Gets the complete list of model names for the current user.
* [model_names_and_ids()](#model_names_and_ids/0) Gets the complete list of model names and their corresponding IDs for the current user.
* [model_styling(map)](#model_styling/1) Gets the CSS styling for the provided model by name.
* [model_template_add(map)](#model_template_add/1) Adds a template to an existing model by name.
* [model_template_remove(map)](#model_template_remove/1) Removes a template from an existing model.
* [model_template_rename(map)](#model_template_rename/1) Renames a template in an existing model.
* [model_template_reposition(map)](#model_template_reposition/1) Repositions a template in an existing model.
* [model_templates(map)](#model_templates/1) Returns an object indicating the template content for each card connected to the provided model by name.
* [update_model_styling(map)](#update_model_styling/1) Modify the CSS styling of an existing model by name.
* [update_model_templates(map)](#update_model_templates/1) Modify the templates of an existing model by name.

AnkiConnect.Actions.Note
===
Note actions. All functions are delegated inside [`AnkiConnect`](AnkiConnect.html) module, so you should import them from there.
Summary
===
Functions
---
* [add_note(map)](#add_note/1) Creates a note using the given deck and model, with the provided field values and tags.
* [add_notes(map)](#add_notes/1) Creates multiple notes using the given deck and model, with the provided field values and tags.
* [add_tags(map)](#add_tags/1) Adds tags to notes by note ID.
* [can_add_notes(map)](#can_add_notes/1) Accepts an array of objects which define parameters for candidate notes (see [`AnkiConnect.Actions.Note.add_note/1`](#add_note/1)) and returns an array of booleans indicating whether or not the parameters at the corresponding index could be used to create a new note.
* [clear_unused_tags()](#clear_unused_tags/0) Clears all the unused tags in the notes for the current user.
* [delete_notes(map)](#delete_notes/1) Deletes notes with the given ids.
* [find_notes(map)](#find_notes/1) Returns an array of note IDs for a given query.
* [get_note_tags(map)](#get_note_tags/1) Get a note’s tags by note ID.
* [get_tags()](#get_tags/0) Gets the complete list of tags for the current user.
* [notes_info(map)](#notes_info/1) Returns a list of objects containing for each note ID the note fields, tags, note type and the cards belonging to the note.
* [remove_empty_notes(map)](#remove_empty_notes/1) Removes all the empty notes for the current user.
* [remove_tags(map)](#remove_tags/1) Remove tags from notes by note ID.
* [replace_tags(map)](#replace_tags/1) Replace tags in notes by note ID.
* [replace_tags_in_all_notes(map)](#replace_tags_in_all_notes/1) Replace tags in all the notes for the current user.
* [update_note(map)](#update_note/1) Modify the fields and/or tags of an existing note.
* [update_note_fields(map)](#update_note_fields/1) Modify the fields of an existing note.
* [update_note_tags(map)](#update_note_tags/1) Set a note’s tags by note ID.

AnkiConnect.Actions.Statistic
===
Statistic actions. All functions are delegated inside [`AnkiConnect`](AnkiConnect.html) module, so you should import them from there.
Summary
===
Functions
---
* [card_reviews(map)](#card_reviews/1) Requests all card reviews for a specified deck after a certain time.
* [get_collection_stats_html(map \\ %{whole_collection: true})](#get_collection_stats_html/1) Gets the collection statistics report.
* [get_latest_review_id(map)](#get_latest_review_id/1) Returns the unix time of the latest review for the given deck.
* [get_num_cards_reviewed_by_day()](#get_num_cards_reviewed_by_day/0) Gets the number of cards reviewed as a list of pairs of (dateString, number).
* [get_num_cards_reviewed_today()](#get_num_cards_reviewed_today/0) Gets the count of cards that have been reviewed in the current day (with the day start time as configured by the user in Anki).
* [get_reviews_of_cards(map)](#get_reviews_of_cards/1) Requests all card reviews for each card ID.
* [insert_reviews(map)](#insert_reviews/1) Inserts the given reviews into the database.

AnkiConnect.Services.AddNotesFromFile
===
Provides functionality to upload notes from a given text file to Anki.
Summary
===
Types
---
* [note()](#t:note/0)
Functions
---
* [add_notes_from_file(param)](#add_notes_from_file/1) Uploads notes from a given text file to Anki.

AnkiConnect.Specs.FileSpec
===
Includes type specs for file.
Summary
===
Types
---
* [t()](#t:t/0) Represents a file to be downloaded by Anki to the media folder.

AnkiConnect.Specs.NoteSpec
===
Includes type specs for note.
Summary
===
Types
---
* [t()](#t:t/0) Notes passed to the [`AnkiConnect.Actions.Note.add_note/1`](AnkiConnect.Actions.Note.html#add_note/1) and [`AnkiConnect.Actions.Note.add_notes/1`](AnkiConnect.Actions.Note.html#add_notes/1) functions follow this spec.

mix anki_connect
===
Provides functionality to interact via a command-line interface (CLI) with AnkiConnect, a plugin for the Anki flashcard application.
To use the AnkiConnect task, run the following command in your terminal:
```
mix anki_connect <action> [params]
```
where `<action>` is the action you want to perform and `[params]` are optional parameters required by the action.
The list of actions can be found in the [AnkiConnect module documentation](https://hexdocs.pm/anki_connect/AnkiConnect.html).
If a function expects a map with some keys, those keys should be written as parameters in the command line (preceded with `--` and with `_` replaced with `-`).
Examples
---
```
> mix anki_connect deck_names
["Current", "Current::English", "Default"]
```
```
> mix anki_connect create_deck --deck='TEST DECK'
1684945081956
```
```
> mix anki_connect delete_decks --decks='["TEST DECK"]'
Done.
```
Let's assume that a note with ID `1684946786121` has already been created.
```
> mix anki_connect add_tags --notes='[1684946786121]' --tags='some tag'
Done.
```
```
> mix anki_connect add_tags --notes='[1684946786121]' --tags='other tag'
Done.
```
```
> mix anki_connect get_note_tags --note='1684946786121'
["other tag", "some tag"]
```
Passing nested structures as parameters
---
Maps and lists in the command line should be written as JSON-encoded strings with keys in snake_case styling. For example, to add a note to a deck, you should run the following command:
```
> mix anki_connect add_note --note='{"deck_name": "TEST DECK", "model_name": "Basic", "fields": {"Front": "front content", "Back": "back content"}}'
1684946786121
```
which is equivalent to the following Elixir code:
```
AnkiConnect.add_note(%{
  note: %{
    deck_name: "TEST DECK",
    model_name: "Basic",
    fields: %{
      Front: "front content",
      Back: "back content"
    }
  }
})
```
The full list of available actions can be found in the [AnkiConnect module documentation](https://hexdocs.pm/anki_connect/AnkiConnect.html).
Post-actions
---
Post-actions define what should be done once the task has completed successfully. They are passed as params which start with the `--with-` prefix.
Available post-actions:
* `--with-sync` - runs the [`AnkiConnect.Actions.Miscellaneous.sync/0`](AnkiConnect.Actions.Miscellaneous.html#sync/0) command after the task has been performed without any error.
Example:
```
> mix anki_connect create_deck --deck='TEST DECK' --with-sync
1684945081956
Syncing...
Synced!
```
pub_chgk_rating.jsonl
personal_doc
YAML
# rating.chgk.info API (4.2.0.x)

Date: 2019-08-24

A downloadable OpenAPI specification is provided with the documentation. For any questions, write to @fjqtp on Telegram.

Authentication: a user token is created from the login data (`email`, `password`); the response is an object containing a `token` string.

The API exposes endpoints to create, update and retrieve the rating database resources: Country, Player, Season, TeamSeason, Team, Tournament, TournamentSynchAppeal, TournamentSynchRequest, Venue and User. Collection endpoints accept `page` and `itemsPerPage` parameters plus resource-specific filters, for example `name`, `surname`, `town`, `region`, `country`, `toBeChecked`, `dateStart[before]`/`dateStart[after]`, `dateEnd[...]`, `lastEditDate[...]`, `type`, `languages`, `synchData.archive`, `order[id]`, `order[lastEditDate]`, `properties.maiiAegis` and `properties.maiiRating`. Resource payloads are plain JSON objects; a Player, for instance, carries `id`, `name`, `patronymic`, `surname`, `dbChgkInfoTag` and `comment`, while a tournament result carries the team, its mask, position, a rating block (`inRating`, `b`, `predictedPosition`, `rt`, `rb`, `rg`, `r`, `bp`, `d1`, `d2`, `d`), controversials, flags and team members.

Notes from the specification:

* A playing season is the period from the beginning of September to the end of August of the following year. The boundary between seasons is drawn at 23:59 on the last Thursday of August.
* For tournament results, the `country` filter applies only if neither `town` nor `region` is set, and the `region` filter applies only if `town` is not set. The default sort order is by `position`, descending. Until `tournament.synchData.hideResultsTo` has passed, the fields `mask`, `controversials`, `position`, `rating` and `questionsTotal` are hidden.
* Tournament results can optionally include team members, masks and controversials, team flags and B-rating data via the `includeTeamMembers`, `includeMasksAndControversials`, `includeTeamFlags` and `includeRatingB` parameters.

A GraphiQL in-browser console is also provided for writing, validating and testing GraphQL queries against the same data.
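Only the token flow is fully recoverable from the extraction above, so here is a minimal sketch of it in R using the `httr` package. The host and endpoint path below are assumptions made purely for illustration; take the real values from the downloadable OpenAPI specification.

```r
library(httr)

# Assumed host and path (placeholders); consult the OpenAPI file for the real ones.
res <- POST(
  "https://rating.chgk.info/api/token",
  body = list(email = "user@example.com", password = "secret"),
  encode = "json"
)
stop_for_status(res)
token <- content(res)$token   # the spec documents a response of the form {"token": "..."}
# The token would then accompany subsequent requests, typically in an Authorization header.
```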
EcoTroph
cran
R
Package ‘EcoTroph’ October 12, 2022
Encoding UTF-8
Type Package
Title An Implementation of the EcoTroph Ecosystem Modelling Approach
Version 1.6.1
Date 2022-04-13
Author <NAME>, <NAME>, <NAME>, and <NAME>
Maintainer <NAME> <<EMAIL>>
URL http://sirs.agrocampus-ouest.fr/EcoTroph/
Description An approach and software for modelling marine and freshwater ecosystems. It is articulated entirely around trophic levels. EcoTroph's key displays are bivariate plots, with trophic levels as the abscissa, and biomass flows or related quantities as ordinates. Thus, trophic ecosystem functioning can be modelled as a continuous flow of biomass surging up the food web, from lower to higher trophic levels, due to predation and ontogenic processes. Such an approach, wherein species as such disappear, may be viewed as the ultimate stage in the use of the trophic level metric for ecosystem modelling, providing a simplified but potentially useful caricature of ecosystem functioning and impacts of fishing. This version contains the catch trophic spectrum analysis (CTSA) function and corrected versions of the mf.diagnosis and create.ETmain functions.
License GPL
LazyLoad yes
Depends XML, utils, stats, graphics, grDevices
RoxygenNote 7.1.2
NeedsCompilation no
Repository CRAN
Date/Publication 2022-04-13 15:12:33 UTC
R topics documented: a13.eq, a13.eq.ac, check.table, convert.list2tab, create.ETdiagnosis, create.ETmain, create.smooth, CTSA.catch.input, E0.1, ecopath_guinee, Emsy, E_MSY_0.1, mf.diagnosis, plot.ETdiagnosis, plot.ETmain, plot.smooth, plot.Transpose, plot_ETdiagnosis_isopleth, read.ecopath.model, regPB, regPB.ac, saturation, Transpose
a13.eq function used within the multiplier effort analysis
Description
function used within the multiplier effort analysis
Usage
a13.eq(compteur, ET_Main, biom.mf, fish.m, TopD, FormD, range.TLpred)
Arguments
compteur counter
ET_Main output of the create.ETmain function
biom.mf Effort multiplier applied on biomass
fish.m Effort multiplier applied on natural mortality
TopD Parameters of the formula
FormD Parameters of the formula
range.TLpred Range of predators trophic level
a13.eq.ac function used within the multiplier effort analysis
Description
function used within the multiplier effort analysis
Usage
a13.eq.ac(compteur, ET_Main, biom.mf, fish.m.ac, TopD, FormD, range.TLpred)
Arguments
compteur counter
ET_Main output of the create.ETmain function
biom.mf Effort multiplier applied on biomass
fish.m.ac Effort multiplier applied on natural mortality
TopD Parameters of the formula
FormD Parameters of the formula
range.TLpred Range of predators trophic level
check.table Check Ecopath table function
Description
This function enables the verification of input tables based on EwE data and used in the EcoTroph routine. A template is provided in the example: data(ecopath_guinee).
Usage
check.table(ecopath)
Arguments
ecopath is the input table used in ET (possibly based on Ecopath data). The different variables are the group name, its trophic level, biomass, production, catches, omnivory index and accessibility (fraction of the group that can be caught assuming an infinite fishing effort).
Examples
data(ecopath_guinee)
check.table(ecopath_guinee)
convert.list2tab
Description
convert.list2tab enables the conversion of the list object returned by the create.ETdiagnosis function into a list of data.frames. These data.frames contain the calculated variables by TL class and combinations of effort multipliers.
Usage
convert.list2tab(diagn.list)
Arguments
diagn.list is the list object returned by the create.ETdiagnosis function.
Examples
data(ecopath_guinee)
Liste=create.ETdiagnosis(create.ETmain(ecopath_guinee))
Tab=convert.list2tab(Liste)
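As a quick illustration of the expected input layout, a minimal hand-built table can be checked directly. This is a sketch only: the column names follow the ecopath_guinee template documented further below, and the numeric values are purely illustrative.
# minimal illustrative input table; a single fleet, hence a plain 'catch' column
my_input <- data.frame(
  group_name    = c("Phytoplankton", "Zooplankton", "Small pelagics", "Detritus"),
  TL            = c(1.0, 2.1, 3.0, 1.0),
  biomass       = c(30, 12, 5, 40),
  prod          = c(80, 25, 1.2, 0),      # P/B ratio; 0 for the Detritus group
  catch         = c(0, 0, 0.8, 0),
  accessibility = c(0, 0, 0.8, 0),
  OI            = c(0, 0.2, 0.3, 0)
)
check.table(my_input)   # verify the table before running the EcoTroph routines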
create.ETdiagnosis
Description
ET-Transpose provides a picture of an ecosystem under a given fishing mortality. ET-Diagnosis is a routine simulating how this baseline ecosystem would be impacted by increasing or decreasing fishing effort. Fishing effort can be modified per fleet and/or trophic group. Ecosystem-wide effects of altering fishing effort include potential changes in biomass, accessible biomass, production, kinetics and catch trophic spectra, as well as impacts on the mean trophic level of the catch and biomass. Additionally, ET-Diagnosis constitutes a useful exploratory tool for ecosystem-based management. It simulates how reducing or increasing fishing effort and/or preferentially targeting different trophic levels could improve yield at the ecosystem scale. Lastly, ET-Diagnosis allows the user to view how different assumptions on ecosystem functioning (biomass input control, top-down effect) affect both trophic-level-specific and ecosystem-wide properties in relation to fishing.
Usage
create.ETdiagnosis(data, Mul_eff = NULL, Group = NULL, fleet.of.interest = NULL, same.mE = NULL, B.Input=NULL, Beta = NULL, TopD = NULL, FormD = NULL, TLpred = NULL)
Arguments
data is the list object returned by the create.ETmain function.
Mul_eff is a vector of fishing effort multipliers that the user wants to test. Mul_eff must contain the value 1 (reference state). By default, the function simulates a range of fishing effort multipliers from 0 to 5 for each fleet.
Group is a character vector of trophic groups that the user specifically wants to impact by changing associated fishing efforts. By default, all trophic groups are equally impacted.
fleet.of.interest is a character vector of fleet(s) that the user specifically wants to impact by changing associated fishing efforts (default=NULL). This argument is of particular interest if there are more than two fleets because it limits the mE combinations to be tested, and thus the associated computation time.
same.mE is a logical argument (default=F); if TRUE, the same effort multipliers are simultaneously applied to all fleets.
B.Input is a logical argument (default=F); if TRUE, the "Biomass input control" equation is accounted for in the EcoTroph equations.
Beta is a coefficient expressing the extent of the biomass input control. Beta=0 refers to an ecosystem where all secondary production originates from grazing on primary producers, and Beta=1 to an ecosystem where detritus and/or recruitment contribute to a major part of the biomass input (default=0.2).
TopD is a coefficient expressing the top-down control, i.e. the fraction of the natural mortality depending on predator abundance. It varies between 0 and 1. The user can specify a numeric value, which is applied to each trophic level, or a numeric vector (of the same length as TL classes), i.e. a value for each TL (default=0.4).
FormD is a shape parameter varying between 0 and 1. It defines the functional relationship between prey and predators. The value 1 refers to a situation where predator abundance has a linear effect on the speed of the flow of their preys. The user can specify a numeric value, which is applied to each trophic level, or a numeric vector (of the same length as TL classes), i.e. a value for each TL (default=0.5).
TLpred is the trophic level at which the user considers the "predator" trophic classes to start. The default value is 3.5.
Details
Fleets’ names used in the argument ’fleet.of.interest’ are the catch column names of the Ecopath input data.frame (e.g. ’catch.1’ or ’catch.ind’).
Value
This function returns a list of elements referring to each simulated combination of fishing effort multipliers. Each element is a list of two types of results:
- Variables characterizing the state and functioning of the modeled ecosystem: biomass, flow, kinetic, catches (total and per fleet) and fishing mortality per trophic level.
- Summary statistics (contained in the ET_Main_diagnose): absolute and relative (in comparison with the reference state) total biomass, flow, catches.
See Also
plot.ETdiagnosis and plot_ETdiagnosis_isopleth to plot the principal graphics resulting from the create.ETdiagnosis function, create.ETmain to create a list of tables used as input in the create.ETdiagnosis function.
Examples
data(ecopath_guinee)
#Impacts of global changes in fishing efforts multipliers (in the range 0-5)
create.ETdiagnosis(create.ETmain(ecopath_guinee),same.mE=TRUE)
#Test of all the combinations of fishing effort multipliers per fleet
#(in the range 0-5)
create.ETdiagnosis(create.ETmain(ecopath_guinee))
#With biomass input control
create.ETdiagnosis(create.ETmain(ecopath_guinee),B.Input=TRUE)
#Impacts of changing fishing effort against Barracudas+ and Carangids groups
create.ETdiagnosis(create.ETmain(ecopath_guinee), Mul_eff=(seq(0,5,.1)),Group=c('Barracudas+','Carangids'))
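The ecosystem-functioning assumptions described in the arguments above (top-down control, biomass input control) can be compared directly. A brief sketch, using same.mE=TRUE to keep the computation light:
data(ecopath_guinee)
ET_main <- create.ETmain(ecopath_guinee)
diag_default <- create.ETdiagnosis(ET_main, same.mE = TRUE)                  # TopD = 0.4 by default
diag_topdown <- create.ETdiagnosis(ET_main, same.mE = TRUE, TopD = 0.8)      # stronger top-down control
diag_binput  <- create.ETdiagnosis(ET_main, same.mE = TRUE, B.Input = TRUE, Beta = 0.5)  # biomass input control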
create.ETmain
Description
This function enables the creation of the ET-Main table (summarizing the principal results/variables as a function of the TL classes) and other intermediate tables of the ET-Transpose routine. It provides a picture of an ecosystem under a given fishing mortality.
Usage
create.ETmain(ecopath, smooth_type=NULL, sigmaLN_cst=NULL, pas=NULL, shift=NULL, smooth_param=NULL)
Arguments
ecopath is the input table used in ET (possibly based on Ecopath data). The different variables are the group name, its trophic level, biomass, production, catches, omnivory index and accessibility (fraction of the group that can be caught assuming an infinite fishing effort).
smooth_type is a parameter of the create.smooth function. It defines the type of sigma calculation for the lognormal distribution. The value for this parameter is 1, 2 or 3. By default smooth_type=1, which defines a constant sigma. By choosing smooth_type=2, the user has the possibility to put a sigmaLN=smooth_param*ln(TL-0.05), with smooth_param=0.07 and shift=0.95 by default. Smooth_type=3 corresponds to the use of the calculated Omnivory Index (OI) divided by the associated mean TL as sigmaLN.
sigmaLN_cst is a parameter of the create.smooth function. It defines the value of the constant sigma of the lognormal distribution for smooth_type=1. By default, sigmaLN_cst=0.12.
pas is a parameter of the create.smooth function. It defines the splitting of the TL classes.
shift is a parameter of the create.smooth function. It defines the beginning of the smooth function and allows the subtraction of 0.05 in the sigma calculation, accounting for the half interval range of the trophic class.
smooth_param is a parameter of the create.smooth function. It defines the slope of the log-linear increase of the TL variability with the mean trophic level of the group for smooth_type=2. SigmaLN(TL) is thus defined as sigmaLN(TL)=smooth_param*ln(TL-0.05).
Value
This function returns a list containing: the ET-Main table, intermediate matrices (biomass, accessible biomass, flowP...) and a list of matrices corresponding to the different fisheries catches.
See Also
plot.ETmain to create the principal graphics resulting from the create.ETmain function, create.smooth to create the Smooth table used in this function, Transpose to convert data referring to groups into data referring to TL classes.
Examples
data(ecopath_guinee)
create.ETmain(ecopath_guinee)
#Use of the second smooth type
create.ETmain(ecopath_guinee,smooth_type=2)
create.smooth Create Smooth Function
Description
create.smooth is used to create a smooth function. This function enables the conversion of data pertaining to specific taxa or functional groups into data by trophic classes. The main assumption of this Smooth function is that the distribution of the biomass (or catch...) of a trophic group around its mean trophic level follows a lognormal curve. The curve is defined by a mean (the mean TL of the trophic group) and a standard deviation (sigma), which is a measure of the trophic level variability within the group. The distribution is then defined by the lognormal function LN(mean TL, sigma).
Usage
create.smooth(tab_input, smooth_type=NULL, sigmaLN_cst=NULL, pas=NULL, shift=NULL, smooth_param=NULL)
Arguments
tab_input is the input table used in ET (possibly based on Ecopath data). The different variables are the group name, its trophic level, biomass, production, catches, omnivory index and accessibility (fraction of the group that can be caught assuming an infinite fishing effort).
smooth_type defines the type of sigma calculation for the lognormal distribution. Values of this parameter are 1, 2 or 3. By default smooth_type=1, which defines a constant sigma. By choosing smooth_type=2, the user has the possibility to implement a sigmaLN=smooth_param*ln(TL-0.05), with the parameter smooth_param=0.07 and shift=0.95 by default. Smooth_type=3 corresponds to the use of the omnivory index (OI) in the sigmaLN calculation (sigmaLN=OI/TL).
sigmaLN_cst defines the value of the constant sigma of the lognormal distribution in case of smooth_type=1. By default, sigmaLN_cst=0.12.
pas defines the splitting of the TL classes. By default, pas=0.1.
shift defines the beginning of the smooth function and allows the subtraction of 0.05 in the sigma calculation, accounting for the half interval range of the trophic class. By default, with a constant sigmaLN (smooth_type=1), shift=1.8; with a function-defined sigmaLN (smooth_type=2), shift=0.95; and with sigmaLN=OI/TL (smooth_type=3), shift=0.
smooth_param defines the slope of the log-linear increase of the TL variability with the mean trophic level of the group. SigmaLN(TL) is thus defined as sigmaLN(TL)=smooth_param*ln(TL-0.05). By default, smooth_param=0.07.
Details
The user has the possibility to define sigmaLN for each trophic group and also to adjust the LN distribution with the smooth_type, sigmaLN_cst, smooth_param, shift and pas parameters. Different choices are available: a constant sigma, a function-defined sigma (log-linear increase), or a sigma equal to the omnivory index divided by the associated mean TL.
Value
create.smooth returns a table of the TL distribution within a trophic class. This table enables the calculation of Trophic Spectra, and it is used in the Transpose function.
See Also
plot.smooth to plot the Smooth function, Transpose to build trophic spectra, plot.Transpose to plot the trophic spectra.
Examples
data(ecopath_guinee)
create.smooth(ecopath_guinee)
create.smooth(ecopath_guinee,sigmaLN_cst=0.11)
create.smooth(ecopath_guinee,smooth_type=2,pas=0.2)
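The three sigmaLN options described above can be illustrated by computing the corresponding sigma values directly from the documented defaults (a sketch):
TL <- seq(2, 5, by = 0.1)                  # trophic level classes
sigma_type1 <- rep(0.12, length(TL))       # smooth_type=1: constant sigmaLN_cst
sigma_type2 <- 0.07 * log(TL - 0.05)       # smooth_type=2: smooth_param*ln(TL-0.05)
# smooth_type=3 uses each group's omnivory index divided by its mean TL (OI/TL)
plot(TL, sigma_type2, type = "l", ylab = "sigmaLN")
lines(TL, sigma_type1, lty = 2)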
CTSA.catch.input Catch input for CTSA
Description
CTSA.catch.input is used to create inputs for the CTSA.forward function. It is a list of data.frames referring to catches per fleet, formatted with TL classes in rows and trophic groups in columns.
Usage
CTSA.catch.input(catch.group,smooth_type=NULL,sigmaLN_cst=NULL, pas=NULL,shift=NULL,smooth_param=NULL)
Arguments
catch.group is a data.frame containing: a column group_name, column(s) referring to the catches of each fleet (named ’catch.1’, ’catch.2’...), a column TL specifying the mean TL of each group, and optionally a column OI (omnivory index) used for smooth_type=3.
smooth_type is a parameter of the create.smooth function. It defines the type of sigma calculation for the lognormal distribution. The value for this parameter is 1, 2 or 3. By default smooth_type=1, which defines a constant sigma. By choosing smooth_type=2, the user has the possibility to put a sigmaLN=smooth_param*ln(TL-0.05), with smooth_param=0.07 and shift=0.95 by default. Smooth_type=3 corresponds to the use of the calculated Omnivory Index (OI) divided by the associated mean TL as sigmaLN.
sigmaLN_cst is a parameter of the create.smooth function. It defines the value of the constant sigma of the lognormal distribution for smooth_type=1. By default, sigmaLN_cst=0.12.
pas is a parameter of the create.smooth function. It defines the splitting of the TL classes.
shift is a parameter of the create.smooth function. It defines the beginning of the smooth function and allows the subtraction of 0.05 in the sigma calculation, accounting for the half interval range of the trophic class.
smooth_param is a parameter of the create.smooth function. It defines the slope of the log-linear increase of the TL variability with the mean trophic level of the group for smooth_type=2. SigmaLN(TL) is thus defined as sigmaLN(TL)=smooth_param*ln(TL-0.05).
Value
CTSA.catch.input returns a list of data.frames, referring to catches per fleet, formatted with TL classes in rows and trophic groups in columns.
See Also
create.smooth, Transpose and CTSA.forward.
Examples
data(ecopath_guinee)
catch.group=ecopath_guinee[,c("group_name","TL","catch.1","catch.2")]
Y_test <- CTSA.catch.input(catch.group)
Y_test
E0.1
Description
E0.1
Usage
E0.1(TL, Y, Fish_mort)
Arguments
TL Trophic level
Y Catches
Fish_mort Fishing mortality
ecopath_guinee EcoTroph example dataset: Guinean data
Description
This example dataset is extracted from the 2004 Guinean Ecopath model (Gascuel et al., 2009). It provides a template for the input table formatting, the expected variable names and the different capabilities of this package (used in the functions’ examples).
Usage
data(ecopath_guinee)
Format
A data.frame with 35 observations on the following 8 variables.
group_name a character vector corresponding to the names of the trophic groups used in the Ecopath model. Must be named ’group_name’.
TL a numeric vector corresponding to the trophic level of the associated trophic groups. Must be named ’TL’.
biomass a numeric vector corresponding to the biomass of the associated trophic groups. Must be named ’biomass’.
prod a numeric vector corresponding to the production on biomass ratio. For the Detritus groups (no P/B value entered in Ecopath), put 0 as a value. Must be named ’prod’.
catch.1 a numeric vector corresponding to the catch of the artisanal fleet. A value must be entered for all groups, with a 0-value if no catch is made. Must be named ’catch.something’.
catch.2 a numeric vector corresponding to the catch of the industrial fleet. A value must be entered for all groups, with a 0-value if no catch is made. Must be named ’catch.something’.
accessibility a numeric vector corresponding to the fraction of the trophic group that can be caught assuming an infinite fishing effort. Must be named ’accessibility’.
OI a numeric vector corresponding to the omnivory index calculated by Ecopath for each trophic group. Must be named ’OI’.
Details
No NAs are accepted in the dataset (0 for the P/B of the detritus groups, 0 for the catch...). Follow the instructions stated in the variable descriptions. Different fleets can be entered in the model using the following system: catch.1, catch.2, catch.whatyouwant... If there is only one fleet, you just have to put catch as a variable name.
Source
Gascuel et al. (2009) Impact de la peche sur l’ecosysteme marin de Guinee - Modelisation EwE 1985/2005 -
Examples
data(ecopath_guinee)
ecopath_guinee
names(ecopath_guinee)
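Following the fleet naming convention described in the Details above, the two Guinean fleets can, for instance, be merged into a single fleet by replacing the catch.1 and catch.2 columns with one catch column (a sketch; check.table is expected to accept the reshaped table):
data(ecopath_guinee)
eco_one_fleet <- ecopath_guinee
eco_one_fleet$catch   <- eco_one_fleet$catch.1 + eco_one_fleet$catch.2   # total catch over both fleets
eco_one_fleet$catch.1 <- NULL
eco_one_fleet$catch.2 <- NULL
check.table(eco_one_fleet)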
Emsy E_MSY
Description
E_MSY
Usage
Emsy(TL, Y, Fish_mort)
Arguments
TL Trophic level
Y Catches
Fish_mort Fishing mortality
E_MSY_0.1
Description
E_MSY_0.1 computes two indices of exploitation: Emsy or Fmsy (maximum sustainable yield), and E0.1 or F0.1 ("start" of full exploitation) per TL class.
Usage
E_MSY_0.1(data, Mul_eff=NULL, B.Input=NULL, Beta=NULL, TopD=NULL, FormD=NULL, TLpred=NULL, maxTL=NULL)
Arguments
data is the list object returned by the create.ETmain function.
Mul_eff is a parameter of the create.ETdiagnosis function. It is a vector of fishing effort multipliers that the user wants to test. Mul_eff must contain the value 1 (reference state). By default, the function simulates a range of fishing effort multipliers from 0 to 5 for each fleet.
B.Input is a parameter of the create.ETdiagnosis function. It is a logical argument (default=F); if TRUE, the "Biomass input control" equation is accounted for in the EcoTroph equations.
Beta is a parameter of the create.ETdiagnosis function. It is a coefficient expressing the extent of the biomass input control. Beta=0 refers to an ecosystem where all secondary production originates from grazing on primary producers, and Beta=1 to an ecosystem where detritus and/or recruitment contribute to a major part of the biomass input (default=0.2).
TopD is a parameter of the create.ETdiagnosis function. It is a coefficient expressing the top-down control, i.e. the fraction of the natural mortality depending on predator abundance. It varies between 0 and 1. The user can specify a numeric value, which is applied to each trophic level, or a numeric vector (of the same length as TL classes), i.e. a value for each TL (default=0.4).
FormD is a parameter of the create.ETdiagnosis function. It is a shape parameter varying between 0 and 1. It defines the functional relationship between prey and predators. The value 1 refers to a situation where predator abundance has a linear effect on the speed of the flow of their preys. The user can specify a numeric value, which is applied to each trophic level, or a numeric vector (of the same length as TL classes), i.e. a value for each TL (default=0.5).
TLpred is a parameter of the create.ETdiagnosis function. It is the trophic level at which the user considers the "predator" trophic classes to start. The default value is 3.5.
maxTL is a numeric string indicating the maximum TL for which indices are computed.
Details
For any TL class, if the E0.1 and/or Emsy value(s) is(are) equal to the maximum effort multiplier tested (max(Mul_eff)), then E/F0.1 and/or E/Fmsy are set equal to NA.
Value
The E_MSY_0.1 function returns a data.frame containing Fmsy, Emsy, F0.1 and E0.1 per TL class.
Examples
data(ecopath_guinee)
E_MSY_0.1(create.ETmain(ecopath_guinee))
mf.diagnosis Effort multiplier diagnosis
Description
Effort multiplier diagnosis
Usage
mf.diagnosis(x, ET_Main, catch.list, TL_out, fleet, n.fleet, Fish_mort_ref, Fish_mort_acc_ref, B.Input, Beta, TopD, FormD, TLpred, n.TL, range.TLpred, lim.high.TL, range.highTL)
Arguments
x is the list object returned by the create.ETdiagnosis function.
ET_Main is the list object returned by the create.ETmain function.
catch.list catches for each fleet
TL_out Maximum TL to consider
fleet List of available fleets
n.fleet Number of fleets
Fish_mort_ref Fishing mortality
Fish_mort_acc_ref Accessible fishing mortality
B.Input Biomass
Beta Beta parameter of the formula
TopD Parameters of the formula
FormD Parameters of the formula
TLpred Trophic level for predators
n.TL Number of trophic levels
range.TLpred Range of trophic levels for predators
lim.high.TL Limit for high trophic levels
range.highTL Range for high trophic levels
plot.ETdiagnosis
Description
This function enables the creation of the principal graphics resulting from the create.ETdiagnosis function. The function returns the principal plots of the global ET-Diagnosis routine: the graphics of the biomass, accessible biomass... rates for the different effort multipliers, the Biomass Trophic Spectra (BTS) for the different effort multipliers, the B/Bref(mE=1) and Y/Yref graphs for the main TL classes, and the Catch Trophic Spectra (CTS) (global and per fleet).
Usage
## S3 method for class 'ETdiagnosis'
plot(x, scale = NULL, maxrange = NULL, legend.cex = NULL, ask = interactive(), ...)
Arguments
x is the list object returned by the create.ETdiagnosis function.
scale is the scale parameter of the Biomass Trophic Spectra; can be log or, by default, the standard scale of results.
maxrange is the maximum TL wanted for the x-axis. By default maxrange = 5.5.
legend.cex defines the value of the cex for the legend.
ask default value is interactive. Parameter used to enable the user to control the display of each graph.
... plot other arguments
Examples
data(ecopath_guinee)
diagn.list<-create.ETdiagnosis(create.ETmain(ecopath_guinee),same.mE=TRUE)
plot(diagn.list)
plot.ETmain
Description
This function enables the display of the principal plots resulting from the create.ETmain function: Biomass Trophic Spectra, Accessible Biomass Trophic Spectra, Catch by fleet Trophic Spectra, Total Catch Trophic Spectra and other summary plots.
Usage
## S3 method for class 'ETmain'
plot(x, scale1 = NULL, scale2 = NULL, scale3 = NULL, legend.cex = NULL, ask = interactive(), ...)
Arguments
x is the list of tables returned by the create.ETmain function.
scale1 defines the scale of the Biomass plots y-axis: can be log or not.
scale2 defines the scale of the Accessible Biomass plots y-axis: can be log or not.
scale3 defines the scale of the Catch by fleet plots y-axis: can be log or not.
legend.cex defines the value of the cex for the legend.
ask default value is interactive. Parameter used to enable the user to control the display of each graph.
... plot other arguments
Value
The function returns the principal graphics of the global ET-Transpose routine: the Biomass Trophic Spectra, the Accessible Biomass Trophic Spectra and other graphics, notably the Catch Trophic Spectra.
See Also
create.smooth function to create the Smooth, Transpose to calculate the data transposition into trophic spectra, create.ETmain to create a list of tables including the ET-Main table.
Examples
data(ecopath_guinee)
plot(create.ETmain(ecopath_guinee),scale1=log)
plot(create.ETmain(ecopath_guinee),scale1=log,scale3=log)
plot.smooth
Description
plot.smooth is used to plot the Smooth function. This function enables the user to see the TL distributions around their mean trophic levels.
Usage ## S3 method for class 'smooth' plot(x, ...) Arguments x is the table returned by the create.smooth function. ... plot other arguments See Also create.smooth function to create the Smooth, Transpose to calculate the data transposition into trophic spectra. Examples data(ecopath_guinee) plot(create.smooth(ecopath_guinee)) plot(create.smooth(ecopath_guinee,smooth_type=2)) plot.Transpose This function returns the two principal plots of the Transpose function : a plot by group and the associated Trophic Spectra (CTS, BTS...). Description This function returns the two principal plots of the Transpose function : a plot by group and the associated Trophic Spectra (CTS, BTS...). Usage ## S3 method for class 'Transpose' plot(x, title = NULL, scale = NULL, legend.cex = NULL, ...) Arguments x is the table returned by the Transpose function. title defines the title of the graph. scale defines the scale of the y-axis: can be log or not. legend.cex defines the value of the cex for the legend. ... plot other arguments See Also create.smooth function to create the Smooth, plot.smooth to plot the smooth function, Transpose to calculate the data transposition into trophic spectra. Examples data(ecopath_guinee) smoothed<-create.smooth(ecopath_guinee) plot(Transpose(smoothed,ecopath_guinee,"biomass"),scale=log) plot(Transpose(smoothed,ecopath_guinee,"catch.1"),title="Small Scale Fishery Catch") plot_ETdiagnosis_isopleth This function enables to plot the mixed impacts of changes in fishing effort for two fleets (or groups of fleets). Description This function enables to plot the mixed impacts of changes in fishing effort for two fleets (or groups of fleets). Usage plot_ETdiagnosis_isopleth( x, fleet1 = NULL, fleet2 = NULL, var = NULL, n.level = NULL, relative = NULL, name.fleet1 = NULL, name.fleet2 = NULL, color = NULL, ask = interactive() ) Arguments x is the list object returned by the create.ETdiagnosis function. fleet1 is a character vector of fleets for which fishing efforts are equally changed. fleet2 is a second character vector of fleets for which fishing efforts are equally changed. Fishing efforts remain unchanged for fleets not assigned in fleet1 or fleet2. If fleet2 is NULL, all fleets not assigned in fleet1 are assigned in fleet2. If the ar- gument ’fleet.of.interest’ has been assigned in the function create.ETdiagnosis, fleet1=fleet.of.interest and thus fleet2 is composed of the remaining fleet(s) (not assigned in fleet.of.interest). var is a character vector of plotted variables (TOT_biomass,TOT_biomass_acc,Y,Y_fleet1,Y_fleet2,TL_TOT the variables are plotted by default. n.level is a numeric string, specifying the number of plotted isopleth areas (7 is the default value). relative is a logical string (by default relative=F), specifying if the variables have to be plotted in absolute or relative values (in comparison with reference state, Mul_eff=1). Note that if relative=TRUE, mean trophic level in biomass or catches (TL_TOT_biomass,TL_Y,...) are not plotted. name.fleet1 is a character string used to implement x-axis name. By default name.fleet1=’fleet 1’. name.fleet2 is a character string used to implement y-axis name. By default name.fleet2=’fleet 2’. If the argument fleet.of.interest has been assigned in the function create.ETdiagnosis, name.fleet1 = ’fleet of interest’ and name.fleet2 = ’other fleets’. color is a vector of colors, the length of this vector should be equal to the value of levels. By default, color=rainbow(n=levels). ask default value is interactive. 
Parameter used to enable the user to control the display of each graph.

Details
Fleets' names used in the arguments 'fleet1' and 'fleet2' are the catch column names of the ecopath input data frame (e.g. 'catch.1' or 'catch.ind').

Examples
#data(ecopath_guinee)
#diagn.list=create.ETdiagnosis(create.ETmain(ecopath_guinee))
#plot_ETdiagnosis_isopleth(diagn.list,fleet1='catch.1',fleet2='catch.2')
#plot_ETdiagnosis_isopleth(diagn.list,fleet1='catch.1',fleet2='catch.2',relative=TRUE)

read.ecopath.model Input data import function (from an xml file)

Description
This function loads input data from an xml file created by the user, exported from the EwE EcoTroph plug-in, or obtained from a web service associated with a database populated with parameters of several EwE models.

Usage
read.ecopath.model(filename)

Arguments
filename is the address of the file the user wants to import.

Value
This function returns a data.frame containing all the columns needed to run the EcoTroph R package.

See Also
check.table to control the reliability of the dataset. (A minimal import sketch is given after the Transpose examples at the end of this section.)

regPB Function used to compute pB for the highest trophic levels

Description
Function used to compute pB for the highest trophic levels.

Usage
regPB(compteur, pb.mf, TL_out, range.highTL)

Arguments
compteur counter
pb.mf Effort multiplier vector
TL_out Trophic level out of the scope
range.highTL Range of high trophic levels

regPB.ac Function used to compute pB for the highest trophic levels and accessible biomass

Description
Function used to compute pB for the highest trophic levels and accessible biomass.

Usage
regPB.ac(compteur, pb.mf.ac, TL_out, range.highTL)

Arguments
compteur counter
pb.mf.ac Effort multiplier
TL_out Trophic level out of the scope
range.highTL Range of high trophic levels

saturation Sigma Saturation Function

Description
This function enables another calculation for the sigma of the create.smooth function. Sigma is calculated on the basis of a saturation function reflecting a biological reasoning about the variability of the TL within trophic classes: the variability increases with the TL and reaches a plateau after a certain TL.

Usage
saturation(sigma_inf = NULL, coeff = NULL, pas = NULL)

Arguments
sigma_inf defines the value of the curve's plateau.
coeff defines the value of the slope.
pas defines the splitting of the TL classes.

Details
By default sigma is constant. This function enables another user choice reflecting a different reasoning.

Value
saturation returns a vector of values for the sigma used in the create.smooth function.

See Also
create.smooth to create the Smooth, plot.smooth to plot the smooth function.

Examples
plot(saturation())
lines(saturation(0.2))
text(48, 0.18, "sigma_inf=0.2")
lines(saturation(coeff = 0.5))
text(48, 0.35, "coeff=0.5")

Transpose Transpose enables the conversion of data pertaining to specific taxa or functional groups into data by trophic class. Data can represent catches, biomasses or production, in order to produce continuous distributions of those variables over trophic levels.

Description
Transpose enables the conversion of data pertaining to specific taxa or functional groups into data by trophic class. Data can represent catches, biomasses or production, in order to produce continuous distributions of those variables over trophic levels.

Usage
Transpose(tab_smooth, tab_input, column)

Arguments
tab_smooth is the table returned by the create.smooth function.
tab_input is the input table based on Ecopath data or on independent data.
The different variables are the group name, its trophic level, biomass, production on biomass ratio, catches, omnivory index and accessibility (the fraction of the group that can be caught assuming an infinite fishing effort) if the input table corresponds to an EwE model. Otherwise, to simply build trophic spectra, only the group names, their trophic levels and the related variables are necessary.
column is the tab_input table column name of the variable the user wants to transpose (for example "biomass" or "catch").

See Also
create.smooth to create the Smooth, plot.smooth to plot the smooth function, plot.Transpose to plot the associated trophic spectra.

Examples
data(ecopath_guinee)
Transpose(create.smooth(ecopath_guinee), ecopath_guinee, "biomass")
Transpose(create.smooth(ecopath_guinee), ecopath_guinee, "catch.1")
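Note: read.ecopath.model (documented above) has no Examples section. The lines below are only a minimal sketch of how an imported table could feed the rest of the package; "my_model.xml" is a hypothetical file name standing in for a user-created or EwE-exported file, and check.table appears only as a comment, as suggested by its See Also entry.

my_input <- read.ecopath.model("my_model.xml")  # hypothetical xml export, not shipped with the package
## check.table() can then be used to control the reliability of the imported table (see its help page)
smoothed <- create.smooth(my_input)
plot(Transpose(smoothed, my_input, "biomass"))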
hashie
ruby
Ruby
Hashie === [![Join the chat at https://gitter.im/hashie/hashie](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/hashie/hashie?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Gem Version](http://img.shields.io/gem/v/hashie.svg)](http://badge.fury.io/rb/hashie) [![Build Status](https://github.com/hashie/hashie/actions/workflows/main.yml/badge.svg)](https://github.com/hashie/hashie/actions/workflows/main.yml) [![<NAME>](./mascot.svg)](#mascot) Hashie is a growing collection of tools that extend Hashes and make them more useful. Table of Contents === * [Hashie](#hashie) * [Table of Contents](#table-of-contents) + [Installation](#installation) + [Stable Release](#stable-release) + [Hash Extensions](#hash-extensions) + [Logging](#logging) + [Coercion](#coercion) + [Coercing Collections](#coercing-collections) + [Coercing Hashes](#coercing-hashes) + [Coercing Core Types](#coercing-core-types) + [Coercion Proc](#coercion-proc) - [A note on circular coercion](#a-note-on-circular-coercion) + [KeyConversion](#keyconversion) + [MergeInitializer](#mergeinitializer) + [MethodAccess](#methodaccess) + [MethodAccessWithOverride](#methodaccesswithoverride) + [MethodOverridingInitializer](#methodoverridinginitializer) + [IndifferentAccess](#indifferentaccess) + [IgnoreUndeclared](#ignoreundeclared) + [DeepMerge](#deepmerge) + [DeepFetch](#deepfetch) + [DeepFind](#deepfind) + [DeepLocate](#deeplocate) + [StrictKeyAccess](#strictkeyaccess) + [Mash](#mash) + [KeepOriginalKeys](#keeporiginalkeys) + [PermissiveRespondTo](#permissiverespondto) + [SafeAssignment](#safeassignment) + [SymbolizeKeys](#symbolizekeys) + [DefineAccessors](#defineaccessors) + [Dash](#dash) + [Potential Gotchas](#potential-gotchas) + [PropertyTranslation](#propertytranslation) + [Mash and Rails 4 Strong Parameters](#mash-and-rails-4-strong-parameters) + [Coercion](#coercion-1) + [PredefinedValues](#predefinedvalues) + [Trash](#trash) + [Clash](#clash) + [Rash](#rash) + [Auto-Optimized](#auto-optimized) + [Mascot](#mascot) + [Contributing](#contributing) + [Copyright](#copyright) Installation --- Hashie is available as a RubyGem: ``` $ gem install hashie ``` Stable Release --- You're reading the documentation for the stable release of Hashie, v5.0.0. Hash Extensions --- The library is broken up into a number of atomically includable Hash extension modules as described below. This provides maximum flexibility for users to mix and match functionality while maintaining feature parity with earlier versions of Hashie. Any of the extensions listed below can be mixed into a class by `include`-ing `Hashie::Extensions::ExtensionName`. Logging --- Hashie has a built-in logger that you can override. By default, it logs to `STDOUT` but can be replaced by any `Logger` class. The logger is accessible on the Hashie module, as shown below: ``` # Set the logger to the Rails logger [Hashie](/gems/hashie/Hashie "Hashie (module)").[logger](/gems/hashie/Hashie#logger-class_method "Hashie.logger (method)") = Rails.logger ``` ### Coercion Coercions allow you to set up "coercion rules" based either on the key or the value type to massage data as it's being inserted into the Hash. 
Key coercions might be used, for example, in lightweight data modeling applications such as an API client: ``` class Tweet < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[Coercion](/gems/hashie/Hashie/Extensions/Coercion "Hashie::Extensions::Coercion (module)") include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[MergeInitializer](/gems/hashie/Hashie/Extensions/MergeInitializer "Hashie::Extensions::MergeInitializer (module)") coerce_key :user, User end user_hash = { name: "Bob" } Tweet.new(user: user_hash) # => automatically calls User.coerce(user_hash) or # User.new(user_hash) if that isn't present. ``` Value coercions, on the other hand, will coerce values based on the type of the value being inserted. This is useful if you are trying to build a Hash-like class that is self-propagating. ``` class SpecialHash < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[Coercion](/gems/hashie/Hashie/Extensions/Coercion "Hashie::Extensions::Coercion (module)") coerce_value Hash, SpecialHash def initialize(hash = {}) super hash.each_pair do |k,v| self[k] = v end end end ``` ### Coercing Collections ``` class Tweet < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[Coercion](/gems/hashie/Hashie/Extensions/Coercion "Hashie::Extensions::Coercion (module)") coerce_key :mentions, Array[User] coerce_key :friends, Set[User] end user_hash = { name: "Bob" } mentions_hash= [user_hash, user_hash] friends_hash = [user_hash] tweet = Tweet.new(mentions: mentions_hash, friends: friends_hash) # => automatically calls User.coerce(user_hash) or # User.new(user_hash) if that isn't present on each element of the array tweet.mentions.map(&:class) # => [User, User] tweet.friends.class # => Set ``` ### Coercing Hashes ``` class Relation def initialize(string) @relation = string end end class Tweet < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[Coercion](/gems/hashie/Hashie/Extensions/Coercion "Hashie::Extensions::Coercion (module)") coerce_key :relations, Hash[User => Relation] end user_hash = { name: "Bob" } relations_hash= { user_hash => "father", user_hash => "friend" } tweet = Tweet.new(relations: relations_hash) tweet.relations.map { |k,v| [k.class, v.class] } # => [[User, Relation], [User, Relation]] tweet.relations.class # => Hash # => automatically calls User.coerce(user_hash) on each key # and Relation.new on each value since Relation doesn't define the `coerce` class method ``` ### Coercing Core Types Hashie handles coercion to the following by using standard conversion methods: | type | method | | --- | --- | | Integer | `#to_i` | | Float | `#to_f` | | Complex | `#to_c` | | Rational | `#to_r` | | String | `#to_s` | | Symbol | `#to_sym` | **Note**: The standard Ruby conversion methods are less strict than you may assume. For example, `:foo.to_i` raises an error but `"foo".to_i` returns 0. You can also use coerce from the following supertypes with `coerce_value`: * Integer * Numeric Hashie does not have built-in support for coercing boolean values, since Ruby does not have a built-in boolean type or standard method for coercing to a boolean. 
You can coerce to booleans using a custom proc. ### Coercion Proc You can use a custom coercion proc on either `#coerce_key` or `#coerce_value`. This is useful for coercing to booleans or other simple types without creating a new class and `coerce` method. For example: ``` class Tweet < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[Coercion](/gems/hashie/Hashie/Extensions/Coercion "Hashie::Extensions::Coercion (module)") coerce_key :retweeted, ->(v) do case v when String !!(v =~ /\A(true|t|yes|y|1)\z/i) when Numeric !v.to_i.zero? else v == true end end end ``` #### A note on circular coercion Since `coerce_key` is a class-level method, you cannot have circular coercion without the use of a proc. For example: ``` class CategoryHash < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[Coercion](/gems/hashie/Hashie/Extensions/Coercion "Hashie::Extensions::Coercion (module)") include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[MergeInitializer](/gems/hashie/Hashie/Extensions/MergeInitializer "Hashie::Extensions::MergeInitializer (module)") coerce_key :products, Array[ProductHash] end class ProductHash < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[Coercion](/gems/hashie/Hashie/Extensions/Coercion "Hashie::Extensions::Coercion (module)") include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[MergeInitializer](/gems/hashie/Hashie/Extensions/MergeInitializer "Hashie::Extensions::MergeInitializer (module)") coerce_key :categories, Array[CategoriesHash] end ``` This will fail with a `NameError` for `CategoryHash::ProductHash` because `ProductHash` is not defined at the point that `coerce_key` is happening for `CategoryHash`. To work around this, you can use a coercion proc. For example, you could do: ``` class CategoryHash < Hash # ... coerce_key :products, ->(value) do return value.map { |v| ProductHash.new(v) } if value.respond_to?(:map) ProductHash.new(value) end end ``` ### KeyConversion The KeyConversion extension gives you the convenience methods of `symbolize_keys` and `stringify_keys` along with their bang counterparts. You can also include just stringify or just symbolize with `Hashie::Extensions::StringifyKeys` or `Hashie::Extensions::SymbolizeKeys`. Hashie also has a utility method for converting keys on a Hash without a mixin: ``` [Hashie](/gems/hashie/Hashie "Hashie (module)").[symbolize_keys!](/gems/hashie/Hashie/Extensions/SymbolizeKeys/ClassMethods#symbolize_keys!-instance_method "Hashie::Extensions::SymbolizeKeys::ClassMethods#symbolize_keys! (method)") hash # => Symbolizes all string keys of hash. [Hashie](/gems/hashie/Hashie "Hashie (module)").[symbolize_keys](/gems/hashie/Hashie/Extensions/SymbolizeKeys/ClassMethods#symbolize_keys-instance_method "Hashie::Extensions::SymbolizeKeys::ClassMethods#symbolize_keys (method)") hash # => Returns a copy of hash with string keys symbolized. [Hashie](/gems/hashie/Hashie "Hashie (module)").[stringify_keys!](/gems/hashie/Hashie/Extensions/StringifyKeys/ClassMethods#stringify_keys!-instance_method "Hashie::Extensions::StringifyKeys::ClassMethods#stringify_keys! (method)") hash # => Stringifies keys of hash. 
[Hashie](/gems/hashie/Hashie "Hashie (module)").[stringify_keys](/gems/hashie/Hashie/Extensions/StringifyKeys/ClassMethods#stringify_keys-instance_method "Hashie::Extensions::StringifyKeys::ClassMethods#stringify_keys (method)") hash # => Returns a copy of hash with keys stringified. ``` ### MergeInitializer The MergeInitializer extension simply makes it possible to initialize a Hash subclass with another Hash, giving you a quick short-hand. ### MethodAccess The MethodAccess extension allows you to quickly build method-based reading, writing, and querying into your Hash descendant. It can also be included as individual modules, i.e. `Hashie::Extensions::MethodReader`, `Hashie::Extensions::MethodWriter` and `Hashie::Extensions::MethodQuery`. ``` class MyHash < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[MethodAccess](/gems/hashie/Hashie/Extensions/MethodAccess "Hashie::Extensions::MethodAccess (module)") end h = MyHash.new h.abc = 'def' h.abc # => 'def' h.abc? # => true ``` ### MethodAccessWithOverride The MethodAccessWithOverride extension is like the MethodAccess extension, except that it allows you to override Hash methods. It aliases any overridden method with two leading underscores. To include only this overriding functionality, you can include the single module `Hashie::Extensions::MethodOverridingWriter`. ``` class MyHash < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[MethodAccess](/gems/hashie/Hashie/Extensions/MethodAccess "Hashie::Extensions::MethodAccess (module)") end class MyOverridingHash < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[MethodAccessWithOverride](/gems/hashie/Hashie/Extensions/MethodAccessWithOverride "Hashie::Extensions::MethodAccessWithOverride (module)") end non_overriding = MyHash.new non_overriding.zip = 'a-dee-doo-dah' non_overriding.zip #=> [[['zip', 'a-dee-doo-dah']]] overriding = MyOverridingHash.new overriding.zip = 'a-dee-doo-dah' overriding.zip #=> 'a-dee-doo-dah' overriding.__zip #=> [[['zip', 'a-dee-doo-dah']]] ``` ### MethodOverridingInitializer The MethodOverridingInitializer extension will override hash methods if you pass in a normal hash to the constructor. It aliases any overridden method with two leading underscores. To include only this initializing functionality, you can include the single module `Hashie::Extensions::MethodOverridingInitializer`. ``` class MyHash < Hash end class MyOverridingHash < Hash include [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[MethodOverridingInitializer](/gems/hashie/Hashie/Extensions/MethodOverridingInitializer "Hashie::Extensions::MethodOverridingInitializer (module)") end non_overriding = MyHash.new(zip: 'a-dee-doo-dah') non_overriding.zip #=> [] overriding = MyOverridingHash.new(zip: 'a-dee-doo-dah') overriding.zip #=> 'a-dee-doo-dah' overriding.__zip #=> [[['zip', 'a-dee-doo-dah']]] ``` ### IndifferentAccess This extension can be mixed in to your Hash subclass to allow you to use Strings or Symbols interchangeably as keys; similar to the `params` hash in Rails. In addition, IndifferentAccess will also inject itself into sub-hashes so they behave the same. 
```
class MyHash < Hash
  include Hashie::Extensions::MergeInitializer
  include Hashie::Extensions::IndifferentAccess
end

myhash = MyHash.new(:cat => 'meow', 'dog' => 'woof')
myhash['cat'] # => "meow"
myhash[:cat]  # => "meow"
myhash[:dog]  # => "woof"
myhash['dog'] # => "woof"

# Auto-Injecting into sub-hashes.
myhash['fishes'] = {}
myhash['fishes'].class # => Hash
myhash['fishes'][:food] = 'flakes'
myhash['fishes']['food'] # => "flakes"
```

To get back a normal, not-indifferent Hash, you can use `#to_hash` on the indifferent hash. It exports the keys as strings, not symbols:

```
myhash = MyHash.new
myhash["foo"] = "bar"
myhash[:foo] #=> "bar"
normal_hash = myhash.to_hash
normal_hash["foo"] #=> "bar"
normal_hash[:foo] #=> nil
```

### IgnoreUndeclared

This extension can be mixed in to silently ignore undeclared properties on initialization instead of raising an error. This is useful when using a Trash to capture a subset of a larger hash.

```
class Person < Trash
  include Hashie::Extensions::IgnoreUndeclared

  property :first_name
  property :last_name
end

user_data = {
  first_name: 'Freddy',
  last_name: 'Nostrils',
  email: '[email protected]'
}

p = Person.new(user_data) # 'email' is silently ignored

p.first_name # => 'Freddy'
p.last_name  # => 'Nostrils'
p.email      # => NoMethodError
```

### DeepMerge

This extension allows you to easily include a recursive merging system into any Hash descendant:

```
class MyHash < Hash
  include Hashie::Extensions::DeepMerge
end

h1 = MyHash[{ x: { y: [4,5,6] }, z: [7,8,9] }]
h2 = MyHash[{ x: { y: [7,8,9] }, z: "xyz" }]

h1.deep_merge(h2) # => { x: { y: [7, 8, 9] }, z: "xyz" }
h2.deep_merge(h1) # => { x: { y: [4, 5, 6] }, z: [7, 8, 9] }
```

Like with Hash#merge in the standard library, a block can be provided to merge values:

```
class MyHash < Hash
  include Hashie::Extensions::DeepMerge
end

h1 = MyHash[{ a: 100, b: 200, c: { c1: 100 } }]
h2 = MyHash[{ b: 250, c: { c1: 200 } }]

h1.deep_merge(h2) { |key, this_val, other_val| this_val + other_val }
# => { a: 100, b: 450, c: { c1: 300 } }
```

### DeepFetch

This extension can be mixed in to provide for safe and concise retrieval of deeply nested hash values. In the event that the requested key does not exist, a block can be provided and its value will be returned. Though this is a hash extension, it conveniently allows for arrays to be present in the nested structure. This feature makes the extension particularly useful for working with JSON API responses.
``` user = { name: { first: 'Bob', last: 'Boberts' }, groups: [ { name: 'Rubyists' }, { name: 'Open source enthusiasts' } ] } user.extend [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[DeepFetch](/gems/hashie/Hashie/Extensions/DeepFetch "Hashie::Extensions::DeepFetch (module)") user.deep_fetch :name, :first # => 'Bob' user.deep_fetch :name, :middle # => 'KeyError: Could not fetch middle' # using a default block user.deep_fetch(:name, :middle) { |key| 'default' } # => 'default' # a nested array user.deep_fetch :groups, 1, :name # => 'Open source enthusiasts' ``` ### DeepFind This extension can be mixed in to provide for concise searching for keys within a deeply nested hash. It can also search through any Enumerable contained within the hash for objects with the specified key. Note: The searches are depth-first, so it is not guaranteed that a shallowly nested value will be found before a deeply nested value. ``` user = { name: { first: 'Bob', last: 'Boberts' }, groups: [ { name: 'Rubyists' }, { name: 'Open source enthusiasts' } ] } user.extend [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[DeepFind](/gems/hashie/Hashie/Extensions/DeepFind "Hashie::Extensions::DeepFind (module)") user.deep_find(:name) #=> { first: 'Bob', last: 'Boberts' } user.deep_detect(:name) #=> { first: 'Bob', last: 'Boberts' } user.deep_find_all(:name) #=> [{ first: 'Bob', last: 'Boberts' }, 'Rubyists', 'Open source enthusiasts'] user.deep_select(:name) #=> [{ first: 'Bob', last: 'Boberts' }, 'Rubyists', 'Open source enthusiasts'] ``` ### DeepLocate This extension can be mixed in to provide a depth first search based search for enumerables matching a given comparator callable. It returns all enumerables which contain at least one element, for which the given comparator returns `true`. Because the container objects are returned, the result elements can be modified in place. This way, one can perform modifications on deeply nested hashes without the need to know the exact paths. ``` books = [ { title: "Ruby for beginners", pages: 120 }, { title: "CSS for intermediates", pages: 80 }, { title: "Collection of ruby books", books: [ { title: "Ruby for the rest of us", pages: 576 } ] } ] books.extend([Hashie](/gems/hashie/Hashie "Hashie (module)")::[Extensions](/gems/hashie/Hashie/Extensions "Hashie::Extensions (module)")::[DeepLocate](/gems/hashie/Hashie/Extensions/DeepLocate "Hashie::Extensions::DeepLocate (module)")) # for ruby 1.9 leave *no* space between the lambda rocket and the braces # http://ruby-journal.com/becareful-with-space-in-lambda-hash-rocket-syntax-between-ruby-1-dot-9-and-2-dot-0/ books.deep_locate -> (key, value, object) { key == :title && value.include?("Ruby") } # => [{:title=>"Ruby for beginners", :pages=>120}, {:title=>"Ruby for the rest of us", :pages=>576}] books.deep_locate -> (key, value, object) { key == :pages && value <= 120 } # => [{:title=>"Ruby for beginners", :pages=>120}, {:title=>"CSS for intermediates", :pages=>80}] ``` StrictKeyAccess --- This extension can be mixed in to allow a Hash to raise an error when attempting to extract a value using a non-existent key. 
```
class StrictKeyAccessHash < Hash
  include Hashie::Extensions::StrictKeyAccess
end

>> hash = StrictKeyAccessHash[foo: "bar"]
=> {:foo=>"bar"}
>> hash[:foo]
=> "bar"
>> hash[:cow]
KeyError: key not found: :cow
```

Mash
---

Mash is an extended Hash that gives simple pseudo-object functionality that can be built from hashes and easily extended. It is intended to give the user easier access to the objects within the Mash through a property-like syntax, while still retaining all Hash functionality.

```
mash = Hashie::Mash.new
mash.name? # => false
mash.name  # => nil

mash.name = "My Mash"
mash.name  # => "My Mash"
mash.name? # => true
mash.inspect # => <Hashie::Mash name="My Mash">

mash = Hashie::Mash.new
# use bang methods for multi-level assignment
mash.author!.name = "<NAME>"
mash.author # => <Hashie::Mash name="<NAME>">

mash = Hashie::Mash.new
# use under-bang methods for multi-level testing
mash.author_.name? # => false
mash.inspect # => <Hashie::Mash>
```

**Note:** The `?` method will return false if a key has been set to false or nil. In order to check if a key has been set at all, use the `mash.key?('some_key')` method instead.

*How does Mash handle conflicts with pre-existing methods?*

Please note that a Mash will not override methods through the use of the property-like syntax. This can lead to confusion if you expect to be able to access a Mash value through the property-like syntax for a key that conflicts with a method name. However, it protects users of your library from the unexpected behavior of those methods being overridden behind the scenes.

```
mash = Hashie::Mash.new
mash.name = "My Mash"
mash.zip = "Method Override?"
mash.zip # => [[["name", "My Mash"]], [["zip", "Method Override?"]]]
```

Since Mash gives you the ability to set arbitrary keys that then act as methods, Hashie logs when there is a conflict between a key and a pre-existing method. You can set the logger that these messages are written to via the global Hashie logger:

```
Hashie.logger = Rails.logger
```

You can also disable the logging in subclasses of Mash:

```
class Response < Hashie::Mash
  disable_warnings
end
```

The default is to disable logging for all methods that conflict. If you would like to only disable the logging for specific methods, you can include an array of method keys:

```
class Response < Hashie::Mash
  disable_warnings :zip, :zap
end
```

This behavior is cumulative. The examples above and below behave identically.
```
class Response < Hashie::Mash
  disable_warnings :zip
  disable_warnings :zap
end
```

Disable warnings will honor the last `disable_warnings` call. Calling without parameters will override the ignored methods list, and calling with parameters will create a new ignored methods list. This includes child classes that inherit from a class that disables warnings.

```
class Message < Hashie::Mash
  disable_warnings :zip, :zap
  disable_warnings
end

# No errors will be logged
Message.new(merge: 'true', compact: true)
```

```
class Message < Hashie::Mash
  disable_warnings
end

class Response < Message
  disable_warnings :zip, :zap
end

# 2 errors will be logged
Response.new(merge: 'true', compact: true, zip: '90210', zap: 'electric')
```

If you would like to create an anonymous subclass of a Hashie::Mash with key conflict warnings disabled:

```
Hashie::Mash.quiet.new(zip: '90210', compact: true) # no errors logged
Hashie::Mash.quiet(:zip).new(zip: '90210', compact: true) # error logged for compact
```

*How does the wrapping of Mash sub-Hashes work?*

Mash duplicates any sub-Hashes that you add to it and wraps them in a Mash. This allows for infinite chaining of nested Hashes within a Mash without modifying the object(s) that are passed into the Mash. When you subclass Mash, the subclass wraps any sub-Hashes in its own class. This preserves any extensions that you mixed into the Mash subclass and allows them to work within the sub-Hashes, in addition to the main containing Mash.

```
mash = Hashie::Mash.new(name: "Hashie", dependencies: { rake: "< 11", rspec: "~> 3.0" })
mash.dependencies.class #=> Hashie::Mash

class MyGem < Hashie::Mash; end
my_gem = MyGem.new(name: "Hashie", dependencies: { rake: "< 11", rspec: "~> 3.0" })
my_gem.dependencies.class #=> MyGem
```

*How does Mash handle key types which cannot be symbolized?*

Mash preserves keys which cannot be converted *directly* to both a string and a symbol, such as numeric keys. Since Mash is conceived to provide pseudo-object functionality, handling keys which cannot represent a method call falls outside its scope of value.

```
Hashie::Mash.new('1' => 'one string', :'1' => 'one sym', 1 => 'one num')
# => {"1"=>"one sym", 1=>"one num"}
```

The symbol key `:'1'` is converted to the string `'1'` to support indifferent access and consequently its value `'one sym'` will override the previously set `'one string'`.
However, the subsequent key of `1` cannot directly convert to a symbol and is therefore **not** converted to the string `'1'` that would otherwise override the previously set value of `'one sym'`.

*What else can Mash do?*

Mash also allows you to transform any file into a Mash object.

```
#/etc/config/settings/twitter.yml
development:
  api_key: 'api_key'
production:
  api_key: <%= ENV['API_KEY'] %> #let's say that ENV['API_KEY'] is set to 'abcd'
```

```
mash = Mash.load('settings/twitter.yml')
mash.development.api_key  # => 'api_key'
mash.development.api_key = "foo" # => <# RuntimeError can't modify frozen ...>
mash.development.api_key? # => true
```

You can also load with a `Pathname` object:

```
mash = Mash.load(Pathname 'settings/twitter.yml')
mash.development.api_key # => 'api_key'
```

You can access a Mash from another class:

```
mash = Mash.load('settings/twitter.yml')[ENV['RACK_ENV']]
Twitter.extend mash.to_module # NOTE: if you want another name than settings, call: to_module('my_settings')
Twitter.settings.api_key # => 'abcd'
```

You can use another parser (by default: [YamlErbParser](lib/hashie/extensions/parsers/yaml_erb_parser.rb)):

```
#/etc/data/user.csv
id | name    | lastname
---|---      | ---
1  | John    | Doe
2  | Laurent | Garnier
```

```
mash = Mash.load('data/user.csv', parser: MyCustomCsvParser)
# => { 1 => { name: 'John', lastname: 'Doe'}, 2 => { name: 'Laurent', lastname: 'Garnier' } }
mash[1] #=> { name: 'John', lastname: 'Doe' }
```

The `Mash#load` method calls `YAML.safe_load(path, [], [], true)`. Specify `permitted_symbols`, `permitted_classes` and `aliases` options as needed.

```
Mash.load('data/user.csv', permitted_classes: [Symbol], permitted_symbols: [], aliases: false)
```

### KeepOriginalKeys

This extension can be mixed into a Mash to keep the form of any keys passed directly into the Mash. By default, Mash converts symbol keys to strings to give indifferent access. This extension still allows indifferent access, but keeps the form of the keys to eliminate confusion when you're not expecting the keys to change.

```
class KeepingMash < ::Hashie::Mash
  include Hashie::Extensions::Mash::KeepOriginalKeys
end

mash = KeepingMash.new(:symbol_key => :symbol, 'string_key' => 'string')
mash.to_hash == { :symbol_key => :symbol, 'string_key' => 'string' } #=> true

mash.symbol_key    #=> :symbol
mash[:symbol_key]  #=> :symbol
mash['symbol_key'] #=> :symbol

mash.string_key    #=> 'string'
mash['string_key'] #=> 'string'
mash[:string_key]  #=> 'string'
```

### PermissiveRespondTo

By default, Mash only states that it responds to built-in methods, affixed methods (e.g. setters, underbangs, etc.), and keys that it currently contains. That means it won't state that it responds to a getter for an unset key, as in the following example:

```
mash = Hashie::Mash.new(a: 1)
mash.respond_to? :b #=> false
```

This means that by default Mash is not a perfect match for use with a SimpleDelegator since the delegator will not forward messages for unset keys to the Mash even though it can handle them. In order to have a SimpleDelegator-compatible Mash, you can use the `PermissiveRespondTo` extension to make Mash respond to anything.

```
class PermissiveMash < Hashie::Mash
  include Hashie::Extensions::Mash::PermissiveRespondTo
end

mash = PermissiveMash.new(a: 1)
mash.respond_to? :b #=> true
```

This comes at the cost of approximately 20% performance for initialization and setters and 19KB of permanent memory growth for each such class that you create.

### SafeAssignment

This extension can be mixed into a Mash to guard the attempted overwriting of methods by property setters. When mixed in, the Mash will raise an `ArgumentError` if you attempt to write a property with the same name as an existing method.

```
class SafeMash < ::Hashie::Mash
  include Hashie::Extensions::Mash::SafeAssignment
end

safe_mash = SafeMash.new
safe_mash.zip   = 'Test' # => ArgumentError
safe_mash[:zip] = 'test' # => still ArgumentError
```

### SymbolizeKeys

This extension can be mixed into a Mash to change the default behavior of converting keys to strings. After mixing this extension into a Mash, the Mash will convert all string keys to symbols. This can be useful with keyword arguments, which require symbol keys.

```
class SymbolizedMash < ::Hashie::Mash
  include Hashie::Extensions::Mash::SymbolizeKeys
end

symbol_mash = SymbolizedMash.new
symbol_mash['test'] = 'value'
symbol_mash.test #=> 'value'
symbol_mash.to_h #=> {test: 'value'}

def example(test:)
  puts test
end

example(symbol_mash) #=> value
```

There is a major benefit, coupled with a major trade-off, to this decision (at least on older Rubies). As a benefit, by using symbols as keys, you will be able to use the implicit conversion of a Mash via the `#to_hash` method to destructure (or splat) the contents of a Mash out to a block. This can be handy for doing iterations through the Mash's keys and values, as follows:

```
symbol_mash = SymbolizedMash.new(id: 123, name: 'Rey')
symbol_mash.each do |key, value|
  # key is :id, then :name
  # value is 123, then 'Rey'
end
```

However, on Rubies less than 2.0, this means that every key you send to the Mash will generate a symbol.
Since symbols are not garbage-collected on older versions of Ruby, this can cause a slow memory leak when using a symbolized Mash with data generated from user input.

### DefineAccessors

This extension can be mixed into a Mash so that it behaves like `OpenStruct`. It reduces the overhead of `method_missing` magic by lazily defining field accessors when they're requested.

```
class MyHash < ::Hashie::Mash
  include Hashie::Extensions::Mash::DefineAccessors
end

mash = MyHash.new
MyHash.method_defined?(:foo=) #=> false
mash.foo = 123
MyHash.method_defined?(:foo=) #=> true
MyHash.method_defined?(:foo) #=> false
mash.foo #=> 123
MyHash.method_defined?(:foo) #=> true
```

You can also extend the existing mash without defining a class:

```
mash = ::Hashie::Mash.new.with_accessors!
```

Dash
---

Dash is an extended Hash that has a discrete set of defined properties and only those properties may be set on the hash. Additionally, you can set defaults for each property. You can also flag a property as required. Required properties will raise an exception if unset. Another option is `message` for required properties, which allows you to add a custom message for a required property.

You can also conditionally require certain properties by passing a Proc or Symbol. If a Proc is provided, it will be run in the context of the Dash instance. If a Symbol is provided, the value returned for the property or method of the same name will be evaluated. The property will be required if the result of the conditional is truthy.

```
class Person < Hashie::Dash
  property :name, required: true
  property :age, required: true, message: 'must be set.'
  property :email
  property :phone, required: -> { email.nil? }, message: 'is required if email is not set.'
  property :pants, required: :weekday?, message: 'are only required on weekdays.'
  property :occupation, default: 'Rubyist'

  def weekday?
    [ Time.now.saturday?, Time.now.sunday? ].none?
  end
end

p = Person.new # => ArgumentError: The property 'name' is required for this Dash.
p = Person.new(name: 'Bob') # => ArgumentError: The property 'age' must be set.

p = Person.new(name: "Bob", age: 18)
p.name # => 'Bob'
p.name = nil # => ArgumentError: The property 'name' is required for this Dash.
p.age  # => 18
p.age = nil # => ArgumentError: The property 'age' must be set.
p.email = '[email protected]'
p.occupation # => 'Rubyist'
p.email      # => '[email protected]'
p[:awesome]  # => NoMethodError
p[:occupation] # => 'Rubyist'
p.update_attributes!(name: 'Trudy', occupation: 'Evil')
p.occupation # => 'Evil'
p.name # => 'Trudy'
p.update_attributes!(occupation: nil)
p.occupation # => 'Rubyist'
```

Properties defined as symbols are not the same thing as properties defined as strings.
``` class Tricky < [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Dash](/gems/hashie/Hashie/Dash "Hashie::Dash (class)") property :trick property 'trick' end p = Tricky.new(trick: 'one', 'trick' => 'two') p.trick # => 'one', always symbol version p[:trick] # => 'one' p['trick'] # => 'two' ``` Note that accessing a property as a method always uses the symbol version. ``` class Tricky < [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Dash](/gems/hashie/Hashie/Dash "Hashie::Dash (class)") property 'trick' end p = Tricky.new('trick' => 'two') p.trick # => NoMethodError ``` If you would like to update a Dash and use any default values set in the case of a `nil` value, use `#update_attributes!`. ``` class WithDefaults < [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Dash](/gems/hashie/Hashie/Dash "Hashie::Dash (class)") property :description, default: 'none' end dash = WithDefaults.new dash.description #=> 'none' dash.description = 'You committed one of the classic blunders!' dash.description #=> 'You committed one of the classic blunders!' dash.description = nil dash.description #=> nil dash.description = 'Only slightly less known is ...' dash.update_attributes!(description: nil) dash.description #=> 'none' ``` ### Potential Gotchas Because Dashes are subclasses of the built-in Ruby Hash class, the double-splat operator takes the Dash as-is without any conversion. This can lead to strange behavior when you use the double-splat operator on a Dash as the first part of a keyword list or built Hash. For example: ``` class Foo < [Hashie](/gems/hashie/Hashie "Hashie (module)")::[Dash](/gems/hashie/Hashie/Dash "Hashie::Dash (class)") property :bar end foo = Foo.new(bar: 'baz') #=> {:bar=>"baz"} qux = { **foo, quux: 'corge' } #=> {:bar=> "baz", :quux=>"corge"} qux.is_a?(Foo) #=> true qux[:quux] #=> raise NoMethodError, "The property 'quux' is not defined for Foo." qux.key?(:quux) #=> true ``` You can work around this problem in two ways: 1. Call `#to_h` on the resulting object to convert it into a Hash. 2. Use the double-splat operator on the Dash as the last argument in the Hash literal. This will cause the resulting object to be a Hash instead of a Dash, thereby circumventing the problem. ``` qux = { **foo, quux: 'corge' }.to_h #=> {:bar=> "baz", :quux=>"corge"} qux.is_a?(Hash) #=> true qux[:quux] #=> "corge" qux = { quux: 'corge', **foo } #=> {:quux=>"corge", :bar=> "baz"} qux.is_a?(Hash) #=> true qux[:quux] #=> "corge" ``` ### PropertyTranslation The `Hashie::Extensions::Dash::PropertyTranslation` mixin extends a Dash with the ability to remap keys from a source hash. Property translation is useful when you need to read data from another application -- such as a Java API -- where the keys are named differently from Ruby conventions. 
```
class PersonHash < Hashie::Dash
  include Hashie::Extensions::Dash::PropertyTranslation

  property :first_name, from: :firstName
  property :last_name, from: :lastName
  property :first_name, from: :f_name
  property :last_name, from: :l_name
end

person = PersonHash.new(firstName: 'Michael', l_name: 'Bleigh')
person[:first_name] #=> 'Michael'
person[:last_name]  #=> 'Bleigh'
```

You can also use a lambda to translate the value. This is particularly useful when you want to ensure the type of data you're wrapping.

```
class DataModelHash < Hashie::Dash
  include Hashie::Extensions::Dash::PropertyTranslation

  property :id, transform_with: ->(value) { value.to_i }
  property :created_at, from: :created, with: ->(value) { Time.parse(value) }
end

model = DataModelHash.new(id: '123', created: '2014-04-25 22:35:28')
model.id.class         #=> Integer (Fixnum if you are using Ruby 2.3 or lower)
model.created_at.class #=> Time
```

### Mash and Rails 4 Strong Parameters

To enable compatibility with Rails 4 use the [hashie-forbidden_attributes](https://github.com/Maxim-Filimonov/hashie-forbidden_attributes) gem.

### Coercion

If you want to use `Hashie::Extensions::Coercion` together with `Dash` then you probably want to use `Hashie::Extensions::Dash::Coercion` instead. This extension automatically includes `Hashie::Extensions::Coercion` and also adds a convenient `:coerce` option to `property` so you can define coercion in one line instead of using `property` and `coerce_key` separately:

```
class UserHash < Hashie::Dash
  include Hashie::Extensions::Coercion

  property :id
  property :posts

  coerce_key :posts, Array[PostHash]
end
```

This is the same as:

```
class UserHash < Hashie::Dash
  include Hashie::Extensions::Dash::Coercion

  property :id
  property :posts, coerce: Array[PostHash]
end
```

### PredefinedValues

The `Hashie::Extensions::Dash::PredefinedValues` mixin extends a Dash with the ability to accept predefined values on a property.

```
class UserHash < Hashie::Dash
  include Hashie::Extensions::Dash::PredefinedValues

  property :gender, values: %i[male female prefer_not_to_say]
```
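The excerpt stops at the class definition above. As a minimal, hedged sketch of how such a Dash would then be used (the exact error class below is an assumption based on the extension's description, not something stated in this document):

```
user = UserHash.new(gender: :male)
user[:gender] #=> :male

# Assumption: assigning a value outside the predefined list is rejected;
# an ArgumentError is the kind of error Dash validations typically raise.
UserHash.new(gender: :unknown) # expected to raise (assumed ArgumentError)
```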
corrr
cran
R
Package ‘corrr’ October 12, 2022 Type Package Title Correlations in R Version 0.4.4 Description A tool for exploring correlations. It makes it possible to easily perform routine tasks when exploring correlation matrices such as ignoring the diagonal, focusing on the correlations of certain variables against others, or rearranging and visualizing the matrix in terms of the strength of the correlations. License MIT + file LICENSE URL https://github.com/tidymodels/corrr, https://corrr.tidymodels.org BugReports https://github.com/tidymodels/corrr/issues Depends R (>= 3.4) Imports dplyr (>= 1.0.0), ggplot2 (>= 2.2.0), ggrepel (>= 0.6.5), glue (>= 1.4.2), purrr (>= 0.2.2), rlang (>= 0.4.0), seriation (>= 1.2-0), tibble (>= 2.0) Suggests covr, DBI, dbplyr (>= 1.2.1), knitr (>= 1.13), rmarkdown (>= 0.9.6), RSQLite, sparklyr (>= 0.9), testthat (>= 3.0.0) VignetteBuilder knitr Config/Needs/website tidyverse/tidytemplate Encoding UTF-8 RoxygenNote 7.2.1.9000 Config/testthat/edition 3 NeedsCompilation no Author <NAME> [aut, cre], <NAME> [aut], <NAME> [aut] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2022-08-16 20:40:01 UTC R topics documented: as_cord... 2 as_matri... 3 autoplot.cor_d... 3 colpair_ma... 4 correlat... 5 dic... 7 fashio... 8 first_co... 9 focu... 9 focus_i... 10 network_plo... 11 pair_... 12 rearrang... 12 retrac... 13 rplo... 14 shav... 15 stretc... 16 as_cordf Coerce lists and matrices to correlation data frames Description A wrapper function to coerce objects in a valid format (such as correlation matrices created using the base function, cor) into a correlation data frame. Usage as_cordf(x, diagonal = NA) Arguments x A list, data frame or matrix that can be coerced into a correlation data frame. diagonal Value (typically numeric or NA) to set the diagonal to Value A correlation data frame Examples x <- cor(mtcars) as_cordf(x) as_cordf(x, diagonal = 1) as_matrix Convert a correlation data frame to matrix format Description Convert a correlation data frame to original matrix format. Usage as_matrix(x, diagonal) Arguments x A correlation data frame. See correlate or as_cordf. diagonal Value (typically numeric or NA) to set the diagonal to Value Correlation matrix Examples x <- correlate(mtcars) as_matrix(x) autoplot.cor_df Create a correlation matrix from a cor_df object Description This method provides a good first visualization of the correlation matrix. Usage ## S3 method for class 'cor_df' autoplot( object, ..., method = "PCA", triangular = c("upper", "lower", "full"), barheight = 20, low = "#B2182B", mid = "#F1F1F1", high = "#2166AC" ) Arguments object A cor_df object. ... this argument is ignored. method String specifying the arrangement (clustering) method. Clustering is achieved via seriate, which can be consulted for a complete list of clustering methods. Default = "PCA". triangular Which part of the correlation matrix should be shown? Must be one of "upper", "lower", or "full", and defaults to "upper". barheight A single, non-negative number. Is passed to ggplot2::guide_colourbar() to determine the height of the guide colorbar. Defaults to 20, is likely to need manual adjustments. low A single color. Is passed to ggplot2::scale_fill_gradient2(). The color of negative correlation. Defaults to "#B2182B". mid A single color. Is passed to ggplot2::scale_fill_gradient2(). The color of no correlation. Defaults to "#F1F1F1". high A single color. Is passed to ggplot2::scale_fill_gradient2(). The color of the positive correlation. Defaults to "#2166AC". 
Value A ggplot object Examples x <- correlate(mtcars) autoplot(x) autoplot(x, triangular = "lower") autoplot(x, triangular = "full") colpair_map Apply a function to all pairs of columns in a data frame Description colpair_map() transforms a data frame by applying a function to each pair of its columns. The result is a correlation data frame (see correlate for details). Usage colpair_map(.data, .f, ..., .diagonal = NA) Arguments .data A data frame or data frame extension (e.g. a tibble). .f A function. ... Additional arguments passed on to the mapped function. .diagonal Value at which to set the diagonal (defaults to NA). Value A correlation data frame (cor_df). Examples ## Using `stats::cov` produces a covariance data frame. colpair_map(mtcars, cov) ## Function to get the p-value from a t-test: calc_p_value <- function(vec_a, vec_b) { t.test(vec_a, vec_b)$p.value } colpair_map(mtcars, calc_p_value) correlate Correlation Data Frame Description An implementation of stats::cor(), which returns a correlation data frame rather than a matrix. See details below. Additional adjustment include the use of pairwise deletion by default. Usage correlate( x, y = NULL, use = "pairwise.complete.obs", method = "pearson", diagonal = NA, quiet = FALSE ) Arguments x a numeric vector, matrix or data frame. y NULL (default) or a vector, matrix or data frame with compatible dimensions to x. The default is equivalent to y = x (but more efficient). use an optional character string giving a method for computing covariances in the presence of missing values. This must be (an abbreviation of) one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs". method a character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman": can be abbreviated. diagonal Value (typically numeric or NA) to set the diagonal to quiet Set as TRUE to suppress message about method and use parameters. Details This function returns a correlation matrix as a correlation data frame in the following format: • A tibble (see tibble) • An additional class, "cor_df" • A "term" column • Standardized variances (the matrix diagonal) set to missing values by default (NA) so they can be ignored in calculations. The use argument and its possible values are inherited from stats::cor(): • "everything": NAs will propagate conceptually, i.e. a resulting value will be NA whenever one of its contributing observations is NA • "all.obs": the presence of missing observations will produce an error • "complete.obs": correlations will be computed from complete observations, with an error being raised if there are no complete cases. • "na.or.complete": correlations will be computed from complete observations, returning an NA if there are no complete cases. • "pairwise.complete.obs": the correlation between each pair of variables is computed using all complete pairs of those particular variables. As of version 0.4.3, the first column of a cor_df object is named "term". In previous versions this first column was named "rowname". There is a ggplot2::autoplot() method for quickly visualizing the correlation matrix, for more information see autoplot.cor_df(). 
Value A correlation data frame cor_df Examples ## Not run: correlate(iris) ## End(Not run) correlate(iris[-5]) correlate(mtcars) ## Not run: # Also supports DB backend and collects results into memory library(sparklyr) sc <- spark_connect(master = "local") mtcars_tbl <- copy_to(sc, mtcars) mtcars_tbl %>% correlate(use = "pairwise.complete.obs", method = "spearman") spark_disconnect(sc) ## End(Not run) dice Returns a correlation table with the selected fields only Description Returns a correlation table with the selected fields only Usage dice(x, ...) Arguments x A correlation table, class cor_df ... A list of variables in the correlation table Examples dice(correlate(mtcars), mpg, wt, am) fashion Fashion a correlation data frame for printing. Description For the purpose of printing, convert a correlation data frame into a noquote matrix with the correlations cleanly formatted (leading zeros removed; spaced for signs) and the diagonal (or any NA) left blank. Usage fashion(x, decimals = 2, leading_zeros = FALSE, na_print = "") Arguments x Scalar, vector, matrix or data frame. decimals Number of decimal places to display for numbers. leading_zeros Should leading zeros be displayed for decimals (e.g., 0.1)? If FALSE, they will be removed. na_print Character string indicating NA values in printed output Value noquote. Also a data frame if x is a matrix or data frame. Examples # Examples with correlate() library(dplyr) mtcars %>% correlate() %>% fashion() mtcars %>% correlate() %>% fashion(decimals = 1) mtcars %>% correlate() %>% fashion(leading_zeros = TRUE) mtcars %>% correlate() %>% fashion(na_print = "*") # But doesn't have to include correlate() mtcars %>% fashion(decimals = 3) c(0.234, 134.23, -.23, NA) %>% fashion(na_print = "X") first_col Add a first column to a data.frame Description Add a first column to a data.frame. This is most commonly used to append a term column to create a cor_df. Usage first_col(df, ..., var = "term") Arguments df Data frame ... Values to go into the column var Label for the column, with the default "term" Examples first_col(mtcars, 1:nrow(mtcars)) focus Focus on section of a correlation data frame. Description Convenience function to select a set of variables from a correlation matrix to keep as the columns, and exclude these or all other variables from the rows. This function will take a correlate correlation matrix, and expression(s) suited for dplyr::select(). The selected variables will remain in the columns, and these, or all other variables, will be excluded from the rows based on ‘same’. For a complete list of methods for using this function, see select. Usage focus(x, ..., mirror = FALSE) focus_(x, ..., .dots, mirror) Arguments x cor_df. See correlate. ... One or more unquoted expressions separated by commas. Variable names can be used as if they were positions in the data frame, so expressions like ‘x:y’ can be used to select a range of variables. mirror Boolean. Whether to mirror the selected columns in the rows or not. .dots Use focus_ to do standard evaluations. See select. Value A tbl or, if mirror = TRUE, a cor_df (see correlate). Examples library(dplyr) x <- correlate(mtcars) focus(x, mpg, cyl) # Focus on correlations of mpg and cyl with all other variables focus(x, -disp, -mpg, mirror = TRUE) # Remove disp and mpg from columns and rows x <- correlate(iris[-5]) focus(x, -matches("Sepal")) # Focus on correlations of non-Sepal # variables with Sepal variables.
focus_if Conditionally focus correlation data frame Description Apply a predicate function to each column of correlations. Columns that evaluate to TRUE will be included in a call to focus. Usage focus_if(x, .predicate, ..., mirror = FALSE) Arguments x Correlation data frame or object to be coerced to one via as_cordf. .predicate A predicate function to be applied to the columns. The columns for which .pred- icate returns TRUE will be included as variables in focus. ... Additional arguments to pass to the predicate function if not anonymous. mirror Boolean. Whether to mirror the selected columns in the rows or not. Value A tibble or, if mirror = TRUE, a correlation data frame. Examples library(dplyr) any_greater_than <- function(x, val) { mean(abs(x), na.rm = TRUE) > val } x <- correlate(mtcars) x %>% focus_if(any_greater_than, .6) x %>% focus_if(any_greater_than, .6, mirror = TRUE) %>% network_plot() network_plot Network plot of a correlation data frame Description Output a network plot of a correlation data frame in which variables that are more highly correlated appear closer together and are joined by stronger paths. Paths are also colored by their sign (blue for positive and red for negative). The proximity of the points are determined using multidimensional clustering. Usage network_plot( rdf, min_cor = 0.3, legend = c("full", "range", "none"), colours = c("indianred2", "white", "skyblue1"), repel = TRUE, curved = TRUE, colors ) Arguments rdf Correlation data frame (see correlate) or object that can be coerced to one (see as_cordf). min_cor Number from 0 to 1 indicating the minimum value of correlations (in absolute terms) to plot. legend How should the colors and legend for the correlation values be displayed? The options are "full" (the default) for -1 to 1 with a legend, "range" for the range of correlation values in rdf with a legend, or "none" for colors between -1 to 1 with no legend displayed. colours, colors Vector of colors to use for n-color gradient. repel Should variable labels repel each other? If TRUE, text is added via geom_text_repel instead of geom_text curved Should the paths be curved? If TRUE, paths are added via geom_curve; if FALSE, via geom_segment Examples x <- correlate(mtcars) network_plot(x) network_plot(x, min_cor = .1) network_plot(x, min_cor = .6) network_plot(x, min_cor = .2, colors = c("red", "green"), legend = "full") network_plot(x, min_cor = .2, colors = c("red", "green"), legend = "range") pair_n Number of pairwise complete cases. Description Compute the number of complete cases in a pairwise fashion for x (and y). Usage pair_n(x, y = NULL) Arguments x a numeric vector, matrix or data frame. y NULL (default) or a vector, matrix or data frame with compatible dimensions to x. The default is equivalent to y = x (but more efficient). Value Matrix of pairwise sample sizes (number of complete cases). Examples pair_n(mtcars) rearrange Re-arrange a correlation data frame Description Re-arrange a correlation data frame to group highly correlated variables closer together. Usage rearrange(x, method = "PC", absolute = TRUE) Arguments x cor_df. See correlate. method String specifying the arrangement (clustering) method. Clustering is achieved via seriate, which can be consulted for a complete list of clustering methods. Default = "PCA". absolute Boolean whether absolute values for the correlations should be used for cluster- ing. Value cor_df. See correlate. 
Examples x <- correlate(mtcars) rearrange(x) # Default settings rearrange(x, method = "HC") # Different seriation method rearrange(x, absolute = FALSE) # Not using absolute values for arranging retract Creates a data frame from a stretched correlation table Description retract does the opposite of what stretch does Usage retract(.data, x, y, val) Arguments .data A data.frame or tibble containing at least three variables: x, y and the value x The name of the column to use from .data as x y The name of the column to use from .data as y val The name of the column to use from .data to use as the value Examples x <- correlate(mtcars) xs <- stretch(x) retract(xs) rplot Plot a correlation data frame. Description Plot a correlation data frame using ggplot2. Usage rplot( rdf, legend = TRUE, shape = 16, colours = c("indianred2", "white", "skyblue1"), print_cor = FALSE, colors, .order = c("default", "alphabet") ) Arguments rdf Correlation data frame (see correlate) or object that can be coerced to one (see as_cordf). legend Boolean indicating whether a legend mapping the colors to the correlations should be displayed. shape geom_point aesthetic. colours, colors Vector of colors to use for n-color gradient. print_cor Boolean indicating whether the correlations should be printed over the shapes. .order Either "default", meaning x and y variables keep the same order as the columns in x, or "alphabet", meaning the variables are alphabetized. Details Each value in the correlation data frame is represented by one point/circle in the output plot. The size of each point corresponds to the absolute value of the correlation (via the size aesthetic). The color of each point corresponds to the signed value of the correlation (via the color aesthetic). Value Plots a correlation data frame Examples x <- correlate(mtcars) rplot(x) # Common use is following rearrange and shave x <- rearrange(x, absolute = FALSE) x <- shave(x) rplot(x) rplot(x, print_cor = TRUE) rplot(x, shape = 20, colors = c("red", "green"), legend = TRUE) shave Shave off upper/lower triangle. Description Convert the upper or lower triangle of a correlation data frame (cor_df) to missing values. Usage shave(x, upper = TRUE) Arguments x cor_df. See correlate. upper Boolean. If TRUE, set upper triangle to NA; lower triangle if FALSE. Value cor_df. See correlate. Examples x <- correlate(mtcars) shave(x) # Default; shave upper triangle shave(x, upper = FALSE) # shave lower triangle stretch Stretch correlation data frame into long format. Description stretch is a specified implementation of tidyr::gather() to be applied to a correlation data frame. It will gather the columns into a long-format data frame. The term column is handled automatically. Usage stretch(x, na.rm = FALSE, remove.dups = FALSE) Arguments x cor_df. See correlate. na.rm Boolean. Whether rows with an NA correlation (originally the matrix diagonal) should be dropped? Will automatically be set to TRUE if mirror is FALSE. remove.dups Removes duplicate entries, without removing all NAs Value tbl with three columns (x and y variables, and their correlation) Examples x <- correlate(mtcars) stretch(x) # Convert all to long format stretch(x, na.rm = TRUE) # omit NAs (diagonal in this case) x <- shave(x) # use shave to set upper triangle to NA and then... stretch(x, na.rm = TRUE) # omit all NAs, therefore keeping each # correlation only once.
github.com/ivaaaan/smug
go
Go
README [¶](#section-readme) --- ### Smug - tmux session manager [![Actions Status](https://github.com/ivaaaan/smug/workflows/Go/badge.svg)](https://github.com/ivaaaan/smug/actions) [![Go Report Card](https://goreportcard.com/badge/github.com/ivaaaan/smug)](https://goreportcard.com/report/github.com/ivaaaan/smug) Inspired by [tmuxinator](https://github.com/tmuxinator/tmuxinator) and [tmuxp](https://github.com/tmux-python/tmuxp). Smug automates your [tmux](https://github.com/tmux/tmux) workflow. You can create a single configuration file, and Smug will create all the required windows and panes from it. ![gif](https://raw.githubusercontent.com/ivaaaan/gifs/master/smug.gif) The configuration used in this GIF can be found [here](#readme-example-2). #### Installation ##### Download from the releases page Download the latest version of Smug from the [releases page](https://github.com/ivaaaan/smug/releases) and then run: ``` mkdir smug && tar -xzf smug_0.1.0_Darwin_x86_64.tar.gz -C ./smug && sudo mv smug/smug /usr/local/bin && rm -rf smug ``` Don't forget to replace `smug_0.1.0_Darwin_x86_64.tar.gz` with the archive that you've downloaded. ##### Git ###### Prerequisite Tools * [Git](https://git-scm.com/) * [Go (we test it with the last 2 major versions)](https://golang.org/dl/) ###### Fetch from GitHub The easiest way is to clone Smug from GitHub and install it using `go-cli`: ``` cd /tmp git clone https://github.com/ivaaaan/smug.git cd smug go install ``` ##### macOS On macOS, you can install Smug using [MacPorts](https://www.macports.org) or [Homebrew](https://brew.sh). ###### Homebrew ``` brew install smug ``` ###### MacPorts ``` sudo port selfupdate sudo port install smug ``` ##### Linux ###### Arch There's [AUR](https://aur.archlinux.org/packages/smug) with smug. ``` git clone https://aur.archlinux.org/smug.git cd smug makepkg -si ``` #### Usage ``` smug <command> [<project>] [-f, --file <file>] [-w, --windows <window>]... [-a, --attach] [-d, --debug] ``` ##### Options: ``` -f, --file A custom path to a config file -w, --windows List of windows to start. If session exists, those windows will be attached to current session. -a, --attach Force switch client for a session -i, --inside-current-session Create all windows inside current session -d, --debug Print all commands to ~/.config/smug/smug.log --detach Detach session. The same as `-d` flag in the tmux ``` ##### Custom settings You can pass custom settings into your configuration file. 
Use `${variable_name}` syntax in your config and then pass key-value args: ``` xyz@localhost:~$ smug start project variable_name=value ``` ##### Examples To create a new project, or edit an existing one in the `$EDITOR`: ``` xyz@localhost:~$ smug new project xyz@localhost:~$ smug edit project ``` To start/stop a project and all windows, run: ``` xyz@localhost:~$ smug start project xyz@localhost:~$ smug stop project ``` Also, smug has aliases for most of the commands: ``` xyz@localhost:~$ smug project # the same as "smug start project" xyz@localhost:~$ smug st project # the same as "smug stop project" xyz@localhost:~$ smug p ses # the same as "smug print ses" ``` When you already have a running session and only want to create some windows from the configuration file, you can do something like this: ``` xyz@localhost:~$ smug start project:window1 xyz@localhost:~$ smug start project:window1,window2 xyz@localhost:~$ smug start project -w window1 xyz@localhost:~$ smug start project -w window1 -w window2 xyz@localhost:~$ smug stop project:window1 xyz@localhost:~$ smug stop project -w window1 -w window2 ``` Also, you are not obliged to put your files in the `~/.config/smug` directory. You can pass a custom path with the `-f` flag: ``` xyz@localhost:~$ smug start -f ./project.yml xyz@localhost:~$ smug stop -f ./project.yml xyz@localhost:~$ smug start -f ./project.yml -w window1 -w window2 ``` #### Configuration Configuration files can be stored in the `~/.config/smug` directory in the `YAML` format, e.g. `~/.config/smug/your_project.yml`. You may also create a file named `.smug.yml` in the current working directory, which will be used by default. ##### Examples ###### Example 1 ``` session: blog root: ~/Developer/blog before_start: - docker-compose -f my-microservices/docker-compose.yml up -d # my-microservices/docker-compose.yml is relative to `root` env: FOO: BAR stop: - docker stop $(docker ps -q) windows: - name: code root: blog # a relative path to root manual: true # you can start this window only manually, using the -w arg layout: main-vertical commands: - docker-compose start panes: - type: horizontal root: . commands: - docker-compose exec php /bin/sh - clear - name: infrastructure root: ~/Developer/blog/my-microservices layout: tiled panes: - type: horizontal root: . commands: - docker-compose up -d - docker-compose exec php /bin/sh - clear ``` ###### Example 2 ``` session: blog root: ~/Code/blog before_start: - docker-compose up -d stop: - docker-compose stop windows: - name: code layout: main-horizontal commands: - $EDITOR app/dependencies.php panes: - type: horizontal commands: - make run-tests - name: ssh commands: - ssh -i ~/keys/blog.pem [email protected] ```
@getodk/slonik
npm
JavaScript
Slonik === A [battle-tested](#battle-tested) PostgreSQL client with strict types, detailed logging and assertions. (The above GIF shows Slonik producing [query logs](https://github.com/gajus/slonik#logging). Slonik produces logs using [Roarr](https://github.com/gajus/roarr). Logs include stack trace of the actual query invocation location and values used to execute the query.) Sponsors --- If you value my work and want to see Slonik and [many other of my](https://github.com/gajus/) Open-Source projects to be continuously improved, then please consider becoming a patron: Principles --- * Promotes writing raw SQL. * Discourages ad-hoc dynamic generation of SQL. Read: [Stop using Knex.js](https://medium.com/@gajus/bf410349856c) Note: Using this project does not require TypeScript. It is a regular ES6 module. Ignore the type definitions used in the documentation if you do not use a type system. Features --- * [Assertions and type safety](#repeating-code-patterns-and-type-safety). * [Connection mocking](#mocking-slonik). * [Safe connection handling](#protecting-against-unsafe-connection-handling). * [Safe transaction handling](#protecting-against-unsafe-transaction-handling). * [Safe value interpolation](#protecting-against-unsafe-value-interpolation). * [Transaction nesting](#transaction-nesting). * [Transaction retrying](#transaction-retrying) * Detailed [logging](#slonik-debugging). * [Asynchronous stack trace resolution](#capture-stack-trace). * [Middlewares](#slonik-interceptors). * [Mapped errors](#error-handling). * [ESLint plugin](https://github.com/gajus/eslint-plugin-sql). Contents --- * [Slonik](#slonik) + [Sponsors](#slonik-sponsors) + [Principles](#slonik-principles) + [Features](#slonik-features) + [Contents](#slonik-contents) + [About Slonik](#slonik-about-slonik) - [Battle-Tested](#slonik-about-slonik-battle-tested) - [Origin of the name](#slonik-about-slonik-origin-of-the-name) - [Repeating code patterns and type safety](#slonik-about-slonik-repeating-code-patterns-and-type-safety) - [Protecting against unsafe connection handling](#slonik-about-slonik-protecting-against-unsafe-connection-handling) - [Protecting against unsafe transaction handling](#slonik-about-slonik-protecting-against-unsafe-transaction-handling) - [Protecting against unsafe value interpolation](#slonik-about-slonik-protecting-against-unsafe-value-interpolation) + [Documentation](#slonik-documentation) + [Usage](#slonik-usage) - [Create connection](#slonik-usage-create-connection) - [End connection pool](#slonik-usage-end-connection-pool) - [Describing the current state of the connection pool](#slonik-usage-describing-the-current-state-of-the-connection-pool) - [API](#slonik-usage-api) - [Default configuration](#slonik-usage-default-configuration) - [Using native libpq bindings](#slonik-usage-using-native-libpq-bindings) - [Checking out a client from the connection pool](#slonik-usage-checking-out-a-client-from-the-connection-pool) - [Mocking Slonik](#slonik-usage-mocking-slonik) + [How are they different?](#slonik-how-are-they-different) - [`pg` vs `slonik`](#slonik-how-are-they-different-pg-vs-slonik) - [`pg-promise` vs `slonik`](#slonik-how-are-they-different-pg-promise-vs-slonik) + [Type parsers](#slonik-type-parsers) - [Built-in type parsers](#slonik-type-parsers-built-in-type-parsers) + [Interceptors](#slonik-interceptors) - [Interceptor methods](#slonik-interceptors-interceptor-methods) - [Community interceptors](#slonik-interceptors-community-interceptors) + [Recipes](#slonik-recipes) - [Inserting 
large number of rows](#slonik-recipes-inserting-large-number-of-rows) - [Routing queries to different connections](#slonik-recipes-routing-queries-to-different-connections) + [`sql` tag](#slonik-sql-tag) + [Value placeholders](#slonik-value-placeholders) - [Tagged template literals](#slonik-value-placeholders-tagged-template-literals) - [Manually constructing the query](#slonik-value-placeholders-manually-constructing-the-query) - [Nesting `sql`](#slonik-value-placeholders-nesting-sql) + [Query building](#slonik-query-building) - [`sql.array`](#slonik-query-building-sql-array) - [`sql.binary`](#slonik-query-building-sql-binary) - [`sql.identifier`](#slonik-query-building-sql-identifier) - [`sql.json`](#slonik-query-building-sql-json) - [`sql.join`](#slonik-query-building-sql-join) - [`sql.unnest`](#slonik-query-building-sql-unnest) + [Query methods](#slonik-query-methods) - [`any`](#slonik-query-methods-any) - [`anyFirst`](#slonik-query-methods-anyfirst) - [`exists`](#slonik-query-methods-exists) - [`copyFromBinary`](#slonik-query-methods-copyfrombinary) - [`many`](#slonik-query-methods-many) - [`manyFirst`](#slonik-query-methods-manyfirst) - [`maybeOne`](#slonik-query-methods-maybeone) - [`maybeOneFirst`](#slonik-query-methods-maybeonefirst) - [`one`](#slonik-query-methods-one) - [`oneFirst`](#slonik-query-methods-onefirst) - [`query`](#slonik-query-methods-query) - [`stream`](#slonik-query-methods-stream) - [`transaction`](#slonik-query-methods-transaction) + [Error handling](#slonik-error-handling) - [Original `node-postgres` error](#slonik-error-handling-original-node-postgres-error) - [Handling `BackendTerminatedError`](#slonik-error-handling-handling-backendterminatederror) - [Handling `CheckIntegrityConstraintViolationError`](#slonik-error-handling-handling-checkintegrityconstraintviolationerror) - [Handling `ConnectionError`](#slonik-error-handling-handling-connectionerror) - [Handling `DataIntegrityError`](#slonik-error-handling-handling-dataintegrityerror) - [Handling `ForeignKeyIntegrityConstraintViolationError`](#slonik-error-handling-handling-foreignkeyintegrityconstraintviolationerror) - [Handling `NotFoundError`](#slonik-error-handling-handling-notfounderror) - [Handling `NotNullIntegrityConstraintViolationError`](#slonik-error-handling-handling-notnullintegrityconstraintviolationerror) - [Handling `StatementCancelledError`](#slonik-error-handling-handling-statementcancellederror) - [Handling `StatementTimeoutError`](#slonik-error-handling-handling-statementtimeouterror) - [Handling `UniqueIntegrityConstraintViolationError`](#slonik-error-handling-handling-uniqueintegrityconstraintviolationerror) + [Types](#slonik-types) + [Debugging](#slonik-debugging) - [Logging](#slonik-debugging-logging) - [Capture stack trace](#slonik-debugging-capture-stack-trace) + [Syntax Highlighting](#slonik-syntax-highlighting) - [Atom Syntax Highlighting Plugin](#slonik-syntax-highlighting-atom-syntax-highlighting-plugin) - [VS Code Syntax Highlighting Extension](#slonik-syntax-highlighting-vs-code-syntax-highlighting-extension) About Slonik --- ### Battle-Tested Slonik began as a collection of utilities designed for working with [`node-postgres`](https://github.com/brianc/node-postgres). We continue to use `node-postgres` as it provides a robust foundation for interacting with PostgreSQL. 
However, what once was a collection of utilities has since grown into a framework that abstracts repeating code patterns, protects against unsafe connection handling and value interpolation, and provides a rich debugging experience. Slonik has been [battle-tested](https://medium.com/@gajus/lessons-learned-scaling-postgresql-database-to-1-2bn-records-month-edc5449b3067) with large data volumes and queries ranging from simple CRUD operations to data-warehousing needs. ### Origin of the name The name of the elephant depicted in the official PostgreSQL logo is Slonik. The name itself is derived from the Russian word for "little elephant". Read: [The History of Slonik, the PostgreSQL Elephant Logo](https://www.vertabelo.com/blog/notes-from-the-lab/the-history-of-slonik-the-postgresql-elephant-logo) ### Repeating code patterns and type safety Among the primary reasons for developing Slonik was the motivation to reduce repeating code patterns and add a level of type safety. This is primarily achieved through methods such as `one`, `many`, etc. But what is the issue? It is best illustrated with an example. Suppose the requirement is to write a method that retrieves a resource ID given values defining (what we assume to be) a unique constraint. If we did not have the aforementioned convenience methods available, then it would need to be written as: ``` import { sql } from 'slonik'; import type { DatabaseConnectionType } from 'slonik'; type DatabaseRecordIdType = number; const getFooIdByBar = async (connection: DatabaseConnectionType, bar: string): Promise<DatabaseRecordIdType> => { const fooResult = await connection.query(sql` SELECT id FROM foo WHERE bar = ${bar} `); if (fooResult.rowCount === 0) { throw new Error('Resource not found.'); } if (fooResult.rowCount > 1) { throw new Error('Data integrity constraint violation.'); } return fooResult[0].id; }; ``` The `oneFirst` method abstracts all of the above logic into: ``` const getFooIdByBar = (connection: DatabaseConnectionType, bar: string): Promise<DatabaseRecordIdType> => { return connection.oneFirst(sql` SELECT id FROM foo WHERE bar = ${bar} `); }; ``` `oneFirst` throws: * `NotFoundError` if query returns no rows * `DataIntegrityError` if query returns multiple rows * `DataIntegrityError` if query returns multiple columns This becomes particularly important when writing routines where multiple queries depend on the previous result. Using methods with inbuilt assertions ensures that in case of an error, the error points to the original source of the problem. In contrast, unless assertions for all possible outcomes are typed out as in the previous example, the unexpected result of the query will be fed to the next operation. If you are lucky, the next operation will simply break; if you are unlucky, you are risking data corruption and hard-to-locate bugs. Furthermore, using methods that guarantee the shape of the results allows us to leverage static type checking and catch some of the errors even before executing the code, e.g. ``` const fooId = await connection.many(sql` SELECT id FROM foo WHERE bar = ${bar} `); await connection.query(sql` DELETE FROM baz WHERE foo_id = ${fooId} `); ``` A static type check of the above example will produce a warning, as `fooId` is guaranteed to be an array while the binding of the last query expects a primitive value. ### Protecting against unsafe connection handling Slonik only allows checking out a connection for the duration of the promise routine supplied to the `pool#connect()` method.
The primary reason for implementing *only* this connection pooling method is that the alternative is inherently unsafe, e.g. ``` // Note: This example is using unsupported API. const main = async () => { const connection = await pool.connect(); await connection.query(sql`SELECT foo()`); await connection.release(); }; ``` In this example, if `SELECT foo()` produces an error, then the connection is never released, i.e. it is left hanging. A fix to the above is to ensure that `connection#release()` is always called, i.e. ``` // Note: This example is using unsupported API. const main = async () => { const connection = await pool.connect(); let lastExecutionResult; try { lastExecutionResult = await connection.query(sql`SELECT foo()`); } finally { await connection.release(); } return lastExecutionResult; }; ``` Slonik abstracts the latter pattern into the `pool#connect()` method. ``` const main = () => { return pool.connect((connection) => { return connection.query(sql`SELECT foo()`); }); }; ``` The connection is always released back to the pool after the promise produced by the function supplied to the `connect()` method is either resolved or rejected. ### Protecting against unsafe transaction handling Just like in the [unsafe connection handling](#protecting-against-unsafe-connection-handling) described above, Slonik only allows creating a transaction for the duration of the promise routine supplied to the `connection#transaction()` method. ``` connection.transaction(async (transactionConnection) => { await transactionConnection.query(sql`INSERT INTO foo (bar) VALUES ('baz')`); await transactionConnection.query(sql`INSERT INTO qux (quux) VALUES ('quuz')`); }); ``` This pattern ensures that the transaction is either committed or aborted the moment the promise is either resolved or rejected. ### Protecting against unsafe value interpolation [SQL injections](https://en.wikipedia.org/wiki/SQL_injection) are one of the most well-known attack vectors. Some of the [biggest data leaks](https://en.wikipedia.org/wiki/SQL_injection#Examples) were the consequence of improper user-input handling. In general, SQL injections are easily preventable by using parameterization and by restricting database permissions, e.g. ``` // Note: This example is using unsupported API. connection.query('SELECT $1', [ userInput ]); ``` In this example, the query text (`SELECT $1`) and parameters (the value of `userInput`) are passed to the PostgreSQL server where the parameters are safely substituted into the query. This is a safe way to execute a query using user input. The vulnerabilities appear when developers cut corners or when they do not know about parameterization, i.e. there is a risk that someone will instead write: ``` // Note: This example is using unsupported API. connection.query('SELECT \'' + userInput + '\''); ``` As evidenced by the history of data leaks, this happens more often than anyone would like to admit. This is an especially big risk in the Node.js community, where a predominant number of developers come from a frontend background and have not had training in working with RDBMSes. Therefore, one of the key selling points of Slonik is that it adds multiple layers of protection to prevent unsafe handling of user input. To begin with, Slonik does not allow running plain-text queries. ``` connection.query('SELECT 1'); ``` The above invocation would produce an error: > TypeError: Query must be constructed using `sql` tagged template literal.
This means that the only way to run a query is by constructing it using [`sql` tagged template literal](https://github.com/gajus/slonik#slonik-value-placeholders-tagged-template-literals), e.g. ``` connection.query(sql`SELECT 1`); ``` To add a parameter to the query, user must use [template literal placeholders](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#Description), e.g. ``` connection.query(sql`SELECT ${userInput}`); ``` Slonik takes over from here and constructs a query with value bindings, and sends the resulting query text and parameters to the PostgreSQL. As `sql` tagged template literal is the only way to execute the query, it adds a strong layer of protection against accidental unsafe user-input handling due to limited knowledge of the SQL client API. As Slonik restricts user's ability to generate and execute dynamic SQL, it provides helper functions used to generate fragments of the query and the corresponding value bindings, e.g. [`sql.identifier`](#sqlidentifier), [`sql.join`](#sqljoin) and [`sql.unnest`](#sqlunnest). These methods generate tokens that the query executor interprets to construct a safe query, e.g. ``` connection.query(sql` SELECT ${sql.identifier(['foo', 'a'])} FROM ( VALUES ( ${sql.join( [ sql.join(['a1', 'b1', 'c1'], sql`, `), sql.join(['a2', 'b2', 'c2'], sql`, `) ], sql`), (` )} ) ) foo(a, b, c) WHERE foo.b IN (${sql.join(['c1', 'a2'], sql`, `)}) `); ``` This (contrived) example generates a query equivalent to: ``` SELECT "foo"."a" FROM ( VALUES ($1, $2, $3), ($4, $5, $6) ) foo(a, b, c) WHERE foo.b IN ($7, $8) ``` That is executed with the parameters provided by the user. To sum up, Slonik is designed to prevent accidental creation of queries vulnerable to SQL injections. Documentation --- Usage --- ### Create connection Use `createPool` to create a connection pool, e.g. ``` import { createPool, } from 'slonik'; const pool = createPool('postgres://'); ``` Instance of Slonik connection pool can be then used to create a new connection, e.g. ``` pool.connect(async (connection) => { await connection.query(sql`SELECT 1`); }); ``` The connection will be kept alive until the promise resolves (the result of the method supplied to `connect()`). Refer to [query method](#slonik-query-methods) documentation to learn about the connection methods. If you do not require having a persistent connection to the same backend, then you can directly use `pool` to run queries, e.g. ``` pool.query(sql`SELECT 1`); ``` Beware that in the latter example, the connection picked to execute the query is a random connection from the connection pool, i.e. using the latter method (without explicit `connect()`) does not guarantee that multiple queries will refer to the same backend. ### End connection pool Use `pool.end()` to end idle connections and prevent creation of new connections. The result of `pool.end()` is a promise that is resolved when all connections are ended. ``` import { createPool, sql, } from 'slonik'; const pool = createPool('postgres://'); const main = async () => { await pool.query(sql` SELECT 1 `); await pool.end(); }; main(); ``` Note: `pool.end()` does not terminate active connections/ transactions. ### Describing the current state of the connection pool Use `pool.getPoolState()` to find out if pool is alive and how many connections are active and idle, and how many clients are waiting for a connection. 
``` import { createPool, sql, } from 'slonik'; const pool = createPool('postgres://'); const main = async () => { pool.getPoolState(); // { // activeConnectionCount: 0, // ended: false, // idleConnectionCount: 0, // waitingClientCount: 0, // } await pool.connect(() => { pool.getPoolState(); // { // activeConnectionCount: 1, // ended: false, // idleConnectionCount: 0, // waitingClientCount: 0, // } }); pool.getPoolState(); // { // activeConnectionCount: 0, // ended: false, // idleConnectionCount: 1, // waitingClientCount: 0, // } await pool.end(); pool.getPoolState(); // { // activeConnectionCount: 0, // ended: true, // idleConnectionCount: 0, // waitingClientCount: 0, // } }; main(); ``` Note: `pool.end()` does not terminate active connections/ transactions. ### API ``` /** * @param connectionUri PostgreSQL [Connection URI](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING). */ createPool( connectionUri: string, clientConfiguration: ClientConfigurationType ): DatabasePoolType; /** * @property captureStackTrace Dictates whether to capture stack trace before executing query. Middlewares access stack trace through query execution context. (Default: true) * @property connectionRetryLimit Number of times to retry establishing a new connection. (Default: 3) * @property connectionTimeout Timeout (in milliseconds) after which an error is raised if connection cannot cannot be established. (Default: 5000) * @property idleInTransactionSessionTimeout Timeout (in milliseconds) after which idle clients are closed. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 60000) * @property idleTimeout Timeout (in milliseconds) after which idle clients are closed. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 5000) * @property interceptors An array of [Slonik interceptors](https://github.com/gajus/slonik#slonik-interceptors). * @property maximumPoolSize Do not allow more than this many connections. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 10) * @property preferNativeBindings Uses libpq bindings when `pg-native` module is installed. (Default: true) * @property statementTimeout Timeout (in milliseconds) after which database is instructed to abort the query. Use 'DISABLE_TIMEOUT' constant to disable the timeout. (Default: 60000) * @property transactionRetryLimit Number of times a transaction failing with Transaction Rollback class error is retried. (Default: 5) * @property typeParsers An array of [Slonik type parsers](https://github.com/gajus/slonik#slonik-type-parsers). */ type ClientConfigurationInputType = {| +captureStackTrace?: boolean, +connectionRetryLimit?: number, +connectionTimeout?: number | 'DISABLE_TIMEOUT', +idleInTransactionSessionTimeout?: number | 'DISABLE_TIMEOUT', +idleTimeout?: number | 'DISABLE_TIMEOUT', +interceptors?: $ReadOnlyArray<InterceptorType>, +maximumPoolSize?: number, +preferNativeBindings?: boolean, +statementTimeout?: number | 'DISABLE_TIMEOUT', +transactionRetryLimit?: number, +typeParsers?: $ReadOnlyArray<TypeParserType>, |}; ``` Example: ``` import { createPool } from 'slonik'; const pool = createPool('postgres://'); await pool.query(sql`SELECT 1`); ``` ### Default configuration #### Default interceptors None. Check out [`slonik-interceptor-preset`](https://github.com/gajus/slonik-interceptor-preset) for an opinionated collection of interceptors. 
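As a point of reference for the options above, here is a minimal sketch of overriding a few defaults when creating a pool. The option names come from the `ClientConfigurationInputType` shown in the API section; the specific values are illustrative assumptions, not recommendations.

```
import {
  createPool,
} from 'slonik';

// A minimal sketch: override a few of the documented configuration options.
// The values below are illustrative assumptions, not recommendations.
const pool = createPool('postgres://', {
  // Keep the default (empty) interceptor list explicit.
  interceptors: [],
  // Allow up to 20 concurrent connections instead of the default 10.
  maximumPoolSize: 20,
  // Instruct the database to abort statements that run longer than 30 seconds.
  statementTimeout: 30 * 1000,
});
```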
#### Default type parsers These type parsers are enabled by default: | Type name | Implementation | | --- | --- | | `date` | Produces a literal date as a string (format: YYYY-MM-DD). | | `int8` | Produces an integer. | | `interval` | Produces interval in seconds (integer). | | `numeric` | Produces a float. | | `timestamp` | Produces a unix timestamp (in milliseconds). | | `timestamptz` | Produces a unix timestamp (in milliseconds). | To disable the default type parsers, pass an empty array, e.g. ``` createPool('postgres://', { typeParsers: [] }); ``` You can create the default type parser collection using `createTypeParserPreset`, e.g. ``` import { createTypeParserPreset } from 'slonik'; createPool('postgres://', { typeParsers: [ ...createTypeParserPreset() ] }); ``` #### Default timeouts There are 4 types of configurable timeouts: | Configuration | Description | Default | | --- | --- | --- | | `connectionTimeout` | Timeout (in milliseconds) after which an error is raised if a connection cannot be established. | 5000 | | `idleInTransactionSessionTimeout` | Timeout (in milliseconds) after which idle clients are closed. Use 'DISABLE_TIMEOUT' constant to disable the timeout. | 60000 | | `idleTimeout` | Timeout (in milliseconds) after which idle clients are closed. Use 'DISABLE_TIMEOUT' constant to disable the timeout. | 5000 | | `statementTimeout` | Timeout (in milliseconds) after which the database is instructed to abort the query. Use 'DISABLE_TIMEOUT' constant to disable the timeout. | 60000 | Slonik sets aggressive timeouts by default. These timeouts are designed to provide a safe interface to the database. These timeouts might not work for all programs. If your program has long-running statements, consider adjusting timeouts just for those statements instead of changing the defaults. ### Using native libpq bindings In order to use native [libpq](https://www.npmjs.com/package/libpq) PostgreSQL bindings, install `pg-native`. ``` $ npm install pg-native ``` By default, Slonik uses native bindings when `pg-native` is installed. To use JavaScript bindings when `pg-native` is installed, configure `preferNativeBindings: false`. ### Checking out a client from the connection pool Slonik only allows checking out a connection for the duration of the promise routine supplied to the `pool#connect()` method. ``` import { createPool, } from 'slonik'; const pool = createPool('postgres://localhost'); const result = await pool.connect(async (connection) => { await connection.query(sql`SELECT 1`); await connection.query(sql`SELECT 2`); return 'foo'; }); result; // 'foo' ``` The connection is released back to the pool after the promise produced by the function supplied to the `connect()` method is either resolved or rejected. Read: [Protecting against unsafe connection handling](#protecting-against-unsafe-connection-handling) ### Mocking Slonik Slonik provides a way to mock queries against the database. * Use `createMockPool` to create a mock connection. * Use `createMockQueryResult` to create a mock query result.
``` import { createMockPool, createMockQueryResult, } from 'slonik'; type OverridesType = {| +query: (sql: string, values: $ReadOnlyArray<PrimitiveValueExpressionType>,) => Promise<QueryResultType<QueryResultRowType>>, |}; createMockPool(overrides: OverridesType): DatabasePoolType; createMockQueryResult(rows: $ReadOnlyArray<QueryResultRowType>): QueryResultType<QueryResultRowType>; ``` Example: ``` import { createMockPool, createMockQueryResult, } from 'slonik'; const pool = createMockPool({ query: async () => { return createMockQueryResult([ { foo: 'bar', }, ]); }, }); await pool.connect(async (connection) => { const results = await connection.query(sql` SELECT ${'foo'} `); }); ``` How are they different? --- ### `pg` vs `slonik` [`pg`](https://github.com/brianc/node-postgres) is built intentionally to provide unopinionated, minimal abstraction and encourages use of other modules to implement convenience methods. Slonik is built on top of `pg` and it provides convenience methods for [building queries](#value-placeholders) and [querying data](#slonik-query-methods). Work on `pg` began on [Tue Sep 28 22:09:21 2010](https://github.com/brianc/node-postgres/commit/cf637b08b79ef93d9a8b9dd2d25858aa7e9f9bdc). It is authored by [<NAME>](https://github.com/brianc). ### `pg-promise` vs `slonik` As the name suggests, [`pg-promise`](https://github.com/vitaly-t/pg-promise) was originally built to enable use of `pg` module with promises (at the time, `pg` only supported Continuation Passing Style (CPS), i.e. callbacks). Since then `pg-promise` added features for connection/ transaction handling, a powerful query-formatting engine and a declarative approach to handling query results. The primary difference between Slonik and `pg-promise`: * Slonik does not allow to execute raw text queries. Slonik queries can only be constructed using [`sql` tagged template literals](#slonik-value-placeholders-tagged-template-literals). This design [protects against unsafe value interpolation](#protecting-against-unsafe-value-interpolation). * Slonik implements [interceptor API](#slonik-interceptors) (middleware). Middlewares allow to modify connection handling, override queries and modify the query results. Example Slonik interceptors include [field name transformation](https://github.com/gajus/slonik-interceptor-field-name-transformation), [query normalization](https://github.com/gajus/slonik-interceptor-query-normalisation) and [query benchmarking](https://github.com/gajus/slonik-interceptor-query-benchmarking). Note: Author of `pg-promise` has [objected to the above claims](https://github.com/gajus/slonik/issues/122). I have removed a difference that was clearly wrong. I maintain that the above two differences remain valid differences: even though `pg-promise` might have substitute functionality for variable interpolation and interceptors, it implements them in a way that does not provide the same benefits that Slonik provides, namely: guaranteed security and support for extending library functionality using multiple plugins. Other differences are primarily in how the equivalent features are implemented, e.g. | `pg-promise` | Slonik | | --- | --- | | [Custom type formatting](https://github.com/vitaly-t/pg-promise#custom-type-formatting). | Not available in Slonik. The current proposal is to create an interceptor that would have access to the [query fragment constructor](https://github.com/gajus/slonik/issues/21). 
| | [formatting filters](https://github.com/vitaly-t/pg-promise#nested-named-parameters) | Slonik uses tagged template [value expressions](https://github.com/gajus/slonik#slonik-value-placeholders) to construct query fragments and bind parameter values. | | [Query files](https://github.com/vitaly-t/pg-promise#query-files). | Use [`slonik-sql-tag-raw`](https://github.com/gajus/slonik-sql-tag-raw). | | [Tasks](https://github.com/vitaly-t/pg-promise#tasks). | Use [`pool.connect`](https://github.com/gajus/slonik#slonik-usage-create-connection). | | Configurable transactions. | Not available in Slonik. Track [this issue](https://github.com/gajus/slonik/issues/30). | | Events. | Use [interceptors](https://github.com/gajus/slonik#slonik-interceptors). | When weighing which abstraction to use, it would be unfair not to consider that `pg-promise` is a mature project with dozens of contributors. Meanwhile, Slonik is a young project (started in March 2017) that until recently was developed without active community input. However, if you do support the unique features that Slonik adds, the opinionated API design, and are not afraid of adopting a technology in its early days, then I warmly invite you to adopt Slonik and become a contributor to what I intend to make the standard PostgreSQL client in the Node.js community. Work on `pg-promise` began [Wed Mar 4 02:00:34 2015](https://github.com/vitaly-t/pg-promise/commit/78fb80f638e7f28b301f75576701536d6b638f31). It is authored by [<NAME>](https://github.com/vitaly-t). Type parsers --- Type parsers describe how to parse PostgreSQL types. ``` type TypeParserType = {| +name: string, +parse: (value: string) => * |}; ``` Example: ``` { name: 'int8', parse: (value) => { return parseInt(value, 10); } } ``` Note: Unlike [`pg-types`](https://github.com/brianc/node-pg-types), which uses OIDs to identify types, Slonik identifies types using their names. Use this query to find type names: ``` SELECT typname FROM pg_type ORDER BY typname ASC ``` Type parsers are configured using [`typeParsers` client configuration](#slonik-usage-api). Read: [Default type parsers](#default-type-parsers). ### Built-in type parsers | Type name | Implementation | Factory function name | | --- | --- | --- | | `date` | Produces a literal date as a string (format: YYYY-MM-DD). | `createDateTypeParser` | | `int8` | Produces an integer. | `createBigintTypeParser` | | `interval` | Produces interval in seconds (integer). | `createIntervalTypeParser` | | `numeric` | Produces a float. | `createNumericTypeParser` | | `timestamp` | Produces a unix timestamp (in milliseconds). | `createTimestampTypeParser` | | `timestamptz` | Produces a unix timestamp (in milliseconds). | `createTimestampWithTimeZoneTypeParser` | Built-in type parsers can be created using the exported factory functions, e.g. ``` import { createTimestampTypeParser } from 'slonik'; createTimestampTypeParser(); // { // name: 'timestamp', // parse: (value) => { // return value === null ? value : Date.parse(value); // } // } ``` Interceptors --- Functionality can be added to the Slonik client by adding interceptors (middleware). Interceptors are configured using [client configuration](#api), e.g. ``` import { createPool } from 'slonik'; const interceptors = []; const connection = createPool('postgres://', { interceptors }); ``` Interceptors are executed in the order they are added. Read: [Default interceptors](#default-interceptors).
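For illustration, a custom interceptor might look like the following sketch. It assumes the interceptor shape documented under Interceptor methods below (the `afterQueryExecution` hook receives the query and its result and returns the result); the logging itself is only an example.

```
import {
  createPool,
} from 'slonik';

// A sketch of a trivial query-logging interceptor, assuming the
// `afterQueryExecution` hook documented in the next section.
const queryLoggingInterceptor = {
  afterQueryExecution: (queryContext, query, result) => {
    // `query.sql` and `query.values` mirror the query object shape
    // shown elsewhere in this document.
    console.log(query.sql, query.values);

    // Return the result unchanged so the query flow continues as usual.
    return result;
  },
};

const pool = createPool('postgres://', {
  interceptors: [
    queryLoggingInterceptor,
  ],
});
```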
### Interceptor methods Interceptor is an object that implements methods that can change the behaviour of the database client at different stages of the connection life-cycle ``` type InterceptorType = {| +afterPoolConnection?: ( connectionContext: ConnectionContextType, connection: DatabasePoolConnectionType ) => MaybePromiseType<null>, +afterQueryExecution?: ( queryContext: QueryContextType, query: QueryType, result: QueryResultType<QueryResultRowType> ) => MaybePromiseType<QueryResultType<QueryResultRowType>>, +beforePoolConnection?: ( connectionContext: ConnectionContextType ) => MaybePromiseType<?DatabasePoolType>, +beforePoolConnectionRelease?: ( connectionContext: ConnectionContextType, connection: DatabasePoolConnectionType ) => MaybePromiseType<null>, +beforeQueryExecution?: ( queryContext: QueryContextType, query: QueryType ) => MaybePromiseType<QueryResultType<QueryResultRowType>> | MaybePromiseType<null>, +beforeQueryResult?: ( queryContext: QueryContextType, query: QueryType, result: QueryResultType<QueryResultRowType> ) => MaybePromiseType<null>, +beforeTransformQuery?: ( queryContext: QueryContextType, query: QueryType ) => Promise<null>, +queryExecutionError?: ( queryContext: QueryContextType, query: QueryType, error: SlonikError ) => MaybePromiseType<null>, +transformQuery?: ( queryContext: QueryContextType, query: QueryType ) => QueryType, +transformRow?: ( queryContext: QueryContextType, query: QueryType, row: QueryResultRowType, fields: $ReadOnlyArray<FieldType> ) => QueryResultRowType |}; ``` #### `afterPoolConnection` Executed after a connection is acquired from the connection pool (or a new connection is created), e.g. ``` const pool = createPool('postgres://'); // Interceptor is executed here. ↓ pool.connect(); ``` #### `afterQueryExecution` Executed after query has been executed and before rows were transformed using `transformRow`. Note: When query is executed using `stream`, then `afterQuery` is called with empty result set. #### `beforeQueryExecution` This function can optionally return a direct result of the query which will cause the actual query never to be executed. #### `beforeQueryResult` Executed just before the result is returned to the client. Use this method to capture the result that will be returned to the client. #### `beforeTransformQuery` Executed before `transformQuery`. Use this interceptor to capture the original query (e.g. for logging purposes). #### `beforePoolConnectionRelease` Executed before connection is released back to the connection pool, e.g. ``` const pool = await createPool('postgres://'); pool.connect(async () => { await 1; // Interceptor is executed here. ↓ }); ``` #### `queryExecutionError` Executed if query execution produces an error. Use `queryExecutionError` to log and/ or re-throw another error. #### `transformQuery` Executed before `beforeQueryExecution`. Transforms query. #### `transformRow` Executed for each row. Transforms row. Use `transformRow` to modify the query result. ### Community interceptors | Name | Description | | --- | --- | | [`slonik-interceptor-field-name-transformation`](https://github.com/gajus/slonik-interceptor-field-name-transformation) | Transforms Slonik query result field names. | | [`slonik-interceptor-query-benchmarking`](https://github.com/gajus/slonik-interceptor-query-benchmarking) | Benchmarks Slonik queries. | | [`slonik-interceptor-query-cache`](https://github.com/gajus/slonik-interceptor-query-cache) | Caches Slonik queries. 
| | [`slonik-interceptor-query-logging`](https://github.com/gajus/slonik-interceptor-query-logging) | Logs Slonik queries. | | [`slonik-interceptor-query-normalisation`](https://github.com/gajus/slonik-interceptor-query-normalisation) | Normalises Slonik queries. | Check out [`slonik-interceptor-preset`](https://github.com/gajus/slonik-interceptor-preset) for an opinionated collection of interceptors. Recipes --- ### Inserting large number of rows Use [`sql.unnest`](#sqlunnest) to create a set of rows using `unnest`. Using the `unnest` approach requires only one variable per column; values for each column are passed as an array, e.g. ``` await connection.query(sql` INSERT INTO foo (bar, baz, qux) SELECT * FROM ${sql.unnest( [ [1, 2, 3], [4, 5, 6] ], [ 'int4', 'int4', 'int4' ] )} `); ``` Produces: ``` { sql: 'INSERT INTO foo (bar, baz, qux) SELECT * FROM unnest($1::int4[], $2::int4[], $3::int4[])', values: [ [ 1, 4 ], [ 2, 5 ], [ 3, 6 ] ] } ``` Inserting data this way ensures that the query is stable and reduces the amount of time it takes to parse the query. ### Routing queries to different connections If a connection is initiated by a query (as opposed to one obtained explicitly using `pool#connect()`), then the `beforePoolConnection` interceptor can be used to change the pool that will be used to execute the query, e.g. ``` const slavePool = createPool('postgres://slave'); const masterPool = createPool('postgres://master', { interceptors: [ { beforePoolConnection: (connectionContext, pool) => { if (connectionContext.query && connectionContext.query.sql.includes('SELECT')) { return slavePool; } return pool; } } ] }); // This query will use `postgres://slave` connection. masterPool.query(sql`SELECT 1`); // This query will use `postgres://master` connection. masterPool.query(sql`UPDATE 1`); ``` `sql` tag --- The `sql` tag serves two purposes: * It is used to construct queries with bound parameter values (see [Value placeholders](#value-placeholders)). * It is used to generate dynamic query fragments (see [Query building](#query-building)). The `sql` tag can be imported from the Slonik package: ``` import { sql } from 'slonik'; ``` Sometimes it may be desirable to construct a custom instance of the `sql` tag. In those cases, you can use the `createSqlTag` factory, e.g. ``` import { createSqlTag } from 'slonik'; /** * @typedef SqlTagConfiguration */ /** * @param {SqlTagConfiguration} configuration */ const sql = createSqlTag(configuration); ``` Value placeholders --- ### Tagged template literals Slonik query methods can only be executed using the `sql` [tagged template literal](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Template_literals#Tagged_template_literals), e.g. ``` import { sql } from 'slonik' connection.query(sql` SELECT 1 FROM foo WHERE bar = ${'baz'} `); ``` The above is equivalent to evaluating: ``` SELECT 1 FROM foo WHERE bar = $1 ``` query with the 'baz' value binding. ### Manually constructing the query Manually constructing queries is not allowed. There is an internal mechanism that checks to see if the query was created using the `sql` tagged template literal, i.e. ``` const query = { sql: 'SELECT 1 FROM foo WHERE bar = $1', type: 'SQL', values: [ 'baz' ] }; connection.query(query); ``` Will result in an error: > Query must be constructed using `sql` tagged template literal. This is a security measure designed to prevent unsafe query execution.
Furthermore, a query object constructed using `sql` tagged template literal is [frozen](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze) to prevent further manipulation. ### Nesting `sql` `sql` tagged template literals can be nested, e.g. ``` const query0 = sql`SELECT ${'foo'} FROM bar`; const query1 = sql`SELECT ${'baz'} FROM (${query0})`; ``` Produces: ``` { sql: 'SELECT $1 FROM (SELECT $2 FROM bar)', values: [ 'baz', 'foo' ] } ``` Query building --- Queries are built using methods of the `sql` tagged template literal. If this is your first time using Slonik, read [Dynamically generating SQL queries using Node.js](https://dev.to/gajus/dynamically-generating-sql-queries-using-node-js-2c1g). ### `sql.array` ``` ( values: $ReadOnlyArray<PrimitiveValueExpressionType>, memberType: TypeNameIdentifierType | SqlTokenType ) => ArraySqlTokenType; ``` Creates an array value binding, e.g. ``` await connection.query(sql` SELECT (${sql.array([1, 2, 3], 'int4')}) `); ``` Produces: ``` { sql: 'SELECT $1::"int4"[]', values: [ [ 1, 2, 3 ] ] } ``` #### `sql.array` `memberType` If `memberType` is a string (`TypeNameIdentifierType`), then it is treated as a type name identifier and will be quoted using double quotes, i.e. `sql.array([1, 2, 3], 'int4')` is equivalent to `$1::"int4"[]`. The implication is that keywords that are often used interchangeably with type names are not going to work, e.g. [`int4`](https://github.com/postgres/postgres/blob/69edf4f8802247209e77f69e089799b3d83c13a4/src/include/catalog/pg_type.dat#L74-L78) is a type name identifier and will work. However, [`int`](https://github.com/postgres/postgres/blob/69edf4f8802247209e77f69e089799b3d83c13a4/src/include/parser/kwlist.h#L213) is a keyword and will not work. You can either use type name identifiers or you can construct custom member using `sql` tag, e.g. ``` await connection.query(sql` SELECT (${sql.array([1, 2, 3], sql`int[]`)}) `); ``` Produces: ``` { sql: 'SELECT $1::int[]', values: [ [ 1, 2, 3 ] ] } ``` #### `sql.array` vs `sql.join` Unlike `sql.join`, `sql.array` generates a stable query of a predictable length, i.e. regardless of the number of values in the array, the generated query remains the same: * Having a stable query enables [`pg_stat_statements`](https://www.postgresql.org/docs/current/pgstatstatements.html) to aggregate all query execution statistics. * Keeping the query length short reduces query parsing time. Example: ``` sql`SELECT id FROM foo WHERE id IN (${sql.join([1, 2, 3], sql`, `)})`; sql`SELECT id FROM foo WHERE id NOT IN (${sql.join([1, 2, 3], sql`, `)})`; ``` Is equivalent to: ``` sql`SELECT id FROM foo WHERE id = ANY(${sql.array([1, 2, 3], 'int4')})`; sql`SELECT id FROM foo WHERE id != ALL(${sql.array([1, 2, 3], 'int4')})`; ``` Furthermore, unlike `sql.join`, `sql.array` can be used with an empty array of values. In short, `sql.array` should be preferred over `sql.join` when possible. ### `sql.binary` ``` ( data: Buffer ) => BinarySqlTokenType; ``` Binds binary ([`bytea`](https://www.postgresql.org/docs/current/datatype-binary.html)) data, e.g. 
``` await connection.query(sql` SELECT ${sql.binary(Buffer.from('foo'))} `); ``` Produces: ``` { sql: 'SELECT $1', values: [ Buffer.from('foo') ] } ``` ### `sql.identifier` ``` ( names: $ReadOnlyArray<string> ) => IdentifierSqlTokenType; ``` [Delimited identifiers](https://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS) are created by enclosing an arbitrary sequence of characters in double-quotes ("). To create a delimited identifier, create an `sql` tag function placeholder value using `sql.identifier`, e.g. ``` sql` SELECT 1 FROM ${sql.identifier(['bar', 'baz'])} `; ``` Produces: ``` { sql: 'SELECT 1 FROM "bar"."baz"', values: [] } ``` ### `sql.json` ``` ( value: SerializableValueType ) => JsonSqlTokenType; ``` Serializes the value and binds it as a JSON string literal, e.g. ``` await connection.query(sql` SELECT (${sql.json([1, 2, 3])}) `); ``` Produces: ``` { sql: 'SELECT $1', values: [ '[1,2,3]' ] } ``` #### Difference from `JSON.stringify` | Input | `sql.json` | `JSON.stringify` | | --- | --- | --- | | `undefined` | Throws `InvalidInputError` error. | `undefined` | | `null` | `null` | `"null"` (string literal) | ### `sql.join` ``` ( members: $ReadOnlyArray<SqlTokenType>, glue: SqlTokenType ) => ListSqlTokenType; ``` Concatenates SQL expressions using the `glue` separator, e.g. ``` await connection.query(sql` SELECT ${sql.join([1, 2, 3], sql`, `)} `); ``` Produces: ``` { sql: 'SELECT $1, $2, $3', values: [ 1, 2, 3 ] } ``` `sql.join` is the primary building block for most of the SQL, e.g. Boolean expressions: ``` sql` SELECT ${sql.join([1, 2], sql` AND `)} ` // SELECT $1 AND $2 ``` Tuple: ``` sql` SELECT (${sql.join([1, 2], sql`, `)}) ` // SELECT ($1, $2) ``` Tuple list: ``` sql` SELECT ${sql.join( [ sql`(${sql.join([1, 2], sql`, `)})`, sql`(${sql.join([3, 4], sql`, `)})`, ], sql`, ` )} ` // SELECT ($1, $2), ($3, $4) ``` ### `sql.unnest` ``` ( tuples: $ReadOnlyArray<$ReadOnlyArray<PrimitiveValueExpressionType>>, columnTypes: $ReadOnlyArray<string> ): UnnestSqlTokenType; ``` Creates an `unnest` expression, e.g. ``` await connection.query(sql` SELECT bar, baz FROM ${sql.unnest( [ [1, 'foo'], [2, 'bar'] ], [ 'int4', 'text' ] )} AS foo(bar, baz) `); ``` Produces: ``` { sql: 'SELECT bar, baz FROM unnest($1::int4[], $2::text[]) AS foo(bar, baz)', values: [ [ 1, 2 ], [ 'foo', 'bar' ] ] } ``` Query methods --- ### `any` Returns result rows. Example: ``` const rows = await connection.any(sql`SELECT foo`); ``` `#any` is similar to `#query` except that it returns rows without field information. ### `anyFirst` Returns the value of the first column of every row in the result set. * Throws `DataIntegrityError` if query returns multiple columns. Example: ``` const fooValues = await connection.anyFirst(sql`SELECT foo`); ``` ### `exists` Returns a boolean value indicating whether the query produces results. The query that is passed to this function is wrapped in `SELECT exists()` prior to being executed, i.e. ``` pool.exists(sql` SELECT LIMIT 1 `) ``` is equivalent to: ``` pool.oneFirst(sql` SELECT exists( SELECT LIMIT 1 ) `) ``` ### `copyFromBinary` ``` ( streamQuery: TaggedTemplateLiteralInvocationType, tupleList: $ReadOnlyArray<$ReadOnlyArray<any>>, columnTypes: $ReadOnlyArray<TypeNameIdentifierType> ) => Promise<null>; ``` Copies from a binary stream. The binary stream is constructed using user-supplied `tupleList` and `columnTypes` values.
Example: ``` const tupleList = [ [ 1, 'baz' ], [ 2, 'baz' ] ]; const columnTypes = [ 'int4', 'text' ]; await connection.copyFromBinary( sql` COPY foo ( id, baz ) FROM STDIN BINARY `, tupleList, columnTypes ); ``` #### Limitations * Tuples cannot contain `NULL` values. #### Implementation notes The `copyFromBinary` implementation is designed to minimize the query execution time at the cost of increased script memory usage and execution time. This is achieved by separating data encoding from feeding data to PostgreSQL, i.e. all data passed to `copyFromBinary` is first encoded and then fed to PostgreSQL (contrast this to using a stream with encoding transformation to feed data to PostgreSQL). #### Related documentation * [`COPY` documentation](https://www.postgresql.org/docs/current/sql-copy.html) ### `many` Returns result rows. * Throws `NotFoundError` if query returns no rows. Example: ``` const rows = await connection.many(sql`SELECT foo`); ``` ### `manyFirst` Returns the value of the first column of every row in the result set. * Throws `NotFoundError` if query returns no rows. * Throws `DataIntegrityError` if query returns multiple columns. Example: ``` const fooValues = await connection.manyFirst(sql`SELECT foo`); ``` ### `maybeOne` Selects the first row from the result. * Returns `null` if the row is not found. * Throws `DataIntegrityError` if query returns multiple rows. Example: ``` const row = await connection.maybeOne(sql`SELECT foo`); // row.foo is the result of the `foo` column value of the first row. ``` ### `maybeOneFirst` Returns the value of the first column from the first row. * Returns `null` if the row is not found. * Throws `DataIntegrityError` if query returns multiple rows. * Throws `DataIntegrityError` if query returns multiple columns. Example: ``` const foo = await connection.maybeOneFirst(sql`SELECT foo`); // foo is the result of the `foo` column value of the first row. ``` ### `one` Selects the first row from the result. * Throws `NotFoundError` if query returns no rows. * Throws `DataIntegrityError` if query returns multiple rows. Example: ``` const row = await connection.one(sql`SELECT foo`); // row.foo is the result of the `foo` column value of the first row. ``` > Note: > I've been asked "What makes this different from [knex.js](http://knexjs.org/) `knex('foo').limit(1)`?". > `knex('foo').limit(1)` simply generates "SELECT * FROM foo LIMIT 1" query. > `knex` is a query builder; it does not assert the value of the result. > Slonik `#one` adds assertions about the result of the query. ### `oneFirst` Returns the value of the first column from the first row. * Throws `NotFoundError` if query returns no rows. * Throws `DataIntegrityError` if query returns multiple rows. * Throws `DataIntegrityError` if query returns multiple columns. Example: ``` const foo = await connection.oneFirst(sql`SELECT foo`); // foo is the result of the `foo` column value of the first row. ``` ### `query` The API and the result shape are equivalent to [`pg#query`](https://github.com/brianc/node-postgres). Example: ``` await connection.query(sql`SELECT foo`); // { // command: 'SELECT', // fields: [], // notices: [], // rowCount: 1, // rows: [ // { // foo: 'bar' // } // ] // } ``` ### `stream` Streams query results. Example: ``` await connection.stream(sql`SELECT foo`, (stream) => { stream.on('data', (datum) => { datum; // { // fields: [ // { // name: 'foo', // dataTypeId: 23, // } // ], // row: { // foo: 'bar' // } // } }); }); ``` Note: Implemented using [`pg-query-stream`](https://github.com/brianc/node-pg-query-stream).
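Before moving on to transactions, here is a short sketch that contrasts the row-assertion methods described above against the same query. The `users(id, email)` table and the surrounding helper are assumptions used only for illustration.

```
import {
  sql,
} from 'slonik';

// Hypothetical `users(id, email)` table, used only for illustration.
const findUser = async (connection, email) => {
  // `maybeOne`: returns null when no row matches,
  // throws DataIntegrityError when multiple rows match.
  const user = await connection.maybeOne(sql`
    SELECT id, email FROM users WHERE email = ${email}
  `);

  // `oneFirst`: throws NotFoundError when no row matches,
  // DataIntegrityError on multiple rows or multiple columns,
  // and otherwise unwraps the single column of the single row.
  const userId = await connection.oneFirst(sql`
    SELECT id FROM users WHERE email = ${email}
  `);

  return { user, userId };
};
```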
### `transaction` The `transaction` method is used to wrap execution of queries in `START TRANSACTION` and `COMMIT` or `ROLLBACK`. `COMMIT` is called if the transaction handler returns a promise that resolves; `ROLLBACK` is called otherwise. The `transaction` method can be used together with the `createPool` method. When used to create a transaction from an instance of a pool, a new connection is allocated for the duration of the transaction. ``` const result = await connection.transaction(async (transactionConnection) => { await transactionConnection.query(sql`INSERT INTO foo (bar) VALUES ('baz')`); await transactionConnection.query(sql`INSERT INTO qux (quux) VALUES ('corge')`); return 'FOO'; }); result === 'FOO'; ``` #### Transaction nesting Slonik uses [`SAVEPOINT`](https://www.postgresql.org/docs/current/sql-savepoint.html) to automatically nest transactions, e.g. ``` await connection.transaction(async (t1) => { await t1.query(sql`INSERT INTO foo (bar) VALUES ('baz')`); return t1.transaction((t2) => { return t2.query(sql`INSERT INTO qux (quux) VALUES ('corge')`); }); }); ``` is equivalent to: ``` START TRANSACTION; INSERT INTO foo (bar) VALUES ('baz'); SAVEPOINT slonik_savepoint_1; INSERT INTO qux (quux) VALUES ('corge'); COMMIT; ``` Slonik automatically rolls back to the last savepoint if a query belonging to a transaction results in an error, e.g. ``` await connection.transaction(async (t1) => { await t1.query(sql`INSERT INTO foo (bar) VALUES ('baz')`); try { await t1.transaction(async (t2) => { await t2.query(sql`INSERT INTO qux (quux) VALUES ('corge')`); return Promise.reject(new Error('foo')); }); } catch (error) { } }); ``` is equivalent to: ``` START TRANSACTION; INSERT INTO foo (bar) VALUES ('baz'); SAVEPOINT slonik_savepoint_1; INSERT INTO qux (quux) VALUES ('corge'); ROLLBACK TO SAVEPOINT slonik_savepoint_1; COMMIT; ``` If the error is unhandled, then the entire transaction is rolled back, e.g. ``` await connection.transaction(async (t1) => { await t1.query(sql`INSERT INTO foo (bar) VALUES ('baz')`); await t1.transaction(async (t2) => { await t2.query(sql`INSERT INTO qux (quux) VALUES ('corge')`); await t1.transaction(async (t3) => { await t3.query(sql`INSERT INTO uier (grault) VALUES ('garply')`); return Promise.reject(new Error('foo')); }); }); }); ``` is equivalent to: ``` START TRANSACTION; INSERT INTO foo (bar) VALUES ('baz'); SAVEPOINT slonik_savepoint_1; INSERT INTO qux (quux) VALUES ('corge'); SAVEPOINT slonik_savepoint_2; INSERT INTO uier (grault) VALUES ('garply'); ROLLBACK TO SAVEPOINT slonik_savepoint_2; ROLLBACK TO SAVEPOINT slonik_savepoint_1; ROLLBACK; ``` #### Transaction retrying Transactions that fail with [Transaction Rollback](https://www.postgresql.org/docs/current/errcodes-appendix.html) class errors are automatically retried. A failing transaction will be rolled back and all queries up to the failing query will be replayed. How many times a transaction is retried is controlled using the `transactionRetryLimit` configuration (default: 5). Error handling --- All Slonik errors extend from `SlonikError`, i.e. you can catch Slonik-specific errors using the following logic. ``` import { SlonikError } from 'slonik'; try { await query(); } catch (error) { if (error instanceof SlonikError) { // This error is thrown by Slonik. } } ``` ### Original `node-postgres` error When an error originates from `node-postgres`, the original error is available under the `originalError` property. This property is exposed for debugging purposes only. Do not use it for conditional checks – it can change.
If you need to extract metadata about a specific type of error (e.g. constraint violation name), raise a GitHub issue describing your use case. ### Handling `BackendTerminatedError` `BackendTerminatedError` is thrown when the backend is terminated by the user, i.e. [`pg_terminate_backend`](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL). `BackendTerminatedError` must be handled at the connection level, i.e. ``` await pool.connect(async (connection0) => { try { await pool.connect(async (connection1) => { const backendProcessId = await connection1.oneFirst(sql`SELECT pg_backend_pid()`); setTimeout(() => { connection0.query(sql`SELECT pg_terminate_backend(${backendProcessId})`) }, 2000); try { await connection1.query(sql`SELECT pg_sleep(30)`); } catch (error) { // This code will not be executed. } }); } catch (error) { if (error instanceof BackendTerminatedError) { // Handle backend termination. } else { throw error; } } }); ``` ### Handling `CheckIntegrityConstraintViolationError` `CheckIntegrityConstraintViolationError` is thrown when PostgreSQL responds with a [`check_violation`](https://www.postgresql.org/docs/9.4/static/errcodes-appendix.html) (`23514`) error. ### Handling `ConnectionError` `ConnectionError` is thrown when a connection cannot be established to the PostgreSQL server. ### Handling `DataIntegrityError` To handle the case where the data result does not match the expectations, catch the `DataIntegrityError` error. ``` import { DataIntegrityError } from 'slonik'; let row; try { row = await connection.one(sql`SELECT foo`); } catch (error) { if (error instanceof DataIntegrityError) { console.error('There is more than one row matching the select criteria.'); } else { throw error; } } ``` ### Handling `ForeignKeyIntegrityConstraintViolationError` `ForeignKeyIntegrityConstraintViolationError` is thrown when PostgreSQL responds with a [`foreign_key_violation`](https://www.postgresql.org/docs/9.4/static/errcodes-appendix.html) (`23503`) error. ### Handling `NotFoundError` To handle the case where the query returns less than one row, catch the `NotFoundError` error. ``` import { NotFoundError } from 'slonik'; let row; try { row = await connection.one(sql`SELECT foo`); } catch (error) { if (!(error instanceof NotFoundError)) { throw error; } } if (row) { // row.foo is the value of the `foo` column of the first row. } ``` ### Handling `NotNullIntegrityConstraintViolationError` `NotNullIntegrityConstraintViolationError` is thrown when PostgreSQL responds with a [`not_null_violation`](https://www.postgresql.org/docs/9.4/static/errcodes-appendix.html) (`23502`) error. ### Handling `StatementCancelledError` `StatementCancelledError` is thrown when a query is cancelled by the user (i.e. [`pg_cancel_backend`](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL)) or in case of a timeout. It should be safe to use the same connection if `StatementCancelledError` is handled, e.g. ``` await pool.connect(async (connection0) => { await pool.connect(async (connection1) => { const backendProcessId = await connection1.oneFirst(sql`SELECT pg_backend_pid()`); setTimeout(() => { connection0.query(sql`SELECT pg_cancel_backend(${backendProcessId})`) }, 2000); try { await connection1.query(sql`SELECT pg_sleep(30)`); } catch (error) { if (error instanceof StatementCancelledError) { // Safe to continue using the same connection.
} else { throw error; } } }); }); ``` ### Handling `StatementTimeoutError` `StatementTimeoutError` inherits from `StatementCancelledError` and is thrown only in case of a timeout. ### Handling `UniqueIntegrityConstraintViolationError` `UniqueIntegrityConstraintViolationError` is thrown when PostgreSQL responds with a [`unique_violation`](https://www.postgresql.org/docs/9.4/static/errcodes-appendix.html) (`23505`) error. Types --- This package uses [TypeScript](http://typescriptlang.org/) types. Refer to [`./src/types.js`](https://github.com/gajus/slonik/blob/HEAD/src/types.js). The public interface exports the following types: * `DatabaseConnectionType` * `DatabasePoolConnectionType` * `DatabaseSingleConnectionType` Use these types to annotate the `connection` instance in your code base, e.g. ``` import type { DatabaseConnectionType } from 'slonik'; export default async ( connection: DatabaseConnectionType, code: string ): Promise<number> => { const countryId = await connection.oneFirst(sql` SELECT id FROM country WHERE code = ${code} `); return countryId; }; ``` Debugging --- ### Logging Slonik uses [roarr](https://github.com/gajus/roarr) to log queries. To enable logging, define the `ROARR_LOG=true` environment variable. By default, Slonik logs only connection events, e.g. when a connection is created or acquired, and notices. Query-level logging can be added using the [`slonik-interceptor-query-logging`](https://github.com/gajus/slonik-interceptor-query-logging) interceptor. ### Capture stack trace Note: Requires [`slonik-interceptor-query-logging`](https://github.com/gajus/slonik-interceptor-query-logging). Enabling the `captureStackTrace` configuration will create a stack trace before invoking the query and include the stack trace in the logs, e.g.
``` {"context":{"package":"slonik","namespace":"slonik","logLevel":20,"executionTime":"357 ms","queryId":"01CV2V5S4H57KCYFFBS0BJ8K7E","rowCount":1,"sql":"SELECT schedule_cinema_data_task();","stackTrace":["/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:162:28","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:314:12","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:361:20","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist/utilities:17:13","/Users/gajus/Documents/dev/applaudience/data-management-program/src/bin/commands/do-cinema-data-tasks.js:59:21","/Users/gajus/Documents/dev/applaudience/data-management-program/src/bin/commands/do-cinema-data-tasks.js:590:45","internal/process/next_tick.js:68:7"],"values":[]},"message":"query","sequence":4,"time":1540915127833,"version":"1.0.0"} {"context":{"package":"slonik","namespace":"slonik","logLevel":20,"executionTime":"66 ms","queryId":"01CV2V5SGS0WHJX4GJN09Z3MTB","rowCount":1,"sql":"SELECT cinema_id \"cinemaId\", target_data \"targetData\" FROM cinema_data_task WHERE id = ?","stackTrace":["/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:162:28","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist:285:12","/Users/gajus/Documents/dev/applaudience/data-management-program/node_modules/slonik/dist/utilities:17:13","/Users/gajus/Documents/dev/applaudience/data-management-program/src/bin/commands/do-cinema-data-tasks.js:603:26","internal/process/next_tick.js:68:7"],"values":[17953947]},"message":"query","sequence":5,"time":1540915127902,"version":"1.0.0"} ``` Use [`@roarr/cli`](https://github.com/gajus/roarr-cli) to pretty-print the output. Syntax Highlighting --- ### Atom Syntax Highlighting Plugin Using [Atom](https://atom.io/) IDE you can leverage the [`language-babel`](https://github.com/gandm/language-babel) package in combination with the [`language-sql`](https://github.com/atom/language-sql) to enable highlighting of the SQL strings in the codebase. To enable highlighting, you need to: 1. Install `language-babel` and `language-sql` packages. 2. Configure `language-babel` "JavaScript Tagged Template Literal Grammar Extensions" setting to use `language-sql` to highlight template literals with `sql` tag (configuration value: `sql:source.sql`). 3. Use [`sql` helper to construct the queries](https://github.com/gajus/slonik#tagged-template-literals). For more information, refer to the [JavaScript Tagged Template Literal Grammar Extensions](https://github.com/gandm/language-babel#javascript-tagged-template-literal-grammar-extensions) documentation of `language-babel` package. ### VS Code Syntax Highlighting Extension The [`vscode-sql-template-literal` extension](https://marketplace.visualstudio.com/items?itemName=forbeslindesay.vscode-sql-template-literal) provides syntax highlighting for VS Code: Readme --- ### Keywords * postgresql * promise * types
gsmoothr
cran
R
Package ‘gsmoothr’ October 13, 2022 Version 0.1.7 Date 2013/03/03 Title Smoothing tools Author <NAME> <<EMAIL>> Maintainer <NAME> <<EMAIL>> Depends R (>= 2.8.0), methods Description Tools rewritten in C for various smoothing tasks License LGPL (>= 2.0) NeedsCompilation yes Repository CRAN Date/Publication 2014-06-10 09:55:15 R topics documented: tmean... 1 trimmedMea... 3 tmeanC Trimmed Mean Smoother Description A fast trimmed mean smoother (using C code) of data at discrete points (e.g. probe-level data). Usage tmeanC(sp, x, spout = NULL, nProbes = 10, probeWindow = 600, trim = 0.1) Arguments sp numeric vector of positions (x-values) x numeric vector of data (corresponding to sp) spout optional vector of output positions to calculate the trimmed mean at, default: NULL nProbes minimum number of observations required within window probeWindow distance (in x) in each direction to look for observations to be used in the trimmed mean trim proportion of trim to use in trimmed mean Details Using the specified probe window, this procedure uses all values within the window and calculates a trimmed mean with the specified amount of trim. If there are not enough observations within the window at a given position (as given by nProbes), a zero is returned. Value vector (of the same length as sp (or spout)) giving the trimmed mean smoothed values Author(s) <NAME> See Also trimmedMean Examples sp <- seq(100, 1000, by=100) ss <- seq(100,1000, by=50) set.seed(14) x <- rnorm(length(sp)) tmC <- tmeanC(sp, x, probeWindow=300, nProbes=5) tmC1 <- tmeanC(sp, x, spout=sp, probeWindow=300, nProbes=5) tmC2 <- tmeanC(sp, x, spout=ss, probeWindow=300, nProbes=5) cbind(tmC,tmC1) plot(sp, x, type="h", ylim=c(-2,2)) lines(sp, tmC1, col="blue") lines(ss, tmC2, col="red") trimmedMean Trimmed Mean Smoother Description A slow trimmed mean smoother (using R code) of data at discrete points (e.g. probe-level data). Usage trimmedMean(pos, score, probeWindow=600, meanTrim=.1, nProbes=10) Arguments pos numeric vector of positions (x-values) score numeric vector of data (corresponding to pos) probeWindow distance (in x) in each direction to look for observations to be used in the trimmed mean meanTrim proportion of trim to use in trimmed mean nProbes minimum number of observations required within window Details Using the specified probe window, this procedure uses all values within the window and calculates a trimmed mean with the specified amount of trim. If there are not enough observations within the window at a given position (as given by nProbes), a zero is returned. Value vector (of the same length as pos) giving the trimmed mean smoothed values Author(s) <NAME> See Also tmeanC Examples sp <- seq(100, 1000, by=100) ss <- seq(100,1000, by=50) set.seed(14) x <- rnorm(length(sp)) tmC <- trimmedMean(sp, x, probeWindow=300, nProbes=5)
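A minimal illustration of the nProbes cut-off described in Details, reusing the toy data from the Examples above: the positions in sp are 100 apart, so a 50-unit window holds only the position itself, which is below nProbes = 5, and both smoothers return zeros at every position.
# too few observations fall inside each window, so zeros are returned
tmeanC(sp, x, probeWindow=50, nProbes=5)
trimmedMean(sp, x, probeWindow=50, nProbes=5)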
outreg
cran
R
Package ‘outreg’ October 14, 2022 Type Package Title Regression Table for Publication Version 0.2.2 Description Create regression tables for publication. Currently supports 'lm', 'glm', 'survreg', and 'ivreg' outputs. License MIT + file LICENSE Encoding UTF-8 LazyData true Depends R (>= 3.0) Imports magrittr, reshape2, sandwich, stats, stringr, tidyr, utils Suggests AER, survival, testthat RoxygenNote 5.0.1 URL https://github.com/kota7/outreg BugReports https://github.com/kota7/outreg/issues NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2017-03-14 06:58:31 R topics documented: get_display_name... 2 outre... 2 outreg_stat_lis... 4 get_display_names Return display name for stats Description Return display name for stats Usage get_display_names(stats) Arguments stats character vector of stats Value character vector of display names outreg Generate Regression Table Description Generate a regression table in data.frame format from a set of model fit objects. Currently supports lm, glm, survreg, and ivreg model outcomes. Usage outreg(fitlist, digits = 3L, alpha = c(0.1, 0.05, 0.01), bracket = c("se"), starred = c("coef"), robust = FALSE, small = TRUE, constlast = FALSE, norepeat = TRUE, displayed = list(), ...) Arguments fitlist list of regression outcomes digits number of decimal places for real numbers alpha vector of significance levels to star bracket stats to be in brackets starred stats to put stars on robust if TRUE, robust standard error is used small if TRUE, small sample parameter distribution is used constlast if TRUE, intercept is moved to the end of coefficient list norepeat if TRUE, repeated variable names are replaced by an empty string displayed a list of named logicals to customize the stats to display ... alternative way to specify which stats to display Details Use outreg_stat_list to see the available stats names. The stats names are to be used for specifying the bracket, starred, and displayed options. Statistics to include can be chosen by the displayed option or by `...`. For example, outreg(fitlist, displayed = list(pv = TRUE)) is identical to outreg(fitlist, pv = TRUE), and p values of coefficients are displayed.
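Written out, the equivalence described above looks as follows (assuming fitlist is a list of fitted models, as in the Examples below); both calls produce the same table with p values added:
# the displayed option and the ... shortcut are interchangeable
outreg(fitlist, displayed = list(pv = TRUE))
outreg(fitlist, pv = TRUE)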
Value regression table in data.frame format Examples fitlist <- list(lm(mpg ~ cyl, data = mtcars), lm(mpg ~ cyl + wt + hp, data = mtcars), lm(mpg ~ cyl + wt + hp + drat, data = mtcars)) outreg(fitlist) # with custom regression names outreg(setNames(fitlist, c('small', 'medium', 'large'))) # star on standard errors, instead of estimate outreg(fitlist, starred = 'se') # include other stats outreg(fitlist, pv = TRUE, tv = TRUE, se = FALSE) # poisson regression counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3,1,9) treatment <- gl(3,3) fitlist2 <- list(glm(counts ~ outcome, family = poisson()), glm(counts ~ outcome + treatment, family = poisson())) outreg(fitlist2) # logistic regression fitlist3 <- list(glm(cbind(ncases, ncontrols) ~ agegp, data = esoph, family = binomial()), glm(cbind(ncases, ncontrols) ~ agegp + tobgp + alcgp, data = esoph, family = binomial()), glm(cbind(ncases, ncontrols) ~ agegp + tobgp * alcgp, data = esoph, family = binomial())) outreg(fitlist3) # survival regression library(survival) fitlist4 <- list(survreg(Surv(time, status) ~ ph.ecog + age, data = lung), survreg(Surv(time, status) ~ ph.ecog + age + strata(sex), data = lung)) outreg(fitlist4) # tobit regression fitlist5 <- list(survreg(Surv(durable, durable>0, type='left') ~ 1, data=tobin, dist='gaussian'), survreg(Surv(durable, durable>0, type='left') ~ age + quant, data=tobin, dist='gaussian')) outreg(fitlist5) # instrumental variable regression library(AER) data("CigarettesSW", package = "AER") CigarettesSW$rprice <- with(CigarettesSW, price/cpi) CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi) CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi) fitlist6 <- list(OLS = lm(log(packs) ~ log(rprice) + log(rincome), data = CigarettesSW, subset = year == "1995"), IV1 = ivreg(log(packs) ~ log(rprice) + log(rincome) | log(rincome) + tdiff + I(tax/cpi), data = CigarettesSW, subset = year == "1995"), IV2 = ivreg(log(packs) ~ log(rprice) + log(rincome) | log(population) + tdiff + I(tax/cpi), data = CigarettesSW, subset = year == "1995")) outreg(fitlist6) outreg_stat_list List of Statistics Available on outreg Description Returns all available statistics on outreg. Statistics names can be used for customizing the outputs, e.g., to choose stats to display or to choose stats to put stars on. Usage outreg_stat_list() Value a data.frame that matches stat name and display name Examples outreg_stat_list()
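A short sketch tying outreg_stat_list and get_display_names together; the stat codes "coef", "se", and "pv" below are used only as examples of codes that appear elsewhere in this manual, and the authoritative list comes from outreg_stat_list() itself.
stats <- outreg_stat_list()                 # data.frame matching stat names to display names
get_display_names(c("coef", "se", "pv"))    # display names for a few individual stat codes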
simmer
ruby
Ruby
Simmer === --- [![Gem Version](https://badge.fury.io/rb/simmer.svg)](https://badge.fury.io/rb/simmer) [![Build Status](https://travis-ci.org/bluemarblepayroll/simmer.svg?branch=master)](https://travis-ci.org/bluemarblepayroll/simmer) [![Maintainability](https://api.codeclimate.com/v1/badges/61996dff817d44efc408/maintainability)](https://codeclimate.com/github/bluemarblepayroll/simmer/maintainability) [![Test Coverage](https://api.codeclimate.com/v1/badges/61996dff817d44efc408/test_coverage)](https://codeclimate.com/github/bluemarblepayroll/simmer/test_coverage) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) *Note: This is not officially supported by Hitachi Vantara.* This library provides a Ruby-based testing suite for [Pentaho Data Integration](https://www.hitachivantara.com/en-us/products/data-management-analytics/pentaho-platform/pentaho-data-integration.html). You can create specifications for Pentaho transformations and jobs, then ensure they always run correctly. Compatibility & Limitations --- This library was tested against: * Kettle version 6.1.0.1-196 * MacOS and Linux Note that it is also currently limited to: * MySQL * Amazon Simple Storage Service Future enhancements could potentially include breaking these out and making them plug-ins in order to support other database and cloud storage vendors/systems. Installation --- To install through Rubygems: ``` gem install simmer ``` You can also add this to your Gemfile: ``` bundle add simmer ``` After installation, you will need to do two things: 1. Add a simmer configuration file 2. Add a simmer directory ### Simmer Configuration File The configuration file contains information about external systems, such as: * Amazon Simple Storage Service * Local File System * Pentaho Data Integration * MySQL Database Copy this configuration template into your project's root at `config/simmer.yaml`: ``` # Automatically retry a test when it has failed this many times due to a timeout error: timeout_failure_retry_count: 0 mysql_database: database: username: host: port: flags: MULTI_STATEMENTS spoon_client: dir: spec/mocks/spoon args: 0 # local_file_system: # dir: tmp/store_test # aws_file_system: # access_key_id: # bucket: # default_expires_in_seconds: 3600 # encryption: AES256 # region: # secret_access_key: ``` Note: You can configure any options for `mysql_database` listed in the [mysql2 gem configuration options](https://github.com/brianmario/mysql2#connection-options). Fill out the missing configuration values required for each section. If you would like to use your local file system, un-comment the `local_file_system` key. If you would like to use AWS S3, un-comment the `aws_file_system` key. Note: To help ensure non-test databases and file systems do not get accidentally wiped, there is a naming-convention-based protection that you must follow: 1. Database names must end in `_test` 2. The local file system dir must end in `-test` 3. The AWS file system bucket must end in `-test` ### Simmer Directory You will also need to create the following folder structure in your project's root folder: * **simmer/files**: Place any files necessary to stage in this directory. * **simmer/fixtures**: Place YAML files that describe the database records necessary to stage the database. * **simmer/specs**: Place specification YAML files here. It does not matter how each of these directories is internally structured; they can contain any arbitrary folder structure.
These directories should be version controlled as they contain the necessary information to execute your tests. But you may want to ignore the `simmer/results` directory, as that will store the results after execution. Getting Started --- ### What is a Specification? A specification is a blueprint for how to run a transformation or job and contains configuration for: * File system state before execution * Database state before execution * Execution command * Expected database state after execution * Expected execution output #### Specification Example The following is an example specification for a transformation: ``` name: Declassify Users stage: files: src: noc_list.csv dest: input/noc_list.csv fixtures: - iron_man - hulk act: name: load_noc_list repository: top_secret type: transformation params: files: input_file: noc_list.csv keys: code: 'The secret code is: {codes.the_secret_one}' assert: assertions: - type: table name: agents records: - call_sign: iron_man first: tony last: stark - call_sign: hulk first: bruce last: banner - type: table name: agents logic: includes records: - last: stark - type: output value: output to stdout ``` ##### Stage Section The stage section defines the pre-execution state that needs to exist before PDI execution. There are two options: 1. Files 2. Fixtures ###### Files Each file entry specifies two things: * **src**: the location of the file (relative to the `simmer/files` directory) * **dest**: where to copy it to (within the configured file system: local or S3) ###### Fixtures Fixtures will populate the database specified in the `mysql_database` section of `simmer.yaml`. In order to do this you need to: 1. Add the fixture to a YAML file in the `simmer/fixtures` directory. 2. Add the name of the fixture you wish to use in the `stage/fixtures` section, as illustrated above. **Adding Fixtures** Fixtures live in YAML files within the `simmer/fixtures` directory. They can be placed in any arbitrary file; the only restriction is that their top-level keys must uniquely identify each fixture. Here is an example of a fixture file: ``` hulk: table: agents fields: call_sign: hulk first: CLASSIFIED last: CLASSIFIED iron_man: table: agents fields: call_sign: iron_man first: CLASSIFIED last: CLASSIFIED ``` This example specifies two fixtures: `hulk` and `iron_man`. Each will end up creating a record in the `agents` table with their respective attributes (columns). ##### Act Section The act configuration contains the necessary information for invoking Pentaho through its Spoon script. The options are: * **name**: The name of the transformation or job * **repository**: The name of the Kettle repository * **type**: transformation or job * **file params**: key-value pairs to send through to Spoon as params. The values will be joined with, and are relative to, the `simmer/files` directory. * **key params**: key-value pairs to send through to Spoon as params. ##### Assert Section The assert section contains the expected state of: * Database table contents * Pentaho output contents Take the assert block from the example above: ``` assert: assertions: - type: table name: agents records: - call_sign: iron_man first: tony last: stark - call_sign: hulk first: bruce last: banner - type: table name: agents logic: includes records: - last: stark - type: output value: output to stdout ``` This contains two table assertions and one output assertion.
It explicitly states that: * The table `agents` should contain exactly two records with the column values as described (iron_man and hulk) * The table `agents` should include a record where the last name is `stark` * The standard output should contain the string described in the value somewhere in the log **Note**: Output does not currently test the standard error, just the standard output. ###### Table Assertion Rules Currently table assertions operate under a very rudimentary set of rules: * Record order does not matter * Each record being asserted should have the same keys compared * All values are asserted against their string-coerced value * There is no concept of relationships or associations (yet) ### Running Tests After you have configured simmer and written a specification, you can run it by executing: ``` bundle exec simmer ./simmer/specs/name_of_the_spec.yaml ``` The passed-in path can also be a directory, and all specs in the directory (recursively) will be executed: ``` bundle exec simmer ./simmer/specs/some_directory ``` You can also omit the path altogether to execute all specs: ``` bundle exec simmer ``` Custom Configuration --- It is possible to define custom test lifecycle hooks. These are very similar to [Rspec](https://relishapp.com/rspec/rspec-core/v/3-9/docs/hooks/before-and-after-hooks). Here is an example of how to ensure that code is called before and after the entire suite: ``` Simmer.configure do |config| config.before(:suite) { puts 'about to run the entire suite' } config.after(:suite) do |result| result_msg = result.passed? ? 'passed' : 'failed' puts "The suite #{result_msg}." end end ``` Note that after callbacks take an optional parameter, which is the result object. It is also possible to specify custom code which runs before and after each individual specification. ``` Simmer.configure do |config| config.before(:each) { puts 'I will run before each test' } config.after(:each) do |result| result_msg = result.passed? ? 'passed' : 'failed' puts "The specification #{result_msg}." end end ``` Contributing --- ### Development Environment Configuration Basic steps to take to get this repository compiling: 1. Install [Ruby](https://www.ruby-lang.org/en/documentation/installation/) (check simmer.gemspec for versions supported) 2. Install bundler (gem install bundler) 3. Clone the repository (git clone git@github.com:bluemarblepayroll/simmer.git) 4. Navigate to the root folder (cd simmer) 5. Install dependencies (bundle) 6. Create the 'simmer_test' MySQL database as defined in `spec/db/tables.sql`. 7. Add the tables from `spec/db/tables.sql` to this database. 8. Configure your test simmer.yaml: ``` cp spec/config/simmer.yaml.ci spec/config/simmer.yaml ``` 9. Edit `spec/config/simmer.yaml` so that it can connect to the database created in steps 6 and 7. ### Running Tests To execute the test suite and code-coverage tool, run: ``` bundle exec rspec spec --format documentation ``` Alternatively, you can have Guard watch for changes: ``` bundle exec guard ``` Also, do not forget to run Rubocop: ``` bundle exec rubocop ``` or run all three in one command: ``` bundle exec rake ``` ### Publishing Note: ensure you have proper authorization before trying to publish new versions.
After code changes have successfully gone through the Pull Request review process, the following steps should be followed for publishing new versions: 1. Merge Pull Request into master 2. Update `lib/simmer/version.rb` using [semantic versioning](https://semver.org/) 3. Install dependencies: `bundle` 4. Update `CHANGELOG.md` with release notes 5. Commit & push master to remote and ensure CI builds master successfully 6. Run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org). Code of Conduct --- Everyone interacting in this codebase, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/bluemarblepayroll/simmer/blob/master/CODE_OF_CONDUCT.md). License --- This project is MIT Licensed.
hcpupd
readthedoc
Ruby
## Binary Distribution¶ For most modern Linux derivates, you should be able to simply run the binary provided here. Grab it there and follow the instructions in chapter Run hcpupd. ## Build your own Binary¶ In case the provided binary fails to run on your Linux, you need to build it on your own. Here’s how to do that: Warning Make sure the `objcopy` utility is installed: `$ objcopy` If the command isn’t found, you need to install the GNU binutils package. For Fedora 24, this is: ``` $ sudo dnf install binutils.x86_64 ``` Clone the repository from GitLab: > $ git clone https://gitlab.com/simont3/hcpupd.git * Change into the project folder and create a Python 3 virtual environment and activate it: > $ cd hcpupd/src * Update pip and setuptools, then load all required dev-packages: > $ sudo pip3 install --upgrade pip setuptools $ sudo pip3 install -r pip-requirements-dev.txt * Run the build tool: > $ pyinstaller hcpupd.spec You should find the executable in the `dist` subfolder. * Now follow the instructions in chapter Run hcpupd. hcpupd’s behavior is controlled by a configuration file, which is searched for in this order: If specified, the file identified by the `-c` parameter, otherwise * Current working directory: `.hcpupd.conf` * User’s home directory: `~/.hcpupd.conf` * System-wide: ``` /var/lib/misc/hcpupd.conf ``` Tip hcpupd will offer to create a template configuration file in case it can’t find one in any of the given locations: ``` $ hcpupd No configuration file found. Do you want me to create a template file in the current directory (y/n)? y A template file (.hcpup.conf) has been created in the current directory. Please edit it to fit your needs... ``` ## The configuration file explained¶ The configuration file is an ini-style text file with several sections, each of them holding configuration values. ### The [src] section¶ describes from where files are to be uploaded to HCP, and how. ``` [src] watchdir = /watchdir upload existing files = yes delete after upload = yes remove empty folders after upload = yes ``` * watchdir - is the folder that will be monitored; every file written into it will be uploaded to HCP. The folder specified here must exist when hcpupd is started. * upload existing files - enable discovery of files that are already in `watchdir` when hcpupd is started. * delete after upload - enabled auto-deletion of files as soon as they have been uploaded successfully. * remove empty folders after upload - enable auto-deletion of folders as soon as the last file has been uploaded. Warning Be aware that setting ``` remove empty folders after upload = yes ``` will cause hcpupd to immediately delete a folder when the last file it contained has been uploaded. This may cause applications writing into the watchdir to fail, as they might still expect a folder to exist they created earlier. ### The [tgt] section¶ describes where to store the files found in [src], and how. 
``` [tgt] namespace = namespace.tenant.hcp.domain.com path = hcpupd/application user = username password = her_password ssl = yes obfuscate = yes local DNS resolver = no upload threads = 2 ``` * namespace - the HCP Namespace to write to * path - the path within the Namespace * user - a user with write access to the Namespace * password - her password * ssl - enable transfer encryption (HTTPS) * obfuscate - enable obfuscation of the file names stored to HCP * local DNS resolver - set to `no` to use the built-in resolver * upload threads - the number of uploader threads to use ### The [meta] section¶ describes how to build the custom metadata annotation stored with each files (if `obfuscate = yes` , only). ``` [meta] annotation = hcpupd tag_timestamp = yes tag_note = files from my application retention = 0 ``` * annotation - the name of the annotation to write * tag_timestamp - enable adding the file’s creation time * tag_note - a note that will be added * retention - `0` (zero) - the only supported value at this time ### The [log] section¶ defines the logfile to write and if extended debug logging shall be performed. ``` [log] logfile = /var/log/hcpupd.log log uploaded files = yes debug = yes ``` * log uploaded files - this will enable logging of uploaded files even if debug = no Tip Make sure to create the folder into which the `logfile` shall be stored before you start hcpupd the first time! Moving a folder structure into the watchdir… …will lead to the files in the top-level folder to be uploaded, but everything else will not. Reason for this is that the inotify mechanism in charge is not getting the information for all the sub-folders when moving in a folder structure. Workaround: avoid moving in folder structures, copy them instead. * Renaming a folder (or moving a folder within watchdir) is not supported. 0.2.5 2017-06-11 * fixed a bug that caused installation through pip to fail * changed documentation telling not to run hcpupd as root, plus some systemd-related info 0.2.4 2017-03-27 * in case of inotify queue overflow, a directory scan is triggered to make sure no new files get lost * in case the queue runs empty, we now preventively trigger a directory scan, as well * new config item ‘log uploaded files’ allows to log uploaded files in non-debug logging mode * added a message on shutdown that tells how many files are left for later upload 0.2.3 2017-03-01 * re-factored configuration file handling * now surviving connection loss to HCP (missed file recovery still requires hcpupd restart) 0.2.2 2017-02-15 * fixed a bug that caused moved_in folders not to be processed * added some more debug output for watched folder handling 0.2.1 2017-02-12 * various fixes related to publishing throuh gitlab and readthedocs.org 0.2.0 2017-02-11 * First public release Date: 2017-01-01 Categories: Tags: ## The MIT License (MIT)¶ Copyright (c) 2017 <NAME> (<EMAIL>) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ## Trademarks and Copyrights of used material¶ Hitachi Content Platform is a registered trademark of Hitachi Data Systems Corp., in the United States and other countries. All other trademarks, service marks, and company names in this document or web site are properties of their respective owners. The logo used in the documentation is based on the picture found at pixabay.com, where it is declared under CC0 Public Domain license.
PatientProfiles
cran
R
Package ‘PatientProfiles’ October 6, 2023 Type Package Title Identify Characteristics of Patients in the OMOP Common Data Model Version 0.4.0 Maintainer <NAME> <<EMAIL>> Description Identify the characteristics of patients in data mapped to the Observational Medical Out- comes Partnership (OMOP) common data model. License Apache License (>= 2) Encoding UTF-8 RoxygenNote 7.2.3 Suggests covr, duckdb, testthat (>= 3.1.5), knitr, CodelistGenerator, rmarkdown, glue, odbc, ggplot2, spelling, RPostgres, dbplyr, PaRe, here, magick, plotly, ggraph, DT, cowplot, DiagrammeRsvg Imports magrittr, CDMConnector (>= 1.0.0), dplyr, tidyr, checkmate, lubridate, DBI, rlang, cli, pillar, stringr, gt VignetteBuilder knitr URL https://darwin-eu-dev.github.io/PatientProfiles/ Language en-US Depends R (>= 2.10) Config/testthat/edition 3 Config/testthat/parallel true NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-3308-9905>), <NAME> [aut] (<https://orcid.org/0000-0002-0847-4855>), <NAME> [aut] (<https://orcid.org/0000-0002-9517-8834>), <NAME> [aut] (<https://orcid.org/0000-0002-8462-8668>), <NAME> [aut] (<https://orcid.org/0000-0002-9286-1128>), <NAME> [ctb] (<https://orcid.org/0009-0006-7948-3747>), <NAME> [ctb] (<https://orcid.org/0000-0002-6872-5804>) Repository CRAN Date/Publication 2023-10-06 15:00:05 UTC R topics documented: addAg... 3 addAttribute... 4 addCategorie... 5 addCdmNam... 6 addCohortIntersec... 7 addCohortIntersectCoun... 9 addCohortIntersectDat... 12 addCohortIntersectDay... 14 addCohortIntersectFla... 16 addCohortNam... 18 addConceptIntersec... 19 addConceptIntersectCoun... 21 addConceptIntersectDat... 22 addConceptIntersectDay... 23 addConceptIntersectFla... 24 addDateOfBirt... 25 addDemographic... 26 addFutureObservatio... 28 addInObservatio... 29 addIntersec... 30 addPriorObservatio... 32 addSe... 33 availableFunction... 34 detectVariable... 35 getConceptNam... 36 getEndNam... 36 getSourceConceptNam... 37 getStartNam... 37 gtCharacteristic... 38 gtResul... 39 mockPatientProfile... 41 summariseCharacteristic... 43 summariseLargeScaleCharacteristic... 45 summariseResul... 46 suppressCount... 47 variableType... 47 addAge Compute the age of the individuals at a certain date Description Compute the age of the individuals at a certain date Usage addAge( x, cdm = attr(x, "cdm_reference"), indexDate = "cohort_start_date", ageName = "age", ageGroup = NULL, ageDefaultMonth = 1, ageDefaultDay = 1, ageImposeMonth = FALSE, ageImposeDay = FALSE ) Arguments x Table with individuals in the cdm. cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. indexDate Variable in x that contains the date to compute the age. ageName Name of the new column that contains age. ageGroup List of age groups to be added. ageDefaultMonth Month of the year assigned to individuals with missing month of birth. By default: 1. ageDefaultDay day of the month assigned to individuals with missing day of birth. By default: 1. ageImposeMonth Whether the month of the date of birth will be considered as missing for all the individuals. ageImposeDay Whether the day of the date of birth will be considered as missing for all the individuals. 
Value tibble with the age column added Examples library(DBI) library(duckdb) library(PatientProfiles) cohort1 <- dplyr::tibble( cohort_definition_id = c("1", "1", "1"), subject_id = c("1", "2", "3"), cohort_start_date = c( as.Date("2010-01-01"), as.Date("2010-01-01"), as.Date("2010-01-01") ), cohort_end_date = c( as.Date("2015-01-01"), as.Date("2013-01-01"), as.Date("2018-01-01") ) ) person <- dplyr::tibble( person_id = c("1", "2", "3"), gender_concept_id = c("8507", "8532", "8507"), year_of_birth = c(2000, 1995, NA), month_of_birth = c(NA, 07, 08), day_of_birth = c(01, 25, 03) ) cdm <- mockPatientProfiles(person = person, cohort1 = cohort1) addAge(x = cdm[["cohort1"]], cdm = cdm) addAttributes Get attributes from one cohort to another Description Get attributes from one cohort to another Usage addAttributes(newcohort, oldcohort) Arguments newcohort cohort to which to attach the attributes oldcohort cohort from which to get the attributes Value new cohort with added attributes from the other given cohort Examples library(CDMConnector) library(PatientProfiles) library(dplyr) cdm <- mockPatientProfiles() attributes(cdm$cohort1) x <- cdm$cohort1 %>% filter(cohort_definition_id == 1) %>% computeQuery() attributes(x) x <- addAttributes(x, cdm$cohort1) attributes(cdm$cohort1) addCategories Categorize a numeric variable Description Categorize a numeric variable Usage addCategories( x, variable, categories, missingCategoryValue = "None", overlap = FALSE ) Arguments x Table with individuals in the cdm variable Target variable that we want to categorize. categories List of lists of named categories with lower and upper limit. missingCategoryValue Value to assign to those individuals not in any named category. If NULL or NA, missing will values will be given. overlap TRUE if the categories given overlap Value tibble with the categorical variable added. 
Examples #' library(DBI) library(duckdb) library(PatientProfiles) cohort1 <- dplyr::tibble( cohort_definition_id = c("1", "1", "1"), subject_id = c("1", "2", "3"), cohort_start_date = c( as.Date("2010-03-03"), as.Date("2010-03-01"), as.Date("2010-02-01") ), cohort_end_date = c( as.Date("2015-01-01"), as.Date("2013-01-01"), as.Date("2013-01-01") ) ) person <- dplyr::tibble( person_id = c("1", "2", "3"), gender_concept_id = c("8507", "8507", "8507"), year_of_birth = c(1980, 1970, 2000), month_of_birth = c(03, 07, NA), day_of_birth = c(NA, 02, 01) ) cdm <- mockPatientProfiles(person = person, cohort1 = cohort1) result <- cdm$cohort1 %>% addAge(cdm) %>% addCategories( variable = "age", categories = list("age_group" = list( "0 to 39" = c(0, 39), "40 to 79" = c(40, 79), "80 to 150" = c(80, 150) )) ) addCdmName Add cdm name Description Add cdm name Usage addCdmName(table, cdm = NULL) Arguments table Table in the cdm cdm A cdm reference object Value Table with an extra column with the cdm names Examples library(PatientProfiles) cdm <- mockPatientProfiles() cdm$cohort1 %>% addCdmName() addCohortIntersect Compute the intersect with a target cohort, you can compute the num- ber of occurrences, a flag of presence, a certain date and/or the time difference Description Compute the intersect with a target cohort, you can compute the number of occurrences, a flag of presence, a certain date and/or the time difference Usage addCohortIntersect( x, cdm = attr(x, "cdm_reference"), targetCohortTable, targetCohortId = NULL, indexDate = "cohort_start_date", censorDate = NULL, targetStartDate = "cohort_start_date", targetEndDate = "cohort_end_date", window = list(c(0, Inf)), order = "first", flag = TRUE, count = TRUE, date = TRUE, days = TRUE, nameStyle = "{value}_{cohort_name}_{window_name}" ) Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. targetCohortTable name of the cohort that we want to check for overlap targetCohortId vector of cohort definition ids to include indexDate Variable in x that contains the date to compute the intersection. censorDate whether to censor overlap events at a specific date or a column date of x targetStartDate date of reference in cohort table, either for start (in overlap) or on its own (for incidence) targetEndDate date of reference in cohort table, either for end (overlap) or NULL (if incidence) window window to consider events of order which record is considered in case of multiple records flag TRUE or FALSE. If TRUE, flag will calculated for this intersection count TRUE or FALSE. If TRUE, the number of counts will be calculated for this intersection date TRUE or FALSE. If TRUE, date will be calculated for this intersection days TRUE or FALSE. 
If TRUE, time difference in days will be calculated for this intersection nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples cohort1 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2), cohort_start_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ), cohort_end_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ) ) cohort2 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2, 2, 1), cohort_start_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), cohort_end_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), ) cdm <- mockPatientProfiles(cohort1 = cohort1, cohort2 = cohort2) result <- cdm$cohort1 %>% addCohortIntersect( targetCohortTable = "cohort2" ) %>% dplyr::collect() addCohortIntersectCount It creates columns to indicate number of occurrences of intersection with a cohort Description It creates columns to indicate number of occurrences of intersection with a cohort Usage addCohortIntersectCount( x, cdm = attr(x, "cdm_reference"), targetCohortTable, targetCohortId = NULL, indexDate = "cohort_start_date", censorDate = NULL, targetStartDate = "cohort_start_date", targetEndDate = "cohort_end_date", window = list(c(0, Inf)), nameStyle = "{cohort_name}_{window_name}" ) Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. targetCohortTable name of the cohort that we want to check for overlap targetCohortId vector of cohort definition ids to include indexDate Variable in x that contains the date to compute the intersection. 
censorDate whether to censor overlap events at a specific date or a column date of x targetStartDate date of reference in cohort table, either for start (in overlap) or on its own (for incidence) targetEndDate date of reference in cohort table, either for end (overlap) or NULL (if incidence) window window to consider events of nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples library(PatientProfiles) library(dplyr) cohort1 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2), cohort_start_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ), cohort_end_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ) ) cohort2 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2, 2, 1), cohort_start_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), cohort_end_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), ) cdm <- mockPatientProfiles(cohort1 = cohort1, cohort2 = cohort2) result <- cdm$cohort1 %>% addCohortIntersectCount( targetCohortTable = "cohort2" ) %>% dplyr::collect() addCohortIntersectDate Date of cohorts that are present in a certain window Description Date of cohorts that are present in a certain window Usage addCohortIntersectDate( x, cdm = attr(x, "cdm_reference"), targetCohortTable, targetCohortId = NULL, indexDate = "cohort_start_date", censorDate = NULL, targetDate = "cohort_start_date", order = "first", window = c(0, Inf), nameStyle = "{cohort_name}_{window_name}" ) Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. targetCohortTable Cohort table to targetCohortId Cohort IDs of interest from the other cohort table. If NULL, all cohorts will be used with a time variable added for each cohort of interest indexDate Variable in x that contains the date to compute the intersection. censorDate whether to censor overlap events at a specific date or a column date of x targetDate Date of interest in the other cohort table. Either cohort_start_date or cohort_end_date order date to use if there are multiple records for an individual during the window of interest. Either first or last. window Window of time to identify records relative to the indexDate. Records outside of this time period will be ignored. nameStyle naming of the added column or columns, should include required parameters Value x along with additional columns for each cohort of interest.
Examples library(PatientProfiles) library(dplyr) cohort1 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2), cohort_start_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ), cohort_end_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ) ) cohort2 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2, 2, 1), cohort_start_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), cohort_end_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), ) cdm <- mockPatientProfiles(cohort1 = cohort1, cohort2 = cohort2) result <- cdm$cohort1 %>% addCohortIntersectDate( targetCohortTable = "cohort2" ) %>% dplyr::collect() addCohortIntersectDays It creates columns to indicate the number of days between the current table and a target cohort Description It creates columns to indicate the number of days between the current table and a target cohort Usage addCohortIntersectDays( x, cdm = attr(x, "cdm_reference"), targetCohortTable, targetCohortId = NULL, indexDate = "cohort_start_date", censorDate = NULL, targetDate = "cohort_start_date", order = "first", window = c(0, Inf), nameStyle = "{cohort_name}_{window_name}" ) Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. targetCohortTable Cohort table to targetCohortId Cohort IDs of interest from the other cohort table. If NULL, all cohorts will be used with a days variable added for each cohort of interest indexDate Variable in x that contains the date to compute the intersection. censorDate whether to censor overlap events at a specific date or a column date of x targetDate Date of interest in the other cohort table. Either cohort_start_date or cohort_end_date order date to use if there are multiple records for an individual during the window of interest. Either first or last. window Window of time to identify records relative to the indexDate. Records outside of this time period will be ignored. nameStyle naming of the added column or columns, should include required parameters Value x along with additional columns for each cohort of interest. 
Examples library(PatientProfiles) library(dplyr) cohort1 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2), cohort_start_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ), cohort_end_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ) ) cohort2 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2, 2, 1), cohort_start_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), cohort_end_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), ) cdm <- mockPatientProfiles(cohort1 = cohort1, cohort2 = cohort2) result <- cdm$cohort1 %>% addCohortIntersectDays( targetCohortTable = "cohort2" ) %>% dplyr::collect() addCohortIntersectFlag It creates columns to indicate the presence of cohorts Description It creates columns to indicate the presence of cohorts Usage addCohortIntersectFlag( x, cdm = attr(x, "cdm_reference"), targetCohortTable, targetCohortId = NULL, indexDate = "cohort_start_date", censorDate = NULL, targetStartDate = "cohort_start_date", targetEndDate = "cohort_end_date", window = list(c(0, Inf)), nameStyle = "{cohort_name}_{window_name}" ) Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. targetCohortTable name of the cohort that we want to check for overlap targetCohortId vector of cohort definition ids to include indexDate Variable in x that contains the date to compute the intersection. censorDate whether to censor overlap events at a specific date or a column date of x targetStartDate date of reference in cohort table, either for start (in overlap) or on its own (for incidence) targetEndDate date of reference in cohort table, either for end (overlap) or NULL (if incidence) window window to consider events of nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples cohort1 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2), cohort_start_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ), cohort_end_date = as.Date( c( "2020-01-01", "2020-01-15", "2020-01-20", "2020-01-01", "2020-02-01" ) ) ) cohort2 <- dplyr::tibble( cohort_definition_id = c(1, 1, 1, 1, 1, 1, 1), subject_id = c(1, 1, 1, 2, 2, 2, 1), cohort_start_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), cohort_end_date = as.Date( c( "2020-01-15", "2020-01-25", "2020-01-26", "2020-01-29", "2020-03-15", "2020-01-24", "2020-02-16" ) ), ) cdm <- mockPatientProfiles(cohort1 = cohort1, cohort2 = cohort2) result <- cdm$cohort1 %>% addCohortIntersectFlag( targetCohortTable = "cohort2" ) %>% dplyr::collect() addCohortName Add cohort name for each cohort_definition_id Description Add cohort name for each cohort_definition_id Usage addCohortName(cohort) Arguments cohort cohort to which add the cohort name Value cohort with an extra column with the cohort names Examples library(PatientProfiles) cdm <- mockPatientProfiles() cdm$cohort1 %>% addCohortName() addConceptIntersect It creates columns to indicate overlap information between a table and a concept Description It creates columns to indicate overlap information between a table 
and a concept Usage addConceptIntersect( x, conceptSet, indexDate = "cohort_start_date", censorDate = NULL, window = list(c(0, Inf)), targetStartDate = "cohort_start_date", targetEndDate = NULL, order = "first", flag = TRUE, count = TRUE, date = TRUE, days = TRUE, nameStyle = "{value}_{concept_name}_{window_name}" ) Arguments x Table with individuals in the cdm conceptSet Concept set list. indexDate Variable in x that contains the date to compute the intersection. censorDate whether to censor overlap events at a date column of x window window to consider events in. targetStartDate date of reference in cohort table, either for start (in overlap) or on its own (for incidence) targetEndDate date of reference in cohort table, either for end (overlap) or NULL (if incidence) order which record is considered in case of multiple records flag TRUE or FALSE. If TRUE, flag will calculated for this intersection count TRUE or FALSE. If TRUE, the number of counts will be calculated for this intersection date TRUE or FALSE. If TRUE, date will be calculated for this intersection days TRUE or FALSE. If TRUE, time difference in days will be calculated for this intersection nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples library(PatientProfiles) library(CodelistGenerator) cdm <- mockPatientProfiles() # result <- cdm$cohort1 %>% # addConceptIntersect( # conceptSet = getDrugIngredientCodes(cdm, "acetaminophen") # ) %>% # dplyr::collect() addConceptIntersectCount It creates column to indicate the count overlap information between a table and a concept Description It creates column to indicate the count overlap information between a table and a concept Usage addConceptIntersectCount( x, conceptSet, indexDate = "cohort_start_date", censorDate = NULL, window = list(c(0, Inf)), targetStartDate = "cohort_start_date", targetEndDate = NULL, order = "first", nameStyle = "{concept_name}_{window_name}" ) Arguments x Table with individuals in the cdm conceptSet Concept set list. indexDate Variable in x that contains the date to compute the intersection. censorDate whether to censor overlap events at a date column of x window window to consider events in. targetStartDate date of reference in cohort table, either for start (in overlap) or on its own (for incidence) targetEndDate date of reference in cohort table, either for end (overlap) or NULL (if incidence) order last or first date to use for date/time calculations. nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples library(PatientProfiles) library(CodelistGenerator) cdm <- mockPatientProfiles() # result <- cdm$cohort1 %>% # addConceptIntersectCount( # conceptSet = getDrugIngredientCodes(cdm, "acetaminophen") # ) %>% # dplyr::collect() addConceptIntersectDate It creates column to indicate the date overlap information between a table and a concept Description It creates column to indicate the date overlap information between a table and a concept Usage addConceptIntersectDate( x, conceptSet, indexDate = "cohort_start_date", censorDate = NULL, window = list(c(0, Inf)), targetDate = "cohort_start_date", order = "first", nameStyle = "{concept_name}_{window_name}" ) Arguments x Table with individuals in the cdm conceptSet Concept set list. indexDate Variable in x that contains the date to compute the intersection. 
censorDate whether to censor overlap events at a date column of x window window to consider events in. targetDate date of reference in cohort table order last or first date to use for date/time calculations. nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples library(PatientProfiles) library(CodelistGenerator) cdm <- mockPatientProfiles() # result <- cdm$cohort1 %>% # addConceptIntersectDate( # conceptSet = getDrugIngredientCodes(cdm, "acetaminophen") # ) %>% # dplyr::collect() addConceptIntersectDays It creates column to indicate the days of difference from an index date to a concept Description It creates column to indicate the days of difference from an index date to a concept Usage addConceptIntersectDays( x, conceptSet, indexDate = "cohort_start_date", censorDate = NULL, window = list(c(0, Inf)), targetDate = "cohort_start_date", order = "first", nameStyle = "{concept_name}_{window_name}" ) Arguments x Table with individuals in the cdm conceptSet Concept set list. indexDate Variable in x that contains the date to compute the intersection. censorDate whether to censor overlap events at a date column of x window window to consider events in. targetDate date of reference in cohort table order last or first date to use for date/time calculations. nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples library(PatientProfiles) library(CodelistGenerator) cdm <- mockPatientProfiles() # result <- cdm$cohort1 %>% # addConceptIntersectDays( # conceptSet = getDrugIngredientCodes(cdm, "acetaminophen") # ) %>% # dplyr::collect() addConceptIntersectFlag It creates column to indicate the flag overlap information between a table and a concept Description It creates column to indicate the flag overlap information between a table and a concept Usage addConceptIntersectFlag( x, conceptSet, indexDate = "cohort_start_date", censorDate = NULL, window = list(c(0, Inf)), targetStartDate = "cohort_start_date", targetEndDate = NULL, order = "first", nameStyle = "{concept_name}_{window_name}" ) Arguments x Table with individuals in the cdm conceptSet Concept set list. indexDate Variable in x that contains the date to compute the intersection. censorDate whether to censor overlap events at a date column of x window window to consider events in. targetStartDate date of reference in cohort table, either for start (in overlap) or on its own (for incidence) targetEndDate date of reference in cohort table, either for end (overlap) or NULL (if incidence) order last or first date to use for date/time calculations. nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples library(PatientProfiles) library(CodelistGenerator) cdm <- mockPatientProfiles() # result <- cdm$cohort1 %>% # addConceptIntersectFlag( # conceptSet = getDrugIngredientCodes(cdm, "acetaminophen") # ) %>% # dplyr::collect() addDateOfBirth Add a column with the individual birth date Description Add a column with the individual birth date Usage addDateOfBirth( x, cdm = attr(x, "cdm_reference"), name = "date_of_birth", missingDay = 1, missingMonth = 1, imposeDay = FALSE, imposeMonth = FALSE ) Arguments x Table in the cdm that contains ’person_id’ or ’subject_id’ cdm ’cdm’ object created with CDMConnector::cdm_from_con(). 
name Name of the column to be added with the date of birth missingDay Day of birth assigned to individuals with a missing or imposed day of birth missingMonth Month of birth assigned to individuals with a missing or imposed month of birth imposeDay Whether to impose the day of birth imposeMonth Whether to impose the month of birth Value The function returns the table x with an extra column that contains the date of birth Examples library(PatientProfiles) cdm <- mockPatientProfiles() cdm$cohort1 %>% addDateOfBirth() addDemographics Compute demographic characteristics at a certain date Description Compute demographic characteristics at a certain date Usage addDemographics( x, cdm = attr(x, "cdm_reference"), indexDate = "cohort_start_date", age = TRUE, ageName = "age", ageDefaultMonth = 1, ageDefaultDay = 1, ageImposeMonth = FALSE, ageImposeDay = FALSE, ageGroup = NULL, sex = TRUE, sexName = "sex", priorObservation = TRUE, priorObservationName = "prior_observation", futureObservation = TRUE, futureObservationName = "future_observation" ) Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. indexDate Variable in x that contains the date to compute the demographic characteristics. age TRUE or FALSE. If TRUE, age will be calculated relative to indexDate ageName Age variable name ageDefaultMonth Month of the year assigned to individuals with missing month of birth. ageDefaultDay Day of the month assigned to individuals with missing day of birth. ageImposeMonth TRUE or FALSE. Whether the month of the date of birth will be considered as missing for all the individuals. ageImposeDay TRUE or FALSE. Whether the day of the date of birth will be considered as missing for all the individuals. ageGroup If not NULL, a list of ageGroup vectors sex TRUE or FALSE. If TRUE, sex will be identified sexName Sex variable name priorObservation TRUE or FALSE. If TRUE, days between the start of the current observation period and the indexDate will be calculated priorObservationName Prior observation variable name futureObservation TRUE or FALSE. If TRUE, days between the indexDate and the end of the current observation period will be calculated futureObservationName Future observation variable name Value cohort table with the added demographic information columns Examples library(PatientProfiles) cdm <- mockPatientProfiles() cdm$cohort1 %>% addDemographics(cdm) addFutureObservation Compute the number of days until the end of the observation period at a certain date Description Compute the number of days until the end of the observation period at a certain date Usage addFutureObservation( x, cdm = attr(x, "cdm_reference"), indexDate = "cohort_start_date", futureObservationName = "future_observation" ) Arguments x Table with individuals in the cdm. cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. indexDate Variable in x that contains the date to compute the future observation. 
futureObservationName name of the new column to be added Value cohort table with added column containing future observation of the individuals Examples library(DBI) library(duckdb) library(PatientProfiles) cohort1 <- dplyr::tibble( cohort_definition_id = c("1", "1", "1"), subject_id = c("1", "2", "3"), cohort_start_date = c( as.Date("2010-03-03"), as.Date("2010-03-01"), as.Date("2010-02-01") ), cohort_end_date = c( as.Date("2015-01-01"), as.Date("2013-01-01"), as.Date("2013-01-01") ) ) obs_1 <- dplyr::tibble( observation_period_id = c("1", "2", "3"), person_id = c("1", "2", "3"), observation_period_start_date = c( as.Date("2010-02-03"), as.Date("2010-02-01"), as.Date("2010-01-01") ), observation_period_end_date = c( as.Date("2014-01-01"), as.Date("2012-01-01"), as.Date("2012-01-01") ) ) cdm <- mockPatientProfiles( seed = 1, cohort1 = cohort1, observation_period = obs_1 ) result <- cdm$cohort1 %>% addFutureObservation(cdm) addInObservation Indicate if a certain record is within the observation period Description Indicate if a certain record is within the observation period Usage addInObservation( x, cdm = attr(x, "cdm_reference"), indexDate = "cohort_start_date", name = "in_observation" ) Arguments x Table with individuals in the cdm. cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. indexDate Variable in x that contains the date to compute the observation flag. name name of the column to hold the result of the query: 1 if the individual is in observation, 0 if not Value cohort table with the added binary column assessing inObservation Examples library(PatientProfiles) cdm <- mockPatientProfiles() cdm$cohort1 %>% addInObservation(cdm) addIntersect It creates columns to indicate overlap information between two tables Description It creates columns to indicate overlap information between two tables Usage addIntersect( x, cdm = attr(x, "cdm_reference"), tableName, value, filterVariable = NULL, filterId = NULL, idName = NULL, window = list(c(0, Inf)), indexDate = "cohort_start_date", censorDate = NULL, targetStartDate = getStartName(tableName), targetEndDate = getEndName(tableName), order = "first", nameStyle = "{value}_{id_name}_{window_name}" ) Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. tableName name of the cohort that we want to check for overlap value value of interest to add: it can be count, flag, date or time filterVariable the variable that we are going to use to filter (e.g. cohort_definition_id) filterId the value of filterVariable that we are interested in, it can be a vector idName the name of each filterId, must have same length than filterId window window to consider events of indexDate Variable in x that contains the date to compute the intersection. 
censorDate whether to censor overlap events at a date column of x targetStartDate date of reference in cohort table, either for start (in overlap) or on its own (for incidence) targetEndDate date of reference in cohort table, either for end (overlap) or NULL (if incidence) order last or first date to use for date/time calculations nameStyle naming of the added column or columns, should include required parameters Value table with added columns with overlap information Examples library(PatientProfiles) cdm <- mockPatientProfiles() result <- cdm$cohort1 %>% addIntersect(tableName = "cohort2", value = "date") %>% dplyr::collect() addPriorObservation Compute the number of days of prior observation in the current obser- vation period at a certain date Description Compute the number of days of prior observation in the current observation period at a certain date Usage addPriorObservation( x, cdm = attr(x, "cdm_reference"), indexDate = "cohort_start_date", priorObservationName = "prior_observation" ) Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. indexDate Variable in x that contains the date to compute the prior observation. priorObservationName name of the new column to be added Value cohort table with added column containing prior observation of the individuals Examples library(DBI) library(duckdb) library(PatientProfiles) cohort1 <- dplyr::tibble( cohort_definition_id = c("1", "1", "1"), subject_id = c("1", "2", "3"), cohort_start_date = c( as.Date("2010-03-03"), as.Date("2010-03-01"), as.Date("2010-02-01") ), cohort_end_date = c( as.Date("2015-01-01"), as.Date("2013-01-01"), as.Date("2013-01-01") ) ) obs_1 <- dplyr::tibble( observation_period_id = c("1", "2", "3"), person_id = c("1", "2", "3"), observation_period_start_date = c( as.Date("2010-02-03"), as.Date("2010-02-01"), as.Date("2010-01-01") ), observation_period_end_date = c( as.Date("2014-01-01"), as.Date("2012-01-01"), as.Date("2012-01-01") ) ) cdm <- mockPatientProfiles( seed = 1, cohort1 = cohort1, observation_period = obs_1 ) result <- cdm$cohort1 %>% addPriorObservation(cdm) addSex Compute the sex of the individuals Description Compute the sex of the individuals Usage addSex(x, cdm = attr(x, "cdm_reference"), sexName = "sex") Arguments x Table with individuals in the cdm cdm Object that contains a cdm reference. Use CDMConnector to obtain a cdm reference. sexName name of the new column to be added Value table x with the added column with sex information Examples library(PatientProfiles) cdm <- mockPatientProfiles() cdm$cohort1 %>% addSex(cdm) availableFunctions Show the available functions for the 4 classifications of data that are supported (numeric, date, binary and categorical) Description Show the available functions for the 4 classifications of data that are supported (numeric, date, binary and categorical) Usage availableFunctions(variableType = NULL) Arguments variableType A choice between: "numeric", "date", "binary" or "categorical". 
Value A tibble with the available functions for a certain variable classification (or all if NULL) Examples library(PatientProfiles) availableFunctions() availableFunctions("numeric") availableFunctions("date") availableFunctions("binary") availableFunctions("categorical") detectVariables Detect automatically variables with a certain classification Description Detect automatically variables with a certain classification Usage detectVariables( table, variableType, exclude = c("person_id", "subject_id", "cohort_definition_id", "cohort_name", "strata_name", "strata_level") ) Arguments table Tibble variableType Classification of interest, choice between "numeric", "date", "binary" and "cate- gorical" exclude Variables to exclude Value Variables in x with the desired classification Examples library(PatientProfiles) x <- dplyr::tibble( person_id = c(1, 2), start_date = as.Date(c("2020-05-02", "2021-11-19")), asthma = c(0, 1) ) detectVariables(x, "numeric") getConceptName Get the name of the concept_id column for a certain table in the cdm Description Get the name of the concept_id column for a certain table in the cdm Usage getConceptName(tableName) Arguments tableName Name of the table Value Name of the concept_id column in that table Examples library(PatientProfiles) getConceptName("condition_occurrence") getEndName Get the name of the end date column for a certain table in the cdm Description Get the name of the end date column for a certain table in the cdm Usage getEndName(tableName) Arguments tableName Name of the table Value Name of the end date column in that table Examples library(PatientProfiles) getEndName("condition_occurrence") getSourceConceptName Get the name of the source_concept_id column for a certain table in the cdm Description Get the name of the source_concept_id column for a certain table in the cdm Usage getSourceConceptName(tableName) Arguments tableName Name of the table Value Name of the source_concept_id column in that table Examples library(PatientProfiles) getSourceConceptName("condition_occurrence") getStartName Get the name of the start date column for a certain table in the cdm Description Get the name of the start date column for a certain table in the cdm Usage getStartName(tableName) Arguments tableName Name of the table Value Name of the start date column in that table Examples library(PatientProfiles) getStartName("condition_occurrence") gtCharacteristics Create a gt table from a summarisedCharacteristics object. Description ‘r lifecycle::badge("experimental")‘ Usage gtCharacteristics( summarisedCharacteristics, pivotWide = c("CDM Name", "Group", "Strata"), format = c(`N (%)` = "count (percentage%)", "median [min; q25 - q75; max]", "mean (sd)", "median [q25 - q75]", N = "count"), keepNotFormatted = TRUE, decimals = c(default = 0), decimalMark = ".", bigMark = "," ) Arguments summarisedCharacteristics Summary characteristics long table. 
pivotWide variables to pivot wide format formats and labels to use keepNotFormatted Whether to keep non-formatted estimate types decimals Decimals per estimate_type decimalMark decimal mark bigMark big mark Value New table in gt format Examples library(PatientProfiles) cdm <- mockPatientProfiles() summariseCharacteristics( cohort = cdm$cohort1, ageGroup = list(c(0, 19), c(20, 39), c(40, 59), c(60, 79), c(80, 150)), tableIntersect = list( "Visits" = list( tableName = "visit_occurrence", value = "count", window = c(-365, 0) ) ), cohortIntersect = list( "Medications" = list( targetCohortTable = "cohort2", value = "flag", window = c(-365, 0) ) ), minCellCount = 1 ) %>% gtCharacteristics() gtResult Create a gt table from a summary object. Description ‘r lifecycle::badge("experimental")‘ Usage gtResult( summarisedResult, long, wide, format = c(`N (%)` = "count (percentage%)", "median [min; q25 - q75; max]", "mean (sd)", "median [q25 - q75]", N = "count"), keepNotFormatted = TRUE, decimals = c(default = 0), decimalMark = ".", bigMark = "," ) Arguments summarisedResult A SummarisedResult object. long List of variables and specification to long wide List of variables and specification to wide format formats and labels to use keepNotFormatted Whether to keep non-formatted estimate types decimals Decimals per estimate_type decimalMark decimal mark bigMark big mark Value A formatted summarisedResult gt object. Examples library(PatientProfiles) cdm <- mockPatientProfiles() cdm$cohort1 %>% summariseCharacteristics( ageGroup = list(c(0, 19), c(20, 39), c(40, 59), c(60, 79), c(80, 150)), minCellCount = 1 ) %>% gtResult( long = list( "Variable" = c(level = "variable", "clean"), "Level" = c(level = "variable_level"), "Format" = c(level = "format", "separator-right") ), wide = list( "CDM Name" = c(level = "cdm_name"), "Group" = c(level = c("group_name", "group_level")), "Strata" = c(level = c("strata_name", "strata_level")) ), format = c( "N (%)" = "count (percentage%)", "N" = "count", "median [Q25-Q75]" = "median [q25-q75]" ), decimals = c(count = 0), keepNotFormatted = FALSE ) mockPatientProfiles It creates a mock database for testing the PatientProfiles package Description It creates a mock database for testing the PatientProfiles package Usage mockPatientProfiles( connectionDetails = list(con = DBI::dbConnect(duckdb::duckdb(), ":memory:"), write_schema = "main", mock_prefix = NULL), drug_exposure = NULL, drug_strength = NULL, observation_period = NULL, condition_occurrence = NULL, visit_occurrence = NULL, concept_ancestor = NULL, person = NULL, cohort1 = NULL, cohort2 = NULL, drug_concept_id_size = 5, ancestor_concept_id_size = 5, condition_concept_id_size = 5, visit_concept_id_size = 5, visit_occurrence_id_size = 5, ingredient_concept_id_size = 1, drug_exposure_size = 10, patient_size = 1, min_drug_exposure_start_date = "2000-01-01", max_drug_exposure_start_date = "2020-01-01", earliest_date_of_birth = NULL, latest_date_of_birth = NULL, earliest_observation_start_date = NULL, latest_observation_start_date = NULL, min_days_to_observation_end = NULL, max_days_to_observation_end = NULL, earliest_condition_start_date = NULL, latest_condition_start_date = NULL, min_days_to_condition_end = NULL, max_days_to_condition_end = NULL, earliest_visit_start_date = NULL, latest_visit_start_date = NULL, min_days_to_visit_end = NULL, max_days_to_visit_end = NULL, seed = 1, ... 
) Arguments connectionDetails Connection details used to create the cdm mock object drug_exposure default NULL; the user can define their own table drug_strength default NULL; the user can define their own table observation_period default NULL; the user can define their own table condition_occurrence default NULL; the user can define their own table visit_occurrence default NULL; the user can define their own visit_occurrence table concept_ancestor the concept ancestor table person default NULL; the user can define their own table cohort1 cohort table for tests to run in getindication cohort2 cohort table for tests to run in getindication drug_concept_id_size number of unique drug concept ids ancestor_concept_id_size the size of the concept ancestor table condition_concept_id_size number of unique rows in the condition concept table visit_concept_id_size number of unique visit concept ids visit_occurrence_id_size number of unique visit occurrence ids ingredient_concept_id_size number of unique drug ingredient concept ids drug_exposure_size number of unique drug exposures patient_size number of unique patients min_drug_exposure_start_date user-defined minimum drug exposure start date max_drug_exposure_start_date user-defined maximum drug exposure start date earliest_date_of_birth the earliest date of birth of a patient in the person table, format "dd-mm-yyyy" latest_date_of_birth the latest date of birth of a patient in the person table, format "dd-mm-yyyy" earliest_observation_start_date the earliest observation start date for a patient, format "dd-mm-yyyy" latest_observation_start_date the latest observation start date for a patient, format "dd-mm-yyyy" min_days_to_observation_end the minimum number of days of the observation period (integer) max_days_to_observation_end the maximum number of days of the observation period (integer) earliest_condition_start_date the earliest condition start date for a patient, format "dd-mm-yyyy" latest_condition_start_date the latest condition start date for a patient, format "dd-mm-yyyy" min_days_to_condition_end the minimum number of days of the condition (integer) max_days_to_condition_end the maximum number of days of the condition (integer) earliest_visit_start_date the earliest visit start date for a patient, format "dd-mm-yyyy" latest_visit_start_date the latest visit start date for a patient, format "dd-mm-yyyy" min_days_to_visit_end the minimum number of days of the visit (integer) max_days_to_visit_end the maximum number of days of the visit (integer) seed random seed ... user-defined tibbles to put in the cdm; as many as the user wants can be supplied Value cdm of the mock database following the user’s specifications Examples library(PatientProfiles) cdm <- mockPatientProfiles() summariseCharacteristics Summarise characteristics of individuals Description Summarise characteristics of individuals Usage summariseCharacteristics( cohort, cdm = attr(cohort, "cdm_reference"), strata = list(), ageGroup = NULL, tableIntersect = list(), cohortIntersect = list(), minCellCount = 5 ) Arguments cohort A cohort in the cdm cdm A cdm reference. strata Stratification list ageGroup A list of age groups. tableIntersect A list of arguments that uses the addTableIntersect function to add covariates and comorbidities. cohortIntersect A list of arguments that uses the addCohortIntersect function to add covariates and comorbidities. 
minCellCount minimum count below which results are obscured Value A summary of the characteristics of the individuals Examples library(PatientProfiles) cdm <- mockPatientProfiles() summariseCharacteristics( cohort = cdm$cohort1, ageGroup = list(c(0, 19), c(20, 39), c(40, 59), c(60, 79), c(80, 150)), tableIntersect = list( "Visits" = list( tableName = "visit_occurrence", value = "count", window = c(-365, 0) ) ), cohortIntersect = list( "Medications" = list( targetCohortTable = "cohort2", value = "flag", window = c(-365, 0) ) ) ) summariseLargeScaleCharacteristics This function is used to summarise the large scale characteristics of a cohort table Description This function is used to summarise the large scale characteristics of a cohort table Usage summariseLargeScaleCharacteristics( cohort, strata = list(), window = list(c(-Inf, -366), c(-365, -31), c(-30, -1), c(0, 0), c(1, 30), c(31, 365), c(366, Inf)), eventInWindow = NULL, episodeInWindow = NULL, includeSource = FALSE, minCellCount = 5, minimumFrequency = 0.005, cdm = attr(cohort, "cdm_reference") ) Arguments cohort The cohort to characterise. strata Stratification list. window Temporal windows that we want to characterize. eventInWindow Tables to characterise the events in the window. episodeInWindow Tables to characterise the episodes in the window. includeSource Whether to include source concepts. minCellCount All counts lower than minCellCount will be obscured. minimumFrequency Minimum frequency of covariates to report. cdm A cdm reference. Value The output of this function is a ‘ResultSummary‘ containing the relevant information. summariseResult Summarise the characteristics of different individuals Description Summarise the characteristics of different individuals Usage summariseResult( table, group = list(), includeOverallGroup = FALSE, strata = list(), includeOverallStrata = TRUE, variables = list(numericVariables = detectVariables(table, "numeric"), dateVariables = detectVariables(table, "date"), binaryVariables = detectVariables(table, "binary"), categoricalVariables = detectVariables(table, "categorical")), functions = list(numericVariables = c("median", "min", "q25", "q75", "max"), dateVariables = c("median", "min", "q25", "q75", "max"), binaryVariables = c("count", "percentage"), categoricalVariables = c("count", "percentage")), minCellCount = 5 ) Arguments table Table with different records group List of groups to be considered. includeOverallGroup TRUE or FALSE. If TRUE, results for an overall group will be reported when a list of groups has been specified. strata List of the stratifications within each group to be considered. includeOverallStrata TRUE or FALSE. If TRUE, results for an overall strata will be reported when a list of strata has been specified. variables List of the different groups of variables; by default they are automatically classified. functions List of functions to be applied to each one of the groups of variables. minCellCount Minimum count of records to report results. Value Table that summarises the characteristics of the individuals. Examples library(PatientProfiles) library(dplyr) cdm <- mockPatientProfiles() x <- cdm$cohort1 %>% addDemographics(cdm) %>% collect() result <- summariseResult(x) suppressCounts Function to suppress counts in summarised objects Description Function to suppress counts in summarised objects Usage suppressCounts(result, minCellCount = 5) Arguments result SummarisedResult object minCellCount Minimum count of records to report results. 
Value Table with suppressed counts variableTypes Classify the variables between 5 types: "numeric", "categorical", "bi- nary", "date", or NA. Description Classify the variables between 5 types: "numeric", "categorical", "binary", "date", or NA. Usage variableTypes(table) Arguments table Tibble Value Tibble with the variables type and classification Examples library(PatientProfiles) x <- dplyr::tibble( person_id = c(1, 2), start_date = as.Date(c("2020-05-02", "2021-11-19")), asthma = c(0, 1) ) variableTypes(x)
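The entry for suppressCounts above has no example; the following is a minimal sketch, not taken from the package documentation, of how it could follow summariseResult() in a pipeline. It assumes that the table returned by summariseResult() is the kind of summarised object that suppressCounts() expects, and that minCellCount = 1 in summariseResult() effectively disables suppression at that step.
library(PatientProfiles)
library(dplyr)
cdm <- mockPatientProfiles()
# Summarise demographics without suppression, then obscure small counts afterwards
x <- cdm$cohort1 %>%
  addDemographics(cdm) %>%
  collect()
result <- summariseResult(x, minCellCount = 1)
suppressed <- suppressCounts(result, minCellCount = 5)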
typeops
hex
Clojure
[typeops](#typeops) === Alternative type outcomes for arithmetic in Clojure. [API](https://inqwell.github.io/typeops/index.html) [![Clojars Project](http://clojars.org/typeops/latest-version.svg)](http://clojars.org/typeops) `[typeops "0.1.2"]` * In Clojure, functions are agnostic about argument types, yet the host platform is not and likely neither is your database. * If you are writing a calculation engine then in many cases floating point types are unsuitable - inaccuracies will accumulate making results unpredictable. * Value type systems and how the types combine are a policy decision. While Clojure supports integer and floating point types, and mostly makes precise decimal usage transparent, the way these combine as operands in the arithmetic functions may not be to your liking. OK: ``` (/ 123.456M 3) => 41.152M ``` Not OK: ``` (/ 123.457M 3) ArithmeticException Non-terminating decimal expansion; no exact representable decimal result. java.math.BigDecimal.divide (BigDecimal.java:1690) ``` We can get round this using `with-precision` : ``` (with-precision 5 (/ 123.457M 3)) => 41.152M ``` but this taints the code, forcing us to be aware of the underlying types, and what precision do we choose if our interest is accuracy? If you are using `decimal` it doesn't make sense to allow these to combine with floating point: ``` (* 123.457M 3.142) => 387.90189399999997 ``` If you want to avoid `ratio` preferring integer arithmetic, again, you have to be explicit: ``` (/ 4 3) => 4/3 (quot 4 3) => 1 ``` Typeops does the following for `+` `-` `*` and `/` : * Integer arithmetic gives a (truncated) integer result * Intermediate results do not lose accuracy * decimals cannot combine with floating point ["assign"](#assign) --- If you are modelling domain types it is useful to "assign" fields according to their underlying type and accuracy, rather than say relying on your database to do this for you: ``` (def m {:Integer 0, :Decimal2 0.00M, :Short 0, :Decimal 0E-15M, :Float 0.0, :Long 1, :Byte 0, :Double 0.0, :String ""}) (assoc m :Decimal2 2.7182818M) => {:Integer 0, :Decimal2 2.72M, :Short 0, :Decimal 0E-15M, :Float 0.0, :Long 1, :Byte 0, :Double 0.0, :String ""} ``` ### [nil](#nil) If your domain model permits `NULL` values you can represent these as `nil` in Clojure. This destroys the type information however if a map has meta data: ``` (meta m) => {:proto {:Integer 0, :Decimal2 0.00M, ...}} ``` then "assigning" away from `nil` will use the corresponding field in the `:proto` map to align the type. ### [Non-numerics](#non-numerics) For non-numeric fields typeops will use any :proto to keep map values as their intended type. Attempting to "assign" something that is false for `instance?` results in an exception ### [Error Handling](#error-handling) Typeops uses dynamic vars to help with error handling and debugging. Bind `*debug*` to `true` to carry information about type-incompatible `assoc` operations out via the exception. ``` (binding [typeops.core/*debug* true] (assoc m :Integer "foo")) => ExceptionInfo Incompatible type for operation: class java.lang.String clojure.core/ex-info (core.clj:4617) *e => #error{:cause "Incompatible type for operation: class java.lang.String", :data {:map {:Integer 0, :Decimal2 0.00M, :Short 0, :Decimal 0E-15M, :Float 0.0, :Date #inst"2019-03-28T17:14:27.816-00:00", :Long 1, :Byte 0, :Double 0.0, :String ""}, :key :Integer, :val "foo", :cur 0}, :via [{:type clojure.lang.ExceptionInfo, :message "Incompatible type for operation: class java.lang.String" . . 
``` Bind `*warn-on-absent-key*` to a function of two arguments `(fn [m k] ...)` which will be called when `assoc` puts a key into a map that wasn't there before. ``` (binding [typeops.assign/*warn-on-absent-key* (fn [m k] (println k "Boo!"))] (assoc m :Absent "foo")) :Absent Boo! => {:Absent "foo", :Integer 0, :Decimal2 0.00M, :Short 0, :Decimal 0E-15M, :Float 0.0, :Date #inst"2019-03-28T17:14:27.816-00:00", :Long 1, :Byte 0, :Double 0.0, :String ""} ``` [Usage](#usage) --- ### [Per Namespace](#per-namespace) ``` (ns myns (:refer-clojure :exclude [+ - * / assoc merge]) (:require [typeops.core :refer :all]) (:require [typeops.assign :refer :all])) (+ 3.142M 2.7182818M) => 5.8602818M (- 3.142M 2.7182818M 3.142M) => -2.7182818M (* 3.142M 2.7182818M 3.142M) => 26.8353237278152M (/ 3.142M 2.7182818M 0.1234M) => 9.368M (assoc my-map k v ... ks vs) ; assoc preserves type and precision (assign my-map k v ... ks vs) ; same as above ``` ### [Globally](#globally) Call the function `init-global!` somewhere in your system start up to alter the vars `+` `-` `*` and `/` in `clojure.core`. [License](#license) --- Copyright © 2018-2019 Inqwell Ltd Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version. [Change Log](#change-log) === [[0.1.2] - 2019-03-28](#012---2019-03-28) --- ### [Changed](#changed) * Added/improved `*debug*` and `*warn-on-absent-key*` for error handling and debugging [[0.1.1] - 2017-06-10](#011---2017-06-10) --- ### [Changed](#changed-1) * Moved assign operations to a new name space `assign` * Added `assoc` symbol as an alias for `assign` * Added `merge` to accompany `assign` * Improve error handling for unknown types [0.1.0 - 2017-04-26](#010---2017-04-26) --- ### [Initial release](#initial-release)
typeops.assign === --- #### *warn-on-absent-key*clj May be bound to a function of two arguments. When 'assigning' to a map key that is absent the function is called passing the map and the key being applied. ``` May be bound to a function of two arguments. When 'assigning' to a map key that is absent the function is called passing the map and the key being applied. ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/assign.clj#L7)[raw docstring](#) --- #### assignclj ``` (assign map key val) ``` ``` (assign map key val & kvs) ``` 'assigns' val to the key within map. If the meta data is a map containing the key :proto the corresponding value will be used to align the type of val, with rounding or truncation as necessary. If there is no meta data any existing value is used to maintain the correct type/precision. ``` 'assigns' val to the key within map. If the meta data is a map containing the key :proto the corresponding value will be used to align the type of val, with rounding or truncation as necessary. If there is no meta data any existing value is used to maintain the correct type/precision. ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/assign.clj#L27)[raw docstring](#) --- #### assocclj An alias for assign ``` An alias for assign ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/assign.clj#L48)[raw docstring](#) --- #### mergeclj ``` (merge & maps) ``` Returns a map that consists of the rest of the maps conj-ed on to the first, using assign semantics. If a key occurs in more than one map, the mapping from the latter (left-to-right) will be the mapping in the result. ``` Returns a map that consists of the rest of the maps conj-ed on to the first, using assign semantics. If a key occurs in more than one map, the mapping from the latter (left-to-right) will be the mapping in the result. ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/assign.clj#L52)[raw docstring](#) typeops.core === --- #### *clj ``` (*) ``` ``` (* x) ``` ``` (* x y) ``` ``` (* x y & more) ``` Returns the product of nums. (multiply) returns 1. ``` Returns the product of nums. (multiply) returns 1. ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/core.clj#L1039)[raw docstring](#) --- #### *debug*clj When set to true (not just truthy) any exception thrown for assoc or merge operations that cause type violations will carry the data {:map map :key key :val val :cur cur} ``` When set to true (not just truthy) any exception thrown for assoc or merge operations that cause type violations will carry the data {:map map :key key :val val :cur cur} ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/core.clj#L19)[raw docstring](#) --- #### *rounding*clj The rounding mode applied when decimal accuracy is lost. Defaults to BigDecimal/ROUND_HALF_UP ``` The rounding mode applied when decimal accuracy is lost. Defaults to BigDecimal/ROUND_HALF_UP ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/core.clj#L14)[raw docstring](#) --- #### +clj ``` (+) ``` ``` (+ x) ``` ``` (+ x y) ``` ``` (+ x y & more) ``` Returns the sum of nums. (+) returns 0. ``` Returns the sum of nums. (+) returns 0. 
``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/core.clj#L1023)[raw docstring](#) --- #### -clj ``` (- x) ``` ``` (- x y) ``` ``` (- x y & more) ``` If no ys are supplied, returns the negation of x, else subtracts the ys from x and returns the result. ``` If no ys are supplied, returns the negation of x, else subtracts the ys from x and returns the result. ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/core.clj#L1031)[raw docstring](#) --- #### /clj ``` (/ x) ``` ``` (/ x y) ``` ``` (/ x y & more) ``` If no denominators are supplied, returns 1/numerator, else returns numerator divided by all of the denominators. ``` If no denominators are supplied, returns 1/numerator, else returns numerator divided by all of the denominators. ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/core.clj#L1047)[raw docstring](#) --- #### init-global!clj ``` (init-global!) ``` Alter the root bindings of vars +, -, * and / to use typeops arithmetic operations globally ``` Alter the root bindings of vars +, -, * and / to use typeops arithmetic operations globally ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/core.clj#L1071)[raw docstring](#) --- #### reset-global!clj ``` (reset-global!) ``` Reset the root bindings of vars +, -, * and / to use clojure.core arithmetic operations globally ``` Reset the root bindings of vars +, -, * and / to use clojure.core arithmetic operations globally ``` [source](https://github.com/inqwell/typeops/blob/681b00d7527e4182ea5566d62eb9094e999a332a/src/typeops/core.clj#L1077)[raw docstring](#)
nrba
cran
R
Package ‘nrba’ July 17, 2023 Title Methods for Conducting Nonresponse Bias Analysis (NRBA) Version 0.2.0 Description Facilitates nonresponse bias analysis (NRBA) for survey data. Such data may arise from a complex sampling design with features such as stratification, clustering, or unequal probabilities of selection. Multiple types of analyses may be conducted: comparisons of response rates across subgroups; comparisons of estimates before and after weighting adjustments; comparisons of sample-based estimates to external population totals; tests of systematic differences in covariate means between respondents and full samples; tests of independence between response status and covariates; and modeling of outcomes and response status as a function of covariates. Extensive documentation and references are provided for each type of analysis. Krenzke, <NAME>, and Mohadjer (2005) <http: //www.asasrms.org/Proceedings/y2005/files/JSM2005-000572.pdf> and <NAME> (2016) <https://www150.statcan.gc.ca/n1/en/pub/12-001-x/ 2016002/article/14677-eng.pdf?st=q7PyNsGR> provide an overview of the methods implemented in this package. License GPL (>= 3) Encoding UTF-8 LazyData true RoxygenNote 7.2.3 Imports broom, dplyr, magrittr, stats, survey (>= 4.1-1), svrep Suggests knitr, rmarkdown, stringr, testthat (>= 3.0.0), tibble Config/testthat/edition 3 Depends R (>= 4.1.0) VignetteBuilder knitr NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-0406-8470>), <NAME> [aut], <NAME> [aut] (Author of original SAS macro, WesNRBA), <NAME> [aut] (Author of original SAS macro, WesNRBA), <NAME> [aut] (Author of original SAS macro, WesNRBA), <NAME> [aut] (Author of original SAS macro, WesNRBA), <NAME> [aut] (Author of original SAS macro, WesNRBA), <NAME> [aut] (Author of original SAS macro, WesNRBA), <NAME> [aut] (Author of original SAS macro, WesNRBA), Westat [cph] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-07-17 06:50:02 UTC R topics documented: calculate_response_rate... 2 chisq_test_ind_respons... 6 chisq_test_vs_external_estimat... 8 get_cumulative_estimate... 10 involvement_survey_po... 12 involvement_survey_sr... 13 involvement_survey_str2... 14 predict_outcome_via_gl... 16 predict_response_status_via_gl... 18 rake_to_benchmark... 21 stepwise_model_selectio... 24 t_test_by_response_statu... 26 t_test_of_weight_adjustmen... 30 t_test_vs_external_estimat... 32 wt_class_adjus... 35 calculate_response_rates Calculate Response Rates Description Calculates response rates using one of the response rate formulas defined by AAPOR (American Association of Public Opinion Research). Usage calculate_response_rates( data, status, status_codes = c("ER", "EN", "IE", "UE"), weights, rr_formula = "RR3", elig_method = "CASRO-subgroup", e = NULL ) Arguments data A data frame containing the selected sample, one row per case. status A character string giving the name of the variable representing response/eligibility status. The status variable should have at most four categories, representing eligible respondents (ER), eligible nonrespondents (EN), known ineligible cases (IE), and cases whose eligibility is unknown (UE). status_codes A named vector, with four entries named ’ER’, ’EN’, ’IE’, and ’UE’. status_codes indicates how the values of the status variable are to be interpreted. 
weights (Optional) A character string giving the name of a variable representing weights in the data to use for calculating weighted response rates rr_formula A character vector including any of the following: ’RR1’, ’RR3’, and ’RR5’. These are the names of formulas defined by AAPOR. See the Formulas section below for formulas. elig_method If rr_formula='RR3', this specifies how to estimate an eligibility rate for cases with unknown eligibility. Must be one of the following: 'CASRO-overall' Estimates an eligibility rate using the overall sample. If response rates are calculated for subgroups, the single overall sample estimate will be used as the estimated eligibility rate for subgroups as well. 'CASRO-subgroup' Estimates eligibility rates separately for each subgroup. 'specified' With this option, a numeric value is supplied by the user to the parameter e. For elig_method='CASRO-overall' or elig_method='CASRO-subgroup', the eligibility rate is estimated as (ER)/(ER + NR + IE). e (Required if elig_method='specified'). A numeric value between 0 and 1 specifying the estimated eligibility rate for cases with unknown eligibility. A character string giving the name of a numeric variable may also be supplied; in that case, the eligibility rate must be constant for all cases in a subgroup. Value Output consists of a data frame giving weighted and unweighted response rates. The following columns may be included, depending on the arguments supplied: • RR1_Unweighted • RR1_Weighted • RR3_Unweighted • RR3_Weighted • RR5_Unweighted • RR5_Weighted • n: Total sample size • Nhat: Sum of weights for the total sample • n_ER: Number of eligible respondents • Nhat_ER: Sum of weights for eligible respondents • n_EN: Number of eligible nonrespondents • Nhat_EN: Sum of weights for eligible nonrespondents • n_IE: Number of ineligible cases • Nhat_IE: Sum of weights for ineligible cases • n_UE: Number of cases whose eligibility is unknown • Nhat_UE: Sum of weights for cases whose eligibility is unknown • e_unwtd: If RR3 is calculated, the eligibility rate estimate e used for the unweighted response rate. • e_wtd: If RR3 is calculated, the eligibility rate estimate e used for the weighted response rate. If the data frame is grouped (i.e. by using df %>% group_by(Region)), then the output contains one row per subgroup. Formulas Denote the sample totals as follows: • ER: Total number of eligible respondents • EN: Total number of eligible non-respondents • IE: Total number of ineligible cases • UE: Total number of cases whose eligibility is unknown For weighted response rates, these totals are calculated using weights. The response rate formulas are then as follows: RR1 = ER/(ER + EN + UE) RR1 essentially assumes that all cases with unknown eligibility are in fact eligible. RR3 = ER/(ER + EN + (e * UE)) RR3 uses an estimate, e, of the eligibility rate among cases with unknown eligibility. RR5 = ER/(ER + EN) RR5 essentially assumes that all cases with unknown eligibility are in fact ineligible. For RR3, an estimate, e, of the eligibility rate among cases with unknown eligibility must be used. AAPOR strongly recommends that the basis for the estimate should be explicitly stated and detailed. The CASRO methods, which might be appropriate for the design, use the formula e = 1 − (IE/(ER + EN + IE)). • For elig_method='CASRO-overall', an estimate is calculated for the overall sample and this single estimate is used when calculating response rates for subgroups. 
• For elig_method='CASRO-subgroup', estimates are calculated separately for each subgroup. Please consult AAPOR’s current Standard Definitions for in-depth explanations. References The American Association for Public Opinion Research. 2016. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 9th edition. AAPOR. Examples # Load example data data(involvement_survey_srs, package = "nrba") involvement_survey_srs[["RESPONSE_STATUS"]] <- sample(1:4, size = 5000, replace = TRUE) # Calculate overall response rates involvement_survey_srs %>% calculate_response_rates( status = "RESPONSE_STATUS", status_codes = c("ER" = 1, "EN" = 2, "IE" = 3, "UE" = 4), weights = "BASE_WEIGHT", rr_formula = "RR3", elig_method = "CASRO-overall" ) # Calculate response rates by subgroup library(dplyr) involvement_survey_srs %>% group_by(STUDENT_RACE, STUDENT_SEX) %>% calculate_response_rates( status = "RESPONSE_STATUS", status_codes = c("ER" = 1, "EN" = 2, "IE" = 3, "UE" = 4), weights = "BASE_WEIGHT", rr_formula = "RR3", elig_method = "CASRO-overall" ) # Compare alternative approaches for handling of cases with unknown eligibility involvement_survey_srs %>% group_by(STUDENT_RACE) %>% calculate_response_rates( status = "RESPONSE_STATUS", status_codes = c("ER" = 1, "EN" = 2, "IE" = 3, "UE" = 4), rr_formula = "RR3", elig_method = "CASRO-overall" ) involvement_survey_srs %>% group_by(STUDENT_RACE) %>% calculate_response_rates( status = "RESPONSE_STATUS", status_codes = c("ER" = 1, "EN" = 2, "IE" = 3, "UE" = 4), rr_formula = "RR3", elig_method = "CASRO-subgroup" ) involvement_survey_srs %>% group_by(STUDENT_RACE) %>% calculate_response_rates( status = "RESPONSE_STATUS", status_codes = c("ER" = 1, "EN" = 2, "IE" = 3, "UE" = 4), rr_formula = "RR3", elig_method = "specified", e = 0.5 ) involvement_survey_srs %>% transform(e_by_email = ifelse(PARENT_HAS_EMAIL == "Has Email", 0.75, 0.25)) %>% group_by(PARENT_HAS_EMAIL) %>% calculate_response_rates( status = "RESPONSE_STATUS", status_codes = c("ER" = 1, "EN" = 2, "IE" = 3, "UE" = 4), rr_formula = "RR3", elig_method = "specified", e = "e_by_email" ) chisq_test_ind_response Test the independence of survey response and auxiliary variables Description Tests whether response status among eligible sample cases is independent of categorical auxiliary variables, using a Chi-Square test with Rao-Scott’s second-order adjustment. If the data include cases known to be ineligible or who have unknown eligibility status, the data are subsetted to only include respondents and nonrespondents known to be eligible. Usage chisq_test_ind_response( survey_design, status, status_codes = c("ER", "EN", "UE", "IE"), aux_vars ) Arguments survey_design A survey design object created with the survey package. status A character string giving the name of the variable representing response/eligibility status. The status variable should have at most four categories, representing eligible respondents (ER), eligible nonrespondents (EN), known ineligible cases (IE), and cases whose eligibility is unknown (UE). status_codes A named vector, with four entries named ’ER’, ’EN’, ’IE’, and ’UE’. status_codes indicates how the values of the status variable are to be interpreted. aux_vars A list of names of auxiliary variables. Details Please see svychisq for details of how the Rao-Scott second-order adjusted test is conducted. Value A data frame containing the results of the Chi-Square test(s) of independence between response status and each auxiliary variable. 
If multiple auxiliary variables are specified, the output data contains one row per auxiliary variable. The columns of the output dataset include: • auxiliary_variable: The name of the auxiliary variable tested • statistic: The value of the test statistic • ndf: Numerator degrees of freedom for the reference distribution • ddf: Denominator degrees of freedom for the reference distribution • p_value: The p-value of the test of independence • test_method: Text giving the name of the statistical test • variance_method: Text describing the method of variance estimation References • Rao, JNK, <NAME> (1984) "On Chi-squared Tests For Multiway Contingency Tables with Proportions Estimated From Survey Data" Annals of Statistics 12:46-60. Examples # Create a survey design object ---- library(survey) data(involvement_survey_srs, package = "nrba") involvement_survey <- svydesign( weights = ~BASE_WEIGHT, id = ~UNIQUE_ID, data = involvement_survey_srs ) # Test whether response status varies by race or by sex ---- test_results <- chisq_test_ind_response( survey_design = involvement_survey, status = "RESPONSE_STATUS", status_codes = c( "ER" = "Respondent", "EN" = "Nonrespondent", "UE" = "Unknown", "IE" = "Ineligible" ), aux_vars = c("STUDENT_RACE", "STUDENT_SEX") ) print(test_results) chisq_test_vs_external_estimate Test of differences in survey percentages relative to external estimates Description Compare estimated percentages from the present survey to external estimates from a benchmark source. A Chi-Square test with Rao-Scott’s second-order adjustment is used to evaluate whether the survey’s estimates differ from the external estimates. Usage chisq_test_vs_external_estimate(survey_design, y_var, ext_ests, na.rm = TRUE) Arguments survey_design A survey design object created with the survey package. y_var Name of dependent categorical variable. ext_ests A numeric vector containing the external estimate of the percentages for each category. The vector must have names, each name corresponding to a given category. na.rm Whether to drop cases with missing values Details Please see svygofchisq for details of how the Rao-Scott second-order adjusted test is conducted. The test statistic, statistic, is obtained by calculating the Pearson Chi-squared statistic for the estimated table of population totals. The reference distribution is a Satterthwaite approximation. The p-value is obtained by comparing statistic/scale to a Chi-squared distribution with df degrees of freedom. Value A data frame containing the results of the Chi-Square test(s) of whether survey-based estimates systematically differ from external estimates. The columns of the output dataset include: • statistic: The value of the test statistic • df: Degrees of freedom for the reference Chi-Squared distribution • scale: Estimated scale parameter. • p_value: The p-value of the test of independence • test_method: Text giving the name of the statistical test • variance_method: Text describing the method of variance estimation References • Rao, JNK, Scott, AJ (1984) "On Chi-squared Tests For Multiway Contingency Tables with Proportions Estimated From Survey Data" Annals of Statistics 12:46-60. 
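As an illustration of the relationship described in the Details section above, the sketch below shows how the reported p_value follows from the reported statistic, scale, and df. The numbers are placeholders rather than package output; only pchisq() from base R is used, and this is not part of the package's own examples, which follow below.
statistic <- 12.3   # placeholder value for the reported test statistic
scale <- 1.8        # placeholder value for the estimated scale parameter
df <- 2.4           # placeholder degrees of freedom (need not be an integer)
p_value <- pchisq(statistic / scale, df = df, lower.tail = FALSE)
p_value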
Examples library(survey) # Create a survey design ---- data("involvement_survey_pop", package = "nrba") data("involvement_survey_str2s", package = "nrba") involvement_survey_sample <- svydesign( data = involvement_survey_str2s, weights = ~BASE_WEIGHT, strata = ~SCHOOL_DISTRICT, ids = ~ SCHOOL_ID + UNIQUE_ID, fpc = ~ N_SCHOOLS_IN_DISTRICT + N_STUDENTS_IN_SCHOOL ) # Subset to only include survey respondents ---- involvement_survey_respondents <- subset( involvement_survey_sample, RESPONSE_STATUS == "Respondent" ) # Test whether percentages of categorical variable differ from benchmark ---- parent_email_benchmark <- c( "Has Email" = 0.85, "No Email" = 0.15 ) chisq_test_vs_external_estimate( survey_design = involvement_survey_respondents, y_var = "PARENT_HAS_EMAIL", ext_ests = parent_email_benchmark ) get_cumulative_estimates Calculate cumulative estimates of a mean/proportion Description Calculates estimates of a mean/proportion which are cumulative with respect to a predictor variable, such as week of data collection or number of contact attempts. This can be useful for examining whether estimates are affected by decisions such as whether to extend the data collection period or make additional contact attempts. Usage get_cumulative_estimates( survey_design, y_var, y_var_type = NULL, predictor_variable ) Arguments survey_design A survey design object created with the survey package. y_var Name of a variable whose mean or proportion is to be estimated. y_var_type Either NULL, "categorical" or "numeric". For "categorical", proportions are estimated. For "numeric", means are estimated. For NULL (the default), then proportions are estimated if y_var is a factor or character variable. Otherwise, means are estimated. The data will be subset to remove any missing values in this variable. predictor_variable Name of a variable for which cumulative estimates of y_var will be calculated. This variable should either be numeric or have categories which when sorted by their label are arranged in ascending order. The data will be subset to remove any missing values of the predictor variable. Value A dataframe of cumulative estimates. The first column–whose name matches predictor_variable– gives describes the values of predictor_variable for which a given estimate was computed. The other columns of the result include the following: • "outcome": The name of the variable for which estimates are computed • "outcome_category": For a categorical variable, the category of that variable • "estimate": The estimated mean or proportion. • "std_error": The estimated standard error • "respondent_sample_size": The number of cases used to produce the estimate (excluding missing values) References See Maitland et al. (2017) for an example of a level-of-effort analysis based on this method. • <NAME> al. (2017). A Nonresponse Bias Analysis of the Health Information National Trends Survey (HINTS). Journal of Health Communication 22, 545-553. 
doi:10.1080/10810730.2017.1324539 Examples # Create an example survey design # with a variable representing number of contact attempts library(survey) data(involvement_survey_srs, package = "nrba") survey_design <- svydesign( weights = ~BASE_WEIGHT, id = ~UNIQUE_ID, fpc = ~N_STUDENTS, data = involvement_survey_srs ) # Cumulative estimates from respondents for average student age ---- get_cumulative_estimates( survey_design = survey_design |> subset(RESPONSE_STATUS == "Respondent"), y_var = "STUDENT_AGE", y_var_type = "numeric", predictor_variable = "CONTACT_ATTEMPTS" ) # Cumulative estimates from respondents for proportions of categorical variable ---- get_cumulative_estimates( survey_design = survey_design |> subset(RESPONSE_STATUS == "Respondent"), y_var = "WHETHER_PARENT_AGREES", y_var_type = "categorical", predictor_variable = "CONTACT_ATTEMPTS" ) involvement_survey_pop Parent involvement survey: population data Description An example dataset describing a population of 20,000 students with disabilities in 20 school dis- tricts. This population is the basis for selecting a sample of students for a parent involvement survey. Usage involvement_survey_pop Format A data frame with 20,000 rows and 9 variables Fields UNIQUE_ID A unique identifier for students SCHOOL_DISTRICT A unique identifier for school districts SCHOOL_ID A unique identifier for schools, nested within districts STUDENT_GRADE Student’s grade level: ’PK’, ’K’, 1-12 STUDENT_AGE Student’s age, measured in years STUDENT_DISABILITY_CODE Code for student’s disability category (e.g. ’VI’ for ’Visual Impairments’) STUDENT_DISABILITY_CATEGORY Student’s disability category (e.g. ’Visual Impairments’) STUDENT_SEX ’Female’ or ’Male’ STUDENT_RACE Seven-level code with descriptive label (e.g. ’AS7 (Asian)’) Examples involvement_survey_pop involvement_survey_srs Parent involvement survey: simple random sample Description An example dataset describing a simple random sample of 5,000 parents of students with disabili- ties, from a population of 20,000. The parent involvement survey measures a single key outcome: whether "parents perceive that schools facilitate parent involvement as a means of improving ser- vices and results for children with disabilities." The variable BASE_WEIGHT provides the base sampling weight. The variable N_STUDENTS_IN_SCHOOL can be used to provide a finite population correction for variance estimation. Usage involvement_survey_srs Format A data frame with 5,000 rows and 17 variables Fields UNIQUE_ID A unique identifier for students RESPONSE_STATUS Survey response/eligibility status: ’Respondent’, ’Nonrespondent’, ’In- eligble’, ’Unknown’ WHETHER_PARENT_AGREES Parent agreement (’AGREE’ or ’DISAGREE’) for whether they perceive that schools facilitate parent involvement SCHOOL_DISTRICT A unique identifier for school districts SCHOOL_ID A unique identifier for schools, nested within districts STUDENT_GRADE Student’s grade level: ’PK’, ’K’, 1-12 STUDENT_AGE Student’s age, measured in years STUDENT_DISABILITY_CODE Code for student’s disability category (e.g. ’VI’ for ’Visual Impairments’) STUDENT_DISABILITY_CATEGORY Student’s disability category (e.g. ’Visual Impairments’) STUDENT_SEX ’Female’ or ’Male’ STUDENT_RACE Seven-level code with descriptive label (e.g. 
’AS7 (Asian)’) PARENT_HAS_EMAIL Whether parent has an e-mail address (’Has Email’ vs ’No Email’) PARENT_HAS_EMAIL_BENCHMARK Population benchmark for category of PARENT_HAS_EMAIL PARENT_HAS_EMAIL_BENCHMARK Population benchmark for category of STUDENT_RACE BASE_WEIGHT Sampling weight to use for weighted estimates N_STUDENTS Total number of students in the population CONTACT_ATTEMPTS The number of contact attempts made for each case (ranges between 1 and 6) Examples involvement_survey_srs involvement_survey_str2s Parent involvement survey: stratified, two-stage sample Description An example dataset describing a stratified, multistage sample of 1,000 parents of students with disabilities, from a population of 20,000. The parent involvement survey measures a single key out- come: whether "parents perceive that schools facilitate parent involvement as a means of improving services and results for children with disabilities." The sample was selected by sampling 5 schools from each of 20 districts, and then sampling par- ents of 10 children in each sampled school. The variable BASE_WEIGHT provides the base sampling weight. The variable SCHOOL_DISTRICT was used for stratification, and the variables SCHOOL_ID and UNIQUE_ID uniquely identify the first and second stage sampling units (schools and parents). The variables N_SCHOOLS_IN_DISTRICT and N_STUDENTS_IN_SCHOOL can be used to provide finite population corrections. Usage involvement_survey_str2s Format A data frame with 5,000 rows and 18 variables Fields UNIQUE_ID A unique identifier for students RESPONSE_STATUS Survey response/eligibility status: ’Respondent’, ’Nonrespondent’, ’In- eligble’, ’Unknown’ WHETHER_PARENT_AGREES Parent agreement (’AGREE’ or ’DISAGREE’) for whether they perceive that schools facilitate parent involvement SCHOOL_DISTRICT A unique identifier for school districts SCHOOL_ID A unique identifier for schools, nested within districts STUDENT_GRADE Student’s grade level: ’PK’, ’K’, 1-12 STUDENT_AGE Student’s age, measured in years STUDENT_DISABILITY_CODE Code for student’s disability category (e.g. ’VI’ for ’Visual Impairments’) STUDENT_DISABILITY_CATEGORY Student’s disability category (e.g. ’Visual Impairments’) STUDENT_SEX ’Female’ or ’Male’ STUDENT_RACE Seven-level code with descriptive label (e.g. ’AS7 (Asian)’) PARENT_HAS_EMAIL Whether parent has an e-mail address (’Has Email’ vs ’No Email’) PARENT_HAS_EMAIL_BENCHMARK Population benchmark for category of PARENT_HAS_EMAIL STUDENT_RACE_BENCHMARK Population benchmark for category of STUDENT_RACE N_SCHOOLS_IN_DISTRICT Total number of schools in each district N_STUDENTS_IN_SCHOOL Total number of students in each school BASE_WEIGHT Sampling weight to use for weighted estimates CONTACT_ATTEMPTS The number of contact attempts made for each case (ranges between 1 and 6) Examples # Load the data involvement_survey_str2s # Prepare the data for analysis with the 'survey' package library(survey) involvement_survey <- svydesign( data = involvement_survey_str2s, weights = ~ BASE_WEIGHT, strata = ~ SCHOOL_DISTRICT, ids = ~ SCHOOL_ID + UNIQUE_ID, fpc = ~ N_SCHOOLS_IN_DISTRICT + N_STUDENTS_IN_SCHOOL ) predict_outcome_via_glm Fit a regression model to predict survey outcomes Description A regression model is fit to the sample data to predict outcomes measured by a survey. This model can be used to identify auxiliary variables that are predictive of survey outcomes and hence are potentially useful for nonresponse bias analysis or weighting adjustments. 
Only data from survey respondents will be used to fit the model, since survey outcomes are only measured among respondents. The function returns a summary of the model, including overall tests for each variable of whether that variable improves the model’s ability to predict response status in the population of interest (not just in the random sample at hand). Usage predict_outcome_via_glm( survey_design, outcome_variable, outcome_type = "continuous", outcome_to_predict = NULL, numeric_predictors = NULL, categorical_predictors = NULL, model_selection = "main-effects", selection_controls = list(alpha_enter = 0.5, alpha_remain = 0.5, max_iterations = 100L) ) Arguments survey_design A survey design object created with the survey package. outcome_variable Name of an outcome variable to use as the dependent variable in the model The value of this variable is expected to be NA (i.e. missing) for all cases other than eligible respondents. outcome_type Either "binary" or "continuous". For "binary", a logistic regression model is used. For "continuous", a generalized linear model is fit using using an identity link function. outcome_to_predict Only required if outcome_type="binary". Specify which category of outcome_variable is to be predicted. numeric_predictors A list of names of numeric auxiliary variables to use for predicting response status. categorical_predictors A list of names of categorical auxiliary variables to use for predicting response status. model_selection A character string specifying how to select a model. The default and recom- mended method is ’main-effects’, which simply includes main effects for each of the predictor variables. The method 'stepwise' can be used to perform stepwise selection of variables for the model. However, stepwise selection invalidates p-values, standard errors, and confidence intervals, which are generally calculated under the assumption that model specification is predetermined. selection_controls Only required if model-selection isn’t set to "main-effects". Otherwise, a list of parameters for model selection to pass on to stepwise_model_selection, with elements alpha_enter, alpha_remain, and max_iterations. Details See Lumley and Scott (2017) for details of how regression models are fit to survey data. For overall tests of variables, a Rao-Scott Likelihood Ratio Test is conducted (see section 4 of Lumley and Scott (2017) for statistical details) using the function regTermTest(method = "LRT", lrt.approximation = "saddlepoint") from the ’survey’ package. If the user specifies model_selection = "stepwise", a regression model is selected by adding and removing variables based on the p-value from a likelihood ratio rest. At each stage, a single variable is added to the model if the p-value of the likelihood ratio test from adding the variable is below alpha_enter and its p-value is less than that of all other variables not already in the model. Next, of the variables already in the model, the variable with the largest p-value is dropped if its p-value is greater than alpha_remain. This iterative process continues until a maximum number of iterations is reached or until either all variables have been added to the model or there are no unadded variables for which the likelihood ratio test has a p-value below alpha_enter. Value A data frame summarizing the fitted regression model. Each row in the data frame represents a coefficient in the model. The column variable describes the underlying variable for the coefficient. 
For categorical variables, the column variable_category indicates the particular category of that variable for which a coefficient is estimated. The columns estimated_coefficient, se_coefficient, conf_intrvl_lower, conf_intrvl_upper, and p_value_coefficient are summary statistics for the estimated coefficient. Note that p_value_coefficient is based on the Wald t-test for the coefficient. The column variable_level_p_value gives the p-value of the Rao-Scott Likelihood Ratio Test for including the variable in the model. This likelihood ratio test has its test statistic given by the column LRT_chisq_statistic, and the reference distribution for this test is a linear combination of p F-distributions with numerator degrees of freedom given by LRT_df_numerator and denominator degrees of freedom given by LRT_df_denominator, where p is the number of coefficients in the model corresponding to the variable being tested. References • <NAME>., & <NAME>. (2017). Fitting Regression Models to Survey Data. Statistical Science 32 (2) 265 - 278. https://doi.org/10.1214/16-STS605 Examples library(survey) # Create a survey design ---- data(involvement_survey_str2s, package = "nrba") survey_design <- svydesign( weights = ~BASE_WEIGHT, strata = ~SCHOOL_DISTRICT, id = ~ SCHOOL_ID + UNIQUE_ID, fpc = ~ N_SCHOOLS_IN_DISTRICT + N_STUDENTS_IN_SCHOOL, data = involvement_survey_str2s ) predict_outcome_via_glm( survey_design = survey_design, outcome_variable = "WHETHER_PARENT_AGREES", outcome_type = "binary", outcome_to_predict = "AGREE", model_selection = "main-effects", numeric_predictors = c("STUDENT_AGE"), categorical_predictors = c("STUDENT_DISABILITY_CATEGORY", "PARENT_HAS_EMAIL") ) predict_response_status_via_glm Fit a logistic regression model to predict response to the survey. Description A logistic regression model is fit to the sample data to predict whether an individual responds to the survey (i.e. is an eligible respondent) rather than a nonrespondent. Ineligible cases and cases with unknown eligibility status are not included in this model. The function returns a summary of the model, including overall tests for each variable of whether that variable improves the model’s ability to predict response status in the population of interest (not just in the random sample at hand). This model can be used to identify auxiliary variables associated with response status and compare multiple auxiliary variables in terms of their ability to predict response status. Usage predict_response_status_via_glm( survey_design, status, status_codes = c("ER", "EN", "IE", "UE"), numeric_predictors = NULL, categorical_predictors = NULL, model_selection = "main-effects", selection_controls = list(alpha_enter = 0.5, alpha_remain = 0.5, max_iterations = 100L) ) Arguments survey_design A survey design object created with the survey package. status A character string giving the name of the variable representing response/eligibility status. The status variable should have at most four categories, representing eligible respondents (ER), eligible nonrespondents (EN), known ineligible cases (IE), and cases whose eligibility is unknown (UE). status_codes A named vector, with two entries named ’ER’ and ’EN’ indicating which values of the status variable represent eligible respondents (ER) and eligible nonre- spondents (EN). numeric_predictors A list of names of numeric auxiliary variables to use for predicting response status. categorical_predictors A list of names of categorical auxiliary variables to use for predicting response status. 
model_selection A character string specifying how to select a model. The default and recom- mended method is ’main-effects’, which simply includes main effects for each of the predictor variables. The method 'stepwise' can be used to perform stepwise selection of variables for the model. However, stepwise selection invalidates p-values, standard errors, and confidence intervals, which are generally calculated under the assumption that model specification is predetermined. selection_controls Only required if model-selection isn’t set to "main-effects". Otherwise, a list of parameters for model selection to pass on to stepwise_model_selection, with elements alpha_enter, alpha_remain, and max_iterations. Details See Lumley and Scott (2017) for details of how regression models are fit to survey data. For overall tests of variables, a Rao-Scott Likelihood Ratio Test is conducted (see section 4 of Lumley and Scott (2017) for statistical details) using the function regTermTest(method = "LRT", lrt.approximation = "saddlepoint") from the ’survey’ package. If the user specifies model_selection = "stepwise", a regression model is selected by adding and removing variables based on the p-value from a likelihood ratio rest. At each stage, a single variable is added to the model if the p-value of the likelihood ratio test from adding the variable is below alpha_enter and its p-value is less than that of all other variables not already in the model. Next, of the variables already in the model, the variable with the largest p-value is dropped if its p-value is greater than alpha_remain. This iterative process continues until a maximum number of iterations is reached or until either all variables have been added to the model or there are no unadded variables for which the likelihood ratio test has a p-value below alpha_enter. Value A data frame summarizing the fitted logistic regression model. Each row in the data frame represents a coefficient in the model. The column variable describes the underlying variable for the coefficient. For categorical variables, the column variable_category indicates the particular category of that variable for which a coefficient is estimated. The columns estimated_coefficient, se_coefficient, conf_intrvl_lower, conf_intrvl_upper, and p_value_coefficient are summary statistics for the estimated coefficient. Note that p_value_coefficient is based on the Wald t-test for the coefficient. The column variable_level_p_value gives the p-value of the Rao-Scott Likelihood Ratio Test for including the variable in the model. This likelihood ratio test has its test statistic given by the column LRT_chisq_statistic, and the reference distribution for this test is a linear combination of p F-distributions with numerator degrees of freedom given by LRT_df_numerator and denominator degrees of freedom given by LRT_df_denominator, where p is the number of coefficients in the model corresponding to the variable being tested. References • <NAME>., & <NAME>. (2017). Fitting Regression Models to Survey Data. Statistical Science 32 (2) 265 - 278. 
https://doi.org/10.1214/16-STS605 Examples library(survey) # Create a survey design ---- data(involvement_survey_str2s, package = "nrba") survey_design <- survey_design <- svydesign( data = involvement_survey_str2s, weights = ~BASE_WEIGHT, strata = ~SCHOOL_DISTRICT, ids = ~ SCHOOL_ID + UNIQUE_ID, fpc = ~ N_SCHOOLS_IN_DISTRICT + N_STUDENTS_IN_SCHOOL ) predict_response_status_via_glm( survey_design = survey_design, status = "RESPONSE_STATUS", status_codes = c( "ER" = "Respondent", "EN" = "Nonrespondent", "IE" = "Ineligible", "UE" = "Unknown" ), model_selection = "main-effects", numeric_predictors = c("STUDENT_AGE"), categorical_predictors = c("PARENT_HAS_EMAIL", "STUDENT_GRADE") ) rake_to_benchmarks Re-weight data to match population benchmarks, using raking or post- stratification Description Adjusts weights in the data to ensure that estimated population totals for grouping variables match known population benchmarks. If there is only one grouping variable, simple post-stratification is used. If there are multiple grouping variables, raking (also known as iterative post-stratification) is used. Usage rake_to_benchmarks( survey_design, group_vars, group_benchmark_vars, max_iterations = 100, epsilon = 5e-06 ) Arguments survey_design A survey design object created with the survey package. group_vars Names of grouping variables in the data dividing the sample into groups for which benchmark data are available. These variables cannot have any missing values group_benchmark_vars Names of group benchmark variables in the data corresponding to group_vars. For each category of a grouping variable, the group benchmark variable gives the population benchmark (i.e. population size) for that category. max_iterations If there are multiple grouping variables, then raking is used rather than post- stratification. The parameter max_iterations controls the maximum number of iterations to use in raking. epsilon If raking is used, convergence for a given margin is declared if the maximum change in a re-weighted total is less than epsilon times the total sum of the original weights in the design. Details Raking adjusts the weight assigned to each sample member so that, after reweighting, the weighted sample percentages for population subgroups match their known population percentages. In a sense, raking causes the sample to more closely resemble the population in terms of variables for which population sizes are known. Raking can be useful to reduce nonresponse bias caused by having groups which are overrep- resented in the responding sample relative to their population size. If the population subgroups systematically differ in terms of outcome variables of interest, then raking can also be helpful in terms of reduce sampling variances. However, when population subgroups do not differ in terms of outcome variables of interest, then raking may increase sampling variances. There are two basic requirements for raking. • Basic Requirement 1 - Values of the grouping variable(s) must be known for all respondents. • Basic Requirement 2 - The population size of each group must be known (or precisely esti- mated). When there is effectively only one grouping variable (though this variable can be defined as a combination of other variables), raking amounts to simple post-stratification. For example, simple post-stratification would be used if the grouping variable is "Age x Sex x Race", and the population size of each combination of age, sex, and race is known. 
The method of "iterative poststratification" (also known as "iterative proportional fitting") is used when there are multiple grouping variables, and population sizes are known for each grouping variable but not for combinations of grouping variables. For example, iterative proportional fitting would be necessary if population sizes are known for age groups and for gender categories but not for combinations of age groups and gender categories. Value A survey design object with raked or post-stratified weights Examples # Load the survey data data(involvement_survey_srs, package = "nrba") # Calculate population benchmarks population_benchmarks <- list( "PARENT_HAS_EMAIL" = data.frame( PARENT_HAS_EMAIL = c("Has Email", "No Email"), PARENT_HAS_EMAIL_POP_BENCHMARK = c(17036, 2964) ), "STUDENT_RACE" = data.frame( STUDENT_RACE = c( "AM7 (American Indian or Alaska Native)", "AS7 (Asian)", "BL7 (Black or African American)", "HI7 (Hispanic or Latino Ethnicity)", "MU7 (Two or More Races)", "PI7 (Native Hawaiian or Other Pacific Islander)", "WH7 (White)" ), STUDENT_RACE_POP_BENCHMARK = c(206, 258, 3227, 1097, 595, 153, 14464) ) ) # Add the population benchmarks as variables in the data involvement_survey_srs <- merge( x = involvement_survey_srs, y = population_benchmarks$PARENT_HAS_EMAIL, by = "PARENT_HAS_EMAIL" ) involvement_survey_srs <- merge( x = involvement_survey_srs, y = population_benchmarks$STUDENT_RACE, by = "STUDENT_RACE" ) # Create a survey design object library(survey) survey_design <- svydesign( weights = ~BASE_WEIGHT, id = ~UNIQUE_ID, fpc = ~N_STUDENTS, data = involvement_survey_srs ) # Subset data to only include respondents survey_respondents <- subset( survey_design, RESPONSE_STATUS == "Respondent" ) # Rake to the benchmarks raked_survey_design <- rake_to_benchmarks( survey_design = survey_respondents, group_vars = c("PARENT_HAS_EMAIL", "STUDENT_RACE"), group_benchmark_vars = c( "PARENT_HAS_EMAIL_POP_BENCHMARK", "STUDENT_RACE_POP_BENCHMARK" ), ) # Inspect estimates from respondents, before and after raking svymean( x = ~PARENT_HAS_EMAIL, design = survey_respondents ) svymean( x = ~PARENT_HAS_EMAIL, design = raked_survey_design ) svymean( x = ~WHETHER_PARENT_AGREES, design = survey_respondents ) svymean( x = ~WHETHER_PARENT_AGREES, design = raked_survey_design ) stepwise_model_selection Select and fit a model using stepwise regression Description A regression model is selected by iteratively adding and removing variables based on the p-value from a likelihood ratio rest. At each stage, a single variable is added to the model if the p-value of the likelihood ratio test from adding the variable is below alpha_enter and its p-value is less than that of all other variables not already in the model. Next, of the variables already in the model, the variable with the largest p-value is dropped if its p-value is greater than alpha_remain. This itera- tive process continues until a maximum number of iterations is reached or until either all variables have been added to the model or there are no variables not yet in the model whose likelihood ratio test has a p-value below alpha_enter. Stepwise model selection generally invalidates inferential statistics such as p-values, standard er- rors, or confidence intervals and leads to overestimation of the size of coefficients for variables included in the selected model. This bias increases as the value of alpha_enter or alpha_remain decreases. 
The use of stepwise model selection should be limited only to reducing a large list of candidate variables for nonresponse adjustment. Usage stepwise_model_selection( survey_design, outcome_variable, predictor_variables, model_type = "binary-logistic", max_iterations = 100L, alpha_enter = 0.5, alpha_remain = 0.5 ) Arguments survey_design A survey design object created with the survey package. outcome_variable The name of an outcome variable to use as the dependent variable. predictor_variables A list of names of variables to consider as predictors for the model. model_type A character string describing the type of model to fit. 'binary-logistic' for a binary logistic regression, 'ordinal-logistic' for an ordinal logistic re- gression (cumulative proportional-odds), 'normal' for the typical model which assumes residuals follow a Normal distribution. max_iterations Maximum number of iterations to try adding new variables to the model. alpha_enter The maximum p-value allowed for a variable to be added to the model. Large values such as 0.5 or greater are recommended to reduce the bias of estimates from the selected model. alpha_remain The maximum p-value allowed for a variable to remain in the model. Large values such as 0.5 or greater are recommended to reduce the bias of estimates from the selected model. Details See Lumley and Scott (2017) for details of how regression models are fit to survey data. For overall tests of variables, a Rao-Scott Likelihood Ratio Test is conducted (see section 4 of Lumley and Scott (2017) for statistical details) using the function regTermTest(method = "LRT", lrt.approximation = "saddlepoint") from the ’survey’ package. See Sauerbrei et al. (2020) for a discussion of statistical issues with using stepwise model selection. Value An object of class svyglm representing a regression model fit using the ’survey’ package. References • <NAME>., & <NAME>. (2017). Fitting Regression Models to Survey Data. Statistical Science 32 (2) 265 - 278. https://doi.org/10.1214/16-STS605 • <NAME>., <NAME>., <NAME>. et al. (2020). State of the art in selection of variables and functional forms in multivariable analysis - outstanding issues. Diagnostic and Prognostic Research 4, 3. https://doi.org/10.1186/s41512-020-00074-3 Examples library(survey) # Load example data and prepare it for analysis data(involvement_survey_str2s, package = 'nrba') involvement_survey <- svydesign( data = involvement_survey_str2s, ids = ~ SCHOOL_ID + UNIQUE_ID, fpc = ~ N_SCHOOLS_IN_DISTRICT + N_STUDENTS_IN_SCHOOL, strata = ~ SCHOOL_DISTRICT, weights = ~ BASE_WEIGHT ) involvement_survey <- involvement_survey |> transform(WHETHER_PARENT_AGREES = factor(WHETHER_PARENT_AGREES)) # Fit a regression model using stepwise selection selected_model <- stepwise_model_selection( survey_design = involvement_survey, outcome_variable = "WHETHER_PARENT_AGREES", predictor_variables = c("STUDENT_RACE", "STUDENT_DISABILITY_CATEGORY"), model_type = "binary-logistic", max_iterations = 100, alpha_enter = 0.5, alpha_remain = 0.5 ) t_test_by_response_status t-test of differences in means/percentages between responding sample and full sample, or between responding sample and eligible sample Description The function t_test_resp_vs_full tests whether means of auxiliary variables differ between re- spondents and the full selected sample, where the full sample consists of all cases regardless of response status or eligibility status. 
The function t_test_resp_vs_elig tests whether means differ between the responding sample and the eligible sample, where the eligible sample consists of all cases known to be eligible, regard- less of response status. Usage t_test_resp_vs_full( survey_design, y_vars, na.rm = TRUE, status, status_codes = c("ER", "EN", "IE", "UE"), null_difference = 0, alternative = "unequal", degrees_of_freedom = survey::degf(survey_design) - 1 ) t_test_resp_vs_elig( survey_design, y_vars, na.rm = TRUE, status, status_codes = c("ER", "EN", "IE", "UE"), null_difference = 0, alternative = "unequal", degrees_of_freedom = survey::degf(survey_design) - 1 ) Arguments survey_design A survey design object created with the survey package. y_vars Names of dependent variables for tests. For categorical variables, percentages of each category are tested. na.rm Whether to drop cases with missing values for a given dependent variable. status The name of the variable representing response/eligibility status. The status variable should have at most four categories, representing eligible respondents (ER), eligible nonrespondents (EN), known ineligible cases (IE), and cases whose eligibility is unknown (UE). status_codes A named vector, with four entries named ’ER’, ’EN’, ’IE’, and ’UE’. status_codes indicates how the values of the status variable are to be inter- preted. null_difference The difference between the two means under the null hypothesis. Default is 0. alternative Can be one of the following: • 'unequal' (the default): two-sided test of whether difference in means is equal to null_difference • 'less': one-sided test of whether difference is less than null_difference • 'greater': one-sided test of whether difference is greater than null_difference degrees_of_freedom The degrees of freedom to use for the test’s reference distribution. Unless spec- ified otherwise, the default is the design degrees of freedom minus one, where the design degrees of freedom are estimated using the survey package’s degf method. Value A data frame describing the results of the t-tests, one row per dependent variable. Statistical Details The t-statistic used for the test has as its numerator the difference in means between the two sam- ples, minus the null_difference. The denominator for the t-statistic is the estimated standard error of the difference in means. Because the two means are based on overlapping groups and thus have correlated sampling errors, special care is taken to estimate the covariance of the two esti- mates. For designs which use sets of replicate weights for variance estimation, the two means and their difference are estimated using each set of replicate weights; the estimated differences from the sets of replicate weights are then used to estimate sampling error with a formula appropriate to the replication method (JKn, BRR, etc.). For designs which use linearization methods for variance esti- mation, the covariance between the two means is estimated using the method of linearization based on influence functions implemented in the survey package. See Osier (2009) for an overview of the method of linearization based on influence functions. Unless specified otherwise using the degrees_of_freedom parameter, the degrees of freedom for the test are set to the design degrees of freedom minus one. Design degrees of freedom are esti- mated using the survey package’s degf method. See Amaya and Presser (2017) and Van de Kerckhove et al. 
(2009) for examples of a nonresponse bias analysis which uses t-tests to compare responding samples to eligible samples. See Lohr and Riddles (2016) for the statistical details of this test. References • <NAME>., <NAME>. (2017). Nonresponse Bias for Univariate and Multivariate Estimates of Social Activities and Roles. Public Opinion Quarterly, Volume 81, Issue 1, 1 March 2017, Pages 1–36, https://doi.org/10.1093/poq/nfw037 • <NAME>., <NAME>. (2016). Tests for Evaluating Nonresponse Bias in Surveys. Survey Methodology 42(2): 195-218. https://www150.statcan.gc.ca/n1/pub/12-001-x/2016002/article/14677- eng.pdf • <NAME>. (2009). Variance estimation for complex indicators of poverty and inequality using linearization techniques. Survey Research Methods, 3(3), 167-195. https://doi.org/10.18148/srm/2009.v3i3.369 • <NAME>., <NAME>., and <NAME>. (2009). Adult Literacy and Lifeskills Survey (ALL) 2003: U.S. Nonresponse Bias Analysis (NCES 2009-063). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Wash- ington, DC. Examples library(survey) # Create a survey design ---- data(involvement_survey_srs, package = 'nrba') survey_design <- svydesign(weights = ~ BASE_WEIGHT, id = ~ UNIQUE_ID, fpc = ~ N_STUDENTS, data = involvement_survey_srs) # Compare respondents' mean to the full sample mean ---- t_test_resp_vs_full(survey_design = survey_design, y_vars = c("STUDENT_AGE", "WHETHER_PARENT_AGREES"), status = 'RESPONSE_STATUS', status_codes = c('ER' = "Respondent", 'EN' = "Nonrespondent", 'IE' = "Ineligible", 'UE' = "Unknown")) # Compare respondents' mean to the mean of all eligible cases ---- t_test_resp_vs_full(survey_design = survey_design, y_vars = c("STUDENT_AGE", "WHETHER_PARENT_AGREES"), status = 'RESPONSE_STATUS', status_codes = c('ER' = "Respondent", 'EN' = "Nonrespondent", 'IE' = "Ineligible", 'UE' = "Unknown")) # One-sided tests ---- ## Null Hypothesis: Y_bar_resp - Y_bar_full <= 0.1 ## Alt. Hypothesis: Y_bar_resp - Y_bar_full > 0.1 t_test_resp_vs_full(survey_design = survey_design, y_vars = c("STUDENT_AGE", "WHETHER_PARENT_AGREES"), status = 'RESPONSE_STATUS', status_codes = c('ER' = "Respondent", 'EN' = "Nonrespondent", 'IE' = "Ineligible", 'UE' = "Unknown"), null_difference = 0.1, alternative = 'greater') ## Null Hypothesis: Y_bar_resp - Y_bar_full >= 0.1 ## Alt. Hypothesis: Y_bar_resp - Y_bar_full < 0.1 t_test_resp_vs_full(survey_design = survey_design, y_vars = c("STUDENT_AGE", "WHETHER_PARENT_AGREES"), status = 'RESPONSE_STATUS', status_codes = c('ER' = "Respondent", 'EN' = "Nonrespondent", 'IE' = "Ineligible", 'UE' = "Unknown"), null_difference = 0.1, alternative = 'less') t_test_of_weight_adjustment t-test of differences in estimated means/percentages from two different sets of replicate weights. Description Tests whether estimates of means/percentages differ systematically between two sets of replicate weights: an original set of weights, and the weights after adjustment (e.g. post-stratification or nonresponse adjustments) and possibly subsetting (e.g. subsetting to only include respondents). Usage t_test_of_weight_adjustment( orig_design, updated_design, y_vars, na.rm = TRUE, null_difference = 0, alternative = "unequal", degrees_of_freedom = NULL ) Arguments orig_design A replicate design object created with the survey package. updated_design A potentially updated version of orig_design, for example where weights have been adjusted for nonresponse or updated using post-stratification. 
The type and number of sets of replicate weights must match that of orig_design. The number of rows may differ (e.g. if orig_design includes the full sample but updated_design only includes respondents). y_vars Names of dependent variables for tests. For categorical variables, percentages of each category are tested. na.rm Whether to drop cases with missing values for a given dependent variable. null_difference The difference between the two means/percentages under the null hypothesis. Default is 0. alternative Can be one of the following: • 'unequal' (the default): two-sided test of whether difference in means is equal to null_difference • 'less': one-sided test of whether difference is less than null_difference • 'greater': one-sided test of whether difference is greater than null_difference degrees_of_freedom The degrees of freedom to use for the test’s reference distribution. Unless spec- ified otherwise, the default is the design degrees of freedom minus one, where the design degrees of freedom are estimated using the survey package’s degf method applied to the ’stacked’ design formed by combining orig_design and updated_design. Value A data frame describing the results of the t-tests, one row per dependent variable. Statistical Details The t-statistic used for the test has as its numerator the difference in means/percentages between the two samples, minus the null_difference. The denominator for the t-statistic is the estimated standard error of the difference in means. Because the two means are based on overlapping groups and thus have correlated sampling errors, special care is taken to estimate the covariance of the two estimates. For designs which use sets of replicate weights for variance estimation, the two means and their difference are estimated using each set of replicate weights; the estimated differences from the sets of replicate weights are then used to estimate sampling error with a formula appropriate to the replication method (JKn, BRR, etc.). This analysis is not implemented for designs which use linearization methods for variance esti- mation. Unless specified otherwise using the degrees_of_freedom parameter, the degrees of freedom for the test are set to the design degrees of freedom minus one. Design degrees of freedom are esti- mated using the survey package’s degf method. See Van de Kerckhove et al. (2009) for an example of this type of nonresponse bias analysis (among others). See Lohr and Riddles (2016) for the statistical details of this test. References • <NAME>., <NAME>. (2016). Tests for Evaluating Nonresponse Bias in Surveys. Survey Methodology 42(2): 195-218. https://www150.statcan.gc.ca/n1/pub/12-001-x/2016002/article/14677- eng.pdf • <NAME>., <NAME>., and <NAME>. (2009). Adult Literacy and Lifeskills Survey (ALL) 2003: U.S. Nonresponse Bias Analysis (NCES 2009-063). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Wash- ington, DC. 
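To make the construction described in the Statistical Details concrete, the sketch below assembles the t-statistic and a two-sided p-value from an estimated difference, its standard error, and the degrees of freedom. All numeric values are hypothetical placeholders rather than output from the package.

# Hypothetical inputs: difference in estimates between the two sets of weights,
# its estimated standard error, and the design degrees of freedom minus one
estimated_difference <- 0.03
se_difference <- 0.012
null_difference <- 0
degrees_of_freedom <- 199

t_statistic <- (estimated_difference - null_difference) / se_difference
2 * pt(abs(t_statistic), df = degrees_of_freedom, lower.tail = FALSE) # two-sided p-value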
Examples library(survey) # Create a survey design ---- data(involvement_survey_srs, package = 'nrba') survey_design <- svydesign(weights = ~ BASE_WEIGHT, id = ~ UNIQUE_ID, fpc = ~ N_STUDENTS, data = involvement_survey_srs) # Create replicate weights for the design ---- rep_svy_design <- as.svrepdesign(survey_design, type = "subbootstrap", replicates = 500) # Subset to only respondents (always subset *after* creating replicate weights) rep_svy_respondents <- subset(rep_svy_design, RESPONSE_STATUS == "Respondent") # Apply raking adjustment ---- raked_rep_svy_respondents <- rake_to_benchmarks( survey_design = rep_svy_respondents, group_vars = c("PARENT_HAS_EMAIL", "STUDENT_RACE"), group_benchmark_vars = c("PARENT_HAS_EMAIL_BENCHMARK", "STUDENT_RACE_BENCHMARK"), ) # Compare estimates from respondents in original vs. adjusted design ---- t_test_of_weight_adjustment(orig_design = rep_svy_respondents, updated_design = raked_rep_svy_respondents, y_vars = c('STUDENT_AGE', 'STUDENT_SEX')) t_test_of_weight_adjustment(orig_design = rep_svy_respondents, updated_design = raked_rep_svy_respondents, y_vars = c('WHETHER_PARENT_AGREES')) # Compare estimates to true population values ---- data('involvement_survey_pop', package = 'nrba') mean(involvement_survey_pop$STUDENT_AGE) prop.table(table(involvement_survey_pop$STUDENT_SEX)) t_test_vs_external_estimate t-test of differences in means/percentages relative to external estimates Description Compare estimated means/percentages from the present survey to external estimates from a bench- mark source. A t-test is used to evaluate whether the survey’s estimates differ from the external estimates. Usage t_test_vs_external_estimate( survey_design, y_var, ext_ests, ext_std_errors = NULL, na.rm = TRUE, null_difference = 0, alternative = "unequal", degrees_of_freedom = survey::degf(survey_design) - 1 ) Arguments survey_design A survey design object created with the survey package. y_var Name of dependent variable. For categorical variables, percentages of each cat- egory are tested. ext_ests A numeric vector containing the external estimate of the mean for the dependent variable. If variable is a categorical variable, a named vector of means must be provided. ext_std_errors (Optional) The standard errors of the external estimates. This is useful if the external data are estimated with an appreciable level of uncertainty, for instance if the external data come from a survey with a small-to-moderate sample size. If supplied, the variance of the difference between the survey and external esti- mates is estimated by adding the variance of the external estimates to the esti- mated variance of the survey’s estimates. na.rm Whether to drop cases with missing values for y_var null_difference The hypothesized difference between the estimate and the external mean. De- fault is 0. alternative Can be one of the following: • 'unequal': two-sided test of whether difference in means is equal to null_difference • 'less': one-sided test of whether difference is less than null_difference • 'greater': one-sided test of whether difference is greater than null_difference degrees_of_freedom The degrees of freedom to use for the test’s reference distribution. Unless spec- ified otherwise, the default is the design degrees of freedom minus one, where the design degrees of freedom are estimated using the survey package’s degf method. Value A data frame describing the results of the t-tests, one row per mean being compared. 
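When ext_std_errors is supplied, the description above states that the variance of the external estimates is added to the estimated variance of the survey's estimates when forming the standard error of the difference. A minimal sketch of that combination, using hypothetical standard errors:

se_survey_estimate <- 0.020    # estimated SE of the survey-based percentage
se_external_estimate <- 0.010  # SE supplied via ext_std_errors
sqrt(se_survey_estimate^2 + se_external_estimate^2) # SE of the difference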
References See Brick and Bose (2001) for an example of this analysis method and a discussion of its limitations. • <NAME>., and <NAME>. (2001). Analysis of Potential Nonresponse Bias. in Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association. http://www.asasrms.org/Proceedings/y2001/Proceed/00021.pdf Examples library(survey) # Create a survey design ---- data("involvement_survey_str2s", package = 'nrba') involvement_survey_sample <- svydesign( data = involvement_survey_str2s, weights = ~ BASE_WEIGHT, strata = ~ SCHOOL_DISTRICT, ids = ~ SCHOOL_ID + UNIQUE_ID, fpc = ~ N_SCHOOLS_IN_DISTRICT + N_STUDENTS_IN_SCHOOL ) # Subset to only include survey respondents ---- involvement_survey_respondents <- subset(involvement_survey_sample, RESPONSE_STATUS == "Respondent") # Test whether percentages of categorical variable differ from benchmark ---- parent_email_benchmark <- c( 'Has Email' = 0.85, 'No Email' = 0.15 ) t_test_vs_external_estimate( survey_design = involvement_survey_respondents, y_var = "PARENT_HAS_EMAIL", ext_ests = parent_email_benchmark ) # Test whether the sample mean differs from the population benchmark ---- average_age_benchmark <- 11 t_test_vs_external_estimate( survey_design = involvement_survey_respondents, y_var = "STUDENT_AGE", ext_ests = average_age_benchmark, null_difference = 0 ) wt_class_adjust Adjust weights in a replicate design for nonresponse or unknown eli- gibility status, using weighting classes Description Updates weights in a survey design object to adjust for nonresponse and/or unknown eligibility us- ing the method of weighting class adjustment. For unknown eligibility adjustments, the weight in each class is set to zero for cases with unknown eligibility, and the weight of all other cases in the class is increased so that the total weight is unchanged. For nonresponse adjustments, the weight in each class is set to zero for cases classified as eligible nonrespondents, and the weight of eligible respondent cases in the class is increased so that the total weight is unchanged. This function currently only works for survey designs with replicate weights, since the linearization- based estimators included in the survey package (or Stata or SAS for that matter) are unable to fully reflect the impact of nonresponse adjustment. Adjustments are made to both the full-sample weights and all of the sets of replicate weights. Usage wt_class_adjust( survey_design, status, status_codes, wt_class = NULL, type = c("UE", "NR") ) Arguments survey_design A replicate survey design object created with the survey package. status A character string giving the name of the variable representing response/eligibility status. The status variable should have at most four categories, representing eligible respondents (ER), eligible nonrespondents (EN), known ineligible cases (IE), and cases whose eligibility is unknown (UE). status_codes A named vector, with four entries named ’ER’, ’EN’, ’IE’, and ’UE’. status_codes indicates how the values of the status variable are to be inter- preted. wt_class (Optional) A character string giving the name of the variable which divides sam- ple cases into weighting classes. If wt_class=NULL (the default), adjustment is done using the entire sample. type A character vector including one or more of the following options: • 'UE': Adjust for unknown eligibility. • 'NR': Adjust for nonresponse. To sequentially adjust for unknown eligibility and then nonresponse, set type=c('UE', 'NR'). 
Details See the vignette "Nonresponse Adjustments" from the svrep package for a step-by-step walkthrough of nonresponse weighting adjustments in R: vignette(topic = "nonresponse-adjustments", package = "svrep") Value A replicate survey design object, with adjusted full-sample and replicate weights. References See Chapter 2 of Heeringa, West, and Berglund (2017) or Chapter 13 of Valliant, Dever, and Kreuter (2018) for an overview of nonresponse adjustment methods based on redistributing weights. • <NAME>., <NAME>., <NAME>. (2017). "Applied Survey Data Analysis, 2nd edition." Boca Raton, FL: CRC Press. • <NAME>., <NAME>., <NAME>. (2018). "Practical Tools for Designing and Weighting Survey Samples, 2nd edition." New York: Springer. See Also svrep::redistribute_weights(), vignette(topic = "nonresponse-adjustments", package = "svrep") Examples library(survey) # Load an example dataset data("involvement_survey_str2s", package = "nrba") # Create a survey design object involvement_survey_sample <- svydesign( data = involvement_survey_str2s, weights = ~BASE_WEIGHT, strata = ~SCHOOL_DISTRICT, ids = ~ SCHOOL_ID + UNIQUE_ID, fpc = ~ N_SCHOOLS_IN_DISTRICT + N_STUDENTS_IN_SCHOOL ) rep_design <- as.svrepdesign(involvement_survey_sample, type = "mrbbootstrap") # Adjust weights for nonresponse within weighting classes nr_adjusted_design <- wt_class_adjust( survey_design = rep_design, status = "RESPONSE_STATUS", status_codes = c( "ER" = "Respondent", "EN" = "Nonrespondent", "IE" = "Ineligible", "UE" = "Unknown" ), wt_class = "PARENT_HAS_EMAIL", type = "NR" )
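Continuing the example above, one way to see the effect of the adjustment is to confirm that the total sampling weight is preserved and then estimate from the adjusted design. This follow-up is a sketch and assumes the rep_design and nr_adjusted_design objects created in the example are still available.

# Redistribution within weighting classes should leave the total weight unchanged
sum(weights(rep_design, type = "sampling"))
sum(weights(nr_adjusted_design, type = "sampling"))

# Estimates from respondents using the nonresponse-adjusted weights
svymean(~WHETHER_PARENT_AGREES, design = nr_adjusted_design, na.rm = TRUE)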
Package ‘samplesizeCMH’ October 14, 2022 Title Power and Sample Size Calculation for the Cochran-Mantel-Haenszel Test Date 2017-12-13 Version 0.0.0 Copyright Spectrum Health, Grand Rapids, MI Description Calculates the power and sample size for Cochran-Mantel-Haenszel tests. There are also several helper functions for working with probability, odds, relative risk, and odds ratio values. Depends R (>= 3.4.0) Imports stats License GPL-2 | GPL-3 Encoding UTF-8 LazyData true Suggests knitr, rmarkdown, DescTools, datasets, testthat VignetteBuilder knitr RoxygenNote 6.0.1 URL https://github.com/pegeler/samplesizeCMH BugReports https://github.com/pegeler/samplesizeCMH/issues NeedsCompilation no Author <NAME> [aut, cre], Spectrum Health, Grand Rapids, MI [cph] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2017-12-21 15:15:57 UTC R topics documented: contraceptive... 2 odds.and.proportion... 3 odds.rati... 4 power.cmh.tes... 5 print.power.cm... 7 rel.ris... 8 samplesizeCM... 8 contraceptives Oral contraceptive use and breast cancer rates Description This data summarizes counts of a case-control study investigating the link between breast cancer rates and oral contraceptive use, stratified by age group. In toto, 10,890 subjects. See source for details. Usage data(contraceptives) Format A 3-dimensional table. 1. OC Usage: Subject exposure to oral contraceptives. 2. Disease Status: Breast cancer present (case) or absent (control). 3. Age Group: Age group of the subject. Source <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. (1984). "A Case-Control Study of Oral Contraceptive Use and Breast Cancer." Journal of the National Cancer Institute 72 (1): 39–42. Table 1. odds.and.proportions Interconvert odds and proportion values Description These functions will create either odds for a given probability, probability for a given odds, calculate the odds ratio between two probabilities, or calculate effect size (raise a probability by theta) Usage prop2odds(p) odds2prop(o) effect.size(p, theta) props2theta(p1, p2) rr2theta(rr, p1, p2) theta2rr(theta, p1, p2) Arguments p, p1, p2 Proportion vector. o Odds vector. theta Odds ratio vector. rr Relative risk vector (p1 / p2). Value A numeric vector. Author(s) <NAME>, M.S. Examples # Convert proportions of 0 through 1 to odds props <- seq(0,1,0.1) prop2odds(props) # Convert odds to proportions odds2prop(1:3) # Raise a proportion by an effect size theta effect.size(0.5, 2) # Find the odds ratio between to proportions props2theta(0.75, 0.5) odds.ratio Create an odds ratio estimate from a 2-by-2 table of frequencies or proportions Description Create an odds ratio estimate from a 2-by-2 table of frequencies or proportions Usage odds.ratio(x) Arguments x A two-dimensional matrix or table containing frequencies or proportions. Value A numeric vector. Author(s) <NAME>. 
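For orientation, the odds ratio of a 2-by-2 table is conventionally estimated by the cross-product ratio. The sketch below computes that quantity by hand for a toy table; it assumes odds.ratio() follows this standard definition, which can be checked against the Titanic example that follows.

# Cross-product ratio for a small 2-by-2 table of frequencies
tab <- matrix(c(10, 20, 30, 40), nrow = 2)
(tab[1, 1] * tab[2, 2]) / (tab[1, 2] * tab[2, 1])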
Examples # Load in Titanic data from datasets package data(Titanic, package = "datasets") # Get marginal table of survival by sex marginal_table <- margin.table(Titanic, c(2,4)) marginal_table # Compute odds ratio of marginal table odds.ratio(marginal_table) # Get partial tables of survival by sex, stratified by class partial_tables <- margin.table(Titanic, c(2,4,1)) partial_tables # Compute odds ratio of each partial table apply(partial_tables, 3, odds.ratio) power.cmh.test Power and sample size calculation for the Cochran-Mantel-Haenszel test Description Compute the post-hoc power or required number of subjects for the Cochran-Mantel-Haenszel test for association in J stratified 2 x 2 tables. Usage power.cmh.test(p1 = NULL, p2 = NULL, theta = NULL, N = NULL, sig.level = 0.05, power = 0.8, alternative = c("two.sided", "less", "greater"), s = 0.5, t = 1/J, correct = TRUE) Arguments p1 Vector of proportions of the J case groups. p2 Vector of proportions of the J control groups. theta Vector of odds ratios relating to the J 2 x 2 tables. N Total number of subjects. sig.level Significance level (Type I error probability). power Power of test (1 minus Type II error probability). alternative Two- or one-sided test. If one-sided, the direction of the association must be defined (less than 1 or greater than 1). Can be abbreviated. s Proportion (weight) of case versus control in J stratum. t Proportion (weight) of total number of cases of J stratum. correct Logical indicating whether to apply continuity correction. Details This sample size calculation is based on the derivations described in the Woolson et al. (1986). It is designed for case-control studies where one margin is fixed. The method is "based on the Cochran-Mantel-Haenszel statistic expressed as a weighted difference in binomial proportions." Continuity corrected sample size is described in Nam’s 1992 paper. This uses the weighted bino- mial sample size calculation described in Woolson et al. (1986) but is enhanced for use with the continuity corrected Cochran’s test. Power calculations are based on the writings of Wittes and Wallenstein (1987). They are function- ally equivalent to the derivations of the sample size calculation described by Woolson and others and Nam, but have slightly added precision. Terminology and symbolic conventions are borrowed from Woolson et al. (1986). The p1 group is dubbed the Case group and p2 group is called the Control group. Value An object of class "power.cmh": a list of the original arguments and the calculated sample size or power. Also included are vectors of n’s per each group, an indicator or whether continuity correction was used, the original function call, and N.effective. The vectors of n’s per each group, n1 and n2, are the fractional n’s required to achieve a final total N specified by the calculation while satisfying the constraints of s and t. However, the effective N, given the requirement of cell counts populated by whole numbers is provided by N.effective. By default, the print method is set to n.frac = FALSE, which will round each cell n up to the nearest whole number. Arguments To calculate power, the power parameter must be set to NULL. To calculate sample size, the N parameter must be set to NULL. The J number of groups will be inferred by the maximum length of p1, p2, or theta. Effect size must be specified using one of the following combinations of arguments. • Both case and control proportion vectors, ex., – p1 and p2 with theta = NULL. 
• One proportion vector and an effect size, ex., – p1 and theta with p2 = NULL, or – p2 and theta with p1 = NULL. Author(s) <NAME>. References <NAME>. (1973). "The determination of sample sizes for trials involving several 2 x 2 tables." Journal of Chronic Disease 26: 669-673. <NAME>. and <NAME>. (1984). "Power and sample size for a collection of 2 x 2 tables." Biometrics 40: 995-1004. <NAME>. (1992). "Sample size determination for case-control studies and the comparison of stratified and unstratified analyses." Biometrics 48: 389-395. <NAME>. and <NAME>. (1987). "The power of the Mantel-Haenszel test." Journal of the American Statistical Association 82: 1104-1109. <NAME>., <NAME>., and <NAME>. (1986). "Sample size for case-control studies using Cochran’s statistic." Biometrics 42: 927-932. See Also power.prop.test, mantelhaen.test, BreslowDayTest Examples # From "Sample size determination for case-control studies and the comparison # of stratified and unstratified analyses", (Nam 1992). See references. # Uncorrected sample size estimate first introduced # by Woolson and others in 1986 sample_size_uncorrected <- power.cmh.test( p2 = c(0.75,0.70,0.65,0.60), theta = 3, power = 0.9, t = c(0.10,0.40,0.35,0.15), alternative = "greater", correct = FALSE ) print(sample_size_uncorrected, detail = FALSE) # We see that the N is 171, the same as calculated by Nam sample_size_uncorrected$N # Continuity corrected sample size estimate added by Nam sample_size_corrected <- power.cmh.test( p2 = c(0.75,0.70,0.65,0.60), theta = 3, power = 0.9, t = c(0.10,0.40,0.35,0.15), alternative = "greater", correct = TRUE ) print(sample_size_corrected, n.frac = TRUE) # We see that the N is indeed equal to that which is reported in the paper sample_size_corrected$N print.power.cmh Print power.cmh object Description Print power.cmh object Usage ## S3 method for class 'power.cmh' print(x, detail = TRUE, n.frac = FALSE, ...) Arguments x A "power.cmh" object. detail Logical to toggle detailed or simple output. n.frac Logical indicating whether sample n’s should be rounded to the next whole num- ber. ... Ignored. rel.risk Calculate the relative risk from a 2-by-2 table Description Computes the relative risk of a specified column of a two-by-two table. Usage rel.risk(x, col.num = 1) Arguments x A table or matrix containing frequencies. col.num The column number upon which relative risk should be calculated. Value A numeric vector. Author(s) <NAME>, M.S. samplesizeCMH samplesizeCMH: Power and Sample Size Calculation for the Cochran- Mantel-Haenszel Test Description This package provides functions relating to power and sample size calculation for the CMH test. There are also several helper functions for interconverting probability, odds, relative risk, and odds ratio values. Details The Cochran-Mantel-Haenszel test (CMH) is an inferential test for the association between two binary variables, while controlling for a third confounding nominal variable. Two variables of inter- est, X and Y, are compared at each level of the confounder variable Z and the results are combined, creating a common odds ratio. Essentially, the CMH test examines the weighted association of X and Y. The CMH test is a common technique in the field of biostatistics, where it is often used for case-control studies. Sample Size Calculation Given a target power which the researcher would like to achieve, a calculation can be performed in order to estimate the appropriate number of subjects for a study. 
The power.cmh.test function calculates the required number of subjects per group to achieve a specified power for a Cochran- Mantel-Haenszel test. Power Calculation Researchers interested in estimating the probability of detecting a true positive result from an in- ferential test must perform a power calculation using a known sample size, effect size, significance level, et cetera. The power.cmh.test function can compute the power of a CMH test, given pa- rameters from the experiment.
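As a sketch of the power-calculation direction, the call below reuses the stratum proportions and weights from the power.cmh.test examples above, supplies a total sample size, and sets power = NULL so that power is the quantity solved for. The value N = 171 is taken from the uncorrected sample size example; the call is illustrative rather than a canonical result.

post_hoc_power <- power.cmh.test(
  p2 = c(0.75, 0.70, 0.65, 0.60),
  theta = 3,
  N = 171,
  power = NULL,
  t = c(0.10, 0.40, 0.35, 0.15),
  alternative = "greater",
  correct = FALSE
)
print(post_hoc_power, detail = FALSE)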
Package ‘lazybar’ October 13, 2022 Type Package Title Progress Bar with Remaining Time Forecast Method Version 0.1.0 Description A simple progress bar showing estimated remaining time. Multiple forecast methods and user defined forecast method for the remaining time are supported. License GPL-3 Encoding UTF-8 LazyData true URL https://pkg.yangzhuoranyang.com/lazybar/, https://github.com/FinYang/lazybar/ BugReports https://github.com/FinYang/lazybar/issues/ Imports R6 Suggests forecast RoxygenNote 7.1.0 Language en-AU NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-1232-8017>) Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2020-04-28 11:20:02 UTC R topics documented: lazyProgressBa... 2 lazyProgressBar Progress bar with customisable estimated remaining time Description Display a progress bar showing the estimated remaining time. The purpose of having various estimation methods is to provide a more accurate estimation when the run time between ticks is assumed to be different, e.g., online estimation, time series cross validation, expanding window approach, etc. Usage lazyProgressBar(n, method = "average", fn = NULL, ...) Arguments n Integer. Total number of ticks. method Character. The embedded forecasting method for the remaining time: average (the default), drift, naive, or snaive. Ignored if fn is not NULL. average (default) Average method. The run time between future ticks is assumed to be the average run time of the past ticks. This is the most common estimation method for remaining time. drift Drift method. The run time between future ticks is assumed to increase (or decrease), with the change per tick set to the average change in run time over the past ticks. This assumes the run time between ticks is linearly increasing or decreasing. naive Naive method. The run time between future ticks is assumed to be the run time between the last two ticks. snaive Seasonal naive method. If this method is chosen, an argument s needs to be supplied via .... The run time between future ticks is set to be the run time s ticks before. By default s is set to be 1/10 of the total number of ticks. fn Function. User defined function to estimate the remaining time. The function should predict the remaining time using the arguments and return a scalar. It should have at least three arguments in the order of dtime, i, and n, which represent the status of the progress bar at the current tick: dtime A numeric vector of the run times between past ticks. i The number of the current tick. n The number of total ticks. ... Other arguments to pass to the estimation method. The arguments need to be named. Details Four simple forecasting methods are available for estimating the remaining time: the Average method (the default), the Drift method, the Naive method, and the Seasonal naive method. For a summary of these simple methods, see Chapter 3 of the reference below. Users can also supply their own customised estimation method as a function; see Arguments and Examples. Value An R6 object with methods tick() and print(). Author(s) <NAME> References <NAME>., & <NAME>. (2018) Forecasting: principles and practice, 2nd edition, OTexts: Melbourne, Australia. OTexts.com/fpp2. Accessed on 24/04/2020.
Examples

pb <- lazyProgressBar(4)
pb$tick()
pb$tick()
pb$tick()
pb$tick()

# With linearly increasing run time
pb <- lazyProgressBar(4, method = "drift")
for(i in 1:4){
  Sys.sleep(i * 0.2)
  pb$tick()$print()
}

# With a user-defined forecast function.
# The forecast function itself will
# require certain computational power.
forecast_fn <- function(dtime, i, n, s = 10){
  # When the number of ticks is smaller than s,
  # estimate the future run time
  # as the average of the past.
  if(i < s){
    eta <- mean(dtime) * (n - i)
  }
  # When the number of ticks is at least s,
  # fit an ARIMA model every s ticks
  # using the forecast package.
  # `<<-` keeps the fitted model available across ticks,
  # since each call to forecast_fn runs in a fresh frame.
  if(i >= s){
    if(i %% s == 0){
      model <<- forecast::auto.arima(dtime)
    }
    runtime <- forecast::forecast(model, h = n - i)$mean
    if(i %% s != 0){
      runtime <- runtime[-seq_len(i %% s)]
    }
    eta <- sum(runtime)
  }
  return(eta)
}

pb <- lazyProgressBar(10, fn = forecast_fn, s = 3)
for(i in 1:10){
  Sys.sleep(i * 0.2)
  pb$tick()$print()
}
lettier_github_io_3d-game-shaders-for-beginners
free_programming_book
Markdown
Interested in adding textures, lighting, shadows, normal maps, glowing objects, ambient occlusion, reflections, refractions, and more to your 3D game? Great! Below is a collection of shading techniques that will take your game visuals to new heights. I've explained each technique in such a way that you can take what you learn here and apply/port it to whatever stack you use—be it Godot, Unity, Unreal, or something else. For the glue in between the shaders, I've chosen the fabulous Panda3D game engine and the OpenGL Shading Language (GLSL). So if that is your stack, then you'll also get the benefit of learning how to use these shading techniques with Panda3D and OpenGL specifically.

The included license applies only to the software portion of 3D Game Shaders For Beginners—specifically the `.cxx` , `.vert` , and `.frag` source code files. No other portion of 3D Game Shaders For Beginners has been licensed for use. (C) 2019 <NAME> <EMAIL>

The example code was developed and tested using the environment described below.

Each Blender material used to build `mill-scene.egg` has five textures in the following order. By having the same maps in the same positions for all models, the shaders can be generalized, reducing the need to duplicate code. If an object uses its vertex normals, a "flat blue" normal map is used. Here is an example of a flat normal map. The only color it contains is flat blue ``` (red = 128, green = 128, blue = 255) ``` . This color represents a unit (length one) normal pointing in the positive z-axis `(0, 0, 1)` .

``` (0, 0, 1) = ( round((0 * 0.5 + 0.5) * 255) , round((0 * 0.5 + 0.5) * 255) , round((1 * 0.5 + 0.5) * 255) ) = (128, 128, 255) = ( round(128 / 255 * 2 - 1) , round(128 / 255 * 2 - 1) , round(255 / 255 * 2 - 1) ) = (0, 0, 1) ```

Here you see the unit normal `(0, 0, 1)` converted to flat blue `(128, 128, 255)` and flat blue converted back to the unit normal. You'll learn more about this in the normal mapping technique.

Up above is one of the specular maps used. The red and blue channels work to control the amount of specular reflection seen based on the camera angle. The green channel controls the shininess factor. You'll learn more about this in the lighting and fresnel factor sections. The reflection and refraction textures mask off the objects that are either reflective, refractive, or both. For the reflection texture, the red channel controls the amount of reflection and the green channel controls how clear or blurry the reflection is.

The example code uses Panda3D as the glue between the shaders. This has no real influence over the techniques described, meaning you'll be able to take what you learn here and apply it to your stack or game engine of choice. Panda3D does provide some conveniences. I have pointed these out so you can either find an equivalent convenience provided by your stack or replicate it yourself, if your stack doesn't provide something equivalent.

Three Panda3D configurations were changed for the purposes of the demo program. You can find these in config.prc. The configurations changed were `gl-coordinate-system default` , `textures-power-2 down` , and `textures-auto-power-2 1` . Refer to the Panda3D configuration page in the manual for more details. Panda3D defaults to a z-up, right-handed coordinate system while OpenGL uses a y-up, right-handed system. `gl-coordinate-system default` keeps you from having to translate between the two inside your shaders. `textures-auto-power-2 1` allows us to use texture sizes that are not a power of two if the system supports it.
This comes in handy when doing SSAO and other screen/window-sized techniques since the screen/window size is usually not a power of two. `textures-power-2 down` downsizes our textures to a power of two if the system only supports texture sizes being a power of two.

Before you can try out the demo program, you'll have to build the example code first. Before you can compile the example code, you'll need to install Panda3D for your platform. Panda3D is available for Linux, Mac, and Windows. Start by installing the Panda3D SDK for your distribution. Make sure to locate where the Panda3D headers and libraries are. The headers and libraries are most likely in ``` /usr/include/panda3d/ ``` . Next, clone this repository and change into its demonstration directory.

```
git clone https://github.com/lettier/3d-game-shaders-for-beginners.git
cd 3d-game-shaders-for-beginners/demonstration
```

On Linux, compile the source code into an object file.

```
g++ \
  -c src/main.cxx \
  -o 3d-game-shaders-for-beginners.o \
  -std=gnu++11 \
  -O2 \
  -I/path/to/python/include/ \
  -I/path/to/panda3d/include/
```

With the object file created, create the executable by linking the object file to its dependencies.

On Mac, compile the source code into an object file using Clang. You'll have to find where the Python 2.7 and Panda3D include directories are.

```
clang++ \
  -c main.cxx \
  -o 3d-game-shaders-for-beginners.o \
  -std=gnu++11 \
  -g \
  -O2 \
  -I/path/to/python/include/ \
  -I/path/to/panda3d/include/
```

With the object file created, create the executable by linking the object file to its dependencies. You'll need to track down where the Panda3D libraries are located. For more help, see the Panda3D manual.

## Running The Demo

After you've built the example code, you can now run the executable or demo. Here's how you run it on Linux or Mac.

```
./3d-game-shaders-for-beginners
```

Here's how you run it on Windows.

```
3d-game-shaders-for-beginners.exe
```

### Demo Controls

The demo comes with both keyboard and mouse controls to move the camera around, toggle on and off the different effects, adjust the fog, and view the various different framebuffer textures.

#### Mouse

You can rotate the scene around by holding down the Left Mouse button and dragging. Hold down the Right Mouse button and drag to move up, down, left, and/or right. To zoom in, roll the Mouse Wheel forward. To zoom out, roll the Mouse Wheel backward. You can also change the focus point using the mouse. To change the focus point, click anywhere on the scene using the Middle Mouse button.

#### Keyboard

* w to rotate the scene down.
* a to rotate the scene clockwise.
* s to rotate the scene up.
* d to rotate the scene counterclockwise.
* z to zoom in to the scene.
* x to zoom out of the scene.
* ⬅ to move left.
* ➡ to move right.
* ⬆ to move up.
* ⬇ to move down.
* 1 to show midday.
* 2 to show midnight.
* Delete to toggle the sound.
* 3 to toggle fresnel.
* 4 to toggle rim lighting.
* 5 to toggle particles.
* 6 to toggle motion blur.
* 7 to toggle Kuwahara filtering.
* 8 to toggle cel shading.
* 9 to toggle lookup table processing.
* 0 to toggle between Phong and Blinn-Phong.
* y to toggle SSAO.
* u to toggle outlining.
* i to toggle bloom.
* o to toggle normal mapping.
* p to toggle fog.
* h to toggle depth of field.
* j to toggle posterization.
* k to toggle pixelization.
* l to toggle sharpen.
* n to toggle film grain.
* m to toggle screen space reflection.
* , to toggle screen space refraction.
* . to toggle flow mapping.
* / to toggle the sun animation.
* \ to toggle chromatic aberration.
* [ to decrease the fog near distance.
* Shift+[ to increase the fog near distance.
* ] to increase the fog far distance.
* Shift+] to decrease the fog far distance.
* Shift+- to decrease the amount of foam.
* - to increase the amount of foam.
* Shift+= to decrease the relative index of refraction.
* = to increase the relative index of refraction.
* Tab to move forward through the framebuffer textures.
* Shift+Tab to move backward through the framebuffer textures.

Before you write any shaders, you should be familiar with the following frames of reference or coordinate systems. All of them boil down to one question: which origin `(0, 0, 0)` are these coordinates currently relative to? Once you know that, you can then transform them, via some matrix, to some other vector space if need be. Typically, when the output of some shader looks wrong, it's because of a coordinate system mix-up.

The model or object coordinate system is relative to the origin of the model. This is typically set to the center of the model's geometry in a modeling program like Blender. The world space is relative to the origin of the scene/level/universe that you've created. The view or eye coordinate space is relative to the position of the active camera. The clip space is relative to the center of the camera's film. All coordinates are now homogeneous, ranging from negative one to one `(-1, 1)` . X and y are parallel with the camera's film and the z coordinate is the depth. Any vertex not within the bounds of the camera's frustum or view volume is clipped or discarded. You can see this happening with the cube towards the back, clipped by the camera's far plane, and the cube off to the side. The screen space is (typically) relative to the lower left corner of the screen. X goes from zero to the screen width. Y goes from zero to the screen height.

Instead of using the fixed-function pipeline, you'll be using the programmable GPU rendering pipeline. Since it is programmable, it is up to you to supply the programming in the form of shaders. A shader is a (typically small) program you write using a syntax reminiscent of C. The programmable GPU rendering pipeline has various stages that you can program with shaders. The different types of shaders include vertex, tessellation, geometry, fragment, and compute. You'll only need to focus on the vertex and fragment stages for the techniques below.

```
#version 150

uniform mat4 p3d_ModelViewProjectionMatrix;

in vec4 p3d_Vertex;

void main()
{
  gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
}
```

Here is a bare-bones, stripped down GLSL vertex shader consisting of the GLSL version number, a few global variables, and the `main` function. It transforms an incoming vertex to clip space and outputs this new position as the vertex's homogeneous position. The `main` procedure doesn't return anything since it is `void` and the `gl_Position` variable is a built-in output. Take note of the keywords `uniform` and `in` . The `uniform` keyword means this global variable is the same for all vertexes. Panda3D sets the ``` p3d_ModelViewProjectionMatrix ``` for you and it is the same matrix for each vertex. The `in` keyword means this global variable is being given to the shader. The vertex shader receives each vertex that makes up the geometry the vertex shader is attached to. Here is a stripped down GLSL fragment shader that outputs the fragment color as solid green.
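The green fragment shader listing itself did not survive extraction; below is a minimal sketch of such a shader (the version directive and the output name `fragColor` are assumptions, consistent with the discussion that follows):

```
#version 150

out vec4 fragColor;

void main()
{
  // Every fragment gets the same fully opaque green color.
  fragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
```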
Keep in mind that a fragment affects at most one screen pixel but a single pixel can be affected by many fragments. Take note of the `out` keyword. The `out` keyword means this global variable is being set by the shader. The name `fragColor` is arbitrary so feel free to choose a different one. This is the output of the two shaders shown above.

Instead of rendering/drawing/painting directly to the screen, the example code uses a technique called "render to texture". In order to render to a texture, you'll need to set up a framebuffer and bind a texture to it. Multiple textures can be bound to a single framebuffer. The textures bound to the framebuffer hold the vector(s) returned by the fragment shader. Typically these vectors are color vectors `(r, g, b, a)` but they could also be position or normal vectors `(x, y, z, w)` . For each bound texture, the fragment shader can output a different vector. For example, you could output a vertex's position and normal in a single pass. Most of the example code dealing with Panda3D involves setting up framebuffer textures. To keep things straightforward, nearly all of the fragment shaders in the example code have only one output. However, you'll want to output as much as you can each render pass to keep your frames per second (FPS) high.

There are two framebuffer texture setups found in the example code. The first setup renders the mill scene into a framebuffer texture using a variety of vertex and fragment shaders. This setup will go through each of the mill scene's vertexes and corresponding fragments. The second setup is an orthographic camera pointed at a screen-shaped rectangle. This setup will go through just the four vertexes and their corresponding fragments. I like to think of this second setup as using layers in GIMP, Krita, or Inkscape. In the example code, you can see the output of a particular framebuffer texture by using the Tab key or the Shift+Tab keys.

Texturing involves mapping some color or some other kind of vector to a fragment using UV coordinates. Both U and V range from zero to one. Each vertex gets a UV coordinate and this is outputted in the vertex shader. The fragment shader receives the UV coordinate interpolated. Interpolated, meaning the UV coordinate for the fragment is somewhere between the UV coordinates for the vertexes that make up the triangle face.

```
uniform mat4 p3d_ModelViewProjectionMatrix;

in vec4 p3d_Vertex;
in vec2 p3d_MultiTexCoord0;

out vec2 texCoord;

void main()
{
  texCoord    = p3d_MultiTexCoord0;
  gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
}
```

Here you see the vertex shader outputting the texture coordinate to the fragment shader. Notice how it's a two dimensional vector. One dimension for U and one for V.

```
uniform sampler2D p3d_Texture0;

in vec2 texCoord;

out vec4 texColor;

void main()
{
  texColor = texture(p3d_Texture0, texCoord);
}
```

Here you see the fragment shader looking up the color at its UV coordinate and outputting that as the fragment color.

```
uniform sampler2D screenSizedTexture;

out vec4 texColor;

void main()
{
  vec2 texSize  = textureSize(screenSizedTexture, 0).xy;
  vec2 texCoord = gl_FragCoord.xy / texSize;
  texColor      = texture(screenSizedTexture, texCoord);
}
```

When performing render to texture, the mesh is a flat rectangle with the same aspect ratio as the screen.
Because of this, you can calculate the UV coordinates knowing only A) the width and height of the screen sized texture being UV mapped to the rectangle and B) the fragment's x and y coordinate. To map x to U, divide x by the width of the input texture. Similarly, to map y to V, divide y by the height of the input texture. You'll see this technique used in the example code. (C) 2019 <NAME>tierlet<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Completing the lighting involves calculating and combining the ambient, diffuse, specular, and emission light aspects. The example code uses either Phong or Blinn-Phong lighting. uniform struct p3d_LightSourceParameters { vec4 color ; vec4 ambient ; vec4 diffuse ; vec4 specular ; vec4 position ; vec3 spotDirection ; float spotExponent ; float spotCutoff ; float spotCosCutoff ; float constantAttenuation ; float linearAttenuation ; float quadraticAttenuation ; vec3 attenuation ; sampler2DShadow shadowMap ; mat4 shadowViewMatrix ; } p3d_LightSource[NUMBER_OF_LIGHTS]; For every light, minus the ambient light, Panda3D gives you this convenient struct which is available to both the vertex and fragment shaders. The biggest convenience being the shadow map and shadow view matrix for transforming vertexes to shadow or light space. vertexPosition = p3d_ModelViewMatrix * p3d_Vertex; for (int i = 0; i < p3d_LightSource.length(); ++i) { vertexInShadowSpaces[i] = p3d_LightSource[i].shadowViewMatrix * vertexPosition; } Starting in the vertex shader, you'll need to transform and output the vertex from view space to shadow or light space for each light in your scene. You'll need this later in the fragment shader in order to render the shadows. Shadow or light space is where every coordinate is relative to the light position (the light is the origin). The fragment shader is where most of the lighting calculations take place. uniform struct { vec4 ambient ; vec4 diffuse ; vec4 emission ; vec3 specular ; float shininess ; } p3d_Material; Panda3D gives us the material (in the form of a struct) for the mesh or model you are currently rendering. vec4 diffuse = vec4(0.0, 0.0, 0.0, diffuseTex.a); vec4 specular = vec4(0.0, 0.0, 0.0, diffuseTex.a); Before you loop through the scene's lights, create an accumulator for both the diffuse and specular colors. Now you can loop through the lights, calculating the diffuse and specular colors for each one. Here you see the four major vectors you'll need to calculate the diffuse and specular colors contributed by each light. The light direction vector is the light blue arrow pointing to the light. The normal vector is the green arrow standing straight up. The reflection vector is the dark blue arrow mirroring the light direction vector. The eye or view vector is the orange arrow pointing towards the camera. vec3 lightDirection = p3d_LightSource[i].position.xyz - vertexPosition.xyz * p3d_LightSource[i].position.w; The light direction is from the vertex's position to the light's position. Panda3D sets ``` p3d_LightSource[i].position.w ``` to zero if this is a directional light. Directional lights do not have a position as they only have a direction. So if this is a directional light, the light direction will be the negative or opposite direction of the light as Panda3D sets ``` p3d_LightSource[i].position.xyz ``` to be `-direction` for directional lights. You'll need the vertex normal to be a unit vector. Unit vectors have a length of magnitude of one. 
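The single line that performs this normalization appears to have been dropped from the listing; a minimal sketch, assuming the interpolated vertex normal is named `vertexNormal` as it is elsewhere in this guide:

```
// Re-normalize the interpolated vertex normal so the dot products below behave.
vec3 normal = normalize(vertexNormal);
```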
vec3 unitLightDirection = normalize(lightDirection); vec3 eyeDirection = normalize(-vertexPosition.xyz); vec3 reflectedDirection = normalize(-reflect(unitLightDirection, normal)); Next you'll need three more vectors. You'll need to take the dot product involving the light direction so its best to normalize it. This gives it a distance or magnitude of one (unit vector). The eye direction is the opposite of the vertex/fragment position since the vertex/fragment position is relative to the camera's position. Remember that the vertex/fragment position is in view space. So instead of going from the camera (eye) to the vertex/fragment, you go from the vertex/fragment to the eye (camera). The reflection vector is a reflection of the light direction at the surface normal. As the light "ray" hits the surface, it bounces off at the same angle it came in at. The angle between the light direction vector and the normal is known as the "angle of incidence". The angle between the reflection vector and the normal is known as the "angle of reflection". You'll have to negate the reflected light vector as it needs to point in the same direction as the eye vector. Remember the eye direction is from the vertex/fragment to the camera position. You'll use the reflection vector to calculate the intensity of the specular highlight. float diffuseIntensity = dot(normal, unitLightDirection); if (diffuseIntensity < 0.0) { continue; } The diffuse intensity is the dot product between the surface normal and the unit vector light direction. The dot product can range from negative one to one. If both vectors point in the same direction, the intensity is one. Any other case will be less than one. As the light vector approaches the same direction as the normal, the diffuse intensity approaches one. If the diffuse intensity is zero or less, move on to the next light. vec4 diffuseTemp = vec4 ( clamp ( diffuseTex.rgb * p3d_LightSource[i].diffuse.rgb * diffuseIntensity , 0 , 1 ) , diffuseTex.a ); diffuseTemp = clamp(diffuseTemp, vec4(0), diffuseTex); You can now calculate the diffuse color contributed by this light. If the diffuse intensity is one, the diffuse color will be a mix between the diffuse texture color and the lights color. Any other intensity will cause the diffuse color to be darker. Notice how I clamp the diffuse color to be only as bright as the diffuse texture color is. This will protect the scene from being over exposed. When creating your diffuse textures, make sure to create them as if they were fully lit. After diffuse, comes specular. float specularIntensity = max(dot(reflectedDirection, eyeDirection), 0); vec4 specularTemp = clamp ( vec4(p3d_Material.specular, 1) * p3d_LightSource[i].specular * pow ( specularIntensity , p3d_Material.shininess ) , 0 , 1 ); The specular intensity is the dot product between the eye vector and the reflection vector. As with the diffuse intensity, if the two vectors point in the same direction, the specular intensity is one. Any other intensity will diminish the amount of specular color contributed by this light. The material shininess determines how spread out the specular highlight is. This is typically set in a modeling program like Blender. In Blender it's known as the specular hardness. float unitLightDirectionDelta = dot ( normalize(p3d_LightSource[i].spotDirection) , -unitLightDirection ); if (unitLightDirectionDelta < p3d_LightSource[i].spotCosCutoff) { continue; } // ... 
} ``` This snippet keeps fragments outside of a spotlight's cone or frustum from being affected by the light. Fortunately, Panda3D sets up `spotDirection` and `spotCosCutoff` to also work for directional lights and points lights. Spotlights have both a position and direction. However, directional lights only have a direction and point lights only have a position. Still, this code works for all three lights avoiding the need for noisy if statements. You must negate `unitLightDirection` . `unitLightDirection` goes from the fragment to the spotlight and you need it to go from the spotlight to the fragment since the `spotDirection` goes directly down the center of the spotlight's frustum some distance away from the spotlight's position. For a spotlight, if the dot product between the fragment-to-light vector and the spotlight's direction vector is less than the cosine of half the spotlight's field of view angle, the shader disregards this light's influence. For directional lights and point lights, Panda3D sets `spotCosCutoff` to negative one. Recall that the dot product ranges from negative one to one. So it doesn't matter what the is because it will always be greater than or equal to negative one. Like the snippet, this snippet also works for all three light types. For spotlights, this will make the fragments brighter as you move closer to the center of the spotlight's frustum. For directional lights and point lights, `spotExponent` is zero. Recall that anything to the power of zero is one so the diffuse color is one times itself meaning it is unchanged. float shadow = textureProj ( p3d_LightSource[i].shadowMap , vertexInShadowSpaces[i] ); diffuseTemp.rgb *= shadow; specularTemp.rgb *= shadow; Panda3D makes applying shadows relatively easy by providing the shadow map and shadow transformation matrix for every scene light. To create the shadow transformation matrix yourself, you'll need to assemble a matrix that transforms view space coordinates to light space (coordinates are relative to the light's position). To create the shadow map yourself, you'll need to render the scene from the perspective of the light to a framebuffer texture. The framebuffer texture must hold the distances from the light to the fragments. This is known as a "depth map". Lastly, you'll need to manually give to your shader your DIY depth map as a ``` uniform sampler2DShadow ``` and your DIY shadow transformation matrix as a `uniform mat4` . At this point, you've recreated what Panda3D does for you automatically. The shadow snippet shown uses `textureProj` which is different from the `texure` function shown earlier. `textureProj` first divides ``` vertexInShadowSpaces[i].xyz ``` by ``` vertexInShadowSpaces[i].w ``` . After this, it uses to locate the depth stored in the shadow map. Next it uses ``` vertexInShadowSpaces[i].z ``` to compare this vertex's depth against the shadow map depth at . If the comparison passes, `textureProj` will return one. Otherwise, it will return zero. Zero meaning this vertex/fragment is in the shadow and one meaning this vertex/fragment is not in the shadow. `textureProj` can also return a value between zero and one depending on how the shadow map was set up. In this instance, `textureProj` performs multiple depth tests using neighboring depth values and returns a weighted average. This weighted average can give shadows a softer look. 
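To make the do-it-yourself route described above concrete, here is a minimal sketch of the manually supplied inputs and the same depth comparison; the `my*` names are placeholders for this sketch, not part of Panda3D's API:

```
// A hand-rolled stand-in for p3d_LightSource[i].shadowMap.
uniform sampler2DShadow myShadowMap;

// Interpolated from the vertex shader, where it was computed as
// myShadowViewMatrix * vertexPosition (myShadowViewMatrix being your own
// view-space-to-light-space matrix).
in vec4 vertexInShadowSpace;

// 1.0 = fully lit, 0.0 = fully shadowed (or a soft value in between).
float shadow = textureProj(myShadowMap, vertexInShadowSpace);
```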
float lightDistance = length(lightDirection); float attenuation = 1 / ( p3d_LightSource[i].constantAttenuation + p3d_LightSource[i].linearAttenuation * lightDistance + p3d_LightSource[i].quadraticAttenuation * (lightDistance * lightDistance) ); diffuseTemp.rgb *= attenuation; specularTemp.rgb *= attenuation; The light's distance is just the magnitude or length of the light direction vector. Notice it's not using the normalized light direction as that distance would be one. You'll need the light distance to calculate the attenuation. Attenuation meaning the light's influence diminishes as you get further away from it. You can set `constantAttenuation` , `linearAttenuation` , and `quadraticAttenuation` to whatever values you would like. A good starting point is ``` constantAttenuation = 1 ``` ``` linearAttenuation = 0 ``` ``` quadraticAttenuation = 1 ``` . With these settings, the attenuation is one at the light's position and approaches zero as you move further away. To calculate the final light color, add the diffuse and specular together. Be sure to add this to the accumulator as you loop through the scene's lights. uniform struct { vec4 ambient ; } p3d_LightModel; vec4 diffuseTex = texture(p3d_Texture1, diffuseCoord); vec4 ambient = p3d_Material.ambient * p3d_LightModel.ambient * diffuseTex; The ambient component to the lighting model is based on the material's ambient color, the ambient light's color, and the diffuse texture color. There should only ever be one ambient light. Because of this, the ambient color calculation only needs to occur once. Contrast this with the diffuse and specular color which must be accumulated for each spot/directional/point light. When you reach SSAO, you'll revisit the ambient color calculation. The final color is the sum of the ambient color, diffuse color, specular color, and the emission color. (C) 2019 <NAME>tier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Blinn-Phong is a slight adjustment of the Phong model you saw in the lighting section. It provides more plausible or realistic specular reflections. You'll notice that Blinn-Phong produces elliptical or elongated specular reflections versus the spherical specular reflections produced by the Phong model. In certain cases, Blinn-Phong can be more efficient to calculate than Phong. vec3 light = normal(lightPosition.xyz - vertexPosition.xyz); vec3 eye = normalize(-vertexPosition.xyz); vec3 halfway = normalize(light + eye); Instead of computing the reflection vector, compute the halfway or half angle vector. This vector is between the view/camera/eye and light direction vector. The specular intensity is now the dot product of the normal and halfway vector. In the Phong model, it is the dot product of the reflection and view vector. The half angle vector (magenta arrow) will point in the same direction as the normal (green arrow) when the view vector (orange arrow) points in the same direction as the reflection vector (magenta arrow). In this case, both the Blinn-Phong and Phong specular intensity will be one. In other cases, the specular intensity for Blinn-Phong will be greater than zero while the specular intensity for Phong will be zero. (C) 2020 David Lettier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ The fresnel factor alters the reflectiveness of a surface based on the camera or viewing angle. As a surface points away from the camera, its reflectiveness goes up. Similarly, as a surface points towards the camera, its reflectiveness goes down. In other words, as a surface becomes perpendicular with the camera, it becomes more mirror like. 
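The corresponding line of code did not survive above; as a sketch, the two models differ only in which dot product feeds the specular intensity (the names reuse the earlier snippets):

```
// Phong: how much the camera looks along the reflected light direction.
float phongIntensity      = max(dot(reflectedDirection, eyeDirection), 0.0);

// Blinn-Phong: how closely the halfway vector lines up with the surface normal.
float blinnPhongIntensity = max(dot(normal, halfway), 0.0);
```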
Utilizing this property, you can vary the opacity of reflections (such as specular and screen space reflections) and/or vary a surface's alpha values for a more plausible or realistic look. ``` vec4 specular = materialSpecularColor * lightSpecularColor * pow(max(dot(eye, reflection), 0.0), shininess); ``` In the lighting section, the specular component was a combination of the material's specular color, the light's specular color, and by how much the camera pointed into the light's reflection direction. Incorporating the fresnel factor, you'll now vary the material specular color based on the angle between the camera and the surface it's pointed at. The first vector you'll need is the eye/view/camera vector. Recall that the eye vector points from the vertex position to the camera's position. If the vertex position is in view or camera space, the eye vector is the vertex position pointed in the opposite direction. vec3 light = normal(lightPosition.xyz - vertexPosition.xyz); vec3 halfway = normalize(light + eye); The fresnel factor is calculated using two vectors. The simplest two vectors to use are the eye and normal vector. However, if you're using the halfway vector (from the Blinn-Phong section), you can instead calculate the fresnel factor using the halfway and eye vector. float fresnelFactor = dot(halfway, eye); // Or dot(normal, eye). fresnelFactor = max(fresnelFactor, 0.0); fresnelFactor = 1.0 - fresnelFactor; fresnelFactor = pow(fresnelFactor, fresnelPower); With the needed vectors in hand, you can now compute the fresnel factor. The fresnel factor ranges from zero to one. When the dot product is one, the fresnel factor is zero. When the dot product is less than or equal to zero, the fresnel factor is one. This equation comes from Schlick's approximation. In Schlick's approximation, the `fresnelPower` is five but you can alter this to your liking. The demo code varies it using the blue channel of the specular map with a maximum value of five. Once the fresnel factor is determined, use it to modulate the material's specular color. As the fresnel factor approaches one, the material becomes more like a mirror or fully reflective. vec4 specular = vec4(vec3(0.0), 1.0); specular.rgb = materialSpecularColor.rgb * lightSpecularColor.rgb * pow ( max(dot(normal, halfway), 0.0) // Or max(dot(reflection, eye), 0.0). , shininess ); As before, the specular component is a combination of the material's specular color, the light's specular color, and by how much the camera points into the direction of the light's reflection. However, using the fresnel factor, the material's specular color various depending on the orientation of the camera and the surface it's looking at. (C) 2020 David Lettierlet<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Taking inspiration from the fresnel factor, rim lighting targets the rim or silhouette of an object. When combined with cel shading and outlining, it can really complete that cartoon look. You can also use it to highlight objects in the game, making it easier for players to navigate and accomplish tasks. As it was for the fresnel factor, you'll need the eye vector. If your vertex positions are in view space, the eye vector is the negation of the vertex position. float rimLightIntensity = dot(eye, normal); rimLightIntensity = 1.0 - rimLightIntensity; rimLightIntensity = max(0.0, rimLightIntensity); The Intensity of the rim light ranges from zero to one. When the eye and normal vector point in the same direction, the rim light intensity is zero. 
As the two vectors start to point in different directions, the rim light intensity increases until it eventually reaches one when the eye and normal become orthogonal or perpendicular to one another. You can control the falloff of the rim light using the power function. `step` or `smoothstep` can also be used to control the falloff. This tends to look better when using cel shading. You'll learn more about these functions in later sections. What color you use for the rim light is up to you. The demo code multiplies the diffuse light by the `rimLightIntensity` . This will highlight the silhouette without overexposing it and without lighting any shadowed fragments. vec4 outputColor = vec4(0.0); outputColor.a = diffuseColor.a; outputColor.rgb = ambient.rgb + diffuse.rgb + specular.rgb + rimLight.rgb + emission.rgb; After you've calculated the rim light, add it to the ambient, diffuse, specular, and emission lights. (C) 2020 <NAME>tierlet<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Cel shading is a technique to make 3D objects look 2D or flat. In 2D, you can make an object look 3D by applying a smooth gradient. However, with cel shading, you're breaking up the gradients into abrupt transitions. Typically there is only one transition where the shading goes from fully lit to fully shadowed. When combined with outlining, cel shading can really sell the 2D cartoon look. float diffuseIntensity = max(dot(normal, unitLightDirection), 0.0); diffuseIntensity = step(0.1, diffuseIntensity); Revisiting the lighting model, modify the `diffuseIntensity` such that it is either zero or one. The `step` function returns zero if the input is less than the edge and one otherwise. if (diffuseIntensity >= 0.8) { diffuseIntensity = 1.0; } else if (diffuseIntensity >= 0.6) { diffuseIntensity = 0.6; } else if (diffuseIntensity >= 0.3) { diffuseIntensity = 0.3; } else { diffuseIntensity = 0.0; } If you would like to have a few steps or transitions, you can perform something like the above. Another approach is to put your step values into a texture with the transitions going from darker to lighter. Using the `diffuseIntensity` as a U coordinate, it will automatically transform itself. float specularIntensity = clamp(dot(normal, halfwayDirection), 0.0, 1.0); specularIntensity = step(0.98, specularIntensity); Using the `step` function again, set the `specularIntensity` to be either zero or one. You can also use one of the other approaches described up above for the specular highlight as well. After you've altered the `specularIntensity` , the rest of the lighting calculations are the same. (C) 2020 <NAME> <EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Normal mapping allows you to add surface details without adding any geometry. Typically, in a modeling program like Blender, you create a high poly and a low poly version of your mesh. You take the vertex normals from the high poly mesh and bake them into a texture. This texture is the normal map. Then inside the fragment shader, you replace the low poly mesh's vertex normals with the high poly mesh's normals you baked into the normal map. Now when you light your mesh, it will appear to have more polygons than it really has. This will keep your FPS high while at the same time retain most of the details from the high poly version. Here you see the progression from the high poly model to the low poly model to the low poly model with the normal map applied. Keep in mind though, normal mapping is only an illusion. After a certain angle, the surface will look flat again. 
uniform mat3 p3d_NormalMatrix; in vec3 p3d_Normal; in vec3 p3d_Binormal; in vec3 p3d_Tangent; vertexNormal = normalize(p3d_NormalMatrix * p3d_Normal); binormal = normalize(p3d_NormalMatrix * p3d_Binormal); tangent = normalize(p3d_NormalMatrix * p3d_Tangent); Starting in the vertex shader, you'll need to output to the fragment shader the normal vector, binormal vector, and the tangent vector. These vectors are used, in the fragment shader, to transform the normal map normal from tangent space to view space. `p3d_NormalMatrix` transforms the vertex normal, binormal, and tangent vectors to view space. Remember that in view space, all of the coordinates are relative to the camera's position. > [p3d_NormalMatrix] is the upper 3x3 of the inverse transpose of the ModelViewMatrix. It is used to transform the normal vector into view-space coordinates. out vec2 normalCoord; normalCoord = p3d_MultiTexCoord0; You'll also need to output, to the fragment shader, the UV coordinates for the normal map. Recall that the vertex normal was used to calculate the lighting. However, the normal map provides us with different normals to use when calculating the lighting. In the fragment shader, you need to swap out the vertex normals for the normals found in the normal map. in vec2 normalCoord; /* Find */ vec4 normalTex = texture(p3d_Texture1, normalCoord); Using the normal map coordinates the vertex shader sent, pull out the normal from the normal map. Earlier I showed how the normals are mapped to colors to create the normal map. Now this process needs to be reversed so you can get back the original normals that were baked into the map. Here's the process for unpacking the normals from the normal map. /* Transform */ normal = normalize ( mat3 ( tangent , binormal , vertexNormal ) * normal ); The normals you get back from the normal map are typically in tangent space. They could be in another space, however. For example, Blender allows you to bake the normals in tangent, object, world, or camera space. To take the normal map normal from tangent space to view pace, construct a three by three matrix using the tangent, binormal, and vertex normal vectors. Multiply the normal by this matrix and be sure to normalize it. At this point, you're done. The rest of the lighting calculations are the same. (C) 2019 David Lettierlet<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Deferred rendering (deferred shading) is a screen space lighting technique. Instead of calculating the lighting for a scene while you traverse its geometry—you defer or wait to perform the lighting calculations until after the scene's geometry fragments have been culled or discarded. This can give you a performance boost depending on the complexity of your scene. Deferred rendering is performed in two phases. The first phase involves going through the scene's geometry and rendering its positions or depths, normals, and materials into a framebuffer known as the geometry buffer or G-buffer. With the exception of some transformations, this is mostly a read-only phase so its performance cost is minimal. After this phase, you're only dealing with 2D textures in the shape of the screen. The second and last phase is where you perform your lighting calculations using the output of the first phase. This is when you calculate the ambient, diffuse, and specular colors. Shadow and normal mapping are performed in this phase as well. The reason for using deferred rendering is to reduce the number of lighting calculations made. 
With forward rendering, the number of lighting calculations scales with the number of fragments and lights. However, with deferred shading, the number of lighting calculations scales with the number of pixels and lights. Recall that for a single pixel, there can be multiple fragments produced. As you add geometry, the number of lighting calculations per pixel increases when using forward but not when using deferred. For simple scenes, deferred rendering doesn't provide much of a performance boost and may even hurt performance. However, for complex scenes with lots of lighting, it becomes the better option. Deferred becomes faster than forward because you're only calculating the lighting per light, per pixel. In forward rendering, you're calculating the lighting per light per fragment which can be multiple times per pixel. Deferred rendering allows you render complex scenes using many lights but it does come with its own set of tradeoffs. Transparency becomes an issue because the geometry data you could see through a semitransparent object is discarded in the first phase. Other tradesoffs include increased memory consumption due to the G-buffer and the extra workarounds needed to deal with aliasing. (C) 2019 <NAME>tierlet<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Fog (or mist as it's called in Blender) adds atmospheric haze to a scene, providing mystique and softening pop-ins (geometry suddenly entering into the camera's frustum). To calculate the fog, you'll need its color, near distance, and far distance. uniform sampler2D positionTexture; In addition to the fog's attributes, you'll also need the fragment's vertex `position` . `fogMax` controls how much of the scene is still visible when the fog is most intense. `fogMin` controls how much of the fog is still visible when the fog is least intense. float near = nearFar.x; float far = nearFar.y; float intensity = clamp ( (position.y - near) / (far - near) , fogMin , fogMax ); The example code uses a linear model for calculating the fog's intensity. There's also an exponential model you could use. The fog's intensity is `fogMin` before or at the start of the fog's `near` distance. As the vertex `position` gets closer to the end of the fog's `far` distance, the `intensity` moves closer to `fogMax` . For any vertexes after the end of the fog, the `intensity` is clamped to `fogMax` . Set the fragment's color to the fog `color` and the fragment's alpha channel to the `intensity` . As `intensity` gets closer to one, you'll have less of the scene's color and more of the fog color. When `intensity` reaches one, you'll have all fog color and nothing else. uniform sampler2D baseTexture; uniform sampler2D fogTexture; vec4 baseColor = texture(baseTexture, texCoord); vec4 fogColor = texture(fogTexture, texCoord); fragColor = baseColor; fragColor = mix(fragColor, fogColor, min(fogColor.a, 1)); Normally you calculate the fog in the same shader that does the lighting calculations. However, you can also break it out into its own framebuffer texture. Here you see the fog color being applied to the rest of the scene much like you would apply a layer in GIMP. This allows you to calculate the fog once instead calculating it in every shader that eventually needs it. (C) 2019 <NAME>tier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ The need to blur this or that can come up quite often as you try to obtain a particular look or perform some technique like motion blur. Below are just some of ways you can blur your game's imagery. The box blur or mean filter algorithm is a simple to implement blurring effect. 
It's fast and gets the job done. If you need more finesse, you can upgrade to a Gaussian blur. The `size` parameter controls how blurry the result is. If the `size` is zero or less, return the fragment untouched. The `separation` parameter spreads out the blur without having to sample additional fragments. `separation` ranges from one to infinity. Like the outlining technique, the box blur technique uses a kernel/matrix/window centered around the current fragment. The size of the window is `size * 2 + 1` by `size * 2 + 1` . So for example, with a `size` setting of two, the window uses `(2 * 2 + 1)^2 = 25` samples per fragment. fragColor += texture ( colorTexture , ( gl_FragCoord.xy + (vec2(i, j) * separation) ) / texSize ); To compute the mean or average of the samples in the window, start by loop through the window, adding up each color vector. To finish computing the mean, divide the sum of the colors sampled by the number of samples taken. The final fragment color is the mean or average of the fragments sampled inside the window. The box blur uses the mean color of the samples taken. The median filter uses the median color of the samples taken. By using the median instead of the mean, the edges in the image are preserved—meaning the edges stay nice and crisp. For example, look at the windows in the box blurred image versus the median filtered image. Unfortunately, finding the median can be slower than finding the mean. You could sort the values and choose the middle one but that would take at least quasilinear time. There is a technique to find the median in linear time but it can be quite awkward inside a shader. The numerical approach below approximates the median in linear time. How well it approximates the median can be controlled. At lower quality approximations, you end up with a nice painterly look. #define MAX_SIZE 4 #define MAX_KERNEL_SIZE ((MAX_SIZE * 2 + 1) * (MAX_SIZE * 2 + 1)) #define MAX_BINS_SIZE 100 These are the hard limits for the `size` parameter, window size, and `bins` array. int size = int(parameters.x); if (size <= 0) { fragColor = texture(colorTexture, texCoord); return; } if (size > MAX_SIZE) { size = MAX_SIZE; } int kernelSize = int(pow(size * 2 + 1, 2)); The `size` parameter controls how blurry or smeared the effect is. If the size is at or below zero, return the current fragment untouched. From the `size` parameter, calculate the total size of the kernel or window. This is how many samples you'll be taking per fragment. Set up the `binsSize` , making sure to limit it by the `MAX_BINS_SIZE` . `i` and `j` are used to sample the given texture around the current fragment. `i` is also used as a general for loop count. `count` is used in the initialization of the `colors` array which you'll see later. `binIndex` is used to approximate the median color. vec4 colors[MAX_KERNEL_SIZE]; float bins[MAX_BINS_SIZE]; int binIndexes[colors.length()]; The `colors` array holds the sampled colors taken from the input texture. `bins` is used to approximate the median of the sampled colors. Each bin holds a count of how many colors fall into its range when converting each color into a greyscale value (between zero and one). As `binsSize` approaches 100, the algorithm finds the true median almost always. `binIndexes` stores the `bins` index or which bin each sample falls into. `total` keeps track of how many colors you've come across as you loop through `bins` . When `total` reaches `limit` , you return whatever `bins` index you're at. The `limit` is the median index. 
For example, if the window size is 81, `limit` is 41 which is directly in the middle (40 samples below and 40 samples above). These are used to covert and hold each color sample's greyscale value. Instead of dividing red, green, and blue by one third, it uses 30% of red, 59% of green, and 11% of blue for a total of 100%. for (i = -size; i <= size; ++i) { for (j = -size; j <= size; ++j) { colors[count] = texture ( colorTexture , ( gl_FragCoord.xy + vec2(i, j) ) / texSize ); count += 1; } } Loop through the window and collect the color samples into `colors` . Initialize the `bins` array with zeros. for (i = 0; i < kernelSize; ++i) { value = dot(colors[i].rgb, valueRatios); binIndex = int(floor(value * binsSize)); binIndex = clamp(binIndex, 0, binsSize - 1); bins[binIndex] += 1; binIndexes[i] = binIndex; } Loop through the colors and convert each one to a greyscale value. ``` dot(colors[i].rgb, valueRatios) ``` is the weighted sum ``` colors.r * 0.3 + colors.g * 0.59 + colors.b * 0.11 ``` . Each value will fall into some bin. Each bin covers some range of values. For example, if the number of bins is 10, the first bin covers everything from zero up to but not including 0.1. Increment the number of colors that fall into this bin and remember the color sample's bin index so you can look it up later. binIndex = 0; for (i = 0; i < binsSize; ++i) { total += bins[i]; if (total >= limit) { binIndex = i; break; } } Loop through the bins, tallying up the number of colors seen so far. When you reach the median index, exit the loop and remember the last `bins` index reached. fragColor = colors[0]; for (i = 0; i < kernelSize; ++i) { if (binIndexes[i] == binIndex) { fragColor = colors[i]; break; } } Now loop through the `binIndexes` and find the first color with the last `bins` indexed reached. Its greyscale value is the approximated median which in many cases will be the true median value. Set this color as the fragColor and exit the loop and shader. Like the median filter, the kuwahara filter preserves the major edges found in the image. You'll notice that it has a more block like or chunky pattern to it. In practice, the Kuwahara filter runs faster than the median filter, allowing for larger `size` values without a noticeable slowdown. Set a hard limit for the `size` parameter and the number of samples taken. These are used to sample the input texture and set up the `values` array. Like the median filter, you'll be converting the color samples into greyscale values. Initialize the `values` array. This will hold the greyscale values for the color samples. vec4 color = vec4(0); vec4 meanTemp = vec4(0); vec4 mean = vec4(0); float valueMean = 0; float variance = 0; float minVariance = -1; The Kuwahara filter works by computing the variance of four subwindows and then using the mean of the subwindow with the smallest variance. `findMean` is a function defined outside of `main` . Each run of `findMean` will remember the mean of the given subwindow that has the lowest variance seen so far. Make sure to reset `count` and `meanTemp` before computing the mean of the given subwindow. for (i = i0; i <= i1; ++i) { for (j = j0; j <= j1; ++j) { color = texture ( colorTexture , (gl_FragCoord.xy + vec2(i, j)) / texSize ); meanTemp += color; values[count] = dot(color.rgb, valueRatios); count += 1; } } Similar to the box blur, loop through the given subwindow and add up each color. At the same time, make sure to store the greyscale value for this sample in `values` . 
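The next two lines of the original listing (dividing the accumulated color by the sample count and taking its greyscale value) appear to have been lost in extraction; a minimal sketch, reusing the `meanTemp`, `count`, `valueRatios`, and `valueMean` names from the surrounding code:

```
// Mean color of this subwindow and its greyscale value.
meanTemp.rgb /= count;
valueMean     = dot(meanTemp.rgb, valueRatios);
```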
To compute the mean, divide the samples sum by the number of samples taken. Calculate the greyscale value for the mean. for (i = 0; i < count; ++i) { variance += pow(values[i] - valueMean, 2); } variance /= count; Now calculate the variance for this given subwindow. The variance is the average squared difference between each sample's greyscale value the mean greyscale value. if (variance < minVariance || minVariance <= -1) { mean = meanTemp; minVariance = variance; } } If the variance is smaller than what you've seen before or this is the first variance you've seen, set the mean of this subwindow as the final mean and update the minimum variance seen so far. Back in `main` , set the `size` parameter. If the size is at or below zero, return the fragment unchanged. // Lower Left findMean(-size, 0, -size, 0); // Upper Right findMean(0, size, 0, size); // Upper Left findMean(-size, 0, 0, size); // Lower Right findMean(0, size, -size, 0); As stated above, the Kuwahara filter works by computing the variance of four subwindows and then using the mean of the subwindow with the lowest variance as the final fragment color. Note that the four subwindows overlap each other. After computing the variance and mean for each subwindow, set the fragment color to the mean of the subwindow with the lowest variance. (C) 2019 David Let<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Adding bloom to a scene can really sell the illusion of the lighting model. Light emitting objects are more believable and specular highlights get an extra dose of shimmer. These parameters control the look and feel. `size` determines how blurred the effect is. `separation` spreads out the blur. `threshold` controls which fragments are illuminated. And the last parameter, `amount` , controls how much bloom is outputted. float value = 0.0; float count = 0.0; vec4 result = vec4(0); vec4 color = vec4(0); The technique starts by looping through a kernel/matrix/window centered over the current fragment. This is similar to the window used for outlining. The size of the window is `size * 2 + 1` by `size * 2 + 1` . So for example, with a `size` setting of two, the window uses `(2 * 2 + 1)^2 = 25` samples per fragment. value = max(color.r, max(color.g, color.b)); if (value < threshold) { color = vec4(0); } result += color; count += 1.0; For each iteration, it retrieves the color from the input texture and turns the red, green, and blue values into a greyscale value. If this greyscale value is less than the threshold, it discards this color by making it solid black. After evaluating the sample's greyscale value, it adds its RGB values to `result` . After it's done summing up the samples, it divides the sum of the color samples by the number of samples taken. The result is the average color of itself and its neighbors. By doing this for every fragment, you end up with a blurred image. This form of blurring is known as a box blur. Here you see the progression of the bloom algorithm. (C) 2019 <NAME>tier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ SSAO is one of those effects you never knew you needed and can't live without once you have it. It can take a scene from mediocre to wow! For fairly static scenes, you can bake ambient occlusion into a texture but for more dynamic scenes, you'll need a shader. SSAO is one of the more fairly involved shading techniques, but once you pull it off, you'll feel like a shader master. By using only a handful of textures, SSAO can approximate the ambient occlusion of a scene. 
This is faster than trying to compute the ambient occlusion by going through all of the scene's geometry. These handful of textures all originate in screen space giving screen space ambient occlusion its name. The SSAO shader will need the following inputs. Storing the vertex positions into a framebuffer texture is not a necessity. You can recreate them from the camera's depth buffer. This being a beginners guide, I'll avoid this optimization and keep it straight forward. Feel free to use the depth buffer, however, for your implementation. ``` PT(Texture) depthTexture = new Texture("depthTexture"); depthTexture->set_format ( Texture::Format::F_depth_component32 ); PT(GraphicsOutput) depthBuffer = graphicsOutput->make_texture_buffer ( "depthBuffer" , 0 , 0 , depthTexture ); depthBuffer->set_clear_color ( LVecBase4f(0, 0, 0, 0) ); NodePath depthCameraNP = window->make_camera(); DCAST(Camera, depthCameraNP.node())->set_lens ( window->get_camera(0)->get_lens() ); PT(DisplayRegion) depthBufferRegion = depthBuffer->make_display_region ( 0 , 1 , 0 , 1 ); depthBufferRegion->set_camera(depthCameraNP); ``` If you do decide to use the depth buffer, here's how you can set it up using Panda3D. Here's the simple shader used to render out the view space vertex positions into a framebuffer texture. The more involved work is setting up the framebuffer texture such that the fragment vector components it receives are not clamped to `[0, 1]` and that each one has a high enough precision (a high enough number of bits). For example, if a particular interpolated vertex position is ``` <-139.444444566, 0.00000034343, 2.5> ``` , you don't want it stored into the texture as `<0.0, 0.0, 1.0>` . FrameBufferProperties fbp = FrameBufferProperties::get_default(); fbp.set_rgba_bits(32, 32, 32, 32); fbp.set_rgb_color(true); fbp.set_float_color(true); Here's how the example code sets up the framebuffer texture to store the vertex positions. It wants 32 bits per red, green, blue, and alpha components and disables clamping the values to `[0, 1]` The ``` set_rgba_bits(32, 32, 32, 32) ``` call sets the bits and also disables the clamping. Here's the equivalent OpenGL call. `GL_RGB32F` sets the bits and also disables the clamping. > If the color buffer is fixed-point, the components of the source and destination values and blend factors are each clamped to [0, 1] or [−1, 1] respectively for an unsigned normalized or signed normalized color buffer prior to evaluating the blend equation. If the color buffer is floating-point, no clamping occurs. . You'll need the vertex normals to correctly orient the samples you'll take in the SSAO shader. The example code generates multiple sample vectors distributed in a hemisphere but you could use a sphere and do away with the need for normals all together. ``` in vec3 vertexNormal; void main() { vec3 normal = normalize(vertexNormal); fragColor = vec4(normal, 1); } ``` Like the position shader, the normal shader is simple as well. Be sure to normalize the vertex normal and remember that they are in view space. . Here you see SSAO being used with the normal maps instead of the vertex normals. This adds an extra level of detail and will pair nicely with the normal mapped lighting. normal = normalize ( normalTex.rgb * 2.0 - 1.0 ); normal = normalize ( mat3 ( tangent , binormal , vertexNormal ) * normal ); To use the normal maps instead, you'll need to transform the normal mapped normals from tangent space to view space just like you did in the lighting calculations. 
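The "simple shader used to render out the view space vertex positions" mentioned earlier did not survive extraction; here is a minimal sketch of such a pass, assuming the interpolated view-space position is called `vertexPosition` as in the lighting chapter:

```
in vec4 vertexPosition;

out vec4 fragColor;

void main()
{
  // Store this fragment's view-space position in the (floating point,
  // unclamped) framebuffer texture described above.
  fragColor = vertexPosition;
}
```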
To determine the amount of ambient occlusion for any particular fragment, you'll need to sample the surrounding area. The more samples you use, the better the approximation at the cost of performance. for (int i = 0; i < numberOfSamples; ++i) { LVecBase3f sample = LVecBase3f ( randomFloats(generator) * 2.0 - 1.0 , randomFloats(generator) * 2.0 - 1.0 , randomFloats(generator) ).normalized(); float rand = randomFloats(generator); sample[0] *= rand; sample[1] *= rand; sample[2] *= rand; float scale = (float) i / (float) numberOfSamples; scale = lerp(0.1, 1.0, scale * scale); sample[0] *= scale; sample[1] *= scale; sample[2] *= scale; ssaoSamples.push_back(sample); } The example code generates a number of random samples distributed in a hemisphere. These `ssaoSamples` will be sent to the SSAO shader. ``` LVecBase3f sample = LVecBase3f ( randomFloats(generator) * 2.0 - 1.0 , randomFloats(generator) * 2.0 - 1.0 , randomFloats(generator) * 2.0 - 1.0 ).normalized(); ``` If you'd like to distribute your samples in a sphere instead, change the random `z` component to range from negative one to one. for (int i = 0; i < numberOfNoise; ++i) { LVecBase3f noise = LVecBase3f ( randomFloats(generator) * 2.0 - 1.0 , randomFloats(generator) * 2.0 - 1.0 , 0.0 ); ssaoNoise.push_back(noise); } To get a good sweep of the sampled area, you'll need to generate some noise vectors. These noise vectors will randomly tilt the hemisphere around the current fragment. SSAO works by sampling the view space around a fragment. The more samples that are below a surface, the darker the fragment color. These samples are positioned at the fragment and pointed in the general direction of the vertex normal. Each sample is used to look up a position in the position framebuffer texture. The position returned is compared to the sample. If the sample is farther away from the camera than the position, the sample counts towards the fragment being occluded. Here you see the space above the surface being sampled for occlusion. Like some of the other techniques, the SSAO shader has a few control knobs you can tweak to get the exact look you're going for. The `bias` adds to the sample's distance from the camera. You can use the bias to combat "acne". The `radius` increases or decreases the coverage area of the sample space. The `magnitude` either lightens or darkens the occlusion map. The `contrast` either washes out or increases the starkness of the occlusion map. vec4 position = texture(positionTexture, texCoord); vec3 normal = normalize(texture(normalTexture, texCoord).xyz); int noiseX = int(gl_FragCoord.x - 0.5) % 4; int noiseY = int(gl_FragCoord.y - 0.5) % 4; vec3 random = noise[noiseX + (noiseY * 4)]; Retrieve the position, normal, and random vector for later use. Recall that the example code created a set number of random vectors. The random vector is chosen based on the current fragment's screen position. vec3 tangent = normalize(random - normal * dot(random, normal)); vec3 binormal = cross(normal, tangent); mat3 tbn = mat3(tangent, binormal, normal); Using the random and normal vectors, assemble the tangent, binormal, and normal matrix. You'll need this matrix to transform the sample vectors from tangent space to view space. With the matrix in hand, the shader can now loop through the samples, subtracting how many are not occluded. vec3 samplePosition = tbn * samples[i]; samplePosition = position.xyz + samplePosition * radius; Using the matrix, position the sample near the vertex/fragment position and scale it by the radius. 
vec4 offsetUV = vec4(samplePosition, 1.0); offsetUV = lensProjection * offsetUV; offsetUV.xyz /= offsetUV.w; offsetUV.xy = offsetUV.xy * 0.5 + 0.5; Using the sample's position in view space, transform it from view space to clip space to UV space. Recall that clip space components range from negative one to one and that UV coordinates range from zero to one. To transform clip space coordinates to UV coordinates, multiply by one half and add one half. vec4 offsetPosition = texture(positionTexture, offsetUV.xy); float occluded = 0; if (samplePosition.y + bias <= offsetPosition.y) { occluded = 0; } else { occluded = 1; } Using the offset UV coordinates, created by projecting the 3D sample onto the 2D position texture, find the corresponding position vector. This takes you from view space to clip space to UV space back to view space. The shader takes this round trip to find out if some geometry is behind, at, or in front of this sample. If the sample is in front of or at some geometry, this sample doesn't count towards the fragment being occluded. If the sample is behind some geometry, this sample counts towards the fragment being occluded. float intensity = smoothstep ( 0.0 , 1.0 , radius / abs(position.y - offsetPosition.y) ); occluded *= intensity; occlusion -= occluded; Now weight this sampled position by how far it is inside or outside the radius. Finally, subtract this sample from the occlusion factor since it assumes all of the samples are occluded before the loop. Divide the occluded count by the number of samples to scale the occlusion factor from `[0, NUM_SAMPLES]` to `[0, 1]` . Zero means full occlusion and one means no occlusion. Now assign the occlusion factor to the fragment's color and you're done. For the demo's purposes, the example code sets the alpha channel to alpha channel of the position framebuffer texture to avoid covering up the background. The SSAO framebuffer texture is noisy as is. You'll want to blur it to remove the noise. Refer back to the section on blurring. For the best results, use a median or Kuwahara filter to preserve the sharp edges. vec2 ssaoBlurTexSize = textureSize(ssaoBlurTexture, 0).xy; vec2 ssaoBlurTexCoord = gl_FragCoord.xy / ssaoBlurTexSize; float ssao = texture(ssaoBlurTexture, ssaoBlurTexCoord).r; vec4 ambient = p3d_Material.ambient * p3d_LightModel.ambient * diffuseTex * ssao; The final stop for SSAO is back in the lighting calculation. Here you see the occlusion factor being looked up in the SSAO framebuffer texture and then included in the ambient light calculation. (C) 2019 <NAME>tier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ To really sell the illusion of speed, you can do no better than motion blur. From high speed car chases to moving at warp speed, motion blur greatly improves the look and feel of fast moving objects. There are a few ways to implement motion blur as a screen space technique. The less involved implementation will only blur the scene in relation to the camera's movements while the more involved version will blur any moving objects even with the camera remaining still. The less involved technique is described below but the principle is the same. The input textures needed are the vertex positions in view space and the scene's colors. Refer back to SSAO for acquiring the vertex positions. uniform mat4 previousViewWorldMat; uniform mat4 worldViewMat; uniform mat4 lensProjection; The motion blur technique determines the blur direction by comparing the previous frame's vertex positions with the current frame's vertex positions. 
To do this, you'll need the previous frame's view-to-world matrix, the current frame's world-to-view matrix, and the camera lens' projection matrix. uniform vec2 parameters; void main() { int size = int(parameters.x); float separation = parameters.y; The adjustable parameters are `size` and `separation` . `size` controls how many samples are taken along the blur direction. Increasing `size` increases the amount of blur at the cost of performance. `separation` controls how spread out the samples are along the blur direction. Increasing `separation` increases the amount of blur at the cost of accuracy. vec4 position1 = texture(positionTexture, texCoord); vec4 position0 = worldViewMat * previousViewWorldMat * position1; To determine which way to blur this fragment, you'll need to know where things were last frame and where things are this frame. To figure out where things are now, sample the current vertex position. To figure out where things were last frame, transform the current position from view space to world space, using the previous frame's view-to-world matrix, and then transform it back to view space from world space using this frame's world-to-view matrix. This transformed position is this fragment's previous interpolated vertex position. position0 = lensProjection * position0; position0.xyz /= position0.w; position0.xy = position0.xy * 0.5 + 0.5; position1 = lensProjection * position1; position1.xyz /= position1.w; position1.xy = position1.xy * 0.5 + 0.5; Now that you have the current and previous positions, transform them to screen space. With the positions in screen space, you can trace out the 2D direction you'll need to blur the onscreen image. // position1.xy = position0.xy + direction; vec2 direction = position1.xy - position0.xy; if (length(direction) <= 0.0) { return; } The blur direction goes from the previous position to the current position. Sample the current fragment's color. This will be the first of the colors blurred together. Multiply the direction vector by the separation. For a more seamless blur, sample in the direction of the blur and in the opposite direction of the blur. For now, set the two vectors to the fragment's UV coordinate. `count` is used to average all of the samples taken. It starts at one since you've already sampled the current fragment's color. for (int i = 0; i < size; ++i) { forward += direction; backward -= direction; fragColor += texture ( colorTexture , forward ); fragColor += texture ( colorTexture , backward ); count += 2.0; } Sample the screen's colors both in the forward and backward direction of the blur. Be sure to add these samples together as you travel along. The final fragment color is the average color of the samples taken. (C) 2020 David Lettier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Date: 2021-01-01 Categories: Tags: Chromatic aberration is a screen space technique that simulates lens distortion. Use it to give your scene a cinematic, lo-fi analog feel or to emphasize a chaotic event. The input texture needed is the scene's colors captured into a framebuffer texture. The adjustable parameters for this technique are the red, green, and blue offsets. Feel free to play around with these to get the particular color fringe you're looking for. These particular offsets produce a yellowish orange and blue fringe. uniform vec2 mouseFocusPoint; vec2 direction = texCoord - mouseFocusPoint; The offsets can occur horizontally, vertically, or radially. One approach is to radiate out from the depth of field focal point. 
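The three offsets might simply be uniforms you tune from the application side. The values below are only an illustration of the idea, not the demo's exact settings, and whether they're floats or packed into vectors in the example code may differ:

```
uniform float redOffset;    // e.g.  0.009
uniform float greenOffset;  // e.g.  0.006
uniform float blueOffset;   // e.g. -0.006
```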
As the scene gets more out of focus, the chromatic aberration increases. fragColor.r = texture(colorTexture, texCoord + (direction * vec2(redOffset ))).r; fragColor.g = texture(colorTexture, texCoord + (direction * vec2(greenOffset))).g; fragColor.ba = texture(colorTexture, texCoord + (direction * vec2(blueOffset ))).ba; } ``` With the direction and offsets, make three samples of the scene's colors—one for the red, green, and blue channels. These will be the final fragment color. (C) 2021 David Let<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Adding reflections can really ground your scene. Wet and shiny objects spring to life as nothing makes something look wet or shiny quite like reflections. With reflections, you can really sell the illusion of water and metallic objects. In the lighting section, you simulated the reflected, mirror-like image of the light source. This was the process of rendering the specular reflection. Recall that the specular light was computed using the reflected light direction. Similarly, using screen space reflection or SSR, you can simulate the reflection of other objects in the scene instead of just the light source. Instead of the light ray coming from the source and bouncing off into the camera, the light ray comes from some object in the scene and bounces off into the camera. SSR works by reflecting the screen image onto itself using only itself. Compare this to cube mapping which uses six screens or textures. In cube mapping, you reflect a ray from some point in your scene to some point on the inside of a cube surrounding your scene. In SSR, you reflect a ray from some point on your screen to some other point on your screen. By reflecting your screen onto itself, you can create the illusion of reflection. This illusion holds for the most part but SSR does fail in some cases as you'll see. Screen space reflection uses a technique known as ray marching to determine the reflection for each fragment. Ray marching is the process of iteratively extending or contracting the length or magnitude of some vector in order to probe or sample some space for information. The ray in screen space reflection is the position vector reflected about the normal. Intuitively, a light ray hits some point in the scene, bounces off, travels in the opposite direction of the reflected position vector, bounces off the current fragment, travels in the opposite direction of the position vector, and hits the camera lens allowing you to see the color of some point in the scene reflected in the current fragment. SSR is the process of tracing the light ray's path in reverse. It tries to find the reflected point the light ray bounced off of and hit the current fragment. With each iteration, the algorithm samples the scene's positions or depths, along the reflection ray, asking each time if the ray intersected with the scene's geometry. If there is an intersection, that position in the scene is a potential candidate for being reflected by the current fragment. Ideally there would be some analytical method for determining the first intersection point exactly. This first intersection point is the only valid point to reflect in the current fragment. Instead, this method is more like a game of battleship. You can't see the intersections (if there are any) so you start at the base of the reflection ray and call out coordinates as you travel in the direction of the reflection. With each call, you get back an answer of whether or not you hit something. 
If you do hit something, you try points around that area hoping to find the exact point of intersection. Here you see ray marching being used to calculate each fragment's reflected point. The vertex normal is the bright green arrow, the position vector is the bright blue arrow, and the bright red vector is the reflection ray marching through view space. To compute the reflections, you'll need the vertex normals in view space. Referrer back to SSAO for details. Here you see SSR using the normal mapped normals instead of the vertex normals. Notice how the reflection follows the ripples in the water versus the more mirror like reflection shown earlier. There are a few ways you can implement SSR. The example code starts the reflection process by computing a reflected UV coordinate for each screen fragment. You could skip this part and go straight to computing the reflected color instead, using the final rendering of the scene. Here you see the reflected UV coordinates. Without even rendering the scene yet, you can get a good feel for what the reflections will look like. uniform mat4 lensProjection; uniform sampler2D positionTexture; uniform sampler2D normalTexture; You'll need the camera lens' projection matrix as well as the interpolated vertex positions and normals in view space. float maxDistance = 15; float resolution = 0.3; int steps = 10; float thickness = 0.5; Like the other effects, SSR has a few parameters you can adjust. Depending on the complexity of the scene, it may take you awhile to find the right settings. Getting screen space reflections to look just right tends to be difficult when reflecting complex geometry. The `maxDistance` parameter controls how far a fragment can reflect. In other words, it controls the maximum length or magnitude of the reflection ray. The `resolution` parameter controls how many fragments are skipped while traveling or marching the reflection ray during the first pass. This first pass is to find a point along the ray's direction where the ray enters or goes behind some geometry in the scene. Think of this first pass as the rough pass. Note that the `resolution` ranges from zero to one. Zero will result in no reflections while one will travel fragment-by-fragment along the ray's direction. A `resolution` of one can slow down your FPS considerably especially with a large `maxDistance` . The `steps` parameter controls how many iterations occur during the second pass. This second pass is to find the exact point along the reflection ray's direction where the ray immediately hits or intersects with some geometry in the scene. Think of this second pass as the refinement pass. The `thickness` controls the cutoff between what counts as a possible reflection hit and what does not. Ideally, you'd like to have the ray immediately stop at some camera-captured position or depth in the scene. This would be the exact point where the light ray bounced off, hit your current fragment, and then bounced off into the camera. Unfortunately the calculations are not always that precise so `thickness` provides some wiggle room or tolerance. You'll want the thickness to be as small as possible—just a short distance beyond a sampled position or depth. You'll find that as the thickness gets larger, the reflections tend to smear in places. Going in the other direction, as the thickness gets smaller, the reflections become noisy with tiny little holes and narrow gaps. 
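One small thing the snippets below rely on is `texSize` and the current fragment's UV coordinate; presumably they're computed the same way as in the other screen space shaders, as in this sketch:

```
vec2 texSize  = textureSize(positionTexture, 0).xy;
vec2 texCoord = gl_FragCoord.xy / texSize;
```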
vec4 positionFrom = texture(positionTexture, texCoord); vec3 unitPositionFrom = normalize(positionFrom.xyz); vec3 normal = normalize(texture(normalTexture, texCoord).xyz); vec3 pivot = normalize(reflect(unitPositionFrom, normal)); Gather the current fragment's position, normal, and reflection about the normal. `positionFrom` is a vector from the camera position to the current fragment position. `normal` is a vector pointing in the direction of the interpolated vertex normal for the current fragment. `pivot` is the reflection ray or vector pointing in the reflected direction of the `positionFrom` vector. It currently has a length or magnitude of one. vec4 startView = vec4(positionFrom.xyz + (pivot * 0), 1); vec4 endView = vec4(positionFrom.xyz + (pivot * maxDistance), 1); Calculate the start and end point of the reflection ray in view space. vec4 startFrag = startView; // Project to screen space. startFrag = lensProjection * startFrag; // Perform the perspective divide. startFrag.xyz /= startFrag.w; // Convert the screen-space XY coordinates to UV coordinates. startFrag.xy = startFrag.xy * 0.5 + 0.5; // Convert the UV coordinates to fragment/pixel coordnates. startFrag.xy *= texSize; vec4 endFrag = endView; endFrag = lensProjection * endFrag; endFrag.xyz /= endFrag.w; endFrag.xy = endFrag.xy * 0.5 + 0.5; endFrag.xy *= texSize; Project or transform these start and end points from view space to screen space. These points are now fragment positions which correspond to pixel positions on the screen. Now that you know where the ray starts and ends on the screen, you can travel or march along its direction in screen space. Think of the ray as a line drawn on the screen. You'll travel along this line using it to sample the fragment positions stored in the position framebuffer texture. Note that you could march the ray through view space but this may under or over sample scene positions found in the position framebuffer texture. Recall that the position framebuffer texture is the size and shape of the screen. Every screen fragment or pixel corresponds to some position captured by the camera. A reflection ray may travel a long distance in view space, but in screen space, it may only travel through a few pixels. You can only sample the screen's pixels for positions so it is inefficient to potentially sample the same pixels over and over again while marching in view space. By marching in screen space, you'll more efficiently sample the fragments or pixels the ray actually occupies or covers. The first pass will begin at the starting fragment position of the reflection ray. Convert the fragment position to a UV coordinate by dividing the fragment's coordinates by the position texture's dimensions. Calculate the delta or difference between the X and Y coordinates of the end and start fragments. This will be how many pixels the ray line occupies in the X and Y dimension of the screen. float useX = abs(deltaX) >= abs(deltaY) ? 1 : 0; float delta = mix(abs(deltaY), abs(deltaX), useX) * clamp(resolution, 0, 1); To handle all of the various different ways (vertical, horizontal, diagonal, etc.) the line can be oriented, you'll need to keep track of and use the larger difference. The larger difference will help you determine how much to travel in the X and Y direction each iteration, how many iterations are needed to travel the entire line, and what percentage of the line does the current position represent. `useX` is either one or zero. 
It is used to pick the X or Y dimension depending on which delta is bigger. `delta` is the larger delta of the two X and Y deltas. It is used to determine how much to march in either dimension each iteration and how many iterations to take during the first pass. Calculate how much to increment the X and Y position by using the larger of the two deltas. If the two deltas are the same, each will increment by one each iteration. If one delta is larger than the other, the larger delta will increment by one while the smaller one will increment by less than one. This assumes the `resolution` is one. If the resolution is less than one, the algorithm will skip over fragments. ``` startFrag = ( 1, 4) endFrag = (10, 14) deltaX = (10 - 1) = 9 deltaY = (14 - 4) = 10 resolution = 0.5 delta = 10 * 0.5 = 5 increment = (deltaX, deltaY) / delta = ( 9, 10) / 5 = ( 9 / 5, 2) ``` For example, say the `resolution` is 0.5. The larger dimension will increment by two fragments instead of one. To move from the start fragment to the end fragment, the algorithm uses linear interpolation. ``` current position x = (start x) * (1 - search1) + (end x) * search1; current position y = (start y) * (1 - search1) + (end y) * search1; ``` `search1` ranges from zero to one. When `search1` is zero, the current position is the start fragment. When `search1` is one, the current position is the end fragment. For any other value, the current position is somewhere between the start and end fragment. `search0` is used to remember the last position on the line where the ray missed or didn't intersect with any geometry. The algorithm will later use `search0` in the second pass to help refine the point at which the ray touches the scene's geometry. `hit0` indicates there was an intersection during the first pass. `hit1` indicates there was an intersection during the second pass. The `viewDistance` value is how far away from the camera the current point on the ray is. Recall that for Panda3D, the Y dimension goes in and out of the screen in view space. For other systems, the Z dimension goes in and out of the screen in view space. In any case, `viewDistance` is how far away from the camera the ray currently is. Note that if you use the depth buffer, instead of the vertex positions in view space, the `viewDistance` would be the Z depth. Make sure not to confuse the `viewDistance` value with the Y dimension of the line being traveled across the screen. The `viewDistance` goes from the camera into scene while the Y dimension of the line travels up or down the screen. The `depth` is the view distance difference between the current ray point and scene position. It tells you how far behind or in front of the scene the ray currently is. Remember that the scene positions are the interpolated vertex positions stored in the position framebuffer texture. You can now begin the first pass. The first pass runs while `i` is less than the `delta` value. When `i` reaches `delta` , the algorithm has traveled the entire length of the line. Remember that `delta` is the larger of the two X and Y deltas. Advance the current fragment position closer to the end fragment. Use this new fragment position to look up a scene position stored in the position framebuffer texture. search1 = mix ( (frag.y - startFrag.y) / deltaY , (frag.x - startFrag.x) / deltaX , useX ); Calculate the percentage or portion of the line the current fragment represents. If `useX` is zero, use the Y dimension of the line. If `useX` is one, use the X dimension of the line. 
When `frag` equals `startFrag` , `search1` equals zero since `frag - startFrag` is zero. When `frag` equals `endFrag` , `search1` is one since `frag - startFrag` equals `delta` . `search1` is the percentage or portion of the line the current position represents. You'll need this to interpolate between the ray's view-space start and end distances from the camera. Using `search1` , interpolate the view distance (distance from the camera in view space) for the current position you're at on the reflection ray. ``` // Incorrect. viewDistance = mix(startView.y, endView.y, search1); // Correct. viewDistance = (startView.y * endView.y) / mix(endView.y, startView.y, search1); ``` You may be tempted to just interpolate between the view distances of the start and end view-space positions but this will give you the wrong view distance for the current position on the reflection ray. Instead, you'll need to perform perspective-correct interpolation which you see here. Calculate the difference between the ray's view distance at this point and the sampled view distance of the scene at this point. If the difference is between zero and the thickness, this is a hit. Set `hit0` to one and exit the first pass. If the difference is not between zero and the thickness, this is a miss. Set `search0` to equal `search1` to remember this position as the last known miss. Continue marching the ray towards the end fragment. At this point you have finished the first pass. Set the `search1` position to be halfway between the position of the last miss and the position of the last hit. You can now begin the second pass. If the reflection ray didn't hit anything in the first pass, skip the second pass. frag = mix(startFrag.xy, endFrag.xy, search1); uv.xy = frag / texSize; positionTo = texture(positionTexture, uv.xy); As you did in the first pass, use the current position on the ray line to sample a position from the scene. viewDistance = (startView.y * endView.y) / mix(endView.y, startView.y, search1); depth = viewDistance - positionTo.y; Interpolate the view distance for the current ray line position and calculate the camera distance difference between the ray at this point and the scene. if (depth > 0 && depth < thickness) { hit1 = 1; search1 = search0 + ((search1 - search0) / 2); } else { float temp = search1; search1 = search1 + ((search1 - search0) / 2); search0 = temp; } If the depth is within bounds, this is a hit. Set `hit1` to one and set `search1` to be halfway between the last known miss position and this current hit position. If the depth is not within bounds, this is a miss. Set `search1` to be halfway between this current miss position and the last known hit position. Move `search0` to this current miss position. Continue this back and forth search while `i` is less than `steps` . You're now done with the second and final pass but before you can output the reflected UV coordinates, you'll need to calculate the `visibility` of the reflection. The `visibility` ranges from zero to one. If there wasn't a hit in the second pass, the `visibility` is zero. If the reflected scene position's alpha or `w` component is zero, the `visibility` is zero. Note that if `w` is zero, there was no scene position at that point. One of the ways in which screen space reflection can fail is when the reflection ray points in the general direction of the camera. If the reflection ray points towards the camera and hits something, it's most likely hitting the back side of something facing away from the camera. 
To handle this failure case, you'll need to gradually fade out the reflection based on how much the reflection vector points to the camera's position. If the reflection vector points directly in the opposite direction of the position vector, the visibility is zero. Any other direction results in the visibility being greater than zero. Remember to normalize both vectors when taking the dot product. `unitPositionFrom` is the normalized position vector. It has a length or magnitude of one. As you sample scene positions along the reflection ray, you're hoping to find the exact point where the reflection ray first intersects with the scene's geometry. Unfortunately, you may not find this particular point. Fade out the reflection the further it is from the intersection point you did find. Fade out the reflection based on how far way the reflected point is from the initial starting point. This will fade out the reflection instead of it ending abruptly as it reaches `maxDistance` . If the reflected UV coordinates are out of bounds, set the `visibility` to zero. This occurs when the reflection ray travels outside the camera's frustum. Set the blue and alpha component to the visibility as the UV coordinates only need the RG or XY components of the final vector. The final fragment color is the reflected UV coordinates and the visibility. In addition to the reflected UV coordinates, you'll also need a specular map. The example code creates one using the fragment's material specular properties. #define MAX_SHININESS 127.75 uniform struct { vec3 specular ; float shininess ; } p3d_Material; void main() { fragColor = vec4 ( p3d_Material.specular , clamp(p3d_Material.shininess / MAX_SHININESS, 0, 1) ); } ``` The specular fragment shader is quite simple. Using the fragment's material, the shader outputs the specular color and uses the alpha channel for the shininess. The shininess is mapped to a range of zero to one. In Blender, the maximum specular hardness or shininess is 511. When exporting from Blender to Panda3D, 511 is exported as 127.75. Feel free to adjust the shininess to range of zero to one however you see fit for your particular stack. The example code generates a specular map from the material specular properties but you could create one in GIMP, for example, and attach that as a texture to your 3D model. For instance, say your 3D treasure chest has shiny brackets on it but nothing else should reflect the environment. You can paint the brackets some shade of gray and the rest of the treasure chest black. This will mask off the brackets, allowing your shader to render the reflections on only the brackets and nothing else. You'll need to render the parts of the scene you wish to reflect and store this in a framebuffer texture. This is typically just the scene without any reflections. Here you see the reflected colors saved to a framebuffer texture. Once you have the reflected UV coordinates, looking up the reflected colors is fairly easy. You'll need the reflected UV coordinates texture and the color texture containing the colors you wish to reflect. vec4 uv = texture(uvTexture, texCoord); vec4 color = texture(colorTexture, uv.xy); Using the UV coordinates for the current fragment, look up the reflected color. Recall that the reflected UV texture stored the visibility in the B or blue component. This is the alpha channel for the reflected colors framebuffer texture. The fragment color is a mix between no reflection and the reflected color based on the visibility. 
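That last mix can be as small as the following sketch, assuming, as described above, that the visibility ended up in the blue channel of the reflected UV texture; the exact form in the example code may differ:

```
float visibility = uv.b;

// No visibility, no reflection; full visibility, the full reflected color.
fragColor = mix(vec4(0.0), color, visibility);
```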
The visibility was computed during the reflected UV coordinates step. Now blur the reflected scene colors and store this in a framebuffer texture. The blurring is done using a box blur. Refer to the SSAO blurring step for details. The blurred reflected colors are used for surfaces that have a less than mirror like finish. These surfaces have tiny little hills and valleys that tend to diffuse or blur the reflection. I'll cover this more during the roughness calculation. uniform sampler2D colorTexture; uniform sampler2D colorBlurTexture; uniform sampler2D specularTexture; To generate the final reflections, you'll need the three framebuffer textures computed earlier. You'll need the reflected colors, the blurred reflected colors, and the specular map. vec4 specular = texture(specularTexture, texCoord); vec4 color = texture(colorTexture, texCoord); vec4 colorBlur = texture(colorBlurTexture, texCoord); Look up the specular amount and shininess, the reflected scene color, and the blurred reflected scene color. float specularAmount = dot(specular.rgb, vec3(1)) / 3; Map the specular color to a greyscale value. If the specular amount is zero, set the frag color to nothing and return. Later on, you'll multiply the final reflection color by the specular amount. Multiplying by the specular amount allows you to control how much a material reflects its environment simply by brightening or darkening the greyscale value in the specular map. Using the dot product to produce the greyscale value is just a short way of summing the three color components. Calculate the roughness based on the shininess value set during the specular map step. Recall that the shininess value was saved in the alpha channel of the specular map. The shininess determined how spread out or blurred the specular reflection was. Similarly, the `roughness` determines how blurred the reflection is. A roughness of one will produce the blurred reflection color. A roughness of zero will produce the non-blurred reflection color. Doing it this way allows you to control how blurred the reflection is just by changing the material's shininess value. The example code generates a roughness map from the material specular properties but you could create one in GIMP, for example, and attach that as a texture to your 3D model. For instance, say you have a tiled floor that has polished tiles and scratched up tiles. The polished tiles could be painted a more translucent white while the scratched up tiles could be painted a more opaque white. The more translucent/transparent the greyscale value, the more the shader will use the blurred reflected color. The scratched tiles will have a blurry reflection while the polished tiles will have a mirror like reflection. Mix the reflected color and blurred reflected color based on the roughness. Multiply that vector by the specular amount and then set that value as the fragment color. The reflection color is a mix between the reflected scene color and the blurred reflected scene color based on the roughness. A high roughness will produce a blurry reflection meaning the surface is rough. A low roughness will produce a clear reflection meaning the surface is smooth.

Screen space refraction, much like screen space reflection, adds a touch of realism you can't find anywhere else. Glass, plastic, water, and other transparent/translucent materials spring to life. Screen space reflection and screen space refraction work almost identically except for one major difference.
Instead of using the reflected vector, screen space refraction uses the refracted vector. It's a slight change in code but a big difference visually. However, unlike SSAO, you'll need the scene's vertex positions with and without the refractive objects. Refractive surfaces are translucent, meaning you can see through them. Since you can see through them, you'll need the vertex positions behind the refractive surface. Having both the foreground and background vertex positions will allow you to calculate UV coordinates and depth. To compute the refractions, you'll need the scene's foreground vertex normals in view space. The background vertex normals aren't needed unless you need to incorporate the background surface detail while calculating the refracted UV coordinates and distances. Referrer back to SSAO for details. Here you see the water refracting the light with and without normal maps. If available, be sure to use the normal mapped normals instead of the vertex normals. The smoother and flatter the surface, the harder it is to tell the light is being refracted. There will be some distortion but not enough to make it worthwhile. The process of refracting the UV coordinates is very similar to the process of reflecting the UV coordinates. Below are the adjustments you'll need to turn reflection into refraction. uniform sampler2D positionFromTexture; uniform sampler2D positionToTexture; uniform sampler2D normalFromTexture; Reflection only deals with what is in front of the reflective surface. Refraction, however, deals with what is behind the refractive surface. To accommodate this, you'll need both the vertex positions of the scene with the refracting surfaces taken out and the vertex positions of the scene with the refracting surfaces left in. `positionFromTexture` are the scene's vertex positions with the refracting surfaces left in. `positionToTexture` are the scene's vertex positions with the refracting surfaces taken out. `normalFromTexture` are the scene's vertex normals with the refraction surfaces left in. There's no need for the vertex normals behind the refractive surfaces unless you want to incorporate the surface detail for the background geometry. Refraction has one more adjustable parameter than reflection. `rior` is the relative index of refraction or relative refractive index. It is the ratio of the refraction indexes of two mediums. So for example, going from water to air is `1 / 1.33 ≈ 0.75` . The numerator is the refractive index of the medium the light is leaving and the denominator is the refractive index of the medium the light is entering. An `rior` of one means the light passes right through without being refracted or bent. As `rior` grows, the refraction will become a reflection. There's no requirement that `rior` must adhere to the real world. The demo uses `1.05` . This is completely unrealistic (light does not travel faster through water than air) but the realistic setting produced too many artifacts. In the end, the distortion only has to be believable—not realistic. `rior` values above one tend to elongate the refraction while numbers below one tend to shrink the refraction. As it was with screen space reflection, the screen doesn't have the entire geometry of the scene. A refracted ray may march through the screen space and never hit a captured surface. Or it may hit a surface but it's the backside not captured by the camera. When this happened during reflection, the fragment was left blank. 
This indicated no reflection or not enough information to determine a reflection. Leaving the fragment blank was fine for reflection since the reflective surface would fill in the gaps. For refraction, however, we must set the fragment to some UV. If the fragment is left blank, the refractive surface will contain holes that let the detail behind it come through. This would be okay for a completely transparent surface but usually the refractive surface will have some tint to it, reflection, etc. vec4 uv = vec4(texCoord.xy, 1, 1); The best choice is to select the UV as if the `rior` was one. This will leave the UV coordinate unchanged, allowing the background to show through instead of there being a hole in the refractive surface. Here you see the refracted UV texture for the mill scene. The wheel and waterway disturb what is otherwise a smooth gradient. The disruptions shift the UV coordinates from their screen position to their refracted screen position. vec3 unitPositionFrom = normalize(positionFrom.xyz); vec3 normalFrom = normalize(texture(normalFromTexture, texCoord).xyz); vec3 pivot = normalize(refract(unitPositionFrom, normalFrom, rior.x)); The most important difference is the calculation of the refracted vector versus the reflected vector. Both use the unit position and normal but `refract` takes an additional parameter specifying the relative refractive index. The `positionTo` , sampled by the `uv` coordinates, uses the `positionToTexture` . For reflection, you only need one framebuffer texture containing the scene's interpolated vertex positions in view space. However, for refraction, `positionToTexture` contains the vertex positions of the scene minus the refractive surfaces since the refraction ray typically goes behind the surface. If `positionFromTexture` and `positionToTexture` were the same for refraction, the refracted ray would hit the refractive surface instead of what is behind it. You'll need a mask to filter out the non-refractive parts of the scene. This mask will determine which fragment does and does not receive a refracted color. You could use this mask during the refracted UV calculation step or later when you actually sample the colors at the refracted UV coordinates. The mill scene uses the models' material specular as a mask. For the demo's purposes, the specular map is sufficient but you may want to use a more specialized map. Refer back to screen space reflection for how to render the specular map. You'll need to render the parts of the scene behind the refractive objects. This can be done by hiding the refractive objects and then rendering the scene to a framebuffer texture. uniform sampler2D uvTexture; uniform sampler2D refractionMaskTexture; uniform sampler2D positionFromTexture; uniform sampler2D positionToTexture; uniform sampler2D backgroundColorTexture; To render the actual refractions or foreground colors, you'll need the refracted UV coordinates, refraction mask, the foreground and background vertex positions, and the background colors. `tintColor` and `depthMax` are adjustable parameters. `tintColor` colorizes the background color. `depthMax` ranges from zero to infinity. When the distance between the foreground and background position reaches `depthMax` , the foreground color will be the fully tinted background color. At distance zero, the foreground will be the background color. 
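Those two parameters are presumably just uniforms supplied by the application. The names follow the prose and the types are assumptions:

```
uniform vec4  tintColor;  // RGB tint plus how strongly it's applied (alpha)
uniform float depthMax;   // distance at which the tint fully takes over
```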
vec4 uv = texture(uvTexture, texCoord); vec4 mask = texture(maskTexture, texCoord); vec4 positionFrom = texture(positionFromTexture, texCoord); vec4 positionTo = texture(positionToTexture, uv.xy); vec4 backgroundColor = texture(backgroundColorTexture, uv.xy); Pull out the uv coordinates, mask, background position, foreground position, and the background color. If the refraction mask is turned off for this fragment, return nothing. float depth = length(positionTo.xyz - positionFrom.xyz); float mixture = clamp(depth / depthMax, 0, 1); vec3 shallowColor = backgroundColor.rgb; vec3 deepColor = mix(shallowColor, tintColor.rgb, tintColor.a); vec3 foregroundColor = mix(shallowColor, deepColor, mixture); Calculate the depth or distance between the foreground position and the background position. At zero depth, the foreground color will be the shallow color. At `depthMax` , the foreground color will be the deep color. The deep color is the background color tinted with `tintColor` . Recall that the blue channel, in the refracted UV texture, is set to the visibility. The visibility declines as the refracted ray points back at the camera. While the visibility should always be one, it is put here for completeness. As the visibility lessens, the fragment color will receive less and less of the foreground color. (C) 2019 David Lettier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Foam is typically used when simulating some body of water. Anywhere the water's flow is disrupted, you add some foam. The foam isn't much by itself but it can really connect the water with the rest of the scene. But don't stop at just water. You can use the same technique to make a river of lava for example. Like screen space refraction, you'll need both the foreground and background vertex positions. The foreground being the scene with the foamy surface and the background being the scene without the foamy surface. Referrer back to SSAO for the details on how to acquire the vertex positions in view space. You'll need to texture your scene with a foam mask. The demo masks everything off except the water. For the water, it textures it with a foam pattern. uniform sampler2D foamPatternTexture; void main() { vec4 foamPattern = texture(foamPatternTexture, diffuseCoord); fragColor = vec4(vec3(dot(foamPattern.rgb, vec3(1)) / 3), 1); } ``` Here you see the fragment shader that generates the foam mask. It takes a foam pattern texture and UV maps it to the scene's geometry using the diffuse UV coordinates. For every model, except the water, the shader is given a solid black texture as the `foamPatternTexture` . The fragment color is converted to greyscale, as a precaution, since the foam shader expects the foam mask to be greyscale. uniform sampler2D maskTexture; uniform sampler2D positionFromTexture; uniform sampler2D positionToTexture; The foam shader accepts a mask texture, the foreground vertex positions ( `positionFromTexture` ), and the background vertex positions ( `positionToTexture` ). The adjustable parameters for the foam shader are the foam depth and color. The foam depth controls how much foam is shown. As the foam depth increases, the amount of foam shown increases. vec4 positionFrom = texture(positionFromTexture, texCoord); vec4 positionTo = texture(positionToTexture, texCoord); float depth = (positionTo.xyz - positionFrom.xyz).y; Compute the distance from the foreground position to the background position. Since the positions are in view (camera) space, we only need the y value since it goes into the screen. 
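The foam parameters can be sketched the same way. The `foamDepth` type follows the `foamDepth.x` usage just below, while `foamColor` is an assumed name:

```
uniform vec2 foamDepth;  // .x is the depth at which the foam fades out completely
uniform vec4 foamColor;
```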
float amount = clamp(depth / foamDepth.x, 0, 1); amount = 1 - amount; amount *= mask.r; amount = amount * amount / (2 * (amount * amount - amount) + 1); The amount of foam is based on the depth, the foam depth parameter, and the mask value. Reshape the amount using the ease in and out easing function. This will give a lot of foam near depth zero and little to no foam near `foamDepth` . The fragment color is a mix between transparent black and the foam color based on the amount. (C) 2019 David Lettier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Flow mapping is useful when you need to animate some fluid material. Much like diffuse maps map UV coordinates to diffuse colors and normal maps map UV coordinates to normals, flow maps map UV coordinates to 2D translations or flows. Here you see a flow map that maps UV coordinates to translations in the positive y-axis direction. Flow maps use the red and green channels to store translations in the x and y direction. The red channel is for the x-axis and the green channel is the y-axis. Both range from zero to one which translates to flows that range from `(-1, -1)` to `(1, 1)` . This flow map is all one color consisting of 0.5 red and 0.6 green. Recall how the colors in a normal map are converted to actual normals. There is a similar process for flow maps. uniform sampler2D flowTexture; vec2 flow = texture(flowTexture, uv).xy; flow = (flow - 0.5) * 2; To convert a flow map color to a flow, you minus 0.5 from the channel (red and green) and multiply by two. ``` (r, g) = ( (r - 0.5) * 2 , (g - 0.5) * 2 ) = ( (0.5 - 0.5) * 2 , (0.6 - 0.5) * 2 ) = (x, y) = (0, 0.2) ``` The flow map above maps each UV coordinate to the flow `(0, 0.2)` . This indicates zero movement in the x direction and a movement of 0.2 in the y direction. The flows can be used to translate all sorts of things but they're typically used to offset the UV coordinates of a another texture. vec2 flow = texture(flowTexture, diffuseCoord).xy; flow = (flow - 0.5) * 2; vec4 foamPattern = texture ( foamPatternTexture , vec2 ( diffuseCoord.x + flow.x * osg_FrameTime , diffuseCoord.y + flow.y * osg_FrameTime ) ); For example, the demo program uses a flow map to animate the water. Here you see the flow map being used to animate the foam mask. This continuously moves the diffuse UV coordinates directly up, giving the foam mask the appearance of moving down stream. You'll need how many seconds have passed since the first frame in order to animate the UV coordinates in the direction indicated by the flow. `osg_FrameTime` is provided by Panda3D. It is a timestamp of how many seconds have passed since the first frame. (C) 2019 David Lettier<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Outlining your scene's geometry can give your game a distinctive look, reminiscent of comic books and cartoons. The process of outlining is the process of finding and labeling discontinuities or differences. Every time you find what you consider a significant difference, you mark it with your line color. As you go about labeling or coloring in the differences, outlines or edges will start to form. Where you choose to search for the discontinuities is up to you. It could be the diffuse colors in your scene, the normals of your models, the depth buffer, or some other scene related data. The demo uses the interpolated vertex positions to render the outlines. However, a less straightforward but more typical way is to use both the scene's normals and depth buffer values to construct the outlines. The demo darkens the colors of the scene where there's an outline. 
This tends to look nicer than a constant color since it provides some color variation to the edges. float minSeparation = 1.0; float maxSeparation = 3.0; float minDistance = 0.5; float maxDistance = 2.0; int size = 1; vec3 colorModifier = vec3(0.324, 0.063, 0.099); The min and max separation parameters control the thickness of the outline depending on the fragment's distance from the camera or depth. The min and max distance control the significance of any changes found. The `size` parameter controls the constant thickness of the line no matter the fragment's position. The outline color is based on `colorModifier` and the current fragment's color. vec2 texSize = textureSize(colorTexture, 0).xy; vec2 fragCoord = gl_FragCoord.xy; vec2 texCoord = fragCoord / texSize; Sample the position texture for the current fragment's position in the scene. Recall that the position texture is just a screen shaped quad making the UV coordinate the current fragment's screen coordinate divided by the dimensions of the screen. float depth = clamp ( 1.0 - ( (far - position.y) / (far - near) ) , 0.0 , 1.0 ); float separation = mix(maxSeparation, minSeparation, depth); The fragment's depth ranges from zero to one. When the fragment's view-space y coordinate matches the far clipping plane, the depth is one. When the fragment's view-space y coordinate matches the near clipping plane, the depth is zero. In other words, the depth ranges from zero at the near clipping plane all the way up to one at the far clipping plane. Converting the position to a depth value isn't necessary but it allows you to vary the thickness of the outline based on how far away the fragment is from the camera. Far away fragments get a thinner line while nearer fragments get a thicker outline. This tends to look nicer than a constant thickness since it gives depth to the outline. float mx = 0.0; Now that you have the current fragment's position, loop through an i by j grid or window surrounding the current fragment. texCoord = ( fragCoord + (vec2(i, j) * separation) ) / texSize; mx = max(mx, abs(position.y - positionTemp.y)); With each iteration, find the biggest distance between this fragment's and the surrounding fragments' positions. Calculate the significance of any difference discovered using the `minDistance` , `maxDistance` , and `smoothstep` . `smoothstep` returns values from zero to one. The `minDistance` is the left-most edge. Any difference less than the minimum distance will be zero. The `maxDistance` is the right-most edge. Any difference greater than the maximum distance will be one. For distances between the edges, the difference will be between zero and one. These values are interpolated along a s-shaped curve. The line color is the current fragment color either darkened or lightened. The fragment's RGB color is the `lineColor` and its alpha channel is `diff` . For a sketchy outline, you can distort the UV coordinates used to sample the position vectors. Start by creating a RGB noise texture. A good size is either 128 by 128 or 512 by 512. Be sure to blur it and make it tileable. This will produce a nice wavy, inky outline. The `noiseScale` parameter controls how distorted the outline is. The bigger the `noiseScale` , the sketchier the line. vec2 fragCoord = gl_FragCoord.xy; vec2 noise = texture(noiseTexture, fragCoord / textureSize(noiseTexture, 0).xy).rb; noise = noise * 2.0 - 1.0; noise *= noiseScale; Sample the noise texture using the current screen/fragment position and the size of the noise texture. 
Since you're distorting the UV coordinates used to sample the position vectors, you'll only need two of the three color channels. Map the two color channels from `[0, 1]` to `[-1, 1]`. Finally, scale the noise by the scale chosen earlier. vec2 texSize = textureSize(colorTexture, 0).xy; vec2 texCoord = (fragCoord - noise) / texSize; When sampling the current position, subtract the noise vector from the current fragment's coordinates. You could instead add it to the current fragment's coordinates which will create more of a squiggly line that loosely follows the geometry. texCoord = (vec2(i, j) * separation + fragCoord + noise) / texSize; When sampling the surrounding positions inside the loop, add the noise vector to the current fragment's coordinates. The rest of the calculations are the same.

Like SSAO, depth of field is an effect you can't live without once you've used it. Artistically, you can use it to draw your viewer's eye to a certain subject. But in general, depth of field adds a lot of realism for a little bit of effort. The first step is to render your scene completely in focus. Be sure to render this into a framebuffer texture. This will be one of the inputs to the depth of field shader. The second step is to blur the scene as if it was completely out of focus. Like bloom and SSAO, you can use a box blur. Be sure to render this out-of-focus scene into a framebuffer texture. This will be another input to the depth of field shader. For a great bokeh effect, dilate the out of focus texture and use that as the out of focus input. See dilation for more details. Feel free to tweak the two distance parameters. All positions at or below `minDistance` will be completely in focus. All positions at or beyond `maxDistance` will be completely out of focus. vec4 focusColor = texture(focusTexture, texCoord); vec4 outOfFocusColor = texture(outOfFocusTexture, texCoord); You'll need the in focus and out of focus colors. You'll also need the vertex position in view space. You can reuse the position framebuffer texture you used for SSAO. The focus point is a position somewhere in your scene. All of the points in your scene are measured from the focus point. Choosing the focus point is up to you. The demo uses the scene position directly under the mouse when clicking the middle mouse button. However, it could be a constant distance from the camera or a static position. float blur = smoothstep ( minDistance , maxDistance , abs(position.y - focusPoint.y) ); `smoothstep` returns values from zero to one. The `minDistance` is the left-most edge. Any position less than the minimum distance, from the focus point, will be in focus or have a blur of zero. The `maxDistance` is the right-most edge. Any position greater than the maximum distance, from the focus point, will be out of focus or have a blur of one. For distances between the edges, the blur will be between zero and one. These values are interpolated along an s-shaped curve. The `fragColor` is a mixture of the in focus and out of focus color. The closer `blur` is to one, the more it will use the `outOfFocusColor`. A `blur` of zero means this fragment is entirely in focus. A `blur` of one means this fragment is completely out of focus.

Posterization or color quantization is the process of reducing the number of unique colors in an image. You can use this shader to give your game a comic book or retro look. Combine it with outlining for a full-on cartoon art style.
There are various ways to implement posterization. This method works directly with the greyscale values and indirectly with the RGB values of the image. For each fragment, it maps the RGB color to a greyscale value. This greyscale value is then mapped to both its lower and upper level value. The closest level to the original greyscale value is then mapped back to an RGB value. This new RGB value becomes the fragment color. I find this method produces nicer results than the more typical methods you'll find. The `levels` parameter controls how many discrete bands or steps there are. This will break up the continuous values from zero to one into chunks. With four levels, `0.0` to `1.0` becomes `0.0`, `0.25`, `0.5`, `0.75`, and `1.0`. Sample the current fragment's color. Map the RGB values to a greyscale value. In this instance, the greyscale value is the maximum value of the R, G, and B values. float lower = floor(greyscale * levels) / levels; float lowerDiff = abs(greyscale - lower); Map the greyscale value to its lower level and then calculate the difference between its lower level and itself. For example, if the greyscale value is `0.87` and there are four levels, its lower level is `0.75` and the difference is `0.12`. float upper = ceil(greyscale * levels) / levels; float upperDiff = abs(upper - greyscale); Now calculate the upper level and the difference. Keeping with the example up above, the upper level is `1.0` and the difference is `0.13`. float level = lowerDiff <= upperDiff ? lower : upper; float adjustment = level / greyscale; The closest level is used to calculate the adjustment. The adjustment is the ratio between the quantized and unquantized greyscale value. This adjustment is used to map the quantized greyscale value back to an RGB value. After multiplying `rgb` by the adjustment, `max(r, max(g, b))` will now equal the quantized greyscale value. This maps the quantized greyscale value back to a red, green, and blue vector.

Pixelizing your 3D game can give it an interesting look and possibly save you time by not having to create all of the pixel art by hand. Combine it with posterization for a true retro look. Feel free to adjust the pixel size. The bigger the pixel size, the blockier the image will be. float x = int(gl_FragCoord.x) % pixelSize; float y = int(gl_FragCoord.y) % pixelSize; x = floor(pixelSize / 2.0) - x; y = floor(pixelSize / 2.0) - y; x = gl_FragCoord.x + x; y = gl_FragCoord.y + y; The technique works by mapping each fragment to the center of its closest, non-overlapping pixel-sized window. These windows are laid out in a grid over the input texture. The center-of-the-window fragments determine the color for the other fragments in their window. Once you have determined the correct fragment coordinate to use, pull its color from the input texture and assign that to the fragment color.

The sharpen effect increases the contrast at the edges of the image. This comes in handy when your graphics are a bit too soft. You can control how sharp the result is by adjusting the amount. An amount of zero leaves the image untouched. Try negative values for an odd look. Neighboring fragments are multiplied by `amount * -1`. The current fragment is multiplied by `amount * 4 + 1`.
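However the amount reaches the shader (the demo presumably passes it in from the application), the two weights work out like this sketch, with an illustrative value for `amount`:

```
float amount   = 0.8;                  // how sharp the result is; zero leaves the image untouched
float neighbor = amount * -1.0;        // weight for the four neighboring fragments
float center   = amount *  4.0 + 1.0;  // weight for the current fragment
```

Note that the four neighbor weights and the center weight sum to one, so the overall brightness of the image is preserved.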
vec3 color = texture(sharpenTexture, vec2(gl_FragCoord.x + 0, gl_FragCoord.y + 1) / texSize).rgb * neighbor + texture(sharpenTexture, vec2(gl_FragCoord.x - 1, gl_FragCoord.y + 0) / texSize).rgb * neighbor + texture(sharpenTexture, vec2(gl_FragCoord.x + 0, gl_FragCoord.y + 0) / texSize).rgb * center + texture(sharpenTexture, vec2(gl_FragCoord.x + 1, gl_FragCoord.y + 0) / texSize).rgb * neighbor + texture(sharpenTexture, vec2(gl_FragCoord.x + 0, gl_FragCoord.y - 1) / texSize).rgb * neighbor ; The neighboring fragments are up, down, left, and right. After multiplying both the neighbors and the current fragment by their particular values, sum the result. This sum is the final fragment color. (C) 2019 David Lettierlettier.com ◀️ ⏫ 🔼 🔽 ▶️ Dilation dilates or enlarges the brighter areas of an image while at the same time, contracts or shrinks the darker areas of an image. This tends to create a pillowy look. You can use dilation for a glow/bloom effect or to add bokeh to your depth of field. int size = int(parameters.x); float separation = parameters.y; float minThreshold = 0.1; float maxThreshold = 0.3; The `size` and `separation` parameters control how dilated the image becomes. A larger `size` will increase the dilation at the cost of performance. A larger `separation` will increase the dilation at the cost of quality. The `minThreshold` and `maxThreshold` parameters control which parts of the image become dilated. vec2 texSize = textureSize(colorTexture, 0).xy; vec2 fragCoord = gl_FragCoord.xy; fragColor = texture(colorTexture, fragCoord / texSize); Sample the color at the current fragment's position. float mx = 0.0; vec4 cmx = fragColor; Loop through a `size` by `size` window, centered at the current fragment position. As you loop, find the brightest color based on the surrounding greyscale values. // For a rectangular shape. //if (false); // For a diamond shape; //if (!(abs(i) <= size - abs(j))) { continue; } // For a circular shape. if (!(distance(vec2(i, j), vec2(0, 0)) <= size)) { continue; } The window shape will determine the shape of the dilated parts of the image. For a rectangular shape, you can use every fragment covered by the window. For any other shape, skip the fragments that fall outside the desired shape. Sample a fragment color from the surrounding window. Convert the sampled color to a greyscale value. If the sampled greyscale value is larger than the current maximum greyscale value, update the maximum greyscale value and its corresponding color. fragColor.rgb = mix ( fragColor.rgb , cmx.rgb , smoothstep(minThreshold, maxThreshold, mx) ); The new fragment color is a mixture between the existing fragment color and the brightest color found. If the maximum greyscale value found is less than `minThreshold` , the fragment color is unchanged. If the maximum greyscale value is greater than `maxThreshold` , the fragment color is replaced with the brightest color found. For any other case, the fragment color is a mix between the current fragment color and the brightest color. (C) 2020 <NAME>tierlet<EMAIL> ◀️ ⏫ 🔼 🔽 ▶️ Film grain (when applied in subtle doses, unlike here) can add a bit of realism you don't notice until it's removed. Typically, it's the imperfections that make a digitally generated image more believable. In terms of the shader graph, film grain is usually the last effect applied before the game is put on screen. The `amount` controls how noticeable the film grain is. Crank it up for a snowy picture. 
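Stripped down, the whole effect is only a few lines; the interesting part is the pseudo-random intensity, which the rest of this section derives. Treat this as a sketch whose names follow the prose and the snippets below:

```
uniform float amount;            // how noticeable the grain is
uniform sampler2D colorTexture;  // the rendered scene

out vec4 fragColor;

void main() {
  vec2 texSize  = textureSize(colorTexture, 0).xy;
  vec2 texCoord = gl_FragCoord.xy / texSize;

  vec4  color           = texture(colorTexture, texCoord);
  float randomIntensity = 0.0;   // derived from sin and fract below

  fragColor = color + vec4(vec3(amount * randomIntensity), 0.0);
}
```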
uniform float osg_FrameTime; float toRadians = 3.14 / 180; float randomIntensity = fract ( 10000 * sin ( ( gl_FragCoord.x + gl_FragCoord.y * osg_FrameTime ) * toRadians ) ); This snippet calculates the random intensity needed to adjust the amount.

```
Time Since F1 = 00 01 02 03 04 05 06 07 08 09 10
Frame Number  = F1 F3 F4 F5 F6
osg_FrameTime = 00 02 04 07 08
```

`osg_FrameTime` is provided by Panda3D. The frame time is a timestamp of how many seconds have passed since the first frame. The example code uses this to animate the film grain as `osg_FrameTime` will always be different each frame. For static film grain, replace `osg_FrameTime` with a large number. You may have to try different numbers to avoid seeing any patterns. Both the x and y coordinates are used to create points or specks of film grain. If only x was used, there would only be vertical lines. Similarly, if only y was used, there would be only horizontal lines. The reason the snippet multiplies one coordinate by some number is to break up the diagonal symmetry. You can of course remove the coordinate multiplier for a somewhat decent looking rain effect. To animate the rain effect, multiply the output of `sin` by `osg_FrameTime`. Play around with the x and y coordinates to try and get the rain to change directions. Keep only the x coordinate for a straight downpour.

```
input = (gl_FragCoord.x + gl_FragCoord.y * osg_FrameTime) * toRadians

fract(10000 * sin(input)) = fract(10000 * sin(6.977777777777778))
                          = fract(10000 * 0.6400723818964882)
```

`sin` is used as a hashing function. The fragment's coordinates are hashed to some output of `sin`. This has the nice property that no matter the input (big or small), the output range is negative one to one.

```
fract(10000 * sin(6.977777777777778)) = fract(10000 * 0.6400723818964882)
                                      = fract(6400.723818964882)
                                      = 0.723818964882
```

`sin` is also used as a pseudo random number generator when combined with `fract`.

```
>>> [floor(fract(4 * sin(x * toRadians)) * 10) for x in range(0, 10)]
[0, 0, 1, 2, 2, 3, 4, 4, 5, 6]
>>> [floor(fract(10000 * sin(x * toRadians)) * 10) for x in range(0, 10)]
[0, 4, 8, 0, 2, 1, 7, 0, 0, 5]
```

Take a look at the first sequence of numbers and then the second. Each sequence is deterministic but the second sequence has less of a pattern than the first. So while the output of `fract(10000 * sin(...))` is deterministic, it doesn't have much of a discernible pattern. Here you see the `sin` multiplier going from 1, to 10, to 100, and then to 1000. As you increase the `sin` output multiplier, you get less and less of a pattern. This is the reason the snippet multiplies `sin` by 10,000. vec4 color = texture(colorTexture, texCoord); Convert the fragment's coordinates to UV coordinates. Using these UV coordinates, look up the texture color for this fragment. Adjust the amount by the random intensity and add this to the color. Set the fragment color and you're done.

The lookup table or LUT shader allows you to transform the colors of your game using an image editor like the GIMP. From color grading to turning day into night, the LUT shader is a handy tool for tweaking the look of your game. Before you can get started, you'll need to find a neutral LUT image. Neutral meaning that it leaves the fragment colors unchanged. The LUT needs to be 256 pixels wide by 16 pixels tall and contain 16 blocks with each block being 16 by 16 pixels. The LUT is mapped out into 16 blocks. Each block has a different level of blue.
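If you would rather generate the neutral LUT programmatically than download one, the following NumPy/Pillow sketch builds the 256 by 16 layout described here and in the next paragraph (red increasing left to right within a block, green top to bottom, blue across the blocks). The script and the output file name are my own; the tutorial itself only assumes you have a neutral LUT image.

```
import numpy as np
from PIL import Image

lut = np.zeros((16, 256, 3), dtype=np.uint8)  # 16 rows, 256 columns, RGB
for b in range(16):          # block index, sets the blue level
    for y in range(16):      # row inside the block, sets green
        for x in range(16):  # column inside the block, sets red
            lut[y, b * 16 + x] = (round(x / 15 * 255),
                                  round(y / 15 * 255),
                                  round(b / 15 * 255))
Image.fromarray(lut, mode="RGB").save("neutral-lut.png")
```

The upper-left pixel of the first block comes out black and the lower-right pixel of the last block comes out white, matching the description below.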
As you move across the blocks, from left to right, the amount of blue increases. You can see the amount of blue in each block's upper-left corner. Within each block, the amount of red increases as you move from left to right and the amount of green increases as you move from top to bottom. The upper-left corner of the first block is black since every RGB channel is zero. The lower-right corner of the last block is white since every RGB channel is one. With the neutral LUT in hand, take a screenshot of your game and open it in your image editor. Add the neutral LUT as a new layer and merge it with the screenshot. As you manipulate the colors of the screenshot, the LUT will be altered in the same way. When you're done editing, select only the LUT and save it as a new image. You now have your new lookup table and can begin writing your shader. vec4 color = texture(colorTexture, gl_FragCoord.xy / texSize); The LUT shader is a screen space technique. Therefore, sample the scene's color at the current fragment or screen position. float u = floor(color.b * 15.0) / 15.0 * 240.0; u = (floor(color.r * 15.0) / 15.0 * 15.0) + u; u /= 255.0; float v = ceil(color.g * 15.0); v /= 15.0; v = 1.0 - v; In order to transform the current fragment's color, using the LUT, you'll need to map the color to two UV coordinates on the lookup table texture. The first mapping (shown up above) is to the nearest left or lower bound block location and the second mapping (shown below) is to the nearest right or upper bound block mapping. At the end, you'll combine these two mappings to create the final color transformation. Each of the red, green, and blue channels maps to one of 16 possibilities in the LUT. The blue channel maps to one of the 16 upper-left block corners. After the blue channel maps to a block, the red channel maps to one of the 16 horizontal pixel positions within the block and the green channel maps to one of the 16 vertical pixel positions within the block. These three mappings will determine the UV coordinate you'll need to sample a color from the LUT. To calculate the final U coordinate, divide it by 255 since the LUT is 256 pixels wide and U ranges from zero to one. To calculate the final V coordinate, divide it by 15 since the LUT is 16 pixels tall and V ranges from zero to one. You'll also need to subtract the normalized V coordinate from one since V ranges from zero at the bottom to one at the top while the green channel ranges from zero at the top to 15 at the bottom. Using the UV coordinates, sample a color from the lookup table. This is the nearest left block color. u = ceil(color.b * 15.0) / 15.0 * 240.0; u = (ceil(color.r * 15.0) / 15.0 * 15.0) + u; u /= 255.0; v = 1.0 - (ceil(color.g * 15.0) / 15.0); vec3 right = texture(lookupTableTexture, vec2(u, v)).rgb; Now you'll need to calculate the UV coordinates for the nearest right block color. Notice how `ceil` or ceiling is being used now instead of `floor` . color.r = mix(left.r, right.r, fract(color.r * 15.0)); color.g = mix(left.g, right.g, fract(color.g * 15.0)); color.b = mix(left.b, right.b, fract(color.b * 15.0)); Not every channel will map perfectly to one of its 16 possibilities. For example, `0.5` doesn't map perfectly. At the lower bound ( `floor` ), it maps to `0.4666666666666667` and at the upper bound ( `ceil` ), it maps to `0.5333333333333333` . Compare that with `0.4` which maps to `0.4` at the lower bound and `0.4` at the upper bound. 
For those channels which do not map perfectly, you'll need to mix the left and right sides based on where the channel falls between its lower and upper bound. For `0.5`, it falls directly between them, making the final color a mixture of half left and half right. However, for `0.132` the mixture will be 98% right and 2% left since the fractional part of `0.132` times `15.0` is `0.98`. Set the fragment color to the final mix and you're done.

Correcting for gamma will make your color calculations look correct. This isn't to say they'll look amazing but with gamma correction, you'll find that the colors meld together better, the shadows are more nuanced, and the highlights are more subtle. Without gamma correction, the shadowed areas tend to get crushed while the highlighted areas tend to get blown-out and oversaturated, making for a harsh contrast overall. If you're aiming for realism, gamma correction is especially important. As you perform more and more calculations, the tiny errors add up, making it harder to achieve photorealism. The equations will be correct but the inputs and outputs will be wrong, leaving you frustrated. It's easy to get twisted around when thinking about gamma correction but essentially it boils down to knowing what color space a color is in and how to convert that color to the color space you need. With those two pieces of the puzzle, gamma correction becomes a tedious yet simple chore you'll have to perform from time to time. The two color spaces you'll need to be aware of are sRGB (standard Red Green Blue) and RGB or linear color space.

```
identify -format "%[colorspace]\n" house-diffuse-srgb.png
sRGB
identify -format "%[colorspace]\n" house-diffuse-rgb.png
RGB
```

Knowing what color space a color texture is in will determine how you handle it in your shaders. To determine the color space of a texture, use ImageMagick's `identify`. You'll find that most textures are in sRGB. To convert a texture to a particular color space, use ImageMagick's `convert` program. Notice how a texture is darkened when transforming from sRGB to RGB. The red, green, and blue values in an sRGB color texture are encoded and cannot be modified directly. Modifying them directly would be like running spellcheck on an encrypted message. Before you can run spellcheck, you first have to decrypt the message. Similarly, to modify the values of an sRGB texture, you first have to decode or transform them to RGB or linear color space. To decode an sRGB-encoded color, raise the `rgb` values to the power of `2.2`. Once you have decoded the color, you are now free to add, subtract, multiply, and divide it. By raising the color values to the power of `2.2`, you're converting them from sRGB to RGB or linear color space. This conversion has the effect of darkening the colors. For example, `vec3(0.9, 0.2, 0.3)` becomes `vec3(0.793, 0.028, 0.07)`. The `2.2` value is known as gamma. Loosely speaking, gamma can either be `1.0 / 2.2`, `2.2`, or `1.0`. As you've seen, `2.2` is for decoding sRGB encoded color textures. As you will see, `1.0 / 2.2` is for encoding linear or RGB color textures. And `1.0` is RGB or linear color space since `y = 1 * x + 0` and any base raised to the power of `1.0` is itself. One important exception to decoding is when the "colors" of a texture represent non-color data. Some examples of non-color data would be the normals in a normal map, the alpha channel, the heights in a height map, and the directions in a flow map.
Only decode color-related data or data that represents color. When dealing with non-color data, treat the sRGB color values as RGB or linear and skip the decoding process. The necessity for encoding and decoding stems from the fact that humans do not perceive lightness linearly and most displays (like a monitor) lack the precision or number of bits to accurately show both lighter and darker tonal values or shades. With only so many bits to go around, colors are encoded in such a way that more bits are devoted to the darker shades than the lighter shades since humans are more sensitive to darker tones than lighter tones. Encoding it this way uses the limited number of bits more effectively for human perception. Still, the only thing to remember is that your display is expecting sRGB encoded values. Therefore, if you decoded an sRGB value, you have to encode it before it makes its way to your display. To encode a linear value or convert RGB to sRGB, raise the `rgb` values to the power of `1.0 / 2.2`. Notice how `1.0 / 2.2` is the reciprocal of `2.2` or `2.2 / 1.0`. Here you see the symmetry in decoding and encoding.
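Since the whole chapter boils down to two power functions, here is a minimal NumPy sketch of the decode and encode steps (the function names are mine; the constants come from the text above):

```
import numpy as np

def srgb_to_linear(rgb, gamma=2.2):
    # Decode: raise sRGB values in [0, 1] to the power of gamma.
    return np.power(rgb, gamma)

def linear_to_srgb(rgb, gamma=2.2):
    # Encode: raise linear values in [0, 1] to the power of 1.0 / gamma.
    return np.power(rgb, 1.0 / gamma)

color = np.array([0.9, 0.2, 0.3])
# Roughly [0.793, 0.029, 0.071]; the text above rounds this to vec3(0.793, 0.028, 0.07).
print(srgb_to_linear(color))
# Encoding the decoded value brings it back to ~[0.9, 0.2, 0.3].
print(linear_to_srgb(srgb_to_linear(color)))
```

Do your lighting math on the decoded values, then encode once at the very end before the image reaches the display.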
latimes-calculate
readthedoc
Unknown
latimes-calculate Documentation Release 0.2 Los Angeles Times Data Desk September 13, 2016 Contents 2.1 Basic function... 5 2.2 Geospatial function... 12 i ii latimes-calculate Documentation, Release 0.2 Some simple math we use to do journalism latimes-calculate Documentation, Release 0.2 2 Contents CHAPTER 1 Getting started Install the latest package from pypi. $ pip install latimes-calculate Note: For most functions, there are no additional requirements. The exception is the small number of geospatial functions, which require GeoDjango. latimes-calculate Documentation, Release 0.2 4 Chapter 1. Getting started CHAPTER 2 Documentation 2.1 Basic functions 2.1.1 Adjusted-monthly value adjusted_monthly_value(value, datetime) Accepts a value and a datetime object, and then prorates the value to a 30-day figure depending on how many days are in the month. This can be useful for month-to-month comparisons in circumstances where fluctuations in the number of days per month may skew the analysis. For instance, February typically has only 28 days, in comparison to March, which has 31. >>> import calculate >>> calculate.adjusted_monthly_value(10, datetime.datetime(2009, 4, 1)) 10.0 >>> calculate.adjusted_monthly_value(10, datetime.datetime(2009, 2, 17)) 10.714285714285714 >>> calculate.adjusted_monthly_value(10, datetime.datetime(2009, 12, 31)) 9.67741935483871 2.1.2 Age age(born, as_of=None) Returns the current age, in years, of a person born on the provided date. First argument should be the birthdate and can be a datetime.date or datetime.datetime object, although datetimes will be converted to a date object and hours, minutes and seconds will not be part of the calculation. The second argument is the as_of date that the person’s age will be calculate at. By default, it is not provided and the age is returned as of the current date. But if you wanted to calculate someone’s age at a past or future date, you could do that by providing the as_of date as the second argument. >>> import calculate >>> from datetime import date >>> dob = date(1982, 7, 22) >>> calculate.age(dob) 29 # As of the writing of this README, of course. >>> as_of = date(1982, 7, 23) >>> calculate.age(dob, as_of) 0 latimes-calculate Documentation, Release 0.2 2.1.3 At percentile at_percentile(data_list, value, interpolation=’fraction’) Accepts a list of values and a percentile for which to return the value. A percentile of, for example, 80 means that 80 percent of the scores in the sequence are below the given score. If the requested percentile falls between two values, the result can be interpolated using one of the following methods. The default is “fraction”. •fraction: The value proportionally between the pair of bordering values. •lower: The lower of the two bordering values. •higher: The higher of the two bordering values. >>> import calculate >>> calculate.at_percentile([1, 2, 3, 4], 75) 3.25 >>> calculate.at_percentile([1, 2, 3, 4], 75, interpolation='lower') 3.0 >>> calculate.at_percentile([1, 2, 3, 4], 75, interpolation='higher') 4.0 2.1.4 Benford’s Law benfords_law(number_list, method=’first_digit’, verbose=True) Accepts a list of numbers and applies a quick-and-dirty run against Benford’s Law. Benford’s Law makes statements about the occurance of leading digits in a dataset. It claims that a leading digit of 1 will occur about 30 percent of the time, and each number after it a little bit less, with the number 9 occuring the least. Datasets that greatly vary from the law are sometimes suspected of fraud. 
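For reference, the expected leading-digit frequencies that the function tests against come from Benford's formula P(d) = log10(1 + 1/d). This little snippet (mine, not part of the package) reproduces the "Expected Percentage" column shown in the sample output below:

```
from math import log10

expected = {d: round(log10(1 + 1 / d) * 100, 2) for d in range(1, 10)}
# {1: 30.1, 2: 17.61, 3: 12.49, 4: 9.69, 5: 7.92,
#  6: 6.69, 7: 5.8, 8: 5.12, 9: 4.58}
```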
The function returns the Pearson correlation coefficient, also known as Pearson’s r, which reports how closely the two datasets are related. This function also includes a variation on the classic Benford analysis popularized by blogger <NAME>, who conducted an analysis of the final digits of polling data. To use Silver’s variation, provide the keyword argument method with the value ‘last_digit’. To prevent the function from printing, set the optional keyword argument verbose to False. This function is based upon code from a variety of sources around the web, but owes a particular debt to the work of <NAME>. >>> import calculate >>> calculate.benfords_law([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) BENFORD'S LAW: FIRST_DIGIT Pearson's R: 0.86412304649 | Number | Count | Expected Percentage | Actual Percentage | ------------------------------------------------------------ | 1 | 2 | 30.1029995664 | 20.0 | | 2 | 1 | 17.6091259056 | 10.0 | | 3 | 1 | 12.4938736608 | 10.0 | | 4 | 1 | 9.69100130081 | 10.0 | | 5 | 1 | 7.91812460476 | 10.0 | | 6 | 1 | 6.69467896306 | 10.0 | | 7 | 1 | 5.79919469777 | 10.0 | | 8 | 1 | 5.11525224474 | 10.0 | | 9 | 1 | 4.57574905607 | 10.0 | >>> calculate.benfords_law([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], verbose=False) -0.863801937698704 latimes-calculate Documentation, Release 0.2 2.1.5 Competition rank competition_rank(data_list, obj, order_by, direction=’desc’) Accepts a list, an item plus the value and direction to order by. Then returns the supplied object’s competition rank as an integer. In competition ranking equal numbers receive the same ranking and a gap is left before the next value (i.e. “1224”). You can submit a Django queryset, objects, or just a list of dictionaries. >>> import calculate >>> qs = Player.objects.all().order_by("-career_home_runs") >>> ernie = Player.objects.get(first_name__iexact='Ernie', last_name__iexact='Banks') >>> eddie = Player.objects.get(first_name__iexact='Eddie', last_name__iexact='Matthews') >>> mel = Player.objects.get(first_name__iexact='Mel', last_name__iexact='Ott') >>> calculate.competition_rank(qs, ernie, career_home_runs', direction='desc') 21 >>> calculate.competition_rank(qs, eddie, 'career_home_runs', direction='desc') 21 >>> calculate.competition_rank(qs, mel, 'career_home_runs', direction='desc') 23 2.1.6 Date range date_range(start_date, end_date) Returns a generator of all the days between two date objects. Results include the start and end dates. Arguments can be either datetime.datetime or date type objects. >>> import datetime >>> import calculate >>> dr = calculate.date_range(datetime.date(2009,1,1), datetime.date(2009,1,3)) >>> dr <generator object at 0x718e90> >>> list(dr) [datetime.date(2009, 1, 1), datetime.date(2009, 1, 2), datetime.date(2009, 1, 3)] 2.1.7 Decile decile(data_list, score, kind=’weak’) Accepts a sample of values and a single number to add to it and determine the decile equivilent of its percentile rank. By default, the method used to negotiate gaps and ties is “weak” because it returns the percentile of all values at or below the provided value. For an explanation of alternative methods, refer to the percentile function. >>> import calculate >>> calculate.decile([1, 2, 3, 3, 4], 3) 9 2.1.8 Ethnolinguistic Fractionalization Index elfi(data_list) The ELFI is a simplified method for calculating the Ethnolinguistic Fractionalization Index (ELFI). This is one form of what is commonly called a “diversity index.” Accepts a list of decimal percentages, which are used to calculate the index. 
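The docs do not spell the formula out, but the example result shown just below is consistent with the usual definition ELFI = 1 minus the sum of the squared group shares. A quick pure-Python check (my sketch, not the package's code):

```
def elfi(shares):
    # Diversity index: 1 minus the sum of squared proportions.
    return 1 - sum(p ** 2 for p in shares)

print(elfi([0.2, 0.5, 0.05, 0.25]))  # 0.645, matching the documented example
```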
Returns a decimal value as a floating point number. latimes-calculate Documentation, Release 0.2 >>> import calculate >>> calculate.elfi([0.2, 0.5, 0.05, 0.25]) 0.64500000000000002 2.1.9 Equal-sized breakpoints equal_sized_breakpoints(data_list, classes) Returns break points for groups of equal size, known as quartiles, quintiles, etc. Provide a list of data values and the number of classes you’d like the list broken up into. No flashy math, just sorts them in order and makes the cuts. >>> import calculate >>> calculate.equal_sized_breakpoints(range(1,101), 5) [1.0, 21.0, 41.0, 61.0, 81.0, 100] 2.1.10 Margin of victory margin_of_victory(data_list) Accepts a list of numbers and returns the difference between the first place and second place values. This can be useful for covering elections as an easy to way to figure out the margin of victory for a leading candidate. >>> import calculate >>> # 2008 Iowa caucus results for [Edwards, Clinton, Obama] >>> calculate.margin_of_victory([3285, 2804, 7170]) 3885 2.1.11 Mean (Average) mean(data_list) Accepts a sample of values and returns their mean. The mean is the sum of all values in the sample divided by the number of members. It is also known as the average. Since the value is strongly influenced by outliers, median is generally a better indicator of central tendency. >>> import calculate >>> calculate.mean([1,2,3]) 2.0 >>> calculate.mean([1, 99]) 50.0 2.1.12 Median median(data_list) Accepts a list of numbers and returns the median value. The median is the number in the middle of a sequence, with 50 percent of the values above, and 50 percent below. In cases where the sequence contains an even number of values – and therefore no exact middle – the two values nearest the middle are averaged and the mean returned. >>> import calculate >>> calculate.median([1,2,3]) 2.0 latimes-calculate Documentation, Release 0.2 >> calculate.median((1,4,3,2)) 2.5 2.1.13 Mode mode(data_list) Accepts a sample of numbers and returns the mode value. The mode is the most common value in a data set. If there is a tie for the highest count, no value is returned. >>> import calculate >>> calculate.mode([1,2,2,3]) 2.0 >>> calculate.mode([1,2,3]) >>> 2.1.14 Ordinal rank ordinal_rank(sequence, item, order_by=None, direction=’desc’) Accepts a list and an object. Returns the object’s ordinal rank as an integer. Does not negiotiate ties. >>> import calculate >>> qs = Player.objects.all().order_by("-career_home_runs") >>> barry = Player.objects.get(first_name__iexact='Barry', last_name__iexact='Bonds') >>> calculate.ordinal_rank(qs, barry) 1 2.1.15 Pearson’s r pearson(list_one, list_two) Accepts paired lists and returns a number between -1 and 1, known as Pearson’s r, that indicates of how closely correlated the two datasets are. A score of close to one indicates a high positive correlation. That means that X tends to be big when Y is big. A score close to negative one indicates a high negative correlation. That means X tends to be small when Y is big. A score close to zero indicates little correlation between the two datasets. A warning, though, correlation does not equal causation. Just because the two datasets are closely related doesn’t not mean that one causes the other to be the way it is. >>> import calculate >>> calculate.pearson([6,5,2], [2,5,6]) -0.8461538461538467 2.1.16 Per capita per_capita(value, population, per=10000, fail_silently=True) Accepts two numbers, a value and population total, and returns the per capita rate. 
By default, the result is returned as a per 10,000 person figure. If you divide into zero – an illegal operation – a null value is returned by default. If you prefer for an error to be raised, set the kwarg ‘fail_silently’ to False. >>> import calculate >>> calculate.per_capita(12, 100000) 1.2 2.1.17 Per square mile per_sqmi(value, square_miles, fail_silently=True) Accepts two numbers, a value and an area, and returns the per square mile rate. Not much more going on here than a simple bit of division. If you divide into zero – an illegal operation – a null value is returned by default. If you prefer for an error to be raised, set the kwarg ‘fail_silently’ to False. >>> import calculate >>> calculate.per_sqmi(20, 10) 2.0 2.1.18 Percentage percentage(value, total, multiply=True, fail_silently=True) Accepts two integers, a value and a total. The value is divided by the total and then multiplied by 100, returning its percentage as a float. If you don’t want the number multiplied by 100, set the ‘multiply’ kwarg to False. If you divide into zero – an illegal operation – a null value is returned by default. If you prefer for an error to be raised, set the kwarg ‘fail_silently’ to False. >>> import calculate >>> calculate.percentage(2, 10) 20.0 >>> calculate.percentage(2, 10, multiply=False) 0.20000000000000001 >>> calculate.percentage(2, 0) 2.1.19 Percentage change percentage_change(old_value, new_value, multiply=True, fail_silently=True) Accepts two integers, an old and a new number, and then measures the percent change between them. The change between the two numbers is determined and then divided by the original figure. By default, it is then multiplied by 100 and returned as a float. If you don’t want the number multiplied by 100, set the ‘multiply’ kwarg to False. If you divide into zero – an illegal operation – a null value is returned by default. If you prefer for an error to be raised, set the kwarg ‘fail_silently’ to False. >>> import calculate >>> calculate.percentage_change(2, 10) 400.0 2.1.20 Percentile percentile(data_list, value, kind=’weak’) Accepts a sample of values and a single number to add to it and determine its percentile rank. A percentile of, for example, 80 percent means that 80 percent of the scores in the sequence are below the given score. In the case of gaps or ties, the exact definition depends on the type of the calculation stipulated by the “kind” keyword argument. There are three kinds of percentile calculations provided here. The default is “weak”. •weak: Corresponds to the definition of a cumulative distribution function, with the result generated by returning the percentage of values at or below the provided value. •strict: Similar to “weak”, except that only values that are less than the given score are counted. This can often produce a result much lower than “weak” when the provided score occurs many times in the sample. •mean: The average of the “weak” and “strict” scores. >>> import calculate >>> calculate.percentile([1, 2, 3, 4], 3) 75.0 >>> calculate.percentile([1, 2, 3, 3, 4], 3, kind='strict') 40.0 >>> calculate.percentile([1, 2, 3, 3, 4], 3, kind='weak') 80.0 >>> calculate.percentile([1, 2, 3, 3, 4], 3, kind='mean') 60.0 2.1.21 Range range(data_list) Accepts a sample of values and returns the range. The range is the difference between the maximum and minimum values of a data set.
>>> import calculate >>> calculate.range([1,2,3]) 2 >>> calculate.range([2,2]) 0 2.1.22 Split at breakpoints split_at_breakpoints(data_list, breakpoint_list) Splits up a list at the provided breakpoints. First argument is a list of data values. Second is a list of the breakpoints you’d like it to be split up with. Returns a list of lists, in order by breakpoint. Useful for splitting up a list after you’ve determined breakpoints using another method like calculate.equal_sized_breakpoints. >>> import calculate >>> l = range(1,101) >>> bp = calculate.equal_sized_breakpoints(l, 5) >>> print bp [1.0, 21.0, 41.0, 61.0, 81.0, 100] >>> print calculate.split_at_breakpoints(l, bp) [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], [21, 22, 23, 24, 25... 2.1.23 Standard deviation standard_deviation(data_list) Accepts a sample of values and returns the standard deviation. Standard deviation measures how widely dispersed the values are from the mean. A lower value means the data tend to be bunched close to the average. A higher value means they tend to be further away. This is a “population” calculation that assumes that you are submitting all of the values, not a sample. >>> import calculate >>> calculate.standard_deviation([2,3,3,4]) 0.70710678118654757 >>> calculate.standard_deviation([-2,3,3,40]) 16.867127793432999 2.1.24 Summary statistics summary_stats(data_list) Accepts a sample of numbers and returns a pretty print out of a variety of descriptive statistics. >>> import calculate >>> calculate.summary_stats(range(1,101)) | Statistic | Value | ----------------------------------------| | n | 100 | | mean | 50.5 | | median | 50.5 | | mode | None | | maximum | 100 | | minimum | 1 | | range | 99.0 | | standard deviation | 28.8660700477 | | variation coefficient | 0.57160534748 | 2.1.25 Variation coefficient variation_coefficient(data_list) Accepts a list of values and returns the variation coefficient, which is a normalized measure of the distribution. This is the sort of thing you can use to compare the standard deviation of sets that are measured in different units. Note that it uses our “population” standard deviation as part of the calculation, not a “sample” standard deviation. >>> import calculate >>> calculate.variation_coefficient(range(1, 1000)) 0.5767726299562651 2.2 Geospatial functions 2.2.1 Mean center mean_center(obj_list, point_attribute_name=’point’) Accepts a geoqueryset, list of objects or list of dictionaries, expected to contain GeoDjango Point objects as one of their attributes. Returns a Point object with the mean center of the provided points. The mean center is the average x and y of all those points. By default, the function expects the Point field on your model to be called ‘point’. If the point field is called something else, change the kwarg ‘point_attribute_name’ to whatever your field might be called. >>> import calculate >>> calculate.mean_center(qs) <Point object at 0x77a1694> 2.2.2 Nudge points nudge_points(geoqueryset, point_attribute_name=’point’, radius=0.0001) A utility that accepts a GeoDjango QuerySet and nudges slightly apart any identical points. Nothing is returned. By default, the distance of the move is 0.0001 decimal degrees. I’m not sure if this will go wrong if your data is in a different unit of measurement.
This can be useful for running certain geospatial statistics, or even for presentation issues, like spacing out markers on a Google Map for instance. >>> import calculate >>> calculate.nudge_points(qs) >>> 2.2.3 Random point random_point(extent) A utility that accepts the extent of a polygon and returns a random point from within its boundaries. The extent is a four-point tuple with (xmin, ymin, xmax, ymax). >>> polygon = Model.objects.get(pk=1).polygon >>> import calculate >>> calculate.random_point(polygon.extent) 2.2.4 Standard-deviation distance standard_deviation_distance(obj_list, point_attribute_name=’point’) Accepts a geoqueryset, list of objects or list of dictionaries, expected to contain objects with Point properties, and returns a float with the standard deviation distance of the provided points. The standard deviation distance is the average variation in the distance of points from the mean center. By default, the function expects the Point field on your model to be called point. If the point field is called something else, change the kwarg point_attribute_name to whatever your field might be called. >>> import calculate >>> calculate.standard_deviation_distance(qs) 0.046301584704149731 CHAPTER 3 Contributing • Code repository: https://github.com/datadesk/latimes-calculate • Issues: https://github.com/datadesk/latimes-calculate/issues • Packaging: https://pypi.python.org/pypi/latimes-calculate • Testing: https://travis-ci.org/datadesk/latimes-calculate • Coverage: https://coveralls.io/r/datadesk/latimes-calculate
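To make the competition-ranking ("1224") behaviour described in section 2.1.5 concrete, here is a tiny pure-Python sketch of the semantics. It is only an illustration; the package's own function additionally handles Django querysets, objects, and lists of dictionaries.

```
def competition_rank(values, value, descending=True):
    # Competition ("1224") ranking: ties share a rank and a gap is left
    # before the next distinct value.
    if descending:
        better = sum(1 for v in values if v > value)
    else:
        better = sum(1 for v in values if v < value)
    return better + 1

home_runs = [714, 660, 512, 512, 511]
print(competition_rank(home_runs, 512))  # 3 (both 512s share third place)
print(competition_rank(home_runs, 511))  # 5 (the gap after the tie)
```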
landscapemetrics
cran
R
Package ‘landscapemetrics’ October 3, 2023 Type Package Title Landscape Metrics for Categorical Map Patterns Version 2.0.0 Maintainer <NAME> <<EMAIL>> Description Calculates landscape metrics for categorical landscape patterns in a tidy workflow. 'landscapemetrics' reimplements the most common metrics from 'FRAGSTATS' (<https://www.fragstats.org/>) and new ones from the current literature on landscape metrics. This package supports 'terra' SpatRaster objects as input arguments. It further provides utility functions to visualize patches, select metrics and building blocks to develop new metrics. License GPL-3 URL https://r-spatialecology.github.io/landscapemetrics/ BugReports https://github.com/r-spatialecology/landscapemetrics/issues Depends R (>= 3.6) Imports cli, ggplot2, methods, Rcpp (>= 0.11.0), stats, terra, tibble Suggests covr, dplyr, knitr, raster, rmarkdown, sf, sp, stars, stringr, testthat, tidyr LinkingTo Rcpp, RcppArmadillo ByteCompile true Encoding UTF-8 LazyData true Config/testthat/edition 3 RoxygenNote 7.2.3 VignetteBuilder knitr NeedsCompilation yes Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-1125-9918>), <NAME> [aut] (<https://orcid.org/0000-0002-3042-5435>), <NAME> [aut] (<https://orcid.org/0000-0002-1057-3721>), <NAME> [aut] (<https://orcid.org/0000-0002-3990-4897>), <NAME> [ctb] (Input on package structure), Jeffrey Hollister [ctb] (Input on package structure), <NAME> [ctb] (Input on package structure), <NAME> [ctb] (Original author of underlying C++ code for get_nearestneighbour() function), Project Nayuki [ctb] (Original author of underlying C++ code for get_circumscribingcircle and lsm_p_circle), <NAME> [ctb] (Bugfix in sample_metrics()) Repository CRAN Date/Publication 2023-10-03 13:00:02 UTC R topics documented: augusta_nlc... 5 calculate_correlatio... 6 calculate_ls... 7 check_landscap... 9 extract_ls... 10 get_adjacencie... 11 get_boundarie... 12 get_centroid... 14 get_circumscribingcircl... 15 get_nearestneighbou... 16 get_patche... 17 get_unique_value... 18 landscap... 19 landscape_as_lis... 19 list_ls... 20 lsm_abbreviations_name... 22 lsm_c_a... 23 lsm_c_area_c... 24 lsm_c_area_m... 25 lsm_c_area_s... 26 lsm_c_c... 27 lsm_c_cai_c... 29 lsm_c_cai_m... 30 lsm_c_cai_s... 31 lsm_c_circle_c... 33 lsm_c_circle_m... 34 lsm_c_circle_s... 36 lsm_c_clump... 37 lsm_c_cohesio... 38 lsm_c_contig_c... 39 lsm_c_contig_m... 41 lsm_c_contig_s... 42 lsm_c_core_c... 43 lsm_c_core_m... 45 lsm_c_core_s... 46 lsm_c_cplan... 48 lsm_c_dca... 49 lsm_c_dcore_c... 51 lsm_c_dcore_m... 52 lsm_c_dcore_s... 53 lsm_c_divisio... 55 lsm_c_e... 56 lsm_c_enn_c... 58 lsm_c_enn_m... 59 lsm_c_enn_s... 60 lsm_c_frac_c... 62 lsm_c_frac_m... 63 lsm_c_frac_s... 64 lsm_c_gyrate_c... 65 lsm_c_gyrate_m... 67 lsm_c_gyrate_s... 68 lsm_c_ij... 69 lsm_c_lp... 71 lsm_c_ls... 72 lsm_c_mes... 73 lsm_c_ndc... 74 lsm_c_nls... 76 lsm_c_n... 77 lsm_c_pafra... 78 lsm_c_para_c... 79 lsm_c_para_m... 81 lsm_c_para_s... 82 lsm_c_p... 83 lsm_c_plad... 84 lsm_c_plan... 85 lsm_c_shape_c... 87 lsm_c_shape_m... 88 lsm_c_shape_s... 89 lsm_c_spli... 90 lsm_c_tc... 92 lsm_c_t... 93 lsm_l_a... 94 lsm_l_area_c... 96 lsm_l_area_m... 97 lsm_l_area_s... 98 lsm_l_cai_c... 99 lsm_l_cai_m... 101 lsm_l_cai_s... 102 lsm_l_circle_c... 104 lsm_l_circle_m... 105 lsm_l_circle_s... 106 lsm_l_cohesio... 107 lsm_l_conden... 109 lsm_l_conta... 110 lsm_l_contig_c... 111 lsm_l_contig_m... 112 lsm_l_contig_s... 114 lsm_l_core_c... 115 lsm_l_core_m... 117 lsm_l_core_s... 118 lsm_l_dca... 119 lsm_l_dcore_c... 121 lsm_l_dcore_m... 
122 lsm_l_dcore_s... 124 lsm_l_divisio... 125 lsm_l_e... 126 lsm_l_enn_c... 128 lsm_l_enn_m... 129 lsm_l_enn_s... 130 lsm_l_en... 132 lsm_l_frac_c... 133 lsm_l_frac_m... 134 lsm_l_frac_s... 135 lsm_l_gyrate_c... 137 lsm_l_gyrate_m... 138 lsm_l_gyrate_s... 139 lsm_l_ij... 141 lsm_l_joinen... 142 lsm_l_lp... 143 lsm_l_ls... 144 lsm_l_mes... 145 lsm_l_msid... 147 lsm_l_msie... 148 lsm_l_mutin... 149 lsm_l_ndc... 150 lsm_l_n... 152 lsm_l_pafra... 153 lsm_l_para_c... 154 lsm_l_para_m... 156 lsm_l_para_s... 157 lsm_l_p... 158 lsm_l_plad... 159 lsm_l_p... 160 lsm_l_pr... 161 lsm_l_relmutin... 162 lsm_l_rp... 163 lsm_l_shape_c... 165 lsm_l_shape_m... 166 lsm_l_shape_s... 167 lsm_l_shd... 168 lsm_l_she... 169 lsm_l_sid... 171 lsm_l_sie... 172 lsm_l_spli... 173 lsm_l_t... 174 lsm_l_tc... 175 lsm_l_t... 177 lsm_p_are... 178 lsm_p_ca... 179 lsm_p_circl... 181 lsm_p_conti... 182 lsm_p_cor... 183 lsm_p_en... 185 lsm_p_fra... 186 lsm_p_gyrat... 187 lsm_p_ncor... 188 lsm_p_par... 190 lsm_p_peri... 191 lsm_p_shap... 192 options_landscapemetric... 193 podlasie_ccil... 194 sample_ls... 194 show_core... 196 show_correlatio... 197 show_ls... 198 show_patche... 199 spatialize_ls... 200 window_ls... 202 augusta_nlcd Augusta NLCD 2011 Description A real landscape of area near Augusta, Georgia obtained from the National Land Cover Database (NLCD) Usage augusta_nlcd Format A raster object. Source https://www.mrlc.gov/nlcd2011.php References <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., 2015, Completion of the 2011 National Land Cover Database for the conterminous United States-Representing a decade of land cover change information. Pho- togrammetric Engineering and Remote Sensing, v. 81, no. 5, p. 345-354 calculate_correlation Calculate correlation Description Calculate correlation Usage calculate_correlation( metrics, method = "pearson", diag = TRUE, simplify = FALSE ) Arguments metrics Tibble with results of as returned by the landscapemetrics package. method Type of correlation. See link{cor} for details. diag If FALSE, values on the diagonal will be NA. simplify If TRUE and only one level is present, only a tibble is returned. Details The functions calculates the correlation between all metrics. In order to calculate correlations, for the landscape level more than one landscape needs to be present. All input must be structured as returned by the landscapemetrics package. Value list Examples landscape <- terra::rast(landscapemetrics::landscape) metrics <- calculate_lsm(landscape, what = c("patch", "class")) calculate_correlation(metrics, method = "pearson") calculate_lsm calculate_lsm Description Calculate a selected group of metrics Usage calculate_lsm( landscape, level = NULL, metric = NULL, name = NULL, type = NULL, what = NULL, directions = 8, count_boundary = FALSE, consider_boundary = FALSE, edge_depth = 1, cell_center = FALSE, classes_max = NULL, neighbourhood = 4, ordered = TRUE, base = "log2", full_name = FALSE, verbose = TRUE, progress = FALSE ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. level Level of metrics. Either ’patch’, ’class’ or ’landscape’ (or vector with combina- tion). metric Abbreviation of metrics (e.g. ’area’). name Full name of metrics (e.g. ’core area’). type Type according to FRAGSTATS grouping (e.g. ’aggregation metrics’). what Selected level of metrics: either "patch", "class" or "landscape". 
It is also pos- sible to specify functions as a vector of strings, e.g. what = c("lsm_c_ca", "lsm_l_ta"). directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). count_boundary Include landscape boundary in edge length. consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core. edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell. cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. classes_max Potential maximum number of present classes. neighbourhood The number of directions in which cell adjacencies are considered as neigh- bours: 4 (rook’s case) or 8 (queen’s case). The default is 4. ordered The type of pairs considered. Either ordered (TRUE) or unordered (FALSE). The default is TRUE. base The unit in which entropy is measured. The default is "log2", which compute entropy in "bits". "log" and "log10" can be also used. full_name Should the full names of all functions be included in the tibble. verbose Print warning messages. progress Print progress report. Details Wrapper to calculate several landscape metrics. The metrics can be specified by the arguments what, level, metric, name and/or type (combinations of different arguments are possible (e.g. level = "class", type = "aggregation metric"). If an argument is not provided, automat- ically all possibilities are selected. Therefore, to get all available metrics, don’t specify any of the above arguments. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also list_lsm Examples ## Not run: landscape <- terra::rast(landscapemetrics::landscape) calculate_lsm(landscape, progress = TRUE) calculate_lsm(landscape, what = c("patch", "lsm_c_te", "lsm_l_pr")) calculate_lsm(landscape, level = c("class", "landscape"), type = "aggregation metric") ## End(Not run) check_landscape Check input landscape Description Check input landscape Usage check_landscape(landscape, verbose = TRUE) Arguments landscape Raster* Layer, Stack, Brick, Stars or a list of rasterLayers verbose Print warning messages. Details This function extracts basic information about the input landscape. It includes a type of coordinate reference system (crs) - either "geographic", "projected", or NA, units of the coordinate reference system, a class of the input landscape’s values and the number of classes found in the landscape. Value tibble Examples augusta_nlcd <- terra::rast(landscapemetrics::augusta_nlcd) check_landscape(augusta_nlcd) podlasie_ccilc <- terra::rast(landscapemetrics::podlasie_ccilc) check_landscape(podlasie_ccilc) landscape <- terra::rast(landscapemetrics::landscape) check_landscape(c(landscape, landscape)) extract_lsm extract_lsm Description Extract metrics Usage extract_lsm( landscape, y, extract_id = NULL, metric = NULL, name = NULL, type = NULL, what = NULL, directions = 8, progress = FALSE, verbose = TRUE, ... ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. y 2-column matrix with coordinates or sf point geometries. extract_id Vector with id of sample points. If not provided, sample points will be labelled 1...n. metric Abbreviation of metrics (e.g. ’area’). name Full name of metrics (e.g. 
’core area’) type Type according to FRAGSTATS grouping (e.g. ’aggregation metrics’). what Selected level of metrics: either "patch", "class" or "landscape". It is also pos- sible to specify functions as a vector of strings, e.g. what = c("lsm_c_ca", "lsm_l_ta"). directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). progress Print progress report. verbose Print warning messages. ... Arguments passed to calculate_lsm(). Details This functions extracts the metrics of all patches the spatial object(s) y (e.g. spatial points) are located within. Only patch level metrics are possible to extract. Please be aware that the output is slightly different to all other lsm-function of landscapemetrics. Returns a tibble with chosen metrics and the ID of the spatial objects. Value tibble See Also calculate_lsm Examples landscape <- terra::rast(landscapemetrics::landscape) points <- matrix(c(10, 5, 25, 15, 5, 25), ncol = 2, byrow = TRUE) extract_lsm(landscape, y = points) extract_lsm(landscape, y = points, type = "aggregation metric") ## Not run: # use lines ## End(Not run) get_adjacencies get_adjacencies Description Fast calculation of adjacencies between classes in a raster Usage get_adjacencies(landscape, neighbourhood = 4, what = "full", upper = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. neighbourhood The number of directions in which cell adjacencies are considered as neigh- bours: 4 (rook’s case), 8 (queen’s case) or a binary matrix where the ones define the neighbourhood. The default is 4. what Which adjacencies to calculate: "full" for a full adjacency matrix, "like" for the diagonal, "unlike" for the off diagonal part of the matrix and "triangle" for a triangular matrix counting adjacencies only once. upper Logical value indicating whether the upper triangle of the adjacency matrix should be returned (default FALSE). Details A fast implementation with Rcpp to calculate the adjacency matrix for raster. The adjacency matrix is most often used in landscape metrics to describe the configuration of landscapes, is it is a cellwise count of edges between classes. The "full" adjacency matrix is double-count method, as it contains the pairwise counts of cells between all classes. The diagonal of this matrix contains the like adjacencies, a count for how many edges a shared in each class with the same class. The "unlike" adjacencies are counting the cellwise edges between different classes. Value matrix with adjacencies between classes in a raster and between cells from the same class. Examples landscape <- terra::rast(landscapemetrics::landscape) # calculate full adjacency matrix get_adjacencies(landscape, 4) # equivalent with the terra package: adjacencies <- terra::adjacent(landscape, 1:terra::ncell(landscape), "rook", pairs = TRUE) table(terra::values(landscape, mat = FALSE)[adjacencies[,1]], terra::values(landscape, mat = FALSE)[adjacencies[,2]]) # count diagonal neighbour adjacencies diagonal_matrix <- matrix(c(1, NA, 1, NA, 0, NA, 1, NA, 1), 3, 3, byrow = TRUE) get_adjacencies(landscape, diagonal_matrix) get_boundaries get_boundaries Description Get boundary cells of patches Usage get_boundaries( landscape, consider_boundary = FALSE, edge_depth = 1, as_NA = FALSE, patch_id = FALSE, return_raster = TRUE ) Arguments landscape SpatRaster or matrix. consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as edge. 
edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell. as_NA If true, non-boundary cells area labeld NA. patch_id If true, boundary/edge cells are labeled with the original patch id. return_raster If false, matrix is returned. Details All boundary/edge cells are labeled 1, all non-boundary cells 0. NA values are not changed. Bound- ary cells are defined as cells that neighbour either a NA cell or a cell with a different value than itself. Non-boundary cells only neighbour cells with the same value than themself. Value List with RasterLayer or matrix Examples landscape <- terra::rast(landscapemetrics::landscape) class_1 <- get_patches(landscape, class = 1)[[1]][[1]] get_boundaries(class_1) get_boundaries(class_1, return_raster = FALSE) get_centroids get_centroids Description Centroid of patches Usage get_centroids( landscape, directions = 8, cell_center = FALSE, return_vec = FALSE, verbose = TRUE ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. return_vec If true, a sf object is returned. verbose Print warning messages Details Get the coordinates of the centroid of each patch. The centroid is by default defined as the mean lo- cation of all cell centers. To force the centroid to be located within each patch, use the cell_center argument. In this case, the centroid is defined as the cell center that is the closest to the mean loca- tion. Examples # get centroid location landscape <- terra::rast(landscapemetrics::landscape) get_centroids(landscape) get_circumscribingcircle get_circumscribingcircle Description Diameter of the circumscribing circle around patches Usage get_circumscribingcircle(landscape, directions = 8, level = "patch") Arguments landscape SpatRaster or matrix (with x, y, id columns) directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). level Either ’patch’ or ’class’ for the corresponding level. Details The diameter of the smallest circumscribing circle around a patch in the landscape is based on the maximum distance between the corners of each cell. This ensures that all cells of the patch are included in the patch. References Based on C++ code from Project Nayuki (https://www.nayuki.io/page/smallest-enclosing-circle). Examples landscape <- terra::rast(landscapemetrics::landscape) # get circle around each patch get_circumscribingcircle(landscape) # get circle around whole class get_circumscribingcircle(landscape, level = "class") get_nearestneighbour get_nearestneighbour Description Euclidean distance to nearest neighbour Usage get_nearestneighbour(landscape, return_id = FALSE) Arguments landscape SpatRaster or matrix (with x,y,id columns). return_id If TRUE, also the patch ID of the nearest neighbour is returned. Details Fast and memory safe Rcpp implementation for calculating the minimum Euclidean distances to the nearest patch of the same class in a raster or matrix. All patches need an unique ID (see get_patches). Please be aware that the patch ID is not identical to the patch ID of all metric functions (lsm_). If return_ID = TRUE, for some focal patches several nearest neighbour patches might be returned. 
References Based on RCpp code of <NAME> <<EMAIL>> Examples # get patches for class 1 landscape <- terra::rast(landscapemetrics::landscape) class_1 <- get_patches(landscape, class = 2)[[1]][[1]] # calculate the distance between patches get_nearestneighbour(class_1) get_nearestneighbour(class_1, return_id = TRUE) get_patches get_patches Description Connected components labeling to derive patches in a landscape. Usage get_patches( landscape, class = "all", directions = 8, to_disk = getOption("to_disk", default = FALSE), return_raster = TRUE ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. class Either "all" (default) for every class in the raster, or specify class value. See Details. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). to_disk Logical argument, if FALSE results of get_patches are hold in memory. If true, get_patches writes temporary files and hence, does not hold everything in mem- ory. Can be set with a global option, e.g. option(to_disk = TRUE). See Details. return_raster If false, matrix is returned Details Searches for connected patches (neighbouring cells of the same class i). The 8-neighbours rule (’queen’s case) or 4-neighbours rule (rook’s case) is used. Returns a list with raster. For each class the connected patches have the value 1 - n. All cells not belonging to the class are NA. Landscape metrics rely on the delineation of patches. Hence, get_patches is heavily used in landscapemetrics. As raster can be quite big, the fact that get_patches creates a copy of the raster for each class in a landscape becomes a burden for computer memory. Hence, the argument to_disk allows to store the results of the connected labeling algorithm on disk. Furthermore, this option can be set globally, so that every function that internally uses get_patches can make use of that. Value List References <NAME>., <NAME>. 1991. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence. 13 (6), 583-598 Examples landscape <- terra::rast(landscapemetrics::landscape) # check for patches of class 1 patched_raster <- get_patches(landscape, class = 1) # count patches nrow(terra::unique(patched_raster[[1]][[1]])) # check for patches of every class patched_raster <- get_patches(landscape) get_unique_values get_unique_values Description This function returns the unique values of an object. Usage get_unique_values(x, simplify = FALSE, verbose = TRUE) Arguments x Vector, matrix, raster, stars, or terra object or list of previous. simplify If true, a vector will be returned instead of a list for 1-dimensional input verbose If true, warning messages are printed Details Fast and memory friendly Rcpp implementation to find the unique values of an object. Examples landscape <- terra::rast(landscapemetrics::landscape) get_unique_values(landscape) landscape_stack <- c(landscape, landscape, landscape) get_unique_values(landscape_stack) landscape_matrix <- terra::as.matrix(landscape, wide = TRUE) get_unique_values(landscape_matrix) x_vec <- c(1, 2, 1, 1, 2, 2) get_unique_values(x_vec) landscape_list <- list(landscape, landscape_matrix, x_vec) get_unique_values(landscape_list) landscape Example map (random cluster neutral landscape model). Description An example map to show landscapemetrics functionality generated with the nlm_randomcluster() algorithm. Usage landscape Format A raster object. 
Source Simulated neutral landscape model with R. https://github.com/ropensci/NLMR/ landscape_as_list Landscape as list Description Convert raster input to list Usage landscape_as_list(landscape) ## S3 method for class 'SpatRaster' landscape_as_list(landscape) ## S3 method for class 'RasterLayer' landscape_as_list(landscape) ## S3 method for class 'RasterBrick' landscape_as_list(landscape) ## S3 method for class 'RasterStack' landscape_as_list(landscape) ## S3 method for class 'stars' landscape_as_list(landscape) ## S3 method for class 'list' landscape_as_list(landscape) ## S3 method for class 'matrix' landscape_as_list(landscape) ## S3 method for class 'numeric' landscape_as_list(landscape) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters Details Mainly for internal use Value list Examples landscape <- terra::rast(landscapemetrics::landscape) landscape_as_list(c(landscape, landscape)) list_lsm List landscape metrics Description List landscape metrics Usage list_lsm( level = NULL, metric = NULL, name = NULL, type = NULL, what = NULL, simplify = FALSE, verbose = TRUE ) Arguments level Level of metrics. Either ’patch’, ’class’ or ’landscape’ (or vector with combina- tion). metric Abbreviation of metrics (e.g. ’area’). name Full name of metrics (e.g. ’core area’) type Type according to FRAGSTATS grouping (e.g. ’aggregation metrics’). what Selected level of metrics: either "patch", "class" or "landscape". It is also pos- sible to specify functions as a vector of strings, e.g. what = c("lsm_c_ca", "lsm_l_ta"). simplify If true, function names are returned as vector. verbose Print warning messages Details List all available landscape metrics depending on the provided filter arguments. If an argument is not provided, automatically all possibilities are selected. Therefore, to get all available metrics, use simply list_lsm(). For all arguments with exception of the what argument, it is also possible to use a negative subset, i.e. all metrics but the selected ones. Therefore, simply use e.g. level = "-patch". Furthermore, it is possible to only get a vector with all function names instead of the full tibble. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org Examples list_lsm(level = c("patch", "landscape"), type = "aggregation metric") list_lsm(level = "-patch", type = "area and edge metric") list_lsm(metric = "area", simplify = TRUE) list_lsm(metric = "area", what = "lsm_p_shape") list_lsm(metric = "area", what = c("patch", "lsm_l_ta")) list_lsm(what = c("lsm_c_tca", "lsm_l_ta")) lsm_abbreviations_names Tibble of abbreviations coming from FRAGSTATS Description A single tibble for every abbreviation of every metric that is reimplemented in landscapemetrics and its corresponding full name in the literature. Usage lsm_abbreviations_names Format A tibble object. Details Can be used after calculating the metric(s) with a join to have a more readable results tibble or for visualizing your results. 
Examples landscape <- terra::rast(landscapemetrics::landscape) patch_area <- lsm_p_area(landscape) patch_area <- merge(x = patch_area, y = lsm_abbreviations_names, by = c("level", "metric")) lsm_c_ai AI (class level) Description Aggregation index (Aggregation metric) Usage lsm_c_ai(landscape) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters Details " # gii AI = (100) max − gii where gii is the number of like adjacencies based on the single-count method and max − gii is the classwise maximum number of like adjacencies of class i. AI is an ’Aggregation metric’. It equals the number of like adjacencies divided by the theoretical maximum possible number of like adjacencies for that class. The metric is based on he adjacency matrix and the the single-count method. Units: Percent Range: 0 <= AI <= 100 Behaviour: Equals 0 for maximally disaggregated and 100 for maximally aggregated classes. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., <NAME>., & <NAME>. 2000. An aggregation index (AI) to quantify spatial patterns of landscapes. Landscape ecology, 15(7), 591-601. See Also lsm_l_ai Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_ai(landscape) lsm_c_area_cv AREA_CV (class level) Description Coefficient of variation of patch area (Area and edge metric) Usage lsm_c_area_cv(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details AREACV = cv(AREA[patchij ]) where AREA[patchij ] is the area of each patch in hectares. AREA_CV is an ’Area and Edge metric’. The metric summarises each class as the Coefficient of variation of all patch areas belonging to class i. The metric describes the differences among patches of the same class i in the landscape and is easily comparable because it is scaled to the mean. Units: Hectares Range: AREA_CV >= 0 Behaviour: Equals AREA_CV = 0 if all patches are identical in size. Increases, without limit, as the variation of patch areas increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_area, lsm_c_area_mn, lsm_c_area_sd, lsm_l_area_mn, lsm_l_area_sd, lsm_l_area_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_area_cv(landscape) lsm_c_area_mn AREA_MN (class level) Description Mean of patch area (Area and edge metric) Usage lsm_c_area_mn(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details AREAM N = mean(AREA[patchij ]) where AREA[patchij ] is the area of each patch in hectares AREA_MN is an ’Area and Edge metric’. The metric summarises each class as the mean of all patch areas belonging to class i. The metric is a simple way to describe the composition of the landscape. 
Especially together with the total class area (lsm_c_ca), it can also give an an idea of patch structure (e.g. many small patches vs. few larges patches). Units: Hectares Range: AREA_MN > 0 Behaviour: Approaches AREA_MN = 0 if all patches are small. Increases, without limit, as the patch areas increase. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_area, mean, lsm_c_area_cv, lsm_c_area_sd, lsm_l_area_mn, lsm_l_area_sd, lsm_l_area_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_area_mn(landscape) lsm_c_area_sd AREA_SD (class level) Description Standard deviation of patch area (Area and edge metric) Usage lsm_c_area_sd(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details AREASD = sd(AREA[patchij ]) where AREA[patchij ] is the area of each patch in hectares. AREA_SD is an ’Area and Edge metric’. The metric summarises each class as the standard devia- tion of all patch areas belonging to class i. The metric describes the differences among patches of the same class i in the landscape. Units: Hectares Range: AREA_SD >= 0 Behaviour: Equals AREA_SD = 0 if all patches are identical in size. Increases, without limit, as the variation of patch areas increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_area, sd, lsm_c_area_mn, lsm_c_area_cv, lsm_l_area_mn, lsm_l_area_sd, lsm_l_area_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_area_sd(landscape) lsm_c_ca CA (class level) Description Total (class) area (Area and edge metric) Usage lsm_c_ca(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details CA = sum(AREA[patchij ]) where AREA[patchij ] is the area of each patch in hectares. CA is an ’Area and edge metric’ and a measure of composition. The total (class) area sums the area of all patches belonging to class i. It shows if the landscape is e.g. dominated by one class or if all classes are equally present. CA is an absolute measure, making comparisons among landscapes with different total areas difficult. Units: Hectares Range: CA > 0 Behaviour: Approaches CA > 0 as the patch areas of class i become small. Increases, without limit, as the patch areas of class i become large. CA = TA if only one class is present. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_area, sum, lsm_l_ta Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_ca(landscape) lsm_c_cai_cv CAI_CV (class level) Description Coefficient of variation of core area index (Core area metric) Usage lsm_c_cai_cv( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details CAICV = cv(CAI[patchij ] where CAI[patchij ] is the core area index of each patch. CAI_CV is a ’Core area metric’. The metric summarises each class as the Coefficient of variation of the core area index of all patches belonging to class i. The core area index is the percentage of core area in relation to patch area. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). The metric describes the differences among patches of the same class i in the landscape. Because it is scaled to the mean, it is easily comparable. Units: Percent Range: CAI_CV >= 0 Behaviour: Equals CAI_CV = 0 if the core area index is identical for all patches. Increases, without limit, as the variation of the core area indices increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_cai, lsm_c_cai_mn, lsm_c_cai_sd, lsm_l_cai_mn, lsm_l_cai_sd, lsm_l_cai_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_cai_cv(landscape) lsm_c_cai_mn CAI_MN (class level) Description Mean of core area index (Core area metric) Usage lsm_c_cai_mn( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details CAIM N = mean(CAI[patchij ] where CAI[patchij ] is the core area index of each patch. CAI_MN is a ’Core area metric’. The metric summarises each class as the mean of the core area index of all patches belonging to class i. The core area index is the percentage of core area in relation to patch area. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). Units: Percent Range: 0 <= CAI_MN <= 100 Behaviour: CAI_MN = 0 when all patches have no core area and approaches CAI_MN = 100 with increasing percentage of core area within patches. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_cai, mean, lsm_c_cai_sd, lsm_c_cai_cv, lsm_l_cai_mn, lsm_l_cai_sd, lsm_l_cai_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_cai_mn(landscape) lsm_c_cai_sd CAI_SD (class level) Description Standard deviation of core area index (Core area metric) Usage lsm_c_cai_sd( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details CAISD = sd(CAI[patchij ] where CAI[patchij ] is the core area index of each patch. CAI_SD is a ’Core area metric’. The metric summarises each class as the standard deviation of the core area index of all patches belonging to class i. The core area index is the percentage of core area in relation to patch area. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). The metric describes the differences among patches of the same class i in the landscape. Units: Percent Range: CAI_SD >= 0 Behaviour: Equals CAI_SD = 0 if the core area index is identical for all patches. Increases, without limit, as the variation of core area indices increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_cai, sd, lsm_c_cai_mn, lsm_c_cai_cv, lsm_l_cai_mn, lsm_l_cai_sd, lsm_l_cai_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_cai_sd(landscape) lsm_c_circle_cv CIRCLE_CV (Class level) Description Coefficient of variation of related circumscribing circle (Shape metric) Usage lsm_c_circle_cv(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details CIRCLECV = cv(CIRCLE[patchij ]) where CIRCLE[patchij ] is the related circumscribing circle of each patch. CIRCLE_CV is a ’Shape metric’ and summarises each class as the Coefficient of variation of the related circumscribing circle of all patches belonging to class i. CIRCLE describes the ratio between the patch area and the smallest circumscribing circle of the patch and characterises the compactness of the patch. CIRCLE_CV describes the differences among patches of the same class i in the landscape. Because it is scaled to the mean, it is easily comparable. Units: None Range: CIRCLE_CV >= 0 Behaviour: Equals CIRCLE_CV if the related circumscribing circle is identical for all patches. Increases, without limit, as the variation of related circumscribing circles increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. 1992. 
The r.le programs for multiscale analysis of landscape structure using the GRASS geographical information system. Landscape Ecology 7: 291-302. Based on C++ code from Project Nayuki (https://www.nayuki.io/page/smallest-enclosing-circle). See Also lsm_p_circle, mean, lsm_c_circle_mn, lsm_c_circle_sd, lsm_l_circle_mn, lsm_l_circle_sd, lsm_l_circle_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_circle_cv(landscape) lsm_c_circle_mn CIRCLE_MN (Class level) Description Mean of related circumscribing circle (Shape metric) Usage lsm_c_circle_mn(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details CIRCLEM N = mean(CIRCLE[patchij ]) where CIRCLE[patchij ] is the related circumscribing circle of each patch. CIRCLE_MN is a ’Shape metric’ and summarises each class as the mean of the related circum- scribing circle of all patches belonging to class i. CIRCLE describes the ratio between the patch area and the smallest circumscribing circle of the patch and characterises the compactness of the patch. Units: None Range: CIRCLE_MN > 0 Behaviour: Approaches CIRCLE_MN = 0 if the related circumscribing circle of all patches is small. Increases, without limit, as the related circumscribing circles increase. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. 1992. The r.le programs for multiscale analysis of landscape structure using the GRASS geographical information system. Landscape Ecology 7: 291-302. Based on C++ code from Project Nayuki (https://www.nayuki.io/page/smallest-enclosing-circle). See Also lsm_p_circle, mean, lsm_c_circle_sd, lsm_c_circle_cv, lsm_l_circle_mn, lsm_l_circle_sd, lsm_l_circle_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_circle_mn(landscape) lsm_c_circle_sd CIRCLE_SD (Class level) Description Standard deviation of related circumscribing circle (Shape metric) Usage lsm_c_circle_sd(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details CIRCLESD = sd(CIRCLE[patchij ]) where CIRCLE[patchij ] is the related circumscribing circle of each patch. CIRCLE_SD is a ’Shape metric’ and summarises each class as the standard deviation of the related circumscribing circle of all patches belonging to class i. CIRCLE describes the ratio between the patch area and the smallest circumscribing circle of the patch and characterises the compactness of the patch. The metric describes the differences among patches of the same class i in the landscape. Units: None Range: CIRCLE_SD >= 0 Behaviour: Equals CIRCLE_SD if the related circumscribing circle is identical for all patches. Increases, without limit, as the variation of related circumscribing circles increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. 
1992. The r.le programs for multiscale analysis of landscape structure using the GRASS geographical information system. Landscape Ecology 7: 291-302.
Based on C++ code from Project Nayuki (https://www.nayuki.io/page/smallest-enclosing-circle).

See Also
lsm_p_circle, mean, lsm_c_circle_mn, lsm_c_circle_cv, lsm_l_circle_mn, lsm_l_circle_sd, lsm_l_circle_cv

Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_circle_sd(landscape)

lsm_c_clumpy            CLUMPY (class level)

Description
Clumpiness index (Aggregation metric)

Usage
lsm_c_clumpy(landscape)

Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters

Details
Given G_i = g_ii / ((sum_k g_ik) - min_e_i):
CLUMPY = (G_i - P_i) / P_i        for G_i < P_i and P_i < 0.5
CLUMPY = (G_i - P_i) / (1 - P_i)  otherwise
where g_ii is the number of like adjacencies, g_ik is the class-wise number of all adjacencies including the focal class, min_e_i is the minimum perimeter of the total class in terms of cell surfaces assuming total clumping, and P_i is the proportion of landscape occupied by each class.
CLUMPY is an 'Aggregation metric'. It equals the proportional deviation of the proportion of like adjacencies involving the corresponding class from that expected under a spatially random distribution. The metric is based on the adjacency matrix and the double-count method.
Units: None
Range: -1 <= CLUMPY <= 1
Behaviour: Equals -1 for maximally disaggregated, 0 for randomly distributed and 1 for maximally aggregated classes.

Value
tibble

References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_clumpy(landscape)

lsm_c_cohesion          COHESION (class level)

Description
Patch Cohesion Index (Aggregation metric)

Usage
lsm_c_cohesion(landscape, directions = 8)

Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details
COHESION = (1 - (sum_j p_ij) / (sum_j (p_ij * sqrt(a_ij)))) * (1 - 1 / sqrt(Z))^(-1) * 100
where p_ij is the perimeter in meters, a_ij is the area in square meters and Z is the number of cells.
COHESION is an 'Aggregation metric'. It characterises the connectedness of patches belonging to class i. It can be used to assess whether patches of the same class are aggregated or rather isolated, and thereby COHESION gives information about the configuration of the landscape.
Units: Percent
Ranges: 0 < COHESION < 100
Behaviour: Approaches COHESION = 0 if patches of class i become more isolated. Increases if patches of class i become more aggregated.

Value
tibble

References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1996. Using landscape indices to predict habitat connectivity. Ecology, 77(4), 1210-1225.
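As an illustrative sketch of the directions argument (not part of the original entry), COHESION can be computed under both neighbourhood rules and compared per class:
landscape <- terra::rast(landscapemetrics::landscape)
# queen's case (8) vs. rook's case (4) patch delineation
coh_queen <- lsm_c_cohesion(landscape, directions = 8)
coh_rook <- lsm_c_cohesion(landscape, directions = 4)
# fewer neighbour directions tend to split patches, which can change the index
cbind(class = coh_queen$class, queen = coh_queen$value, rook = coh_rook$value)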
See Also lsm_p_perim, lsm_p_area, lsm_l_cohesion Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_cohesion(landscape) lsm_c_contig_cv CONTIG_CV (class level) Description Coefficient of variation of Contiguity index (Shape metric) Usage lsm_c_contig_cv(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details CON T IGCV = cv(CON T IG[patchij ]) where CON T IG[patchij ] is the contiguity of each patch. CONTIG_CV is a ’Shape metric’. It summarises each class as the mean of each patch belonging to class i. CONTIG_CV asses the spatial connectedness (contiguity) of cells in patches. The metric coerces patch values to a value of 1 and the background to NA. A nine cell focal filter matrix: filter_matrix <- matrix(c(1, 2, 1, 2, 1, 2, 1, 2, 1), 3, 3, byrow = T) ... is then used to weight orthogonally contiguous pixels more heavily than diagonally contiguous pixels. Therefore, larger and more connections between patch cells in the rookie case result in larger contiguity index values. Units: None Range: CONTIG_CV >= 0 Behaviour: CONTIG_CV = 0 if the contiguity index is identical for all patches. Increases, without limit, as the variation of CONTIG increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1991. Assessing patch shape in landscape mosaics. Photogrammetric Engineering and Remote Sensing, 57(3), 285-293 See Also lsm_p_contig, lsm_c_contig_mn, lsm_c_contig_cv, lsm_l_contig_mn, lsm_l_contig_sd, lsm_l_contig_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_contig_cv(landscape) lsm_c_contig_mn CONTIG_MN (class level) Description Mean of Contiguity index (Shape metric) Usage lsm_c_contig_mn(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details CON T IGM N = mean(CON T IG[patchij ]) where CON T IG[patchij ] is the contiguity of each patch. CONTIG_MN is a ’Shape metric’. It summarises each class as the mean of each patch belonging to class i. CONTIG_MN asses the spatial connectedness (contiguity) of cells in patches. The metric coerces patch values to a value of 1 and the background to NA. A nine cell focal filter matrix: filter_matrix <- matrix(c(1, 2, 1, 2, 1, 2, 1, 2, 1), 3, 3, byrow = T) ... is then used to weight orthogonally contiguous pixels more heavily than diagonally contiguous pixels. Therefore, larger and more connections between patch cells in the rookie case result in larger contiguity index values. Units: None Range: 0 >= CONTIG_MN <= 1 Behaviour: CONTIG equals the mean of the contiguity index on class level for all patches. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org LaGro, J. 1991. Assessing patch shape in landscape mosaics. 
Photogrammetric Engineering and Remote Sensing, 57(3), 285-293 See Also lsm_p_contig, lsm_c_contig_sd, lsm_c_contig_cv, lsm_l_contig_mn, lsm_l_contig_sd, lsm_l_contig_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_contig_mn(landscape) lsm_c_contig_sd CONTIG_SD (class level) Description Standard deviation of Contiguity index (Shape metric) Usage lsm_c_contig_sd(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details CON T IGSD = sd(CON T IG[patchij ]) where CON T IG[patchij ] is the contiguity of each patch. CONTIG_SD is a ’Shape metric’. It summarises each class as the mean of each patch belonging to class i. CONTIG_SD asses the spatial connectedness (contiguity) of cells in patches. The metric coerces patch values to a value of 1 and the background to NA. A nine cell focal filter matrix: filter_matrix <- matrix(c(1, 2, 1, 2, 1, 2, 1, 2, 1), 3, 3, byrow = T) ... is then used to weight orthogonally contiguous pixels more heavily than diagonally contiguous pixels. Therefore, larger and more connections between patch cells in the rookie case result in larger contiguity index values. Units: None Range: CONTIG_CV >= 0 Behaviour: CONTIG_SD = 0 if the contiguity index is identical for all patches. Increases, without limit, as the variation of CONTIG increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1991. Assessing patch shape in landscape mosaics. Photogrammetric Engineering and Remote Sensing, 57(3), 285-293 See Also lsm_p_contig, lsm_c_contig_mn, lsm_c_contig_cv, lsm_l_contig_mn, lsm_l_contig_sd, lsm_l_contig_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_contig_sd(landscape) lsm_c_core_cv CORE_CV (class level) Description Coefficient of variation of core area (Core area metric) Usage lsm_c_core_cv( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details CORECV = cv(CORE[patchij ]) where CORE[patchij ] is the core area in square meters of each patch. CORE_CV is a ’Core area metric’. It equals the Coefficient of variation of the core area of each patch belonging to class i. The core area is defined as all cells that have no neighbour with a different value than themselves (rook’s case). The metric describes the differences among patches of the same class i in the landscape and is easily comparable because it is scaled to the mean. Units: Hectares Range: CORE_CV >= 0 Behaviour: Equals CORE_CV = 0 if all patches have the same core area. Increases, without limit, as the variation of patch core areas increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_core, lsm_c_core_mn, lsm_c_core_sd, lsm_l_core_mn, lsm_l_core_sd, lsm_l_core_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_core_cv(landscape) lsm_c_core_mn CORE_MN (class level) Description Mean of core area (Core area metric) Usage lsm_c_core_mn( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details COREM N = mean(CORE[patchij ]) where CORE[patchij ] is the core area in square meters of each patch. CORE_MN is a ’Core area metric’ and equals the mean of core areas of all patches belonging to class i. The core area is defined as all cells that have no neighbour with a different value than themselves (rook’s case). Units: Hectares Range: CORE_MN >= 0 Behaviour: Equals CORE_MN = 0 if CORE = 0 for all patches. Increases, without limit, as the core area indices increase. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_core, mean, lsm_c_core_sd, lsm_c_core_cv, lsm_l_core_mn, lsm_l_core_sd, lsm_l_core_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_core_mn(landscape) lsm_c_core_sd CORE_SD (class level) Description Standard deviation patch core area (class level) Usage lsm_c_core_sd( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details CORESD = sd(CORE[patchij ]) where CORE[patchij ] is the core area in square meters of each patch. CORE_SD is a ’Core area metric’. It equals the standard deviation of the core area of each patch belonging to class i. The core area is defined as all cells that have no neighbour with a different value than themselves (rook’s case). The metric describes the differences among patches of the same class i in the landscape. Units: Hectares Range: CORE_SD >= 0 Behaviour: Equals CORE_SD = 0 if all patches have the same core area. Increases, without limit, as the variation of patch core areas increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_core, sd, lsm_c_core_mn, lsm_c_core_cv, lsm_l_core_mn, lsm_l_core_sd, lsm_l_core_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_core_sd(landscape) lsm_c_cpland CPLAND (class level) Description Core area percentage of landscape (Core area metric) Usage lsm_c_cpland( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details n acore P ij CP LAN D = ( ) ∗ 100 A where acore ij is the core area in square meters and A is the total landscape area in square meters. CPLAND is a ’Core area metric’. It is the percentage of core area of class i in relation to the total landscape area. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). Because CPLAND is a relative measure, it is comparable among landscapes with different total areas. Units: Percentage Range: 0 <= CPLAND < 100 Behaviour: Approaches CPLAND = 0 if CORE = 0 for all patches. Increases as the amount of core area increases, i.e. patches become larger while being rather simple in shape. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_core and lsm_l_ta Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_cpland(landscape) lsm_c_dcad DCAD (class level) Description Disjunct core area density (core area metric) Usage lsm_c_dcad( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details n ncore P ij DCAD = ( ) ∗ 10000 ∗ 100 A where ncore ij is the number of disjunct core areas and A is the total landscape area in square meters. DCAD is a ’Core area metric’. It equals the number of disjunct core areas per 100 ha relative to the total area. A disjunct core area is a ’patch within the patch’ containing only core cells. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). The metric is relative and therefore comparable among landscapes with different total areas. Units: Number per 100 hectares Range: DCAD >= 0 Behaviour: Equals DCAD = 0 when DCORE = 0, i.e. no patch of class i contains a disjunct core area. Increases, without limit, as disjunct core areas become more present, i.e. patches becoming larger and less complex. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_c_ndca, lsm_l_ta, lsm_l_dcad Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_dcad(landscape) lsm_c_dcore_cv DCORE_CV (class level) Description Coefficient of variation number of disjunct core areas (Core area metric) Usage lsm_c_dcore_cv( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details DCORECV = cv(N CORE[patchij ]) where N CORE[patchij ] is the number of core areas. DCORE_CV is an ’Core area metric’. It summarises each class as the Coefficient of variation of all patch areas belonging to class i. A cell is defined as core if the cell has no neighbour with a different value than itself (rook’s case). NCORE counts the disjunct core areas, whereby a core area is a ’patch within the patch’ containing only core cells. The metric describes the differences among patches of the same class i in the landscape and is easily comparable because it is scaled to the mean. Units: None Range: DCORE_CV >= 0 Behaviour: Equals DCORE_CV = 0 if all patches have the same number of disjunct core areas. Increases, without limit, as the variation of number of disjunct core areas increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_ncore, lsm_c_dcore_mn, lsm_c_dcore_sd, lsm_l_dcore_mn, lsm_l_dcore_sd, lsm_l_dcore_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_dcore_cv(landscape) lsm_c_dcore_mn DCORE_MN (class level) Description Mean number of disjunct core areas (Core area metric) Usage lsm_c_dcore_mn( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details DCOREM N = mean(N CORE[patchij ]) where N CORE[patchij ] is the number of core areas. DCORE_MN is an ’Core area metric’. It summarises each class as the mean of all patch areas belonging to class i. A cell is defined as core if the cell has no neighbour with a different value than itself (rook’s case). NCORE counts the disjunct core areas, whereby a core area is a ’patch within the patch’ containing only core cells. Units: None Range: DCORE_MN > 0 Behaviour: Equals DCORE_MN = 0 if NCORE = 0 for all patches. Increases, without limit, as the number of disjunct core areas increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also
lsm_p_ncore, mean, lsm_c_dcore_sd, lsm_c_dcore_cv, lsm_l_dcore_mn, lsm_l_dcore_sd, lsm_l_dcore_cv

Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_dcore_mn(landscape)

lsm_c_dcore_sd          DCORE_SD (class level)

Description
Standard deviation number of disjunct core areas (Core area metric)

Usage
lsm_c_dcore_sd(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)

Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as a core cell

Details
DCORE_SD = sd(NCORE[patch_ij])
where NCORE[patch_ij] is the number of core areas.
DCORE_SD is a 'Core area metric'. It summarises each class as the standard deviation of the number of disjunct core areas of all patches belonging to class i. A cell is defined as core if the cell has no neighbour with a different value than itself (rook's case). NCORE counts the disjunct core areas, whereby a core area is a 'patch within the patch' containing only core cells. The metric describes the differences among patches of the same class i in the landscape.
Units: None
Range: DCORE_SD >= 0
Behaviour: Equals DCORE_SD = 0 if all patches have the same number of disjunct core areas. Increases, without limit, as the variation of the number of disjunct core areas increases.

Value
tibble

References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also
lsm_p_ncore, sd, lsm_c_dcore_mn, lsm_c_dcore_cv, lsm_l_dcore_mn, lsm_l_dcore_sd, lsm_l_dcore_cv

Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_dcore_sd(landscape)

lsm_c_division          DIVISION (class level)

Description
Landscape division index (Aggregation metric)

Usage
lsm_c_division(landscape, directions = 8)

Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details
DIVISION = 1 - sum_j (a_ij / A)^2
where a_ij is the area in square meters and A is the total landscape area in square meters.
DIVISION is an 'Aggregation metric'. It can be interpreted as the probability that two randomly selected cells are not located in the same patch of class i. The landscape division index is negatively correlated with the effective mesh size (lsm_c_mesh).
Units: Proportion
Ranges: 0 <= DIVISION < 1
Behaviour: Equals DIVISION = 0 if only one patch is present. Approaches DIVISION = 1 if all patches of class i are single cells.

Value
tibble

References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 2000. Landscape division, splitting index, and effective mesh size: new measures of landscape fragmentation. Landscape ecology, 15(2), 115-130.
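The negative relationship with the effective mesh size mentioned above can be checked numerically; assuming MESH and TA are both reported in hectares, the definitions imply DIVISION = 1 - MESH / TA (illustrative sketch only):
landscape <- terra::rast(landscapemetrics::landscape)
div <- lsm_c_division(landscape)
mesh <- lsm_c_mesh(landscape)
ta <- lsm_l_ta(landscape)$value # total landscape area in hectares
# compare DIVISION with the value implied by MESH and TA
cbind(class = div$class, division = div$value, implied = 1 - mesh$value / ta)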
See Also lsm_p_area, lsm_l_ta, lsm_l_division Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_division(landscape) lsm_c_ed ED (class level) Description Edge Density (Area and Edge metric) Usage lsm_c_ed(landscape, count_boundary = FALSE, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. count_boundary Count landscape boundary as edge. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details Pm eik ED = ∗ 10000 A where eik is the total edge length in meters and A is the total landscape area in square meters. ED is an ’Area and Edge metric’. The edge density equals the sum of all edges of class i in relation to the landscape area. The boundary of the landscape is only included in the corresponding total class edge length if count_boundary = TRUE. The metric describes the configuration of the landscape, e.g. because an aggregation of the same class will result in a low edge density. The metric is standardized to the total landscape area, and therefore comparisons among landscapes with different total areas are possible. Units: Meters per hectare Range: ED >= 0 Behaviour: Equals ED = 0 if only one patch is present (and the landscape boundary is not included) and increases, without limit, as the landscapes becomes more patchy Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_c_te, lsm_l_ta, lsm_l_ed Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_ed(landscape) lsm_c_enn_cv ENN_CV (class level) Description Coefficient of variation of euclidean nearest-neighbor distance (Aggregation metric) Usage lsm_c_enn_cv(landscape, directions = 8, verbose = TRUE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). verbose Print warning message if not sufficient patches are present Details EN NCV = cv(EN N [patchij ]) where EN N [patchij ] is the euclidean nearest-neighbor distance of each patch. ENN_CV is an ’Aggregation metric’. It summarises each class as the Coefficient of variation of each patch belonging to class i. ENN measures the distance to the nearest neighbouring patch of the same class i. The distance is measured from edge-to-edge. The range is limited by the cell resolution on the lower limit and the landscape extent on the upper limit. The metric is a simple way to describe patch isolation. Because it is scaled to the mean, it is easily comparable among different landscapes. Units: Meters Range: ENN_CV >= 0 Behaviour: Equals ENN_CV = 0 if the euclidean nearest-neighbor distance is identical for all patches. Increases, without limit, as the variation of ENN increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. (1995). Relationships between landscape structure and breed- ing birds in the Oregon Coast Range. Ecological monographs, 65(3), 235-260. 
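For a quick overview of all three class-level ENN summaries, their tibbles can be stacked (a sketch assuming the shared column layout of all lsm_ functions):
landscape <- terra::rast(landscapemetrics::landscape)
# mean, standard deviation and coefficient of variation of ENN per class
enn_summary <- rbind(lsm_c_enn_mn(landscape),
                     lsm_c_enn_sd(landscape),
                     lsm_c_enn_cv(landscape))
enn_summary[order(enn_summary$class), ]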
See Also lsm_p_enn, lsm_c_enn_mn, lsm_c_enn_sd, lsm_l_enn_mn, lsm_l_enn_sd, lsm_l_enn_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_enn_cv(landscape) lsm_c_enn_mn ENN_MN (class level) Description Mean of euclidean nearest-neighbor distance (Aggregation metric) Usage lsm_c_enn_mn(landscape, directions = 8, verbose = TRUE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). verbose Print warning message if not sufficient patches are present Details EN NM N = mean(EN N [patchij ]) where EN N [patchij ] is the euclidean nearest-neighbor distance of each patch. ENN_MN is an ’Aggregation metric’. It summarises each class as the mean of each patch belonging to class i. ENN measures the distance to the nearest neighbouring patch of the same class i. The distance is measured from edge-to-edge. The range is limited by the cell resolution on the lower limit and the landscape extent on the upper limit. Units: Meters Range: ENN_MN > 0 Behaviour: Approaches ENN_MN = 0 as the distance to the nearest neighbour decreases, i.e. patches of the same class i are more aggregated. Increases, without limit, as the distance between neighbouring patches of the same class i increases, i.e. patches are more isolated. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. (1995). Relationships between landscape structure and breed- ing birds in the Oregon Coast Range. Ecological monographs, 65(3), 235-260. See Also lsm_p_enn, mean, lsm_c_enn_sd, lsm_c_enn_cv, lsm_l_enn_mn, lsm_l_enn_sd, lsm_l_enn_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_enn_mn(landscape) lsm_c_enn_sd ENN_SD (class level) Description Standard deviation of euclidean nearest-neighbor distance (Aggregation metric) Usage lsm_c_enn_sd(landscape, directions = 8, verbose = TRUE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). verbose Print warning message if not sufficient patches are present Details EN NSD = sd(EN N [patchij ]) where EN N [patchij ] is the euclidean nearest-neighbor distance of each patch. ENN_CV is an ’Aggregation metric’. It summarises each class as the standard deviation of each patch belonging to class i. ENN measures the distance to the nearest neighbouring patch of the same class i. The distance is measured from edge-to-edge. The range is limited by the cell resolution on the lower limit and the landscape extent on the upper limit. The metric is a simple way to describe patch isolation. Because it is scaled to the mean, it is easily comparable among different landscapes. Units: Meters Range: ENN_SD >= 0 Behaviour: Equals ENN_SD = 0 if the euclidean nearest-neighbor distance is identical for all patches. Increases, without limit, as the variation of ENN increases. Value tibble References <NAME>., SA Cushman, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. (1995). Relationships between landscape structure and breed- ing birds in the Oregon Coast Range. Ecological monographs, 65(3), 235-260. See Also lsm_p_enn, sd, lsm_c_enn_mn, lsm_c_enn_cv, lsm_l_enn_mn, lsm_l_enn_sd, lsm_l_enn_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_enn_sd(landscape) lsm_c_frac_cv FRAC_CV (class level) Description Coefficient of variation fractal dimension index (Shape metric) Usage lsm_c_frac_cv(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details F RACCV = cv(F RAC[patchij ]) where F RAC[patchij ] equals the fractal dimension index of each patch. FRAC_CV is a ’Shape metric’. The metric summarises each class as the Coefficient of variation of the fractal dimension index of all patches belonging to class i. The fractal dimension index is based on the patch perimeter and the patch area and describes the patch complexity. The Coefficient of variation is scaled to the mean and comparable among different landscapes. Units: None Range: FRAC_CV >= 0 Behaviour: Equals FRAC_CV = 0 if the fractal dimension index is identical for all patches. Increases, without limit, as the variation of the fractal dimension indices increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1977. Fractals: Form, Chance, and Dimension. San Francisco. W. H. Freeman and Company. See Also lsm_p_frac, lsm_c_frac_mn, lsm_c_frac_sd, lsm_l_frac_mn, lsm_l_frac_sd, lsm_l_frac_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_frac_cv(landscape) lsm_c_frac_mn FRAC_MN (class level) Description Mean fractal dimension index (Shape metric) Usage lsm_c_frac_mn(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details F RACM N = mean(F RAC[patchij ]) where F RAC[patchij ] equals the fractal dimension index of each patch. FRAC_MN is a ’Shape metric’. The metric summarises each class as the mean of the fractal di- mension index of all patches belonging to class i. The fractal dimension index is based on the patch perimeter and the patch area and describes the patch complexity. The Coefficient of variation is scaled to the mean and comparable among different landscapes. Units: None Range: FRAC_MN > 0 Behaviour: Approaches FRAC_MN = 1 if all patches are squared and FRAC_MN = 2 if all patches are irregular. References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1977. Fractals: Form, Chance, and Dimension. San Francisco. W. H. Freeman and Company. 
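Because FRAC_MN is defined as the mean of the patch-level index, it can be cross-checked against lsm_p_frac aggregated by class; the sketch below uses only base R:
landscape <- terra::rast(landscapemetrics::landscape)
frac_patch <- lsm_p_frac(landscape)
# class-wise mean of the patch-level fractal dimension index
aggregate(value ~ class, data = frac_patch, FUN = mean)
# should correspond to the class-level summary
lsm_c_frac_mn(landscape)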
See Also lsm_p_frac, mean, lsm_c_frac_sd, lsm_c_frac_cv, lsm_l_frac_mn, lsm_l_frac_sd, lsm_l_frac_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_frac_mn(landscape) lsm_c_frac_sd FRAC_SD (class level) Description Standard deviation fractal dimension index (Shape metric) Usage lsm_c_frac_sd(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details F RACSD = sd(F RAC[patchij ]) where F RAC[patchij ] equals the fractal dimension index of each patch. FRAC_SD is a ’Shape metric’. The metric summarises each class as the standard deviation of the fractal dimension index of all patches belonging to class i. The fractal dimension index is based on the patch perimeter and the patch area and describes the patch complexity. Units: None Range: FRAC_SD>= 0 Behaviour: Equals FRAC_SD = 0 if the fractal dimension index is identical for all patches. Increases, without limit, as the variation of the fractal dimension indices increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1977. Fractals: Form, Chance, and Dimension. San Francisco. W. H. Freeman and Company. See Also lsm_p_frac, sd, lsm_c_frac_mn, lsm_c_frac_cv, lsm_l_frac_mn, lsm_l_frac_sd, lsm_l_frac_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_frac_sd(landscape) lsm_c_gyrate_cv GYRATE_CV (class level) Description Coefficient of variation radius of gyration (Area and edge metric) Usage lsm_c_gyrate_cv(landscape, directions = 8, cell_center = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. Details GY RAT ECV = cv(GY RAT E[patchij ]) where GY RAT E[patchij ] equals the radius of gyration of each patch. GYRATE_CV is an ’Area and edge metric’. The metric summarises each class as the Coefficient of variation of the radius of gyration of all patches belonging to class i. GYRATE measures the distance from each cell to the patch centroid and is based on cell center-to-cell center distances. The metrics characterises both the patch area and compactness. The Coefficient of variation is scaled to the mean and comparable among different landscapes. If cell_center = TRUE some patches might have several possible cell-center centroids. In this case, the gyrate index is based on the mean distance of all cells to all possible cell-center centroids. Units: Meters Range: GYRATE_CV >= 0 Behaviour: Equals GYRATE_CV = 0 if the radius of gyration is identical for all patches. In- creases, without limit, as the variation of the radius of gyration increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., <NAME>., & <NAME>. 1997. Detecting critical scales in fragmented landscapes. Conservation ecology, 1(1). 
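To see the effect of the cell_center argument described above, the metric can be computed with both settings and compared per class (illustrative sketch):
landscape <- terra::rast(landscapemetrics::landscape)
gyr_default <- lsm_c_gyrate_cv(landscape, cell_center = FALSE)
gyr_forced <- lsm_c_gyrate_cv(landscape, cell_center = TRUE)
# forcing the centroid onto a cell center can shift the underlying distances
cbind(class = gyr_default$class, default = gyr_default$value, cell_center = gyr_forced$value)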
See Also lsm_p_gyrate, lsm_c_gyrate_mn, lsm_c_gyrate_sd, lsm_l_gyrate_mn, lsm_l_gyrate_sd, lsm_l_gyrate_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_gyrate_cv(landscape) lsm_c_gyrate_mn GYRATE_MN (class level) Description Mean radius of gyration (Area and edge metric) Usage lsm_c_gyrate_mn(landscape, directions = 8, cell_center = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. Details GY RAT EM N = mean(GY RAT E[patchij ]) where GY RAT E[patchij ] equals the radius of gyration of each patch. GYRATE_MN is an ’Area and edge metric’. The metric summarises each class as the mean of the radius of gyration of all patches belonging to class i. GYRATE measures the distance from each cell to the patch centroid and is based on cell center-to-cell center distances. The metrics characterises both the patch area and compactness. If cell_center = TRUE some patches might have several possible cell-center centroids. In this case, the gyrate index is based on the mean distance of all cells to all possible cell-center centroids. Units: Meters Range: GYRATE_MN >= 0 Behaviour: Approaches GYRATE_MN = 0 if every patch is a single cell. Increases, without limit, when only one patch is present. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., <NAME>., & <NAME>. 1997. Detecting critical scales in fragmented landscapes. Conservation ecology, 1(1). See Also lsm_p_gyrate, mean, lsm_c_gyrate_sd, lsm_c_gyrate_cv, lsm_l_gyrate_mn, lsm_l_gyrate_sd, lsm_l_gyrate_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_c_gyrate_mn(landscape) lsm_c_gyrate_sd GYRATE_SD (class level) Description Standard deviation radius of gyration (Area and edge metric) Usage lsm_c_gyrate_sd(landscape, directions = 8, cell_center = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. Details GY RAT ESD = sd(GY RAT E[patchij ]) where GY RAT E[patchij ] equals the radius of gyration of each patch. GYRATE_SD is an ’Area and edge metric’. The metric summarises each class as the standard deviation of the radius of gyration of all patches belonging to class i. GYRATE measures the distance from each cell to the patch centroid and is based on cell center-to-cell center distances. The metrics characterises both the patch area and compactness. If cell_center = TRUE some patches might have several possible cell-center centroids. In this case, the gyrate index is based on the mean distance of all cells to all possible cell-center centroids. Units: Meters Range: GYRATE_SD >= 0 Behaviour: Equals GYRATE_SD = 0 if the radius of gyration is identical for all patches. In- creases, without limit, as the variation of the radius of gyration increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. 
FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., <NAME>., & <NAME>. 1997. Detecting critical scales in fragmented landscapes. Conservation ecology, 1(1).

See Also
lsm_p_gyrate, lsm_c_gyrate_mn, lsm_c_gyrate_cv, lsm_l_gyrate_mn, lsm_l_gyrate_sd, lsm_l_gyrate_cv

Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_gyrate_sd(landscape)

lsm_c_iji               Interspersion and Juxtaposition index (class level)

Description
Interspersion and Juxtaposition index (Aggregation metric)

Usage
lsm_c_iji(landscape, verbose = TRUE)

Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
verbose Print warning message if not sufficient patches are present

Details
IJI = (- sum_{k=1}^{m} [ (e_ik / sum_{k=1}^{m} e_ik) * ln(e_ik / sum_{k=1}^{m} e_ik) ] / ln(m - 1)) * 100
where e_ik are the unique adjacencies of all classes (lower/upper triangle of the adjacency table, without the diagonal) and m is the number of classes.
IJI is an 'Aggregation metric'. It is a so-called "salt and pepper" metric and describes the intermixing of classes (i.e. without considering like adjacencies, the diagonal of the adjacency table). At least three classes are required to calculate IJI.
Units: Percent
Range: 0 < IJI <= 100
Behaviour: Approaches 0 if a class is only adjacent to a single other class and equals 100 when a class is equally adjacent to all other classes.

Value
tibble

References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., & <NAME>. 1995. FRAGSTATS: spatial pattern analysis program for quantifying landscape structure. Gen. Tech. Rep. PNW-GTR-351. Portland, OR: US Department of Agriculture, Forest Service, Pacific Northwest Research Station. 122 p, 351.

See Also
lsm_l_iji

Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_iji(landscape)

lsm_c_lpi               LPI (class level)

Description
Largest patch index (Area and Edge metric)

Usage
lsm_c_lpi(landscape, directions = 8)

Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details
LPI = (max(a_ij) / A) * 100
where max(a_ij) is the area of the largest patch in square meters and A is the total landscape area in square meters.
The largest patch index is an 'Area and edge metric'. It is the percentage of the landscape covered by the largest patch of each class i. It is a simple measure of dominance.
Units: Percentage
Range: 0 < LPI <= 100
Behaviour: Approaches LPI = 0 as the largest patch becomes smaller and equals LPI = 100 when only one patch is present.

Value
tibble

References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps.
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_area, lsm_l_ta, lsm_l_lpi
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_lpi(landscape)

lsm_c_lsi LSI (class level)
Description
Landscape shape index (Aggregation metric)
Usage
lsm_c_lsi(landscape)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
Details
LSI = e_i / min e_i
where e_i is the total edge length in cell surfaces and min e_i is the minimum total edge length in cell surfaces.
LSI is an ’Aggregation metric’. It is the ratio between the actual edge length of class i and the hypothetical minimum edge length of class i. The minimum edge length equals the edge length if class i would be maximally aggregated.
Units: None
Ranges: LSI >= 1
Behaviour: Equals LSI = 1 when only one squared patch is present or all patches are maximally aggregated. Increases, without limit, as the length of the actual edges increases, i.e. the patches become less compact.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc.Bull. 3:171-173.
See Also
lsm_p_shape, lsm_l_lsi
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_lsi(landscape)

lsm_c_mesh MESH (class level)
Description
Effective Mesh Size (Aggregation metric)
Usage
lsm_c_mesh(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
MESH = (sum[j = 1 to n] a_ij^2 / A) * (1 / 10000)
where a_ij is the patch area in square meters and A is the total landscape area in square meters.
The effective mesh size is an ’Aggregation metric’. Because each patch is squared before the sums for each group i are calculated and the sum is standardized by the total landscape area, MESH is a relative measure of patch structure. MESH is perfectly, negatively correlated to lsm_c_division.
Units: Hectares
Range: cell size / total area <= MESH <= total area
Behaviour: Equals cellsize/total area if class covers only one cell and equals total area if only one patch is present.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 2000. Landscape division, splitting index, and effective mesh size: new measures of landscape fragmentation. Landscape ecology, 15(2), 115-130.
See Also
lsm_p_area, lsm_l_ta, lsm_l_mesh
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_mesh(landscape)

lsm_c_ndca NDCA (class level)
Description
Number of disjunct core areas (Core area metric)
Usage
lsm_c_ndca(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell
Details
NDCA = sum[j = 1 to n] n_ij^core
where n_ij^core is the number of disjunct core areas.
NDCA is a ’Core area metric’. The metric summarises class i as the sum of the disjunct core areas of all patches belonging to class i. A cell is defined as core if the cell has no neighbour with a different value than itself (rook’s case). NDCA counts the disjunct core areas, whereby a core area is a ’patch within the patch’ containing only core cells. It describes patch area and shape simultaneously (more core area when the patch is large, however, the shape must allow disjunct core areas). Thereby, a compact shape (e.g. a square) will contain fewer disjunct core areas than a more irregular patch.
Units: None
Range: NDCA >= 0
Behaviour: NDCA = 0 when TCA = 0, i.e. every cell in patches of class i is an edge. NDCA increases, without limit, as core area increases and patch shapes allow disjunct core areas (i.e. patch shapes become rather complex).
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_c_tca, lsm_p_ncore, lsm_l_ndca
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_ndca(landscape)

lsm_c_nlsi nLSI (class level)
Description
Normalized landscape shape index (Aggregation metric)
Usage
lsm_c_nlsi(landscape)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
Details
nLSI = (e_i - min e_i) / (max e_i - min e_i)
where e_i is the total edge length in cell surfaces and min e_i and max e_i are the minimum and maximum total edge length in cell surfaces, respectively.
nLSI is an ’Aggregation metric’. It is closely related to the lsm_c_lsi and describes the ratio of the actual edge length of class i in relation to the hypothetical range of possible edge lengths of class i (min/max). Currently, nLSI ignores all background cells when calculating the minimum and maximum total edge length. Also, a correct calculation of the minimum and maximum total edge length is currently only possible for rectangular landscapes.
Units: None
Ranges: 0 <= nLSI <= 1
Behaviour: Equals nLSI = 0 when only one squared patch is present. nLSI increases the more disaggregated patches are and equals nLSI = 1 for a maximally disaggregated class (i.e. a "checkerboard pattern").
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc.Bull. 3:171-173.
See Also
lsm_c_lsi, lsm_l_lsi
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_nlsi(landscape)

lsm_c_np NP (class level)
Description
Number of patches (Aggregation metric)
Usage
lsm_c_np(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
NP = n_i
where n_i is the number of patches.
NP is an ’Aggregation metric’.
It describes the fragmentation of a class, however, does not necessarily contain information about the configuration or composition of the class.
Units: None
Ranges: NP >= 1
Behaviour: Equals NP = 1 when only one patch is present and increases, without limit, as the number of patches increases
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_l_np
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_np(landscape)

lsm_c_pafrac PAFRAC (class level)
Description
Perimeter-Area Fractal Dimension (Shape metric)
Usage
lsm_c_pafrac(landscape, directions = 8, verbose = TRUE)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
verbose Print warning message if not sufficient patches are present
Details
PAFRAC = 2 / β
where β is the slope of the regression of the area against the perimeter (logarithm), n_i * sum[j = 1 to n] ln(a_ij) = a + β * n_i * sum[j = 1 to n] ln(p_ij).
PAFRAC is a ’Shape metric’. It describes the patch complexity of class i while being scale independent. This means that increasing the patch size while not changing the patch form will not change the metric. However, it is only meaningful if the relationship between the area and perimeter is linear on a logarithmic scale. Furthermore, if there are fewer than 10 patches in class i, the metric returns NA because of the small-sample issue.
Units: None
Range: 1 <= PAFRAC <= 2
Behaviour: Approaches PAFRAC = 1 for patches with simple shapes and approaches PAFRAC = 2 for irregular shapes
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1986. Principles of Geographical Information Systems for Land Resources Assessment. Monographs on Soil and Resources Survey No. 12. Clarendon Press, Oxford
See Also
lsm_p_area, lsm_p_perim, lsm_l_pafrac
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_pafrac(landscape)

lsm_c_para_cv PARA_CV (class level)
Description
Coefficient of variation perimeter-area ratio (Shape metric)
Usage
lsm_c_para_cv(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
PARA_CV = cv(PARA[patch_ij])
where PARA[patch_ij] is the perimeter area ratio of each patch.
PARA_CV is a ’Shape metric’. It summarises each class as the coefficient of variation of the perimeter-area ratio of all patches belonging to class i. The perimeter-area ratio describes the patch complexity in a straightforward way. However, because it is not standardised to a certain shape (e.g. a square), it is not scale independent, meaning that increasing the patch size while not changing the patch form will change the ratio.
Units: None
Range: PARA_CV >= 0
Behaviour: Equals PARA_CV = 0 if the perimeter-area ratio is identical for all patches. Increases, without limit, as the variation of the perimeter-area ratio increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023.
FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_para, lsm_c_para_mn, lsm_c_para_sd, lsm_l_para_mn, lsm_l_para_sd, lsm_l_para_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_para_cv(landscape)

lsm_c_para_mn PARA_MN (class level)
Description
Mean perimeter-area ratio (Shape metric)
Usage
lsm_c_para_mn(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
PARA_MN = mean(PARA[patch_ij])
where PARA[patch_ij] is the perimeter area ratio of each patch.
PARA_MN is a ’Shape metric’. It summarises each class as the mean of the perimeter-area ratio of all patches belonging to class i. The perimeter-area ratio describes the patch complexity in a straightforward way. However, because it is not standardised to a certain shape (e.g. a square), it is not scale independent, meaning that increasing the patch size while not changing the patch form will change the ratio.
Units: None
Range: PARA_MN > 0
Behaviour: Approaches PARA_MN > 0 if PARA for each patch approaches PARA > 0, i.e. the form approaches a rather small square. Increases, without limit, as PARA increases, i.e. patches become more complex.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_para, mean, lsm_c_para_sd, lsm_c_para_cv, lsm_l_para_mn, lsm_l_para_sd, lsm_l_para_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_para_mn(landscape)

lsm_c_para_sd PARA_SD (class level)
Description
Standard deviation perimeter-area ratio (Shape metric)
Usage
lsm_c_para_sd(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
PARA_SD = sd(PARA[patch_ij])
where PARA[patch_ij] is the perimeter area ratio of each patch.
PARA_SD is a ’Shape metric’. It summarises each class as the standard deviation of the perimeter-area ratio of all patches belonging to class i. The perimeter-area ratio describes the patch complexity in a straightforward way. However, because it is not standardised to a certain shape (e.g. a square), it is not scale independent, meaning that increasing the patch size while not changing the patch form will change the ratio.
Units: None
Range: PARA_SD >= 0
Behaviour: Equals PARA_SD = 0 if the perimeter-area ratio is identical for all patches. Increases, without limit, as the variation of the perimeter-area ratio increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps.
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_para, sd, lsm_c_para_mn, lsm_c_para_cv, lsm_l_para_mn, lsm_l_para_sd, lsm_l_para_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_para_sd(landscape)

lsm_c_pd PD (class level)
Description
Patch density (Aggregation metric)
Usage
lsm_c_pd(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
PD = (n_i / A) * 10000 * 100
where n_i is the number of patches and A is the total landscape area in square meters.
PD is an ’Aggregation metric’. It describes the fragmentation of a class, however, does not necessarily contain information about the configuration or composition of the class. In contrast to lsm_c_np it is standardized to the area and comparisons among landscapes with different total area are possible.
Units: Number per 100 hectares
Ranges: 0 < PD <= 1e+06
Behaviour: Increases as the landscape gets more patchy. Reaches its maximum if every cell is a different patch.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_c_np, lsm_l_ta, lsm_l_pd
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_pd(landscape)

lsm_c_pladj PLADJ (class level)
Description
Percentage of Like Adjacencies (Aggregation metric)
Usage
lsm_c_pladj(landscape)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
Details
PLADJ = (g_ii / sum[k = 1 to m] g_ik) * 100
where g_ii is the number of adjacencies between cells of class i and g_ik is the number of adjacencies between cells of class i and k.
PLADJ is an ’Aggregation metric’. It calculates the frequency with which patches of the focal class i and patches of all other classes k are next to each other and is thus a measure of class aggregation. The adjacencies are counted using the double-count method.
Units: Percent
Ranges: 0 <= PLADJ <= 100
Behaviour: Equals PLADJ = 0 if class i is maximally disaggregated, i.e. every cell is a different patch. Equals PLADJ = 100 when only one patch is present.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org.
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_pladj(landscape)

lsm_c_pland PLAND (class level)
Description
Percentage of landscape of class (Area and Edge metric)
Usage
lsm_c_pland(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
PLAND = (sum[j = 1 to n] a_ij / A) * 100
where a_ij is the area of each patch and A is the total landscape area.
PLAND is an ’Area and edge metric’. It is the percentage of the landscape belonging to class i. It is a measure of composition and because of the relative character directly comparable among landscapes with different total areas.
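A minimal sketch of this compositional property, assuming only that the result is a tibble with the per-class percentages stored in a value column (as for all metrics in this package):
library(landscapemetrics)
landscape <- terra::rast(landscapemetrics::landscape)
pland <- lsm_c_pland(landscape)
pland
# PLAND partitions the landscape among classes, so without NA cells the values sum to roughly 100
sum(pland$value)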
Units: Percentage
Range: 0 < PLAND <= 100
Behaviour: Approaches PLAND = 0 when the proportional class area is decreasing. Equals PLAND = 100 when only one patch is present.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_c_ca, lsm_l_ta
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_pland(landscape)

lsm_c_shape_cv SHAPE_CV (class level)
Description
Coefficient of variation shape index (Shape metric)
Usage
lsm_c_shape_cv(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
SHAPE_CV = cv(SHAPE[patch_ij])
where SHAPE[patch_ij] is the shape index of each patch.
SHAPE_CV is a ’Shape metric’. Each class is summarised as the coefficient of variation of the shape index of all patches belonging to class i. SHAPE describes the ratio between the actual perimeter of the patch and the square root of patch area.
Units: None
Range: SHAPE_CV >= 0
Behaviour: Equals SHAPE_CV = 0 if all patches have an identical shape index. Increases, without limit, as the variation of the shape index increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc.Bull. 3:171-173.
See Also
lsm_p_shape, lsm_c_shape_mn, lsm_c_shape_sd, lsm_l_shape_mn, lsm_l_shape_sd, lsm_l_shape_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_shape_cv(landscape)

lsm_c_shape_mn SHAPE_MN (class level)
Description
Mean shape index (Shape metric)
Usage
lsm_c_shape_mn(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
SHAPE_MN = mean(SHAPE[patch_ij])
where SHAPE[patch_ij] is the shape index of each patch.
SHAPE_MN is a ’Shape metric’. Each class is summarised as the mean of the shape index of all patches belonging to class i. SHAPE describes the ratio between the actual perimeter of the patch and the square root of patch area.
Units: None
Range: SHAPE_MN >= 1
Behaviour: Equals SHAPE_MN = 1 if all patches are squares. Increases, without limit, as the shapes of patches become more complex.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc.Bull. 3:171-173.
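The class-level mean can also be reproduced by aggregating the patch-level shape index by hand; a short sketch, assuming the usual class and value columns of the returned tibble:
library(landscapemetrics)
landscape <- terra::rast(landscapemetrics::landscape)
# class-level mean shape index as reported by the metric itself
lsm_c_shape_mn(landscape)
# the same summary, aggregated by hand from the patch-level shape index
shape_patch <- lsm_p_shape(landscape)
aggregate(value ~ class, data = shape_patch, FUN = mean)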
See Also
lsm_p_shape, mean, lsm_c_shape_sd, lsm_c_shape_cv, lsm_l_shape_mn, lsm_l_shape_sd, lsm_l_shape_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_shape_mn(landscape)

lsm_c_shape_sd SHAPE_SD (class level)
Description
Standard deviation shape index (Shape metric)
Usage
lsm_c_shape_sd(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
SHAPE_SD = sd(SHAPE[patch_ij])
where SHAPE[patch_ij] is the shape index of each patch.
SHAPE_SD is a ’Shape metric’. Each class is summarised as the standard deviation of the shape index of all patches belonging to class i. SHAPE describes the ratio between the actual perimeter of the patch and the square root of patch area.
Units: None
Range: SHAPE_SD >= 0
Behaviour: Equals SHAPE_SD = 0 if all patches have an identical shape index. Increases, without limit, as the variation of the shape index increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc.Bull. 3:171-173.
See Also
lsm_p_shape, sd, lsm_c_shape_mn, lsm_c_shape_cv, lsm_l_shape_mn, lsm_l_shape_sd, lsm_l_shape_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_shape_sd(landscape)

lsm_c_split SPLIT (class level)
Description
Splitting index (Aggregation metric)
Usage
lsm_c_split(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
SPLIT = A^2 / sum[j = 1 to n] a_ij^2
where a_ij is the patch area in square meters and A is the total landscape area.
SPLIT is an ’Aggregation metric’. It describes the number of patches if all patches of class i would be divided into equally sized patches.
Units: None
Range: 1 <= SPLIT <= Number of cells squared
Behaviour: Equals SPLIT = 1 if only one patch is present. Increases as the number of patches of class i increases and is limited if all cells are a patch
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 2000. Landscape division, splitting index, and effective mesh size: new measures of landscape fragmentation. Landscape ecology, 15(2), 115-130.
See Also
lsm_p_area, lsm_l_ta, lsm_l_split
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_split(landscape)

lsm_c_tca TCA (class level)
Description
Total core area (Core area metric)
Usage
lsm_c_tca(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell
Details
TCA = sum[j = 1 to n] a_ij^core * (1 / 10000)
where a_ij^core is the core area in square meters.
TCA is a ’Core area metric’ and equals the sum of core areas of all patches belonging to class i. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). In other words, the core area of a patch is all area that is not an edge. It characterises patch areas and shapes of patches belonging to class i simultaneously (more core area when the patch is large and the shape is rather compact, i.e. a square). Additionally, TCA is a measure for the configuration of the landscape, because the sum of edges increases as patches are less aggregated.
Units: Hectares
Range: TCA >= 0
Behaviour: Increases, without limit, as patch areas increase and patch shapes simplify. TCA = 0 when every cell in every patch of class i is an edge.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_core, lsm_l_tca
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_tca(landscape)

lsm_c_te TE (class level)
Description
Total (class) edge (Area and Edge metric)
Usage
lsm_c_te(landscape, count_boundary = FALSE, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
count_boundary Include landscape boundary in edge length
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
TE = sum[k = 1 to m] e_ik
where e_ik is the edge length between class i and class k in meters.
TE is an ’Area and edge metric’. Total (class) edge includes all edges between class i and all other classes k. It measures the configuration of the landscape because a highly fragmented landscape will have many edges. However, total edge is an absolute measure, making comparisons among landscapes with different total areas difficult. If count_boundary = TRUE also edges to the landscape boundary are included.
Units: Meters
Range: TE >= 0
Behaviour: Equals TE = 0 if all cells are edge cells. Increases, without limit, as landscape becomes more fragmented
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_perim, lsm_l_te
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_c_te(landscape)

lsm_l_ai AI (landscape level)
Description
Aggregation index (Aggregation metric)
Usage
lsm_l_ai(landscape)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters
Details
AI = (sum[i = 1 to m] (g_ii / max-g_ii) * P_i) * 100
where g_ii is the number of like adjacencies based on the single-count method, max-g_ii is the classwise maximum number of like adjacencies of class i and P_i the proportion of the landscape comprised of class i.
AI is an ’Aggregation metric’. It equals the number of like adjacencies divided by the theoretical maximum possible number of like adjacencies for that class summed over each class for the entire landscape.
The metric is based on the adjacency matrix and the single-count method.
Units: Percent
Range: 0 <= AI <= 100
Behaviour: Equals 0 for maximally disaggregated and 100 for maximally aggregated classes.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., <NAME>., & <NAME>. 2000. An aggregation index (AI) to quantify spatial patterns of landscapes. Landscape ecology, 15(7), 591-601.
See Also
lsm_c_ai
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_ai(landscape)

lsm_l_area_cv AREA_CV (landscape level)
Description
Coefficient of variation of patch area (Area and edge metric)
Usage
lsm_l_area_cv(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
AREA_CV = cv(AREA[patch_ij])
where AREA[patch_ij] is the area of each patch in hectares.
AREA_CV is an ’Area and Edge metric’. The metric summarises the landscape as the coefficient of variation of the area of all patches in the landscape. The metric describes the differences among patches in the landscape and is easily comparable because it is scaled to the mean.
Units: Hectares
Range: AREA_CV >= 0
Behaviour: Equals AREA_CV = 0 if all patches are identical in size. Increases, without limit, as the variation of patch areas increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_area, lsm_c_area_mn, lsm_c_area_sd, lsm_c_area_cv, lsm_l_area_mn, lsm_l_area_sd
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_area_cv(landscape)

lsm_l_area_mn AREA_MN (landscape level)
Description
Mean of patch area (Area and edge metric)
Usage
lsm_l_area_mn(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
AREA_MN = mean(AREA[patch_ij])
where AREA[patch_ij] is the area of each patch in hectares.
AREA_MN is an ’Area and Edge metric’. The metric summarises the landscape as the mean of the area of all patches in the landscape. The metric is a simple way to describe the composition of the landscape. Especially together with the total landscape area (lsm_l_ta), it can also give an idea of patch structure (e.g. many small patches vs. few large patches).
Units: Hectares
Range: AREA_MN > 0
Behaviour: Approaches AREA_MN = 0 if all patches are small. Increases, without limit, as the patch areas increase.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps.
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_area, mean, lsm_c_area_mn, lsm_c_area_sd, lsm_c_area_cv, lsm_l_area_sd, lsm_l_area_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_area_mn(landscape)

lsm_l_area_sd AREA_SD (landscape level)
Description
Standard deviation of patch area (Area and edge metric)
Usage
lsm_l_area_sd(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
AREA_SD = sd(AREA[patch_ij])
where AREA[patch_ij] is the area of each patch in hectares.
AREA_SD is an ’Area and Edge metric’. The metric summarises the landscape as the standard deviation of the area of all patches in the landscape. The metric describes the differences among all patches in the landscape.
Units: Hectares
Range: AREA_SD >= 0
Behaviour: Equals AREA_SD = 0 if all patches are identical in size. Increases, without limit, as the variation of patch areas increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_area, sd, lsm_c_area_mn, lsm_c_area_sd, lsm_c_area_cv, lsm_l_area_mn, lsm_l_area_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_area_sd(landscape)

lsm_l_cai_cv CAI_CV (landscape level)
Description
Coefficient of variation of core area index (Core area metric)
Usage
lsm_l_cai_cv(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell
Details
CAI_CV = cv(CAI[patch_ij])
where CAI[patch_ij] is the core area index of each patch.
CAI_CV is a ’Core area metric’. The metric summarises the landscape as the coefficient of variation of the core area index of all patches in the landscape. The core area index is the percentage of core area in relation to patch area. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). The metric describes the differences among all patches in the landscape. Because it is scaled to the mean, it is easily comparable.
Units: Percent
Range: CAI_CV >= 0
Behaviour: Equals CAI_CV = 0 if the core area index is identical for all patches. Increases, without limit, as the variation of the core area indices increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps.
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_cai, lsm_c_cai_mn, lsm_c_cai_sd, lsm_c_cai_cv, lsm_l_cai_mn, lsm_l_cai_sd
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_cai_cv(landscape)

lsm_l_cai_mn CAI_MN (landscape level)
Description
Mean of core area index (Core area metric)
Usage
lsm_l_cai_mn(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell
Details
CAI_MN = mean(CAI[patch_ij])
where CAI[patch_ij] is the core area index of each patch.
CAI_MN is a ’Core area metric’. The metric summarises the landscape as the mean of the core area index of all patches in the landscape. The core area index is the percentage of core area in relation to patch area. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case).
Units: Percent
Range: 0 <= CAI_MN <= 100
Behaviour: CAI_MN = 0 when all patches have no core area and approaches CAI_MN = 100 with increasing percentage of core area within patches.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_cai, mean, lsm_c_cai_mn, lsm_c_cai_sd, lsm_c_cai_cv, lsm_l_cai_sd, lsm_l_cai_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_cai_mn(landscape)

lsm_l_cai_sd CAI_SD (landscape level)
Description
Standard deviation of core area index (Core area metric)
Usage
lsm_l_cai_sd(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell
Details
CAI_SD = sd(CAI[patch_ij])
where CAI[patch_ij] is the core area index of each patch.
CAI_SD is a ’Core area metric’. The metric summarises the landscape as the standard deviation of the core area index of all patches in the landscape. The core area index is the percentage of core area in relation to patch area. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). The metric describes the differences among all patches in the landscape.
Units: Percent
Range: CAI_SD >= 0
Behaviour: Equals CAI_SD = 0 if the core area index is identical for all patches. Increases, without limit, as the variation of core area indices increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps.
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_cai, sd, lsm_c_cai_mn, lsm_c_cai_sd, lsm_c_cai_cv, lsm_l_cai_mn, lsm_l_cai_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_cai_sd(landscape)

lsm_l_circle_cv CIRCLE_CV (landscape level)
Description
Coefficient of variation of related circumscribing circle (Shape metric)
Usage
lsm_l_circle_cv(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
CIRCLE_CV = cv(CIRCLE[patch_ij])
where CIRCLE[patch_ij] is the related circumscribing circle of each patch.
CIRCLE_CV is a ’Shape metric’ and summarises the landscape as the coefficient of variation of the related circumscribing circle of all patches in the landscape. CIRCLE describes the ratio between the patch area and the smallest circumscribing circle of the patch and characterises the compactness of the patch. CIRCLE_CV describes the differences among all patches in the landscape. Because it is scaled to the mean, it is easily comparable.
Units: None
Range: CIRCLE_CV >= 0
Behaviour: Equals CIRCLE_CV = 0 if the related circumscribing circle is identical for all patches. Increases, without limit, as the variation of related circumscribing circles increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., and <NAME>. 1992. The r.le programs for multiscale analysis of landscape structure using the GRASS geographical information system. Landscape Ecology 7: 291-302.
Based on C++ code from Project Nayuki (https://www.nayuki.io/page/smallest-enclosing-circle).
See Also
lsm_p_circle, mean, lsm_c_circle_mn, lsm_c_circle_sd, lsm_c_circle_cv, lsm_l_circle_mn, lsm_l_circle_sd
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_circle_cv(landscape)

lsm_l_circle_mn CIRCLE_MN (landscape level)
Description
Mean of related circumscribing circle (Shape metric)
Usage
lsm_l_circle_mn(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
CIRCLE_MN = mean(CIRCLE[patch_ij])
where CIRCLE[patch_ij] is the related circumscribing circle of each patch.
CIRCLE_MN is a ’Shape metric’ and summarises the landscape as the mean of the related circumscribing circle of all patches in the landscape. CIRCLE describes the ratio between the patch area and the smallest circumscribing circle of the patch and characterises the compactness of the patch.
Units: None
Range: CIRCLE_MN > 0
Behaviour: Approaches CIRCLE_MN = 0 if the related circumscribing circle of all patches is small. Increases, without limit, as the related circumscribing circles increase.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., and <NAME>. 1992.
The r.le programs for multiscale analysis of landscape structure using the GRASS geographical information system. Landscape Ecology 7: 291-302.
Based on C++ code from Project Nayuki (https://www.nayuki.io/page/smallest-enclosing-circle).
See Also
lsm_p_circle, mean, lsm_c_circle_mn, lsm_c_circle_sd, lsm_c_circle_cv, lsm_l_circle_sd, lsm_l_circle_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_circle_mn(landscape)

lsm_l_circle_sd CIRCLE_SD (landscape level)
Description
Standard deviation of related circumscribing circle (Shape metric)
Usage
lsm_l_circle_sd(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
CIRCLE_SD = sd(CIRCLE[patch_ij])
where CIRCLE[patch_ij] is the related circumscribing circle of each patch.
CIRCLE_SD is a ’Shape metric’ and summarises the landscape as the standard deviation of the related circumscribing circle of all patches in the landscape. CIRCLE describes the ratio between the patch area and the smallest circumscribing circle of the patch and characterises the compactness of the patch. The metric describes the differences among all patches of the landscape.
Units: None
Range: CIRCLE_SD >= 0
Behaviour: Equals CIRCLE_SD = 0 if the related circumscribing circle is identical for all patches. Increases, without limit, as the variation of related circumscribing circles increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., and <NAME>. 1992. The r.le programs for multiscale analysis of landscape structure using the GRASS geographical information system. Landscape Ecology 7: 291-302.
Based on C++ code from Project Nayuki (https://www.nayuki.io/page/smallest-enclosing-circle).
See Also
lsm_p_circle, mean, lsm_c_circle_mn, lsm_c_circle_sd, lsm_c_circle_cv, lsm_l_circle_mn, lsm_l_circle_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_circle_sd(landscape)

lsm_l_cohesion COHESION (landscape level)
Description
Patch Cohesion Index (Aggregation metric)
Usage
lsm_l_cohesion(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
COHESION = (1 - (sum[i = 1 to m] sum[j = 1 to n] p_ij) / (sum[i = 1 to m] sum[j = 1 to n] p_ij * sqrt(a_ij))) * (1 - 1 / sqrt(Z))^(-1) * 100
where p_ij is the perimeter in meters, a_ij is the area in square meters and Z is the number of cells.
COHESION is an ’Aggregation metric’.
Units: Percent
Ranges: Unknown
Behaviour: Unknown
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1996. Using landscape indices to predict habitat connectivity. Ecology, 77(4), 1210-1225.
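A short usage sketch; the second call relies only on the statement under Arguments that landscape may also be a list of SpatRasters (the same raster is passed twice purely for illustration):
library(landscapemetrics)
landscape <- terra::rast(landscapemetrics::landscape)
# patch cohesion of a single landscape
lsm_l_cohesion(landscape)
# the landscape argument also accepts a list of SpatRasters
lsm_l_cohesion(list(landscape, landscape))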
See Also
lsm_p_perim, lsm_p_area, lsm_l_cohesion
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_cohesion(landscape)

lsm_l_condent Conditional entropy (landscape level)
Description
Conditional entropy H(y|x)
Usage
lsm_l_condent(landscape, neighbourhood = 4, ordered = TRUE, base = "log2")
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
neighbourhood The number of directions in which cell adjacencies are considered as neighbours: 4 (rook’s case) or 8 (queen’s case). The default is 4.
ordered The type of pairs considered. Either ordered (TRUE) or unordered (FALSE). The default is TRUE.
base The unit in which entropy is measured. The default is "log2", which computes entropy in "bits". "log" and "log10" can also be used.
Details
Complexity of a landscape pattern configuration. It measures only the geometric intricacy (configurational complexity) of a landscape pattern.
Value
tibble
References
<NAME>., <NAME>. 2019. Information theory as a consistent framework for quantification and classification of landscape patterns. https://doi.org/10.1007/s10980-019-00830-x
See Also
lsm_l_ent, lsm_l_mutinf, lsm_l_joinent, lsm_l_relmutinf
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_condent(landscape)

lsm_l_contag CONTAG (landscape level)
Description
Contagion (Aggregation metric)
Usage
lsm_l_contag(landscape, verbose = TRUE)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
verbose Print warning message if not sufficient patches are present
Details
CONTAG = 1 + (sum[q = 1 to n_a] p_q * ln(p_q)) / (2 * ln(t))
where p_q is the adjacency table for all classes divided by the sum of that table and t is the number of classes in the landscape.
CONTAG is an ’Aggregation metric’. It is based on cell adjacencies and describes the probability of two random cells belonging to the same class. p_q is the cell adjacency table, where the order is preserved and pairs of adjacent cells are counted twice. Contagion is affected by both the dispersion and interspersion of classes. E.g., low class dispersion (= high proportion of like adjacencies) and low interspersion (= uneven distribution of pairwise adjacencies) lead to a high contagion value. The number of classes to calculate CONTAG must be at least 2.
Units: Percent
Range: 0 < CONTAG <= 100
Behaviour: Approaches CONTAG = 0 if all cells are unevenly distributed and 100 indicates that all cells are equally adjacent to all other classes.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., <NAME>., <NAME>. & <NAME>. (1996). A note on contagion indices for landscape analysis. Landscape ecology, 11, 197-202.
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_contag(landscape)

lsm_l_contig_cv CONTIG_CV (landscape level)
Description
Coefficient of variation of Contiguity index (Shape metric)
Usage
lsm_l_contig_cv(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
CONTIG_CV = cv(CONTIG[patch_ij])
where CONTIG[patch_ij] is the contiguity of each patch.
CONTIG_CV is a ’Shape metric’.
It summarises the landscape as the coefficient of variation of the contiguity index of all patches in the landscape. CONTIG_CV assesses the spatial connectedness (contiguity) of cells in patches. The metric coerces patch values to a value of 1 and the background to NA. A nine cell focal filter matrix:
filter_matrix <- matrix(c(1, 2, 1, 2, 1, 2, 1, 2, 1), 3, 3, byrow = TRUE)
... is then used to weight orthogonally contiguous pixels more heavily than diagonally contiguous pixels. Therefore, larger and more connections between patch cells in the rook’s case result in larger contiguity index values.
Units: None
Range: CONTIG_CV >= 0
Behaviour: CONTIG_CV = 0 if the contiguity index is identical for all patches. Increases, without limit, as the variation of CONTIG increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1991. Assessing patch shape in landscape mosaics. Photogrammetric Engineering and Remote Sensing, 57(3), 285-293
See Also
lsm_p_contig, lsm_c_contig_sd, lsm_c_contig_cv, lsm_c_contig_mn, lsm_l_contig_sd, lsm_l_contig_mn
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_contig_cv(landscape)

lsm_l_contig_mn CONTIG_MN (landscape level)
Description
Mean of Contiguity index (Shape metric)
Usage
lsm_l_contig_mn(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
Details
CONTIG_MN = mean(CONTIG[patch_ij])
where CONTIG[patch_ij] is the contiguity of each patch.
CONTIG_MN is a ’Shape metric’. It summarises the landscape as the mean of the contiguity index of all patches in the landscape. CONTIG_MN assesses the spatial connectedness (contiguity) of cells in patches. The metric coerces patch values to a value of 1 and the background to NA. A nine cell focal filter matrix:
filter_matrix <- matrix(c(1, 2, 1, 2, 1, 2, 1, 2, 1), 3, 3, byrow = TRUE)
... is then used to weight orthogonally contiguous pixels more heavily than diagonally contiguous pixels. Therefore, larger and more connections between patch cells in the rook’s case result in larger contiguity index values.
Units: None
Range: 0 <= CONTIG_MN <= 1
Behaviour: CONTIG equals the mean of the contiguity index on landscape level for all patches.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
LaGro, J. 1991. Assessing patch shape in landscape mosaics. Photogrammetric Engineering and Remote Sensing, 57(3), 285-293
See Also
lsm_p_contig, lsm_c_contig_sd, lsm_c_contig_cv, lsm_c_contig_mn, lsm_l_contig_sd, lsm_l_contig_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_contig_mn(landscape)

lsm_l_contig_sd CONTIG_SD (landscape level)
Description
Standard deviation of Contiguity index (Shape metric)
Usage
lsm_l_contig_sd(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
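Because the directions argument above controls how cells are clumped into patches, it affects every contiguity summary; a minimal sketch comparing queen’s and rook’s case on the example landscape shipped with the package (illustrative only):
library(landscapemetrics)
landscape <- terra::rast(landscapemetrics::landscape)
# queen's case (default): diagonally touching cells belong to the same patch
lsm_l_contig_mn(landscape, directions = 8)
lsm_l_contig_sd(landscape, directions = 8)
# rook's case: only orthogonal neighbours are connected, typically giving more, smaller patches
lsm_l_contig_mn(landscape, directions = 4)
lsm_l_contig_sd(landscape, directions = 4)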
Details
CONTIG_SD = sd(CONTIG[patch_ij])
where CONTIG[patch_ij] is the contiguity of each patch.
CONTIG_SD is a ’Shape metric’. It summarises the landscape as the standard deviation of the contiguity index of all patches in the landscape. CONTIG_SD assesses the spatial connectedness (contiguity) of cells in patches. The metric coerces patch values to a value of 1 and the background to NA. A nine cell focal filter matrix:
filter_matrix <- matrix(c(1, 2, 1, 2, 1, 2, 1, 2, 1), 3, 3, byrow = TRUE)
... is then used to weight orthogonally contiguous pixels more heavily than diagonally contiguous pixels. Therefore, larger and more connections between patch cells in the rook’s case result in larger contiguity index values.
Units: None
Range: CONTIG_SD >= 0
Behaviour: CONTIG_SD = 0 if the contiguity index is identical for all patches. Increases, without limit, as the variation of CONTIG increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1991. Assessing patch shape in landscape mosaics. Photogrammetric Engineering and Remote Sensing, 57(3), 285-293
See Also
lsm_p_contig, lsm_c_contig_sd, lsm_c_contig_cv, lsm_c_contig_mn, lsm_l_contig_cv, lsm_l_contig_mn
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_contig_sd(landscape)

lsm_l_core_cv CORE_CV (landscape level)
Description
Coefficient of variation of core area (Core area metric)
Usage
lsm_l_core_cv(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell
Details
CORE_CV = cv(CORE[patch_ij])
where CORE[patch_ij] is the core area in square meters of each patch.
CORE_CV is a ’Core area metric’. It equals the coefficient of variation of the core area of each patch in the landscape. The core area is defined as all cells that have no neighbour with a different value than themselves (rook’s case). The metric describes the differences among all patches in the landscape and is easily comparable because it is scaled to the mean.
Units: Hectares
Range: CORE_CV >= 0
Behaviour: Equals CORE_CV = 0 if all patches have the same core area. Increases, without limit, as the variation of patch core areas increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_core, lsm_c_core_mn, lsm_c_core_sd, lsm_c_core_cv, lsm_l_core_mn, lsm_l_core_sd
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_core_cv(landscape)

lsm_l_core_mn CORE_MN (landscape level)
Description
Mean of core area (Core area metric)
Usage
lsm_l_core_mn(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell
Details
CORE_MN = mean(CORE[patch_ij])
where CORE[patch_ij] is the core area in square meters of each patch.
CORE_MN is a ’Core area metric’ and equals the mean of core areas of all patches in the landscape. The core area is defined as all cells that have no neighbour with a different value than themselves (rook’s case).
Units: Hectares
Range: CORE_MN >= 0
Behaviour: Equals CORE_MN = 0 if CORE = 0 for all patches. Increases, without limit, as the core area indices increase.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_core, mean, lsm_c_core_mn, lsm_c_core_sd, lsm_c_core_cv, lsm_l_core_sd, lsm_l_core_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_core_mn(landscape)

lsm_l_core_sd CORE_SD (landscape level)
Description
Standard deviation of patch core area (Core area metric)
Usage
lsm_l_core_sd(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell
Details
CORE_SD = sd(CORE[patch_ij])
where CORE[patch_ij] is the core area in square meters of each patch.
CORE_SD is a ’Core area metric’. It equals the standard deviation of the core area of all patches in the landscape. The core area is defined as all cells that have no neighbour with a different value than themselves (rook’s case). The metric describes the differences among all patches in the landscape.
Units: Hectares
Range: CORE_SD >= 0
Behaviour: Equals CORE_SD = 0 if all patches have the same core area. Increases, without limit, as the variation of patch core areas increases.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
See Also
lsm_p_core, sd, lsm_c_core_mn, lsm_c_core_sd, lsm_c_core_cv, lsm_l_core_mn, lsm_l_core_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_core_sd(landscape)

lsm_l_dcad DCAD (landscape level)
Description
Disjunct core area density (Core area metric)
Usage
lsm_l_dcad(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell

Details

DCAD = (sum(i=1..m) sum(j=1..n) n_ij^core / A) * 10000 * 100

where n_ij^core is the number of disjunct core areas and A is the total landscape area in square meters.

DCAD is a 'Core area metric'. It equals the number of disjunct core areas per 100 ha relative to the total area. A disjunct core area is a 'patch within the patch' containing only core cells. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook's case). The metric is relative and therefore comparable among landscapes with different total areas.

Units: Number per 100 hectares
Range: DCAD >= 0
Behaviour: Equals DCAD = 0 when DCORE = 0, i.e. no patch contains a disjunct core area. Increases, without limit, as disjunct core areas become more present, i.e. patches becoming larger and less complex.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also

lsm_c_ndca, lsm_l_ta, lsm_c_dcad

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_dcad(landscape)

lsm_l_dcore_cv DCORE_CV (landscape level)

Description

Coefficient of variation number of disjunct core areas (Core area metric)

Usage

lsm_l_dcore_cv(
landscape,
directions = 8,
consider_boundary = FALSE,
edge_depth = 1
)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
consider_boundary Logical if cells that only neighbour the landscape boundary should be considered as core
edge_depth Distance (in cells) a cell has to be away from the patch edge to be considered as core cell

Details

DCORE_CV = cv(NCORE[patch_ij])

where NCORE[patch_ij] is the number of core areas.

DCORE_CV is a 'Core area metric'. It summarises the landscape as the Coefficient of variation of all patches belonging to the landscape. A cell is defined as core if the cell has no neighbour with a different value than itself (rook's case). NCORE counts the disjunct core areas, whereby a core area is a 'patch within the patch' containing only core cells. The metric describes the differences among all patches in the landscape and is easily comparable because it is scaled to the mean.

Units: None
Range: DCORE_CV >= 0
Behaviour: Equals DCORE_CV = 0 if all patches have the same number of disjunct core areas. Increases, without limit, as the variation of number of disjunct core areas increases.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps.
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_ncore, lsm_c_dcore_mn, lsm_c_dcore_sd, lsm_c_dcore_cv, lsm_l_dcore_mn, lsm_l_dcore_sd Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_dcore_cv(landscape) lsm_l_dcore_mn DCORE_MN (landscape level) Description Mean number of disjunct core areas (Core area metric) Usage lsm_l_dcore_mn( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details DCOREM N = mean(N CORE[patchij ]) where N CORE[patchij ] is the number of core areas. DCORE_MN is an ’Core area metric’. It summarises the landscape as the mean of all patches in the landscape. A cell is defined as core if the cell has no neighbour with a different value than itself (rook’s case). NCORE counts the disjunct core areas, whereby a core area is a ’patch within the patch’ containing only core cells. Units: None Range: DCORE_MN > 0 Behaviour: Equals DCORE_MN = 0 if NCORE = 0 for all patches. Increases, without limit, as the number of disjunct core areas increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_ncore, mean, lsm_c_dcore_mn, lsm_c_dcore_sd, lsm_c_dcore_cv, lsm_l_dcore_sd, lsm_l_dcore_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_dcore_mn(landscape) lsm_l_dcore_sd DCORE_SD (landscape level) Description Standard deviation number of disjunct core areas (Core area metric) Usage lsm_l_dcore_sd( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details DCORESD = sd(N CORE[patchij ]) where N CORE[patchij ] is the number of core areas. DCORE_SD is an ’Core area metric’. It summarises the landscape as the standard deviation of all patches. A cell is defined as core if the cell has no neighbour with a different value than itself (rook’s case). NCORE counts the disjunct core areas, whereby a core area is a ’patch within the patch’ containing only core cells. The metric describes the differences among all patches in the landscape. Units: None Range: DCORE_SD >= 0 Behaviour: Equals DCORE_SD = 0 if all patches have the same number of disjunct core areas. Increases, without limit, as the variation of number of disjunct core areas increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also

lsm_p_ncore, sd, lsm_c_dcore_mn, lsm_c_dcore_sd, lsm_c_dcore_cv, lsm_l_dcore_mn, lsm_l_dcore_cv

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_dcore_sd(landscape)

lsm_l_division DIVISION (landscape level)

Description

Landscape division index (Aggregation metric)

Usage

lsm_l_division(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

DIVISION = 1 - sum(i=1..m) sum(j=1..n) (a_ij / A)^2

where a_ij is the area in square meters and A is the total landscape area in square meters.

DIVISION is an 'Aggregation metric'. It can be interpreted as the probability that two randomly selected cells are not located in the same patch. The landscape division index is negatively correlated with the effective mesh size (lsm_c_mesh).

Units: Proportion
Ranges: 0 <= DIVISION < 1
Behaviour: Equals DIVISION = 0 if only one patch is present. Approaches DIVISION = 1 if all patches of class i are single cells.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>. 2000. Landscape division, splitting index, and effective mesh size: new measures of landscape fragmentation. Landscape ecology, 15(2), 115-130.

See Also

lsm_p_area, lsm_l_ta, lsm_c_division

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_division(landscape)

lsm_l_ed ED (landscape level)

Description

Edge Density (Area and Edge metric)

Usage

lsm_l_ed(landscape, count_boundary = FALSE, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
count_boundary Count landscape boundary as edge
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

ED = E / A * 10000

where E is the total landscape edge in meters and A is the total landscape area in square meters.

ED is an 'Area and Edge metric'. The edge density equals all edges in the landscape in relation to the landscape area. The boundary of the landscape is only included in the corresponding total class edge length if count_boundary = TRUE. The metric describes the configuration of the landscape, e.g. because an overall aggregation of classes will result in a low edge density. The metric is standardized to the total landscape area, and therefore comparisons among landscapes with different total areas are possible.

Units: Meters per hectare
Range: ED >= 0
Behaviour: Equals ED = 0 if only one patch is present (and the landscape boundary is not included) and increases, without limit, as the landscape becomes more patchy

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also

lsm_l_te, lsm_l_ta, lsm_c_ed

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_ed(landscape)
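As a cross-check of the ED definition above, the metric can be recomputed from its building blocks; this is a minimal sketch, assuming that lsm_l_te reports the total edge length in meters and lsm_l_ta the total landscape area in hectares, so that ED in meters per hectare is simply their ratio:

landscape <- terra::rast(landscapemetrics::landscape)
te <- lsm_l_te(landscape)$value   # total edge length in meters (count_boundary = FALSE, as in lsm_l_ed)
ta <- lsm_l_ta(landscape)$value   # total landscape area, assumed to be in hectares
te / ta                           # expected to be close to lsm_l_ed(landscape)$value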
lsm_l_enn_cv ENN_CV (landscape level)

Description

Coefficient of variation of euclidean nearest-neighbor distance (Aggregation metric)

Usage

lsm_l_enn_cv(landscape, directions = 8, verbose = TRUE)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
verbose Print warning message if not sufficient patches are present

Details

ENN_CV = cv(ENN[patch_ij])

where ENN[patch_ij] is the euclidean nearest-neighbor distance of each patch.

ENN_CV is an 'Aggregation metric'. It summarises the landscape as the Coefficient of variation of all patches in the landscape. ENN measures the distance to the nearest neighbouring patch of the same class i. The distance is measured from edge-to-edge. The range is limited by the cell resolution on the lower limit and the landscape extent on the upper limit. The metric is a simple way to describe patch isolation. Because it is scaled to the mean, it is easily comparable among different landscapes.

Units: Meters
Range: ENN_CV >= 0
Behaviour: Equals ENN_CV = 0 if the euclidean nearest-neighbor distance is identical for all patches. Increases, without limit, as the variation of ENN increases.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>., and <NAME>. (1995). Relationships between landscape structure and breeding birds in the Oregon Coast Range. Ecological monographs, 65(3), 235-260.

See Also

lsm_p_enn, lsm_c_enn_mn, lsm_c_enn_sd, lsm_c_enn_cv, lsm_l_enn_mn, lsm_l_enn_sd

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_enn_cv(landscape)

lsm_l_enn_mn ENN_MN (landscape level)

Description

Mean of euclidean nearest-neighbor distance (Aggregation metric)

Usage

lsm_l_enn_mn(landscape, directions = 8, verbose = TRUE)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
verbose Print warning message if not sufficient patches are present

Details

ENN_MN = mean(ENN[patch_ij])

where ENN[patch_ij] is the euclidean nearest-neighbor distance of each patch.

ENN_MN is an 'Aggregation metric'. It summarises the landscape as the mean of the euclidean nearest-neighbor distance of all patches in the landscape. ENN measures the distance to the nearest neighbouring patch of the same class i. The distance is measured from edge-to-edge. The range is limited by the cell resolution on the lower limit and the landscape extent on the upper limit.

Units: Meters
Range: ENN_MN > 0
Behaviour: Approaches ENN_MN = 0 as the distance to the nearest neighbour decreases, i.e. patches of the same class i are more aggregated. Increases, without limit, as the distance between neighbouring patches of the same class i increases, i.e. patches are more isolated.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps.
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. (1995). Relationships between landscape structure and breed- ing birds in the Oregon Coast Range. Ecological monographs, 65(3), 235-260. See Also lsm_p_enn, mean, lsm_c_enn_mn, lsm_c_enn_sd, lsm_c_enn_cv, lsm_l_enn_sd, lsm_l_enn_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_enn_mn(landscape) lsm_l_enn_sd ENN_SD (landscape level) Description Standard deviation of euclidean nearest-neighbor distance (Aggregation metric) Usage lsm_l_enn_sd(landscape, directions = 8, verbose = TRUE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). verbose Print warning message if not sufficient patches are present Details EN NSD = sd(EN N [patchij ]) where EN N [patchij ] is the euclidean nearest-neighbor distance of each patch. ENN_CV is an ’Aggregation metric’. It summarises in the landscape as the standard deviation of all patches in the landscape. ENN measures the distance to the nearest neighbouring patch of the same class i. The distance is measured from edge-to-edge. The range is limited by the cell resolution on the lower limit and the landscape extent on the upper limit. The metric is a simple way to describe patch isolation. Because it is scaled to the mean, it is easily comparable among different landscapes. Units: Meters Range: ENN_SD >= 0 Behaviour: Equals ENN_SD = 0 if the euclidean nearest-neighbor distance is identical for all patches. Increases, without limit, as the variation of ENN increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. (1995). Relationships between landscape structure and breed- ing birds in the Oregon Coast Range. Ecological monographs, 65(3), 235-260. See Also lsm_p_enn, sd, lsm_c_enn_mn, lsm_c_enn_sd, lsm_c_enn_cv, lsm_l_enn_mn, lsm_l_enn_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_enn_sd(landscape) lsm_l_ent ENT (landscape level) Description Marginal entropy \[H(x)\] Usage lsm_l_ent(landscape, neighbourhood = 4, base = "log2") Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. neighbourhood The number of directions in which cell adjacencies are considered as neigh- bours: 4 (rook’s case) or 8 (queen’s case). The default is 4. base The unit in which entropy is measured. The default is "log2", which compute entropy in "bits". "log" and "log10" can be also used. Details It measures a diversity (thematic complexity) of landscape classes. Value tibble References <NAME>., <NAME>. 2019. Information theory as a consistent framework for quantification and classification of landscape patterns. 
https://doi.org/10.1007/s10980-019-00830-x See Also lsm_l_condent, lsm_l_mutinf, lsm_l_joinent, lsm_l_relmutinf Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_ent(landscape) lsm_l_frac_cv FRAC_CV (landscape level) Description Coefficient of variation fractal dimension index (Shape metric) Usage lsm_l_frac_cv(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details F RACCV = cv(F RAC[patchij ]) where F RAC[patchij ] equals the fractal dimension index of each patch. FRAC_CV is a ’Shape metric’. The metric summarises the landscape as the Coefficient of variation of the fractal dimension index of all patches in the landscape. The fractal dimension index is based on the patch perimeter and the patch area and describes the patch complexity. The Coefficient of variation is scaled to the mean and comparable among different landscapes. Units: None Range: FRAC_CV >= 0 Behaviour: Equals FRAC_CV = 0 if the fractal dimension index is identical for all patches. Increases, without limit, as the variation of the fractal dimension indices increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1977. Fractals: Form, Chance, and Dimension. San Francisco. W. H. Freeman and Company. See Also lsm_p_frac, lsm_c_frac_mn, lsm_c_frac_sd, lsm_c_frac_cv, lsm_l_frac_mn, lsm_l_frac_sd, Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_frac_cv(landscape) lsm_l_frac_mn FRAC_MN (landscape level) Description Mean fractal dimension index (Shape metric) Usage lsm_l_frac_mn(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details F RACM N = mean(F RAC[patchij ]) where F RAC[patchij ] equals the fractal dimension index of each patch. FRAC_MN is a ’Shape metric’. The metric summarises the landscape as the mean of the fractal dimension index of all patches in the landscape. The fractal dimension index is based on the patch perimeter and the patch area and describes the patch complexity. The Coefficient of variation is scaled to the mean and comparable among different landscapes. Units: None Range: FRAC_MN > 0 Behaviour: Approaches FRAC_MN = 1 if all patches are squared and FRAC_MN = 2 if all patches are irregular. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1977. Fractals: Form, Chance, and Dimension. San Francisco. W. H. Freeman and Company. 
See Also lsm_p_frac, mean, lsm_c_frac_mn, lsm_c_frac_sd, lsm_c_frac_cv, lsm_l_frac_sd, lsm_l_frac_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_frac_mn(landscape) lsm_l_frac_sd FRAC_SD (landscape level) Description Standard deviation fractal dimension index (Shape metric) Usage lsm_l_frac_sd(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details F RACSD = sd(F RAC[patchij ]) where F RAC[patchij ] equals the fractal dimension index of each patch. FRAC_SD is a ’Shape metric’. The metric summarises the landscape as the standard deviation of the fractal dimension index of all patches in the landscape. The fractal dimension index is based on the patch perimeter and the patch area and describes the patch complexity. Units: None Range: FRAC_SD>= 0 Behaviour: Equals FRAC_SD = 0 if the fractal dimension index is identical for all patches. Increases, without limit, as the variation of the fractal dimension indices increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1977. Fractals: Form, Chance, and Dimension. San Francisco. W. H. Freeman and Company. See Also lsm_p_frac, sd, lsm_c_frac_mn, lsm_c_frac_sd, lsm_c_frac_cv, lsm_l_frac_mn, lsm_l_frac_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_frac_sd(landscape) lsm_l_gyrate_cv GYRATE_CV (landscape level) Description Coefficient of variation radius of gyration (Area and edge metric) Usage lsm_l_gyrate_cv(landscape, directions = 8, cell_center = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. Details GY RAT ECV = cv(GY RAT E[patchij ]) where GY RAT E[patchij ] equals the radius of gyration of each patch. GYRATE_CV is an ’Area and edge metric’. The metric summarises the landscape as the Coefficient of variation of the radius of gyration of all patches in the landscape. GYRATE measures the distance from each cell to the patch centroid and is based on cell center-to-cell center distances. The metrics characterises both the patch area and compactness. The Coefficient of variation is scaled to the mean and comparable among different landscapes. If cell_center = TRUE some patches might have several possible cell-center centroids. In this case, the gyrate index is based on the mean distance of all cells to all possible cell-center centroids. Units: Meters Range: GYRATE_CV >= 0 Behaviour: Equals GYRATE_CV = 0 if the radius of gyration is identical for all patches. In- creases, without limit, as the variation of the radius of gyration increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., <NAME>., & <NAME>. 1997. Detecting critical scales in fragmented landscapes. Conservation ecology, 1(1). 
See Also lsm_p_gyrate, lsm_c_gyrate_mn, lsm_c_gyrate_sd, lsm_c_gyrate_cv, lsm_l_gyrate_mn, lsm_l_gyrate_sd Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_gyrate_cv(landscape) lsm_l_gyrate_mn GYRATE_MN (landscape level) Description Mean radius of gyration (Area and edge metric) Usage lsm_l_gyrate_mn(landscape, directions = 8, cell_center = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. Details GY RAT EM N = mean(GY RAT E[patchij ]) where GY RAT E[patchij ] equals the radius of gyration of each patch. GYRATE_MN is an ’Area and edge metric’. The metric summarises the landscape as the mean of the radius of gyration of all patches in the landscape. GYRATE measures the distance from each cell to the patch centroid and is based on cell center-to-cell center distances. The metrics characterises both the patch area and compactness. If cell_center = TRUE some patches might have several possible cell-center centroids. In this case, the gyrate index is based on the mean distance of all cells to all possible cell-center centroids. Units: Meters Range: GYRATE_MN >= 0 Behaviour: Approaches GYRATE_MN = 0 if every patch is a single cell. Increases, without limit, when only one patch is present. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., <NAME>., & <NAME>. 1997. Detecting critical scales in fragmented landscapes. Conservation ecology, 1(1). See Also lsm_p_gyrate, mean, lsm_c_gyrate_mn, lsm_c_gyrate_sd, lsm_c_gyrate_cv, lsm_l_gyrate_sd, lsm_l_gyrate_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_gyrate_mn(landscape) lsm_l_gyrate_sd GYRATE_SD (landscape level) Description Standard deviation radius of gyration (Area and edge metric) Usage lsm_l_gyrate_sd(landscape, directions = 8, cell_center = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. Details GY RAT ESD = sd(GY RAT E[patchij ]) where GY RAT E[patchij ] equals the radius of gyration of each patch. GYRATE_SD is an ’Area and edge metric’. The metric summarises the landscape as the standard deviation of the radius of gyration of all patches in the landscape. GYRATE measures the distance from each cell to the patch centroid and is based on cell center-to-cell center distances. The metrics characterises both the patch area and compactness. If cell_center = TRUE some patches might have several possible cell-center centroids. In this case, the gyrate index is based on the mean distance of all cells to all possible cell-center centroids. Units: Meters Range: GYRATE_SD >= 0 Behaviour: Equals GYRATE_SD = 0 if the radius of gyration is identical for all patches. In- creases, without limit, as the variation of the radius of gyration increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. 
FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>., <NAME>., & <NAME>. 1997. Detecting critical scales in fragmented landscapes. Conservation ecology, 1(1).

See Also

lsm_p_gyrate, lsm_c_gyrate_mn, lsm_c_gyrate_sd, lsm_c_gyrate_cv, lsm_l_gyrate_mn, lsm_l_gyrate_cv

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_gyrate_sd(landscape)

lsm_l_iji Interspersion and Juxtaposition index (landscape level)

Description

Interspersion and Juxtaposition index (Aggregation metric)

Usage

lsm_l_iji(landscape, verbose = TRUE)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
verbose Print warning message if not sufficient patches are present

Details

IJI = -(sum(i=1..m) sum(k=i+1..m) [(e_ik / E) * ln(e_ik / E)]) / ln(0.5 * m * (m - 1)) * 100

where e_ik are the unique adjacencies of all classes (lower/upper triangle of the adjacency table - without the diagonal), E is the total length of edges in the landscape and m is the number of classes.

IJI is an 'Aggregation metric'. It is a so-called "salt and pepper" metric and describes the intermixing of classes (i.e. without considering like adjacencies - the diagonal of the adjacency table). The number of classes to calculate IJI must be at least 3.

Units: Percent
Range: 0 < IJI <= 100
Behaviour: Approaches 0 if a class is only adjacent to a single other class and equals 100 when a class is equally adjacent to all other classes.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>., & <NAME>. 1995. FRAGSTATS: spatial pattern analysis program for quantifying landscape structure. Gen. Tech. Rep. PNW-GTR-351. Portland, OR: US Department of Agriculture, Forest Service, Pacific Northwest Research Station. 122 p, 351.

See Also

lsm_c_iji

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_iji(landscape)

lsm_l_joinent JOINENT (landscape level)

Description

Joint entropy \[H(x, y)\]

Usage

lsm_l_joinent(landscape, neighbourhood = 4, ordered = TRUE, base = "log2")

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
neighbourhood The number of directions in which cell adjacencies are considered as neighbours: 4 (rook's case) or 8 (queen's case). The default is 4.
ordered The type of pairs considered. Either ordered (TRUE) or unordered (FALSE). The default is TRUE.
base The unit in which entropy is measured. The default is "log2", which computes entropy in "bits". "log" and "log10" can also be used.

Details

Complexity of a landscape pattern. An overall spatio-thematic complexity metric.

Value

tibble

References

<NAME>., <NAME>. 2019. Information theory as a consistent framework for quantification and classification of landscape patterns. https://doi.org/10.1007/s10980-019-00830-x

See Also

lsm_l_ent, lsm_l_condent, lsm_l_mutinf, lsm_l_relmutinf

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_joinent(landscape)
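The joint entropy can be related to the other information-theoretical metrics referenced above. The following sketch assumes the implementation follows the usual chain rule of entropy, H(x, y) = H(x) + H(y|x), so that lsm_l_joinent should approximately equal the sum of lsm_l_ent and lsm_l_condent when the same neighbourhood and base settings are used:

landscape <- terra::rast(landscapemetrics::landscape)
ent     <- lsm_l_ent(landscape, neighbourhood = 4, base = "log2")$value      # marginal entropy H(x)
condent <- lsm_l_condent(landscape, neighbourhood = 4, base = "log2")$value  # conditional entropy H(y|x)
joinent <- lsm_l_joinent(landscape, neighbourhood = 4, base = "log2")$value  # joint entropy H(x, y)
ent + condent   # expected to be close to joinent, if the chain rule assumption holds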
lsm_l_lpi LPI (landscape level)

Description

Largest patch index (Area and Edge metric)

Usage

lsm_l_lpi(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

LPI = max(a_ij) / A * 100

where max(a_ij) is the area of the largest patch in square meters and A is the total landscape area in square meters.

The largest patch index is an 'Area and edge metric'. It is the percentage of the landscape covered by the largest patch in the landscape. It is a simple measure of dominance.

Units: Percentage
Range: 0 < LPI <= 100
Behaviour: Approaches LPI = 0 when the largest patch is becoming small and equals LPI = 100 when only one patch is present

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also

lsm_p_area, lsm_l_ta, lsm_c_lpi

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_lpi(landscape)

lsm_l_lsi LSI (landscape level)

Description

Landscape shape index (Aggregation metric)

Usage

lsm_l_lsi(landscape)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.

Details

LSI = E / min E

where E is the total edge length in cell surfaces and min E is the minimum total edge length in cell surfaces.

LSI is an 'Aggregation metric'. It is the ratio between the actual landscape edge length and the hypothetical minimum edge length. The minimum edge length equals the edge length if only one patch would be present.

Units: None
Ranges: LSI >= 1
Behaviour: Equals LSI = 1 when only one squared patch is present. Increases, without limit, as the length of the actual edges increases, i.e. the patches become less compact.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc. Bull. 3:171-173.

See Also

lsm_p_shape, lsm_c_lsi

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_lsi(landscape)

lsm_l_mesh MESH (landscape level)

Description

Effective Mesh Size (Aggregation metric)

Usage

lsm_l_mesh(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

MESH = (sum(i=1..m) sum(j=1..n) a_ij^2 / A) * (1 / 10000)

where a_ij is the patch area in square meters and A is the total landscape area in square meters.

The effective mesh size is an 'Aggregation metric'. Because each patch is squared before the sum is calculated and the sum is standardized by the total landscape area, MESH is a relative measure of patch structure. MESH is perfectly, negatively correlated to lsm_c_division.

Units: Hectares
Range: cell size / total area <= MESH <= total area
Behaviour: Equals cellsize / total area if class covers only one cell and equals total area if only one patch is present.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>. 2000. Landscape division, splitting index, and effective mesh size: new measures of landscape fragmentation. Landscape ecology, 15(2), 115-130.

See Also

lsm_p_area, lsm_l_ta, lsm_c_mesh

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_mesh(landscape)
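The inverse relationship between MESH and the landscape division index can be made explicit. This is a minimal sketch; it assumes that both lsm_l_mesh and lsm_l_ta report hectares, in which case DIVISION = 1 - MESH / TA:

landscape <- terra::rast(landscapemetrics::landscape)
mesh <- lsm_l_mesh(landscape)$value   # effective mesh size, assumed to be in hectares
ta   <- lsm_l_ta(landscape)$value     # total landscape area, assumed to be in hectares
1 - mesh / ta                         # expected to be close to lsm_l_division(landscape)$value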
lsm_l_msidi MSIDI (landscape level)

Description

Modified Simpson's diversity index (Diversity metric)

Usage

lsm_l_msidi(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

MSIDI = -ln(sum(i=1..m) P_i^2)

where P_i is the landscape area proportion of class i.

MSIDI is a 'Diversity metric'.

Units: None
Range: MSIDI >= 0
Behaviour: MSIDI = 0 if only one patch is present and increases, without limit, as the amount of patches with equally distributed landscape proportions increases

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>. 1949. Measurement of diversity. Nature 163:688

<NAME>. 1975. Ecological Diversity. Wiley-Interscience, New York.

<NAME>. 1982. Fire and landscape diversity in subalpine forests of Yellowstone National Park. Ecol. Monogr. 52:199-221

See Also

lsm_l_sidi

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_msidi(landscape)

lsm_l_msiei MSIEI (landscape level)

Description

Modified Simpson's evenness index (Diversity metric)

Usage

lsm_l_msiei(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

MSIEI = -ln(sum(i=1..m) P_i^2) / ln(m)

where P_i is the landscape area proportion of class i.

MSIEI is a 'Diversity metric'.

Units: None
Range: 0 <= MSIEI < 1
Behaviour: MSIEI = 0 when only one patch is present and approaches MSIEI = 1 as the proportional distribution of patches becomes more even

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>. 1949. Measurement of diversity. Nature 163:688

Pielou, <NAME>. 1975. Ecological Diversity. Wiley-Interscience, New York.

See Also

lsm_l_siei

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_msiei(landscape)

lsm_l_mutinf MUTINF (landscape level)

Description

Mutual information \[I(y,x)\]

Usage

lsm_l_mutinf(landscape, neighbourhood = 4, ordered = TRUE, base = "log2")

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
neighbourhood The number of directions in which cell adjacencies are considered as neighbours: 4 (rook's case) or 8 (queen's case). The default is 4.
ordered The type of pairs considered. Either ordered (TRUE) or unordered (FALSE). The default is TRUE.
base The unit in which entropy is measured. The default is "log2", which computes entropy in "bits". "log" and "log10" can also be used.

Details

It disambiguates landscape pattern types characterized by the same value of an overall complexity (lsm_l_joinent).

Value

tibble

References

<NAME>., <NAME>. 2019. Information theory as a consistent framework for quantification and classification of landscape patterns.
https://doi.org/10.1007/s10980-019-00830-x See Also lsm_l_ent, lsm_l_condent, lsm_l_joinent, lsm_l_relmutinf Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_mutinf(landscape) lsm_l_ndca NDCA (landscape level) Description Number of disjunct core areas (Core area metric) Usage lsm_l_ndca( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details Xm X n N DCA = ncore ij where ncore ij is the number of disjunct core areas. NDCA is a ’Core area metric’. The metric summarises the landscape as the sum of all patches in the landscape. A cell is defined as core if the cell has no neighbour with a different value than itself (rook’s case). NDCA counts the disjunct core areas, whereby a core area is a ’patch within the patch’ containing only core cells. It describes patch area and shape simultaneously (more core area when the patch is large, however, the shape must allow disjunct core areas). Thereby, a compact shape (e.g. a square) will contain less disjunct core areas than a more irregular patch. Units: None Range: NDCA >= 0 Behaviour: NDCA = 0 when TCA = 0, i.e. every cell in the landscape is an edge cell. NDCA increases, with out limit, as core area increases and patch shapes allow disjunct core areas (i.e. patch shapes become rather complex). Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_c_tca, lsm_p_ncore, lsm_c_ndca Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_ndca(landscape) lsm_l_np NP (landscape level) Description Number of patches (Aggregation metric) Usage lsm_l_np(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details NP = N where N is the number of patches. NP is an ’Aggregation metric’. It describes the fragmentation of the landscape, however, does not necessarily contain information about the configuration or composition of the landscape. Units: None Ranges: NP >= 1 Behaviour: Equals NP = 1 when only one patch is present and increases, without limit, as the number of patches increases Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_c_np Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_np(landscape) lsm_l_pafrac PAFRAC (landscape level) Description Perimeter-Area Fractal Dimension (Shape metric) Usage lsm_l_pafrac(landscape, directions = 8, verbose = TRUE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. 
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
verbose Print warning message if not sufficient patches are present

Details

PAFRAC = 2 / beta

where beta is the slope of the regression of the logarithm of patch area against the logarithm of patch perimeter, i.e. ln(a_ij) = a + beta * ln(p_ij), fitted over all patches in the landscape.

PAFRAC is a 'Shape metric'. It describes the patch complexity of the landscape while being scale independent. This means that increasing the patch size while not changing the patch form will not change the metric. However, it is only meaningful if the relationship between the area and perimeter is linear on a logarithmic scale. Furthermore, if there are less than 10 patches in the landscape, the metric returns NA because of the small-sample issue.

Units: None
Range: 1 <= PAFRAC <= 2
Behaviour: Approaches PAFRAC = 1 for patches with simple shapes and approaches PAFRAC = 2 for irregular shapes

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

<NAME>. 1986. Principles of Geographical Information Systems for Land Resources Assessment. Monographs on Soil and Resources Survey No. 12. Clarendon Press, Oxford

See Also

lsm_p_area, lsm_p_perim, lsm_c_pafrac

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_pafrac(landscape)

lsm_l_para_cv PARA_CV (landscape level)

Description

Coefficient of variation perimeter-area ratio (Shape metric)

Usage

lsm_l_para_cv(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

PARA_CV = cv(PARA[patch_ij])

where PARA[patch_ij] is the perimeter-area ratio of each patch.

PARA_CV is a 'Shape metric'. It summarises the landscape as the Coefficient of variation of the perimeter-area ratio of all patches belonging to the landscape. The perimeter-area ratio describes the patch complexity in a straightforward way. However, because it is not standardised to a certain shape (e.g. a square), it is not scale independent, meaning that increasing the patch size while not changing the patch form will change the ratio.

Units: None
Range: PARA_CV >= 0
Behaviour: Equals PARA_CV = 0 if the perimeter-area ratio is identical for all patches. Increases, without limit, as the variation of the perimeter-area ratio increases.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also

lsm_p_para, lsm_c_para_mn, lsm_c_para_sd, lsm_c_para_cv, lsm_l_para_mn, lsm_l_para_sd

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_para_cv(landscape)
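Because the coefficient of variation is scaled to the mean, it can be recovered from the corresponding mean and standard deviation metrics. A minimal sketch, assuming the CV metrics follow the FRAGSTATS convention of sd / mean * 100:

landscape <- terra::rast(landscapemetrics::landscape)
para_mn <- lsm_l_para_mn(landscape)$value   # mean perimeter-area ratio
para_sd <- lsm_l_para_sd(landscape)$value   # standard deviation of the perimeter-area ratio
para_sd / para_mn * 100                     # expected to be close to lsm_l_para_cv(landscape)$value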
lsm_l_para_mn PARA_MN (landscape level)

Description

Mean perimeter-area ratio (Shape metric)

Usage

lsm_l_para_mn(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

PARA_MN = mean(PARA[patch_ij])

where PARA[patch_ij] is the perimeter-area ratio of each patch.

PARA_MN is a 'Shape metric'. It summarises the landscape as the mean of the perimeter-area ratio of all patches in the landscape. The perimeter-area ratio describes the patch complexity in a straightforward way. However, because it is not standardised to a certain shape (e.g. a square), it is not scale independent, meaning that increasing the patch size while not changing the patch form will change the ratio.

Units: None
Range: PARA_MN > 0
Behaviour: Approaches PARA_MN > 0 if PARA for each patch approaches PARA > 0, i.e. the form approaches a rather small square. Increases, without limit, as PARA increases, i.e. patches become more complex.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also

lsm_p_para, mean, lsm_c_para_mn, lsm_c_para_sd, lsm_c_para_cv, lsm_l_para_sd, lsm_l_para_cv

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_para_mn(landscape)

lsm_l_para_sd PARA_SD (landscape level)

Description

Standard deviation perimeter-area ratio (Shape metric)

Usage

lsm_l_para_sd(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

PARA_SD = sd(PARA[patch_ij])

where PARA[patch_ij] is the perimeter-area ratio of each patch.

PARA_SD is a 'Shape metric'. It summarises the landscape as the standard deviation of the perimeter-area ratio of all patches in the landscape. The perimeter-area ratio describes the patch complexity in a straightforward way. However, because it is not standardised to a certain shape (e.g. a square), it is not scale independent, meaning that increasing the patch size while not changing the patch form will change the ratio.

Units: None
Range: PARA_SD >= 0
Behaviour: Equals PARA_SD = 0 if the perimeter-area ratio is identical for all patches. Increases, without limit, as the variation of the perimeter-area ratio increases.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also

lsm_p_para, sd, lsm_c_para_mn, lsm_c_para_sd, lsm_c_para_cv, lsm_l_para_mn, lsm_l_para_cv

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_para_sd(landscape)

lsm_l_pd PD (landscape level)

Description

Patch density (Aggregation metric)

Usage

lsm_l_pd(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

PD = N / A * 10000 * 100

where N is the number of patches and A is the total landscape area in square meters.

PD is an 'Aggregation metric'. It describes the fragmentation of the landscape, however, does not necessarily contain information about the configuration or composition of the landscape. In contrast to lsm_l_np it is standardized to the area and comparisons among landscapes with different total area are possible.

Units: Number per 100 hectares
Ranges: 0 < PD <= 1e+06
Behaviour: Increases as the landscape gets more patchy. Reaches its maximum if every cell is a different patch.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

See Also

lsm_c_np, lsm_l_ta, lsm_c_pd

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_pd(landscape)
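The standardisation of PD to the landscape area can be illustrated directly. A minimal sketch, assuming lsm_l_ta reports the total area in hectares, so that PD (per 100 ha) equals NP / TA * 100:

landscape <- terra::rast(landscapemetrics::landscape)
np <- lsm_l_np(landscape)$value   # number of patches
ta <- lsm_l_ta(landscape)$value   # total landscape area, assumed to be in hectares
np / ta * 100                     # expected to be close to lsm_l_pd(landscape)$value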
lsm_l_pladj PLADJ (landscape level)

Description

Percentage of Like Adjacencies (Aggregation metric)

Usage

lsm_l_pladj(landscape)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.

Details

PLADJ = (g_ii / sum(k=1..m) g_ik) * 100

where g_ii is the number of adjacencies between cells of class i and g_ik is the number of adjacencies between cells of class i and k.

PLADJ is an 'Aggregation metric'. It calculates the frequency of how often patches of different classes i (focal class) and k are next to each other, and is hence a measure of class aggregation. The adjacencies are counted using the double-count method.

Units: Percent
Ranges: 0 <= PLADJ <= 100
Behaviour: Equals PLADJ = 0 if class i is maximally disaggregated, i.e. every cell is a different patch. Equals PLADJ = 100 when only one patch is present.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_pladj(landscape)

lsm_l_pr PR (landscape level)

Description

Patch richness (Diversity metric)

Usage

lsm_l_pr(landscape)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.

Details

PR = m

where m is the number of classes.

PR is a 'Diversity metric'. It is one of the simplest diversity and composition measures. However, because of its absolute nature, it is not comparable among landscapes with different total areas.

Units: None
Range: PR >= 1
Behaviour: Equals PR = 1 when only one patch is present and increases, without limit, as the number of classes increases

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org

Examples

landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_pr(landscape)

lsm_l_prd PRD (landscape level)

Description

Patch richness density (Diversity metric)

Usage

lsm_l_prd(landscape, directions = 8)

Arguments

landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).

Details

PRD = m / A * 10000 * 100

where m is the number of classes and A is the total landscape area in square meters.

PRD is a 'Diversity metric'. It is one of the simplest diversity and composition measures. In contrast to lsm_l_pr, it is a relative measure and therefore comparable among landscapes with different total landscape areas.

Units: Number per 100 hectares
Range: PRD > 0
Behaviour: Approaches PRD > 1 when only one patch is present and the landscape is rather large. Increases, without limit, as the number of classes increases and the total landscape area decreases.

Value

tibble

References

<NAME>., <NAME>, and <NAME>. 2023.
FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_prd(landscape) lsm_l_relmutinf RELMUTINF (landscape level) Description Relative mutual information Usage lsm_l_relmutinf(landscape, neighbourhood = 4, ordered = TRUE, base = "log2") Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. neighbourhood The number of directions in which cell adjacencies are considered as neigh- bours: 4 (rook’s case) or 8 (queen’s case). The default is 4. ordered The type of pairs considered. Either ordered (TRUE) or unordered (FALSE). The default is TRUE. base The unit in which entropy is measured. The default is "log2", which compute entropy in "bits". "log" and "log10" can be also used. Details Due to the spatial autocorrelation, the value of mutual information tends to grow with a diversity of the landscape (marginal entropy). To adjust this tendency, it is possible to calculate relative mutual information by dividing the mutual information by the marginal entropy. Relative mutual information always has a range between 0 and 1 and can be used to compare spatial data with different number and distribution of categories. When the value of mutual information equals to 0, then relative mutual information is 1. Value tibble References <NAME>., <NAME>. 2019. Information theory as a consistent framework for quantification and classification of landscape patterns. https://doi.org/10.1007/s10980-019-00830-x See Also lsm_l_ent, lsm_l_condent, lsm_l_joinent, lsm_l_mutinf Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_relmutinf(landscape) lsm_l_rpr RPD (landscape level) Description Relative patch richness (Diversity metric) Usage lsm_l_rpr(landscape, classes_max = NULL, verbose = TRUE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. classes_max Potential maximum number of present classes verbose Print warning message if not sufficient patches are present Details m RP R = ∗ 100 mmax where m is the number of classes and mmax is the (theoretical) maximum number of classes. RPR is an ’Diversity metric’. The metric calculates the percentage of present classes in the land- scape in relation to a (theoretical) number of maximum classes. The user has to specify the maxi- mum number of classes. Note, that if classes_max is not provided, the functions returns NA. Units: Percentage Ranges: 0 < RPR <= 100 Behaviour: Approaches RPR > 0 when only one class type is present, but the maximum number of classes is large. Equals RPR = 100 when m = m_max Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1982. Fire and landscapediversity in subalpine forests of Yellowstone National Park.Ecol.Monogr. 52:199-221 Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_rpr(landscape, classes_max = 5) lsm_l_shape_cv SHAPE_CV (landscape level) Description Coefficient of variation shape index (Shape metric) Usage lsm_l_shape_cv(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. 
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details SHAP ECV = cv(SHAP E[patchij ]) where SHAP E[patchij ] is the shape index of each patch. SHAPE_CV is a ’Shape metric’. The landscape is summarised as the Coefficient of variation of all patches in the landscape. SHAPE describes the ratio between the actual perimeter of the patch and the square root of patch area. Units: None Range: SHAPE_CV >= 0 Behaviour: Equals SHAPE_CV = 0 if all patches have an identical shape index. Increases, without limit, as the variation of the shape index increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc.Bull. 3:171-173. See Also lsm_p_shape, lsm_c_shape_mn, lsm_c_shape_sd, lsm_c_shape_cv, lsm_l_shape_mn, lsm_l_shape_sd Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_shape_cv(landscape) lsm_l_shape_mn SHAPE_MN (landscape level) Description Mean shape index (Shape metric) Usage lsm_l_shape_mn(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details SHAP EM N = mean(SHAP E[patchij ]) where SHAP E[patchij ] is the shape index of each patch. SHAPE_MN is a ’Shape metric’. The landscape is summarised as the mean of all patches in the landscape. SHAPE describes the ratio between the actual perimeter of the patch and the square root of patch area. Units: None Range: SHAPE_SD >= 1 Behaviour: Equals SHAPE_MN = 1 if all patches are squares. Increases, without limit, as the shapes of patches become more complex. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc.Bull. 3:171-173. See Also lsm_p_shape, mean, lsm_c_shape_mn, lsm_c_shape_sd, lsm_c_shape_cv, lsm_l_shape_sd, lsm_l_shape_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_shape_mn(landscape) lsm_l_shape_sd SHAPE_SD (landscape level) Description Standard deviation shape index (Shape metric) Usage lsm_l_shape_sd(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details SHAP ESD = sd(SHAP E[patchij ]) where SHAP E[patchij ] is the shape index of each patch. SHAPE_SD is a ’Shape metric’. The landscape summarised as the standard deviation of all patches in the landscape. SHAPE describes the ratio between the actual perimeter of the patch and the square root of patch area. Units: None Range: SHAPE_SD >= 0 Behaviour: Equals SHAPE_SD = 0 if all patches have an identical shape index. Increases, without limit, as the variation of the shape index increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc. Bull. 3:171-173.
See Also
lsm_p_shape, sd, lsm_c_shape_mn, lsm_c_shape_sd, lsm_c_shape_cv, lsm_l_shape_mn, lsm_l_shape_cv
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_shape_sd(landscape)
lsm_l_shdi SHDI (landscape level)
Description
Shannon's diversity index (Diversity metric)
Usage
lsm_l_shdi(landscape)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
Details
SHDI = -\sum_{i=1}^{m} P_i \ln P_i
where P_i is the proportion of class i.
SHDI is a 'Diversity metric'. It is a widely used metric in biodiversity and ecology and takes both the number of classes and the abundance of each class into account.
Units: None
Range: SHDI >= 0
Behaviour: Equals SHDI = 0 when only one patch is present and increases, without limit, as the number of classes increases while the proportions are equally distributed.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., and <NAME>. 1949. The mathematical theory of communication. Univ. Illinois Press, Urbana
See Also
lsm_c_pland
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_shdi(landscape)
lsm_l_shei SHEI (landscape level)
Description
Shannon's evenness index (Diversity metric)
Usage
lsm_l_shei(landscape)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
Details
SHEI = \frac{-\sum_{i=1}^{m} P_i \ln P_i}{\ln m}
where P_i is the proportion of class i and m is the number of classes.
SHEI is a 'Diversity metric'. It is the ratio between the actual Shannon's diversity index and the theoretical maximum of the Shannon diversity index. It can be understood as a measure of dominance.
Units: None
Range: 0 <= SHEI < 1
Behaviour: Equals SHEI = 0 when only one patch is present and equals SHEI = 1 when the proportions of the classes are completely equally distributed.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>., and <NAME>. 1949. The mathematical theory of communication. Univ. Illinois Press, Urbana
See Also
lsm_c_pland, lsm_l_pr
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_shei(landscape)
lsm_l_sidi SIDI (landscape level)
Description
Simpson's diversity index (Diversity metric)
Usage
lsm_l_sidi(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
Details
SIDI = 1 - \sum_{i=1}^{m} P_i^2
where P_i is the proportion of class i and m is the number of classes.
SIDI is a 'Diversity metric'. It is widely used in biodiversity and ecology. It is less sensitive to rare class types than lsm_l_shdi. It can be interpreted as the probability that two randomly selected cells belong to different classes.
Units: None
Range: 0 <= SIDI < 1
Behaviour: Equals SIDI = 0 when only one patch is present and approaches SIDI < 1 when the number of class types increases while the proportions are equally distributed.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1949. Measurement of diversity. Nature 163:688
See Also
lsm_c_pland, lsm_l_pr
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_sidi(landscape)
lsm_l_siei SIEI (landscape level)
Description
Simpson's evenness index (Diversity metric)
Usage
lsm_l_siei(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
Details
SIEI = \frac{1 - \sum_{i=1}^{m} P_i^2}{1 - \frac{1}{m}}
where P_i is the proportion of class i and m is the number of classes.
SIEI is a 'Diversity metric'. The metric is widely used in biodiversity and ecology. It is the ratio between the actual Simpson's diversity index and the theoretical maximum Simpson's diversity index.
Units: None
Range: 0 < SIEI <= 1
Behaviour: Equals SIEI = 0 when only one patch is present and approaches SIEI = 1 when the number of class types increases while the proportions are equally distributed.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 1949. Measurement of diversity. Nature 163:688
See Also
lsm_c_pland, lsm_l_pr
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_siei(landscape)
lsm_l_split SPLIT (landscape level)
Description
Splitting index (Aggregation metric)
Usage
lsm_l_split(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
Details
SPLIT = \frac{A^2}{\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2}
where a_{ij} is the patch area in square meters and A is the total landscape area.
SPLIT is an 'Aggregation metric'. It describes the number of patches that would result if all patches of the landscape were divided into equally sized patches.
Units: None
Range: 1 <= SPLIT <= Number of cells squared
Behaviour: Equals SPLIT = 1 if only one patch is present. Increases as the number of patches increases and is limited when each cell is a separate patch.
Value
tibble
References
<NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org
<NAME>. 2000. Landscape division, splitting index, and effective mesh size: new measures of landscape fragmentation. Landscape ecology, 15(2), 115-130.
See Also
lsm_p_area, lsm_l_ta, lsm_c_split
Examples
landscape <- terra::rast(landscapemetrics::landscape)
lsm_l_split(landscape)
lsm_l_ta TA (landscape level)
Description
Total area (Area and edge metric)
Usage
lsm_l_ta(landscape, directions = 8)
Arguments
landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters.
directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details CA = sum(AREA[patchij ]) where AREA[patchij ] is the area of each patch in hectares. TA is an ’Area and edge metric’. The total (class) area sums the area of all patches in the landscape. It is the area of the observation area. Units: Hectares Range: TA > 0 Behaviour: Approaches TA > 0 if the landscape is small and increases, without limit, as the size of the landscape increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_area, sum, lsm_c_ca Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_ta(landscape) lsm_l_tca TCA (landscape level) Description Total core area (Core area metric) Usage lsm_l_tca(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details n T CA = acore ij ∗( ) where here acore ij is the core area in square meters. TCA is a ’Core area metric’ and equals the sum of core areas of all patches in the landscape. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). In other words, the core area of a patch is all area that is not an edge. It characterises patch areas and shapes of all patches in the landscape simultaneously (more core area when the patch is large and the shape is rather compact, i.e. a square). Additionally, TCA is a measure for the configuration of the landscape, because the sum of edges increase as patches are less aggregated. Units: Hectares Range: TCA >= 0 Behaviour: Increases, without limit, as patch areas increase and patch shapes simplify. TCA = 0 when every cell in every patch is an edge. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_core, lsm_c_tca Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_tca(landscape) lsm_l_te TE (landscape level) Description Total edge (Area and Edge metric) Usage lsm_l_te(landscape, count_boundary = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. count_boundary Include landscape boundary in edge length Details Xm TE = eik where eik is the edge lengths in meters. TE is an ’Area and edge metric’. Total edge includes all edges. It measures the configuration of the landscape because a highly fragmented landscape will have many edges. However, total edge is an absolute measure, making comparisons among land- scapes with different total areas difficult. If count_boundary = TRUE also edges to the landscape boundary are included. Units: Meters Range: TE >= 0 Behaviour: Equals TE = 0 if all cells are edge cells. 
Increases, without limit, as landscape becomes more fragmented Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_perim lsm_l_te Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_l_te(landscape) lsm_p_area AREA (patch level) Description Patch area (Area and edge metric) Usage lsm_p_area(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details AREA = aij ∗ ( ) where aij is the area in square meters. AREA is an ’Area and edge metric’ and equals the area of each patch in hectares. The lower limit of AREA is limited by the resolution of the input raster, i.e. AREA can’t be smaller than the resolution squared (in hectares). It is one of the most basic, but also most important metrics, to characterise a landscape. The metric is the simplest measure of composition. Units: Hectares Range: AREA > 0 Behaviour: Increases, without limit, as the patch size increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_c_area_mn, lsm_c_area_sd, lsm_c_area_cv, lsm_c_ca, lsm_l_area_mn, lsm_l_area_sd, lsm_l_area_cv, lsm_l_ta Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_area(landscape) lsm_p_cai CAI (patch level) Description Core area index (Core area metric) Usage lsm_p_cai(landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details acore ij CAI = ( ) ∗ 100 aij where acore ij is the core area in square meters and aij is the area in square meters. CAI is a ’Core area metric’. It equals the percentage of a patch that is core area. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). It describes patch area and shape simultaneously (more core area when the patch is large and the shape is rather compact, i.e. a square). Because the index is relative, it is comparable among patches with different area. Units: Percent Range: 0 <= CAI <= 100 Behaviour: CAI = 0 when the patch has no core area and approaches CAI = 100 with increasing percentage of core area within a patch. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_core, lsm_p_area, lsm_c_cai_mn, lsm_c_cai_sd, lsm_c_cai_cv, lsm_c_cpland, lsm_l_cai_mn, lsm_l_cai_sd, lsm_l_cai_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_cai(landscape) lsm_p_circle CIRCLE (patch level) Description Related Circumscribing Circle (Shape metric) Usage lsm_p_circle(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details aij CIRCLE = 1 − ( circle ) aij where aij is the area in square meters and acircle ij the area of the smallest circumscribing circle. CIRCLE is a ’Shape metric’. The metric is the ratio between the patch area and the smallest cir- cumscribing circle of the patch. The diameter of the smallest circumscribing circle is the ’diameter’ of the patch connecting the opposing corner points of the two cells that are the furthest away from each other. The metric characterises the compactness of the patch and is comparable among patches with different area. Units: None Range: 0 <= CIRCLE < 1 Behaviour: CIRCLE = 0 for a circular patch and approaches CIRCLE = 1 for a linear patch. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and <NAME>. 1992. The r.le programs for multiscale analysis of landscape structure using the GRASS geographical information system. Landscape Ecology 7: 291-302. Based on C++ code from Project Nayuki (https://www.nayuki.io/page/smallest-enclosing-circle). See Also lsm_p_area, lsm_c_circle_mn, lsm_c_circle_sd, lsm_c_circle_cv, lsm_l_circle_mn, lsm_l_circle_sd, lsm_l_circle_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_circle(landscape) lsm_p_contig CONTIG (patch level) Description Contiguity index (Shape metric) Usage lsm_p_contig(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details " z P # cijr CON T IG = where cijr is the contiguity value for pixel r in patch ij, aij the area of the respective patch (number of cells) and v is the size of the filter matrix (13 in this case). CONTIG is a ’Shape metric’. It asses the spatial connectedness (contiguity) of cells in patches. CONTIG coerces patch values to a value of 1 and the background to NA. A nine cell focal filter matrix: filter_matrix <- matrix(c(1, 2, 1, 2, 1, 2, 1, 2, 1), 3, 3, byrow = T) ... is then used to weight orthogonally contiguous pixels more heavily than diagonally contiguous pixels. Therefore, larger and more connections between patch cells in the rookie case result in larger contiguity index values. Units: None Range: 0 >= CONTIG <= 1 Behaviour: Equals 0 for one-pixel patches and increases to a limit of 1 (fully connected patch). Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1991. 
Assessing patch shape in landscape mosaics. Photogrammetric Engineering and Remote Sensing, 57(3), 285-293 See Also lsm_c_contig_mn, lsm_c_contig_sd, lsm_c_contig_cv, lsm_l_contig_mn, lsm_l_contig_sd, lsm_l_contig_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_contig(landscape) lsm_p_core CORE (patch level) Description Core area (Core area metric) Usage lsm_p_core( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details CORE = acore ij where acore ij is the core area in square meters CORE is a ’Core area metric’ and equals the area within a patch that is not on the edge of it. A cell is defined as core area if the cell has no neighbour with a different value than itself (rook’s case). It describes patch area and shape simultaneously (more core area when the patch is large and the shape is rather compact, i.e. a square). Units: Hectares Range: CORE >= 0 Behaviour: Increases, without limit, as the patch area increases and the patch shape simplifies (more core area). CORE = 0 when every cell in the patch is an edge. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_c_core_mn, lsm_c_core_sd, lsm_c_core_cv, lsm_c_tca, lsm_l_core_mn, lsm_l_core_sd, lsm_l_core_cv, lsm_l_tca Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_core(landscape) lsm_p_enn ENN (patch level) Description Euclidean Nearest-Neighbor Distance (Aggregation metric) Usage lsm_p_enn(landscape, directions = 8, verbose = TRUE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). verbose Print warning message if not sufficient patches are present Details EN N = hij where hij is the distance to the nearest neighbouring patch of the same class i in meters ENN is an ’Aggregation metric’. The distance to the nearest neighbouring patch of the same class i. The distance is measured from edge-to-edge. The range is limited by the cell resolution on the lower limit and the landscape extent on the upper limit. The metric is a simple way to describe patch isolation. Units: Meters Range: ENN > 0 Behaviour: Approaches ENN = 0 as the distance to the nearest neighbour decreases, i.e. patches of the same class i are more aggregated. Increases, without limit, as the distance between neigh- bouring patches of the same class i increases, i.e. patches are more isolated. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., and Mc<NAME>. (1995). Relationships between landscape structure and breed- ing birds in the Oregon Coast Range. Ecological monographs, 65(3), 235-260. 
See Also lsm_c_enn_mn, lsm_c_enn_sd, lsm_c_enn_cv, lsm_l_enn_mn, lsm_l_enn_sd, lsm_l_enn_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_enn(landscape) lsm_p_frac FRAC (patch level) Description Fractal dimension index (Shape metric) Usage lsm_p_frac(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details F RAC = ln aij where pij is the perimeter in meters and aij is the area in square meters FRAC is a ’Shape metric’. The index is based on the patch perimeter and the patch area and describes the patch complexity. Because it is standardized, it is scale independent, meaning that increasing the patch size while not changing the patch form will not change the ratio. Units: None Range: 1 <= FRAC <= 2 Behaviour: Approaches FRAC = 1 for a squared patch shape form and FRAC = 2 for a irregular patch shape. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1977. Fractals: Form, Chance, and Dimension. San Francisco. W. H. Freeman and Company. See Also lsm_p_area, lsm_p_perim, lsm_c_frac_mn, lsm_c_frac_sd, lsm_c_frac_cv, lsm_l_frac_mn, lsm_l_frac_sd, lsm_l_frac_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_frac(landscape) lsm_p_gyrate GYRATE (patch level) Description Radius of Gyration (Area and edge metric) Usage lsm_p_gyrate(landscape, directions = 8, cell_center = FALSE) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). cell_center If true, the coordinates of the centroid are forced to be a cell center within the patch. Details z X hijr GY RAT E = z where hijr is the distance from each cell to the centroid of the patch and z is the number of cells. GYRATE is an ’Area and edge metric’. The distance from each cell to the patch centroid is based on cell center to centroid distances. The metric characterises both the patch area and compactness. If cell_center = TRUE some patches might have several possible cell-center centroids. In this case, the gyrate index is based on the mean distance of all cells to all possible cell-center centroids. Units: Meters Range: GYRATE >= 0 Behaviour: Approaches GYRATE = 0 if patch is a single cell. Increases, without limit, when only one patch is present. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>., <NAME>., & <NAME>. 1997. Detecting critical scales in fragmented landscapes. Conservation ecology, 1(1). 
See Also lsm_c_gyrate_mn, lsm_c_gyrate_sd, lsm_c_gyrate_cv, lsm_l_gyrate_mn, lsm_l_gyrate_sd, lsm_l_gyrate_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_gyrate(landscape) lsm_p_ncore NCORE (patch level) Description Number of core areas (Core area metric) Usage lsm_p_ncore( landscape, directions = 8, consider_boundary = FALSE, edge_depth = 1 ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell #’ @details N CORE = ncore ij where ncore ij is the number of disjunct core areas. NCORE is a ’Core area metric’. A cell is defined as core if the cell has no neighbour with a different value than itself (rook’s case). The metric counts the disjunct core areas, whereby a core area is a ’patch within the patch’ containing only core cells. It describes patch area and shape simultaneously (more core area when the patch is large, however, the shape must allow disjunct core areas). Thereby, a compact shape (e.g. a square) will contain less disjunct core areas than a more irregular patch. Units: None Range: NCORE >= 0 Behaviour: NCORE = 0 when CORE = 0, i.e. every cell in patch is edge. Increases, without limit, as core area increases and patch shape allows disjunct core areas (i.e. patch shape becomes rather complex). Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_c_dcore_mn, lsm_c_dcore_sd, lsm_c_dcore_cv, lsm_c_ndca, lsm_l_dcore_mn, lsm_l_dcore_sd, lsm_l_dcore_cv, lsm_l_ndca Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_ncore(landscape) lsm_p_para PARA (patch level) Description Perimeter-Area ratio (Shape metric) Usage lsm_p_para(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details pij P ARA = aij where pij is the perimeter in meters and aij is the area in square meters. PARA is a ’Shape metric’. It describes the patch complexity in a straightforward way. However, because it is not standarised to a certain shape (e.g. a square), it is not scale independent, meaning that increasing the patch size while not changing the patch form will change the ratio. Units: None Range: PARA > 0 Behaviour: Increases, without limit, as the shape complexity increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. 
Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also lsm_p_area, lsm_p_perim, lsm_c_para_mn, lsm_c_para_sd, lsm_c_para_cv, lsm_l_para_mn, lsm_l_para_sd, lsm_l_para_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_para(landscape) lsm_p_perim PERIM (patch level) Description Perimeter (Area and edge metric) Usage lsm_p_perim(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details P ERIM = pij where pij is the perimeter in meters. PERIM is an ’Area and edge metric’. It equals the perimeter of the patch including also the edge to the landscape boundary. The metric describes patch area (larger perimeter for larger patches), but also patch shape (large perimeter for irregular shapes). Units: Meters Range: PERIM > 0 Behaviour: Increases, without limit, as patch size and complexity increases. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_perim(landscape) lsm_p_shape SHAPE (patch level) Description Shape index (Shape metric) Usage lsm_p_shape(landscape, directions = 8) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). Details SHAP E = √ aij where pij is the perimeter (m) and aij is the area (m2). SHAPE is a ’Shape metric’. It describes the ratio between the actual perimeter of the patch and the square root of patch area and thus adjusting for a square standard. Thus, it is a simple measure of shape complexity. Units: None Range: SHAPE >= 1 Behaviour: Equals SHAPE = 1 for a squared patch and increases, without limit, as the patch shape becomes more complex. Value tibble References <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org <NAME>. 1975. A diversity index for quantifying habitat "edge". Wildl. Soc.Bull. 3:171-173. See Also lsm_p_perim, lsm_p_area, lsm_c_shape_mn, lsm_c_shape_sd, lsm_c_shape_cv, lsm_l_shape_mn, lsm_l_shape_sd, lsm_l_shape_cv Examples landscape <- terra::rast(landscapemetrics::landscape) lsm_p_shape(landscape) options_landscapemetrics options_landscapemetrics Description Sets global options for landscapemetrics Usage options_landscapemetrics(to_disk = NULL) Arguments to_disk Logical argument, if FALSE results of get_patches are hold in memory. If true, get_patches writes temporary files and hence, does not hold everything in mem- ory. Can be set with a global option, e.g. options(to_disk = TRUE). See De- tails. Details Landscape metrics rely on the delineation of patches. Hence, get_patches is heavily used in landscapemetrics. As raster can be quite big, the fact that get_patches creates a copy of the raster for each class in a landscape becomes a burden for computer memory. 
Hence, the argument to_disk allows to store the results of the connected labeling algorithm on disk. Furthermore, this option can be set globally, so that every function that internally uses get_patches can make use of that. Value Global option to be used internally in the package podlasie_ccilc Podlasie ESA CCI LC Description A real landscape of the Podlasie region in Poland from the ESA CCI Land Cover Usage podlasie_ccilc Format A raster object. Source http://maps.elie.ucl.ac.be/CCI/viewer/ sample_lsm sample_lsm Description Sample metrics Usage sample_lsm( landscape, y, plot_id = NULL, shape = "square", size = NULL, all_classes = FALSE, return_raster = FALSE, verbose = TRUE, progress = FALSE, ... ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. y 2-column matrix with coordinates or sf point geometries. plot_id Vector with id of sample points. If not provided, sample points will be labelled 1...n. shape String specifying plot shape. Either "circle" or "square" size Approximated size of sample plot. Equals the radius for circles or half of the side-length for squares in map units. For lines size equals the width of the buffer. all_classes Logical if NA should be returned for classes not present in some sample plots. return_raster Logical if the clipped raster of the sample plot should be returned verbose Print warning messages. progress Print progress report. ... Arguments passed on to calculate_lsm(). Details This function samples the selected metrics in a buffer area (sample plot) around sample points, sample lines or within provided polygons. The size of the actual sampled landscape can be different to the provided size due to two reasons. Firstly, because clipping raster cells using a circle or a sample plot not directly at a cell center lead to inaccuracies. Secondly, sample plots can exceed the landscape boundary. Therefore, we report the actual clipped sample plot area relative in relation to the theoretical, maximum sample plot area e.g. a sample plot only half within the landscape will have a percentage_inside = 50. Please be aware that the output is slightly different to all other lsm-function of landscapemetrics. Please be aware that the function behaves differently for POLYGONS and MULTIPOLYGONS. In the first case, each polygon is used as a singular sample area, while in the second case all polygons are used as one sample area. The metrics can be specified by the arguments what, level, metric, name and/or type (combina- tions of different arguments are possible (e.g. level = "class", type = "aggregation metric"). If an argument is not provided, automatically all possibilities are selected. Therefore, to get all available metrics, don’t specify any of the above arguments. Value tibble See Also list_lsm calculate_lsm Examples landscape <- terra::rast(landscapemetrics::landscape) # use a matrix sample_points <- matrix(c(10, 5, 25, 15, 5, 25), ncol = 2, byrow = TRUE) sample_lsm(landscape, y = sample_points, size = 15, what = "lsm_l_np") show_cores Show core area Description Show core area Usage show_cores( landscape, directions = 8, class = "all", labels = FALSE, nrow = NULL, ncol = NULL, consider_boundary = TRUE, edge_depth = 1 ) Arguments landscape Raster object directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). 
class How to show the core area: "global" (single map), "all" (every class as facet), or a vector with the specific classes one wants to show (every selected class as facet). labels Logical flag indicating whether to print or not to print core labels. boundary should be considered as core nrow, ncol Number of rows and columns for the facet. consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core. edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell Details The functions plots the core area of patches labeled with the corresponding patch id. The edges are the grey cells surrounding the patches and are always shown. Value ggplot Examples landscape <- terra::rast(landscapemetrics::landscape) # show "global" core area show_cores(landscape, class = "global", labels = FALSE) # show the core area of every class as facet show_cores(landscape, class = "all", labels = FALSE) # show only the core area of class 1 and 3 show_cores(landscape, class = c(1, 3), labels = TRUE) show_correlation Show correlation Description Show correlation Usage show_correlation( data, method = "pearson", diag = TRUE, labels = FALSE, vjust = 0, text_size = 15 ) Arguments data Tibble with results of as returned by the landscapemetrics package. method Type of correlation. See link{cor} for details. diag If FALSE, values on the diagonal will be NA and not plotted. labels If TRUE, the correlation value will be added as text. vjust Will be passed on to ggplot2 as vertical justification of x-axis text. text_size Text size of the plot. Details The functions calculates the correlation between all metrics. In order to calculate correlations, for the landscape level more than one landscape needs to be present. All input must be structured as returned by the landscapemetrics package. Value ggplot Examples landscape <- terra::rast(landscapemetrics::landscape) metrics <- calculate_lsm(landscape, what = c("patch", "class")) show_correlation(data = metrics, method = "pearson") ## Not run: metrics <- calculate_lsm(landscape, what = c("patch", "class"))#' correlations <- calculate_correlation(metrics) show_correlation(data = correlations, method = "pearson") ## End(Not run) show_lsm Show landscape metrics Description Show landscape metrics on patch level printed in their corresponding patch. Usage show_lsm( landscape, what, class = "global", directions = 8, consider_boundary = FALSE, edge_depth = 1, labels = FALSE, label_lsm = FALSE, nrow = NULL, ncol = NULL ) Arguments landscape *Raster object what Patch level what to plot class How to show the labeled patches: "global" (single map), "all" (every class as facet), or a vector with the specific classes one wants to show (every selected class as facet). directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). consider_boundary Logical if cells that only neighbour the landscape boundary should be consid- ered as core edge_depth Distance (in cells) a cell has the be away from the patch edge to be considered as core cell labels Logical flag indicating whether to print or not to print patch labels. label_lsm If true, the value of the landscape metric is used as label nrow, ncol Number of rows and columns for the facet. Details The function plots all patches with a fill corresponding to the value of the chosen landscape metric on patch level. 
Value ggplot Examples landscape <- terra::rast(landscapemetrics::landscape) show_lsm(landscape, what = "lsm_p_area", directions = 4) show_lsm(landscape, what = "lsm_p_shape", class = c(1, 2), label_lsm = TRUE) show_lsm(landscape, what = "lsm_p_circle", class = 3, labels = TRUE) show_patches Show patches Description Show patches Usage show_patches( landscape, class = "global", directions = 8, labels = FALSE, nrow = NULL, ncol = NULL ) Arguments landscape *Raster object class How to show the labeled patches: "global" (single map), "all" (every class as facet), or a vector with the specific classes one wants to show (every selected class as facet). directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). labels Logical flag indicating whether to print or not to print patch labels. nrow, ncol Number of rows and columns for the facet. Details The functions plots the landscape with the patches labeled with the corresponding patch id. Value ggplot Examples landscape <- terra::rast(landscapemetrics::landscape) show_patches(landscape) show_patches(landscape, class = c(1, 2)) show_patches(landscape, class = 3, labels = FALSE) spatialize_lsm spatialize_lsm Description Spatialize landscape metric values Usage spatialize_lsm( landscape, level = "patch", metric = NULL, name = NULL, type = NULL, what = NULL, directions = 8, progress = FALSE, to_disk = getOption("to_disk", default = FALSE), ... ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. level Level of metrics. Either ’patch’, ’class’ or ’landscape’ (or vector with combina- tion). metric Abbreviation of metrics (e.g. ’area’). name Full name of metrics (e.g. ’core area’) type Type according to FRAGSTATS grouping (e.g. ’aggregation metrics’). what Selected level of metrics: either "patch", "class" or "landscape". It is also pos- sible to specify functions as a vector of strings, e.g. what = c("lsm_c_ca", "lsm_l_ta"). directions The number of directions in which patches should be connected: 4 (rook’s case) or 8 (queen’s case). progress Print progress report. to_disk If TRUE raster will be saved to disk. ... Arguments passed on to calculate_lsm(). Details The functions returns a nested list with RasterLayers. The first level contains each input layer (only one element if RasterLayer was provided). The second level contains a RasterLayer for each selected metric (see list_lsm for details) where each cell has the landscape metric value of the patch it belongs to. Only patch level metrics are allowed. Value list See Also list_lsm show_lsm Examples landscape <- terra::rast(landscapemetrics::landscape) spatialize_lsm(landscape, what = "lsm_p_area") window_lsm window_lsm Description Moving window Usage window_lsm( landscape, window, level = "landscape", metric = NULL, name = NULL, type = NULL, what = NULL, progress = FALSE, ... ) Arguments landscape A categorical raster object: SpatRaster; Raster* Layer, Stack, Brick; stars or a list of SpatRasters. window Moving window matrix. level Level of metrics. Either ’patch’, ’class’ or ’landscape’ (or vector with combina- tion). metric Abbreviation of metrics (e.g. ’area’). name Full name of metrics (e.g. ’core area’) type Type according to FRAGSTATS grouping (e.g. ’aggregation metrics’). what Selected level of metrics: either "patch", "class" or "landscape". It is also pos- sible to specify functions as a vector of strings, e.g. what = c("lsm_c_ca", "lsm_l_ta"). 
progress Print progress report. ... Arguments passed on to calculate_lsm(). Details The function calculates for each focal cell the selected landscape metrics (currently only landscape level metrics are allowed) for a local neighbourhood. The neighbourhood can be specified using a matrix. For more details, see ?terra::focal(). The result will be a RasterLayer in which each focal cell includes the value of its neighbourhood and thereby allows to show gradients and variability in the landscape (Hagen-Zanker 2016). To be type stable, the actual result is always a nested list (first level for RasterStack layers, second level for selected landscape metrics). Value list References <NAME>., <NAME>. 2018. Spatial Ecology and Conservation Modeling: Applications with R. Springer International Publishing. 523 pages Hagen-Z<NAME>. (2016). A computational framework for generalized moving windows and its application to landscape pattern analysis. International journal of applied earth observation and geoinformation, 44, 205-216. <NAME>., <NAME>, and <NAME>. 2023. FRAGSTATS v4: Spatial Pattern Analysis Pro- gram for Categorical Maps. Computer software program produced by the authors; available at the following web site: https://www.fragstats.org See Also list_lsm calculate_lsm focal Examples ## Not run: landscape <- terra::rast(landscapemetrics::landscape) landscape_stack <- c(landscape, landscape) window <- matrix(1, nrow = 5,ncol = 5) window_lsm(landscape, window = window, what = c("lsm_l_pr", "lsm_l_joinent")) window_lsm(landscape_stack, window = window, what = c("lsm_l_pr", "lsm_l_joinent")) window_circular <- matrix(c(NA, 1, NA, 1, 1, 1, NA, 1, NA), nrow = 3, ncol = 3) window_lsm(landscape, window = window_circular, what = c("lsm_l_pr", "lsm_l_joinent")) ## End(Not run)
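Since spatialize_lsm and window_lsm return nested lists rather than a single raster, the following minimal sketch shows how one result layer can be extracted and plotted. It assumes the landscapemetrics and terra packages and a package version that returns SpatRaster objects, as in the examples above; object names and indexing by position are illustrative only.
library(landscapemetrics)
library(terra)
landscape <- terra::rast(landscapemetrics::landscape)
window <- matrix(1, nrow = 5, ncol = 5)
# Nested list: first level = input layers, second level = selected metrics
result <- window_lsm(landscape, window = window, what = "lsm_l_pr")
# Pick the first input layer and the first (and only) selected metric
pr_raster <- result[[1]][[1]]
plot(pr_raster)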
LATEX2e SVMult Document Class Reference Guide for Contributed Books c 2004, Springer Verlag Heidelberg  All rights reserved. June 9, 2004 Contents 1 Introduction 2 2 Basic SVMult Class Features 2.1 Initializing the Class ... 2.2 New Class Options ... 2.3 Required Packages ... 2.4 The Contributions Header ... 2.5 New Commands in Text Mode ... 2.6 New Commands in Math Mode ... 2.7 New Built-in Theorem-Like Environments . . . . . . . . . . . . . 2.8 New Commands for the Figure Environment . . . . . . . . . . . 3 3 3 5 7 8 9 9 10 3 More Advanced Tips and Tricks 11 3.1 Packages for Typesetting Mathematics . . . . . . . . . . . . . . . 11 3.2 Enhanced Figure and Table Environment . . . . . . . . . . . . . 11 3.3 Enhanced Denitions for Theorem-Like Environments . . . . . . 12 3.4 References ... 13 4 Editors Section 14 4.1 Book Layout ... 14 4.2 Preface and the Like ... 14 4.3 Table of Contents ... 14 4.4 List of Contributors ... 15 4.5 Appendix ... 15 4.6 Index(es) ... 16 References 16 1 1 Introduction This reference guide gives a detailed description of the SVMult LATEX 2 document class and its special features designed to facilitate the preparation of scientic contributed books for Springer Verlag. It always comes as part of the SVMult tool package and should not be used on its own. The components of the SVMult tool package are: the Springer LATEX class SVMult.cls and - if applicable - further Springer styles as well as the templates with preset class options and packages as well as coding examples; Tip: Copy all of them to your working directory, run LATEX 2 and produce your own example *.dvi le; rename the template les as you see t and use them for your own input. Author Instructions with style and coding instructions specic to the subject area or book series you are writing for; Follow these instructions to set up your les, to type in your text and to obtain a consistent formal style and use the pages as checklists before you submit your ready-to-print manuscript. the Reference Guide describing all possible SVMult features independent of any specic style requirements. Tip: Use it as a reference if you need to alter or enhance the default settings provided by the SVMult document class and the templates. For editors only the SVMult tool package is enhanced by the editor instructions for compiling multiple contributions to a mutual book. The documentation in the Springer SVMult tool package is not intended to be a general introduction to LATEX 2 or TEX. For this we refer you to [1, 2, 3]. Should we refer in this tool package to standard tools or packages that are not installed on your system, please consult the Comprehensive TEX Archive Network (CTAN) at [4, 5, 6]. SVMult was derived from the LATEX 2 article.cls. Should you encounter any problems or bugs in the SVMult document class please contact <EMAIL>. 2 The main dierences from the standard article class are the presence of multiple Springer class options, a number of newly built-in environments for individual text structures like theorems, exercises, lemmas, proofs, etc., enhanced environments for the layout of gures and captions, and new declarations, commands and useful enhancements of standard environments to facilitate your math and text input and to ensure their output conforms with Springer layout standards. Nevertheless, text, formulae, gures, and tables are typed using the standard LATEX 2 commands. The standard sectioning commands are also used. Always give a \label where possible and use \ref for cross-referencing. 
Such cross-references may then be converted to hyperlinks in any electronic version of your book. The \cite and \bibitem mechanism for bibliographic references is also obligatory. 2 Basic SVMult Class Features 2.1 Initializing the Class To use the document class, enter \documentclass [options] {svmult} at the beginning of your input. 2.2 New Class Options Choose from the following list of class options if - in conjunction with the editor of your book - you decide to alter the default layout settings of the Springer SVMult document class. Page Layout Default: horizontal line above rst level heading, all headings are displayed except for subparagraph headings, rst level items of a list start with a bullet. multhd removes horizontal line, allows inline headings on subsubsection and paragraph level, rst level list items start with a hyphen multphys removes horizontal line, rst level list items start with a hyphen sechang indents second and subsequent lines in a multiline heading 3 Page Style Default: twoside, single-spaced output, contributions starting always on a recto page referee footinfo norunningheads produces double-spaced output for proofreading generates a footline with name, date, ... at the bottom of each page suppresses any headers and footers N.B. If you want to use both options, you must type referee before footinfo. Font Size Default: 10 pt 11pt, 12pt are ignored Language for Fixed LATEX Texts. In the SVMult class we have changed a few standard LATEX texts (e.g. Figure to Fig. in gure captions) and assigned names to newly dened theorem-like environments so that they conform with Springer style requirements. The default language is English. deutsch francais translates xed LATEX texts into their German equivalent same as above for French Equations Style eqn vecphys vecarrow Default: centered layout sets equations (and short gure and table captions) ushleft produces boldface italic vectors when \vec-command is used depicts vectors with an arrow above when \vec-command is used Numbering and Counting of Built-in Theorem-Like Environments For a list of built-in theorem-like environments refer to Sect. 2.7. default setting each built-in theorem-like environment gets its own counter without any chapter or section prex and is counted consecutively throughout the book envcountsame all built-in environments follow a single counter without any chapter or section prex, and are counted consecutively throughout the book envcountchap each built-in environment gets its own counter and is numbered chapterwise envcountsect each built-in environment gets its own counter and is numbered sectionwise envcountresetchap each built-in environment gets its own counter without any chapter or section prex but with the counter reset for each chapter 4 envcountresetsect each built-in environment gets its own counter without any chapter or section prex but with the counter reset for each section N.B.1 When the option envcountsame is combined with the options envcountresetchap or envcountresetsect all predened Springer environments get the same counter; but the counter is reset for each chapter or section. N.B.2 When the option envcountsame is combined with the options envcountchap or envcountsect all predened Springer environments get a common counter with a chapter or section prex; but the counter is reset for each chapter or section. N.B.3 We have designed a new easy-to-use mechanism to dene your own environments, see Sect. 3.3. 
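For illustration only, a hypothetical call combining several of the options described above might read as follows (remember that referee must be typed before footinfo, and always agree the final option set with your editor):
\documentclass[referee,footinfo,envcountchap,sechang]{svmult}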
Use the Springer class option nospthms only if you want to suppress all Springer theorem-like environments and use the theorem environments of original LATEX package or other theorem packages instead. (Please check this with your editor.) References By default, the list of references is set as an unnumbered section at the end of your contribution, the running head is set lower case, the list itself is set in small print and numbered with ordinal numbers. chaprefs natbib sets the reference list as an unnumbered chapter e.g. at the end of the book sorts reference entries in the author-year system (make sure that you have the natbib package by <NAME> installed. Otherwise it can be found at the Comprehensive TEX Archive Network (CTAN...texarchive/macros/latex/contrib/supported/natbib/), see [4, 5, 6] Use the Springer class option oribibl 2.3 only if you want to set reference numbers in square brackets without automatic TOC entry etc., as is the case in the original LATEX bibliography environment. But please note that most page layout features are nevertheless adjusted to Springer requirements. (Please check usage of this option with your editor.) Required Packages SVMult document class has been tested with a number of Standard LATEX tools. Below we list and comment on a selection of recommended packages for preparing fully formatted book manuscripts for Springer Verlag. Refer to Sect. 3 5 for a list of other useful, but not essential, standard packages. If not installed on your system, the source of all standard LATEX tools and packages is the Comprehensive TEX Archive Network (CTAN) at [4, 5, 6]. Book Layout For some book series or subject areas Springer Verlag provides specic styles. Please check your author instructions, Sect. Required Packages, for more details. Footnotes footmisc.sty used with style option [bottom] places all footnotes at the bottom of the page Figures graphics.sty or graphicx.sty powerful tool for including, rotating, scaling and sizing graphics les (preferrably eps les) References cite.sty generates compressed, sorted lists of numerical citations: e.g. [8,1116]; preferred style for books published in a print version only Index makeidx.sty provides and interprets the command \printindex which prints the externally generated index le *.ind. multicol.sty balances out multiple columns on the last page of your subject index, glossary or the like N.B. Use the MakeIndex program together with one of the Springer styles svind.ist for English texts to generate a subject index automatically in accordance with Springer layout requirements. For a detailed documentation of the program and its usage we refer you to [1] 6 2.4 The Contributions Header Use the command \title*{} to typeset an unnumbered heading of your contribution. \title{} to typeset a numbered heading of your contribution. Use the command \toctitle{} if you want to alter the line break of your heading for the table of content. Use the command \titlerunning{} if you need to abbreviate your heading to t into the running head. Use the command \author{} for your name(s). Always give your rst name in full. If there is more than one author, the names should be separated by \and. Use the command \inst{n} to assign your aliation as specied in the \institute command (see below). Use the command \tocauthor{} to change manually the list of authors to appear in the table of contents. Use the command \authorrunning{} if there are more than two authors; abbreviate the list of authors to the main authors name and add et al. 
for the running head. Use the command \institute{} for your aliation(s). Please be sure to include your e-mail address here. 7 If there is more than one aliation, they should be separated by \and. Use the command \maketitle to compile the header of your contribution. To format an abstract enter \begin{abstract} Text of Abstract \end{abstract} Use the command \keywords{keyword list} within the abstract environment to specify your keywords and/or subject classication. To create and format a short table of contents enter prior to the command \dominitoc, see below \setcounter{minitocdepth}{n} with n depicting the highest sectioning level of your short table of content (default is 0) and then enter \dominitoc To format the list of abbreviations and symbols \begin{abbrsymblist}[widest acronym or symbol ] \item[acronym or symbol ] full word or explanation \end{abbrsymblist} 2.5 New Commands in Text Mode Use the new environment command \begin{petit} text \end{petit} to typeset complete paragraphs in small print. Use the enhanced environment command 8 \begin{description}[largelabel ] \item[label1 ] text1  \item[label2 ] text2  \end{description} for your individual itemized lists. The new optional parameter [largelabel ] lets you specify the largest item label to appear within the list. The texts of all items are indented by the width of largelabel  and the item labels are typeset ush left within this space. Note, the optional parameter will work only two levels deep. 2.6 New Commands in Math Mode Use the new or enhanced symbol commands provided by the SVMult document class: \D \I \E \tens \vec upright d for dierential d upright i for imaginary unit upright e for exponential function depicts tensors as sans serif upright depicts vectors as boldface characters instead of the arrow accent N.B. By default the SVMult document class depicts Greek letters as italics because they are mostly used to symbolize variables. However, when used as operators, abbreviations, physical units, etc. they should be set upright. All upright upper-case Greek letters have been dened in the SVMult document class and are taken from the TEX alphabet. Use the command prex \var... with the upper-case name of the Greek letter to set it upright, e.g. \varDelta. Many upright lower-case Greek letters have been dened in the SVMult document class and are taken from the PostScript Symbol font. Use the command prex \u... with the lower-case name of the Greek letter to set it upright, e.g. \umu. 2.7 New Built-in Theorem-Like Environments For individual text structures such as theorems, denitions, and examples, the SVMult document class provides a number of predened environments which conform with the specic Springer layout requirements. 9 Use the environment command \begin{name of environment}[optional material ] text for that environment \end{name of environment} for the newly dened environments. Unnumbered environments will be produced by claim and proof. Numbered environments will be produced by case, conjecture, corollary, definition, example, exercise, lemma, note, problem, property, proposition, question, remark, solution, and theorem. The optional argument [optional material ] lets you specify additional text which will follow the environment caption and counter. N.B. We have designed a new easy-to-use mechanism to dene your own environments, refer to Sect. 3.3. Use the new symbol command \qed to produce an empty square at the end of your proof. 
In addition, use the new declaration \smartqed to move the position of the predened qed symbol to be ush right (in text mode). If you want to use this feature throughout your book the declaration must be set in the preamble, otherwise it should be used individually in the relevant environment, i.e. proof. 2.8 New Commands for the Figure Environment Use the new declaration \sidecaption[pos] to move the gure caption from beneath the gure (default) to the lower righthand side of the gure. The optional parameter [t] moves the gure caption to the upper right-hand side of the gure N.B. (1) Make sure the declaration \sidecaption follows the \begin{figure} command, and (2) remember to use the standard \caption{} command for your caption text. 10 3 More Advanced Tips and Tricks If the structuring and formatting of your manuscript needs more attention you may nd some useful hints for this in the sections below. Further to the packages listed in Sect.2.3, SVMult document class has been tested with the following style les. 3.1 Packages for Typesetting Mathematics A useful package for subnumbering each line of an equation array can be found at ../tex-archive/macros/latex/contrib/supported/subeqnarray/ at the Comprehensive TEX Archive Network (CTAN), see [4, 5, 6]. subeqnarray.sty denes the subeqnarray and subeqnarray* environments, which behave like the equivalent eqnarray and eqnarray* environments, except that the individual lines are numbered as 1a, 1b, 1c, etc. 3.2 Enhanced Figure and Table Environment Use the new declaration \samenumber within the gure environment directly after the \begin{figure} command to give the caption concerned the same counter as its predecessor (useful for long tables or gures spanning more than one page, see also the declaration \subfigures below. To arrange multiple gures in a single environment use the newly dened commands \leftfigure[pos] and \rightfigure[pos] within a {minipage}{\textwidth} environment. To allow enough space between two horizontally arranged gures use \hspace{\fill} to separate the corresponding \includegraphics{} commands . The required space between vertically arranged gures can be controlled with \\[12pt], for example. The default position of the gures within their predened space is ush left. The optional parameter [c] centers the gure, whereas [r] positions it ush right use the optional parameter only if you need to specify a position other than ush left. Use the newly dened commands 11 \leftcaption{} and \rightcaption{} outside the minipage environment to put two gure captions next to each other. Use the newly dened command \twocaptionwidth{width}{width} to overrule the default horizontal space of 5.4 cm provided for each of the abovedescribed caption commands. The rst argument corresponds to \leftcaption and the latter to \rightcaption. Use the new declaration \subfigures within the gure environment directly after the \begin{figure} command to subnumber multiple captions within a single gure-environment alphabetically. N.B.: When used in combination with \samenumber the main counter remains the same and the alphabetical subnumbering is continued. It works properly only when you stick to the sequence \samenumber\subfigures. If you do not include your gures as electronic les use the newly dened command \mpicplace{width}{height} to leave the desired amount of space for each gure. This command draws a vertical line of the height you specied. 
3.3 Enhanced Definitions for Theorem-Like Environments

In the SVMult document class the functions of the standard \newtheorem command have been enhanced to allow a more flexible font selection. All standard functions though remain intact (e.g. adding an optional argument specifying additional text after the environment counter).

Use the new Springer mechanism

\spdefaulttheorem{env name}{caption}{cap font}{body font}

to define an environment compliant with the selected class options (see Sect. 2.2) and designed as the predefined Springer theorem-like environments. The argument {env name} specifies the environment name; {caption} specifies the environment's heading; {cap font} and {body font} specify the font shape of the caption and the text body. N.B. If you want to use optional arguments in your definition of a new theorem-like environment as done in the standard \newtheorem command, see below.

Use the new Springer mechanism

\spnewtheorem{env name}[numbered like]{caption}{cap font}{body font}

to define an environment that shares its counter with another predefined environment [numbered like]. The optional argument [numbered like] specifies the environment with which to share the counter. N.B. If you select the class option envcountsame the only valid numbered like argument is [theorem].

Use the newly defined Springer mechanism

\spnewtheorem{env name}{caption}[within]{cap font}{body font}

to define an environment whose counter is prefixed by either the chapter or section number (use [chapter] or [section] for [within]).

Use the newly defined declaration \nocaption in the argument {caption} if you want to skip the environment caption and use an environment counter only.

Use the newly defined environment \begin{theopargself} ... \end{theopargself} as a wrapper to any theorem-like environment defined with the Springer mechanism. It suppresses the brackets of the optional argument specifying additional text after the environment counter.

3.4 References

The style natbib.sty sorts reference entries in the author-year system (among other features). N.B. This style must be installed when the class option natbib is used, see Sect. 2.2.

The Springer command \biblstarthook{text} allows the inclusion of explanatory text between the bibliography heading and the actual list of references. The command must be placed before the thebibliography environment.

4 Editors Section

Please refer to the Editor Instructions for details on how to compile all contributions into a single book. In addition to these instructions and the details described in the previous sections of this reference guide you find below a list of further Springer class options, declarations and commands which you may find especially useful for editing your Contributed Book.

4.1 Book Layout

Choose the Springer class option openany to allow contributions to start indifferently on both recto and verso pages.

4.2 Preface and the Like

Use the Springer new command \preface[althead] to typeset the heading of your preface or any other unnumbered chapter (with automatically generated running heads, but without automatic TOC entry). The default heading text is Preface. If you choose a language class option, it will automatically be translated. In the optional argument [althead], alternative headings (e.g. Foreword) may be indicated.

4.3 Table of Contents

Use the command \setcounter{tocdepth}{number} to alter the numerical depth of your table of contents.
Use the macro \calctocindent to recalculate the horizontal spacing for large section numbers in the table of contents, set with the following variables:

\tocchpnum       chapter number
\tocsecnum       section number
\tocsubsecnum    subsection number
\tocsubsubsecnum subsubsection number
\tocparanum      paragraph number

Set the sizes of the variables concerned to the maximum numbering appearing in the current document. In the preamble set, e.g.:

\settowidth{\tocchpnum}{36.\enspace}
\settowidth{\tocsecnum}{36.10\enspace}
\settowidth{\tocsubsecnum}{99.88.77}
\calctocindent

4.4 List of Contributors

Use the new environment command

\begin{thecontriblist}
Author 1 Information
\and
Author 2 Information
\end{thecontriblist}

to create a list of contributors. Please note that this environment makes use of the obeylines function, so ideally you follow the input example given below for your author information:

\textbf{Author Name}
University/Institute Name
Street No. X - Place, Postal Code
\texttt{name@e-mail.*}

4.5 Appendix

Use the declaration \appendix after the \backmatter command to add an appendix at the end of the book. Use the \chapter command to typeset the heading.

4.6 Index(es)

The Springer declaration \threecolindex allows the next index following the \threecolindex declaration to be set in three columns.

The Springer declaration \indexstarthook{text} allows the inclusion of explanatory text between the index heading and the actual index. The command must be placed before the theindex environment.

References

[1] L. Lamport: LaTeX: A Document Preparation System, 2nd ed. (Addison-Wesley, Reading, MA 1994)
[2] <NAME>, <NAME>, <NAME>: The LaTeX Companion (Addison-Wesley, Reading, MA 1994)
[3] <NAME>: The TeXbook (Addison-Wesley, Reading, MA 1986), revised to cover TeX3 (1991)
[4] TeX Users Group (TUG), http://www.tug.org
[5] Deutschsprachige Anwendervereinigung TeX e.V. (DANTE), Heidelberg, Germany, http://www.dante.de
[6] UK TeX Users Group (UK-TuG), http://uk.tug.org
form_urlencoded
rust
Rust
Crate form_urlencoded
===
Parser and serializer for the `application/x-www-form-urlencoded` syntax, as used by HTML forms. Converts between a string (such as a URL’s query string) and a sequence of (name, value) pairs.

Structs
---
* `ByteSerialize`: Return value of `byte_serialize()`.
* `Parse`: The return type of `parse()`.
* `ParseIntoOwned`: Like `Parse`, but yields pairs of `String` instead of pairs of `Cow<str>`.
* `Serializer`: The `application/x-www-form-urlencoded` serializer.

Traits
---
* `Target`

Functions
---
* `byte_serialize`: The `application/x-www-form-urlencoded` byte serializer.
* `parse`: Convert a byte string in the `application/x-www-form-urlencoded` syntax into an iterator of (name, value) pairs.

Type Definitions
---
* `EncodingOverride`

Struct form_urlencoded::ByteSerialize
===
``` pub struct ByteSerialize<'a> { /* private fields */ } ``` Return value of `byte_serialize()`. Trait Implementations --- ### impl<'a> Debug for ByteSerialize<'a#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. The type of the elements being iterated over.#### fn next(&mut self) -> Option<&'a strAdvances the iterator and returns the next value. Returns the bounds on the remaining length of the iterator. &mut self ) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where Self: Sized, 🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values. Read more1.0.0 · source#### fn count(self) -> usizewhere Self: Sized, Consumes the iterator, counting the number of iterations and returning it. Read more1.0.0 · source#### fn last(self) -> Option<Self::Item>where Self: Sized, Consumes the iterator, returning the last element. Self: Sized, Creates an iterator starting at the same point, but stepping by the given amount at each iteration. Read more1.0.0 · source#### fn chain<U>(self, other: U) -> Chain<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator<Item = Self::Item>, Takes two iterators and creates a new iterator over both in sequence. Read more1.0.0 · source#### fn zip<U>(self, other: U) -> Zip<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator, ‘Zips up’ two iterators into a single iterator of pairs. Self: Sized, G: FnMut() -> Self::Item, 🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places an item generated by `separator` between adjacent items of the original iterator. Read more1.0.0 · source#### fn map<B, F>(self, f: F) -> Map<Self, F>where Self: Sized, F: FnMut(Self::Item) -> B, Takes a closure and creates an iterator which calls that closure on each element.
Read more1.21.0 · source#### fn for_each<F>(self, f: F)where Self: Sized, F: FnMut(Self::Item), Calls a closure on each element of an iterator. Read more1.0.0 · source#### fn filter<P>(self, predicate: P) -> Filter<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator which uses a closure to determine if an element should be yielded. Read more1.0.0 · source#### fn filter_map<B, F>(self, f: F) -> FilterMap<Self, F>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Creates an iterator that both filters and maps. Read more1.0.0 · source#### fn enumerate(self) -> Enumerate<Self>where Self: Sized, Creates an iterator which gives the current iteration count as well as the next value. Read more1.0.0 · source#### fn peekable(self) -> Peekable<Self>where Self: Sized, Creates an iterator which can use the `peek` and `peek_mut` methods to look at the next element of the iterator without consuming it. See their documentation for more information. Read more1.0.0 · source#### fn skip_while<P>(self, predicate: P) -> SkipWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that `skip`s elements based on a predicate. Read more1.0.0 · source#### fn take_while<P>(self, predicate: P) -> TakeWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that yields elements based on a predicate. Read more1.57.0 · source#### fn map_while<B, P>(self, predicate: P) -> MapWhile<Self, P>where Self: Sized, P: FnMut(Self::Item) -> Option<B>, Creates an iterator that both yields elements based on a predicate and maps. Read more1.0.0 · source#### fn skip(self, n: usize) -> Skip<Self>where Self: Sized, Creates an iterator that skips the first `n` elements. Read more1.0.0 · source#### fn take(self, n: usize) -> Take<Self>where Self: Sized, Creates an iterator that yields the first `n` elements, or fewer if the underlying iterator ends sooner. Read more1.0.0 · source#### fn scan<St, B, F>(self, initial_state: St, f: F) -> Scan<Self, St, F>where Self: Sized, F: FnMut(&mut St, Self::Item) -> Option<B>, An iterator adapter which, like `fold`, holds internal state, but unlike `fold`, produces a new iterator. Read more1.0.0 · source#### fn flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>where Self: Sized, U: IntoIterator, F: FnMut(Self::Item) -> U, Creates an iterator that works like map, but flattens nested structure. Read more1.0.0 · source#### fn fuse(self) -> Fuse<Self>where Self: Sized, Creates an iterator which ends after the first `None`. Read more1.0.0 · source#### fn inspect<F>(self, f: F) -> Inspect<Self, F>where Self: Sized, F: FnMut(&Self::Item), Does something with each element of an iterator, passing the value on. Read more1.0.0 · source#### fn by_ref(&mut self) -> &mut Selfwhere Self: Sized, Borrows an iterator, rather than consuming it. Read more1.0.0 · source#### fn collect<B>(self) -> Bwhere B: FromIterator<Self::Item>, Self: Sized, Transforms an iterator into a collection. E: Extend<Self::Item>, Self: Sized, 🔬This is a nightly-only experimental API. (`iter_collect_into`)Collects all the items from an iterator into a collection. Read more1.0.0 · source#### fn partition<B, F>(self, f: F) -> (B, B)where Self: Sized, B: Default + Extend<Self::Item>, F: FnMut(&Self::Item) -> bool, Consumes an iterator, creating two collections from it. Self: Sized, P: FnMut(Self::Item) -> bool, 🔬This is a nightly-only experimental API. 
(`iter_is_partitioned`)Checks if the elements of this iterator are partitioned according to the given predicate, such that all those that return `true` precede all those that return `false`. Read more1.27.0 · source#### fn try_fold<B, F, R>(&mut self, init: B, f: F) -> Rwhere Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Output = B>, An iterator method that applies a function as long as it returns successfully, producing a single, final value. Read more1.27.0 · source#### fn try_for_each<F, R>(&mut self, f: F) -> Rwhere Self: Sized, F: FnMut(Self::Item) -> R, R: Try<Output = ()>, An iterator method that applies a fallible function to each item in the iterator, stopping at the first error and returning that error. Read more1.0.0 · source#### fn fold<B, F>(self, init: B, f: F) -> Bwhere Self: Sized, F: FnMut(B, Self::Item) -> B, Folds every element into an accumulator by applying an operation, returning the final result. Read more1.51.0 · source#### fn reduce<F>(self, f: F) -> Option<Self::Item>where Self: Sized, F: FnMut(Self::Item, Self::Item) -> Self::Item, Reduces the elements to a single one, by repeatedly applying a reducing operation. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<<R as Try>::Output>>>::TryTypewhere Self: Sized, F: FnMut(Self::Item, Self::Item) -> R, R: Try<Output = Self::Item>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`iterator_try_reduce`)Reduces the elements to a single one by repeatedly applying a reducing operation. If the closure returns a failure, the failure is propagated back to the caller immediately. Read more1.0.0 · source#### fn all<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if every element of the iterator matches a predicate. Read more1.0.0 · source#### fn any<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if any element of the iterator matches a predicate. Read more1.0.0 · source#### fn find<P>(&mut self, predicate: P) -> Option<Self::Item>where Self: Sized, P: FnMut(&Self::Item) -> bool, Searches for an element of an iterator that satisfies a predicate. Read more1.30.0 · source#### fn find_map<B, F>(&mut self, f: F) -> Option<B>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Applies function to the elements of iterator and returns the first non-none result. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<Self::Item>>>::TryTypewhere Self: Sized, F: FnMut(&Self::Item) -> R, R: Try<Output = bool>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`try_find`)Applies function to the elements of iterator and returns the first true result or the first error. Read more1.0.0 · source#### fn position<P>(&mut self, predicate: P) -> Option<usize>where Self: Sized, P: FnMut(Self::Item) -> bool, Searches for an element in an iterator, returning its index. Read more1.6.0 · source#### fn max_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the maximum value from the specified function. Read more1.15.0 · source#### fn max_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the maximum value with respect to the specified comparison function. 
Read more1.6.0 · source#### fn min_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the minimum value from the specified function. Read more1.15.0 · source#### fn min_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the minimum value with respect to the specified comparison function. Read more1.0.0 · source#### fn unzip<A, B, FromA, FromB>(self) -> (FromA, FromB)where FromA: Default + Extend<A>, FromB: Default + Extend<B>, Self: Sized + Iterator<Item = (A, B)>, Converts an iterator of pairs into a pair of containers. Read more1.36.0 · source#### fn copied<'a, T>(self) -> Copied<Self>where T: 'a + Copy, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which copies all of its elements. Read more1.0.0 · source#### fn cloned<'a, T>(self) -> Cloned<Self>where T: 'a + Clone, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which `clone`s all of its elements. Self: Sized, 🔬This is a nightly-only experimental API. (`iter_array_chunks`)Returns an iterator over `N` elements of the iterator at a time. Read more1.11.0 · source#### fn sum<S>(self) -> Swhere Self: Sized, S: Sum<Self::Item>, Sums the elements of an iterator. Read more1.11.0 · source#### fn product<P>(self) -> Pwhere Self: Sized, P: Product<Self::Item>, Iterates over the entire iterator, multiplying all the elements Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Ordering, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn partial_cmp<I>(self, other: I) -> Option<Ordering>where I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Lexicographically compares the `PartialOrd` elements of this `Iterator` with those of another. The comparison works like short-circuit evaluation, returning a result without comparing the remaining elements. As soon as an order can be determined, the evaluation stops and a result is returned. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn eq<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are equal to those of another. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_order_by`)Determines if the elements of this `Iterator` are equal to those of another with respect to the specified equality function. Read more1.5.0 · source#### fn ne<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are not equal to those of another. Read more1.5.0 · source#### fn lt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less than those of another. 
Read more1.5.0 · source#### fn le<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less or equal to those of another. Read more1.5.0 · source#### fn gt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than those of another. Read more1.5.0 · source#### fn ge<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than or equal to those of another. Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given comparator function. Self: Sized, F: FnMut(Self::Item) -> K, K: PartialOrd<K>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given key extraction function. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for ByteSerialize<'a### impl<'a> Send for ByteSerialize<'a### impl<'a> Sync for ByteSerialize<'a### impl<'a> Unpin for ByteSerialize<'a### impl<'a> UnwindSafe for ByteSerialize<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<I> IntoIterator for Iwhere I: Iterator, #### type Item = <I as Iterator>::Item The type of the elements being iterated over.#### type IntoIter = I Which kind of iterator are we turning this into?const: unstable · source#### fn into_iter(self) -> I Creates an iterator from a value. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct form_urlencoded::Parse === ``` pub struct Parse<'a> { /* private fields */ } ``` The return type of `parse()`. Implementations --- ### impl<'a> Parse<'a#### pub fn into_owned(self) -> ParseIntoOwned<'aReturn a new iterator that yields pairs of `String` instead of pairs of `Cow<str>`. Trait Implementations --- ### impl<'a> Clone for Parse<'a#### fn clone(&self) -> Parse<'aReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. The type of the elements being iterated over.#### fn next(&mut self) -> Option<Self::ItemAdvances the iterator and returns the next value. &mut self ) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where Self: Sized, 🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values. 
Read more1.0.0 · source#### fn size_hint(&self) -> (usize, Option<usize>) Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source#### fn count(self) -> usizewhere Self: Sized, Consumes the iterator, counting the number of iterations and returning it. Read more1.0.0 · source#### fn last(self) -> Option<Self::Item>where Self: Sized, Consumes the iterator, returning the last element. Self: Sized, Creates an iterator starting at the same point, but stepping by the given amount at each iteration. Read more1.0.0 · source#### fn chain<U>(self, other: U) -> Chain<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator<Item = Self::Item>, Takes two iterators and creates a new iterator over both in sequence. Read more1.0.0 · source#### fn zip<U>(self, other: U) -> Zip<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator, ‘Zips up’ two iterators into a single iterator of pairs. Self: Sized, G: FnMut() -> Self::Item, 🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places an item generated by `separator` between adjacent items of the original iterator. Read more1.0.0 · source#### fn map<B, F>(self, f: F) -> Map<Self, F>where Self: Sized, F: FnMut(Self::Item) -> B, Takes a closure and creates an iterator which calls that closure on each element. Read more1.21.0 · source#### fn for_each<F>(self, f: F)where Self: Sized, F: FnMut(Self::Item), Calls a closure on each element of an iterator. Read more1.0.0 · source#### fn filter<P>(self, predicate: P) -> Filter<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator which uses a closure to determine if an element should be yielded. Read more1.0.0 · source#### fn filter_map<B, F>(self, f: F) -> FilterMap<Self, F>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Creates an iterator that both filters and maps. Read more1.0.0 · source#### fn enumerate(self) -> Enumerate<Self>where Self: Sized, Creates an iterator which gives the current iteration count as well as the next value. Read more1.0.0 · source#### fn peekable(self) -> Peekable<Self>where Self: Sized, Creates an iterator which can use the `peek` and `peek_mut` methods to look at the next element of the iterator without consuming it. See their documentation for more information. Read more1.0.0 · source#### fn skip_while<P>(self, predicate: P) -> SkipWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that `skip`s elements based on a predicate. Read more1.0.0 · source#### fn take_while<P>(self, predicate: P) -> TakeWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that yields elements based on a predicate. Read more1.57.0 · source#### fn map_while<B, P>(self, predicate: P) -> MapWhile<Self, P>where Self: Sized, P: FnMut(Self::Item) -> Option<B>, Creates an iterator that both yields elements based on a predicate and maps. Read more1.0.0 · source#### fn skip(self, n: usize) -> Skip<Self>where Self: Sized, Creates an iterator that skips the first `n` elements. Read more1.0.0 · source#### fn take(self, n: usize) -> Take<Self>where Self: Sized, Creates an iterator that yields the first `n` elements, or fewer if the underlying iterator ends sooner. Read more1.0.0 · source#### fn scan<St, B, F>(self, initial_state: St, f: F) -> Scan<Self, St, F>where Self: Sized, F: FnMut(&mut St, Self::Item) -> Option<B>, An iterator adapter which, like `fold`, holds internal state, but unlike `fold`, produces a new iterator. 
Read more1.0.0 · source#### fn flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>where Self: Sized, U: IntoIterator, F: FnMut(Self::Item) -> U, Creates an iterator that works like map, but flattens nested structure. Read more1.0.0 · source#### fn fuse(self) -> Fuse<Self>where Self: Sized, Creates an iterator which ends after the first `None`. Read more1.0.0 · source#### fn inspect<F>(self, f: F) -> Inspect<Self, F>where Self: Sized, F: FnMut(&Self::Item), Does something with each element of an iterator, passing the value on. Read more1.0.0 · source#### fn by_ref(&mut self) -> &mut Selfwhere Self: Sized, Borrows an iterator, rather than consuming it. Read more1.0.0 · source#### fn collect<B>(self) -> Bwhere B: FromIterator<Self::Item>, Self: Sized, Transforms an iterator into a collection. E: Extend<Self::Item>, Self: Sized, 🔬This is a nightly-only experimental API. (`iter_collect_into`)Collects all the items from an iterator into a collection. Read more1.0.0 · source#### fn partition<B, F>(self, f: F) -> (B, B)where Self: Sized, B: Default + Extend<Self::Item>, F: FnMut(&Self::Item) -> bool, Consumes an iterator, creating two collections from it. Self: Sized, P: FnMut(Self::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_is_partitioned`)Checks if the elements of this iterator are partitioned according to the given predicate, such that all those that return `true` precede all those that return `false`. Read more1.27.0 · source#### fn try_fold<B, F, R>(&mut self, init: B, f: F) -> Rwhere Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Output = B>, An iterator method that applies a function as long as it returns successfully, producing a single, final value. Read more1.27.0 · source#### fn try_for_each<F, R>(&mut self, f: F) -> Rwhere Self: Sized, F: FnMut(Self::Item) -> R, R: Try<Output = ()>, An iterator method that applies a fallible function to each item in the iterator, stopping at the first error and returning that error. Read more1.0.0 · source#### fn fold<B, F>(self, init: B, f: F) -> Bwhere Self: Sized, F: FnMut(B, Self::Item) -> B, Folds every element into an accumulator by applying an operation, returning the final result. Read more1.51.0 · source#### fn reduce<F>(self, f: F) -> Option<Self::Item>where Self: Sized, F: FnMut(Self::Item, Self::Item) -> Self::Item, Reduces the elements to a single one, by repeatedly applying a reducing operation. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<<R as Try>::Output>>>::TryTypewhere Self: Sized, F: FnMut(Self::Item, Self::Item) -> R, R: Try<Output = Self::Item>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`iterator_try_reduce`)Reduces the elements to a single one by repeatedly applying a reducing operation. If the closure returns a failure, the failure is propagated back to the caller immediately. Read more1.0.0 · source#### fn all<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if every element of the iterator matches a predicate. Read more1.0.0 · source#### fn any<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if any element of the iterator matches a predicate. Read more1.0.0 · source#### fn find<P>(&mut self, predicate: P) -> Option<Self::Item>where Self: Sized, P: FnMut(&Self::Item) -> bool, Searches for an element of an iterator that satisfies a predicate. 
Read more1.30.0 · source#### fn find_map<B, F>(&mut self, f: F) -> Option<B>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Applies function to the elements of iterator and returns the first non-none result. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<Self::Item>>>::TryTypewhere Self: Sized, F: FnMut(&Self::Item) -> R, R: Try<Output = bool>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`try_find`)Applies function to the elements of iterator and returns the first true result or the first error. Read more1.0.0 · source#### fn position<P>(&mut self, predicate: P) -> Option<usize>where Self: Sized, P: FnMut(Self::Item) -> bool, Searches for an element in an iterator, returning its index. Read more1.6.0 · source#### fn max_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the maximum value from the specified function. Read more1.15.0 · source#### fn max_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the maximum value with respect to the specified comparison function. Read more1.6.0 · source#### fn min_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the minimum value from the specified function. Read more1.15.0 · source#### fn min_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the minimum value with respect to the specified comparison function. Read more1.0.0 · source#### fn unzip<A, B, FromA, FromB>(self) -> (FromA, FromB)where FromA: Default + Extend<A>, FromB: Default + Extend<B>, Self: Sized + Iterator<Item = (A, B)>, Converts an iterator of pairs into a pair of containers. Read more1.36.0 · source#### fn copied<'a, T>(self) -> Copied<Self>where T: 'a + Copy, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which copies all of its elements. Read more1.0.0 · source#### fn cloned<'a, T>(self) -> Cloned<Self>where T: 'a + Clone, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which `clone`s all of its elements. Read more1.0.0 · source#### fn cycle(self) -> Cycle<Self>where Self: Sized + Clone, Repeats an iterator endlessly. Self: Sized, 🔬This is a nightly-only experimental API. (`iter_array_chunks`)Returns an iterator over `N` elements of the iterator at a time. Read more1.11.0 · source#### fn sum<S>(self) -> Swhere Self: Sized, S: Sum<Self::Item>, Sums the elements of an iterator. Read more1.11.0 · source#### fn product<P>(self) -> Pwhere Self: Sized, P: Product<Self::Item>, Iterates over the entire iterator, multiplying all the elements Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Ordering, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn partial_cmp<I>(self, other: I) -> Option<Ordering>where I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Lexicographically compares the `PartialOrd` elements of this `Iterator` with those of another. The comparison works like short-circuit evaluation, returning a result without comparing the remaining elements. 
As soon as an order can be determined, the evaluation stops and a result is returned. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn eq<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are equal to those of another. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_order_by`)Determines if the elements of this `Iterator` are equal to those of another with respect to the specified equality function. Read more1.5.0 · source#### fn ne<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are not equal to those of another. Read more1.5.0 · source#### fn lt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less than those of another. Read more1.5.0 · source#### fn le<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less or equal to those of another. Read more1.5.0 · source#### fn gt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than those of another. Read more1.5.0 · source#### fn ge<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than or equal to those of another. Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given comparator function. Self: Sized, F: FnMut(Self::Item) -> K, K: PartialOrd<K>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given key extraction function. --- ### impl<'a> RefUnwindSafe for Parse<'a### impl<'a> Send for Parse<'a### impl<'a> Sync for Parse<'a### impl<'a> Unpin for Parse<'a### impl<'a> UnwindSafe for Parse<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<I> IntoIterator for Iwhere I: Iterator, #### type Item = <I as Iterator>::Item The type of the elements being iterated over.#### type IntoIter = I Which kind of iterator are we turning this into?const: unstable · source#### fn into_iter(self) -> I Creates an iterator from a value. 
T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct form_urlencoded::ParseIntoOwned
===
``` pub struct ParseIntoOwned<'a> { /* private fields */ } ``` Like `Parse`, but yields pairs of `String` instead of pairs of `Cow<str>`. Trait Implementations --- ### impl<'a> Iterator for ParseIntoOwned<'a#### type Item = (String, String) The type of the elements being iterated over.#### fn next(&mut self) -> Option<Self::ItemAdvances the iterator and returns the next value. &mut self ) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where Self: Sized, 🔬This is a nightly-only experimental API. (`iter_next_chunk`)Advances the iterator and returns an array containing the next `N` values.
Read more1.0.0 · source#### fn size_hint(&self) -> (usize, Option<usize>) Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source#### fn count(self) -> usizewhere Self: Sized, Consumes the iterator, counting the number of iterations and returning it. Read more1.0.0 · source#### fn last(self) -> Option<Self::Item>where Self: Sized, Consumes the iterator, returning the last element. Self: Sized, Creates an iterator starting at the same point, but stepping by the given amount at each iteration. Read more1.0.0 · source#### fn chain<U>(self, other: U) -> Chain<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator<Item = Self::Item>, Takes two iterators and creates a new iterator over both in sequence. Read more1.0.0 · source#### fn zip<U>(self, other: U) -> Zip<Self, <U as IntoIterator>::IntoIter>where Self: Sized, U: IntoIterator, ‘Zips up’ two iterators into a single iterator of pairs. Self: Sized, G: FnMut() -> Self::Item, 🔬This is a nightly-only experimental API. (`iter_intersperse`)Creates a new iterator which places an item generated by `separator` between adjacent items of the original iterator. Read more1.0.0 · source#### fn map<B, F>(self, f: F) -> Map<Self, F>where Self: Sized, F: FnMut(Self::Item) -> B, Takes a closure and creates an iterator which calls that closure on each element. Read more1.21.0 · source#### fn for_each<F>(self, f: F)where Self: Sized, F: FnMut(Self::Item), Calls a closure on each element of an iterator. Read more1.0.0 · source#### fn filter<P>(self, predicate: P) -> Filter<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator which uses a closure to determine if an element should be yielded. Read more1.0.0 · source#### fn filter_map<B, F>(self, f: F) -> FilterMap<Self, F>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Creates an iterator that both filters and maps. Read more1.0.0 · source#### fn enumerate(self) -> Enumerate<Self>where Self: Sized, Creates an iterator which gives the current iteration count as well as the next value. Read more1.0.0 · source#### fn peekable(self) -> Peekable<Self>where Self: Sized, Creates an iterator which can use the `peek` and `peek_mut` methods to look at the next element of the iterator without consuming it. See their documentation for more information. Read more1.0.0 · source#### fn skip_while<P>(self, predicate: P) -> SkipWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that `skip`s elements based on a predicate. Read more1.0.0 · source#### fn take_while<P>(self, predicate: P) -> TakeWhile<Self, P>where Self: Sized, P: FnMut(&Self::Item) -> bool, Creates an iterator that yields elements based on a predicate. Read more1.57.0 · source#### fn map_while<B, P>(self, predicate: P) -> MapWhile<Self, P>where Self: Sized, P: FnMut(Self::Item) -> Option<B>, Creates an iterator that both yields elements based on a predicate and maps. Read more1.0.0 · source#### fn skip(self, n: usize) -> Skip<Self>where Self: Sized, Creates an iterator that skips the first `n` elements. Read more1.0.0 · source#### fn take(self, n: usize) -> Take<Self>where Self: Sized, Creates an iterator that yields the first `n` elements, or fewer if the underlying iterator ends sooner. Read more1.0.0 · source#### fn scan<St, B, F>(self, initial_state: St, f: F) -> Scan<Self, St, F>where Self: Sized, F: FnMut(&mut St, Self::Item) -> Option<B>, An iterator adapter which, like `fold`, holds internal state, but unlike `fold`, produces a new iterator. 
Read more1.0.0 · source#### fn flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>where Self: Sized, U: IntoIterator, F: FnMut(Self::Item) -> U, Creates an iterator that works like map, but flattens nested structure. Read more1.0.0 · source#### fn fuse(self) -> Fuse<Self>where Self: Sized, Creates an iterator which ends after the first `None`. Read more1.0.0 · source#### fn inspect<F>(self, f: F) -> Inspect<Self, F>where Self: Sized, F: FnMut(&Self::Item), Does something with each element of an iterator, passing the value on. Read more1.0.0 · source#### fn by_ref(&mut self) -> &mut Selfwhere Self: Sized, Borrows an iterator, rather than consuming it. Read more1.0.0 · source#### fn collect<B>(self) -> Bwhere B: FromIterator<Self::Item>, Self: Sized, Transforms an iterator into a collection. E: Extend<Self::Item>, Self: Sized, 🔬This is a nightly-only experimental API. (`iter_collect_into`)Collects all the items from an iterator into a collection. Read more1.0.0 · source#### fn partition<B, F>(self, f: F) -> (B, B)where Self: Sized, B: Default + Extend<Self::Item>, F: FnMut(&Self::Item) -> bool, Consumes an iterator, creating two collections from it. Self: Sized, P: FnMut(Self::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_is_partitioned`)Checks if the elements of this iterator are partitioned according to the given predicate, such that all those that return `true` precede all those that return `false`. Read more1.27.0 · source#### fn try_fold<B, F, R>(&mut self, init: B, f: F) -> Rwhere Self: Sized, F: FnMut(B, Self::Item) -> R, R: Try<Output = B>, An iterator method that applies a function as long as it returns successfully, producing a single, final value. Read more1.27.0 · source#### fn try_for_each<F, R>(&mut self, f: F) -> Rwhere Self: Sized, F: FnMut(Self::Item) -> R, R: Try<Output = ()>, An iterator method that applies a fallible function to each item in the iterator, stopping at the first error and returning that error. Read more1.0.0 · source#### fn fold<B, F>(self, init: B, f: F) -> Bwhere Self: Sized, F: FnMut(B, Self::Item) -> B, Folds every element into an accumulator by applying an operation, returning the final result. Read more1.51.0 · source#### fn reduce<F>(self, f: F) -> Option<Self::Item>where Self: Sized, F: FnMut(Self::Item, Self::Item) -> Self::Item, Reduces the elements to a single one, by repeatedly applying a reducing operation. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<<R as Try>::Output>>>::TryTypewhere Self: Sized, F: FnMut(Self::Item, Self::Item) -> R, R: Try<Output = Self::Item>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`iterator_try_reduce`)Reduces the elements to a single one by repeatedly applying a reducing operation. If the closure returns a failure, the failure is propagated back to the caller immediately. Read more1.0.0 · source#### fn all<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if every element of the iterator matches a predicate. Read more1.0.0 · source#### fn any<F>(&mut self, f: F) -> boolwhere Self: Sized, F: FnMut(Self::Item) -> bool, Tests if any element of the iterator matches a predicate. Read more1.0.0 · source#### fn find<P>(&mut self, predicate: P) -> Option<Self::Item>where Self: Sized, P: FnMut(&Self::Item) -> bool, Searches for an element of an iterator that satisfies a predicate. 
Read more1.30.0 · source#### fn find_map<B, F>(&mut self, f: F) -> Option<B>where Self: Sized, F: FnMut(Self::Item) -> Option<B>, Applies function to the elements of iterator and returns the first non-none result. &mut self, f: F ) -> <<R as Try>::Residual as Residual<Option<Self::Item>>>::TryTypewhere Self: Sized, F: FnMut(&Self::Item) -> R, R: Try<Output = bool>, <R as Try>::Residual: Residual<Option<Self::Item>>, 🔬This is a nightly-only experimental API. (`try_find`)Applies function to the elements of iterator and returns the first true result or the first error. Read more1.0.0 · source#### fn position<P>(&mut self, predicate: P) -> Option<usize>where Self: Sized, P: FnMut(Self::Item) -> bool, Searches for an element in an iterator, returning its index. Read more1.6.0 · source#### fn max_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the maximum value from the specified function. Read more1.15.0 · source#### fn max_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the maximum value with respect to the specified comparison function. Read more1.6.0 · source#### fn min_by_key<B, F>(self, f: F) -> Option<Self::Item>where B: Ord, Self: Sized, F: FnMut(&Self::Item) -> B, Returns the element that gives the minimum value from the specified function. Read more1.15.0 · source#### fn min_by<F>(self, compare: F) -> Option<Self::Item>where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering, Returns the element that gives the minimum value with respect to the specified comparison function. Read more1.0.0 · source#### fn unzip<A, B, FromA, FromB>(self) -> (FromA, FromB)where FromA: Default + Extend<A>, FromB: Default + Extend<B>, Self: Sized + Iterator<Item = (A, B)>, Converts an iterator of pairs into a pair of containers. Read more1.36.0 · source#### fn copied<'a, T>(self) -> Copied<Self>where T: 'a + Copy, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which copies all of its elements. Read more1.0.0 · source#### fn cloned<'a, T>(self) -> Cloned<Self>where T: 'a + Clone, Self: Sized + Iterator<Item = &'a T>, Creates an iterator which `clone`s all of its elements. Self: Sized, 🔬This is a nightly-only experimental API. (`iter_array_chunks`)Returns an iterator over `N` elements of the iterator at a time. Read more1.11.0 · source#### fn sum<S>(self) -> Swhere Self: Sized, S: Sum<Self::Item>, Sums the elements of an iterator. Read more1.11.0 · source#### fn product<P>(self) -> Pwhere Self: Sized, P: Product<Self::Item>, Iterates over the entire iterator, multiplying all the elements Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Ordering, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn partial_cmp<I>(self, other: I) -> Option<Ordering>where I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Lexicographically compares the `PartialOrd` elements of this `Iterator` with those of another. The comparison works like short-circuit evaluation, returning a result without comparing the remaining elements. As soon as an order can be determined, the evaluation stops and a result is returned. 
Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`iter_order_by`)Lexicographically compares the elements of this `Iterator` with those of another with respect to the specified comparison function. Read more1.5.0 · source#### fn eq<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are equal to those of another. Self: Sized, I: IntoIterator, F: FnMut(Self::Item, <I as IntoIterator>::Item) -> bool, 🔬This is a nightly-only experimental API. (`iter_order_by`)Determines if the elements of this `Iterator` are equal to those of another with respect to the specified equality function. Read more1.5.0 · source#### fn ne<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialEq<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are not equal to those of another. Read more1.5.0 · source#### fn lt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less than those of another. Read more1.5.0 · source#### fn le<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically less or equal to those of another. Read more1.5.0 · source#### fn gt<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than those of another. Read more1.5.0 · source#### fn ge<I>(self, other: I) -> boolwhere I: IntoIterator, Self::Item: PartialOrd<<I as IntoIterator>::Item>, Self: Sized, Determines if the elements of this `Iterator` are lexicographically greater than or equal to those of another. Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Option<Ordering>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given comparator function. Self: Sized, F: FnMut(Self::Item) -> K, K: PartialOrd<K>, 🔬This is a nightly-only experimental API. (`is_sorted`)Checks if the elements of this iterator are sorted using the given key extraction function. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for ParseIntoOwned<'a### impl<'a> Send for ParseIntoOwned<'a### impl<'a> Sync for ParseIntoOwned<'a### impl<'a> Unpin for ParseIntoOwned<'a### impl<'a> UnwindSafe for ParseIntoOwned<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<I> IntoIterator for Iwhere I: Iterator, #### type Item = <I as Iterator>::Item The type of the elements being iterated over.#### type IntoIter = I Which kind of iterator are we turning this into?const: unstable · source#### fn into_iter(self) -> I Creates an iterator from a value. 
U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct form_urlencoded::Serializer === ``` pub struct Serializer<'a, T: Target> { /* private fields */ } ``` The `application/x-www-form-urlencoded` serializer. Implementations --- ### impl<'a, T: Target> Serializer<'a, T#### pub fn new(target: T) -> Self Create a new `application/x-www-form-urlencoded` serializer for the given target. If the target is non-empty, its content is assumed to already be in `application/x-www-form-urlencoded` syntax. #### pub fn for_suffix(target: T, start_position: usize) -> Self Create a new `application/x-www-form-urlencoded` serializer for a suffix of the given target. If that suffix is non-empty, its content is assumed to already be in `application/x-www-form-urlencoded` syntax. #### pub fn clear(&mut self) -> &mut Self Remove any existing name/value pair. Panics if called after `.finish()`. #### pub fn encoding_override(&mut self, new: EncodingOverride<'a>) -> &mut Self Set the character encoding to be used for names and values before percent-encoding. #### pub fn append_pair(&mut self, name: &str, value: &str) -> &mut Self Serialize and append a name/value pair. Panics if called after `.finish()`. #### pub fn append_key_only(&mut self, name: &str) -> &mut Self Serialize and append a name of parameter without any value. Panics if called after `.finish()`. #### pub fn extend_pairs<I, K, V>(&mut self, iter: I) -> &mut Selfwhere I: IntoIterator, I::Item: Borrow<(K, V)>, K: AsRef<str>, V: AsRef<str>, Serialize and append a number of name/value pairs. This simply calls `append_pair` repeatedly. This can be more convenient, so the user doesn’t need to introduce a block to limit the scope of `Serializer`’s borrow of its string. Panics if called after `.finish()`. #### pub fn extend_keys_only<I, K>(&mut self, iter: I) -> &mut Selfwhere I: IntoIterator, I::Item: Borrow<K>, K: AsRef<str>, Serialize and append a number of names without values. This simply calls `append_key_only` repeatedly. This can be more convenient, so the user doesn’t need to introduce a block to limit the scope of `Serializer`’s borrow of its string. Panics if called after `.finish()`. #### pub fn finish(&mut self) -> T::Finished If this serializer was constructed with a string, take and return that string. ``` use form_urlencoded; let encoded: String = form_urlencoded::Serializer::new(String::new()) .append_pair("foo", "bar & baz") .append_pair("saison", "Été+hiver") .finish(); assert_eq!(encoded, "foo=bar+%26+baz&saison=%C3%89t%C3%A9%2Bhiver"); ``` Panics if called more than once. Auto Trait Implementations --- ### impl<'a, T> !RefUnwindSafe for Serializer<'a, T### impl<'a, T> !Send for Serializer<'a, T### impl<'a, T> !Sync for Serializer<'a, T### impl<'a, T> Unpin for Serializer<'a, T>where T: Unpin, ### impl<'a, T> !UnwindSafe for Serializer<'a, TBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. 
Function form_urlencoded::byte_serialize
===
```
pub fn byte_serialize(input: &[u8]) -> ByteSerialize<'_>
```
The `application/x-www-form-urlencoded` byte serializer.

Return an iterator of `&str` slices. (`ByteSerialize<'a>` implements `Iterator` with `type Item = &'a str`.)

Function form_urlencoded::parse
===
```
pub fn parse(input: &[u8]) -> Parse<'_>
```
Convert a byte string in the `application/x-www-form-urlencoded` syntax into an iterator of (name, value) pairs.

Use `parse(input.as_bytes())` to parse a `&str` string.

The names and values are percent-decoded. For instance, `%23first=%25try%25` will be converted to `[("#first", "%try%")]`. (`Parse<'a>` implements `Iterator` with `type Item = (Cow<'a, str>, Cow<'a, str>)`.)
boozelib
readthedoc
OCaml
You can install it from PyPI; it is known as `boozelib` and has no dependencies: ``` pip install --user boozelib ```

## Development Setup

poetry is used to manage a virtual environment for the development setup. A `Makefile` is provided that collects some common tasks. You have to run the following once, to set up your environment: `make setup`

## Constants

* `boozelib.ALCOHOL_DENSITY = 0.8`: density of alcohol (g/ml)
* `boozelib.BLOOD_DENSITY = 1.055`: density of blood (g/ml)
* `boozelib.WATER_IN_BLOOD = 0.8`: parts of water in blood (%)
* `boozelib.ALCOHOL_DEGRADATION = 0.0025`: alcohol degradation (g) per kg of body weight per minute

## Functions

``` get_blood_alcohol_content ``` (*, age: int, weight: int, height: int, sex: bool, volume: int, percent: float) → float
Return the rise in blood alcohol content (per mille) for a person after a drink. Given a drink of volume (ml) containing percent (vol/vol) alcohol, for a person with age (years), weight (kg) and height (cm), using the formula for “female body types” if sex is true.

``` get_blood_alcohol_degradation ```
Return the alcohol degradation (per mille) for a person over minutes. For a person with age (years), weight (kg) and height (cm), using the formula for “female body types” if sex is true, over the given minutes. If degradation is not set, `ALCOHOL_DEGRADATION` is used as the default.

``` calculate_alcohol_weight ``` (*, volume: int, percent: float) → float
Return the amount of alcohol (in grams) contained in a drink. Given a drink with a volume (ml) containing percent (vol/vol) alcohol.

``` calculate_alcohol_degradation ```
Return the alcohol degradation (in grams) over time. For a person with weight (in kg) over the given minutes. If degradation is not set, `ALCOHOL_DEGRADATION` is used as the default.

``` calculate_body_water ``` (*, age: int, weight: int, height: int, sex: bool) → float
Return the amount of water (in liters) in a person's body. For a person with age (years), weight (kg) and height (cm), using the formula for “female body types” if sex is true.

``` gramm_to_promille ``` (*, gramm: float, body_water: float) → float
Return the blood alcohol content of a person, given alcohol (in grams) and body water (in liters).

We used the formulas from Widmark and Watson — with the modification by Eicker (for the female blood alcohol content) — to calculate the blood alcohol content and alcohol degradation in this module.
## Variables and Constants

`pa` = Density of alcohol (g/ml) = 0.8
`pb` = Density of blood (g/ml) = 1.055
`w` = Parts of water in blood (%) = 0.8
`v` = Volume of the drink (ml)
`e` = Alcohol concentration in the drink (v/v)
`t` = Age (in years)
`h` = Height (in cm)
`m` = Weight (in kg)

## Widmark Formula

Blood Alcohol Concentration (BAC) => c

`c = A / (m * r)`

Mass of alcohol intake (in grams) => A

`A = v * e * pa`

Factor for alcohol degradation, the “Reduktionsfaktor” (by sex) => r

* male: `r = 0.7`
* female: `r = 0.6`

### Watson Addition

Reduktionsfaktor => r

``` r = (pb * kw) / (w * m) ```

Water in the body (by sex) => kw

* male ``` kw = 2.447 - (0.09516 * t) + (0.1074 * h) + (0.3362 * m) ```
* female ``` kw = 0.203 - (0.07 * t) + (0.1069 * h) + (0.2466 * m) ```

### Combined

``` c = (pa * v * e * w) / (pb * kw) ```

### Final Formula

* male: ``` (pa * v * e * w) / (pb * (2.447 - (0.09516 * t) + (0.1074 * h) + (0.3362 * m))) ```
* female: ``` (pa * v * e * w) / (pb * (0.203 - (0.07 * t) + (0.1069 * h) + (0.2466 * m))) ```

## Alcohol degradation

Average is 0.15 g/kg per hour (0.0025 g/kg per minute).
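To make the formulas above concrete, here is a worked example with assumed inputs (a male, 30 years old, 180 cm, 80 kg, drinking 500 ml at 5% vol); the result should roughly match what `get_blood_alcohol_content` returns for the same arguments:

```
kw = 2.447 - (0.09516 * 30) + (0.1074 * 180) + (0.3362 * 80)
   = 2.447 - 2.8548 + 19.332 + 26.896
   = 45.82 (liters of body water)

c  = (pa * v * e * w) / (pb * kw)
   = (0.8 * 500 * 0.05 * 0.8) / (1.055 * 45.82)
   = 16 / 48.34
   ≈ 0.33 per mille
```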
membrane_telemetry_metrics
hex
Erlang
Membrane.TelemetryMetrics === Defines macros for executing telemetry events and registering processes with events. The provided macros evaluate to meaningful code or to nothing, depending on config values, in order to achieve a performance boost when a specific event or the whole telemetry is not in use.

Summary === [Types](#types) --- [label()](#t:label/0) [Functions](#functions) --- [conditional_execute(func, event_name, measurements, metadata, label)](#conditional_execute/5) Evaluates to a conditional call to [`:telemetry.execute/3`](https://hexdocs.pm/telemetry/1.1.0/telemetry.html#execute/3) or to nothing, depending on whether the specific event is enabled in the config file. If the event is enabled, [`:telemetry.execute/3`](https://hexdocs.pm/telemetry/1.1.0/telemetry.html#execute/3) will be executed only if the value returned by the call to `func` is truthy.

[execute(event_name, measurements, metadata, label)](#execute/4) Evaluates to a call to [`:telemetry.execute/3`](https://hexdocs.pm/telemetry/1.1.0/telemetry.html#execute/3) or to nothing, depending on whether the specific event is enabled in the config file.

[register(event_name, label)](#register/2) Evaluates to a call to `Membrane.TelemetryMetrics.Monitor.start/3` or to nothing, depending on whether the specific event is enabled in the config file. Should be called in every process that will execute an event linked with a metric aggregated by some instance of [`Membrane.TelemetryMetrics.Reporter`](Membrane.TelemetryMetrics.Reporter.html).

Types === Functions ===

Membrane.TelemetryMetrics.Reporter === Attaches handlers to :telemetry events based on the received list of metrics definitions. The attached handlers store metrics values in ETS tables. These values can be retrieved by calling the [`scrape/2`](#scrape/2) function, or retrieved and reset by calling [`scrape_and_cleanup/2`](#scrape_and_cleanup/2).

Currently supported types of metrics are: * [`Telemetry.Metrics.Counter`](https://hexdocs.pm/telemetry_metrics/0.6.1/Telemetry.Metrics.Counter.html) * [`Telemetry.Metrics.Sum`](https://hexdocs.pm/telemetry_metrics/0.6.1/Telemetry.Metrics.Sum.html) * [`Telemetry.Metrics.LastValue`](https://hexdocs.pm/telemetry_metrics/0.6.1/Telemetry.Metrics.LastValue.html)

Currently supported fields of metrics definitions are: `:name`, `:event_name`, `:measurement`. Fields `:keep`, `:reporter_options`, `:tag_values`, `:tags`, `:unit` and functionalities related to them are not supported yet. Metrics values are grouped by `label`.

Summary === [Types](#types) --- [report()](#t:report/0) [reporter()](#t:reporter/0) [Functions](#functions) --- [child_spec(init_arg)](#child_spec/1) Returns a specification to start this module under a supervisor.
[scrape(reporter, timeout \\ 5000)](#scrape/2) [scrape_and_cleanup(reporter, timeout \\ 5000)](#scrape_and_cleanup/2) [start_link(init_arg, options \\ [])](#start_link/2) [stop(reporter)](#stop/1) Types === Functions ===

Membrane Telemetry Metrics === [![Hex.pm](https://img.shields.io/hexpm/v/membrane_telemetry_metrics.svg)](https://hex.pm/packages/membrane_telemetry_metrics) [![API Docs](https://img.shields.io/badge/api-docs-yellow.svg?style=flat)](https://hexdocs.pm/membrane_telemetry_metrics) [![CircleCI](https://circleci.com/gh/membraneframework/membrane_telemetry_metrics.svg?style=svg)](https://circleci.com/gh/membraneframework/membrane_telemetry_metrics) This repository contains a tool for generating metrics. It is part of [Membrane Multimedia Framework](https://membraneframework.org).

[installation](#installation) Installation --- The package can be installed by adding `membrane_telemetry_metrics` to your list of dependencies in `mix.exs`: ``` def deps do [ {:membrane_telemetry_metrics, "~> 0.1.0"} ] end ```

[usage](#usage) Usage --- The usage example below illustrates how you can use this tool to aggregate metrics and generate reports containing their values. In this case, we have a scenario where a couple of people are doing their shopping, and we want to have access to reports with some metrics about them, without calling each of these people directly.

To benefit from the full functionality offered by this tool, you have to use both [`Membrane.TelemetryMetrics`](Membrane.TelemetryMetrics.html) and `Membrane.TelemetryMetrics.Reporter`. First, put ``` config :membrane_telemetry_metrics, enabled: true ``` in your `config.exs`. You can also pass a list of events that will be enabled in the `:events` option (in this case, it would be `[[:shopping]]`). If you don't specify that option, all events will be enabled.

Let's assume that you want to track three metrics: * `cash_spent` * `number_of_payments` * `last_visited_shop`

So the definition of our list of metrics will look like ``` metrics = [ Telemetry.Metrics.sum( "cash_spent", event_name: [:shopping], measurement: :payment ), Telemetry.Metrics.counter( "number_of_payments", event_name: [:shopping] ), Telemetry.Metrics.last_value( "last_visited_shop", event_name: [:shopping], measurement: :shop ) ] ```

Then you have to start `Membrane.TelemetryMetrics.Reporter`. This can be done by calling ``` Membrane.TelemetryMetrics.Reporter.start_link(metrics, name: ShoppingReporter) ``` but the suggested way is to do it under the [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html) tree, in the [`Application`](https://hexdocs.pm/elixir/Application.html) module.

Now `ShoppingReporter` is ready to aggregate metrics from emitted events. But before that, we have to register our event in every process that will emit it. Let's assume that we want to aggregate data about three people: `<NAME>`, `<NAME>` and `<NAME>`. Events with data about each of these people will be emitted in a different process.
In `<NAME>`'s process we will execute the following code ``` require Membrane.TelemetryMetrics Membrane.TelemetryMetrics.register([:shopping], [name: "James", surname: "Smith"]) Membrane.TelemetryMetrics.execute( [:shopping], %{shop: "Grocery", payment: 10}, %{}, [name: "James", surname: "Smith"] ) Membrane.TelemetryMetrics.execute( [:shopping], %{shop: "Bakery", payment: 5}, %{}, [name: "James", surname: "Smith"] ) Membrane.TelemetryMetrics.execute( [:shopping], %{shop: "Jeweller", payment: 100}, %{}, [name: "James", surname: "Smith"] ) ```

In `<NAME>`'s process we will execute the following code ``` require Membrane.TelemetryMetrics Membrane.TelemetryMetrics.register([:shopping], [name: "Mary", surname: "Smith"]) Membrane.TelemetryMetrics.execute( [:shopping], %{shop: "Bookshop", payment: 25}, %{}, [name: "Mary", surname: "Smith"] ) Membrane.TelemetryMetrics.execute( [:shopping], %{shop: "Butcher", payment: 15}, %{}, [name: "Mary", surname: "Smith"] ) ```

In `<NAME>`'s process we will execute the following code ``` require Membrane.TelemetryMetrics Membrane.TelemetryMetrics.register( [:shopping], [name: "Patricia", surname: "Johnson"] ) Membrane.TelemetryMetrics.execute( [:shopping], %{shop: "Newsagent", payment: 5}, %{}, [name: "Patricia", surname: "Johnson"] ) ```

Then, calling ``` Membrane.TelemetryMetrics.Reporter.scrape(ShoppingReporter) ``` from anywhere on the same node as the processes above will return the following result: ``` %{ {:surname, "Johnson"} => %{ {:name, "Patricia"} => %{ "cash_spent" => 5, "last_visited_shop" => "Newsagent", "number_of_payments" => 1 } }, {:surname, "Smith"} => %{ {:name, "James"} => %{ "cash_spent" => 115, "last_visited_shop" => "Jeweller", "number_of_payments" => 3 }, {:name, "Mary"} => %{ "cash_spent" => 40, "last_visited_shop" => "Butcher", "number_of_payments" => 2 } } } ```

Then, for example, if `<NAME>`'s process exits, you will get a report like ``` %{ {:surname, "Johnson"} => %{ {:name, "Patricia"} => %{ "cash_spent" => 5, "last_visited_shop" => "Newsagent", "number_of_payments" => 1 } }, {:surname, "Smith"} => %{ {:name, "James"} => %{ "cash_spent" => 115, "last_visited_shop" => "Jeweller", "number_of_payments" => 3 } } } ```

If a process registers an event by calling ``` Membrane.TelemetryMetrics.register(event_name, label) ``` and exits afterwards, every metric that aggregated measurements from an event with `event_name` will drop the values held for that specific value of `label`, just as `<NAME>` disappeared from the report in the example above after her process ended.

[copyright-and-license](#copyright-and-license) Copyright and License --- Copyright 2022, [Software Mansion](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane_template_plugin) [![Software Mansion](https://logo.swmansion.com/logo?color=white&variant=desktop&width=200&tag=membrane-github)](https://swmansion.com/?utm_source=git&utm_medium=readme&utm_campaign=membrane_template_plugin) Licensed under the [Apache License, Version 2.0](license.html)
font-kit
rust
Rust
Crate font_kit === `font-kit` provides a common interface to the various system font libraries and provides services such as finding fonts on the system, performing nearest-font matching, and rasterizing glyphs. ### Synopsis ``` use font_kit::canvas::{Canvas, Format, RasterizationOptions}; use font_kit::family_name::FamilyName; use font_kit::hinting::HintingOptions; use font_kit::properties::Properties; use font_kit::source::SystemSource; use pathfinder_geometry::transform2d::Transform2F; use pathfinder_geometry::vector::{Vector2F, Vector2I}; let font = SystemSource::new().select_best_match(&[FamilyName::SansSerif], &Properties::new()) .unwrap() .load() .unwrap(); let glyph_id = font.glyph_for_char('A').unwrap(); let mut canvas = Canvas::new(Vector2I::splat(32), Format::A8); font.rasterize_glyph(&mut canvas, glyph_id, 32.0, Transform2F::from_translation(Vector2F::new(0.0, 32.0)), HintingOptions::None, RasterizationOptions::GrayscaleAa) .unwrap(); ``` ### Backends `font-kit` delegates to system libraries to perform tasks. It has two types of backends: a *source* and a *loader*. Sources are platform font databases; they allow lookup of installed fonts by name or attributes. Loaders are font loading libraries; they allow font files (TTF, OTF, etc.) to be loaded from a file on disk or from bytes in memory. Sources and loaders can be freely intermixed at runtime; fonts can be looked up via DirectWrite and rendered via FreeType, for example. Available loaders: * Core Text (macOS): The system font loader on macOS. Does not do hinting except when bilevel rendering is in use. * DirectWrite (Windows): The newer system framework for text rendering on Windows. Does vertical hinting but not full hinting. * FreeType (cross-platform): A full-featured font rendering framework. Available sources: * Core Text (macOS): The system font database on macOS. * DirectWrite (Windows): The newer API to query the system font database on Windows. * Fontconfig (cross-platform): A technically platform-neutral, but in practice Unix-specific, API to query and match fonts. * Filesystem (cross-platform): A simple source that reads fonts from a path on disk. This is the default on Android.
* Memory (cross-platform): A source that reads from a fixed set of fonts in memory. * Multi (cross-platform): A source that allows multiple sources to be queried at once. On Windows and macOS, the FreeType loader and the Fontconfig source are not built by default. To build them, use the `loader-freetype` and `source-fontconfig` Cargo features respectively. If you want them to be the default, instead use the `loader-freetype-default` and `source-fontconfig-default` Cargo features respectively. Beware that `source-fontconfig-default` is rarely what you want on those two platforms! ### Features `font-kit` is capable of doing the following: * Loading fonts from files or memory. * Determining whether files on disk or in memory represent fonts. * Interoperating with native font APIs. * Querying various metadata about fonts. * Doing simple glyph-to-character mapping. (For more complex use cases, a shaper is required; proper shaping is beyond the scope of `font-kit`.) * Reading unhinted or hinted vector outlines from glyphs. * Calculating glyph and font metrics. * Looking up glyph advances and origins. * Rasterizing glyphs using the native rasterizer, optionally using hinting. (Custom rasterizers, such as Pathfinder, can be used in conjuction with the outline API.) * Looking up all fonts on the system. * Searching for specific fonts by family or PostScript name. * Performing font matching according to the CSS Fonts Module Level 3 specification. ### License `font-kit` is licensed under the same terms as Rust itself. Modules --- canvasAn in-memory bitmap surface for glyph rasterization. errorVarious types of errors that `font-kit` can return. familyDefines a set of faces that vary in weight, width or slope. family_handleEncapsulates the information needed to locate and open the fonts in a family. family_nameA possible value for the `font-family` CSS property. file_typeThe type of a font file: either a single font or a TrueType/OpenType collection. fontA font face loaded into memory. handleEncapsulates the information needed to locate and open a font. hintingSpecifies how hinting (grid fitting) is to be performed (or not performed) for a glyph. loaderProvides a common interface to the platform-specific API that loads, parses, and rasterizes fonts. loadersThe different system services that can load and rasterize fonts. metricsVarious metrics that apply to the entire font. outlineBézier paths. propertiesProperties that specify which font in a family to use: e.g. style, weight, and stretchiness. sourceA database of installed fonts that can be queried. sourcesVarious databases of installed fonts that can be queried. Module font_kit::canvas === An in-memory bitmap surface for glyph rasterization. Structs --- CanvasAn in-memory bitmap surface for glyph rasterization. Enums --- FormatThe image format for the canvas. RasterizationOptionsThe antialiasing strategy that should be used when rasterizing glyphs. Module font_kit::error === Various types of errors that `font-kit` can return. Enums --- FontLoadingErrorReasons why a loader might fail to load a font. GlyphLoadingErrorReasons why a font might fail to load a glyph. SelectionErrorReasons why a source might fail to look up a font or fonts. Module font_kit::family === Defines a set of faces that vary in weight, width or slope. Structs --- FamilyDefines a set of faces that vary in weight, width or slope. Module font_kit::family_handle === Encapsulates the information needed to locate and open the fonts in a family. 
Structs --- FamilyHandleEncapsulates the information needed to locate and open the fonts in a family. Module font_kit::family_name === A possible value for the `font-family` CSS property. Enums --- FamilyNameA possible value for the `font-family` CSS property. Module font_kit::file_type === The type of a font file: either a single font or a TrueType/OpenType collection. Enums --- FileTypeThe type of a font file: either a single font or a TrueType/OpenType collection. Module font_kit::font === A font face loaded into memory. The Font type in this crate represents the default loader. Re-exports --- `pub use crate::loaders::default::Font;` Module font_kit::handle === Encapsulates the information needed to locate and open a font. This is either the path to the font or the raw in-memory font data. To open the font referenced by a handle, use a loader. Enums --- HandleEncapsulates the information needed to locate and open a font. Module font_kit::hinting === Specifies how hinting (grid fitting) is to be performed (or not performed) for a glyph. This affects both outlines and rasterization. Enums --- HintingOptionsSpecifies how hinting (grid fitting) is to be performed (or not performed) for a glyph. Module font_kit::loader === Provides a common interface to the platform-specific API that loads, parses, and rasterizes fonts. Structs --- FallbackFontA single font record for a fallback query result. FallbackResultThe result of a fallback query. Traits --- LoaderProvides a common interface to the platform-specific API that loads, parses, and rasterizes fonts. Module font_kit::loaders === The different system services that can load and rasterize fonts. Re-exports --- `pub use crate::loaders::freetype as default;`Modules --- freetypeA cross-platform loader that uses the FreeType library to load and rasterize fonts. Module font_kit::metrics === Various metrics that apply to the entire font. For OpenType fonts, these mostly come from the `OS/2` table. Structs --- MetricsVarious metrics that apply to the entire font. Module font_kit::outline === Bézier paths. Structs --- ContourA single curve or subpath within a glyph outline. OutlineA glyph vector outline or path. OutlineBuilderAccumulates Bézier path rendering commands into an `Outline` structure. PointFlagsFlags that specify what type of point the corresponding position represents. Traits --- OutlineSinkReceives Bézier path rendering commands. Module font_kit::properties === Properties that specify which font in a family to use: e.g. style, weight, and stretchiness. Much of the documentation in this modules comes from the CSS 3 Fonts specification: https://drafts.csswg.org/css-fonts-3/ Structs --- PropertiesProperties that specify which font in a family to use: e.g. style, weight, and stretchiness. StretchThe width of a font as an approximate fraction of the normal width. WeightThe degree of blackness or stroke thickness of a font. This value ranges from 100.0 to 900.0, with 400.0 as normal. Enums --- StyleAllows italic or oblique faces to be selected. Module font_kit::source === A database of installed fonts that can be queried. Re-exports --- `pub use crate::sources::fontconfig::FontconfigSource as SystemSource;`Traits --- SourceA database of installed fonts that can be queried. Module font_kit::sources === Various databases of installed fonts that can be queried. The system-specific sources (Core Text, DirectWrite, and Fontconfig) contain the fonts that are installed on the system. 
The remaining databases (`fs`, `mem`, and `multi`) allow `font-kit` to query fonts not installed on the system. Modules --- fontconfigA source that contains the fonts installed on the system, as reported by the Fontconfig library. fsA source that loads fonts from a directory or directories on disk. memA source that keeps fonts in memory. multiA source that encapsulates multiple sources and allows them to be queried as a group.
rstatscn
cran
R
Package ‘rstatscn’ October 14, 2022 Type Package Title R Interface for China National Data Version 1.1.3 Date 2019-04-25 Author <NAME> Maintainer <NAME> <<EMAIL>> Description R interface for China national data <http://data.stats.gov.cn/>; some convenient functions for accessing the national data are provided. Depends R (>= 3.2.2) Imports httr(>= 1.0.0), jsonlite(>= 0.9.19) URL http://www.bagualu.net/ BugReports http://www.bagualu.net/wordpress/rstatscn-the-r-interface-for-china-national-data Repository CRAN License Apache License 2.0 RoxygenNote 6.1.1 NeedsCompilation no Date/Publication 2019-04-25 06:20:02 UTC

R topics documented: checkHttpStatus, dataJson2df, genDfwds, milSec, statscnDbs, statscnQueryData, statscnQueryLastN, statscnQueryZb, statscnRegions, statscnRowNamePrefix

checkHttpStatus Private function for checking the HTTP status Description Private function for checking the HTTP status Usage checkHttpStatus(ret) Arguments ret the response obj returned by the httr package Value Returns nothing, but if it finds an error, it stops the script

dataJson2df Private function to convert the returned JSON data to a data frame Description Private function to convert the returned JSON data to a data frame Usage dataJson2df(rawObj, rowcode, colcode) Arguments rawObj the fromJSON output rowcode rowcode in the data frame colcode colcode in the data frame Value the constructed data frame

genDfwds Private function for constructing the query parameter for dfwds Description Private function for constructing the query parameter for dfwds Usage genDfwds(wdcode, valuecode) Arguments wdcode string value, one of c("zb","sj","reg") valuecode string value; the available values are as follows: zb: the valuecode can be obtained with the statscnQueryZb() function; sj: the valuecode can be "2014" for an nd db, "2014C" for a jd db; reg: the valuecode is the region code fetched by the statscnRegions(dbcode) function Value returns the query string for the http request

milSec Private helper returning a value in milliseconds Description Private helper returning a value in milliseconds Usage milSec() Value milliseconds

statscnDbs The available dbs Description the available dbs in the national db Usage statscnDbs() Value a data frame with 2 columns: one is the dbcode, the other is the db description Examples statscnDbs()

statscnQueryData Query data in the statscn db Description the main function for querying the statscn database; it retrieves the data from the specified db and organizes the data in a data frame. Usage statscnQueryData(zb = "A0201", dbcode = "hgnd", rowcode = "zb", colcode = "sj", moreWd = list(name = NA, value = NA)) Arguments zb the zb/category code to be queried dbcode the db code for querying rowcode rowcode in the returned data frame colcode colcode in the returned data frame moreWd more constraints on the data, where the name should be one of c("reg","sj"), which stand for region and sj/time. The valuecode for reg should be the region code queried by statscnRegions(); the valuecode for sj should be like '2014' for *nd, '2014C' for *jd, '201405' for *yd. Note that the moreWd name should be different from both rowcode and colcode Value the data frame you are querying Examples ## Not run: df=statscnQueryData('A0201',dbcode='hgnd') df=statscnQueryData('A0201',dbcode='fsnd',rowcode='zb',colcode='sj', moreWd=list(name='reg',value='110000')) ## End(Not run)

statscnQueryLastN Fetch the lastN data Description Fetch the lastN data for the latest query; this only affects the number of rows in the returned data.
This function cannot be used alone; statscnQueryData() has to be called before this function Usage statscnQueryLastN(n) Arguments n the number of rows to be fetched Value the last n rows of data in the latest query Examples ## Not run: df=statscnQueryData('A0201',dbcode='hgnd') df2=statscnQueryLastN(20) ## End(Not run)

statscnQueryZb The data categories Description the sub data categories for the zbid category; dbcode needs to be specified, where the dbcode can be fetched by the function statscnDbs(). In the returned data frame, the column 'isParent' shows whether each sub category is a leaf category or not Usage statscnQueryZb(zbid = "zb", dbcode = "hgnd") Arguments zbid the father zb/category id; the root id is 'zb' dbcode which db will be queried Value the data frame with the sub zbs/categories; if the given zbid is not a parent zb/category, a null list is returned Examples ## Not run: statscnQueryZb() statscnQueryZb('A01',dbcode="hgnd") ## End(Not run)

statscnRegions The regions in db Description the available regions in the specified db; it is generally used to query the province, city and country codes Usage statscnRegions(dbcode = "fsnd") Arguments dbcode the dbcode should be some province db (fs*), city db (cs*) or international db (gj*) Value the data frame with all the available region codes and names in the db Examples ## Not run: statscnRegions('fsnd') statscnRegions('csnd') statscnRegions('gjnd') ## End(Not run)

statscnRowNamePrefix statscnRowNamePrefix Description set the rowName prefix in the data frame Usage statscnRowNamePrefix(p = "nrow") Arguments p how to set the row name prefix. It is 'nrow' by default, and it is currently the only supported value; to unset the row name prefix, call this function with p=NULL Details in case you encounter the following error: Error in `row.names<-.data.frame`(`*tmp*`, value = value) : duplicate 'row.names' are not allowed you need to call this function Value no return value
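To see how these functions fit together, below is a minimal sketch of a typical query session (it talks to the live data.stats.gov.cn service, so an internet connection is required; the category codes are the ones used in the examples above):

```r
library(rstatscn)
statscnDbs()                                       # list the available databases
statscnQueryZb(dbcode = "hgnd")                    # browse the top-level categories
statscnQueryZb("A01", dbcode = "hgnd")             # drill down into category A01
df  <- statscnQueryData("A0201", dbcode = "hgnd")  # query the data for category A0201
df2 <- statscnQueryLastN(20)                       # refine the same query to the last 20 rows
```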
import
cran
R
Package ‘import’ September 24, 2023 Type Package Title An Import Mechanism for R Version 1.3.1 Description Alternative mechanism for importing objects from packages and R modules. The syntax allows for importing multiple objects with a single command in an expressive way. The import package bridges some of the gap between using library (or require) and direct (single-object) imports. Furthermore, the imported objects are not placed in the current environment. License MIT + file LICENSE ByteCompile TRUE URL https://github.com/rticulate/import BugReports https://github.com/rticulate/import/issues Suggests knitr, rmarkdown, magrittr, testthat Language en-US VignetteBuilder knitr RoxygenNote 7.2.3 Encoding UTF-8 NeedsCompilation no Author <NAME> [aut], <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-09-23 22:50:06 UTC

R topics documented: from, import

from Import Objects From a Package. Description The import::from and import::into functions provide an alternative way to import objects (e.g. functions) from packages. It is sometimes preferred over using library (or require), which will import all objects exported by the package. The benefit over obj <- pkg::obj is that the imported objects will (by default) be placed in a separate entry in the search path (which can be specified), rather than in the global/current environment. Also, it is a more succinct way of importing several objects. Note that the two functions are symmetric, and usage is a matter of preference and whether specifying the .into argument is desired. The function import::here imports into the current environment.

Usage from( .from, ..., .into = "imports", .library = .libPaths(), .directory = ".", .all = (length(.except) > 0), .except = character(), .chdir = TRUE, .character_only = FALSE, .S3 = FALSE ) here( .from, ..., .library = .libPaths()[1L], .directory = ".", .all = (length(.except) > 0), .except = character(), .chdir = TRUE, .character_only = FALSE, .S3 = FALSE ) into( .into, ..., .from, .library = .libPaths()[1L], .directory = ".", .all = (length(.except) > 0), .except = character(), .chdir = TRUE, .character_only = FALSE, .S3 = FALSE )

Arguments .from The package from which to import. ... Names or name-value pairs specifying objects to import. If arguments are named, then the imported object will have this new name. .into The environment into which the imported objects should be assigned. If the value is of mode character, it is treated as referring to a named environment on the search path. If it is of mode environment, the objects are assigned directly to that environment. Using .into=environment() causes imports to be made into the current environment; .into="" is an equivalent shorthand value. .library character specifying the library to use when importing from packages. Defaults to the current set of library paths (note that the default value was different in versions up to and including 1.3.0). .directory character specifying the directory to use when importing from modules. Defaults to the current working directory. If .from is a module specified using an absolute path (i.e. starting with /), this parameter is ignored. .all logical specifying whether all available objects in a package or module should be imported. It defaults to FALSE unless .except is being used to omit particular functions. .except character vector specifying any objects that should not be imported. Any values specified here override both values provided in ...
and objects included because of the .all parameter. .chdir logical specifying whether to change directories before sourcing a module (this parameter is ignored for libraries). .character_only A logical indicating whether .from and ... can be assumed to be character strings. (Note that this parameter does not apply to how the .into parameter is handled). .S3 [Experimental] A logical indicating whether an automatic detection and registration of S3 methods should be performed. The S3 methods are assumed to be in the standard form generic.class. Methods can also be registered manually instead using the .S3method(generic, class, method) call. This is an experimental feature. We think it should work well and you are encouraged to use it and report back, but the syntax and semantics may change in the future to improve the feature.

Details The function arguments can be quoted or unquoted as with e.g. library. In any case, the character representation is used when unquoted arguments are provided (and not the value of objects with matching names). The periods in the argument names .into and .from are there to avoid name clashes with package objects. However, while importing of hidden objects (those with names prefixed by a period) is supported, care should be taken not to conflict with the argument names. The double-colon syntax import::from allows for imports of exported objects (and lazy data) only. To import objects that are not exported, use triple-colon syntax, e.g. import:::from. The two ways of calling the import functions are analogous to the :: and ::: operators themselves.

Note that the import functions usually have the (intended) side-effect of altering the search path, as they (by default) import objects into the "imports" search path entry rather than the global environment. The import package is not meant to be loaded with library (and will output a message about this if attached), but rather it is named to make the function calls expressive without the need to load it before use, i.e. it is designed to be used explicitly with the :: syntax, e.g. import::from(pkg, x, y).

Value a reference to the environment containing the imported objects.

Packages vs. modules import can be used to import objects either from R packages or from R source files. If the .from parameter ends with '.R' or '.r', import will look for a source file to import from. A source file in this context is referred to as a module in the documentation.

Package Versions With import you can specify package version requirements. To do this, add a requirement in parentheses to the package name (which then needs to be quoted), e.g. import::from("parallel (>= 3.2.0)", ...). You can use the operators <, >, <=, >=, ==, !=. Whitespace in the specification is irrelevant.

See Also Helpful links: • https://import.rticulate.org • https://github.com/rticulate/import • https://github.com/rticulate/import/issues

Examples import::from(parallel, makeCluster, parLapply) import::into("imports:parallel", makeCluster, parLapply, .from = parallel)

import An Import Mechanism for R Description This is an alternative mechanism for importing objects from packages. The syntax allows for importing multiple objects from a package with a single command in an expressive way. The import package bridges some of the gap between using library (or require) and direct (single-object) imports.
Furthermore, the imported objects are not placed in the current environment (although that is possible), but in a named entry in the search path. Details This package is not intended for use with library. It is named to make calls like import::from(pkg, fun1, fun2) expressive. Using the import functions complements the standard use of library(pkg) (when most objects are needed and the context is clear) and obj <- pkg::obj (when only a single object is needed). Author(s) <NAME> See Also For usage instructions and examples, see from, into, or here. Helpful links: • https://import.rticulate.org • https://github.com/rticulate/import • https://github.com/rticulate/import/issues
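As a quick illustration of the version-requirement syntax and the here() variant described above, a minimal sketch (the module file name and helper in the last lines are hypothetical):

```r
# Version-constrained import into the default "imports" search-path entry
import::from("parallel (>= 3.2.0)", makeCluster, parLapply)

# Import into the current environment, renaming the object on the way in
import::here(parallel, par_lapply = parLapply)

# Importing from a module (an R source file) instead of a package;
# "helpers.R" and my_helper are hypothetical names
# import::from("helpers.R", my_helper)
```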
shinyTempSignal
cran
R
Package ‘shinyTempSignal’ October 14, 2022 Title Explore Temporal Signal of Molecular Phylogenies Version 0.0.3 Description Sequences sampled at different time points can be used to infer molecular phylogenies on natural time scales, but if a sequence records an inaccurate sampling time, one that is not the actual sampling time, this will affect the molecular phylogenetic analysis. This shiny application helps explore the temporal characteristics of evolutionary trees through linear regression analysis, with the ability to identify and remove incorrect labels. License GPL-3 Depends R (>= 3.3.0) Imports ape, Cairo, config (>= 0.3.1), DescTools, forecast, ggplot2, ggprism, ggpubr, ggtree, golem (>= 0.3.1), shiny (>= 1.6.0), shinydashboard, shinyjs, stringr, treeio Suggests attempt, conflicted, DT, glue, htmltools, processx, testthat (>= 3.0.0), thinkr Encoding UTF-8 RoxygenNote 7.1.2 Config/testthat/edition 3 NeedsCompilation no Author <NAME> [aut, cre, cph] (<https://orcid.org/0000-0002-6485-8781>), <NAME> [aut], <NAME> [ctb] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2022-07-06 06:30:02 UTC

R topics documented: dateNumeric, dateType1, dateType2, dateType3, getdivergence, MCC_FluA_H3_tree, meangroup, run_shinyTempSignal

dateNumeric Convert dates according to date format Description Convert dates according to date format Usage dateNumeric(date, format) Arguments date input data extracted from labels, character format input format of the date, character Value Returns a date of numeric type, numeric Examples dateNumeric(date="1999-12-07", format="yyyy-MM-dd")

dateType1 Method 1 for finding the date inside the label Description Method 1 for finding the date inside the label Usage dateType1(tree, order) Arguments tree A tree of sequences, phylo order Location of the date, character or numeric Value date, character Examples data("MCC_FluA_H3_tree") dateType1(MCC_FluA_H3_tree, "last")

dateType2 Method 2 for finding the date inside the label Description Method 2 for finding the date inside the label Usage dateType2(tree, order, prefix) Arguments tree A tree of sequences, phylo order Location of the date, character or numeric prefix prefix for dates, character Value date, character Examples data("MCC_FluA_H3_tree") dateType2(MCC_FluA_H3_tree, "last","/")

dateType3 Method 3 for finding the date inside the label Description Method 3 for finding the date inside the label Usage dateType3(tree, pattern) Arguments tree A tree of sequences, phylo pattern Regular-expression pattern used to match the date, character Value date, character Examples data("MCC_FluA_H3_tree") dateType3(MCC_FluA_H3_tree, "(?<=/)\\d+$")

getdivergence Calculating the divergence of sequences Description Calculating the divergence of sequences Usage getdivergence(tree, date, method) Arguments tree A tree of sequences, phylo date dates of numeric type, numeric method one of "correlation", "rms", or "rsquared", character Value the divergence of sequences, data.frame Examples data("MCC_FluA_H3_tree") date <- dateType3(MCC_FluA_H3_tree, "(?<=/)\\d+$") date <- dateNumeric(date, "yyyy") getdivergence(MCC_FluA_H3_tree, date, "rms") MCC_FluA_H3_tree Example data: a tree of 76 H3 hemagglutinin gene sequences of a lineage containing swine and human influenza A viruses Description This example data was reported in Liang et al.
2014 Format a tree with 76 sequences Value a tree, phylo Examples data(MCC_FluA_H3_tree) meangroup Combining data from the same years Description Combining data from the same years Usage meangroup(d) Arguments d a data frame with "time" in the column name Value The processed data frame, data.frame Examples x <- c(1999, 2002 ,2005, 2000,2004 ,2004, 1999) y <- c(1, 1.5, 2, 3 ,4 ,5 ,6) d <- data.frame(time=x, score=y) meangroup(d) run_shinyTempSignal Run the Shiny Application Description Run the Shiny Application Usage run_shinyTempSignal( onStart = NULL, options = list(), enableBookmarking = NULL, uiPattern = "/", ... ) Arguments onStart A function that will be called before the app is actually run. This is only needed for shinyAppObj, since in the shinyAppDir case, a global.R file can be used for this purpose. options Named options that should be passed to the runApp call (these can be any of the following: "port", "launch.browser", "host", "quiet", "display.mode" and "test.mode"). You can also specify width and height parameters which pro- vide a hint to the embedding environment about the ideal height/width for the app. enableBookmarking Can be one of "url", "server", or "disable". The default value, NULL, will re- spect the setting from any previous calls to enableBookmarking(). See enableBookmarking() for more information on bookmarking your app. uiPattern A regular expression that will be applied to each GET request to determine whether the ui should be used to handle the request. Note that the entire request path must match the regular expression in order for the match to be considered suc- cessful. ... arguments to pass to golem_opts. See ‘?golem::get_golem_options‘ for more details. Value Shiny application object Examples if (interactive()) {run_shinyTempSignal()}
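For readers who prefer to script the analysis rather than use the Shiny app, the sketch below chains the functions documented above; the final regression line is commented out because the column names of the divergence data frame are an assumption, so inspect the object first:

```r
library(shinyTempSignal)
data("MCC_FluA_H3_tree")
tip_dates <- dateType3(MCC_FluA_H3_tree, "(?<=/)\\d+$")   # extract dates from the tip labels
num_dates <- dateNumeric(tip_dates, "yyyy")               # convert them to numeric years
div <- getdivergence(MCC_FluA_H3_tree, num_dates, "rms")  # divergence of the sequences (data.frame)
head(div)                                                 # check the column names first
# fit <- lm(divergence ~ time, data = div)                # hypothetical column names
```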
@types/quotesy
npm
JavaScript
[Installation](#installation) === > `npm install --save @types/quotesy` [Summary](#summary) === This package contains type definitions for quotesy (<https://github.com/dwyl/quotes#readme>). [Details](#details) === Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/quotesy>. [index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/quotesy/index.d.ts) --- ``` /** * Returns an Array of Quote Objects */ export function parse_json(): Quote[]; /** * Returns a random quote from the list */ export function random(): Quote; /** * Returns a random quote for a specific tag * @param tag the tag to search for */ export function random_by_tag(tag: string): Quote; /** A single quote from the database */ export interface Quote { /** The author of the quote */ author: string; /** The text of the quote */ text: string; /** Comma-separated list of words associated with the quote */ tags?: string | undefined; /** A URL where the origin of the quote can be verified */ source?: string | undefined; } ``` ### [Additional Details](#additional-details) * Last updated: Wed, 18 Oct 2023 11:45:05 GMT * Dependencies: none [Credits](#credits) === These definitions were written by [<NAME>](https://github.com/natesilva). Readme --- ### Keywords none
matman
cran
R
Package ‘matman’ October 13, 2022 Type Package Title Material Management Version 1.1.3 Date 2021-12-13 Maintainer <NAME> <<EMAIL>> Description A set of functions, classes and methods for performing ABC and ABC/XYZ analy- ses, identifying overperforming, underperforming and constantly performing items, and plot- ting, analyzing as well as predicting the temporal development of items. License GPL-3 Depends R (>= 3.5.0), shiny Imports methods, graphics, stats, utils, data.table, dplyr, tidyr, tidyselect, plotly, DT, shinydashboard, shinyWidgets, forecast, parsedate, lubridate Encoding UTF-8 LazyData true LazyLoad true RoxygenNote 7.1.2 NeedsCompilation no Author <NAME> [cre, aut], <NAME> [aut], <NAME> [aut] Repository CRAN Date/Publication 2021-12-13 09:30:02 UTC R topics documented: matman-packag... 2 ABCXYZComparison-clas... 3 ABCXYZData-clas... 4 aggregateDat... 5 Amoun... 6 compar... 7 compare,ABCXYZData,ABCXYZData-metho... 8 computeABCXYZAnalysi... 9 computeConstant... 11 computeOverperforme... 12 computeUnderperforme... 14 detectTimeVariation... 15 expandDat... 16 Forecast-clas... 18 matmanDem... 19 plot,ABCXYZData,ANY-metho... 19 plotValueSerie... 21 predictValu... 22 show,ABCXYZComparison-metho... 24 show,ABCXYZData-metho... 25 show,Forecast-metho... 26 Stock... 27 summar... 27 summary,ABCXYZComparison-metho... 28 summary,ABCXYZData-metho... 29 summary,Forecast-metho... 30 matman-package Material Management Description A set of functions, classes and methods for performing ABC and ABC/XYZ analyses, identifying overperforming, underperforming and constantly performing items, and plotting, analyzing as well as predicting the temporal development of items. Details Package: matman Type: Package Version: 1.1.4 Date: 2021-11-15 License: GPL-3 Depends: R (>= 3.5.0), stats Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> ABCXYZComparison-class Class ABCXYZComparison Description This S4 class represents the result of a comparison of two ABC/XYZ analysis results. Slots data (data.frame) The comparison result as data.frame. type (character) The type of the analysis that has been performed. This is either ’abc’ or ’abcxyz’. valueDiff (numeric) The difference between the value of an item in ABC/XYZ analysis A and the value of the same item in ABC/XYZ analysis B that is required to consider the item in the comparison. xyzCoefficientDiff (numeric) The difference between the xyz coefficient of an item in ABC/XYZ analysis A and the xyz coefficient of the same item in ABC/XYZ analysis B that is required to consider the item in the comparison. unequalABC (logical) If TRUE only items are returned, where the ABC-Classes are different. If FALSE only items are returned, where the ABC-Classes are equal. If NA, no further restriction takes place based on the column ABC. unequalXYZ (logical) If TRUE only items are returned, where the XYZ-Classes are different. If FALSE only items are returned, where the XYZ-Classes are equal. If NA, no further restriction takes place based on the column XYZ. Objects from the Class Objects can be created by calling the function compare function. This S4 class represents the result of a comparison of two ABC/XYZ analysis results. 
Author(s) Le<NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> Examples data("Amount") data1 = Amount[sample(1:nrow(Amount), 1000),] data2 = Amount[sample(1:nrow(Amount), 1000),] abcxyzData1 = computeABCXYZAnalysis(data1, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) abcxyzData2 = computeABCXYZAnalysis(data2, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) comparison = compare(abcxyzData1, abcxyzData2) comparison ABCXYZData-class Class ABCXYZData Description This S4 class represents the result of an ABC/XYZ analysis. Slots data (data.frame) The result table of an ABC/XYZ analysis. type (character) The type of the analysis that has been performed. This is either ’abc’ or ’abcxyz’. value (character) The name of the value column in the result table. item (character) Vector of the names of the item columns in the result table. Objects from the Class Objects can be created by calling the function computeABCXYZ. This S4 class represents the result of an ABC/XYZ analysis. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> Examples data("Amount") abcResult = computeABCXYZAnalysis(data = Amount, value = "value", item = "item", timestamp = "date") abcResult aggregateData Performs a temporal aggregation of a data frame Description Aggregates a data frame based on a timestamp column to days, weeks, months, quarters, years or total. Usage aggregateData( data, value = NULL, item, timestamp, temporalAggregation = c("day", "week", "month", "quarter", "year", "total"), fiscal = 1, aggregationFun = sum ) Arguments data Data frame or matrix on which the ABC analysis is performed. value Name(s) of the column variable(s) that contains the values for the ABC and XYZ analysis. item Names of the columns including the item names or identifiers (e.g., product name, EAN). timestamp Name of the column including the timestamp. This column should be in POSIX or Date-format. temporalAggregation Temporal aggregation mode for the XYZ-analysis. Possible modes are ’day’, ’week’, ’month’, ’quarter’, ’year’, and ’total’. Total only aggregates by item whereas the other modes aggregate by item an temporal unit. fiscal consider the start of the business year. Default is set to 1 (January) aggregationFun Function for aggregating the value column. Default is sum. Value Returns a data frame with the aggregated data with the columns of item, timestamp and sum, which is the sum of the value column. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also expandData Examples data('Amount') aggregatedData = aggregateData(data = Amount, value = "value", item = "item", timestamp = "date", temporalAggregation = "quarter") Amount Amount data Description A dataset containing 23 items and their amounts over 3 years of data. Usage Amount Format A data frame with 10,000 rows and 9 variables: date Date in format yyyy-mm-dd week Date in format yyyy-’W’ww month Date in format yyyy-mm quarter Date in format yyyy-’Q’q year Date in format yyyy item Item ID itemgroup Item group ID amount Item amount value Item value Source anonymized real data compare Compares two S4 objects Description Compares two S4 objects. Usage compare(object1, object2, ...) Arguments object1 First S4 object. object2 Second S4 object. ... Further comparison parameters. Value Comparison result. 
Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also compare Examples data("Amount") data1 = Amount[sample(1:nrow(Amount), 1000),] data2 = Amount[sample(1:nrow(Amount), 1000),] abcxyzData1 = computeABCXYZAnalysis(data1, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) abcxyzData2 = computeABCXYZAnalysis(data2, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) comparison = compare(abcxyzData1, abcxyzData2) compare,ABCXYZData,ABCXYZData-method Compares the results of two ABC/XYZ analyses Description Compares the class assignments of two ABC- or two ABC/XYZ analyses. Usage ## S4 method for signature 'ABCXYZData,ABCXYZData' compare( object1, object2, valueDiff = NA, xyzCoefficientDiff = NA, unequalABC = NA, unequalXYZ = NA ) Arguments object1 Object of class ABCXYZData. object2 Object of class ABCXYZData. valueDiff Only items with a difference of the column value larger than valueDiff be- tween the first and second ABC-XYZ-Analysis are returned. In the comparison data.frame a new column is added for the difference in the value columns. xyzCoefficientDiff Only items with a difference of the column xyzCoefficient larger than the xyz- CoefficientDiff between the first and second ABC-XYZ-Analysis are returned. In the comparison data.frame a new column is added for the difference in the xyzCoefficient columns. unequalABC If TRUE only items are returned, where the ABC-Classes are different. If FALSE only items are returned, where the ABC-Classes are equal. If NA, no further restriction takes place based on the column ABC. unequalXYZ If TRUE only items are returned, where the XYZ-Classes are different. If FALSE only items are returned, where the XYZ-Classes are equal. If NA, no further restriction takes place based on the column XYZ. Value An ABCYXZComparison object. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also ABCXYZComparison Examples data("Amount") data1 = Amount[sample(1:nrow(Amount), 1000),] data2 = Amount[sample(1:nrow(Amount), 1000),] abcxyzData1 = computeABCXYZAnalysis(data1, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) abcxyzData2 = computeABCXYZAnalysis(data2, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) comparison = compare(abcxyzData1, abcxyzData2) computeABCXYZAnalysis Performs an ABC/XYZ analysis Description Divides a given data frame into 3 classes, A, B, C, according to the value of one column (e.g., revenue). Usage computeABCXYZAnalysis( data, value, item, timestamp, temporalAggregation = c("day", "week", "month", "quarter", "year"), AB = 80, BC = 95, XY = NA, YZ = NA, ignoreZeros = FALSE ) Arguments data Data frame or matrix on which the ABC analysis is performed. value Name of the column variable that contains the value for the ABCXYZ analysis. item Names of the columns including the item names or identifiers (e.g., product name, EAN). timestamp Name of the column including the timestamp. This column should be in POSIX or date-format. temporalAggregation Temporal aggregation for the XYZ-analysis (i.e., "day", "week", "month", "quar- ter", "year"). AB Threshold (in percent) between category A and B. BC Threshold (in percent) between category B and C. XY Threshold (in percent) between category X and Y. YZ Threshold (in percent) between category Y and Z. ignoreZeros Whether zero values should be ignored in XYZ-analysis. 
Value Returns an ABCXYZData object. Only positive values are displayed Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also ABCXYZData summary Examples # ABC Analysis data("Amount") abcResult = computeABCXYZAnalysis(data = Amount, value = "value", item = "item", timestamp = "date") # ABC/XYZ Analysis data("Amount") abcxyzResult = computeABCXYZAnalysis(data = Amount, value = "value", item = "item", timestamp = "date", temporalAggregation = "week", XY = 0.3, YZ = 0.5) computeConstants Select constant items Description Selects items with a constant value for a specified time period. Usage computeConstants( data, value, group, timestamp, timestampFormat = c("day", "week", "month", "quater", "year"), currentTime, thresholdTime = 7, use_latest = FALSE ) Arguments data Dataframe containing item stock data. value Name of the column variable containing the stock values. group Name(s) of the column(s) that are used to group stock data. These columns are usually the item ID or item name to group stock data by items. timestamp Name of the column including the timestamp. This column should be in Date, POSIX , YY-mm, YYYY-’W’ww, YYYY-mm, YYYY-’Q’q or YYYY format. timestampFormat Declares in which format the timestamp comes in (i.e., "day", "week", "month", "quarter", "year") currentTime Qualifying date for the value variable. Date must exist in data and have the same format as timestamp-variable. thresholdTime Time for which the value shouldn’t exceed the threshold value. Number declares the time in the format of timestampFormat. use_latest boolean value. If TRUE data will expand and dates with noexisting values will be filled up with the latest known values. Value Returns a data frame listing all constant items, the date since when the stock is constant and the value of the stock since this time. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also computeUnderperformer computeOverperformer Examples data("Stocks") constants = computeConstants(data=Stocks, value = "stock", group = "item", timestamp = "date", timestampFormat = "day", currentTime = "2019-07-27", thresholdTime = 7, use_latest = FALSE) computeOverperformer Select overperforming items Description Selects items with a value higher than a given threshold for a specified time period. Usage computeOverperformer( data, value, group, timestamp, timestampFormat = c("day", "week", "month", "quater", "year"), currentTime, thresholdValue = 0, thresholdTime = 90, use_latest = FALSE ) Arguments data Dataframe containing item stock data. value Name of the column variable containing the stock values. group Name(s) of the column(s) that are used to group stock data. These columns are usually the item ID or item name to group stock data by items. timestamp Name of the column including the timestamp. This column should be in Date, POSIX , YY-mm, YYYY-’W’ww, YYYY-mm, YYYY-’Q’q or YYYY format. timestampFormat Declares in which format the timestamp comes in (i.e., "day", "week", "month", "quarter", "year") currentTime Qualifying date for the value variable. Date must exist in data and have the same format as timestamp-variable. thresholdValue Name of the colum variable containing the items’ stock threshold value or the threshold value used in this analysis for all items. thresholdTime Time for which the value shouldn’t exceed the threshold value. Number declares the time in the format of timestampFormat. use_latest boolean value. 
If TRUE data will expand and dates with noexisting values will be filled up with the latest known values. Value Returns a data frame listing all overperforming items, the date their stock was the last time under the threshold (lastunder), the duration in days since the stock is over the threshold (toolowindays), the average difference between the stock and the threshold (meandiff) and the count of switched between over- and underperformance (moves). Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also computeUnderperformer computeConstants Examples data("Stocks") overperformer = computeOverperformer(data = Stocks, value = "stock", group = "item", timestamp = "date", timestampFormat = "day", currentTime = "2019-07-27", thresholdValue = "reorderlevel", thresholdTime = 0, use_latest = FALSE) computeUnderperformer Select underperforming items Description Selects items with a value lower than a given threshold for a specified time period. Usage computeUnderperformer( data, value, group, timestamp, timestampFormat = c("day", "week", "month", "quater", "year"), currentTime, thresholdValue = 0, thresholdTime = 90, use_latest = FALSE ) Arguments data Dataframe containing item stock data. value Name of the column variable containing the stock values. group Name(s) of the column(s) that are used to group stock data. These columns are usually the item ID or item name to group stock data by items. timestamp Name of the column including the timestamp. This column should be in Date, POSIX , YY-mm, YYYY-’W’ww, YYYY-mm, YYYY-’Q’q or YYYY format. timestampFormat Declares in which format the timestamp comes in (i.e., "day", "week", "month", "quarter", "year") currentTime Qualifying date for the value variable. Date must exist in data and have the same format as timestamp-variable. thresholdValue Name of the colum variable containing the items’ stock threshold value or the threshold value used in this analysis for all items. thresholdTime Time for which the value shouldn’t exceed the threshold value. Number declares the time in the format of timestampFormat use_latest boolean value. If TRUE data will expand and dates with noexisting values will be filled up with the latest known values Value Returns a data frame listing all underperforming items, the date their stock was the last time over the threshold (lastover), the duration in days since the stock is under the threshold (toolowindays), the average difference between the stock and the threshold (meandiff) and the count of switched between over- and underperformance (moves). Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also computeOverperformer computeConstants Examples data("Stocks") underperformer = computeUnderperformer(data=Stocks, value = "stock", group = "item", timestamp = "date", timestampFormat = "day", currentTime = "2019-07-27", thresholdValue = "minstock", thresholdTime = 90, use_latest = FALSE) detectTimeVariations Detects items whose value (stock, demand, etc.) has changed over time Description Detects items whose value (stock, demand, etc.) has changed over time in contrast to other items. This analysis is based on the Macnaughton-Smith et al. clustering algorithm. Usage detectTimeVariations( data, value, item, timestamp, temporalAggregation = c("day", "week", "month", "quarter", "year"), aggregationFun = sum, preProcess = NA, recentTimePeriods = 5 ) Arguments data Data frame that will be expanded. value Name of the column variable that contains the value for the ABC and XYZ analysis. 
item Name of the column including the item names or identifiers (e.g., product name, EAN) timestamp Name of the column including the timestamp. This column should be in POSIX or Date-format. temporalAggregation Temporal aggregation mode (i.e., "day", "week", "month", "quarter", "year"). aggregationFun Function for aggregating the value column. Default is sum. preProcess A string vector that defines a pre-processing of the aggregated data before clus- tering. Available pre-processing methods are "center", "scale", "standardize", and "normalize". Default is NA (no pre-processing). recentTimePeriods Integer indicating the number of time periods that are used to define the recent item values. Default is 5. Value Returns a data frame showing to which cluster each item belongs based on all value and based on the recent values as well as whether the item has switched the cluster. References <NAME>., <NAME>., <NAME>., <NAME>. (1964) "Dissimilarity Analysis: a new Technique of Hierarchical Sub-division", Nature, 202, 1034–1035. Examples data("Amount") timeVariations = detectTimeVariations(data = Amount, value = "amount", item = "item", timestamp = "date", temporalAggregation = "week") expandData Expands a temporal data frame Description Expands a temporal data frame and fills values for missing dates. Usage expandData( data, expand, expandTo = c("all", "event"), valueColumns, latest_values = F, valueLevels = NA, timestamp, timestampFormat = c("day", "week", "month", "quarter", "year"), keepData = T ) Arguments data Data frame that will be expanded. expand Name of the variables that will be expanded. expandTo Defines whether values for the variables to be expanded will be filled for all dates or only those dates included in the data. valueColumns Name of the columns that are filled with specific values. latest_values If True missing values are filled with the latest known value until the next known value comes in. valueLevels Specific values that are used to fill the value columns. If latest_values = TRUE only values with no known values in the past of this values are specified with this specific values. timestamp Name of the column including the timestamp. This column should be in Date , YY-mm, YYYY-’W’ww, YYYY-mm, YYYY-’Q’q or YYYY format. timestampFormat Declares in which format the timestamp comes in (i.e., "day", "week", "month", "quarter", "year"). keepData Defines whether variables that will not be expanded should be kept. Value Returns the expanded data frame. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also aggregateData Examples data("Amount") expandedItems = expandData(Amount, expand = c("item", "itemgroup"), expandTo = "all", valueColumns = c("amount", "value"), latest_values = TRUE, valueLevels = c(0, 0), timestamp = "date", timestampFormat = "day") Forecast-class Class Forecast Description This S4 class represents the result of forecast using function predictValue. Slots data (data.frame) Data frame including the predicted data and optionally the training data. models (list) List of fitted ARIMA models. value (character) Name of the value column. item (character) Name of the item column. items (character) IDs or Names of the items. Objects from the Class Objects can be created by calling the function predictValue. This S4 class represents the result of a forecast. 
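Because Forecast is an ordinary S4 class, the slots listed above can be inspected with the standard @ operator once a prediction has been created (an illustrative sketch reusing the example below):
data("Amount")
prediction = predictValue(data = Amount, value = "amount", item = "item", timestamp = "date",
    temporalAggregation = "week", timeUnitsAhead = 3)
head(prediction@data)        # predicted values (plus the training data if keepPreviousData = TRUE)
length(prediction@models)    # one fitted ARIMA model per item
prediction@items             # IDs of the items covered by the forecast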
Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> Examples data("Amount") prediction = predictValue(data = Amount, value = "amount", item = "item", timestamp = "date", temporalAggregation = "week", timeUnitsAhead = 3) prediction matmanDemo Launches a demo app Description Launches a shiny app that demonstrates how to use the functions provides by package matman. Usage matmanDemo() Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> Examples ## Not run: matmanDemo() plot,ABCXYZData,ANY-method Plots the result of an ABC/XYZ analysis Description Plots a graph that shows what percentage of items is responsible for what amount of value. Usage ## S4 method for signature 'ABCXYZData,ANY' plot( x, plot_engine = c("graphics", "plotly"), title = "", xlab = "", ylab = "", top5lab = NA, color = list(itemColor = "blue", top5Color = "black", aColor = "green", bColor = "orange", cColor = "red"), item = NA, ... ) Arguments x Object of class ABCXYZData. plot_engine Name of the plot engine ("graphics", "plotly") title Plot title (e.g. ’ABC-Analysis’). xlab Label of x-axis (e.g. ’Percentage of Items’). ylab Label of y-axis (e.g. ’Percentage of cumulative Value’). top5lab Title of the rank of the top 5 items (e.g. ’Items with the highest Value’). color List of plot colors (i.e., itemColor, top5Color, aColor, bColor, cColor). Default is list(itemColor = "blue", top5Color = "black", aColor = "green", bColor = "orange", cColor = "red"). item Name of a single column with an identifier, that is displayed in the top-5- ranking. Used if the ABCXYZData object has multiple item columns. If NA the first item column is displayed. ... Further optional parameters for function graphics::plot or function plotly::plot_ly. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also computeABCXYZAnalysis ABCXYZData Examples data("Amount") abcResult = computeABCXYZAnalysis(data = Amount, value = "value", item = "item", timestamp = "date") plot(abcResult, plot_engine = "graphics", title = "ABC Analysis", xlab = "Items", ylab = "Demand") plotValueSeries Plots the development of the values Description Plots a bar chart that shows the sum of the value column for a certain time interval. Usage plotValueSeries( data, item, item_id, value, timestamp, temporalAggregation = c("day", "week", "month", "quarter", "year"), expand = TRUE, withTrendLine = TRUE, windowLength = 5, trendLineType = "s" ) Arguments data Data frame or matrix on which the ABC analysis is performed. item Name of the column including the item name or identifier (e.g., product name, EAN). item_id Name of the item that will be displayed. value Name of the column variable that contains the values. timestamp Name of the column including the timestamp. This column should be in POSIX or date-format. temporalAggregation Temporal aggregation for the XYZ-analysis (i.e., "day", "week", "month", "quar- ter", "year"). expand Indicator if the data should be expanded with time intervals that have no data. withTrendLine Indicator if a trend line should be displayed in the bar chart. windowLength Backwards window length. trendLineType If "s" the simple and if "w" the weighted moving average is calculated. Value A plotly bar chart, that shows the development of the value column. 
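The two trend-line variants differ only in how the last windowLength values are averaged: "s" uses an unweighted mean of the most recent values, while "w" uses a weighted mean (the weighting scheme is matman's own; the sketch below uses linear weights purely for illustration):
x = c(5, 7, 6, 9, 12)                    # aggregated values of one item
k = 3                                    # corresponds to windowLength
mean(tail(x, k))                         # simple moving average, trendLineType = "s"
weighted.mean(tail(x, k), w = 1:k)       # one possible weighted average, trendLineType = "w"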
Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> Examples data("Amount") plotValueSeries(Amount, item = "item", item_id = "45186", value = "amount", timestamp = "date", temporalAggregation = "week", withTrendLine = TRUE, windowLength = 10, trendLineType = "w") predictValue Predicts the value for items Description Predicts the value for items based on previous values. Previous values can be aggregated to value per day, week, month, quarter or year. An ARIMA model is estimated for each item based on the function forecast:auto.arima. The best model is selected and used for prediction. Note that only models without drift term will be considered in order to ensure consistent predictions. Usage predictValue( data, value, item, timestamp, temporalAggregation = c("day", "week", "month", "quarter", "year"), aggregationFun = sum, timeUnitsAhead = 1, digits = 3, expand = F, keepPreviousData = F, level = 0.95, ... ) Arguments data Data frame including previous values. value Name of the column representing the item value. item Name of the column representing the item ID or the item name. timestamp Name of the column including the timestamp. This column should be in POSIX or date-format. temporalAggregation Temporal aggregation mode (i.e., "day", "week", "month", "quarter", "year"). aggregationFun Function for aggregating the value column. Default is sum. timeUnitsAhead Integer indicating the number of time units (i.e., days, weeks, months, quarters or years) the should be predicted. digits Integer indicating the number of significant digits used for the predicted values. expand Logical indicating whether the data will be expanded after they are aggregated. Default is not (FALSE). keepPreviousData Logical indicating whether the data from the given data frame will be added to the result or not. Default is not (FALSE). level Numeric value representing the confidence level for the predictions. The default is 0.95 (i.e. lower level = 0.025 and upper level = 0.975). ... Further arguments for function forecast::auto.arima. Value Returns a Forecast object. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also auto.arima Forecast Examples # Simple Example data("Amount") prediction = predictValue(data = Amount, value = "amount", item = "item", timestamp = "date", temporalAggregation = "week", timeUnitsAhead = 3) prediction # More Sophisticated Example data("Amount") prediction = predictValue(data = Amount, value = "amount", item = "item", timestamp = "date", temporalAggregation = "week", aggregationFun = mean, timeUnitsAhead = 5, digits = 4, keepPreviousData = TRUE, level = 0.9, trace = TRUE) prediction show,ABCXYZComparison-method Shows an ABCXYZComparison object Description Shows an ABCXYZComparison object as a table consisting of the absolute and relative amount of each item, the cumulative relative amount and the ABC-class for both ABCXYZData objects. It furthermore shows the ABC comparison of the two objects. If XY and YZ parameters have been specified for computing the ABCXYZData object, the table also includes a column for the XYZ coefficient, the XYZ-class, the ABC/XYZ-class and the XYZ comparison. 
Usage ## S4 method for signature 'ABCXYZComparison' show(object) Arguments object The ABCXYZComparison object Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also ABCXYZComparison compare Examples data("Amount") data1 = Amount[sample(1:nrow(Amount), 1000),] data2 = Amount[sample(1:nrow(Amount), 1000),] abcxyzData1 = computeABCXYZAnalysis(data1, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) abcxyzData2 = computeABCXYZAnalysis(data2, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) comparison = compare(abcxyzData1, abcxyzData2) comparison show,ABCXYZData-method Shows an ABCXYZData object Description Shows the ABCXYZData object as a table consisting of the absolute and relative amount of each item, the cumulative relative amount and the ABC-class. If XY and YZ parameters have been specified for computing the ABCXYZData object, the table also includes a column for the XYZ coefficient, the XYZ- class and the ABC/XYZ-class. Usage ## S4 method for signature 'ABCXYZData' show(object) Arguments object The ABCXYZData object Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also ABCXYZData computeABCXYZAnalysis Examples data("Amount") abcResult = computeABCXYZAnalysis(data = Amount, value = "value", item = "item", timestamp = "date") abcResult show,Forecast-method Shows a Forecast object Description Shows the predicted data of a Forecast object. If the Forecast object was created using keepPrevi- ousData = TRUE, also the training data are shown Usage ## S4 method for signature 'Forecast' show(object) Arguments object The Forecast object Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also Forecast Examples data("Amount") prediction = predictValue(data = Amount, value = "amount", item = "item", timestamp = "date", temporalAggregation = "week", timeUnitsAhead = 3) prediction Stocks Stock data Description A dataset containing 10 items and their stocks over 3 years of data. Usage Stocks Format A data frame with 1,610 rows and 5 variables: date Date in format yyyy-mm-dd item Item ID stock Item stock value minstock Minimum stock per item reorderlevel Stock threshold for triggering item reorders Source anonymized real data summary Summarizes an S4 object Description Summarizes an S4 object. Usage summary(object, ...) Arguments object S4 object. ... Optional parameters. Value Summary. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also summary summary summary Examples data("Amount") abcResult = computeABCXYZAnalysis(data = Amount, value = "value", item = "item", timestamp = "date") summary(abcResult) summary,ABCXYZComparison-method Prints the summary of the comparison of two ABC/XYZ analyses Description Summarizes the differences between two ABCXYZData objects. Usage ## S4 method for signature 'ABCXYZComparison' summary(object, withMissing = FALSE) Arguments object Object of class ABCXYZComparison. withMissing Logical indicating whether missing categories will be shown. Default is FALSE. Value A contingency table showing the differences. 
Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also compare ABCXYZComparison Examples data("Amount") data1 = Amount[sample(1:nrow(Amount), 1000),] data2 = Amount[sample(1:nrow(Amount), 1000),] abcxyzData1 = computeABCXYZAnalysis(data1, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) abcxyzData2 = computeABCXYZAnalysis(data2, value = "value", item = "item", timestamp = "date", temporalAggregation = "day", XY = 0.5, YZ = 1) comparison = compare(abcxyzData1, abcxyzData2) summary(comparison) summary,ABCXYZData-method Prints the result summary of an ABC/XYZ analysis Description Summarizes the items count and value sum grouped by the different ABC- or ABC/XYZ-Classes. Usage ## S4 method for signature 'ABCXYZData' summary(object, withMissing = FALSE) Arguments object Object of class ABCXYZData. withMissing Logical indicating whether missing categories will be shown. Default is FALSE. Value A data.table with the summarized results. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also computeABCXYZAnalysis ABCXYZData Examples # ABC Analysis data("Amount") abcResult = computeABCXYZAnalysis(data = Amount, value = "value", item = "item", timestamp = "date") summary(abcResult) # ABC/XYZ Analysis data("Amount") abcxyzResult = computeABCXYZAnalysis(data = Amount, value = "value", item = "item", timestamp = "date", temporalAggregation = "week", XY = 0.3, YZ = 0.5) summary(abcxyzResult) summary,Forecast-method Prints the summary of a Forecast object Description Summarizes the fitted models estimated for predicting item values (e.g., demand, stock). Usage ## S4 method for signature 'Forecast' summary(object) Arguments object Object of class Forecast Value A data frame showing a summary of fitted models. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> <NAME> <<EMAIL>> See Also predictValue Forecast summary,Forecast-method 31 Examples data("Amount") prediction = predictValue(data = Amount, value = "amount", item = "item", timestamp = "date", temporalAggregation = "week", timeUnitsAhead = 3) summary(prediction)
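Since the returned summary is a plain data frame, it can be stored and processed like any other data frame (illustrative continuation of the example above):
modelSummary = summary(prediction)
str(modelSummary)    # inspect the columns describing the fitted ARIMA models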
Sanctuary v3.1.0 ================ Refuge from unsafe JavaScript * [Overview](#section:overview) * [Sponsors](#section:sponsors) * [Folktale](#section:folktale) * [Ramda](#section:ramda) + [Totality](#section:totality) + [Information preservation](#section:information-preservation) + [Invariants](#section:invariants) + [Currying](#section:currying) + [Variadic functions](#section:variadic-functions) + [Implicit context](#section:implicit-context) + [Transducers](#section:transducers) + [Modularity](#section:modularity) * [Types](#section:types) * [Type checking](#section:type-checking) * [Installation](#section:installation) * [API](#section:api) Overview -------- Sanctuary is a JavaScript functional programming library inspired by [Haskell](https://www.haskell.org/) and [PureScript](http://www.purescript.org/). It's stricter than [Ramda](https://ramdajs.com/), and provides a similar suite of functions. Sanctuary promotes programs composed of simple, pure functions. Such programs are easier to comprehend, test, and maintain – they are also a pleasure to write. Sanctuary provides two data types, [Maybe](#section:maybe) and [Either](#section:either), both of which are compatible with [Fantasy Land](https://github.com/fantasyland/fantasy-land/tree/v4.0.1). Thanks to these data types even Sanctuary functions that may fail, such as [`head`](#head), are composable. Sanctuary makes it possible to write safe code without null checks. In JavaScript it's trivial to introduce a possible run-time type error. Sanctuary is designed to work in Node.js and in ES5-compatible browsers. Sponsors -------- Development of Sanctuary is funded by the following community-minded **partners**: * [Fink](https://www.fink.no/) is a small, friendly, and passionate gang of IT consultants. We love what we do, which is mostly web and app development, including graphic design, interaction design, back-end and front-end coding, and ensuring the stuff we make works as intended. Our company is entirely employee-owned; we place great importance on the well-being of every employee, both professionally and personally. Development of Sanctuary is further encouraged by the following generous **supporters**: * [@voxbono](https://github.com/voxbono) * [@syves](https://github.com/syves) * [@Avaq](https://github.com/Avaq) * [@kabo](https://gitlab.com/kabo) * [@o0th](https://github.com/o0th) * [@identinet](https://github.com/identinet) [Become a sponsor](https://github.com/sponsors/davidchambers) if you would like the Sanctuary ecosystem to grow even stronger. Folktale -------- [Folktale](https://folktale.origamitower.com/), like Sanctuary, is a standard library for functional programming in JavaScript. It is well designed and well documented. Whereas Sanctuary treats JavaScript as a member of the ML language family, Folktale embraces JavaScript's object-oriented programming model. Programming with Folktale resembles programming with Scala. Ramda ----- [Ramda](https://ramdajs.com/) provides several functions that return problematic values such as `undefined`, `Infinity`, or `NaN` when applied to unsuitable inputs. These are known as [partial functions](https://en.wikipedia.org/wiki/Partial_function). Partial functions necessitate the use of guards or null checks. In order to safely use `R.head`, for example, one must ensure that the array is non-empty: ``` if (R.isEmpty (xs)) { // ... } else { return f (R.head (xs)); } ``` Using the Maybe type renders such guards (and null checks) unnecessary. 
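For comparison, a sketch of the same logic written against Sanctuary's Maybe-returning [`head`](#head) (`f` and `xs` as above):

```
S.map (f) (S.head (xs))
```

`S.head` returns `Nothing` for an empty array and `Just` the first element otherwise, so the empty case no longer needs an explicit branch; the result can later be unwrapped with [`maybe`](#maybe) or [`fromMaybe`](#fromMaybe) if a plain value is required.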
Changing functions such as `R.head` to return Maybe values was proposed in [ramda/ramda#683](https://github.com/ramda/ramda/issues/683), but was considered too much of a stretch for JavaScript programmers. Sanctuary was released the following month, in January 2015, as a companion library to Ramda. In addition to broadening in scope in the years since its release, Sanctuary's philosophy has diverged from Ramda's in several respects. ### Totality Every Sanctuary function is defined for every value that is a member of the function's input type. Such functions are known as [total functions](https://en.wikipedia.org/wiki/Partial_function#Total_function). Ramda, on the other hand, contains a number of [partial functions](https://en.wikipedia.org/wiki/Partial_function). ### Information preservation Certain Sanctuary functions preserve more information than their Ramda counterparts. Examples: ``` |> R.tail ([]) |> S.tail ([]) [] Nothing |> R.tail (['foo']) |> S.tail (['foo']) [] Just ([]) |> R.replace (/^x/) ('') ('abc') |> S.stripPrefix ('x') ('abc') 'abc' Nothing |> R.replace (/^x/) ('') ('xabc') |> S.stripPrefix ('x') ('xabc') 'abc' Just ('abc') ``` ### Invariants Sanctuary performs rigorous [type checking](#section:type-checking) of inputs and outputs, and throws a descriptive error if a type error is encountered. This allows bugs to be caught and fixed early in the development cycle. Ramda operates on the [garbage in, garbage out](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out) principle. Functions are documented to take arguments of particular types, but these invariants are not enforced. The problem with this approach in a language as permissive as JavaScript is that there's no guarantee that garbage input will produce garbage output ([ramda/ramda#1413](https://github.com/ramda/ramda/issues/1413)). Ramda performs ad hoc type checking in some such cases ([ramda/ramda#1419](https://github.com/ramda/ramda/pull/1419)). Sanctuary can be configured to operate in garbage in, garbage out mode. Ramda cannot be configured to enforce its invariants. ### Currying Sanctuary functions are curried. There is, for example, exactly one way to apply `S.reduce` to `S.add`, `0`, and `xs`: * `S.reduce (S.add) (0) (xs)` Ramda functions are also curried, but in a complex manner. There are four ways to apply `R.reduce` to `R.add`, `0`, and `xs`: * `R.reduce (R.add) (0) (xs)` * `R.reduce (R.add) (0, xs)` * `R.reduce (R.add, 0) (xs)` * `R.reduce (R.add, 0, xs)` Ramda supports all these forms because curried functions enable partial application, one of the library's tenets, but `f(x)(y)(z)` is considered too unfamiliar and too unattractive to appeal to JavaScript programmers. Sanctuary's developers prefer a simple, unfamiliar construct to a complex, familiar one. Familiarity can be acquired; complexity is intrinsic. The lack of breathing room in `f(x)(y)(z)` impairs readability. The simple solution to this problem, proposed in [#438](https://github.com/sanctuary-js/sanctuary/issues/438), is to include a space when applying a function: `f (x) (y) (z)`. Ramda also provides a special placeholder value, [`R.__`](https://ramdajs.com/docs/#__), that removes the restriction that a function must be applied to its arguments in order. 
The following expressions are equivalent: * `R.reduce (R.__, 0, xs) (R.add)` * `R.reduce (R.add, R.__, xs) (0)` * `R.reduce (R.__, 0) (R.add) (xs)` * `R.reduce (R.__, 0) (R.add, xs)` * `R.reduce (R.__, R.__, xs) (R.add) (0)` * `R.reduce (R.__, R.__, xs) (R.add, 0)` ### Variadic functions Ramda provides several functions that take any number of arguments. These are known as [variadic functions](https://en.wikipedia.org/wiki/Variadic_function). Additionally, Ramda provides several functions that take variadic functions as arguments. Although natural in a dynamically typed language, variadic functions are at odds with the type notation Ramda and Sanctuary both use, leading to some indecipherable type signatures such as this one: ``` R.lift :: (*... -> *...) -> ([*]... -> [*]) ``` Sanctuary has no variadic functions, nor any functions that take variadic functions as arguments. Sanctuary provides two "lift" functions, each with a helpful type signature: ``` S.lift2 :: Apply f => (a -> b -> c) -> f a -> f b -> f c S.lift3 :: Apply f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d ``` ### Implicit context Ramda provides [`R.bind`](https://ramdajs.com/docs/#bind) and [`R.invoker`](https://ramdajs.com/docs/#invoker) for working with methods. Additionally, many Ramda functions use `Function#call` or `Function#apply` to preserve context. Sanctuary makes no allowances for `this`. ### Transducers Several Ramda functions act as transducers. Sanctuary provides no support for transducers. ### Modularity Whereas Ramda has no dependencies, Sanctuary has a modular design: [sanctuary-def](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) provides type checking, [sanctuary-type-classes](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0) provides Fantasy Land functions and type classes, [sanctuary-show](https://github.com/sanctuary-js/sanctuary-show/tree/v2.0.0) provides string representations, and algebraic data types are provided by [sanctuary-either](https://github.com/sanctuary-js/sanctuary-either/tree/v2.1.0), [sanctuary-maybe](https://github.com/sanctuary-js/sanctuary-maybe/tree/v2.1.0), and [sanctuary-pair](https://github.com/sanctuary-js/sanctuary-pair/tree/v2.1.0). Not only does this approach reduce the complexity of Sanctuary itself, but it allows these components to be reused in other contexts. Types ----- Sanctuary uses Haskell-like type signatures to describe the types of values, including functions. `'foo'`, for example, is a member of `String`; `[1, 2, 3]` is a member of `Array Number`. The double colon (`::`) is used to mean "is a member of", so one could write: ``` 'foo' :: String [1, 2, 3] :: Array Number ``` An identifier may appear to the left of the double colon: ``` Math.PI :: Number ``` The arrow (`->`) is used to express a function's type: ``` Math.abs :: Number -> Number ``` That states that `Math.abs` is a unary function that takes an argument of type `Number` and returns a value of type `Number`. Some functions are parametrically polymorphic: their types are not fixed. Type variables are used in the representations of such functions: ``` S.I :: a -> a ``` `a` is a type variable. Type variables are not capitalized, so they are differentiable from type identifiers (which are always capitalized). By convention type variables have single-character names. The signature above states that `S.I` takes a value of any type and returns a value of the same type. 
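In practice this means `S.I` accepts a member of any type in the environment and hands it back unchanged:

```
> S.I (42)
42

> S.I (['foo', 'bar'])
["foo", "bar"]
```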
Some signatures feature multiple type variables: ``` S.K :: a -> b -> a ``` It must be possible to replace all occurrences of `a` with a concrete type. The same applies for each other type variable. For the function above, the types with which `a` and `b` are replaced may be different, but needn't be. Since all Sanctuary functions are curried (they accept their arguments one at a time), a binary function is represented as a unary function that returns a unary function: `* -> * -> *`. This aligns neatly with Haskell, which uses curried functions exclusively. In JavaScript, though, we may wish to represent the types of functions with arities less than or greater than one. The general form is `(<input-types>) -> <output-type>`, where `<input-types>` comprises zero or more comma–space (`,` ) -separated type representations: * `() -> String` * `(a, b) -> a` * `(a, b, c) -> d` `Number -> Number` can thus be seen as shorthand for `(Number) -> Number`. Sanctuary embraces types. JavaScript doesn't support algebraic data types, but these can be simulated by providing a group of data constructors that return values with the same set of methods. A value of the Either type, for example, is created via the Left constructor or the Right constructor. It's necessary to extend Haskell's notation to describe implicit arguments to the *methods* provided by Sanctuary's types. In `x.map(y)`, for example, the `map` method takes an implicit argument `x` in addition to the explicit argument `y`. The type of the value upon which a method is invoked appears at the beginning of the signature, separated from the arguments and return value by a squiggly arrow (`~>`). The type of the `fantasy-land/map` method of the Maybe type is written `Maybe a ~> (a -> b) -> Maybe b`. One could read this as: *When the `fantasy-land/map` method is invoked on a value of type `Maybe a` (for any type `a`) with an argument of type `a -> b` (for any type `b`), it returns a value of type `Maybe b`.* The squiggly arrow is also used when representing non-function properties. `Maybe a ~> Boolean`, for example, represents a Boolean property of a value of type `Maybe a`. Sanctuary supports type classes: constraints on type variables. Whereas `a -> a` implicitly supports every type, `Functor f => (a -> b) -> f a -> f b` requires that `f` be a type that satisfies the requirements of the Functor type class. Type-class constraints appear at the beginning of a type signature, separated from the rest of the signature by a fat arrow (`=>`). Type checking ------------- Sanctuary functions are defined via [sanctuary-def](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) to provide run-time type checking. This is tremendously useful during development: type errors are reported immediately, avoiding circuitous stack traces (at best) and silent failures due to type coercion (at worst). For example: ``` > S.add (2) (true) ! Invalid value add :: FiniteNumber -> FiniteNumber -> FiniteNumber ^^^^^^^^^^^^ 1 1) true :: Boolean The value at position 1 is not a member of ‘FiniteNumber’. See https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber for information about the FiniteNumber type. ``` Compare this to the behaviour of Ramda's unchecked equivalent: ``` > R.add (2) (true) 3 ``` There is a performance cost to run-time type checking. Type checking is disabled by default if `process.env.NODE_ENV` is `'production'`. 
If this rule is unsuitable for a given program, one may use [`create`](#create) to create a Sanctuary module based on a different rule. For example: ``` const S = sanctuary.create ({ checkTypes: localStorage.getItem ('SANCTUARY_CHECK_TYPES') === 'true', env: sanctuary.env, }); ``` Occasionally one may wish to perform an operation that is not type safe, such as mapping over an object with heterogeneous values. This is possible via selective use of [`unchecked`](#unchecked) functions. Installation ------------ `npm install sanctuary` will install Sanctuary for use in Node.js. To add Sanctuary to a website, add the following `<script>` element, replacing `X.Y.Z` with a version number greater than or equal to `2.0.2`: ``` <script src="https://cdn.jsdelivr.net/gh/sanctuary-js/[email protected]/dist/bundle.js"></script``` Optionally, define aliases for various modules: ``` const S = window.sanctuary; const $ = window.sanctuaryDef; // ... ``` API --- ### Configure #### `[create](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L506) :: { checkTypes :: [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean), env :: [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [Type](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Type) } -> [Module](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Module)` Takes an options record and returns a Sanctuary module. `checkTypes` specifies whether to enable type checking. The module's polymorphic functions (such as [`I`](#I)) require each value associated with a type variable to be a member of at least one type in the environment. A well-typed application of a Sanctuary function will produce the same result regardless of whether type checking is enabled. If type checking is enabled, a badly typed application will produce an exception with a descriptive error message. The following snippet demonstrates defining a custom type and using `create` to produce a Sanctuary module that is aware of that type: ``` const {create, env} = require ('sanctuary'); const $ = require ('sanctuary-def'); const type = require ('sanctuary-type-identifiers'); // Identity :: a -> Identity a const Identity = x => { const identity = Object.create (Identity$prototype); identity.value = x; return identity; }; // identityTypeIdent :: String const identityTypeIdent = 'my-package/Identity@1'; const Identity$prototype = { '@@type': identityTypeIdent, '@@show': function() { return `Identity (${S.show (this.value)})`; }, 'fantasy-land/map': function(f) { return Identity (f (this.value)); }, }; // IdentityType :: Type -> Type const IdentityType = $.UnaryType ('Identity') ('http://example.com/my-package#Identity') ([]) (x => type (x) === identityTypeIdent) (identity => [identity.value]); const S = create ({ checkTypes: process.env.NODE_ENV !== 'production', env: env.concat ([IdentityType ($.Unknown)]), }); S.map (S.sub (1)) (Identity (43)); // => Identity (42) ``` See also [`env`](#env). #### `[env](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L582) :: [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [Type](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Type)` The Sanctuary module's environment (`(S.create ({checkTypes, env})).env` is a reference to `env`). Useful in conjunction with [`create`](#create). 
``` > S.env [Function, Arguments, Array Unknown, Array2 Unknown Unknown, Boolean, Buffer, Date, Descending Unknown, Either Unknown Unknown, Error, Unknown -> Unknown, HtmlElement, Identity Unknown, JsMap Unknown Unknown, JsSet Unknown, Maybe Unknown, Module, Null, Number, Object, Pair Unknown Unknown, RegExp, StrMap Unknown, String, Symbol, Type, TypeClass, Undefined] ``` #### `[unchecked](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L619) :: [Module](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Module)` A complete Sanctuary module that performs no type checking. This is useful as it permits operations that Sanctuary's type checking would disallow, such as mapping over an object with heterogeneous values. See also [`create`](#create). ``` > S.unchecked.map (S.show) ({x: 'foo', y: true, z: 42}) {"x": "\"foo\"", "y": "true", "z": "42"} ``` Opting out of type checking may cause type errors to go unnoticed. ``` > S.unchecked.add (2) ('2') "22" ``` ### Classify #### `[type](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L641) :: [Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any) -> { namespace :: [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String), name :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String), version :: [NonNegativeInteger](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#NonNegativeInteger) }` Returns the result of parsing the [type identifier](https://github.com/sanctuary-js/sanctuary-type-identifiers/tree/v3.0.0) of the given value. ``` > S.type (S.Just (42)) {"name": "Maybe", "namespace": Just ("sanctuary-maybe"), "version": 1} > S.type ([1, 2, 3]) {"name": "Array", "namespace": Nothing, "version": 0} ``` #### `[is](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L666) :: [Type](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Type) -> [Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) the given value is a member of the specified type. See [`$.test`](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#test) for details. ``` > S.is ($.Array ($.Integer)) ([1, 2, 3]) true > S.is ($.Array ($.Integer)) ([1, 2, 3.14]) false ``` ### Showable #### `[show](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L681) :: [Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Alias of [`show`](https://github.com/sanctuary-js/sanctuary-show/tree/v2.0.0#show). ``` > S.show (-0) "-0" > S.show (['foo', 'bar', 'baz']) "[\"foo\", \"bar\", \"baz\"]" > S.show ({x: 1, y: 2, z: 3}) "{\"x\": 1, \"y\": 2, \"z\": 3}" > S.show (S.Left (S.Right (S.Just (S.Nothing)))) "Left (Right (Just (Nothing)))" ``` ### Fantasy Land Sanctuary is compatible with the [Fantasy Land](https://github.com/fantasyland/fantasy-land/tree/v4.0.1) specification. 
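Compatibility is method-based: Sanctuary's data types expose `fantasy-land/`-prefixed methods, and the functions below dispatch to those methods, so they also work with third-party Fantasy Land types (such as the `Identity` type defined in the [`create`](#create) example). Mapping over a Maybe, for instance, goes through its `fantasy-land/map` method; in practice one calls `S.map`, and the prefixed method is shown here only to make the dispatch explicit:

```
> S.Just (41)['fantasy-land/map'] (x => x + 1)
Just (42)

> S.map (x => x + 1) (S.Just (41))
Just (42)
```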
#### `[equals](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L708) :: [Setoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Setoid) a => a -> a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Curried version of [`Z.equals`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#equals) that requires two arguments of the same type. To compare values of different types first use [`create`](#create) to create a Sanctuary module with type checking disabled, then use that module's `equals` function. ``` > S.equals (0) (-0) true > S.equals (NaN) (NaN) true > S.equals (S.Just ([1, 2, 3])) (S.Just ([1, 2, 3])) true > S.equals (S.Just ([1, 2, 3])) (S.Just ([1, 2, 4])) false ``` #### `[lt](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L741) :: [Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) a => a -> a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) the *second* argument is less than the first according to [`Z.lt`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#lt). ``` > S.filter (S.lt (3)) ([1, 2, 3, 4, 5]) [1, 2] ``` #### `[lte](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L761) :: [Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) a => a -> a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) the *second* argument is less than or equal to the first according to [`Z.lte`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#lte). ``` > S.filter (S.lte (3)) ([1, 2, 3, 4, 5]) [1, 2, 3] ``` #### `[gt](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L781) :: [Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) a => a -> a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) the *second* argument is greater than the first according to [`Z.gt`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#gt). ``` > S.filter (S.gt (3)) ([1, 2, 3, 4, 5]) [4, 5] ``` #### `[gte](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L801) :: [Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) a => a -> a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) the *second* argument is greater than or equal to the first according to [`Z.gte`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#gte). ``` > S.filter (S.gte (3)) ([1, 2, 3, 4, 5]) [3, 4, 5] ``` #### `[min](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L821) :: [Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) a => a -> a -> a` Returns the smaller of its two arguments (according to [`Z.lte`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#lte)). See also [`max`](#max). 
``` > S.min (10) (2) 2 > S.min (new Date ('1999-12-31')) (new Date ('2000-01-01')) new Date ("1999-12-31T00:00:00.000Z") > S.min ('10') ('2') "10" ``` #### `[max](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L843) :: [Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) a => a -> a -> a` Returns the larger of its two arguments (according to [`Z.lte`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#lte)). See also [`min`](#min). ``` > S.max (10) (2) 10 > S.max (new Date ('1999-12-31')) (new Date ('2000-01-01')) new Date ("2000-01-01T00:00:00.000Z") > S.max ('10') ('2') "2" ``` #### `[clamp](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L865) :: [Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) a => a -> a -> a -> a` Takes a lower bound, an upper bound, and a value of the same type. Returns the value if it is within the bounds; the nearer bound otherwise. See also [`min`](#min) and [`max`](#max). ``` > S.clamp (0) (100) (42) 42 > S.clamp (0) (100) (-1) 0 > S.clamp ('A') ('Z') ('~') "Z" ``` #### `[id](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L888) :: [Category](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Category) c => [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) c -> c` [Type-safe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) version of [`Z.id`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#id). ``` > S.id (Function) (42) 42 ``` #### `[concat](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L902) :: [Semigroup](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Semigroup) a => a -> a -> a` Curried version of [`Z.concat`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#concat). ``` > S.concat ('abc') ('def') "abcdef" > S.concat ([1, 2, 3]) ([4, 5, 6]) [1, 2, 3, 4, 5, 6] > S.concat ({x: 1, y: 2}) ({y: 3, z: 4}) {"x": 1, "y": 3, "z": 4} > S.concat (S.Just ([1, 2, 3])) (S.Just ([4, 5, 6])) Just ([1, 2, 3, 4, 5, 6]) > S.concat (Sum (18)) (Sum (24)) Sum (42) ``` #### `[empty](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L928) :: [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) a => [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) a -> a` [Type-safe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) version of [`Z.empty`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#empty). ``` > S.empty (String) "" > S.empty (Array) [] > S.empty (Object) {} > S.empty (Sum) Sum (0) ``` #### `[invert](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L951) :: [Group](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Group) g => g -> g` [Type-safe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) version of [`Z.invert`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#invert). ``` > S.invert (Sum (5)) Sum (-5) ``` #### `[filter](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L965) :: [Filterable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Filterable) f => (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> f a -> f a` Curried version of [`Z.filter`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#filter). Discards every element that does not satisfy the predicate. 
See also [`reject`](#reject). ``` > S.filter (S.odd) ([1, 2, 3]) [1, 3] > S.filter (S.odd) ({x: 1, y: 2, z: 3}) {"x": 1, "z": 3} > S.filter (S.odd) (S.Nothing) Nothing > S.filter (S.odd) (S.Just (0)) Nothing > S.filter (S.odd) (S.Just (1)) Just (1) ``` #### `[reject](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L999) :: [Filterable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Filterable) f => (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> f a -> f a` Curried version of [`Z.reject`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#reject). Discards every element that satisfies the predicate. See also [`filter`](#filter). ``` > S.reject (S.odd) ([1, 2, 3]) [2] > S.reject (S.odd) ({x: 1, y: 2, z: 3}) {"y": 2} > S.reject (S.odd) (S.Nothing) Nothing > S.reject (S.odd) (S.Just (0)) Just (0) > S.reject (S.odd) (S.Just (1)) Nothing ``` #### `[map](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1033) :: [Functor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Functor) f => (a -> b) -> f a -> f b` Curried version of [`Z.map`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#map). ``` > S.map (Math.sqrt) ([1, 4, 9]) [1, 2, 3] > S.map (Math.sqrt) ({x: 1, y: 4, z: 9}) {"x": 1, "y": 2, "z": 3} > S.map (Math.sqrt) (S.Just (9)) Just (3) > S.map (Math.sqrt) (S.Right (9)) Right (3) > S.map (Math.sqrt) (S.Pair (99980001) (99980001)) Pair (99980001) (9999) ``` Replacing `Functor f => f` with `Function x` produces the B combinator from combinatory logic (i.e. [`compose`](#compose)): ``` Functor f => (a -> b) -> f a -> f b (a -> b) -> Function x a -> Function x b (a -> c) -> Function x a -> Function x c (b -> c) -> Function x b -> Function x c (b -> c) -> Function a b -> Function a c (b -> c) -> (a -> b) -> (a -> c) ``` ``` > S.map (Math.sqrt) (S.add (1)) (99) 10 ``` #### `[flip](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1079) :: [Functor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Functor) f => f (a -> b) -> a -> f b` Curried version of [`Z.flip`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#flip). Maps over the given functions, applying each to the given value. Replacing `Functor f => f` with `Function x` produces the C combinator from combinatory logic: ``` Functor f => f (a -> b) -> a -> f b Function x (a -> b) -> a -> Function x b Function x (a -> c) -> a -> Function x c Function x (b -> c) -> b -> Function x c Function a (b -> c) -> b -> Function a c (a -> b -> c) -> b -> a -> c ``` ``` > S.flip (S.concat) ('!') ('foo') "foo!" > S.flip ([Math.floor, Math.ceil]) (1.5) [1, 2] > S.flip ({floor: Math.floor, ceil: Math.ceil}) (1.5) {"ceil": 2, "floor": 1} > S.flip (Cons (Math.floor) (Cons (Math.ceil) (Nil))) (1.5) Cons (1) (Cons (2) (Nil)) ``` #### `[bimap](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1118) :: [Bifunctor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Bifunctor) f => (a -> b) -> (c -> d) -> f a c -> f b d` Curried version of [`Z.bimap`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#bimap). 
``` > S.bimap (S.toUpper) (Math.sqrt) (S.Pair ('foo') (64)) Pair ("FOO") (8) > S.bimap (S.toUpper) (Math.sqrt) (S.Left ('foo')) Left ("FOO") > S.bimap (S.toUpper) (Math.sqrt) (S.Right (64)) Right (8) ``` #### `[mapLeft](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1138) :: [Bifunctor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Bifunctor) f => (a -> b) -> f a c -> f b c` Curried version of [`Z.mapLeft`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#mapLeft). Maps the given function over the left side of a Bifunctor. ``` > S.mapLeft (S.toUpper) (S.Pair ('foo') (64)) Pair ("FOO") (64) > S.mapLeft (S.toUpper) (S.Left ('foo')) Left ("FOO") > S.mapLeft (S.toUpper) (S.Right (64)) Right (64) ``` #### `[promap](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1159) :: [Profunctor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Profunctor) p => (a -> b) -> (c -> d) -> p b c -> p a d` Curried version of [`Z.promap`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#promap). ``` > S.promap (Math.abs) (S.add (1)) (Math.sqrt) (-100) 11 ``` #### `[alt](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1173) :: [Alt](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Alt) f => f a -> f a -> f a` Curried version of [`Z.alt`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#alt) with arguments flipped to facilitate partial application. ``` > S.alt (S.Just ('default')) (S.Nothing) Just ("default") > S.alt (S.Just ('default')) (S.Just ('hello')) Just ("hello") > S.alt (S.Right (0)) (S.Left ('X')) Right (0) > S.alt (S.Right (0)) (S.Right (1)) Right (1) ``` #### `[zero](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1202) :: [Plus](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Plus) f => [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) f -> f a` [Type-safe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) version of [`Z.zero`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#zero). ``` > S.zero (Array) [] > S.zero (Object) {} > S.zero (S.Maybe) Nothing ``` #### `[reduce](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1222) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => (b -> a -> b) -> b -> f a -> b` Takes a curried binary function, an initial value, and a [Foldable](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#foldable), and applies the function to the initial value and the Foldable's first value, then applies the function to the result of the previous application and the Foldable's second value. Repeats this process until each of the Foldable's values has been used. Returns the initial value if the Foldable is empty; the result of the final application otherwise. See also [`reduce_`](#reduce_). ``` > S.reduce (S.add) (0) ([1, 2, 3, 4, 5]) 15 > S.reduce (xs => x => S.prepend (x) (xs)) ([]) ([1, 2, 3, 4, 5]) [5, 4, 3, 2, 1] ``` #### `[reduce_](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1256) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => (a -> b -> b) -> b -> f a -> b` Variant of [`reduce`](#reduce) that takes a reducing function with arguments flipped. 
``` > S.reduce_ (S.append) ([]) (Cons (1) (Cons (2) (Cons (3) (Nil)))) [1, 2, 3] > S.reduce_ (S.prepend) ([]) (Cons (1) (Cons (2) (Cons (3) (Nil)))) [3, 2, 1] ``` #### `[traverse](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1274) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Traversable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Traversable) t) => [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) f -> (a -> f b) -> t a -> f (t b)` Curried version of [`Z.traverse`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#traverse). ``` > S.traverse (Array) (S.words) (S.Just ('foo bar baz')) [Just ("foo"), Just ("bar"), Just ("baz")] > S.traverse (Array) (S.words) (S.Nothing) [Nothing] > S.traverse (S.Maybe) (S.parseInt (16)) (['A', 'B', 'C']) Just ([10, 11, 12]) > S.traverse (S.Maybe) (S.parseInt (16)) (['A', 'B', 'C', 'X']) Nothing > S.traverse (S.Maybe) (S.parseInt (16)) ({a: 'A', b: 'B', c: 'C'}) Just ({"a": 10, "b": 11, "c": 12}) > S.traverse (S.Maybe) (S.parseInt (16)) ({a: 'A', b: 'B', c: 'C', x: 'X'}) Nothing ``` #### `[sequence](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1303) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Traversable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Traversable) t) => [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) f -> t (f a) -> f (t a)` Curried version of [`Z.sequence`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#sequence). Inverts the given `t (f a)` to produce an `f (t a)`. ``` > S.sequence (Array) (S.Just ([1, 2, 3])) [Just (1), Just (2), Just (3)] > S.sequence (S.Maybe) ([S.Just (1), S.Just (2), S.Just (3)]) Just ([1, 2, 3]) > S.sequence (S.Maybe) ([S.Just (1), S.Just (2), S.Nothing]) Nothing > S.sequence (S.Maybe) ({a: S.Just (1), b: S.Just (2), c: S.Just (3)}) Just ({"a": 1, "b": 2, "c": 3}) > S.sequence (S.Maybe) ({a: S.Just (1), b: S.Just (2), c: S.Nothing}) Nothing ``` #### `[ap](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1330) :: [Apply](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Apply) f => f (a -> b) -> f a -> f b` Curried version of [`Z.ap`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#ap). ``` > S.ap ([Math.sqrt, x => x * x]) ([1, 4, 9, 16, 25]) [1, 2, 3, 4, 5, 1, 16, 81, 256, 625] > S.ap ({x: Math.sqrt, y: S.add (1), z: S.sub (1)}) ({w: 4, x: 4, y: 4}) {"x": 2, "y": 5} > S.ap (S.Just (Math.sqrt)) (S.Just (64)) Just (8) ``` Replacing `Apply f => f` with `Function x` produces the S combinator from combinatory logic: ``` Apply f => f (a -> b) -> f a -> f b Function x (a -> b) -> Function x a -> Function x b Function x (a -> c) -> Function x a -> Function x c Function x (b -> c) -> Function x b -> Function x c Function a (b -> c) -> Function a b -> Function a c (a -> b -> c) -> (a -> b) -> (a -> c) ``` ``` > S.ap (s => n => s.slice (0, n)) (s => Math.ceil (s.length / 2)) ('Haskell') "Hask" ``` #### `[lift2](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1365) :: [Apply](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Apply) f => (a -> b -> c) -> f a -> f b -> f c` Promotes a curried binary function to a function that operates on two [Apply](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#apply)s. 
``` > S.lift2 (S.add) (S.Just (2)) (S.Just (3)) Just (5) > S.lift2 (S.add) (S.Just (2)) (S.Nothing) Nothing > S.lift2 (S.and) (S.Just (true)) (S.Just (true)) Just (true) > S.lift2 (S.and) (S.Just (true)) (S.Just (false)) Just (false) ``` #### `[lift3](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1389) :: [Apply](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Apply) f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d` Promotes a curried ternary function to a function that operates on three [Apply](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#apply)s. ``` > S.lift3 (S.reduce) (S.Just (S.add)) (S.Just (0)) (S.Just ([1, 2, 3])) Just (6) > S.lift3 (S.reduce) (S.Just (S.add)) (S.Just (0)) (S.Nothing) Nothing ``` #### `[apFirst](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1407) :: [Apply](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Apply) f => f a -> f b -> f a` Curried version of [`Z.apFirst`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#apFirst). Combines two effectful actions, keeping only the result of the first. Equivalent to Haskell's `(<*)` function. See also [`apSecond`](#apSecond). ``` > S.apFirst ([1, 2]) ([3, 4]) [1, 1, 2, 2] > S.apFirst (S.Just (1)) (S.Just (2)) Just (1) ``` #### `[apSecond](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1428) :: [Apply](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Apply) f => f a -> f b -> f b` Curried version of [`Z.apSecond`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#apSecond). Combines two effectful actions, keeping only the result of the second. Equivalent to Haskell's `(*>)` function. See also [`apFirst`](#apFirst). ``` > S.apSecond ([1, 2]) ([3, 4]) [3, 4, 3, 4] > S.apSecond (S.Just (1)) (S.Just (2)) Just (2) ``` #### `[of](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1449) :: [Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f => [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) f -> a -> f a` Curried version of [`Z.of`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#of). ``` > S.of (Array) (42) [42] > S.of (Function) (42) (null) 42 > S.of (S.Maybe) (42) Just (42) > S.of (S.Either) (42) Right (42) ``` #### `[chain](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1477) :: [Chain](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Chain) m => (a -> m b) -> m a -> m b` Curried version of [`Z.chain`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#chain). ``` > S.chain (x => [x, x]) ([1, 2, 3]) [1, 1, 2, 2, 3, 3] > S.chain (n => s => s.slice (0, n)) (s => Math.ceil (s.length / 2)) ('slice') "sli" > S.chain (S.parseInt (10)) (S.Just ('123')) Just (123) > S.chain (S.parseInt (10)) (S.Just ('XXX')) Nothing ``` #### `[join](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1500) :: [Chain](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Chain) m => m (m a) -> m a` [Type-safe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) version of [`Z.join`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#join). Removes one level of nesting from a nested monadic structure. 
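A hedged sketch of why `join` is useful: mapping a Maybe-returning function over a Maybe produces nesting, which `join` (or, in one step, `chain`) collapses. Assumes Node.js with `sanctuary` required as `S`.

```
// Illustrative sketch only.
const S = require ('sanctuary');

const lookupX = S.value ('x');                       // StrMap a -> Maybe a
const nested  = S.map (lookupX) (S.Just ({x: 1}));   // Just (Just (1))

console.log (S.show (S.join (nested)));                      // => "Just (1)"
console.log (S.show (S.chain (lookupX) (S.Just ({x: 1}))));  // => "Just (1)"
```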
``` > S.join ([[1], [2], [3]]) [1, 2, 3] > S.join ([[[1, 2, 3]]]) [[1, 2, 3]] > S.join (S.Just (S.Just (1))) Just (1) > S.join (S.Pair ('foo') (S.Pair ('bar') ('baz'))) Pair ("foobar") ("baz") ``` Replacing `Chain m => m` with `Function x` produces the W combinator from combinatory logic: ``` Chain m => m (m a) -> m a Function x (Function x a) -> Function x a (x -> x -> a) -> (x -> a) ``` ``` > S.join (S.concat) ('abc') "abcabc" ``` #### `[chainRec](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1536) :: [ChainRec](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#ChainRec) m => [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) m -> (a -> m ([Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b)) -> a -> m b` Performs a [`chain`](#chain)-like computation with constant stack usage. Similar to [`Z.chainRec`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#chainRec), but curried and more convenient due to the use of the Either type to indicate completion (via a Right). ``` > S.chainRec (Array) (s => s.length === 2 ? S.map (S.Right) ([s + '!', s + '?']) : S.map (S.Left) ([s + 'o', s + 'n'])) ('') ["oo!", "oo?", "on!", "on?", "no!", "no?", "nn!", "nn?"] ``` #### `[extend](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1565) :: [Extend](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Extend) w => (w a -> b) -> w a -> w b` Curried version of [`Z.extend`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#extend). ``` > S.extend (S.joinWith ('')) (['x', 'y', 'z']) ["xyz", "yz", "z"] > S.extend (f => f ([3, 4])) (S.reverse) ([1, 2]) [4, 3, 2, 1] ``` #### `[duplicate](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1582) :: [Extend](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Extend) w => w a -> w (w a)` [Type-safe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) version of [`Z.duplicate`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#duplicate). Adds one level of nesting to a comonadic structure. ``` > S.duplicate (S.Just (1)) Just (Just (1)) > S.duplicate ([1]) [[1]] > S.duplicate ([1, 2, 3]) [[1, 2, 3], [2, 3], [3]] > S.duplicate (S.reverse) ([1, 2]) ([3, 4]) [4, 3, 2, 1] ``` #### `[extract](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1606) :: [Comonad](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Comonad) w => w a -> a` [Type-safe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) version of [`Z.extract`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#extract). ``` > S.extract (S.Pair ('foo') ('bar')) "bar" ``` #### `[contramap](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1620) :: [Contravariant](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Contravariant) f => (b -> a) -> f a -> f b` [Type-safe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0) version of [`Z.contramap`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#contramap). ``` > S.contramap (s => s.length) (Math.sqrt) ('Sanctuary') 3 ``` ### Combinator #### `[I](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1636) :: a -> a` The I combinator. Returns its argument. Equivalent to Haskell's `id` function. ``` > S.I ('foo') "foo" ``` #### `[K](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1654) :: a -> b -> a` The K combinator. 
Takes two values and returns the first. Equivalent to Haskell's `const` function. ``` > S.K ('foo') ('bar') "foo" > S.map (S.K (42)) (S.range (0) (5)) [42, 42, 42, 42, 42] ``` #### `[T](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1677) :: a -> (a -> b) -> b` The T ([thrush](https://github.com/raganwald-deprecated/homoiconic/blob/master/2008-10-30/thrush.markdown)) combinator. Takes a value and a function, and returns the result of applying the function to the value. Equivalent to Haskell's `(&)` function. ``` > S.T (42) (S.add (1)) 43 > S.map (S.T (100)) ([S.add (1), Math.sqrt]) [101, 10] ``` ### Function #### `[curry2](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1703) :: ((a, b) -> c) -> a -> b -> c` Curries the given binary function. ``` > S.map (S.curry2 (Math.pow) (10)) ([1, 2, 3]) [10, 100, 1000] ``` #### `[curry3](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1724) :: ((a, b, c) -> d) -> a -> b -> c -> d` Curries the given ternary function. ``` > const replaceString = S.curry3 ((what, replacement, string) => string.replace (what, replacement)) undefined > replaceString ('banana') ('orange') ('banana icecream') "orange icecream" ``` #### `[curry4](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1751) :: ((a, b, c, d) -> e) -> a -> b -> c -> d -> e` Curries the given quaternary function. ``` > const createRect = S.curry4 ((x, y, width, height) => ({x, y, width, height})) undefined > createRect (0) (0) (10) (10) {"height": 10, "width": 10, "x": 0, "y": 0} ``` #### `[curry5](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1780) :: ((a, b, c, d, e) -> f) -> a -> b -> c -> d -> e -> f` Curries the given quinary function. ``` > const toUrl = S.curry5 ((protocol, creds, hostname, port, pathname) => protocol + '//' + S.maybe ('') (S.flip (S.concat) ('@')) (creds) + hostname + S.maybe ('') (S.concat (':')) (port) + pathname) undefined > toUrl ('https:') (S.Nothing) ('example.com') (S.Just ('443')) ('/foo/bar') "https://example.com:443/foo/bar" ``` ### Composition #### `[compose](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1817) :: [Semigroupoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Semigroupoid) s => s b c -> s a b -> s a c` Curried version of [`Z.compose`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#compose). When specialized to Function, `compose` composes two unary functions, from right to left (this is the B combinator from combinatory logic). The generalized type signature indicates that `compose` is compatible with any [Semigroupoid](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#semigroupoid). See also [`pipe`](#pipe). ``` > S.compose (Math.sqrt) (S.add (1)) (99) 10 ``` #### `[pipe](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1839) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => f ([Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any) -> [Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any)) -> a -> b` Takes a sequence of functions assumed to be unary and a value of any type, and returns the result of applying the sequence of transformations to the initial value. In general terms, `pipe` performs left-to-right composition of a sequence of functions. `pipe ([f, g, h]) (x)` is equivalent to `h (g (f (x)))`. 
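A hedged sketch of a small left-to-right pipeline (assuming Node.js with `sanctuary` required as `S`; `slug` is an illustrative name):

```
// Illustrative sketch only: lower-case, split into words, re-join with hyphens.
const S = require ('sanctuary');

const slug = S.pipe ([
  S.toLower,
  S.words,
  S.joinWith ('-'),
]);

console.log (slug ('Fantasy Land Specification'));
// => "fantasy-land-specification"
```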
``` > S.pipe ([S.add (1), Math.sqrt, S.sub (1)]) (99) 9 ``` #### `[pipeK](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1863) :: ([Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f, [Chain](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Chain) m) => f ([Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any) -> m [Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any)) -> m a -> m b` Takes a sequence of functions assumed to be unary that return values with a [Chain](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#chain), and a value of that Chain, and returns the result of applying the sequence of transformations to the initial value. In general terms, `pipeK` performs left-to-right [Kleisli](https://en.wikipedia.org/wiki/Kleisli_category) composition of a sequence of functions. `pipeK ([f, g, h]) (x)` is equivalent to `chain (h) (chain (g) (chain (f) (x)))`. ``` > S.pipeK ([S.tail, S.tail, S.head]) (S.Just ([1, 2, 3, 4])) Just (3) ``` #### `[on](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1888) :: (b -> b -> c) -> (a -> b) -> a -> a -> c` Takes a binary function `f`, a unary function `g`, and two values `x` and `y`. Returns `f (g (x)) (g (y))`. This is the P combinator from combinatory logic. ``` > S.on (S.concat) (S.reverse) ([1, 2, 3]) ([4, 5, 6]) [3, 2, 1, 6, 5, 4] ``` ### Pair Pair is the canonical product type: a value of type `Pair a b` always contains exactly two values: one of type `a`; one of type `b`. The implementation is provided by [sanctuary-pair](https://github.com/sanctuary-js/sanctuary-pair/tree/v2.1.0). #### `[Pair](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1921) :: a -> b -> [Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) a b` Pair's sole data constructor. Additionally, it serves as the Pair [type representative](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#type-representatives). ``` > S.Pair ('foo') (42) Pair ("foo") (42) ``` #### `[pair](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1936) :: (a -> b -> c) -> [Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) a b -> c` Case analysis for the `Pair a b` type. ``` > S.pair (S.concat) (S.Pair ('foo') ('bar')) "foobar" ``` #### `[fst](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1955) :: [Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) a b -> a` `fst (Pair (x) (y))` is equivalent to `x`. ``` > S.fst (S.Pair ('foo') (42)) "foo" ``` #### `[snd](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1969) :: [Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) a b -> b` `snd (Pair (x) (y))` is equivalent to `y`. ``` > S.snd (S.Pair ('foo') (42)) 42 ``` #### `[swap](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L1983) :: [Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) a b -> [Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) b a` `swap (Pair (x) (y))` is equivalent to `Pair (y) (x)`. ``` > S.swap (S.Pair ('foo') (42)) Pair (42) ("foo") ``` ### Maybe The Maybe type represents optional values: a value of type `Maybe a` is either Nothing (the empty value) or a Just whose value is of type `a`. The implementation is provided by [sanctuary-maybe](https://github.com/sanctuary-js/sanctuary-maybe/tree/v2.1.0). 
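Before the individual definitions, a hedged sketch of the Maybe workflow as a whole (assuming Node.js with `sanctuary` required as `S`; `firstEven` and `doubledOrZero` are illustrative names): a possibly-absent result is produced, transformed, and only unwrapped at the edge of the program.

```
// Illustrative sketch only.
const S = require ('sanctuary');

const firstEven = S.find (S.even);          // Array Number -> Maybe Number

const doubledOrZero = xs =>
  S.fromMaybe (0) (S.map (S.mult (2)) (firstEven (xs)));

console.log (doubledOrZero ([1, 3, 4, 5]));  // => 8
console.log (doubledOrZero ([1, 3, 5]));     // => 0
```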
#### `[Maybe](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2004) :: [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe)` Maybe [type representative](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#type-representatives). #### `[Nothing](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2008) :: [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a` The empty value of type `Maybe a`. ``` > S.Nothing Nothing ``` #### `[Just](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2017) :: a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a` Constructs a value of type `Maybe a` from a value of type `a`. ``` > S.Just (42) Just (42) ``` #### `[isNothing](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2031) :: [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` if the given Maybe is Nothing; `false` if it is a Just. ``` > S.isNothing (S.Nothing) true > S.isNothing (S.Just (42)) false ``` #### `[isJust](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2051) :: [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` if the given Maybe is a Just; `false` if it is Nothing. ``` > S.isJust (S.Just (42)) true > S.isJust (S.Nothing) false ``` #### `[maybe](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2071) :: b -> (a -> b) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a -> b` Takes a value of any type, a function, and a Maybe. If the Maybe is a Just, the return value is the result of applying the function to the Just's value. Otherwise, the first argument is returned. See also [`maybe_`](#maybe_) and [`fromMaybe`](#fromMaybe). ``` > S.maybe (0) (S.prop ('length')) (S.Just ('refuge')) 6 > S.maybe (0) (S.prop ('length')) (S.Nothing) 0 ``` #### `[maybe_](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2099) :: (() -> b) -> (a -> b) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a -> b` Variant of [`maybe`](#maybe) that takes a thunk so the default value is only computed if required. ``` > function fib(n) { return n <= 1 ? n : fib (n - 2) + fib (n - 1); } undefined > S.maybe_ (() => fib (30)) (Math.sqrt) (S.Just (1000000)) 1000 > S.maybe_ (() => fib (30)) (Math.sqrt) (S.Nothing) 832040 ``` #### `[fromMaybe](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2126) :: a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a -> a` Takes a default value and a Maybe, and returns the Maybe's value if the Maybe is a Just; the default value otherwise. See also [`maybe`](#maybe), [`fromMaybe_`](#fromMaybe_), and [`maybeToNullable`](#maybeToNullable). ``` > S.fromMaybe (0) (S.Just (42)) 42 > S.fromMaybe (0) (S.Nothing) 0 ``` #### `[fromMaybe_](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2147) :: (() -> a) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a -> a` Variant of [`fromMaybe`](#fromMaybe) that takes a thunk so the default value is only computed if required. ``` > function fib(n) { return n <= 1 ? 
n : fib (n - 2) + fib (n - 1); } undefined > S.fromMaybe_ (() => fib (30)) (S.Just (1000000)) 1000000 > S.fromMaybe_ (() => fib (30)) (S.Nothing) 832040 ``` #### `[justs](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2167) :: ([Filterable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Filterable) f, [Functor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Functor) f) => f ([Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a) -> f a` Discards each element that is Nothing, and unwraps each element that is a Just. Related to Haskell's `catMaybes` function. See also [`lefts`](#lefts) and [`rights`](#rights). ``` > S.justs ([S.Just ('foo'), S.Nothing, S.Just ('baz')]) ["foo", "baz"] ``` #### `[mapMaybe](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2187) :: ([Filterable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Filterable) f, [Functor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Functor) f) => (a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) b) -> f a -> f b` Takes a function and a structure, applies the function to each element of the structure, and returns the "successful" results. If the result of applying the function to an element is Nothing, the result is discarded; if the result is a Just, the Just's value is included. ``` > S.mapMaybe (S.head) ([[], [1, 2, 3], [], [4, 5, 6], []]) [1, 4] > S.mapMaybe (S.head) ({x: [1, 2, 3], y: [], z: [4, 5, 6]}) {"x": 1, "z": 4} ``` #### `[maybeToNullable](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2207) :: [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a -> [Nullable](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Nullable) a` Returns the given Maybe's value if the Maybe is a Just; `null` otherwise. [Nullable](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Nullable) is defined in [sanctuary-def](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0). See also [`fromMaybe`](#fromMaybe). ``` > S.maybeToNullable (S.Just (42)) 42 > S.maybeToNullable (S.Nothing) null ``` #### `[maybeToEither](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2230) :: a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) b -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b` Converts a Maybe to an Either. Nothing becomes a Left (containing the first argument); a Just becomes a Right. See also [`eitherToMaybe`](#eitherToMaybe). ``` > S.maybeToEither ('Expecting an integer') (S.parseInt (10) ('xyz')) Left ("Expecting an integer") > S.maybeToEither ('Expecting an integer') (S.parseInt (10) ('42')) Right (42) ``` ### Either The Either type represents values with two possibilities: a value of type `Either a b` is either a Left whose value is of type `a` or a Right whose value is of type `b`. The implementation is provided by [sanctuary-either](https://github.com/sanctuary-js/sanctuary-either/tree/v2.1.0). #### `[Either](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2261) :: [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either)` Either [type representative](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#type-representatives). 
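A hedged sketch of the Either workflow (assuming Node.js with `sanctuary` required as `S`; `parsePort` is an illustrative name): the Left side carries a reason for failure rather than discarding it.

```
// Illustrative sketch only.
const S = require ('sanctuary');

const parsePort = s =>
  S.maybeToEither ('not a number: ' + s) (S.parseInt (10) (s));

console.log (S.show (parsePort ('8080')));  // => "Right (8080)"
console.log (S.show (parsePort ('80a0')));  // => 'Left ("not a number: 80a0")'
```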
#### `[Left](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2265) :: a -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b` Constructs a value of type `Either a b` from a value of type `a`. ``` > S.Left ('Cannot divide by zero') Left ("Cannot divide by zero") ``` #### `[Right](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2279) :: b -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b` Constructs a value of type `Either a b` from a value of type `b`. ``` > S.Right (42) Right (42) ``` #### `[isLeft](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2293) :: [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` if the given Either is a Left; `false` if it is a Right. ``` > S.isLeft (S.Left ('Cannot divide by zero')) true > S.isLeft (S.Right (42)) false ``` #### `[isRight](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2313) :: [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` if the given Either is a Right; `false` if it is a Left. ``` > S.isRight (S.Right (42)) true > S.isRight (S.Left ('Cannot divide by zero')) false ``` #### `[either](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2333) :: (a -> c) -> (b -> c) -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b -> c` Takes two functions and an Either, and returns the result of applying the first function to the Left's value, if the Either is a Left, or the result of applying the second function to the Right's value, if the Either is a Right. See also [`fromLeft`](#fromLeft) and [`fromRight`](#fromRight). ``` > S.either (S.toUpper) (S.show) (S.Left ('Cannot divide by zero')) "CANNOT DIVIDE BY ZERO" > S.either (S.toUpper) (S.show) (S.Right (42)) "42" ``` #### `[fromLeft](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2362) :: a -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b -> a` Takes a default value and an Either, and returns the Left value if the Either is a Left; the default value otherwise. See also [`either`](#either) and [`fromRight`](#fromRight). ``` > S.fromLeft ('abc') (S.Left ('xyz')) "xyz" > S.fromLeft ('abc') (S.Right (123)) "abc" ``` #### `[fromRight](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2385) :: b -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b -> b` Takes a default value and an Either, and returns the Right value if the Either is a Right; the default value otherwise. See also [`either`](#either) and [`fromLeft`](#fromLeft). ``` > S.fromRight (123) (S.Right (789)) 789 > S.fromRight (123) (S.Left ('abc')) 123 ``` #### `[fromEither](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2408) :: b -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b -> b` Takes a default value and an Either, and returns the Right value if the Either is a Right; the default value otherwise. The behaviour of `fromEither` is likely to change in a future release. Please use [`fromRight`](#fromRight) instead. 
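A hedged sketch of the recommended replacement (assuming Node.js with `sanctuary` required as `S`): `fromRight` covers the same use case of extracting a Right with a fallback.

```
// Illustrative sketch only.
const S = require ('sanctuary');

console.log (S.fromRight (80) (S.Right (8080)));           // => 8080
console.log (S.fromRight (80) (S.Left ('missing PORT')));  // => 80
```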
``` > S.fromEither (0) (S.Right (42)) 42 > S.fromEither (0) (S.Left (42)) 0 ``` #### `[lefts](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2432) :: ([Filterable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Filterable) f, [Functor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Functor) f) => f ([Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b) -> f a` Discards each element that is a Right, and unwraps each element that is a Left. See also [`rights`](#rights). ``` > S.lefts ([S.Right (20), S.Left ('foo'), S.Right (10), S.Left ('bar')]) ["foo", "bar"] ``` #### `[rights](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2449) :: ([Filterable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Filterable) f, [Functor](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Functor) f) => f ([Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b) -> f b` Discards each element that is a Left, and unwraps each element that is a Right. See also [`lefts`](#lefts). ``` > S.rights ([S.Right (20), S.Left ('foo'), S.Right (10), S.Left ('bar')]) [20, 10] ``` #### `[tagBy](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2466) :: (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> a -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a a` Takes a predicate and a value, and returns a Right of the value if it satisfies the predicate; a Left of the value otherwise. ``` > S.tagBy (S.odd) (0) Left (0) > S.tagBy (S.odd) (1) Right (1) ``` #### `[encase](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2487) :: Throwing e a b -> a -> [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) e b` Takes a function that may throw and returns a pure function. ``` > S.encase (JSON.parse) ('["foo","bar","baz"]') Right (["foo", "bar", "baz"]) > S.encase (JSON.parse) ('[') Left (new SyntaxError ("Unexpected end of JSON input")) ``` #### `[eitherToMaybe](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2513) :: [Either](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Either) a b -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) b` Converts an Either to a Maybe. A Left becomes Nothing; a Right becomes a Just. See also [`maybeToEither`](#maybeToEither). ``` > S.eitherToMaybe (S.Left ('Cannot divide by zero')) Nothing > S.eitherToMaybe (S.Right (42)) Just (42) ``` ### Logic #### `[and](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2538) :: [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Boolean "and". ``` > S.and (false) (false) false > S.and (false) (true) false > S.and (true) (false) false > S.and (true) (true) true ``` #### `[or](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2566) :: [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Boolean "or". 
``` > S.or (false) (false) false > S.or (false) (true) true > S.or (true) (false) true > S.or (true) (true) true ``` #### `[not](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2594) :: [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Boolean "not". See also [`complement`](#complement). ``` > S.not (false) true > S.not (true) false ``` #### `[complement](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2616) :: (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Takes a unary predicate and a value of any type, and returns the logical negation of applying the predicate to the value. See also [`not`](#not). ``` > Number.isInteger (42) true > S.complement (Number.isInteger) (42) false ``` #### `[boolean](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2636) :: a -> a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean) -> a` Case analysis for the `Boolean` type. `boolean (x) (y) (b)` evaluates to `x` if `b` is `false`; to `y` if `b` is `true`. ``` > S.boolean ('no') ('yes') (false) "no" > S.boolean ('no') ('yes') (true) "yes" ``` #### `[ifElse](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2661) :: (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> (a -> b) -> (a -> b) -> a -> b` Takes a unary predicate, a unary "if" function, a unary "else" function, and a value of any type, and returns the result of applying the "if" function to the value if the value satisfies the predicate; the result of applying the "else" function to the value otherwise. See also [`when`](#when) and [`unless`](#unless). ``` > S.ifElse (x => x < 0) (Math.abs) (Math.sqrt) (-1) 1 > S.ifElse (x => x < 0) (Math.abs) (Math.sqrt) (16) 4 ``` #### `[when](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2693) :: (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> (a -> a) -> a -> a` Takes a unary predicate, a unary function, and a value of any type, and returns the result of applying the function to the value if the value satisfies the predicate; the value otherwise. See also [`unless`](#unless) and [`ifElse`](#ifElse). ``` > S.when (x => x >= 0) (Math.sqrt) (16) 4 > S.when (x => x >= 0) (Math.sqrt) (-1) -1 ``` #### `[unless](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2717) :: (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> (a -> a) -> a -> a` Takes a unary predicate, a unary function, and a value of any type, and returns the result of applying the function to the value if the value does not satisfy the predicate; the value otherwise. See also [`when`](#when) and [`ifElse`](#ifElse). ``` > S.unless (x => x < 0) (Math.sqrt) (16) 4 > S.unless (x => x < 0) (Math.sqrt) (-1) -1 ``` ### Array #### `[array](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2743) :: b -> (a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a -> b) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a -> b` Case analysis for the `Array a` type. 
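A hedged sketch of `array` as a fold over the two shapes of an array (assuming Node.js with `sanctuary` required as `S`; `describe` is an illustrative name):

```
// Illustrative sketch only.
const S = require ('sanctuary');

const describe = S.array ('empty list')
                         (head => tail => 'head ' + head + ' and ' + tail.length + ' more');

console.log (describe ([]));         // => "empty list"
console.log (describe ([1, 2, 3]));  // => "head 1 and 2 more"
```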
``` > S.array (S.Nothing) (head => tail => S.Just (head)) ([]) Nothing > S.array (S.Nothing) (head => tail => S.Just (head)) ([1, 2, 3]) Just (1) > S.array (S.Nothing) (head => tail => S.Just (tail)) ([]) Nothing > S.array (S.Nothing) (head => tail => S.Just (tail)) ([1, 2, 3]) Just ([2, 3]) ``` #### `[head](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2773) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a` Returns Just the first element of the given structure if the structure contains at least one element; Nothing otherwise. ``` > S.head ([1, 2, 3]) Just (1) > S.head ([]) Nothing > S.head (Cons (1) (Cons (2) (Cons (3) (Nil)))) Just (1) > S.head (Nil) Nothing ``` #### `[last](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2806) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a` Returns Just the last element of the given structure if the structure contains at least one element; Nothing otherwise. ``` > S.last ([1, 2, 3]) Just (3) > S.last ([]) Nothing > S.last (Cons (1) (Cons (2) (Cons (3) (Nil)))) Just (3) > S.last (Nil) Nothing ``` #### `[tail](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2838) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (f a)) => f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) (f a)` Returns Just all but the first of the given structure's elements if the structure contains at least one element; Nothing otherwise. ``` > S.tail ([1, 2, 3]) Just ([2, 3]) > S.tail ([]) Nothing > S.tail (Cons (1) (Cons (2) (Cons (3) (Nil)))) Just (Cons (2) (Cons (3) (Nil))) > S.tail (Nil) Nothing ``` #### `[init](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2872) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (f a)) => f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) (f a)` Returns Just all but the last of the given structure's elements if the structure contains at least one element; Nothing otherwise. 
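These safe accessors compose via [`chain`](#chain); a hedged sketch (assuming Node.js with `sanctuary` required as `S`; `trimEnds` is an illustrative name):

```
// Illustrative sketch only: drop both the first and the last element,
// or get Nothing if too few elements remain along the way.
const S = require ('sanctuary');

const trimEnds = xs => S.chain (S.init) (S.tail (xs));

console.log (S.show (trimEnds ([1, 2, 3, 4])));  // => "Just ([2, 3])"
console.log (S.show (trimEnds ([1])));           // => "Nothing"
```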
``` > S.init ([1, 2, 3]) Just ([1, 2]) > S.init ([]) Nothing > S.init (Cons (1) (Cons (2) (Cons (3) (Nil)))) Just (Cons (1) (Cons (2) (Nil))) > S.init (Nil) Nothing ``` #### `[take](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2906) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (f a)) => [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer) -> f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) (f a)` Returns Just the first N elements of the given structure if N is non-negative and less than or equal to the size of the structure; Nothing otherwise. ``` > S.take (0) (['foo', 'bar']) Just ([]) > S.take (1) (['foo', 'bar']) Just (["foo"]) > S.take (2) (['foo', 'bar']) Just (["foo", "bar"]) > S.take (3) (['foo', 'bar']) Nothing > S.take (3) (Cons (1) (Cons (2) (Cons (3) (Cons (4) (Cons (5) (Nil)))))) Just (Cons (1) (Cons (2) (Cons (3) (Nil)))) ``` #### `[drop](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2961) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (f a)) => [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer) -> f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) (f a)` Returns Just all but the first N elements of the given structure if N is non-negative and less than or equal to the size of the structure; Nothing otherwise. ``` > S.drop (0) (['foo', 'bar']) Just (["foo", "bar"]) > S.drop (1) (['foo', 'bar']) Just (["bar"]) > S.drop (2) (['foo', 'bar']) Just ([]) > S.drop (3) (['foo', 'bar']) Nothing > S.drop (3) (Cons (1) (Cons (2) (Cons (3) (Cons (4) (Cons (5) (Nil)))))) Just (Cons (4) (Cons (5) (Nil))) ``` #### `[takeLast](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L2993) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (f a)) => [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer) -> f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) (f a)` Returns Just the last N elements of the given structure if N is non-negative and less than or equal to the size of the structure; Nothing otherwise. 
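A hedged sketch of `takeLast` with a fallback via [`fromMaybe`](#fromMaybe) (assuming Node.js with `sanctuary` required as `S`; `lastThree` is an illustrative name):

```
// Illustrative sketch only: the final three entries, or all of them
// if the array is shorter than three.
const S = require ('sanctuary');

const lastThree = xs => S.fromMaybe (xs) (S.takeLast (3) (xs));

console.log (lastThree ([1, 2, 3, 4, 5]));  // => [3, 4, 5]
console.log (lastThree ([1, 2]));           // => [1, 2]
```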
``` > S.takeLast (0) (['foo', 'bar']) Just ([]) > S.takeLast (1) (['foo', 'bar']) Just (["bar"]) > S.takeLast (2) (['foo', 'bar']) Just (["foo", "bar"]) > S.takeLast (3) (['foo', 'bar']) Nothing > S.takeLast (3) (Cons (1) (Cons (2) (Cons (3) (Cons (4) (Nil))))) Just (Cons (2) (Cons (3) (Cons (4) (Nil)))) ``` #### `[dropLast](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3026) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (f a)) => [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer) -> f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) (f a)` Returns Just all but the last N elements of the given structure if N is non-negative and less than or equal to the size of the structure; Nothing otherwise. ``` > S.dropLast (0) (['foo', 'bar']) Just (["foo", "bar"]) > S.dropLast (1) (['foo', 'bar']) Just (["foo"]) > S.dropLast (2) (['foo', 'bar']) Just ([]) > S.dropLast (3) (['foo', 'bar']) Nothing > S.dropLast (3) (Cons (1) (Cons (2) (Cons (3) (Cons (4) (Nil))))) Just (Cons (1) (Nil)) ``` #### `[takeWhile](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3059) :: (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a` Discards the first element that does not satisfy the predicate, and all subsequent elements. See also [`dropWhile`](#dropWhile). ``` > S.takeWhile (S.odd) ([3, 3, 3, 7, 6, 3, 5, 4]) [3, 3, 3, 7] > S.takeWhile (S.even) ([3, 3, 3, 7, 6, 3, 5, 4]) [] ``` #### `[dropWhile](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3086) :: (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a` Retains the first element that does not satisfy the predicate, and all subsequent elements. See also [`takeWhile`](#takeWhile). ``` > S.dropWhile (S.odd) ([3, 3, 3, 7, 6, 3, 5, 4]) [6, 3, 5, 4] > S.dropWhile (S.even) ([3, 3, 3, 7, 6, 3, 5, 4]) [3, 3, 3, 7, 6, 3, 5, 4] ``` #### `[size](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3113) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => f a -> [NonNegativeInteger](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#NonNegativeInteger)` Returns the number of elements of the given structure. ``` > S.size ([]) 0 > S.size (['foo', 'bar', 'baz']) 3 > S.size (Nil) 0 > S.size (Cons ('foo') (Cons ('bar') (Cons ('baz') (Nil)))) 3 > S.size (S.Nothing) 0 > S.size (S.Just ('quux')) 1 > S.size (S.Pair ('ignored!') ('counted!')) 1 ``` #### `[all](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3145) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> f a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) all the elements of the structure satisfy the predicate. 
See also [`any`](#any) and [`none`](#none). ``` > S.all (S.odd) ([]) true > S.all (S.odd) ([1, 3, 5]) true > S.all (S.odd) ([1, 2, 3]) false ``` #### `[any](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3168) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> f a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) any element of the structure satisfies the predicate. See also [`all`](#all) and [`none`](#none). ``` > S.any (S.odd) ([]) false > S.any (S.odd) ([2, 4, 6]) false > S.any (S.odd) ([1, 2, 3]) true ``` #### `[none](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3191) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> f a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) none of the elements of the structure satisfies the predicate. Properties: * `forall p :: a -> Boolean, xs :: Foldable f => f a. S.none (p) (xs) = S.not (S.any (p) (xs))` * `forall p :: a -> Boolean, xs :: Foldable f => f a. S.none (p) (xs) = S.all (S.complement (p)) (xs)` See also [`all`](#all) and [`any`](#any). ``` > S.none (S.odd) ([]) true > S.none (S.odd) ([2, 4, 6]) true > S.none (S.odd) ([1, 2, 3]) false ``` #### `[append](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3222) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Semigroup](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Semigroup) (f a)) => a -> f a -> f a` Returns the result of appending the first argument to the second. See also [`prepend`](#prepend). ``` > S.append (3) ([1, 2]) [1, 2, 3] > S.append (3) (Cons (1) (Cons (2) (Nil))) Cons (1) (Cons (2) (Cons (3) (Nil))) > S.append ([1]) (S.Nothing) Just ([1]) > S.append ([3]) (S.Just ([1, 2])) Just ([1, 2, 3]) ``` #### `[prepend](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3252) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Semigroup](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Semigroup) (f a)) => a -> f a -> f a` Returns the result of prepending the first argument to the second. See also [`append`](#append). ``` > S.prepend (1) ([2, 3]) [1, 2, 3] > S.prepend (1) (Cons (2) (Cons (3) (Nil))) Cons (1) (Cons (2) (Cons (3) (Nil))) > S.prepend ([1]) (S.Nothing) Just ([1]) > S.prepend ([1]) (S.Just ([2, 3])) Just ([1, 2, 3]) ``` #### `[joinWith](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3277) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Joins the strings of the second argument separated by the first argument. Properties: * `forall s :: String, t :: String. S.joinWith (s) (S.splitOn (s) (t)) = t` See also [`splitOn`](#splitOn) and [`intercalate`](#intercalate). 
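A hedged sketch of the round-trip property above (assuming Node.js with `sanctuary` required as `S`):

```
// Illustrative sketch only.
const S = require ('sanctuary');

const csv    = 'a,b,c';
const fields = S.splitOn (',') (csv);      // ["a", "b", "c"]

console.log (S.joinWith (',') (fields) === csv);  // => true
console.log (S.joinWith (';') (fields));          // => "a;b;c"
```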
``` > S.joinWith (':') (['foo', 'bar', 'baz']) "foo:bar:baz" ``` #### `[elem](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3298) :: ([Setoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Setoid) a, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f) => a -> f a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Takes a value and a structure and returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) the value is an element of the structure. See also [`find`](#find). ``` > S.elem ('c') (['a', 'b', 'c']) true > S.elem ('x') (['a', 'b', 'c']) false > S.elem (3) ({x: 1, y: 2, z: 3}) true > S.elem (8) ({x: 1, y: 2, z: 3}) false > S.elem (0) (S.Just (0)) true > S.elem (0) (S.Just (1)) false > S.elem (0) (S.Nothing) false ``` #### `[find](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3333) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => (a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> f a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a` Takes a predicate and a structure and returns Just the leftmost element of the structure that satisfies the predicate; Nothing if there is no such element. See also [`elem`](#elem). ``` > S.find (S.lt (0)) ([1, -2, 3, -4, 5]) Just (-2) > S.find (S.lt (0)) ([1, 2, 3, 4, 5]) Nothing ``` #### `[intercalate](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3365) :: ([Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) m, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f) => m -> f m -> m` Curried version of [`Z.intercalate`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#intercalate). Concatenates the elements of the given structure, separating each pair of adjacent elements with the given separator. See also [`joinWith`](#joinWith). ``` > S.intercalate (', ') ([]) "" > S.intercalate (', ') (['foo', 'bar', 'baz']) "foo, bar, baz" > S.intercalate (', ') (Nil) "" > S.intercalate (', ') (Cons ('foo') (Cons ('bar') (Cons ('baz') (Nil)))) "foo, bar, baz" > S.intercalate ([0, 0, 0]) ([]) [] > S.intercalate ([0, 0, 0]) ([[1], [2, 3], [4, 5, 6], [7, 8], [9]]) [1, 0, 0, 0, 2, 3, 0, 0, 0, 4, 5, 6, 0, 0, 0, 7, 8, 0, 0, 0, 9] ``` #### `[foldMap](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3398) :: ([Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) m, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f) => [TypeRep](https://github.com/fantasyland/fantasy-land#type-representatives) m -> (a -> m) -> f a -> m` Curried version of [`Z.foldMap`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#foldMap). Deconstructs a foldable by mapping every element to a monoid and concatenating the results. 
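A hedged sketch of `foldMap` with the Array monoid, which maps and flattens in one pass (assuming Node.js with `sanctuary` required as `S`; `tags` is an illustrative name):

```
// Illustrative sketch only.
const S = require ('sanctuary');

const tags = S.foldMap (Array) (S.words);

console.log (tags (['red green', 'blue']));  // => ["red", "green", "blue"]
console.log (tags (S.Just ('red green')));   // => ["red", "green"]
console.log (tags (S.Nothing));              // => []
```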
``` > S.foldMap (String) (f => f.name) ([Math.sin, Math.cos, Math.tan]) "sincostan" > S.foldMap (Array) (x => [x + 1, x + 2]) ([10, 20, 30]) [11, 12, 21, 22, 31, 32] ``` #### `[unfoldr](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3416) :: (b -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) ([Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) a b)) -> b -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a` Takes a function and a seed value, and returns an array generated by applying the function repeatedly. The array is initially empty. The function is initially applied to the seed value. Each application of the function should result in either: * Nothing, in which case the array is returned; or * Just a pair, in which case the first element is appended to the array and the function is applied to the second element. ``` > S.unfoldr (n => n < 1000 ? S.Just (S.Pair (n) (2 * n)) : S.Nothing) (1) [1, 2, 4, 8, 16, 32, 64, 128, 256, 512] ``` #### `[range](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3447) :: [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer) -> [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer)` Returns an array of consecutive integers starting with the first argument and ending with the second argument minus one. Returns `[]` if the second argument is less than or equal to the first argument. ``` > S.range (0) (10) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] > S.range (-5) (0) [-5, -4, -3, -2, -1] > S.range (0) (-5) [] ``` #### `[groupBy](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3476) :: (a -> a -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) ([Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a)` Splits its array argument into an array of arrays of equal, adjacent elements. Equality is determined by the function provided as the first argument. Its behaviour can be surprising for functions that aren't reflexive, transitive, and symmetric (see [equivalence](https://en.wikipedia.org/wiki/Equivalence_relation) relation). Properties: * `forall f :: a -> a -> Boolean, xs :: Array a. S.join (S.groupBy (f) (xs)) = xs` ``` > S.groupBy (S.equals) ([1, 1, 2, 1, 1]) [[1, 1], [2], [1, 1]] > S.groupBy (x => y => x + y === 0) ([2, -3, 3, 3, 3, 4, -4, 4]) [[2], [-3, 3, 3, 3], [4, -4], [4]] ``` #### `[reverse](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3515) :: ([Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) f, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (f a)) => f a -> f a` Reverses the elements of the given structure. 
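A hedged sketch combining `reverse` with `splitOn`, `joinWith`, and `equals` (assuming Node.js with `sanctuary` required as `S`; `isPalindrome` is an illustrative name):

```
// Illustrative sketch only.
const S = require ('sanctuary');

const isPalindrome = s =>
  S.equals (s) (S.joinWith ('') (S.reverse (S.splitOn ('') (s))));

console.log (isPalindrome ('level'));   // => true
console.log (isPalindrome ('levels'));  // => false
```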
``` > S.reverse ([1, 2, 3]) [3, 2, 1] > S.reverse (Cons (1) (Cons (2) (Cons (3) (Nil)))) Cons (3) (Cons (2) (Cons (1) (Nil))) > S.pipe ([S.splitOn (''), S.reverse, S.joinWith ('')]) ('abc') "cba" ``` #### `[sort](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3535) :: ([Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) a, [Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) m, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) m, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (m a)) => m a -> m a` Performs a [stable sort](https://en.wikipedia.org/wiki/Sorting_algorithm#Stability) of the elements of the given structure, using [`Z.lte`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#lte) for comparisons. Properties: * `S.sort (S.sort (m)) = S.sort (m)` (idempotence) See also [`sortBy`](#sortBy). ``` > S.sort (['foo', 'bar', 'baz']) ["bar", "baz", "foo"] > S.sort ([S.Left (4), S.Right (3), S.Left (2), S.Right (1)]) [Left (2), Left (4), Right (1), Right (3)] ``` #### `[sortBy](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3559) :: ([Ord](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Ord) b, [Applicative](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Applicative) m, [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) m, [Monoid](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Monoid) (m a)) => (a -> b) -> m a -> m a` Performs a [stable sort](https://en.wikipedia.org/wiki/Sorting_algorithm#Stability) of the elements of the given structure, using [`Z.lte`](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#lte) to compare the values produced by applying the given function to each element of the structure. Properties: * `S.sortBy (f) (S.sortBy (f) (m)) = S.sortBy (f) (m)` (idempotence) See also [`sort`](#sort). ``` > S.sortBy (S.prop ('rank')) ([{rank: 7, suit: 'spades'}, {rank: 5, suit: 'hearts'}, {rank: 2, suit: 'hearts'}, {rank: 5, suit: 'spades'}]) [{"rank": 2, "suit": "hearts"}, {"rank": 5, "suit": "hearts"}, {"rank": 5, "suit": "spades"}, {"rank": 7, "suit": "spades"}] > S.sortBy (S.prop ('suit')) ([{rank: 7, suit: 'spades'}, {rank: 5, suit: 'hearts'}, {rank: 2, suit: 'hearts'}, {rank: 5, suit: 'spades'}]) [{"rank": 5, "suit": "hearts"}, {"rank": 2, "suit": "hearts"}, {"rank": 7, "suit": "spades"}, {"rank": 5, "suit": "spades"}] ``` If descending order is desired, one may use [`Descending`](https://github.com/sanctuary-js/sanctuary-descending/tree/v2.1.0#Descending): ``` > S.sortBy (Descending) ([83, 97, 110, 99, 116, 117, 97, 114, 121]) [121, 117, 116, 114, 110, 99, 97, 97, 83] ``` #### `[zip](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3607) :: [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) b -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) ([Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) a b)` Returns an array of pairs of corresponding elements from the given arrays. The length of the resulting array is equal to the length of the shorter input array. See also [`zipWith`](#zipWith). 
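A hedged sketch pairing `zip` with [`fromPairs`](#fromPairs) to build a string map from parallel arrays (assuming Node.js with `sanctuary` required as `S`):

```
// Illustrative sketch only.
const S = require ('sanctuary');

const keys = ['host', 'port'];
const vals = ['localhost', '8080'];

console.log (S.fromPairs (S.zip (keys) (vals)));
// => {"host": "localhost", "port": "8080"}
```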
``` > S.zip (['a', 'b']) (['x', 'y', 'z']) [Pair ("a") ("x"), Pair ("b") ("y")] > S.zip ([1, 3, 5]) ([2, 4]) [Pair (1) (2), Pair (3) (4)] ``` #### `[zipWith](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3628) :: (a -> b -> c) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) b -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) c` Returns the result of combining, pairwise, the given arrays using the given binary function. The length of the resulting array is equal to the length of the shorter input array. See also [`zip`](#zip). ``` > S.zipWith (a => b => a + b) (['a', 'b']) (['x', 'y', 'z']) ["ax", "by"] > S.zipWith (a => b => [a, b]) ([1, 3, 5]) ([2, 4]) [[1, 2], [3, 4]] ``` ### Object #### `[prop](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3663) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> a -> b` Takes a property name and an object with known properties and returns the value of the specified property. If for some reason the object lacks the specified property, a type error is thrown. For accessing properties of uncertain objects, use [`get`](#get) instead. For accessing string map values by key, use [`value`](#value) instead. ``` > S.prop ('a') ({a: 1, b: 2}) 1 ``` #### `[props](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3690) :: [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> a -> b` Takes a property path (an array of property names) and an object with known structure and returns the value at the given path. If for some reason the path does not exist, a type error is thrown. For accessing property paths of uncertain objects, use [`gets`](#gets) instead. ``` > S.props (['a', 'b', 'c']) ({a: {b: {c: 1}}}) 1 ``` #### `[get](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3719) :: ([Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) b` Takes a predicate, a property name, and an object and returns Just the value of the specified object property if it exists and the value satisfies the given predicate; Nothing otherwise. See also [`gets`](#gets), [`prop`](#prop), and [`value`](#value). 
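A hedged sketch of `get` with a typed default (assuming Node.js with `sanctuary` and `sanctuary-def` installed, required as `S` and `$` as in the examples below; `portOf` is an illustrative name):

```
// Illustrative sketch only.
const S = require ('sanctuary');
const $ = require ('sanctuary-def');

const portOf = config =>
  S.fromMaybe (80) (S.get (S.is ($.Number)) ('port') (config));

console.log (portOf ({port: 8080}));    // => 8080
console.log (portOf ({port: '8080'}));  // => 80
console.log (portOf ({}));              // => 80
```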
``` > S.get (S.is ($.Number)) ('x') ({x: 1, y: 2}) Just (1) > S.get (S.is ($.Number)) ('x') ({x: '1', y: '2'}) Nothing > S.get (S.is ($.Number)) ('x') ({}) Nothing > S.get (S.is ($.Array ($.Number))) ('x') ({x: [1, 2, 3]}) Just ([1, 2, 3]) > S.get (S.is ($.Array ($.Number))) ('x') ({x: [1, 2, 3, null]}) Nothing ``` #### `[gets](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3752) :: ([Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) b` Takes a predicate, a property path (an array of property names), and an object and returns Just the value at the given path if such a path exists and the value satisfies the given predicate; Nothing otherwise. See also [`get`](#get). ``` > S.gets (S.is ($.Number)) (['a', 'b', 'c']) ({a: {b: {c: 42}}}) Just (42) > S.gets (S.is ($.Number)) (['a', 'b', 'c']) ({a: {b: {c: '42'}}}) Nothing > S.gets (S.is ($.Number)) (['a', 'b', 'c']) ({}) Nothing ``` ### StrMap StrMap is an abbreviation of *string map*. A string map is an object, such as `{foo: 1, bar: 2, baz: 3}`, whose values are all members of the same type. Formally, a value is a member of type `StrMap a` if its [type identifier](https://github.com/sanctuary-js/sanctuary-type-identifiers/tree/v3.0.0) is `'Object'` and the values of its enumerable own properties are all members of type `a`. #### `[value](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3793) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a` Retrieve the value associated with the given key in the given string map. Formally, `value (k) (m)` evaluates to `Just (m[k])` if `k` is an enumerable own property of `m`; `Nothing` otherwise. See also [`prop`](#prop) and [`get`](#get). ``` > S.value ('foo') ({foo: 1, bar: 2}) Just (1) > S.value ('bar') ({foo: 1, bar: 2}) Just (2) > S.value ('baz') ({foo: 1, bar: 2}) Nothing ``` #### `[singleton](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3825) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> a -> [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a` Takes a string and a value of any type, and returns a string map with a single entry (mapping the key to the value). ``` > S.singleton ('foo') (42) {"foo": 42} ``` #### `[insert](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3847) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> a -> [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a -> [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a` Takes a string, a value of any type, and a string map, and returns a string map comprising all the entries of the given string map plus the entry specified by the first two arguments (which takes precedence). Equivalent to Haskell's `insert` function. Similar to Clojure's `assoc` function. 
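A hedged sketch emphasizing that `insert` returns a new string map rather than mutating its argument (assuming Node.js with `sanctuary` required as `S`):

```
// Illustrative sketch only.
const S = require ('sanctuary');

const base     = {a: 1, b: 2};
const extended = S.insert ('c') (3) (base);

console.log (extended);  // => {"a": 1, "b": 2, "c": 3}
console.log (base);      // => {"a": 1, "b": 2}
```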
``` > S.insert ('c') (3) ({a: 1, b: 2}) {"a": 1, "b": 2, "c": 3} > S.insert ('a') (4) ({a: 1, b: 2}) {"a": 4, "b": 2} ``` #### `[remove](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3876) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a -> [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a` Takes a string and a string map, and returns a string map comprising all the entries of the given string map except the one whose key matches the given string (if such a key exists). Equivalent to Haskell's `delete` function. Similar to Clojure's `dissoc` function. ``` > S.remove ('c') ({a: 1, b: 2, c: 3}) {"a": 1, "b": 2} > S.remove ('c') ({}) {} ``` #### `[keys](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3905) :: [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Returns the keys of the given string map, in arbitrary order. ``` > S.sort (S.keys ({b: 2, c: 3, a: 1})) ["a", "b", "c"] ``` #### `[values](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3919) :: [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) a` Returns the values of the given string map, in arbitrary order. ``` > S.sort (S.values ({a: 1, c: 3, b: 2})) [1, 2, 3] ``` #### `[pairs](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3936) :: [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) ([Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) a)` Returns the key–value pairs of the given string map, in arbitrary order. ``` > S.sort (S.pairs ({b: 2, a: 1, c: 3})) [Pair ("a") (1), Pair ("b") (2), Pair ("c") (3)] ``` #### `[fromPairs](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3954) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => f ([Pair](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Pair) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) a) -> [StrMap](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#StrMap) a` Returns a string map containing the key–value pairs specified by the given [Foldable](https://github.com/fantasyland/fantasy-land/tree/v4.0.1#foldable). If a key appears in multiple pairs, the rightmost pair takes precedence. ``` > S.fromPairs ([S.Pair ('a') (1), S.Pair ('b') (2), S.Pair ('c') (3)]) {"a": 1, "b": 2, "c": 3} > S.fromPairs ([S.Pair ('x') (1), S.Pair ('x') (2)]) {"x": 2} ``` ### Number #### `[negate](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L3981) :: [ValidNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#ValidNumber) -> [ValidNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#ValidNumber)` Negates its argument. 
``` > S.negate (12.5) -12.5 > S.negate (-42) 42 ``` #### `[add](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4001) :: [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber)` Returns the sum of two (finite) numbers. ``` > S.add (1) (1) 2 ``` #### `[sum](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4020) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => f [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber)` Returns the sum of the given array of (finite) numbers. ``` > S.sum ([1, 2, 3, 4, 5]) 15 > S.sum ([]) 0 > S.sum (S.Just (42)) 42 > S.sum (S.Nothing) 0 ``` #### `[sub](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4043) :: [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber)` Takes a finite number `n` and returns the *subtract `n`* function. ``` > S.map (S.sub (1)) ([1, 2, 3]) [0, 1, 2] ``` #### `[mult](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4062) :: [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber)` Returns the product of two (finite) numbers. ``` > S.mult (4) (2) 8 ``` #### `[product](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4081) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => f [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber)` Returns the product of the given array of (finite) numbers. ``` > S.product ([1, 2, 3, 4, 5]) 120 > S.product ([]) 1 > S.product (S.Just (42)) 42 > S.product (S.Nothing) 1 ``` #### `[div](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4104) :: [NonZeroFiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#NonZeroFiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber)` Takes a non-zero finite number `n` and returns the *divide by `n`* function. ``` > S.map (S.div (2)) ([0, 1, 2, 3]) [0, 0.5, 1, 1.5] ``` #### `[pow](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4124) :: [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber)` Takes a finite number `n` and returns the *power of `n`* function. 
``` > S.map (S.pow (2)) ([-3, -2, -1, 0, 1, 2, 3]) [9, 4, 1, 0, 1, 4, 9] > S.map (S.pow (0.5)) ([1, 4, 9, 16, 25]) [1, 2, 3, 4, 5] ``` #### `[mean](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4146) :: [Foldable](https://github.com/sanctuary-js/sanctuary-type-classes/tree/v12.1.0#Foldable) f => f [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [FiniteNumber](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#FiniteNumber)` Returns the mean of the given array of (finite) numbers. ``` > S.mean ([1, 2, 3, 4, 5]) Just (3) > S.mean ([]) Nothing > S.mean (S.Just (42)) Just (42) > S.mean (S.Nothing) Nothing ``` ### Integer #### `[even](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4183) :: [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` if the given integer is even; `false` if it is odd. ``` > S.even (42) true > S.even (99) false ``` #### `[odd](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4203) :: [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Returns `true` if the given integer is odd; `false` if it is even. ``` > S.odd (99) true > S.odd (42) false ``` ### Parse #### `[parseDate](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4225) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [ValidDate](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#ValidDate)` Takes a string `s` and returns `Just (new Date (s))` if `new Date (s)` evaluates to a [`ValidDate`](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#ValidDate) value; Nothing otherwise. As noted in [#488](https://github.com/sanctuary-js/sanctuary/issues/488), this function's behaviour is unspecified for some inputs! [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date) warns against using the `Date` constructor to parse date strings: > **Note:** parsing of date strings with the `Date` constructor […] is strongly discouraged due to browser differences and inconsistencies. Support for RFC 2822 format strings is by convention only. Support for ISO 8601 formats differs in that date-only strings (e.g. "1970-01-01") are treated as UTC, not local. ``` > S.parseDate ('2011-01-19T17:40:00Z') Just (new Date ("2011-01-19T17:40:00.000Z")) > S.parseDate ('today') Nothing ``` #### `[parseFloat](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4291) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [Number](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Number)` Takes a string and returns Just the number represented by the string if it does in fact represent a number; Nothing otherwise. 
``` > S.parseFloat ('-123.45') Just (-123.45) > S.parseFloat ('foo.bar') Nothing ``` #### `[parseInt](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4319) :: Radix -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [Integer](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Integer)` Takes a radix (an integer between 2 and 36 inclusive) and a string, and returns Just the number represented by the string if it does in fact represent a number in the base specified by the radix; Nothing otherwise. This function is stricter than [`parseInt`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt): a string is considered to represent an integer only if all its non-prefix characters are members of the character set specified by the radix. ``` > S.parseInt (10) ('-42') Just (-42) > S.parseInt (16) ('0xFF') Just (255) > S.parseInt (16) ('0xGG') Nothing ``` #### `[parseJson](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4359) :: ([Any](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Any) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) a` Takes a predicate and a string that may or may not be valid JSON, and returns Just the result of applying `JSON.parse` to the string *if* the result satisfies the predicate; Nothing otherwise. ``` > S.parseJson (S.is ($.Array ($.Integer))) ('[') Nothing > S.parseJson (S.is ($.Array ($.Integer))) ('["1","2","3"]') Nothing > S.parseJson (S.is ($.Array ($.Integer))) ('[0,1.5,3,4.5]') Nothing > S.parseJson (S.is ($.Array ($.Integer))) ('[1,2,3]') Just ([1, 2, 3]) ``` ### RegExp #### `[regex](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4411) :: [RegexFlags](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#RegexFlags) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [RegExp](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#RegExp)` Takes a [RegexFlags](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#RegexFlags) and a pattern, and returns a RegExp. ``` > S.regex ('g') (':\\d+:') /:\d+:/g ``` #### `[regexEscape](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4430) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Takes a string that may contain regular expression metacharacters, and returns a string with those metacharacters escaped. Properties: * `forall s :: String. S.test (S.regex ('') (S.regexEscape (s))) (s) = true` ``` > S.regexEscape ('-=*{XYZ}*=-') "\\-=\*\\{XYZ\\}\*=\\-" ``` #### `[test](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4453) :: [RegExp](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#RegExp) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Boolean](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Boolean)` Takes a pattern and a string, and returns `true` [iff](https://en.wikipedia.org/wiki/If_and_only_if) the pattern matches the string. 
``` > S.test (/^a/) ('abacus') true > S.test (/^a/) ('banana') false ``` #### `[match](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4476) :: [NonGlobalRegExp](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#NonGlobalRegExp) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) { match :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String), groups :: [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) ([Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)) }` Takes a pattern and a string, and returns Just a match record if the pattern matches the string; Nothing otherwise. `groups :: Array (Maybe String)` acknowledges the existence of optional capturing groups. Properties: * `forall p :: Pattern, s :: String. S.head (S.matchAll (S.regex ('g') (p)) (s)) = S.match (S.regex ('') (p)) (s)` See also [`matchAll`](#matchAll). ``` > S.match (/(good)?bye/) ('goodbye') Just ({"groups": [Just ("good")], "match": "goodbye"}) > S.match (/(good)?bye/) ('bye') Just ({"groups": [Nothing], "match": "bye"}) ``` #### `[matchAll](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4511) :: [GlobalRegExp](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#GlobalRegExp) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) { match :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String), groups :: [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) ([Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)) }` Takes a pattern and a string, and returns an array of match records. `groups :: Array (Maybe String)` acknowledges the existence of optional capturing groups. See also [`match`](#match). ``` > S.matchAll (/@([a-z]+)/g) ('Hello, world!') [] > S.matchAll (/@([a-z]+)/g) ('Hello, @foo! Hello, @bar! Hello, @baz!') [{"groups": [Just ("foo")], "match": "@foo"}, {"groups": [Just ("bar")], "match": "@bar"}, {"groups": [Just ("baz")], "match": "@baz"}] ``` ### String #### `[toUpper](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4548) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Returns the upper-case equivalent of its argument. See also [`toLower`](#toLower). ``` > S.toUpper ('ABC def 123') "ABC DEF 123" ``` #### `[toLower](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4564) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Returns the lower-case equivalent of its argument. See also [`toUpper`](#toUpper). ``` > S.toLower ('ABC def 123') "abc def 123" ``` #### `[trim](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4580) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Strips leading and trailing whitespace characters. 
``` > S.trim ('\t\t foo bar \n') "foo bar" ``` #### `[stripPrefix](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4594) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Returns Just the portion of the given string (the second argument) left after removing the given prefix (the first argument) if the string starts with the prefix; Nothing otherwise. See also [`stripSuffix`](#stripSuffix). ``` > S.stripPrefix ('https://') ('https://sanctuary.js.org') Just ("sanctuary.js.org") > S.stripPrefix ('https://') ('http://sanctuary.js.org') Nothing ``` #### `[stripSuffix](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4621) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Maybe](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Maybe) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Returns Just the portion of the given string (the second argument) left after removing the given suffix (the first argument) if the string ends with the suffix; Nothing otherwise. See also [`stripPrefix`](#stripPrefix). ``` > S.stripSuffix ('.md') ('README.md') Just ("README") > S.stripSuffix ('.md') ('README') Nothing ``` #### `[words](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4648) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Takes a string and returns the array of words the string contains (words are delimited by whitespace characters). See also [`unwords`](#unwords). ``` > S.words (' foo bar baz ') ["foo", "bar", "baz"] ``` #### `[unwords](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4671) :: [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Takes an array of words and returns the result of joining the words with separating spaces. See also [`words`](#words). ``` > S.unwords (['foo', 'bar', 'baz']) "foo bar baz" ``` #### `[lines](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4688) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Takes a string and returns the array of lines the string contains (lines are delimited by newlines: `'\n'` or `'\r\n'` or `'\r'`). The resulting strings do not contain newlines. See also [`unlines`](#unlines). 
``` > S.lines ('foo\nbar\nbaz\n') ["foo", "bar", "baz"] ``` #### `[unlines](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4710) :: [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Takes an array of lines and returns the result of joining the lines after appending a terminating line feed (`'\n'`) to each. See also [`lines`](#lines). ``` > S.unlines (['foo', 'bar', 'baz']) "foo\nbar\nbaz\n" ``` #### `[splitOn](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4730) :: [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Returns the substrings of its second argument separated by occurrences of its first argument. See also [`joinWith`](#joinWith) and [`splitOnRegex`](#splitOnRegex). ``` > S.splitOn ('::') ('foo::bar::baz') ["foo", "bar", "baz"] ``` #### `[splitOnRegex](https://github.com/sanctuary-js/sanctuary/blob/v3.1.0/index.js#L4747) :: [GlobalRegExp](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#GlobalRegExp) -> [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String) -> [Array](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#Array) [String](https://github.com/sanctuary-js/sanctuary-def/tree/v0.22.0#String)` Takes a pattern and a string, and returns the result of splitting the string at every non-overlapping occurrence of the pattern. Properties: * `forall s :: String, t :: String. S.joinWith (s) (S.splitOnRegex (S.regex ('g') (S.regexEscape (s))) (t)) = t` See also [`splitOn`](#splitOn). ``` > S.splitOnRegex (/[,;][ ]*/g) ('foo, bar, baz') ["foo", "bar", "baz"] > S.splitOnRegex (/[,;][ ]*/g) ('foo;bar;baz') ["foo", "bar", "baz"] ``` © 2020 Sanctuary © 2016 Plaid Technologies, Inc. Licensed under the MIT License. <https://sanctuary.js.org/sanctuary
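As a closing illustration (not part of the Sanctuary reference above, and assuming the usual setup of binding `S` to the `sanctuary` module), the sketch below combines several of the documented functions — `splitOn`, `Pair`, `fromPairs`, `map`, `value`, and `parseInt` — together with Sanctuary's `chain` to turn a small query string into a string map and read one of its entries as an integer. The helper name `toPair` and the variable `params` are illustrative only.

```
const S = require ('sanctuary');

//    toPair :: String -> Pair String String
//    e.g. 'a=1'  ->  Pair ('a') ('1')
const toPair = s => {
  const kv = S.splitOn ('=') (s);
  return S.Pair (kv[0]) (kv[1]);
};

//    params :: StrMap String
const params = S.fromPairs (S.map (toPair) (S.splitOn ('&') ('a=1&b=2')));
// {"a": "1", "b": "2"}

// Look up 'a' safely, then parse it in base 10.
S.chain (S.parseInt (10)) (S.value ('a') (params));
// => Just (1)
```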
hydroroute
cran
R
Package ‘hydroroute’ February 8, 2023 Type Package Title Trace Longitudinal Hydropeaking Waves Version 0.1.2 Description Implements an empirical approach referred to as PeakTrace which uses multiple hydrographs to detect and follow hydropower plant-specific hydropeaking waves at the sub-catchment scale and to describe how hydropeaking flow parameters change along the longitudinal flow path. The method is based on the identification of associated events and uses (linear) regression models to describe translation and retention processes between neighboring hydrographs. Several regression model results are combined to arrive at a power plant-specific model. The approach is proposed and validated in Greimel et al. (2022) <doi:10.1002/rra.3978>. The identification of associated events is based on the event detection implemented in 'hydropeak'. License GPL-2 Encoding UTF-8 Depends R (>= 4.1.0) Imports dplyr, ggpmisc, ggplot2, gridExtra, hydropeak, lubridate, parallel, reshape2, stats, utils, scales RoxygenNote 7.2.3 Suggests rmarkdown, knitr VignetteBuilder knitr NeedsCompilation no Author <NAME> [cre, ctb] (<https://orcid.org/0000-0001-7265-4773>), <NAME> [aut], <NAME> [ctb] (<https://orcid.org/0000-0002-8000-1227>) Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-02-08 13:20:02 UTC R topics documented: estimate_AE ... 2 extract_AE ... 4 get_lag ... 5 get_lag_dir ... 7 get_lag_file ... 9 merge_time ... 11 peaktrace ... 12 routing ... 14 estimate_AE Estimate Associated Events Description For two neighboring stations, potential associated events (AEs) are determined according to the time lag and metric (amplitude) difference allowed. For all potential AEs, parabolas are fitted to the histogram obtained for the relative difference in amplitude binned into intervals from -1 to 1 of width 0.1 by fixing the vertex at the inner maximum of the histogram; the width is determined by minimizing the average squared distances between the parabola and the histogram data along arbitrary symmetric ranges from the inner maximum. Based on the fitted parabola, cut points with the x-axis are determined such that only those potential AEs are retained where the relative difference is within these cut points. If this automatic scheme does not succeed in determining suitable cut points, e.g., because the estimated cut points are outside -1 and 1, then a strict criterion for the relative difference in amplitude is imposed to identify AEs, considering only deviations of at most 10%. Usage estimate_AE( Sx, Sy, relation, timeLag = c(1, 1, 1), metricLag = c(1, 1), unique = c("time", "metric"), TimeFormat = "%Y-%m-%d %H:%M", tz = "Etc/GMT-1", settings = NULL ) Arguments Sx Data frame that consists of flow fluctuation events and computed metrics (see hydropeak::get_events()) of an upstream hydrograph Sx. Sy Data frame that consists of flow fluctuation events and computed metrics (see hydropeak::get_events()) of a downstream hydrograph Sy. relation Data frame that contains the relation between upstream and downstream hydrograph. Must only contain two rows (one for each hydrograph) in order of their location in downstream direction. See the appended example data relation.csv or the vignette for details on the structure. See get_lag() for further information about the relation and the lag between the hydrographs. timeLag Numeric vector specifying factors to alter the interval to capture events from the downstream hydrograph.
By default it is timeLag = c(1, 1, 1), which refers to matches within a time slot ± the mean translation time from relation. For exact time matches, timeLag = c(0, 1, 0) must be specified. metricLag Numeric vector specifying factors to alter the interval of relative metric deviations to capture events from the downstream hydrograph. By default, it is metricLag = c(1, 1), such that events are filtered where the amplitude at Sy is at least 0, i.e., amplitude at Sx − 1 · amplitude at Sx, and at most two times the amplitude at Sx, i.e., amplitude at Sx + 1 · amplitude at Sx. For exact matches, metricLag = c(0, 0) must be specified. unique Character string specifying if the potential AEs which meet the timeLag and metricLag condition should be filtered to contain only unique events using "time", i.e., by selecting those where the time difference is smallest compared to the specified factor of the mean translation time, or using "metric", i.e., by selecting those where the relative difference in amplitude is smallest (default: "time"). TimeFormat Character string giving the date-time format of the date-time column in the input data frame (default: "%Y-%m-%d %H:%M"). tz Character string specifying the time zone to be used for the conversion (default: "Etc/GMT-1"). settings Data.frame with 3 rows and columns station.x, station.y, bound, lag, metric. lag needs to correspond to the unique value specified in argument timeLag and bound needs to contain "lower", "inner", "upper". Value A nested list containing the estimated settings, the histogram obtained for the relative difference data with estimated cut points, and the obtained “real” AEs. Examples # file paths Sx <- system.file("testdata", "Events", "100000_2_2014-01-01_2014-02-28.csv", package = "hydroroute") Sy <- system.file("testdata", "Events", "200000_2_2014-01-01_2014-02-28.csv", package = "hydroroute") relation <- system.file("testdata", "relation.csv", package = "hydroroute") # read data Sx <- utils::read.csv(Sx) Sy <- utils::read.csv(Sy) relation <- utils::read.csv(relation) relation <- relation[1:2, ] # estimate AE, exact time matches results <- estimate_AE(Sx, Sy, relation, timeLag = c(0, 1, 0)) results$settings results$plot_threshold results$real_AE extract_AE Extract Associated Events Description For given relation and event data return the associated events which comply with the conditions specified in the settings. Usage extract_AE( relation_path, events_path, settings_path, unique = c("time", "metric"), inputdec = ".", inputsep = ",", saveResults = FALSE, outdir = tempdir(), TimeFormat = "%Y-%m-%d %H:%M", tz = "Etc/GMT-1" ) Arguments relation_path Character string containing the path of the file where the relation file is to be read from with utils::read.csv(). The file must contain a column ID that contains the gauging station ID’s; the ID’s in the file have to be in order of their location in downstream direction. events_path Character string containing the path of the directory where the event files corresponding to the ‘relation’ file are located. Only relevant files in this directory will be used, i.e., files that are related to the ‘relation’ file. settings_path Character string containing the path of the file where the settings file is to be read from with utils::read.csv(). The file must be in the format of the output of peaktrace().
unique Character string specifying if the potential AEs which meet the timeLag and metricLag condition should be filtered to contain only unique events using "time", i.e., by selecting those where the time difference is smallest compared to the specified factor of the mean translation time, or using "metric", i.e., by selecting those where the relative difference in amplitude is smallest (default: "time"). inputdec Character string for decimal points in input data. inputsep Field separator character string for input data. saveResults A logical. If FALSE (default), the extracted AEs are not saved. Otherwise the extracted AEs are written to a csv file. outdir Character string naming a directory where the extracted AEs should be saved to. TimeFormat Character string giving the date-time format of the date-time column in the input data frame (default: "%Y-%m-%d %H:%M"). tz Character string specifying the time zone to be used for the conversion (default: "Etc/GMT-1"). Value A data frame containing “real” AEs (i.e., events where the time differences and the relative difference in amplitude are within the limits and cut points provided by the file in settings_path). If no AEs can be found between the first two neighboring stations, NULL is returned. Otherwise the function returns all “real” AEs that could be found along the river section specified in the file from relation_path. A warning is issued when the extraction is stopped early and shows the IDs for which no AEs are determined. Examples relation_path <- system.file("testdata", "relation.csv", package = "hydroroute") events_path <- system.file("testdata", "Events", package = "hydroroute") settings_path <- system.file("testdata", "Q_event_2_AMP-LAG_aut_settings.csv", package = "hydroroute") real_AE <- extract_AE(relation_path, events_path, settings_path) get_lag Get Lag Description Given a data frame (time series) of measurements and a vector of gauging station ID’s in order of their location in downstream direction, the lag (the amount of passing time between two gauging stations) is estimated based on the cross-correlation function (ccf) of the time series of two adjacent gauging stations (stats::ccf()). To ensure that the same time period is used for every gauging station, intersecting time steps are determined. These time steps are used to estimate the lags. The result of stats::ccf() is rounded to four decimals before selecting the optimal time lag so that minimal differences are neglected. If there are multiple time steps with the highest correlation, the smallest time step is considered. If the highest correlation corresponds to a zero lag or a positive lag (note that the result should usually be negative, as measurements at the lower gauge are recorded later than at the upper gauge), a time step of length 1 is selected and a warning message is generated. Usage get_lag( Q, relation, steplength = 15, lag.max = 20, na.action = na.pass, mc.cores = getOption("mc.cores", 2L), tz = "Etc/GMT-1", format = "%Y.%m.%d %H:%M", cols = c(1, 2, 3) ) Arguments Q Data frame (time series) of measurements which contains at least a column with the gauging station ID’s (default: column index 1), a column with date-time values in character representation (default: column index 2) and a column with flow measurements (default: column index 3). If the column indices differ from c(1, 2, 3), they have to be specified in the cols argument in the format c(i, j, k).
relation A character vector containing the gauging station ID’s in order of their location in downstream direction. steplength Numeric value that specifies the length between time steps in minutes (default: 15 minutes). As time steps have to be equispaced, this is used by hydropeak::flow() to get a compatible format and fill missing time steps with NA. lag.max Numeric value that specifies the maximum lag at which to calculate the ccf in stats::ccf() (default: 20). na.action Function to be called to handle missing values in stats::ccf() (default: na.pass). mc.cores Number of cores to use with parallel::mclapply(). On Windows, this is set to 1. tz Character string specifying the time zone to be used for internal conversion (de- fault: Etc/GMT-1). format Character string giving the date-time format of the date-time column in the input data frame Q. This is passed to hydropeak::flow(), to get a compatible format (default: YYYY.mm.dd HH:MM). cols Integer vector specifying column indices in Q. The default indices are 1 (ID), 2 (date-time) and 3 (flow rate, Q). This is passed to hydropeak::flow(). Value A character vector which contains the estimated cumulative lag between neighboring gauging sta- tions in the format HH:MM. Examples Q_path <- system.file("testdata", "Q.csv", package = "hydroroute") Q <- utils::read.csv(Q_path) relation_path <- system.file("testdata", "relation.csv", package = "hydroroute") relation <- utils::read.csv(relation_path) # from relation data frame get_lag(Q, relation$ID, format = "%Y-%m-%d %H:%M", tz = "Etc/GMT-1") # station ID's in downstream direction as vector relation <- c("100000", "200000", "300000", "400000") get_lag(Q, relation, format = "%Y-%m-%d %H:%M", tz = "Etc/GMT-1") get_lag_dir Get Lag from Input Directory Description Given a file path it reads a data frame (time series) of measurements. For each relation file in the provided directory path it calls get_lag_file(). Make sure that the file with Q data and the relation files have the same separator (inputsep) and character for decimal points (inputdec). Gauging station ID’s in the relation files have to be in order of their location in downstream direction. The resulting lags are appended to the relation files. The resulting list of relation files can be returned and each relation file can be saved to its input path. Usage get_lag_dir( Q, relation, steplength = 15, lag.max = 20, na.action = na.pass, tz = "Etc/GMT-1", format = "%Y.%m.%d %H:%M", cols = c(1, 2, 3), inputsep = ",", inputdec = ".", relation_pattern = "relation", save = FALSE, mc.cores = getOption("mc.cores", 2L), overwrite = FALSE ) Arguments Q Data frame or character string. If it is a data frame, it corresponds to the Q data frame in get_lag(). It contains at least a column with the gauging station ID’s (default: column index 1), a column with date-time values in character represen- tation (default: column index 2) and a column with flow measurements (default: column index 3). If the column indices differ from c(1, 2, 3), they have to be specified as cols argument in the format c(i, j, k). If it is a character string, it contains the path to the corresponding file which is then read within the function with utils::read.csv(). relation A character string containing the path to the directory where the relation files are located. They are read within the function with utils::read.csv(). steplength Numeric value that specifies the length between time steps in minutes (default: 15 minutes). 
As time steps have to be equispaced, this is used by hydropeak::flow() to get a compatible format and fill missing time steps with NA. lag.max Maximum lag at which to calculate the ccf in stats::ccf() (default: 20). na.action Function to be called to handle missing values in stats::ccf() (default: na.pass). tz Character string specifying the time zone to be used for internal conversion (de- fault: Etc/GMT-1). format Character string giving the date-time format of the date-time column in the input data frame Q. This is passed to hydropeak::flow(), to get a compatible format (default: YYYY.mm.dd HH:MM). cols Integer vector specifying column indices in the input data frame which contain gauging station ID, date-time and flow rate to be renamed. The default indices are 1 (ID), 2 (date-time) and 3 (flow rate, Q). inputsep Field separator character string for input data. inputdec Character string for decimal points in input data. relation_pattern Character string containing a regular expression to filter relation files (de- fault: relation, to filter files that contain relation with no restriction) (see base::grep()). save A logical. If FALSE (default) the lag, appended to the relation file, overwrites the original relation input file. mc.cores Number of cores to use with parallel::mclapply(). On Windows, this is set to 1. overwrite A logical. If FALSE (default), it produces an error if a LAG column already exists in the relation file. Otherwise, it overwrites an existing column. Value Returns invisibly a list of data frames where each list element represents a relation file from the input directory. Optionally, the data frames are used to overwrite the existing relation files with the appended LAG column. Examples Q_file <- system.file("testdata", "Q.csv", package = "hydroroute") relations_path <- system.file("testdata", package = "hydroroute") lag_list <- get_lag_dir(Q_file, relations_path, inputsep = ",", inputdec = ".", format = "%Y-%m-%d %H:%M", overwrite = TRUE) lag_list get_lag_file Get Lag from Input File Description Given a file path it reads a data frame (time series) of measurements which combines several gaug- ing station ID’s and calls get_lag(). The relation (ID’s) of gauging stations is read from a file (provided through the file path). The file with Q data and the relation file need to have the same separator (inputsep) and character for decimal points (inputdec). Gauging station ID’s have to be in order of their location in downstream direction. The resulting lag is appended to the relation file. This can be saved to a file. Usage get_lag_file( Q_file, relation_file, steplength = 15, lag.max = 20, na.action = na.pass, tz = "Etc/GMT-1", format = "%Y.%m.%d %H:%M", cols = c(1, 2, 3), inputsep = ";", inputdec = ".", save = FALSE, outfile = file.path(tempdir(), "relation.csv"), mc.cores = getOption("mc.cores", 2L), overwrite = FALSE ) Arguments Q_file Data frame or character string. If it is a data frame, it corresponds to the Q data frame in get_lag(). It contains at least a column with the gauging station ID’s (default: column index 1), a column with date-time values in character represen- tation (default: column index 2) and a column with flow measurements (default: column index 3). If the column indices differ from c(1, 2, 3), they have to be specified as cols argument in the format c(i, j, k). If it is a character string, it contains the path to the corresponding file which is then read within the function with utils::read.csv(). 
relation_file A character string containing the path to the relation file. It is read within the function with utils::read.csv(). The file must contain a column ID that con- tains the gauging station ID’s in order of their location in downstream direction. The lag will then be appended as column to the data frame. For more details on the relation file, see the vignette. steplength Numeric value that specifies the length between time steps in minutes (default: 15 minutes). As time steps have to be equispaced, this is used by hydropeak::flow() to get a compatible format and fill missing time steps with NA. lag.max Maximum lag at which to calculate the ccf in stats::ccf() (default: 20). na.action Function to be called to handle missing values in stats::ccf() (default: na.pass). tz Character string specifying the time zone to be used for internal conversion (de- fault: Etc/GMT-1). format Character string giving the date-time format of the date-time column in the input data frame Q. This is passed to hydropeak::flow(), to get a compatible format (default: YYYY.mm.dd HH:MM). cols Integer vector specifying column indices in the input data frame which contain gauging station ID, date-time and flow rate to be renamed. The default indices are 1 (ID), 2 (date-time) and 3 (flow rate, Q). inputsep Character string for the field separator in input data. inputdec Character string for decimal points in input data. save A logical. If FALSE (default) the lag, appended to the relation file, is not written to a file, otherwise it is written to outfile. outfile A character string naming a file path and name where the output file should be written to. mc.cores Number of cores to use with parallel::mclapply(). On Windows, this is set to 1. overwrite A logical. If FALSE (default), it produces an error if a LAG column already exists in the relation file. Otherwise, it overwrites an existing column. Value Returns invisibly the data frame of the relation data with the estimated cumulative lag between neighboring gauging stations in the format HH:MM appended. Examples Q_file <- system.file("testdata", "Q.csv", package = "hydroroute") relation_file <- system.file("testdata", "relation.csv", package = "hydroroute") get_lag_file(Q_file, relation_file, inputsep = ",", inputdec = ".", format = "%Y-%m-%d %H:%M", save = FALSE, overwrite = TRUE) Q_file <- read.csv(Q_file) get_lag_file(Q_file, relation_file, inputsep = ",", inputdec = ".", format = "%Y-%m-%d %H:%M", save = FALSE, overwrite = TRUE) merge_time Merge Events Description Given two event data frames of neighboring stations Sx and Sy that consist of flow fluctuation events and computed metrics (see hydropeak::get_events()), the translation time indicated by the relation file as well as timeLag between these two stations is subtracted from Sy and events are merged where matches according to differences allowed to timeLag can be found. Usage merge_time( Sx, Sy, relation, timeLag = c(1, 1, 1), TimeFormat = "%Y-%m-%d %H:%M", tz = "Etc/GMT-1" ) Arguments Sx Data frame that consists of flow fluctuation events and computed metrics (see hydropeak::get_events()) of an upstream hydrograph Sx . Sy Data frame that consists of flow fluctuation events and computed metrics (see hydropeak::get_events()) of a downstream hydrograph Sy . relation Data frame that contains the relation between upstream and downstream hydro- graph. Must only contain two rows (one for each hydrograph) in order of their location in downstream direction. 
See the appended example data relation.csv or vignette for details on the structure. See get_lag() for further information about the relation and the lag between the hydrographs. timeLag Numeric vector specifying factors to alter the interval to capture events from the downstream hydrograph. By default it is timeLag = c(1, 1, 1), this refers to matches within a time slot ± the mean translation time from relation. For exact time matches, timeLag = c(0, 1, 0) must be specified. TimeFormat Character string giving the date-time format of the date-time column in the input data frame (default: "%Y-%m-%d %H:%M"). tz Character string specifying the time zone to be used for the conversion (default: "Etc/GMT-1"). Value Data frame that has a matched event at Sx and Sy in each row. If no matches are detected, NULL is returned. Examples Sx <- system.file("testdata", "Events", "100000_2_2014-01-01_2014-02-28.csv", package = "hydroroute") Sy <- system.file("testdata", "Events", "200000_2_2014-01-01_2014-02-28.csv", package = "hydroroute") relation <- system.file("testdata", "relation.csv", package = "hydroroute") # read data Sx <- utils::read.csv(Sx) Sy <- utils::read.csv(Sy) relation <- utils::read.csv(relation) relation <- relation[1:2, ] # exact matches merged <- merge_time(Sx, Sy, relation, timeLag = c(0, 1, 0)) head(merged) # matches within +/- mean translation time merged <- merge_time(Sx, Sy, relation) head(merged) peaktrace Trace Longitudinal Hydropeaking Waves Along a River Section Description Estimates all settings based on the ‘relation’ file of a river section. The function uses a single ‘relation’ file and determines the settings for all neighboring stations with estimate_AE() for all event types specified in event_type. It fits models to describe translation and retention processes between neighboring hydrographs, and generates plots (see vignette for details). Given a file with initial values (see vignette), predictions are made and visualized in a plot. Optionally, the results can be written to a directory. All files need to have the same separator (inputsep) and character for decimal points (inputdec). Usage peaktrace( relation_path, events_path, initial_values_path, settings_path, unique = c("time", "metric"), inputdec = ".", inputsep = ",", event_type = c(2, 4), saveResults = FALSE, outdir = tempdir(), TimeFormat = "%Y-%m-%d %H:%M", tz = "Etc/GMT-1", formula = y ~ x, model = stats::lm, FKM_MAX = 65, impute_method = base::max, ... ) Arguments relation_path Character string containing the path of the file where the relation file is to be read from with utils::read.csv(). The file must contain a column ID that contains the gauging station ID. ID’s in the file have to be in order of their location in downstream direction. events_path Character string containing the path of the directory where the event files cor- responding to the ‘relation’ file are located. Only relevant files in this directory will be used, i.e., files that are related to the ‘relation’ file. initial_values_path Character string containing the path of the file which contains initial values for predictions (see vignette). settings_path Character string containing the path where the settings files are to be read from with utils::read.csv() if available. The settings files must be in the format of the output of peaktrace(). If missing or incomplete, the settings are deter- mined automatically. 
unique Character string specifying if the potential AEs which meet the timeLag and metricLag condition should be filtered to contain only unique events using "time", i.e., by selecting those where the time difference is smallest compared to the specified factor of the mean translation time, or using "metric", i.e., by selecting those where the relative difference in amplitude is smallest (default: "time"). inputdec Character string for decimal points in input data. inputsep Field separator character string for input data. event_type Vector specifying the event type that is used to identify event files by their file names (see hydropeak::get_events()). Default: c(2, 4), i.e., increasing and decreasing events. saveResults A logical. If FALSE (default), the generated plots and the estimated settings are not saved. Otherwise the settings are written to a csv file and the plots are saved as png and pdf files. outdir Character string naming a directory where the estimated settings should be saved to. TimeFormat Character string giving the date-time format of the date-time column in the input data frame (default: "%Y-%m-%d %H:%M"). tz Character string specifying the time zone to be used for the conversion (default: "Etc/GMT-1"). formula An object of class stats::formula() to fit models. model Function which specifies the method used for fitting models (default: stats::lm()). The model class must have a stats::predict() function. FKM_MAX Numeric value that specifies the maximum fkm (see ‘relation’ file) for which predictions seem valid. impute_method Function which specifies the method used for imputing missing values in initial values based on potential AEs (default: base::max()).’ ... Additional arguments to be passed to the function specified in argument model. Value A nested list containing an element for each event type in order as defined in event_type. Each element contains again six elements, namely a data frame of estimated settings, a ‘gtable’ object that specifies the combined plot of all stations (plot it with grid::grid.draw()), a data frame con- taining “real” AEs (i.e., events where the relative difference in amplitude is within the estimated cut points), a grid of scatterplots (‘gtable‘ object) for neighboring hydrographs with a regression line for each metric, a data frame of results of the model fitting where each row contains the corresponding stations and metric, the model type (default: "lm"), formula, coefficients, number of observations and R2 , and a plot of predicted values based on the “initial values”. routing Estimate Models and Make Predictions Description Performs the “routing” procedure, i.e., based on associated events, it uses (linear) models to describe translation and retention processes between neighboring hydrographs. Usage routing( real_AE, initials, relation, formula = y ~ x, model = stats::lm, FKM_MAX = 65, ... ) Arguments real_AE Data frame that contains real AEs of two neighboring hydrographs estimated with estimate_AE(). initials Data frame that contains initial values for predictions (see vignette). relation Data frame that contains the relation between upstream and downstream hydro- graph. Must only contain two rows (one for each hydrograph) in order of their location in downstream direction. See the appended example data relation.csv or vignette for details on the structure. See get_lag() for further information about the relation and the lag between the hydrographs. formula An object of class stats::formula() to fit models. 
model Function which specifies the method used for fitting models (default: stats::lm()). The model class must have a stats::predict() function. FKM_MAX Numeric value that specifies the maximum fkm (see relation file) for which predictions seem valid. ... Additional arguments to be passed to the function specified in argument model. Value A nested list containing a grid of scatterplots (‘gtable’ object) for neighboring hydrographs with a regression line for each metric, a data frame of results of the model fitting where each row contains the corresponding stations and metric, the model type (default: "lm"), formula, coefficients, number of observations and R², and a plot of predicted values based on the “initial values”.
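The manual entries for peaktrace() and routing() above do not include an Examples section in this extract, so the following is a hedged sketch (not taken from the package manual) of how peaktrace() ties the individual steps together, reusing the example data shipped with the package. The initial-values file name below is hypothetical; its required structure is described in the package vignette.

library(hydroroute)

## Example data shipped with the package
relation_path <- system.file("testdata", "relation.csv", package = "hydroroute")
events_path   <- system.file("testdata", "Events", package = "hydroroute")
settings_path <- system.file("testdata", "Q_event_2_AMP-LAG_aut_settings.csv",
                             package = "hydroroute")

## Hypothetical file with initial values for the predictions (see the vignette)
initials_path <- "initial_values.csv"

res <- peaktrace(relation_path, events_path, initials_path, settings_path,
                 inputsep = ",", inputdec = ".", event_type = c(2, 4))

## One list element per event type; each contains the estimated settings, the
## combined station plot (draw it with grid::grid.draw()), the "real" AEs, the
## scatterplot grid, the model results, and the plot of predicted values.
str(res[[1]], max.level = 1)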
XamarinForms.pdf
free_programming_book
Unknown
Xamarin .Forms Xamarin.Forms Notes for Professionals Notes for Professionals 100+ pages of professional hints and tricks GoalKicker.com Free Programming Books Disclaimer This is an unofficial free book created for educational purposes and is not affiliated with official Xamarin.Forms group(s) or company(s). All trademarks and registered trademarks are the property of their respective owners Contents About ... 1 Chapter 1: Getting started with Xamarin.Forms ... 2 Section 1.1: Installation (Visual Studio) ... 2 Section 1.2: Hello World Xamarin Forms: Visual Studio ... 4 Chapter 2: Why Xamarin Forms and When to use Xamarin Forms ... 7 Section 2.1: Why Xamarin Forms and When to use Xamarin Forms ... 7 Chapter 3: Xamarin Forms Layouts ... 8 Section 3.1: AbsoluteLayout ... 8 Section 3.2: Grid ... 10 Section 3.3: ContentPresenter ... 11 Section 3.4: ContentView ... 12 Section 3.5: ScrollView ... 13 Section 3.6: Frame ... 14 Section 3.7: TemplatedView ... 14 Section 3.8: RelativeLayout ... 15 Section 3.9: StackLayout ... 16 Chapter 4: Xamarin Relative Layout ... 19 Section 4.1: Box after box ... 19 Section 4.2: Page with an simple label on the middle ... 21 Chapter 5: Navigation in Xamarin.Forms ... 23 Section 5.1: NavigationPage flow with XAML ... 23 Section 5.2: NavigationPage flow ... 24 Section 5.3: Master Detail Navigation ... 25 Section 5.4: Using INavigation from view model ... 26 Section 5.5: Master Detail Root Page ... 28 Section 5.6: Hierarchical navigation with XAML ... 29 Section 5.7: Modal navigation with XAML ... 31 Chapter 6: Xamarin.Forms Page ... 32 Section 6.1: TabbedPage ... 32 Section 6.2: ContentPage ... 33 Section 6.3: MasterDetailPage ... 34 Chapter 7: Xamarin.Forms Cells ... 36 Section 7.1: EntryCell ... 36 Section 7.2: SwitchCell ... 36 Section 7.3: TextCell ... 37 Section 7.4: ImageCell ... 38 Section 7.5: ViewCell ... 39 Chapter 8: Xamarin.Forms Views ... 41 Section 8.1: Button ... 41 Section 8.2: DatePicker ... 42 Section 8.3: Entry ... 43 Section 8.4: Editor ... 43 Section 8.5: Image ... 44 Section 8.6: Label ... 45 Chapter 9: Using ListViews ... 47 Section 9.1: Pull to Refresh in XAML and Code behind ... 47 Chapter 10: Display Alert ... 48 Section 10.1: DisplayAlert ... 48 Section 10.2: Alert Example with only one button and action ... 49 Chapter 11: Accessing native features with DependencyService ... 50 Section 11.1: Implementing text-to-speech ... 50 Section 11.2: Getting Application and Device OS Version Numbers - Android & iOS - PCL ... 53 Chapter 12: DependencyService ... 55 Section 12.1: Android implementation ... 55 Section 12.2: Interface ... 56 Section 12.3: iOS implementation ... 56 Section 12.4: Shared code ... 57 Chapter 13: Custom Renderers ... 58 Section 13.1: Accessing renderer from a native project ... 58 Section 13.2: Rounded label with a custom renderer for Frame (PCL & iOS parts) ... 58 Section 13.3: Custom renderer for ListView ... 59 Section 13.4: Custom Renderer for BoxView ... 61 Section 13.5: Rounded BoxView with selectable background color ... 65 Chapter 14: Caching ... 68 Section 14.1: Caching using Akavache ... 68 Chapter 15: Gestures ... 70 Section 15.1: Make an Image tappable by adding a TapGestureRecognizer ... 70 Section 15.2: Gesture Event ... 70 Section 15.3: Zoom an Image with the Pinch gesture ... 78 Section 15.4: Show all of the zoomed Image content with the PanGestureRecognizer ... 78 Section 15.5: Tap Gesture ... 79 Section 15.6: Place a pin where the user touched the screen with MR.Gestures ... 79 Chapter 16: Data Binding ...
81 Section 16.1: Basic Binding to ViewModel ... 81 Chapter 17: Working with Maps ... 83 Section 17.1: Adding a map in Xamarin.Forms (Xamarin Studio) ... 83 Chapter 18: Custom Fonts in Styles ... 92 Section 18.1: Accessing custom Fonts in Styles ... 92 Chapter 19: Push Notifications ... 94 Section 19.1: Push notifications for Android with Azure ... 94 Section 19.2: Push notifications for iOS with Azure ... 96 Section 19.3: iOS Example ... 99 Chapter 20: Effects ... 101 Section 20.1: Adding platform specific Effect for an Entry control ... 101 Chapter 21: Triggers & Behaviours ... 105 Section 21.1: Xamarin Forms Trigger Example ... 105 Section 21.2: Multi Triggers ... 106 Chapter 22: AppSettings Reader in Xamarin.Forms ... 107 Section 22.1: Reading app.config file in a Xamarin.Forms Xaml project ... 107 Chapter 23: Creating custom controls ... 108 Section 23.1: Label with bindable collection of Spans ... 108 Section 23.2: Implementing a CheckBox Control ... 110 Section 23.3: Create an Xamarin Forms custom input control (no native required) ... 116 Section 23.4: Creating a custom Entry control with a MaxLength property ... 118 Section 23.5: Creating custom Button ... 119 Chapter 24: Working with local databases ... 121 Section 24.1: Using SQLite.NET in a Shared Project ... 121 Section 24.2: Working with local databases using xamarin.forms in visual studio 2015 ... 123 Chapter 25: CarouselView - Pre-release version ... 133 Section 25.1: Import CarouselView ... 133 Section 25.2: Import CarouselView into a XAML Page ... 133 Chapter 26: Exception handling ... 135 Section 26.1: One way to report about exceptions on iOS ... 135 Chapter 27: SQL Database and API in Xamarin Forms ... 137 Section 27.1: Create API using SQL database and implement in Xamarin forms ... 137 Chapter 28: Contact Picker - Xamarin Forms (Android and iOS) ... 138 Section 28.1: contact_picker.cs ... 138 Section 28.2: MyPage.cs ... 138 Section 28.3: ChooseContactPicker.cs ... 139 Section 28.4: ChooseContactActivity.cs ... 139 Section 28.5: MainActivity.cs ... 140 Section 28.6: ChooseContactRenderer.cs ... 141 Chapter 29: Xamarin Plugin ... 144 Section 29.1: Media Plugin ... 144 Section 29.2: Share Plugin ... 146 Section 29.3: ExternalMaps ... 147 Section 29.4: Geolocator Plugin ... 148 Section 29.5: Messaging Plugin ... 150 Section 29.6: Permissions Plugin ... 151 Chapter 30: OAuth2 ... 155 Section 30.1: Authentication by using Plugin ... 155 Chapter 31: MessagingCenter ... 157 Section 31.1: Simple example ... 157 Section 31.2: Passing arguments ... 157 Section 31.3: Unsubscribing ... 158 Chapter 32: Generic Xamarin.Forms app lifecycle? Platform-dependant! ... 159 Section 32.1: Xamarin.Forms lifecycle is not the actual app lifecycle but a cross-platform representation of it ... 159 Chapter 33: Platform-specific behaviour ... 161 Section 33.1: Removing icon in navigation header in Android ... 161 Section 33.2: Make label's font size smaller in iOS ... 161 Chapter 34: Platform specific visual adjustments ... 163 Section 34.1: Idiom adjustments ... 163 Section 34.2: Platform adjustments ... 163 Section 34.3: Using styles ... 164 Section 34.4: Using custom views ... 164 Chapter 35: Dependency Services ... 165 Section 35.1: Access Camera and Gallery ... 165 Chapter 36: Unit Testing ... 166 Section 36.1: Testing the view models ... 166 Chapter 37: BDD Unit Testing in Xamarin.Forms ... 172 Section 37.1: Simple Specflow to test commands and navigation with NUnit Test Runner ... 172 Section 37.2: Advanced Usage for MVVM ... 174 Credits ...
You may also like

About

Please feel free to share this PDF with anyone for free; the latest version of this book can be downloaded from: https://goalkicker.com/XamarinFormsBook

This Xamarin.Forms Notes for Professionals book is compiled from Stack Overflow Documentation; the content is written by the beautiful people at Stack Overflow. Text content is released under Creative Commons BY-SA; see the credits at the end of this book for those who contributed to the various chapters. Images may be copyright of their respective owners unless otherwise specified.

This is an unofficial free book created for educational purposes and is not affiliated with the official Xamarin.Forms group(s) or company(s), nor with Stack Overflow. All trademarks and registered trademarks are the property of their respective company owners.

The information presented in this book is not guaranteed to be correct or accurate; use at your own risk.

Please send feedback and corrections to <EMAIL>

Chapter 1: Getting started with Xamarin.Forms

Version          Release Date
3.0.0            2018-05-07
2.5.0            2017-11-15
2.4.0            2017-09-27
2.3.1            2016-08-03
2.3.0-hotfix1    2016-06-29
2.3.0            2016-06-16
2.2.0-hotfix1    2016-05-30
2.2.0            2016-04-27
2.1.0            2016-03-13
2.0.1            2016-01-20
2.0.0            2015-11-17
1.5.1            2016-10-20
1.5.0            2016-09-25
1.4.4            2015-07-27
1.4.3            2015-06-30
1.4.2            2015-04-21
1.4.1            2015-03-30
1.4.0            2015-03-09
1.3.5            2015-03-02
1.3.4            2015-02-17
1.3.3            2015-02-09
1.3.2            2015-02-03
1.3.1            2015-01-04
1.3.0            2014-12-24
1.2.3            2014-10-02
1.2.2            2014-07-30
1.2.1            2014-07-14
1.2.0            2014-07-11
1.1.1            2014-06-19
1.1.0            2014-06-12
1.0.1            2014-06-04

Section 1.1: Installation (Visual Studio)

Xamarin.Forms is a cross-platform, natively backed UI toolkit abstraction that allows developers to easily create user interfaces that can be shared across Android, iOS, Windows, and Windows Phone. The user interfaces are rendered using the native controls of the target platform, allowing Xamarin.Forms applications to retain the appropriate look and feel for each platform.

Xamarin Plugin for Visual Studio

To get started with Xamarin.Forms for Visual Studio you need to have the Xamarin plugin itself. The easiest way to get it installed is to download and install the latest Visual Studio.

If you already have the latest Visual Studio installed, go to Control Panel > Programs and Features, right-click on Visual Studio, and click Change. When the installer opens, click on Modify and select the cross-platform mobile development tools. You can also choose to install the Android SDK; uncheck it if you already have the SDK installed, as you will be able to set up Xamarin to use the existing Android SDK later.

Xamarin.Forms

Xamarin.Forms is a set of libraries for your Portable Class Library and native assemblies. The Xamarin.Forms library itself is available as a NuGet package. To add it to your project, just use the regular Install-Package command of the Package Manager Console:

Install-Package Xamarin.Forms

for all of your initial assemblies (for example MyProject, MyProject.Droid and MyProject.iOS).

The easiest way to get started with Xamarin.Forms is to create an empty project in Visual Studio. There are two available options for creating the blank app -- Portable and Shared. I recommend starting with the Portable one because it is the most commonly used in the real world (differences and more explanation to be added).
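For reference, the NuGet steps above can be run as one short Package Manager Console session. The commands below are only a sketch; the project names are examples that match the Hello World solution created in the next section:

Install-Package Xamarin.Forms -ProjectName HelloWorld
Install-Package Xamarin.Forms -ProjectName HelloWorld.Droid
Install-Package Xamarin.Forms -ProjectName HelloWorld.iOS
Update-Package Xamarin.Forms

Running Update-Package without a -ProjectName switch updates the package in every project of the solution that references it.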
After creating the project, make sure you're using the latest Xamarin.Forms version, as your initial template may contain an old one. Use the Package Manager Console or the Manage NuGet Packages option to upgrade to the latest Xamarin.Forms (remember, it is just a NuGet package). While the Visual Studio Xamarin.Forms templates will create an iOS platform project for you, you will need to connect Xamarin to a Mac build host to be able to run these projects on the iOS Simulator or on physical devices.

Section 1.2: Hello World Xamarin Forms: Visual Studio

After successfully installing Xamarin as described in the first example, it's time to launch the first sample application.

Step 1: Creating a new project

In Visual Studio, choose New -> Project -> Visual C# -> Cross-Platform -> Blank App (Xamarin.Forms Portable). Name the app "Hello World", select the location to create the project, and click OK. This will create a solution for you which contains three projects:

1. HelloWorld (this is where your logic and views are placed, i.e. the portable project)
2. HelloWorld.Droid (the Android project)
3. HelloWorld.iOS (the iOS project)

Step 2: Investigating the sample

Having created the solution, a sample application will be ready to be deployed. Open App.cs, located in the root of the portable project, and investigate the code. As seen below, the content of the sample is a StackLayout which contains a Label:

using Xamarin.Forms;

namespace Hello_World
{
    public class App : Application
    {
        public App()
        {
            // The root page of your application
            MainPage = new ContentPage
            {
                Content = new StackLayout
                {
                    VerticalOptions = LayoutOptions.Center,
                    Children = {
                        new Label {
                            HorizontalTextAlignment = TextAlignment.Center,
                            Text = "Welcome to Xamarin Forms!"
                        }
                    }
                }
            };
        }

        protected override void OnStart()
        {
            // Handle when your app starts
        }

        protected override void OnSleep()
        {
            // Handle when your app sleeps
        }

        protected override void OnResume()
        {
            // Handle when your app resumes
        }
    }
}

Step 3: Launching the application

Now simply right-click the project you want to start (HelloWorld.Droid or HelloWorld.iOS) and click Set as StartUp Project. Then, in the Visual Studio toolbar, click the Start button (the green triangular button that resembles a Play button) to launch the application on the targeted simulator/emulator.

Chapter 2: Why Xamarin Forms and When to use Xamarin Forms

Section 2.1: Why Xamarin Forms and When to use Xamarin Forms

Xamarin is becoming more and more popular, and it can be hard to decide when to use Xamarin.Forms and when to use the Xamarin platform directly (i.e. Xamarin.iOS and Xamarin.Android). First of all, you should know for what kind of applications you can use Xamarin.Forms:

1. Prototypes - to visualize how your application will look on different devices.
2. Applications which do not require platform-specific functionality (like certain APIs) - but note that Xamarin is working busily to provide as much cross-platform compatibility as possible.
3. Applications where code sharing is crucial - more important than the UI.
4. Applications where the data displayed is more important than advanced functionality.

There are also many other factors:

1. Who will be responsible for application development - if your team consists of experienced mobile developers, they will be able to handle Xamarin.Forms easily. But if you have one developer per platform (native development), Forms can be a bigger challenge.
2. Please also note that with Xamarin.Forms you can still encounter some issues from time to time - the Xamarin.Forms platform is still being improved.
3. Fast development is sometimes very important - to reduce costs and time you can decide to use Forms.
4. When developing enterprise applications without any advanced functionality, it is better to use Xamarin.Forms - it enables you to share more code, not only in the mobile area but in general; some portions of code can be shared across many platforms.

You should not use Xamarin.Forms when:

1. You have to create custom functionality and access platform-specific APIs
2. You have to create a custom UI for the mobile application
3. Some functionality is not ready for Xamarin.Forms (like some specific behaviour on the mobile device)
4. Your team consists of platform-specific mobile developers (mobile development in Java and/or Swift/Objective-C)

Chapter 3: Xamarin Forms Layouts

Section 3.1: AbsoluteLayout

AbsoluteLayout positions and sizes child elements proportional to its own size and position or by absolute values. Child views may be positioned and sized using proportional values or static values, and proportional and static values can be mixed.

A definition of an AbsoluteLayout in XAML looks like this:

<AbsoluteLayout>
    <Label Text="I'm centered on iPhone 4 but no other device"
           AbsoluteLayout.LayoutBounds="115,150,100,100" LineBreakMode="WordWrap" />
    <Label Text="I'm bottom center on every device."
           AbsoluteLayout.LayoutBounds=".5,1,.5,.1" AbsoluteLayout.LayoutFlags="All"
           LineBreakMode="WordWrap" />
    <BoxView Color="Olive" AbsoluteLayout.LayoutBounds="1,.5,25,100"
             AbsoluteLayout.LayoutFlags="PositionProportional" />
    <BoxView Color="Red" AbsoluteLayout.LayoutBounds="0,.5,25,100"
             AbsoluteLayout.LayoutFlags="PositionProportional" />
    <BoxView Color="Blue" AbsoluteLayout.LayoutBounds=".5,0,100,25"
             AbsoluteLayout.LayoutFlags="PositionProportional" />
    <BoxView Color="Blue" AbsoluteLayout.LayoutBounds=".5,0,1,25"
             AbsoluteLayout.LayoutFlags="PositionProportional, WidthProportional" />
</AbsoluteLayout>

The same layout would look like this in code:

Title = "Absolute Layout Exploration - Code";
var layout = new AbsoluteLayout();

var centerLabel = new Label {
    Text = "I'm centered on iPhone 4 but no other device.",
    LineBreakMode = LineBreakMode.WordWrap };
AbsoluteLayout.SetLayoutBounds (centerLabel, new Rectangle (115, 150, 100, 100));
// No need to set layout flags, absolute positioning is the default

var bottomLabel = new Label {
    Text = "I'm bottom center on every device.",
    LineBreakMode = LineBreakMode.WordWrap };
AbsoluteLayout.SetLayoutBounds (bottomLabel, new Rectangle (.5, 1, .5, .1));
AbsoluteLayout.SetLayoutFlags (bottomLabel, AbsoluteLayoutFlags.All);

var rightBox = new BoxView { Color = Color.Olive };
AbsoluteLayout.SetLayoutBounds (rightBox, new Rectangle (1, .5, 25, 100));
AbsoluteLayout.SetLayoutFlags (rightBox, AbsoluteLayoutFlags.PositionProportional);

var leftBox = new BoxView { Color = Color.Red };
AbsoluteLayout.SetLayoutBounds (leftBox, new Rectangle (0, .5, 25, 100));
AbsoluteLayout.SetLayoutFlags (leftBox, AbsoluteLayoutFlags.PositionProportional);

var topBox = new BoxView { Color = Color.Blue };
AbsoluteLayout.SetLayoutBounds (topBox, new Rectangle (.5, 0, 100, 25));
AbsoluteLayout.SetLayoutFlags (topBox, AbsoluteLayoutFlags.PositionProportional);

var twoFlagsBox = new BoxView { Color = Color.Blue };
AbsoluteLayout.SetLayoutBounds (twoFlagsBox, new Rectangle (.5, 0, 1, 25));
AbsoluteLayout.SetLayoutFlags (twoFlagsBox, AbsoluteLayoutFlags.PositionProportional | AbsoluteLayoutFlags.WidthProportional);

layout.Children.Add (bottomLabel);
layout.Children.Add (centerLabel);
layout.Children.Add (rightBox);
layout.Children.Add (leftBox);
layout.Children.Add (topBox);
layout.Children.Add (twoFlagsBox);

The AbsoluteLayout control in Xamarin.Forms allows you to specify where exactly on the screen you want the child elements to appear, as well as their size and shape (bounds). There are a few different ways to set the bounds of the child elements based on the AbsoluteLayoutFlags enumeration that is used during this process. The AbsoluteLayoutFlags enumeration contains the following values:

All: All dimensions are proportional.
HeightProportional: Height is proportional to the layout.
None: No interpretation is done.
PositionProportional: Combines XProportional and YProportional.
SizeProportional: Combines WidthProportional and HeightProportional.
WidthProportional: Width is proportional to the layout.
XProportional: X property is proportional to the layout.
YProportional: Y property is proportional to the layout.

The process of working with the layout of the AbsoluteLayout container may seem a little counterintuitive at first, but with a little use it becomes familiar. Once you have created your child elements, placing them at an absolute position within the container takes three steps: set the flags assigned to the elements using the AbsoluteLayout.SetLayoutFlags() method, give the elements their bounds using the AbsoluteLayout.SetLayoutBounds() method, and finally add the child elements to the Children collection. Since Xamarin.Forms is an abstraction layer between Xamarin and the device-specific implementations, the positional values can be independent of device pixels. This is where the layout flags mentioned previously come into play: you can choose how the layout process of the Xamarin.Forms controls should interpret the values you define.

Section 3.2: Grid

A layout containing views arranged in rows and columns. This is a typical Grid definition in XAML.
<Grid>
  <Grid.RowDefinitions>
    <RowDefinition Height="2*" />
    <RowDefinition Height="*" />
    <RowDefinition Height="200" />
  </Grid.RowDefinitions>
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="Auto" />
    <ColumnDefinition Width="*" />
  </Grid.ColumnDefinitions>
  <ContentView Grid.Row="0" Grid.Column="0"/>
  <ContentView Grid.Row="1" Grid.Column="0"/>
  <ContentView Grid.Row="2" Grid.Column="0"/>
  <ContentView Grid.Row="0" Grid.Column="1"/>
  <ContentView Grid.Row="1" Grid.Column="1"/>
  <ContentView Grid.Row="2" Grid.Column="1"/>
</Grid>

The same Grid defined in code looks like this:

var grid = new Grid();
grid.RowDefinitions.Add (new RowDefinition { Height = new GridLength(2, GridUnitType.Star) });
grid.RowDefinitions.Add (new RowDefinition { Height = new GridLength(1, GridUnitType.Star) });
grid.RowDefinitions.Add (new RowDefinition { Height = new GridLength(200) });
grid.ColumnDefinitions.Add (new ColumnDefinition { Width = GridLength.Auto });
grid.ColumnDefinitions.Add (new ColumnDefinition { Width = new GridLength(1, GridUnitType.Star) });

To add items to the grid:

In XAML:

<Grid>
  <!-- DEFINITIONS... -->
  <ContentView Grid.Row="0" Grid.Column="0"/>
  <ContentView Grid.Row="1" Grid.Column="0"/>
  <ContentView Grid.Row="2" Grid.Column="0"/>
  <ContentView Grid.Row="0" Grid.Column="1"/>
  <ContentView Grid.Row="1" Grid.Column="1"/>
  <ContentView Grid.Row="2" Grid.Column="1"/>
</Grid>

In C# code:

var grid = new Grid();
// DEFINITIONS...
var topLeft = new Label { Text = "Top Left" };
var topRight = new Label { Text = "Top Right" };
var bottomLeft = new Label { Text = "Bottom Left" };
var bottomRight = new Label { Text = "Bottom Right" };
grid.Children.Add(topLeft, 0, 0);      // column 0, row 0
grid.Children.Add(topRight, 1, 0);     // column 1, row 0
grid.Children.Add(bottomLeft, 0, 1);   // column 0, row 1
grid.Children.Add(bottomRight, 1, 1);  // column 1, row 1

For Height and Width a number of units are available:

Auto - automatically sizes to fit the content in the row or column. Specified as GridUnitType.Auto in C# or as Auto in XAML.
Proportional - sizes columns and rows as a proportion of the remaining space. Specified as a value plus GridUnitType.Star in C# and as #* in XAML, with # being your desired value. Specifying one row/column with * will cause it to fill the available space.
Absolute - sizes columns and rows with specific, fixed height and width values. Specified as a value plus GridUnitType.Absolute in C# and as # in XAML, with # being your desired value.

Note: The width values for columns are set to Auto by default in Xamarin.Forms, which means that the width is determined by the size of the children. Note that this differs from the implementation of XAML on Microsoft platforms, where the default width is *, which fills the available space.

Section 3.3: ContentPresenter

A layout manager for templated views. Used within a ControlTemplate to mark where the content to be presented appears.

Section 3.4: ContentView

An element with a single content. ContentView has very little use of its own. Its purpose is to serve as a base class for user-defined compound views.

XAML

<ContentView>
  <Label Text="Hi, I'm a simple Label inside of a simple ContentView"
         HorizontalOptions="Center" VerticalOptions="Center"/>
</ContentView>

Code

var contentView = new ContentView {
    Content = new Label {
        Text = "Hi, I'm a simple Label inside of a simple ContentView",
        HorizontalOptions = LayoutOptions.Center,
        VerticalOptions = LayoutOptions.Center
    }
};

Section 3.5: ScrollView

An element capable of scrolling if its Content requires it.
ScrollView contains layouts and enables them to scroll offscreen. ScrollView is also used to allow views to automatically move to the visible portion of the screen when the keyboard is showing.

Note: ScrollViews should not be nested. In addition, ScrollViews should not be nested with other controls that provide scrolling, like ListView and WebView.

A ScrollView is easy to define. In XAML:

<ContentPage.Content>
  <ScrollView>
    <StackLayout>
      <BoxView BackgroundColor="Red" HeightRequest="600" WidthRequest="150" />
      <Entry />
    </StackLayout>
  </ScrollView>
</ContentPage.Content>

The same definition in code:

var scroll = new ScrollView();
Content = scroll;
var stack = new StackLayout();
stack.Children.Add(new BoxView { BackgroundColor = Color.Red, HeightRequest = 600, WidthRequest = 150 });
stack.Children.Add(new Entry());
scroll.Content = stack;

Section 3.6: Frame

An element containing a single child, with some framing options. Frame has a default Xamarin.Forms.Layout.Padding of 20.

XAML

<Frame>
  <Label Text="I've been framed!" HorizontalOptions="Center" VerticalOptions="Center" />
</Frame>

Code

var frameView = new Frame {
    Content = new Label {
        Text = "I've been framed!",
        HorizontalOptions = LayoutOptions.Center,
        VerticalOptions = LayoutOptions.Center
    },
    OutlineColor = Color.Red
};

Section 3.7: TemplatedView

An element that displays content with a control template, and the base class for ContentView.

Section 3.8: RelativeLayout

A layout that uses Constraints to lay out its children. RelativeLayout is used to position and size views relative to properties of the layout or of sibling views. Unlike AbsoluteLayout, RelativeLayout does not have the concept of the moving anchor and does not have facilities for positioning elements relative to the bottom or right edges of the layout. RelativeLayout does support positioning elements outside of its own bounds.
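In code, the constraints used in the examples that follow are created with the static factory methods on the Constraint class (Constraint.Constant, Constraint.RelativeToParent and Constraint.RelativeToView). A minimal sketch; the view names and numeric values here are only illustrative and are not part of the examples below:

var layout = new RelativeLayout();

// A sibling view that later constraints can refer to
var anchor = new BoxView { Color = Color.Gray };
layout.Children.Add(anchor,
    Constraint.Constant(0),    // X: fixed value
    Constraint.Constant(0),    // Y: fixed value
    Constraint.Constant(100),  // Width: fixed value
    Constraint.Constant(40));  // Height: fixed value

layout.Children.Add(new Label { Text = "Constrained label" },
    // X: relative to the layout itself
    Constraint.RelativeToParent(parent => parent.Width * 0.25),
    // Y: relative to the sibling added above (20 units below it)
    Constraint.RelativeToView(anchor, (parent, sibling) => sibling.Y + sibling.Height + 20),
    // Width: half of the layout's width
    Constraint.RelativeToParent(parent => parent.Width * 0.5));

Each constraint is evaluated by the layout whenever it measures itself, which is what makes the positions track the parent and sibling sizes.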
A RelativeLayout in XAML looks like this:

<RelativeLayout>
  <BoxView Color="Red" x:Name="redBox"
    RelativeLayout.YConstraint="{ConstraintExpression Type=RelativeToParent, Property=Height, Factor=.15, Constant=0}"
    RelativeLayout.WidthConstraint="{ConstraintExpression Type=RelativeToParent, Property=Width, Factor=1, Constant=0}"
    RelativeLayout.HeightConstraint="{ConstraintExpression Type=RelativeToParent, Property=Height, Factor=.8, Constant=0}" />
  <BoxView Color="Blue"
    RelativeLayout.YConstraint="{ConstraintExpression Type=RelativeToView, ElementName=redBox, Property=Y, Factor=1, Constant=20}"
    RelativeLayout.XConstraint="{ConstraintExpression Type=RelativeToView, ElementName=redBox, Property=X, Factor=1, Constant=20}"
    RelativeLayout.WidthConstraint="{ConstraintExpression Type=RelativeToParent, Property=Width, Factor=.5, Constant=0}"
    RelativeLayout.HeightConstraint="{ConstraintExpression Type=RelativeToParent, Property=Height, Factor=.5, Constant=0}" />
</RelativeLayout>

The same layout can be accomplished with this code:

layout.Children.Add (redBox,
    Constraint.RelativeToParent ((parent) => { return parent.X; }),
    Constraint.RelativeToParent ((parent) => { return parent.Y * .15; }),
    Constraint.RelativeToParent ((parent) => { return parent.Width; }),
    Constraint.RelativeToParent ((parent) => { return parent.Height * .8; }));

layout.Children.Add (blueBox,
    Constraint.RelativeToView (redBox, (parent, sibling) => { return sibling.X + 20; }),
    Constraint.RelativeToView (redBox, (parent, sibling) => { return sibling.Y + 20; }),
    Constraint.RelativeToParent ((parent) => { return parent.Width * .5; }),
    Constraint.RelativeToParent ((parent) => { return parent.Height * .5; }));

Section 3.9: StackLayout

StackLayout organizes views in a one-dimensional line ("stack"), either horizontally or vertically. Views in a StackLayout can be sized based on the space in the layout using layout options. Positioning is determined by the order in which views were added to the layout and by the layout options of the views.
Usage in XAML

<StackLayout>
  <Label Text="This will be on top" />
  <Button Text="This will be on the bottom" />
</StackLayout>

Usage in code

StackLayout stackLayout = new StackLayout {
    Spacing = 0,
    VerticalOptions = LayoutOptions.FillAndExpand,
    Children = {
        new Label { Text = "StackLayout", HorizontalOptions = LayoutOptions.Start },
        new Label { Text = "stacks its children", HorizontalOptions = LayoutOptions.Center },
        new Label { Text = "vertically", HorizontalOptions = LayoutOptions.End },
        new Label { Text = "by default,", HorizontalOptions = LayoutOptions.Center },
        new Label { Text = "but horizontal placement", HorizontalOptions = LayoutOptions.Start },
        new Label { Text = "can be controlled with", HorizontalOptions = LayoutOptions.Center },
        new Label { Text = "the HorizontalOptions property.", HorizontalOptions = LayoutOptions.End },
        new Label {
            Text = "An Expand option allows one or more children " +
                   "to occupy an area within the remaining " +
                   "space of the StackLayout after it's been sized " +
                   "to the height of its parent.",
            VerticalOptions = LayoutOptions.CenterAndExpand,
            HorizontalOptions = LayoutOptions.End
        },
        new StackLayout {
            Spacing = 0,
            Orientation = StackOrientation.Horizontal,
            Children = {
                new Label { Text = "Stacking" },
                new Label { Text = "can also be", HorizontalOptions = LayoutOptions.CenterAndExpand },
                new Label { Text = "horizontal." },
            }
        }
    }
};

Chapter 4: Xamarin Relative Layout

Section 4.1: Box after box

public class MyPage : ContentPage
{
    RelativeLayout _layout;
    BoxView centerBox;
    BoxView rightBox;
    BoxView leftBox;
    BoxView topBox;
    BoxView bottomBox;
    const int spacing = 10;
    const int boxSize = 50;

    public MyPage()
    {
        _layout = new RelativeLayout();
        centerBox = new BoxView { BackgroundColor = Color.Black };
        rightBox = new BoxView {
            BackgroundColor = Color.Blue,
            // You can set both width and height here,
            // or when adding the control to the layout
            WidthRequest = boxSize,
            HeightRequest = boxSize
        };
        leftBox = new BoxView { BackgroundColor = Color.Yellow, WidthRequest = boxSize, HeightRequest = boxSize };
        topBox = new BoxView { BackgroundColor = Color.Green, WidthRequest = boxSize, HeightRequest = boxSize };
        bottomBox = new BoxView { BackgroundColor = Color.Red, WidthRequest = boxSize, HeightRequest = boxSize };

        // First add the center box, since the other boxes will be relative to it
        _layout.Children.Add(centerBox,
            // Constraint for X, centering it horizontally.
            // We give the expression as a parameter; parent is our layout in this case
            Constraint.RelativeToParent(parent => parent.Width / 2 - boxSize / 2),
            // Constraint for Y, centering it vertically
            Constraint.RelativeToParent(parent => parent.Height / 2 - boxSize / 2),
            // Constraint for Width
            Constraint.Constant(boxSize),
            // Constraint for Height
            Constraint.Constant(boxSize));

        _layout.Children.Add(leftBox,
            // The X constraint relates to centerBox, which is the first parameter here.
            // The expression receives both the parent and centerBox (called sibling)
            // and is passed as the second parameter
            Constraint.RelativeToView(centerBox, (parent, sibling) => sibling.X - spacing - boxSize),
            // Since we only need to move it left,
            // its Y constraint is centerBox's position on the Y axis
            Constraint.RelativeToView(centerBox, (parent, sibling) => sibling.Y)
            // No need to define the size constraints,
            // since we initialized them during instantiation
        );

        _layout.Children.Add(rightBox,
            // The only difference here is adding spacing and boxSize instead of subtracting them
            Constraint.RelativeToView(centerBox, (parent, sibling) => sibling.X + spacing + boxSize),
            Constraint.RelativeToView(centerBox, (parent, sibling) => sibling.Y)
        );

        _layout.Children.Add(topBox,
            // Since we are going to move it vertically this time,
            // we do the math on the Y constraint.
            // In this case, the X constraint is centerBox's position on the X axis
            Constraint.RelativeToView(centerBox, (parent, sibling) => sibling.X),
            // We do the math on the Y axis this time
            Constraint.RelativeToView(centerBox, (parent, sibling) => sibling.Y - spacing - boxSize)
        );

        _layout.Children.Add(bottomBox,
            Constraint.RelativeToView(centerBox, (parent, sibling) => sibling.X),
            Constraint.RelativeToView(centerBox, (parent, sibling) => sibling.Y + spacing + boxSize)
        );

        Content = _layout;
    }
}

Section 4.2: Page with a simple label in the middle

public class MyPage : ContentPage
{
    RelativeLayout _layout;
    Label MiddleText;

    public MyPage()
    {
        _layout = new RelativeLayout();
        MiddleText = new Label { Text = "Middle Text" };
        MiddleText.SizeChanged += (s, e) =>
        {
            // Force the layout so it knows the actual width and height of the label.
            // Otherwise the width and height of the label remain 0 as far as the layout knows.
            _layout.ForceLayout();
        };
        _layout.Children.Add(MiddleText,
            Constraint.RelativeToParent(parent => parent.Width / 2 - MiddleText.Width / 2),
            Constraint.RelativeToParent(parent => parent.Height / 2 - MiddleText.Height / 2));
        Content = _layout;
    }
}

Chapter 5: Navigation in Xamarin.Forms

Section 5.1: NavigationPage flow with XAML

App.xaml.cs file (the App.xaml file is the default one, so it is skipped here)

using Xamarin.Forms;

namespace NavigationApp
{
    public partial class App : Application
    {
        public static INavigation GlobalNavigation { get; private set; }

        public App()
        {
            InitializeComponent();
            var rootPage = new NavigationPage(new FirstPage());
            GlobalNavigation = rootPage.Navigation;
            MainPage = rootPage;
        }
    }
}

FirstPage.xaml file

<?xml version="1.0" encoding="UTF-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="NavigationApp.FirstPage"
             Title="First page">
  <ContentPage.Content>
    <StackLayout>
      <Label Text="This is the first page" />
      <Button Text="Click to navigate to a new page" Clicked="GoToSecondPageButtonClicked"/>
      <Button Text="Click to open the new page as modal" Clicked="OpenGlobalModalPageButtonClicked"/>
    </StackLayout>
  </ContentPage.Content>
</ContentPage>

In some cases you need to open the new page not in the current navigation but in the global one. For example, if your current page contains a bottom menu, that menu stays visible when you push the new page onto the current navigation. If you need the page to be opened over the whole visible content, hiding the bottom menu and the rest of the current page's content, you need to push the new page as a modal into the global navigation. See the App.GlobalNavigation property and the example below.
FirstPage.xaml.cs file

using System;
using Xamarin.Forms;

namespace NavigationApp
{
    public partial class FirstPage : ContentPage
    {
        public FirstPage()
        {
            InitializeComponent();
        }

        async void GoToSecondPageButtonClicked(object sender, EventArgs e)
        {
            await Navigation.PushAsync(new SecondPage(), true);
        }

        async void OpenGlobalModalPageButtonClicked(object sender, EventArgs e)
        {
            await App.GlobalNavigation.PushModalAsync(new SecondPage(), true);
        }
    }
}

SecondPage.xaml file (the xaml.cs file is the default one, so it is skipped here)

<?xml version="1.0" encoding="UTF-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="NavigationApp.SecondPage"
             Title="Second page">
  <ContentPage.Content>
    <Label Text="This is the second page" />
  </ContentPage.Content>
</ContentPage>

Section 5.2: NavigationPage flow

using System;
using Xamarin.Forms;

namespace NavigationApp
{
    public class App : Application
    {
        public App()
        {
            MainPage = new NavigationPage(new FirstPage());
        }
    }

    public class FirstPage : ContentPage
    {
        Label FirstPageLabel { get; set; } = new Label();
        Button FirstPageButton { get; set; } = new Button();

        public FirstPage()
        {
            Title = "First page";
            FirstPageLabel.Text = "This is the first page";
            FirstPageButton.Text = "Navigate to the second page";
            FirstPageButton.Clicked += OnFirstPageButtonClicked;

            var content = new StackLayout();
            content.Children.Add(FirstPageLabel);
            content.Children.Add(FirstPageButton);
            Content = content;
        }

        async void OnFirstPageButtonClicked(object sender, EventArgs e)
        {
            await Navigation.PushAsync(new SecondPage(), true);
        }
    }

    public class SecondPage : ContentPage
    {
        Label SecondPageLabel { get; set; } = new Label();

        public SecondPage()
        {
            Title = "Second page";
            SecondPageLabel.Text = "This is the second page";
            Content = SecondPageLabel;
        }
    }
}

Section 5.3: Master Detail Navigation

The code below shows how to perform asynchronous navigation when the app is in a MasterDetailPage context.

public async Task NavigateMasterDetail(Page page)
{
    if (page == null)
    {
        return;
    }

    var masterDetail = App.Current.MainPage as MasterDetailPage;
    if (masterDetail == null || masterDetail.Detail == null)
        return;

    var navigationPage = masterDetail.Detail as NavigationPage;
    if (navigationPage == null)
    {
        masterDetail.Detail = new NavigationPage(page);
        masterDetail.IsPresented = false;
        return;
    }

    await navigationPage.Navigation.PushAsync(page);
    navigationPage.Navigation.RemovePage(
        navigationPage.Navigation.NavigationStack[navigationPage.Navigation.NavigationStack.Count - 2]);
    masterDetail.IsPresented = false;
}

Section 5.4: Using INavigation from view model

The first step is to create a navigation interface which we will use from the view model:

public interface IViewNavigationService
{
    void Initialize(INavigation navigation, SuperMapper navigationMapper);
    Task NavigateToAsync(object navigationSource, object parameter = null);
    Task GoBackAsync();
}

In the Initialize method I use my custom mapper, where I keep a collection of page types with associated keys.
public class SuperMapper { private readonly ConcurrentDictionary<Type, object> _typeToAssociateDictionary = new ConcurrentDictionary<Type, object>(); private readonly ConcurrentDictionary<object, Type> _associateToType = new ConcurrentDictionary<object, Type>(); public void AddMapping(Type type, object associatedSource) { _typeToAssociateDictionary.TryAdd(type, associatedSource); _associateToType.TryAdd(associatedSource, type); } public Type GetTypeSource(object associatedSource) { Type typeSource; _associateToType.TryGetValue(associatedSource, out typeSource); return typeSource; } public object GetAssociatedSource(Type typeSource) { object associatedSource; _typeToAssociateDictionary.TryGetValue(typeSource, out associatedSource); return associatedSource; } } Enum with pages: public enum NavigationPageSource { Page1, Page2 } App.cs le: public class App : Application { GoalKicker.com Xamarin.Forms Notes for Professionals 26 public App() { var startPage = new Page1(); InitializeNavigation(startPage); MainPage = new NavigationPage(startPage); } #region Sample of navigation initialization private void InitializeNavigation(Page startPage) { var mapper = new SuperMapper(); mapper.AddMapping(typeof(Page1), NavigationPageSource.Page1); mapper.AddMapping(typeof(Page2), NavigationPageSource.Page2); var navigationService = DependencyService.Get<IViewNavigationService>(); navigationService.Initialize(startPage.Navigation, mapper); } #endregion } In mapper I associated type of some page with enum value. IViewNavigationService implementation: [assembly: Dependency(typeof(ViewNavigationService))] namespace SuperForms.Core.ViewNavigation { public class ViewNavigationService : IViewNavigationService { private INavigation _navigation; private SuperMapper _navigationMapper; public void Initialize(INavigation navigation, SuperMapper navigationMapper) { _navigation = navigation; _navigationMapper = navigationMapper; } public async Task NavigateToAsync(object navigationSource, object parameter = null) { CheckIsInitialized(); var type = _navigationMapper.GetTypeSource(navigationSource); if (type == null) { throw new InvalidOperationException( "Can't find associated type for " + navigationSource.ToString()); } ConstructorInfo constructor; object[] parameters; if (parameter == null) { constructor = type.GetTypeInfo() .DeclaredConstructors .FirstOrDefault(c => !c.GetParameters().Any()); parameters = new object[] { }; } GoalKicker.com Xamarin.Forms Notes for Professionals 27 else { constructor = type.GetTypeInfo() .DeclaredConstructors .FirstOrDefault(c => { var p = c.GetParameters(); return p.Count() == 1 && p[0].ParameterType == parameter.GetType(); }); parameters = new[] { parameter }; } if (constructor == null) { throw new InvalidOperationException( "No suitable constructor found for page " + navigationSource.ToString()); } var page = constructor.Invoke(parameters) as Page; await _navigation.PushAsync(page); } public async Task GoBackAsync() { CheckIsInitialized(); await _navigation.PopAsync(); } private void CheckIsInitialized() { if (_navigation == null || _navigationMapper == null) throw new NullReferenceException("Call Initialize method first."); } } } I get type of page on which user want navigate and create it's instance using reection. 
And then I could use navigation service on view model: var navigationService = DependencyService.Get<IViewNavigationService>(); await navigationService.NavigateToAsync(NavigationPageSource.Page2, "hello from Page1"); Section 5.5: Master Detail Root Page public class App : Application { internal static NavigationPage NavPage; public App () { // The root page of your application MainPage = new RootPage(); } } public class RootPage : MasterDetailPage GoalKicker.com Xamarin.Forms Notes for Professionals 28 { public RootPage() { var menuPage = new MenuPage(); menuPage.Menu.ItemSelected += (sender, e) => NavigateTo(e.SelectedItem as MenuItem); Master = menuPage; App.NavPage = new NavigationPage(new HomePage()); Detail = App.NavPage; } protected override async void OnAppearing() { base.OnAppearing(); } void NavigateTo(MenuItem menuItem) { Page displayPage = (Page)Activator.CreateInstance(menuItem.TargetType); Detail = new NavigationPage(displayPage); IsPresented = false; } } Section 5.6: Hierarchical navigation with XAML By default, the navigation pattern works like a stack of pages, calling the newest pages over the previous pages. You will need to use the NavigationPage object for this. Pushing new pages ... public class App : Application { public App() { MainPage = new NavigationPage(new Page1()); } } ... Page1.xaml ... <ContentPage.Content> <StackLayout> <Label Text="Page 1" /> <Button Text="Go to page 2" Clicked="GoToNextPage" /> </StackLayout> </ContentPage.Content> ... Page1.xaml.cs ... public partial class Page1 : ContentPage { public Page1() { InitializeComponent(); } protected async void GoToNextPage(object sender, EventArgs e) { await Navigation.PushAsync(new Page2()); } } GoalKicker.com Xamarin.Forms Notes for Professionals 29 ... Page2.xaml ... <ContentPage.Content> <StackLayout> <Label Text="Page 2" /> <Button Text="Go to Page 3" Clicked="GoToNextPage" /> </StackLayout> </ContentPage.Content> ... Page2.xaml.cs ... public partial class Page2 : ContentPage { public Page2() { InitializeComponent(); } protected async void GoToNextPage(object sender, EventArgs e) { await Navigation.PushAsync(new Page3()); } } ... Popping pages Normally the user uses the back button to return pages, but sometimes you need to control this programmatically, so you need to call the method NavigationPage.PopAsync() to return to the previous page or NavigationPage.PopToRootAsync() to return at the beggining, such like... Page3.xaml ... <ContentPage.Content> <StackLayout> <Label Text="Page 3" /> <Button Text="Go to previous page" Clicked="GoToPreviousPage" /> <Button Text="Go to beginning" Clicked="GoToStartPage" /> </StackLayout> </ContentPage.Content> ... Page3.xaml.cs ... public partial class Page3 : ContentPage { public Page3() { InitializeComponent(); } protected async void GoToPreviousPage(object sender, EventArgs e) { await Navigation.PopAsync(); } protected async void GoToStartPage(object sender, EventArgs e) { GoalKicker.com Xamarin.Forms Notes for Professionals 30 await Navigation.PopToRootAsync(); } } ... Section 5.7: Modal navigation with XAML Modal pages can created in three ways: From NavigationPage object for full screen pages For Alerts and Notications For ActionSheets that are pop-ups menus Full screen modals ... // to open await Navigation.PushModalAsync(new ModalPage()); // to close await Navigation.PopModalAsync(); ... Alerts/Conrmations and Notications ... 
// alert
await DisplayAlert("Alert title", "Alert text", "Ok button text");

// confirmation
var booleanAnswer = await DisplayAlert("Confirm?", "Confirmation text", "Yes", "No");
...

ActionSheets

...
var selectedOption = await DisplayActionSheet("Options", "Cancel", "Destroy", "Option 1", "Option 2", "Option 3");
...

Chapter 6: Xamarin.Forms Page

Section 6.1: TabbedPage

A TabbedPage is similar to a NavigationPage in that it allows for and manages simple navigation between several child Page objects. The difference is that, generally speaking, each platform displays some sort of bar at the top or bottom of the screen that shows most, if not all, of the available child Page objects. In Xamarin.Forms applications, a TabbedPage is generally useful when you have a small, predefined number of pages that users can navigate between, such as a menu or a simple wizard that can be positioned at the top or bottom of the screen.

Code

var page1 = new ContentPage {
    Title = "Tab1",
    Content = new Label {
        Text = "I'm the Tab1 Page",
        HorizontalOptions = LayoutOptions.Center,
        VerticalOptions = LayoutOptions.Center
    }
};

var page2 = new ContentPage {
    Title = "Tab2",
    Content = new Label {
        Text = "I'm the Tab2 Page",
        HorizontalOptions = LayoutOptions.Center,
        VerticalOptions = LayoutOptions.Center
    }
};

var tabbedPage = new TabbedPage { Children = { page1, page2 } };

Section 6.2: ContentPage

ContentPage: displays a single View.

XAML

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="XamlBasics.SampleXaml">
  <Label Text="This is a simple ContentPage" HorizontalOptions="Center" VerticalOptions="Center" />
</ContentPage>

Code

var label = new Label {
    Text = "This is a simple ContentPage",
    HorizontalOptions = LayoutOptions.Center,
    VerticalOptions = LayoutOptions.Center
};

var contentPage = new ContentPage { Content = label };

Section 6.3: MasterDetailPage

MasterDetailPage: manages two separate Pages (panes) of information.

XAML

<?xml version="1.0" encoding="utf-8" ?>
<MasterDetailPage xmlns="http://xamarin.com/schemas/2014/forms"
                  xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
                  x:Class="XamlBasics.SampleXaml">
  <MasterDetailPage.Master>
    <ContentPage Title="Master" BackgroundColor="Silver">
      <Label Text="This is the Master page." TextColor="Black" HorizontalOptions="Center" VerticalOptions="Center" />
    </ContentPage>
  </MasterDetailPage.Master>
  <MasterDetailPage.Detail>
    <ContentPage>
      <Label Text="This is the Detail page."
             HorizontalOptions="Center" VerticalOptions="Center" />
    </ContentPage>
  </MasterDetailPage.Detail>
</MasterDetailPage>

Code

var masterDetailPage = new MasterDetailPage {
    Master = new ContentPage {
        Title = "Master",
        BackgroundColor = Color.Silver,
        Content = new Label {
            TextColor = Color.Black,
            Text = "This is the Master page.",
            HorizontalOptions = LayoutOptions.Center,
            VerticalOptions = LayoutOptions.Center
        }
    },
    Detail = new ContentPage {
        Title = "Detail",
        Content = new Label {
            Text = "This is the Detail page.",
            HorizontalOptions = LayoutOptions.Center,
            VerticalOptions = LayoutOptions.Center
        }
    }
};

Chapter 7: Xamarin.Forms Cells

Section 7.1: EntryCell

An EntryCell is a Cell that combines the capabilities of a Label and an Entry. The EntryCell can be useful when building some functionality within your application to gather data from the user. EntryCells can easily be placed into a TableView and treated as a simple form.

XAML

<EntryCell Label="Type Something" Placeholder="Here"/>

Code

var entryCell = new EntryCell { Label = "Type Something", Placeholder = "Here" };

Section 7.2: SwitchCell

A SwitchCell is a Cell that combines the capabilities of a Label and an on-off switch. A SwitchCell can be useful for turning functionality on and off, or even for user preferences or configuration options.

XAML

<SwitchCell Text="Switch It Up!" />

Code

var switchCell = new SwitchCell { Text = "Switch It Up!" };

Section 7.3: TextCell

A TextCell is a Cell that has two separate text areas for displaying data. A TextCell is typically used for information purposes in both TableView and ListView controls. The two text areas are aligned vertically to maximize the space within the Cell. This type of Cell is also commonly used to display hierarchical data, so that when the user taps the cell, it navigates to another page.

XAML

<TextCell Text="I am primary" TextColor="Red" Detail="I am secondary" DetailColor="Blue"/>

Code

var textCell = new TextCell {
    Text = "I am primary",
    TextColor = Color.Red,
    Detail = "I am secondary",
    DetailColor = Color.Blue
};

Section 7.4: ImageCell

An ImageCell is exactly what it sounds like: a simple Cell that contains only an Image. This control functions very similarly to a normal Image control, but with far fewer bells and whistles.

XAML

<ImageCell ImageSource="http://d2g29cya9iq7ip.cloudfront.net/content/images/company/aboutus-video-bg.png?v=25072014072745"
           Text="This is some text"
           Detail="This is some detail" />

Code

var imageCell = new ImageCell {
    ImageSource = ImageSource.FromUri(new Uri("http://d2g29cya9iq7ip.cloudfront.net/content/images/company/aboutus-video-bg.png?v=25072014072745")),
    Text = "This is some text",
    Detail = "This is some detail"
};

Section 7.5: ViewCell

You can consider a ViewCell a blank slate. It is your personal canvas for creating a Cell that looks exactly the way you want it. You can even compose it of instances of multiple other View objects put together with Layout controls. You are only limited by your imagination. And maybe screen size.
XAML <ViewCell> <ViewCell.View> <StackLayout> <Button Text="My Button"/> <Label Text="My Label"/> <Entry Text="And some other stuff"/> </StackLayout> </ViewCell.View> </ViewCell> Code var button = new Button { Text = "My Button" }; var label = new Label { Text = "My Label" }; var entry = new Entry { Text ="And some other stuff" }; var viewCell = new ViewCell { View = new StackLayout { Children = { button, label, entry } } }; GoalKicker.com Xamarin.Forms Notes for Professionals 39 GoalKicker.com Xamarin.Forms Notes for Professionals 40 Chapter 8: Xamarin.Forms Views Section 8.1: Button The Button is probably the most common control not only in mobile applications, but in any applications that have a UI. The concept of a button has too many purposes to list here. Generally speaking though, you will use a button to allow users to initiate some sort of action or operation within your application. This operation could include anything from basic navigation within your app, to submitting data to a web service somewhere on the Internet. XAML <Button x:Name="MyButton" Text="Click Me!" TextColor="Red" BorderColor="Blue" VerticalOptions="Center" HorizontalOptions="Center" Clicked="Button_Clicked"/> XAML Code-Behind public void Button_Clicked( object sender, EventArgs args ) { MyButton.Text = "I've been clicked!"; } Code var button = new Button( ) { Text = "Hello, Forms !", VerticalOptions = LayoutOptions.CenterAndExpand, HorizontalOptions = LayoutOptions.CenterAndExpand, TextColor = Color.Red, BorderColor = Color.Blue, }; button.Clicked += ( sender, args ) => { var b = (Button) sender; b.Text = "I've been clicked!"; }; GoalKicker.com Xamarin.Forms Notes for Professionals 41 Section 8.2: DatePicker Quite often within mobile applications, there will be a reason to deal with dates. When working with dates, you will probably need some sort of user input to select a date. This could occur when working with a scheduling or calendar app. In this case, it is best to provide users with a specialized control that allows them to interactively pick a date, rather than requiring users to manually type a date. This is where the DatePicker control is really useful. XAML <DatePicker Date="09/12/2014" Format="d" /> Code var datePicker = new DatePicker{ Date = DateTime.Now, Format = "d" }; GoalKicker.com Xamarin.Forms Notes for Professionals 42 Section 8.3: Entry The Entry View is used to allow users to type a single line of text. This single line of text can be used for multiple purposes including entering basic notes, credentials, URLs, and more. This View is a multi-purpose View, meaning that if you need to type regular text or want to obscure a password, it is all done through this single control. XAML <Entry Placeholder="Please Enter Some Text Here" HorizontalOptions="Center" VerticalOptions="Center" Keyboard="Email"/> Code var entry = new Entry { Placeholder = "Please Enter Some Text Here", HorizontalOptions = LayoutOptions.Center, VerticalOptions = LayoutOptions.Center, Keyboard = Keyboard.Email }; Section 8.4: Editor The Editor is very similar to the Entry in that it allows users to enter some free-form text. The dierence is that the Editor allows for multi-line input whereas the Entry is only used for single line input. The Entry also provides a few more properties than the Editor to allow further customization of the View. 
XAML <Editor HorizontalOptions="Fill" VerticalOptions="Fill" Keyboard="Chat"/> Code var editor = new Editor { GoalKicker.com Xamarin.Forms Notes for Professionals 43 HorizontalOptions = LayoutOptions.Fill, VerticalOptions = LayoutOptions.Fill, Keyboard = Keyboard.Chat }; Section 8.5: Image Images are very important parts of any application. They provide the opportunity to inject additional visual elements as well as branding into your application. Not to mention that images are typically more interesting to look at than text or buttons. You can use an Image as a standalone element within your application, but an Image element can also be added to other View elements such as a Button. XAML <Image Aspect="AspectFit" Source="http://d2g29cya9iq7ip.cloudfront.net/co ntent/images/company/aboutus-video-bg.png?v=25072014072745"/> Code var image = new Image { Aspect = Aspect.AspectFit, Source = ImageSource.FromUri(new Uri("http://d2g29cya9iq7ip.cloudfron t.net/content/images/company/aboutus-video-bg.png?v=25072014072745")) }; GoalKicker.com Xamarin.Forms Notes for Professionals 44 Section 8.6: Label Believe it or not, the Label is one of the most crucial yet underappreciated View classes not only in Xamarin.Forms, but in UI development in general. It is seen as a rather boring line of text, but without that line of text it would be very dicult to convey certain ideas to the user. Label controls can be used to describe what the user should enter into an Editor or Entry control. They can describe a section of the UI and give it context. They can be used to show the total in a calculator app. Yes, the Label is truly the most versatile control in your tool bag that may not always spark a lot of attention, but it is the rst one noticed if it isnt there. XAML <Label Text="This is some really awesome text in a Label!" TextColor="Red" XAlign="Center" YAlign="Center"/> Code var label = new Label { Text = "This is some really awesome text in a Label!", TextColor = Color.Red, XAlign = TextAlignment.Center, YAlign = TextAlignment.Center }; GoalKicker.com Xamarin.Forms Notes for Professionals 45 GoalKicker.com Xamarin.Forms Notes for Professionals 46 Chapter 9: Using ListViews This documentation details how to use the dierent components of the Xamarin Forms ListView Section 9.1: Pull to Refresh in XAML and Code behind To enable Pull to Refresh in a ListView in Xamarin, you rst need to specify that it is PullToRefresh enabled and then specify the name of the command you want to invoke upon the ListView being pulled: <ListView x:Name="itemListView" IsPullToRefreshEnabled="True" RefreshCommand="Refresh"> The same can be achieved in code behind: itemListView.IsPullToRefreshEnabled = true; itemListView.RefreshCommand = Refresh; Then, you must specify what the Refresh Command does in your code behind: public ICommand Refresh { get { itemListView.IsRefreshing = true; //This turns on the activity //Indicator for the ListView //Then add your code to execute when the ListView is pulled itemListView.IsRefreshing = false; } } GoalKicker.com Xamarin.Forms Notes for Professionals 47 Chapter 10: Display Alert Section 10.1: DisplayAlert An alert box can be popped-up on a Xamarin.Forms Page by the method, DisplayAlert. We can provide a Title, Body (Text to be alerted) and one/two Action Buttons. Page oers two overrides of DisplayAlert method. 1. public Task DisplayAlert (String title, String message, String cancel) This override presents an alert dialog to the application user with a single cancel button. 
The alert displays modally and once dismissed the user continues interacting with the application. Example : DisplayAlert ("Alert", "You have been alerted", "OK"); Above snippet will present a native implementation of Alerts in each platform (AlertDialog in Android, UIAlertView in iOS, MessageDialog in Windows) as below. 2. public System.Threading.Tasks.Task<bool> DisplayAlert (String title, String message, String accept, String cancel) This override presents an alert dialog to the application user with an accept and a cancel button. It captures a user's response by presenting two buttons and returning a boolean. To get a response from an alert, supply text for both buttons and await the method. After the user selects one of the options the answer will be returned to the code. Example : var answer = await DisplayAlert ("Question?", "Would you like to play a game", "Yes", "No"); Debug.WriteLine ("Answer: " + (answer?"Yes":"No")); Example 2:(if Condition true or false check to alert proceed) GoalKicker.com Xamarin.Forms Notes for Professionals 48 async void listSelected(object sender, SelectedItemChangedEventArgs e) { var ans = await DisplayAlert("Question?", "Would you like Delete", "Yes", "No"); if (ans == true) { //Success condition } else { //false conditon } } Section 10.2: Alert Example with only one button and action var alertResult = await DisplayAlert("Alert Title", Alert Message, null, "OK"); if(!alertResult) { //do your stuff. } Here we will get Ok click action. GoalKicker.com Xamarin.Forms Notes for Professionals 49 Chapter 11: Accessing native features with DependencyService Section 11.1: Implementing text-to-speech A good example of a feature that request platform specic code is when you want to implement text-to-speech (tts). This example assumes that you are working with shared code in a PCL library. A schematic overview of our solution would look like the image underneath. In our shared code we dene an interface which is registered with the DependencyService. This is where we will do our calls upon. Dene an interface like underneath. public interface ITextToSpeech { void Speak (string text); } Now in each specic platform, we need to create an implementation of this interface. Let's start with the iOS implementation. iOS Implementation using AVFoundation; public class TextToSpeechImplementation : ITextToSpeech { public TextToSpeechImplementation () {} public void Speak (string text) { var speechSynthesizer = new AVSpeechSynthesizer (); var speechUtterance = new AVSpeechUtterance (text) { Rate = AVSpeechUtterance.MaximumSpeechRate/4, Voice = AVSpeechSynthesisVoice.FromLanguage ("en-US"), GoalKicker.com Xamarin.Forms Notes for Professionals 50 Volume = 0.5f, PitchMultiplier = 1.0f }; speechSynthesizer.SpeakUtterance (speechUtterance); } } In the code example above you notice that there is specic code to iOS. Like types such as AVSpeechSynthesizer. These would not work in shared code. To register this implementation with the Xamarin DependencyService add this attribute above the namespace declaration. using AVFoundation; using DependencyServiceSample.iOS;//enables registration outside of namespace [assembly: Xamarin.Forms.Dependency (typeof (TextToSpeechImplementation))] namespace DependencyServiceSample.iOS { public class TextToSpeechImplementation : ITextToSpeech //... Rest of code Now when you do a call like this in your shared code, the right implementation for the platform you are running your app on is injected. DependencyService.Get<ITextToSpeech>(). 
More on this later on. Android Implementation The Android implementation of this code would look like underneath. using Android.Speech.Tts; using Xamarin.Forms; using System.Collections.Generic; using DependencyServiceSample.Droid; public class TextToSpeechImplementation : Java.Lang.Object, ITextToSpeech, TextToSpeech.IOnInitListener { TextToSpeech speaker; string toSpeak; public TextToSpeechImplementation () {} public void Speak (string text) { var ctx = Forms.Context; // useful for many Android SDK features toSpeak = text; if (speaker == null) { speaker = new TextToSpeech (ctx, this); } else { var p = new Dictionary<string,string> (); speaker.Speak (toSpeak, QueueMode.Flush, p); } } #region IOnInitListener implementation public void OnInit (OperationResult status) { if (status.Equals (OperationResult.Success)) { GoalKicker.com Xamarin.Forms Notes for Professionals 51 var p = new Dictionary<string,string> (); speaker.Speak (toSpeak, QueueMode.Flush, p); } } #endregion } Again don't forget to register it with the DependencyService. using Android.Speech.Tts; using Xamarin.Forms; using System.Collections.Generic; using DependencyServiceSample.Droid; [assembly: Xamarin.Forms.Dependency (typeof (TextToSpeechImplementation))] namespace DependencyServiceSample.Droid{ //... Rest of code Windows Phone Implementation Finally, for Windows Phone this code can be used. public class TextToSpeechImplementation : ITextToSpeech { public TextToSpeechImplementation() {} public async void Speak(string text) { MediaElement mediaElement = new MediaElement(); var synth = new Windows.Media.SpeechSynthesis.SpeechSynthesizer(); SpeechSynthesisStream stream = await synth.SynthesizeTextToStreamAsync("Hello World"); mediaElement.SetSource(stream, stream.ContentType); mediaElement.Play(); await synth.SynthesizeTextToStreamAsync(text); } } And once more do not forget to register it. using Windows.Media.SpeechSynthesis; using Windows.UI.Xaml.Controls; using DependencyServiceSample.WinPhone;//enables registration outside of namespace [assembly: Xamarin.Forms.Dependency (typeof (TextToSpeechImplementation))] namespace DependencyServiceSample.WinPhone{ //... Rest of code Implementing in Shared Code Now everything is in place to make it work! Finally, in your shared code you can now call this function by using the interface. At runtime, the implementation will be injected which corresponds to the current platform it is running on. In this code you will see a page that could be in a Xamarin Forms project. It creates a button which invokes the Speak() method by using the DependencyService. GoalKicker.com Xamarin.Forms Notes for Professionals 52 public MainPage () { var speak = new Button { Text = "Hello, Forms !", VerticalOptions = LayoutOptions.CenterAndExpand, HorizontalOptions = LayoutOptions.CenterAndExpand, }; speak.Clicked += (sender, e) => { DependencyService.Get<ITextToSpeech>().Speak("Hello from Xamarin Forms"); }; Content = speak; } The result will be that when the app is ran and the button is clicked, the text provided will be spoken. All of this without having to do hard stu like compiler hints and such. You now have one uniform way of accessing platform specic functionality through platform independent code. Section 11.2: Getting Application and Device OS Version Numbers - Android & iOS - PCL The example below will collect the Device's OS version number and the the version of the application (which is dened in each projects' properties) that is entered into Version name on Android and Version on iOS. 
First make an interface in your PCL project: public interface INativeHelper { /// <summary> /// On iOS, gets the <c>CFBundleVersion</c> number and on Android, gets the <c>PackageInfo</c>'s <c>VersionName</c>, both of which are specified in their respective project properties. /// </summary> /// <returns><c>string</c>, containing the build number.</returns> string GetAppVersion(); /// <summary> /// On iOS, gets the <c>UIDevice.CurrentDevice.SystemVersion</c> number and on Android, gets the <c>Build.VERSION.Release</c>. /// </summary> /// <returns><c>string</c>, containing the OS version number.</returns> string GetOsVersion(); } Now we implement the interface in the Android and iOS projects. Android: [assembly: Dependency(typeof(NativeHelper_Android))] namespace YourNamespace.Droid{ public class NativeHelper_Android : INativeHelper { /// <summary> /// See interface summary. /// </summary> public string GetAppVersion() { Context context = Forms.Context; return context.PackageManager.GetPackageInfo(context.PackageName, 0).VersionName; } GoalKicker.com Xamarin.Forms Notes for Professionals 53 /// <summary> /// See interface summary. /// </summary> public string GetOsVersion() { return Build.VERSION.Release; } } } iOS: [assembly: Dependency(typeof(NativeHelper_iOS))] namespace YourNamespace.iOS { public class NativeHelper_iOS : INativeHelper { /// <summary> /// See interface summary. /// </summary> public string GetAppVersion() { return Foundation.NSBundle.MainBundle.InfoDictionary[new Foundation.NSString("CFBundleVersion")].ToString(); } /// <summary> /// See interface summary. /// </summary> public string GetOsVersion() { return UIDevice.CurrentDevice.SystemVersion; } } } Now to use the code in a method: public string GetOsAndAppVersion { INativeHelper helper = DependencyService.Get<INativeHelper>(); if(helper != null) { string osVersion = helper.GetOsVersion(); string appVersion = helper.GetBuildNumber() } } GoalKicker.com Xamarin.Forms Notes for Professionals 54 Chapter 12: DependencyService Section 12.1: Android implementation The Android specic implementation is a bit more complex because it forces you to inherit from a native Java.Lang.Object and forces you to implement the IOnInitListener interface. Android requires you to provide a valid Android context for a lot of the SDK methods it exposes. Xamarin.Forms exposes a Forms.Context object that provides you with a Android context that you can use in such cases. using Android.Speech.Tts; using Xamarin.Forms; using System.Collections.Generic; using DependencyServiceSample.Droid; public class TextToSpeechAndroid : Java.Lang.Object, ITextToSpeech, TextToSpeech.IOnInitListener { TextToSpeech _speaker; public TextToSpeechAndroid () {} public void Speak (string whatToSay) { var ctx = Forms.Context; if (_speaker == null) { _speaker = new TextToSpeech (ctx, this); } else { var p = new Dictionary<string,string> (); _speaker.Speak (whatToSay, QueueMode.Flush, p); } } #region IOnInitListener implementation public void OnInit (OperationResult status) { if (status.Equals (OperationResult.Success)) { var p = new Dictionary<string,string> (); _speaker.Speak (toSpeak, QueueMode.Flush, p); } } #endregion } When you've created your class you need to enable the DependencyService to discover it at run time. This is done by adding an [assembly] attribute above the class denition and outside of any namespace denitions. 
using Android.Speech.Tts; using Xamarin.Forms; using System.Collections.Generic; using DependencyServiceSample.Droid; GoalKicker.com Xamarin.Forms Notes for Professionals 55 [assembly: Xamarin.Forms.Dependency (typeof (TextToSpeechAndroid))] namespace DependencyServiceSample.Droid { ... This attribute registers the class with the DependencyService so it can be used when an instance of the ITextToSpeech interface is needed. Section 12.2: Interface The interface denes the behaviour that you want to expose through the DependencyService. One example usage of a DependencyService is a Text-To-Speech service. There is currently no abstraction for this feature in Xamarin.Forms, so you need to create your own. Start o by dening an interface for the behaviour: public interface ITextToSpeech { void Speak (string whatToSay); } Because we dene our interface we can code against it from our shared code. Note: Classes that implement the interface need to have a parameterless constructor to work with the DependencyService. Section 12.3: iOS implementation The interface you dened needs to be implemented in every targeted platform. For iOS this is done through the AVFoundation framework. The following implementation of the ITextToSpeech interface handles speaking a given text in English. using AVFoundation; public class TextToSpeechiOS : ITextToSpeech { public TextToSpeechiOS () {} public void Speak (string whatToSay) { var speechSynthesizer = new AVSpeechSynthesizer (); var speechUtterance = new AVSpeechUtterance (whatToSay) { Rate = AVSpeechUtterance.MaximumSpeechRate/4, Voice = AVSpeechSynthesisVoice.FromLanguage ("en-US"), Volume = 0.5f, PitchMultiplier = 1.0f }; speechSynthesizer.SpeakUtterance (speechUtterance); } } When you've created your class you need to enable the DependencyService to discover it at run time. This is done by adding an [assembly] attribute above the class denition and outside of any namespace denitions. using AVFoundation; using DependencyServiceSample.iOS; [assembly: Xamarin.Forms.Dependency (typeof (TextToSpeechiOS))] GoalKicker.com Xamarin.Forms Notes for Professionals 56 namespace DependencyServiceSample.iOS { public class TextToSpeechiOS : ITextToSpeech ... This attribute registers the class with the DependencyService so it can be used when an instance of the ITextToSpeech interface is needed. Section 12.4: Shared code After you've created and registered your platform-specic classes you can start hooking them up to your shared code. The following page contains a button that triggers the text-to-speech functionality using a pre-dened sentence. It uses DependencyService to retrieve a platform-specic implementation of ITextToSpeech at run time using the native SDKs. public MainPage () { var speakButton = new Button { Text = "Talk to me baby!", VerticalOptions = LayoutOptions.CenterAndExpand, HorizontalOptions = LayoutOptions.CenterAndExpand, }; speakButton.Clicked += (sender, e) => { DependencyService.Get<ITextToSpeech>().Speak("Xamarin Forms likes eating cake by the ocean."); }; Content = speakButton; } When you run this application on an iOS or Android device and tap the button you will hear the application speak the given sentence. 
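One detail worth guarding against: DependencyService.Get<ITextToSpeech>() returns null when no implementation has been registered for the current platform (for example, when the [assembly: Xamarin.Forms.Dependency] attribute was forgotten). The snippet below is a hypothetical convenience wrapper for the shared code, a sketch rather than part of the sample above, that checks for this before speaking:

public static class SpeechHelper
{
    // Hypothetical helper name; not part of the DependencyServiceSample code.
    public static void Speak(string text)
    {
        // Resolves the platform-specific implementation at run time.
        var tts = Xamarin.Forms.DependencyService.Get<ITextToSpeech>();

        if (tts == null)
        {
            // No implementation was registered on this platform; fail gracefully
            // instead of throwing a NullReferenceException.
            System.Diagnostics.Debug.WriteLine("No ITextToSpeech implementation registered.");
            return;
        }

        tts.Speak(text);
    }
}

The button handler from the example could then call SpeechHelper.Speak("...") instead of resolving the interface directly.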
GoalKicker.com Xamarin.Forms Notes for Professionals 57 Chapter 13: Custom Renderers Section 13.1: Accessing renderer from a native project var renderer = Platform.GetRenderer(visualElement); if (renderer == null) { renderer = Platform.CreateRenderer(visualElement); Platform.SetRenderer(visualElement, renderer); } DoSomeThingWithRender(render); // now you can do whatever you want with render Section 13.2: Rounded label with a custom renderer for Frame (PCL & iOS parts) First step : PCL part using Xamarin.Forms; namespace ProjectNamespace { public class ExtendedFrame : Frame { /// <summary> /// The corner radius property. /// </summary> public static readonly BindableProperty CornerRadiusProperty = BindableProperty.Create("CornerRadius", typeof(double), typeof(ExtendedFrame), 0.0); /// <summary> /// Gets or sets the corner radius. /// </summary> public double CornerRadius { get { return (double)GetValue(CornerRadiusProperty); } set { SetValue(CornerRadiusProperty, value); } } } } Second step : iOS part using ProjectNamespace; using ProjectNamespace.iOS; using Xamarin.Forms; using Xamarin.Forms.Platform.iOS; [assembly: ExportRenderer(typeof(ExtendedFrame), typeof(ExtendedFrameRenderer))] namespace ProjectNamespace.iOS { public class ExtendedFrameRenderer : FrameRenderer { protected override void OnElementChanged(ElementChangedEventArgs<Frame> e) { base.OnElementChanged(e); GoalKicker.com Xamarin.Forms Notes for Professionals 58 if (Element != null) { Layer.MasksToBounds = true; Layer.CornerRadius = (float)(Element as ExtendedFrame).CornerRadius; } } protected override void OnElementPropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) { base.OnElementPropertyChanged(sender, e); if (e.PropertyName == ExtendedFrame.CornerRadiusProperty.PropertyName) { Layer.CornerRadius = (float)(Element as ExtendedFrame).CornerRadius; } } } } Third step : XAML code to call an ExtendedFrame If you want to use it in a XAML part, don't forget to write this : xmlns:controls="clr-namespace:ProjectNamespace;assembly:ProjectNamespace" after xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" Now, you can use the ExtendedFrame like this : <controls:ExtendedFrame VerticalOptions="FillAndExpand" HorizontalOptions="FillAndExpand" BackgroundColor="Gray" CornerRadius="35.0"> <Frame.Content> <Label Text="MyText" TextColor="Blue"/> </Frame.Content> </controls:ExtendedFrame> Section 13.3: Custom renderer for ListView Custom Renderers let developers customize the appearance and behavior of Xamarin.Forms controls on each platform. Developers could use features of native controls. For example, we need to disable scroll in ListView. On iOS ListView is scrollable even if all items are placed on the screen and user shouldn't be able to scroll the list. Xamarin.Forms.ListView doesn't manage such setting. In this case, a renderer is coming to help. Firstly, we should create custom control in PCL project, which will declare some required bindable property: public class SuperListView : ListView { GoalKicker.com Xamarin.Forms Notes for Professionals 59 public static readonly BindableProperty IsScrollingEnableProperty = BindableProperty.Create(nameof(IsScrollingEnable), typeof(bool), typeof(SuperListView), true); public bool IsScrollingEnable { get { return (bool)GetValue(IsScrollingEnableProperty); } set { SetValue(IsScrollingEnableProperty, value); } } } Next step will be creating a renderer for each platform. 
iOS: [assembly: ExportRenderer(typeof(SuperListView), typeof(SuperListViewRenderer))] namespace SuperForms.iOS.Renderers { public class SuperListViewRenderer : ListViewRenderer { protected override void OnElementChanged(ElementChangedEventArgs<ListView> e) { base.OnElementChanged(e); var superListView = Element as SuperListView; if (superListView == null) return; Control.ScrollEnabled = superListView.IsScrollingEnable; } } } And Android(Android's list doesn't have scroll if all items are placed on the screen, so we will not disable scrolling, but still we are able to use native properties): [assembly: ExportRenderer(typeof(SuperListView), typeof(SuperListViewRenderer))] namespace SuperForms.Droid.Renderers { public class SuperListViewRenderer : ListViewRenderer { protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.ListView> e) { base.OnElementChanged(e); var superListView = Element as SuperListView; if (superListView == null) return; } } } Element property of renderer is my SuperListView control from PCL project. Control property of renderer is native control. Android.Widget.ListView for Android and UIKit.UITableView for iOS. GoalKicker.com Xamarin.Forms Notes for Professionals 60 And how we will use it in XAML: <ContentPage x:Name="Page" xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:controls="clr-namespace:SuperForms.Controls;assembly=SuperForms.Controls" x:Class="SuperForms.Samples.SuperListViewPage"> <controls:SuperListView ItemsSource="{Binding Items, Source={x:Reference Page}}" IsScrollingEnable="false" Margin="20"> <controls:SuperListView.ItemTemplate> <DataTemplate> <ViewCell> <Label Text="{Binding .}"/> </ViewCell> </DataTemplate> </controls:SuperListView.ItemTemplate> </controls:SuperListView> </ContentPage> .cs le of page: public partial class SuperListViewPage : ContentPage { private ObservableCollection<string> _items; public ObservableCollection<string> Items { get { return _items; } set { _items = value; OnPropertyChanged(); } } public SuperListViewPage() { var list = new SuperListView(); InitializeComponent(); var items = new List<string>(10); for (int i = 1; i <= 10; i++) { items.Add($"Item {i}"); } Items = new ObservableCollection<string>(items); } } Section 13.4: Custom Renderer for BoxView Custom Renderer help to allows to add new properties and render them dierently in native platform that can not be otherwise does through shared code. In this example we will add radius and shadow to a boxview. Firstly, we should create custom control in PCL project, which will declare some required bindable property: GoalKicker.com Xamarin.Forms Notes for Professionals 61 namespace Mobile.Controls { public class ExtendedBoxView : BoxView { /// <summary> /// Respresents the background color of the button. 
/// </summary> public static readonly BindableProperty BorderRadiusProperty = BindableProperty.Create<ExtendedBoxView, double>(p => p.BorderRadius, 0); public double BorderRadius { get { return (double)GetValue(BorderRadiusProperty); } set { SetValue(BorderRadiusProperty, value); } } public static readonly BindableProperty StrokeProperty = BindableProperty.Create<ExtendedBoxView, Color>(p => p.Stroke, Color.Transparent); public Color Stroke { get { return (Color)GetValue(StrokeProperty); } set { SetValue(StrokeProperty, value); } } public static readonly BindableProperty StrokeThicknessProperty = BindableProperty.Create<ExtendedBoxView, double>(p => p.StrokeThickness, 0); public double StrokeThickness { get { return (double)GetValue(StrokeThicknessProperty); } set { SetValue(StrokeThicknessProperty, value); } } } } Next step will be creating a renderer for each platform. iOS: [assembly: ExportRenderer(typeof(ExtendedBoxView), typeof(ExtendedBoxViewRenderer))] namespace Mobile.iOS.Renderers { public class ExtendedBoxViewRenderer : VisualElementRenderer<BoxView> { public ExtendedBoxViewRenderer() { } protected override void OnElementChanged(ElementChangedEventArgs<BoxView> e) { base.OnElementChanged(e); if (Element == null) return; Layer.MasksToBounds = true; Layer.CornerRadius = (float)((ExtendedBoxView)this.Element).BorderRadius / 2.0f; } protected override void OnElementPropertyChanged(object sender, GoalKicker.com Xamarin.Forms Notes for Professionals 62 System.ComponentModel.PropertyChangedEventArgs e) { base.OnElementPropertyChanged(sender, e); if (e.PropertyName == ExtendedBoxView.BorderRadiusProperty.PropertyName) { SetNeedsDisplay(); } } public override void Draw(CGRect rect) { ExtendedBoxView roundedBoxView = (ExtendedBoxView)this.Element; using (var context = UIGraphics.GetCurrentContext()) { context.SetFillColor(roundedBoxView.Color.ToCGColor()); context.SetStrokeColor(roundedBoxView.Stroke.ToCGColor()); context.SetLineWidth((float)roundedBoxView.StrokeThickness); var rCorner = this.Bounds.Inset((int)roundedBoxView.StrokeThickness / 2, (int)roundedBoxView.StrokeThickness / 2); nfloat radius = (nfloat)roundedBoxView.BorderRadius; radius = (nfloat)Math.Max(0, Math.Min(radius, Math.Max(rCorner.Height / 2, rCorner.Width / 2))); var path = CGPath.FromRoundedRect(rCorner, radius, radius); context.AddPath(path); context.DrawPath(CGPathDrawingMode.FillStroke); } } } } Again you can customize however you want inside the draw method. 
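As a small illustration of such a customization (a hedged sketch, not part of the original renderer), the drawing mode could be derived from StrokeThickness inside Draw so that a box without a border is filled only:

// Inside Draw(), instead of always using CGPathDrawingMode.FillStroke:
var mode = roundedBoxView.StrokeThickness > 0
    ? CGPathDrawingMode.FillStroke   // fill and draw the border
    : CGPathDrawingMode.Fill;        // no border requested, fill only
context.DrawPath(mode);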
And same for Android: [assembly: ExportRenderer(typeof(ExtendedBoxView), typeof(ExtendedBoxViewRenderer))] namespace Mobile.Droid { /// <summary> /// /// </summary> public class ExtendedBoxViewRenderer : VisualElementRenderer<BoxView> { /// <summary> /// /// </summary> public ExtendedBoxViewRenderer() { } /// <summary> /// /// </summary> /// <param name="e"></param> protected override void OnElementChanged(ElementChangedEventArgs<BoxView> e) { base.OnElementChanged(e); GoalKicker.com Xamarin.Forms Notes for Professionals 63 SetWillNotDraw(false); Invalidate(); } /// <summary> /// /// </summary> /// <param name="sender"></param> /// <param name="e"></param> protected override void OnElementPropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) { base.OnElementPropertyChanged(sender, e); if (e.PropertyName == ExtendedBoxView.BorderRadiusProperty.PropertyName) { Invalidate(); } } /// <summary> /// /// </summary> /// <param name="canvas"></param> public override void Draw(Canvas canvas) { var box = Element as ExtendedBoxView; base.Draw(canvas); Paint myPaint = new Paint(); myPaint.SetStyle(Paint.Style.Stroke); myPaint.StrokeWidth = (float)box.StrokeThickness; myPaint.SetARGB(convertTo255ScaleColor(box.Color.A), convertTo255ScaleColor(box.Color.R), convertTo255ScaleColor(box.Color.G), convertTo255ScaleColor(box.Color.B)); myPaint.SetShadowLayer(20, 0, 5, Android.Graphics.Color.Argb(100, 0, 0, 0)); SetLayerType(Android.Views.LayerType.Software, myPaint); var number = (float)box.StrokeThickness / 2; RectF rectF = new RectF( number, // left number, // top canvas.Width - number, // right canvas.Height - number // bottom ); var radius = (float)box.BorderRadius; canvas.DrawRoundRect(rectF, radius, radius, myPaint); } /// <summary> /// /// </summary> /// <param name="color"></param> /// <returns></returns> private int convertTo255ScaleColor(double color) { return (int) Math.Ceiling(color * 255); } GoalKicker.com Xamarin.Forms Notes for Professionals 64 } } The XAML: We rst reference to our control with the namespace we dened earlier. 
xmlns:Controls="clr-namespace:Mobile.Controls" We then use the Control as follows and use properties dened at the beginning: <Controls:ExtendedBoxView x:Name="search_boxview" Color="#444" BorderRadius="5" HorizontalOptions="CenterAndExpand" /> Section 13.5: Rounded BoxView with selectable background color First step : PCL part public class RoundedBoxView : BoxView { public static readonly BindableProperty CornerRadiusProperty = BindableProperty.Create("CornerRadius", typeof(double), typeof(RoundedEntry), default(double)); public double CornerRadius { get { return (double)GetValue(CornerRadiusProperty); } set { SetValue(CornerRadiusProperty, value); } } public static readonly BindableProperty FillColorProperty = BindableProperty.Create("FillColor", typeof(string), typeof(RoundedEntry), default(string)); public string FillColor { get { return (string) GetValue(FillColorProperty); } set { SetValue(FillColorProperty, value); } } } GoalKicker.com Xamarin.Forms Notes for Professionals 65 Second step : Droid part [assembly: ExportRenderer(typeof(RoundedBoxView), typeof(RoundedBoxViewRenderer))] namespace MyNamespace.Droid { public class RoundedBoxViewRenderer : VisualElementRenderer<BoxView> { protected override void OnElementChanged(ElementChangedEventArgs<BoxView> e) { base.OnElementChanged(e); SetWillNotDraw(false); Invalidate(); } protected override void OnElementPropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) { base.OnElementPropertyChanged(sender, e); SetWillNotDraw(false); Invalidate(); } public override void Draw(Canvas canvas) { var box = Element as RoundedBoxView; var rect = new Rect(); var paint = new Paint { Color = Xamarin.Forms.Color.FromHex(box.FillColor).ToAndroid(), AntiAlias = true, }; GetDrawingRect(rect); var radius = (float)(rect.Width() / box.Width * box.CornerRadius); canvas.DrawRoundRect(new RectF(rect), radius, radius, paint); } } } Third step : iOS part [assembly: ExportRenderer(typeof(RoundedBoxView), typeof(RoundedBoxViewRenderer))] namespace MyNamespace.iOS { public class RoundedBoxViewRenderer : BoxRenderer { protected override void OnElementChanged(ElementChangedEventArgs<BoxView> e) { base.OnElementChanged(e); if (Element != null) { Layer.CornerRadius = (float)(Element as RoundedBoxView).CornerRadius; Layer.BackgroundColor = Color.FromHex((Element as RoundedBoxView).FillColor).ToCGColor(); } } protected override void OnElementPropertyChanged(object sender, GoalKicker.com Xamarin.Forms Notes for Professionals 66 System.ComponentModel.PropertyChangedEventArgs e) { base.OnElementPropertyChanged(sender, e); if (Element != null) { Layer.CornerRadius = (float)(Element as RoundedBoxView).CornerRadius; Layer.BackgroundColor = (Element as RoundedBoxView).FillColor.ToCGColor(); } } } } GoalKicker.com Xamarin.Forms Notes for Professionals 67 Chapter 14: Caching Section 14.1: Caching using Akavache About Akavache Akavache is an incredibly useful library providing reach functionality of caching your data. Akavache provides a keyvalue storage interface and works on the top of SQLite3. You do not need to keep your schema synced as it's actually No-SQL solution which makes it perfect for most of the mobile applications especially if you need your app to be updated often without data loss. Recommendations for Xamarin Akavache is denetly the best caching library for Xamarin application if only you do not need to operate with strongly relative data, binary or really big amounts of data. 
Use Akavache in the following cases: You need your app to cache the data for a given period of time (you can congure expiration timeout for each entity being saved; You want your app to work oine; It's hard to determine and freeze the schema of your data. For example, you have lists containing dierent typed objects; It's enough for you to have simple key-value access to the data and you do not need to make complex queries. Akavache is not a "silver bullet" for data storage so think twice about using it in the following cases: Your data entities have many relations between each other; You don't really need your app to work oine; You have huge amount of data to be saved locally; You need to migrate your data from version to version; You need to perform complex queries typical for SQL like grouping, projections etc. Actually you can manually migrate your data just by reading and writing it back with updated elds. Simple example Interacting with Akavache is primarily done through an object called BlobCache. Most of the Akavache's methods returns reactive observables, but you also can just await them thanks to extension methods. using System.Reactive.Linq; // IMPORTANT - this makes await work! // Make sure you set the application name before doing any inserts or gets BlobCache.ApplicationName = "AkavacheExperiment"; var myToaster = new Toaster(); await BlobCache.UserAccount.InsertObject("toaster", myToaster); // // ...later, in another part of town... // GoalKicker.com Xamarin.Forms Notes for Professionals 68 // Using async/await var toaster = await BlobCache.UserAccount.GetObject<Toaster>("toaster"); // or without async/await Toaster toaster; BlobCache.UserAccount.GetObject<Toaster>("toaster") .Subscribe(x => toaster = x, ex => Console.WriteLine("No Key!")); Error handling Toaster toaster; try { toaster = await BlobCache.UserAccount.GetObjectAsync("toaster"); } catch (KeyNotFoundException ex) { toaster = new Toaster(); } // Or without async/await: toaster = await BlobCache.UserAccount.GetObjectAsync<Toaster>("toaster") .Catch(Observable.Return(new Toaster())); GoalKicker.com Xamarin.Forms Notes for Professionals 69 Chapter 15: Gestures Section 15.1: Make an Image tappable by adding a TapGestureRecognizer There are a couple of default recognizers available in Xamarin.Forms, one of them is the TapGestureRecognizer. You can add them to virtually any visual element. Have a look at a simple implementation which binds to an Image. Here is how to do it in code. var tappedCommand = new Command(() => { //handle the tap }); var tapGestureRecognizer = new TapGestureRecognizer { Command = tappedCommand }; image.GestureRecognizers.Add(tapGestureRecognizer); Or in XAML: <Image Source="tapped.jpg"> <Image.GestureRecognizers> <TapGestureRecognizer Command="{Binding TappedCommand}" NumberOfTapsRequired="2" /> </Image.GestureRecognizers> </Image> Here the command is set by using data binding. As you can see you can also set the NumberOfTapsRequired to enable it for more taps before it takes action. The default value is 1 tap. Other gestures are Pinch and Pan. Section 15.2: Gesture Event When we put the control of Label, the Label does not provide any event. <Label x:Name="lblSignUp Text="Don't have account?"/> as shown the Label only display purpose only. When the user want to replace Button with Label, then we give the event for Label. As shown below: XAML <Label x:Name="lblSignUp" Text="Don't have an account?" 
Grid.Row="8" Grid.Column="1" Grid.ColumnSpan="2"> <Label.GestureRecognizers> <TapGestureRecognizer Tapped="lblSignUp_Tapped"/> </Label.GestureRecognizers> C# var lblSignUp_Tapped = new TapGestureRecognizer(); lblSignUp_Tapped.Tapped += (s,e) => { // // Do your work here. GoalKicker.com Xamarin.Forms Notes for Professionals 70 // }; lblSignUp.GestureRecognizers.Add(lblSignUp_Tapped); The Screen Below shown the Label Event. Screen 1 : The Label "Don't have an account?" as shown in Bottom . GoalKicker.com Xamarin.Forms Notes for Professionals 71 GoalKicker.com Xamarin.Forms Notes for Professionals 72 GoalKicker.com Xamarin.Forms Notes for Professionals 73 When the User click the Label "Don't have an account?", it will Navigate to Sign Up Screen. GoalKicker.com Xamarin.Forms Notes for Professionals 74 GoalKicker.com Xamarin.Forms Notes for Professionals 75 GoalKicker.com Xamarin.Forms Notes for Professionals 76 For more details: GoalKicker.com Xamarin.Forms Notes for Professionals 77 [https://developer.xamarin.com/guides/xamarin-forms/user-interface/gestures/tap/][1] Section 15.3: Zoom an Image with the Pinch gesture In order to make an Image (or any other visual element) zoomable we have to add a PinchGestureRecognizer to it. Here is how to do it in code: var pinchGesture = new PinchGestureRecognizer(); pinchGesture.PinchUpdated += (s, e) => { // Handle the pinch }; image.GestureRecognizers.Add(pinchGesture); But it can also be done from XAML: <Image Source="waterfront.jpg"> <Image.GestureRecognizers> <PinchGestureRecognizer PinchUpdated="OnPinchUpdated" /> </Image.GestureRecognizers> </Image> In the accompanied event handler you should provide the code to zoom your image. Of course other uses can be implement as well. void OnPinchUpdated (object sender, PinchGestureUpdatedEventArgs e) { // ... code here } Other gestures are Tap and Pan. Section 15.4: Show all of the zoomed Image content with the PanGestureRecognizer When you have a zoomed Image (or other content) you may want to drag around the Image to show all of its content in the zoomed in state. This can be achieved by implementing the PanGestureRecognizer. From code this looks like so: var panGesture = new PanGestureRecognizer(); panGesture.PanUpdated += (s, e) => { // Handle the pan }; image.GestureRecognizers.Add(panGesture); This can also be done from XAML: <Image Source="MonoMonkey.jpg"> <Image.GestureRecognizers> <PanGestureRecognizer PanUpdated="OnPanUpdated" /> </Image.GestureRecognizers> </Image> In the code-behind event you can now handle the panning accordingly. 
Use this method signature to handle it: GoalKicker.com Xamarin.Forms Notes for Professionals 78 void OnPanUpdated (object sender, PanUpdatedEventArgs e) { // Handle the pan } Section 15.5: Tap Gesture With the Tap Gesture, you can make any UI-Element clickable (Images, Buttons, StackLayouts, ...): (1) In code, using the event: var tapGestureRecognizer = new TapGestureRecognizer(); tapGestureRecognizer.Tapped += (s, e) => { // handle the tap }; image.GestureRecognizers.Add(tapGestureRecognizer); (2) In code, using ICommand (with MVVM-Pattern, for example): var tapGestureRecognizer = new TapGestureRecognizer(); tapGestureRecognizer.SetBinding (TapGestureRecognizer.CommandProperty, "TapCommand"); image.GestureRecognizers.Add(tapGestureRecognizer); (3) Or in Xaml (with event and ICommand, only one is needed): <Image Source="tapped.jpg"> <Image.GestureRecognizers> <TapGestureRecognizer Tapped="OnTapGestureRecognizerTapped" Command="{Binding TapCommand"} /> </Image.GestureRecognizers> </Image> Section 15.6: Place a pin where the user touched the screen with MR.Gestures Xamarins built in gesture recognizers provide only very basic touch handling. E.g. there is no way to get the position of a touching nger. MR.Gestures is a component which adds 14 dierent touch handling events. The position of the touching ngers is part of the EventArgs passed to all MR.Gestures events. If you want to place a pin anywhere on the screen, the easiest way is to use an MR.Gestures.AbsoluteLayout which handles the Tapping event. <mr:AbsoluteLayout x:Name="MainLayout" Tapping="OnTapping"> ... </mr:AbsoluteLayout> As you can see the Tapping="OnTapping" also feels more like .NET than Xamarins syntax with the nested GestureRecognizers. That syntax was copied from iOS and it smells a bit for .NET developers. In your code behind you could add the OnTapping handler like this: private void OnTapping(object sender, MR.Gestures.TapEventArgs e) { if (e.Touches?.Length > 0) { Point touch = e.Touches[0]; GoalKicker.com Xamarin.Forms Notes for Professionals 79 var image = new Image() { Source = "pin" }; MainLayout.Children.Add(image, touch); } } Instead of the Tapping event, you could also use the TappingCommand and bind to your ViewModel, but that would complicate things in this simple example. More samples for MR.Gestures can be found in the GestureSample app on GitHub and on the MR.Gestures website. These also show how to use all the other touch events with event handlers, commands, MVVM, ... 
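To make the PanUpdated handler from Section 15.4 concrete, here is a minimal sketch (assuming the zoomed Image from that section is stored in a field called image; clamping the translation to the image bounds is left out for brevity) that translates the image by the distance the finger has moved:

double startX, startY;

void OnPanUpdated(object sender, PanUpdatedEventArgs e)
{
    switch (e.StatusType)
    {
        case GestureStatus.Started:
            // Remember where the image was when the gesture began.
            startX = image.TranslationX;
            startY = image.TranslationY;
            break;
        case GestureStatus.Running:
            // TotalX/TotalY are measured from the start of the gesture.
            image.TranslationX = startX + e.TotalX;
            image.TranslationY = startY + e.TotalY;
            break;
    }
}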
GoalKicker.com Xamarin.Forms Notes for Professionals 80 Chapter 16: Data Binding Section 16.1: Basic Binding to ViewModel EntryPage.xaml: <?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:vm="clr-namespace:MyAssembly.ViewModel;assembly=MyAssembly" x:Class="MyAssembly.EntryPage"> <ContentPage.BindingContext> <vm:MyViewModel /> </ContentPage.BindingContext> <ContentPage.Content> <StackLayout VerticalOptions="FillAndExpand" HorizontalOptions="FillAndExpand" Orientation="Vertical" Spacing="15"> <Label Text="Name:" /> <Entry Text="{Binding Name}" /> <Label Text="Phone:" /> <Entry Text="{Binding Phone}" /> <Button Text="Save" Command="{Binding SaveCommand}" /> </StackLayout> </ContentPage.Content> </ContentPage> MyViewModel.cs: using System; using System.ComponentModel; namespace MyAssembly.ViewModel { public class MyViewModel : INotifyPropertyChanged { private string _name = String.Empty; private string _phone = String.Empty; public string Name { get { return _name; } set { if (_name != value) { _name = value; OnPropertyChanged(nameof(Name)); } } } public string Phone { get { return _phone; } set { if (_phone != value) GoalKicker.com Xamarin.Forms Notes for Professionals 81 { _phone = value; OnPropertyChanged(nameof(Phone)); } } } public ICommand SaveCommand { get; private set; } public MyViewModel() { SaveCommand = new Command(SaveCommandExecute); } private void SaveCommandExecute() { } public event PropertyChangedEventHandler PropertyChanged; protected virtual void OnPropertyChanged(string propertyName) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); } } } GoalKicker.com Xamarin.Forms Notes for Professionals 82 Chapter 17: Working with Maps Section 17.1: Adding a map in Xamarin.Forms (Xamarin Studio) You can simply use the native map APIs on each platform with Xamarin Forms. All you need is to download the Xamarin.Forms.Maps package from nuget and install it to each project (including the PCL project). Maps Initialization First of all you have to add this code to your platform-specic projects. For doing this you have to add the Xamarin.FormsMaps.Init method call, like in the examples below. iOS project File AppDelegate.cs [Register("AppDelegate")] public partial class AppDelegate : Xamarin.Forms.Platform.iOS.FormsApplicationDelegate { public override bool FinishedLaunching(UIApplication app, NSDictionary options) { Xamarin.Forms.Forms.Init(); Xamarin.FormsMaps.Init(); LoadApplication(new App()); return base.FinishedLaunching(app, options); } } Android project File MainActivity.cs [Activity(Label = "MapExample.Droid", Icon = "@drawable/icon", Theme = "@style/MyTheme", MainLauncher = true, ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation)] public class MainActivity : Xamarin.Forms.Platform.Android.FormsAppCompatActivity { protected override void OnCreate(Bundle bundle) { TabLayoutResource = Resource.Layout.Tabbar; ToolbarResource = Resource.Layout.Toolbar; base.OnCreate(bundle); Xamarin.Forms.Forms.Init(this, bundle); Xamarin.FormsMaps.Init(this, bundle); LoadApplication(new App()); } } Platform Conguration Additional conguration steps are required on some platforms before the map will display. 
GoalKicker.com Xamarin.Forms Notes for Professionals 83 iOS project In iOS project you just have to add 2 entries to your Info.plist le: NSLocationWhenInUseUsageDescription string with value We are using your location NSLocationAlwaysUsageDescription string with value Can we use your location Android project To use Google Maps you have to generate an API key and add it to your project. Follow the instruction below to get this key: 1. (Optional) Find where your keytool tool location (default is /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands) 2. (Optional) Open terminal and go to your keytool location: cd /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands 3. Run the following keytool command: keytool -list -v -keystore "/Users/[USERNAME]/.local/share/Xamarin/Mono for Android/debug.keystore" -alias androiddebugkey -storepass android -keypass android Where [USERNAME] is, obviously, your current user folder. You should get something similar to this in the output: Alias name: androiddebugkey Creation date: Jun 30, 2016 Entry type: PrivateKeyEntry GoalKicker.com Xamarin.Forms Notes for Professionals 84 Certificate chain length: 1 Certificate[1]: Owner: CN=Android Debug, O=Android, C=US Issuer: CN=Android Debug, O=Android, C=US Serial number: 4b5ac934 Valid from: Thu Jun 30 10:22:00 EEST 2016 until: Sat Jun 23 10:22:00 EEST 2046 Certificate fingerprints: MD5: 4E:49:A7:14:99:D6:AB:9F:AA:C7:07:E2:6A:1A:1D:CA SHA1: 57:A1:E5:23:CE:49:2F:17:8D:8A:EA:87:65:44:C1:DD:1C:DA:51:95 SHA256: 70:E1:F3:5B:95:69:36:4A:82:A9:62:F3:67:B6:73:A4:DD:92:95:51:44:E3:4C:3D:9E:ED:99:03:09:9F:90: 3F Signature algorithm name: SHA256withRSA Version: 3 4. All we need in this output is the SHA1 certicate ngerprint. In our case it equals to this: 57:A1:E5:23:CE:49:2F:17:8D:8A:EA:87:65:44:C1:DD:1C:DA:51:95 Copy or save somewhere this key. We will need it later on. 5. Go to Google Developers Console, in our case we have to add Google Maps Android API, so choose it: 6. Google will ask you to create a project to enable APIs, follow this tip and create the project: GoalKicker.com Xamarin.Forms Notes for Professionals 85 7. Enable Google Maps API for your project: GoalKicker.com Xamarin.Forms Notes for Professionals 86 After you have enabled api, you have to create credentials for your app. Follow this tip: 8. On the next page choose the Android platform, tap on "What credentials do I need?" button, create a name for your API key, tap on "Add package name and ngerprint", enter your package name and your SHA1 ngerprint from the step 4 and nally create an API key: GoalKicker.com Xamarin.Forms Notes for Professionals 87 To nd your package name in Xamarin Studio go to your .Droid solution -> AndroidManifest.xml: 9. 
After creation copy the new API key (don't forget to press the "Done" button after) and paste it to your AndroidManifest.xml le: GoalKicker.com Xamarin.Forms Notes for Professionals 88 File AndroidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" android:versionCode="1" android:versionName="1.0" package="documentation.mapexample"> <uses-sdk android:minSdkVersion="15" /> <application android:label="MapExample"> <meta-data android:name="com.google.android.geo.API_KEY" android:value="<KEY>" /> <meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" /> </application> </manifest> You'll also need to enable some permissions in your manifest to enable some additional features: Access Coarse Location Access Fine Location Access Location Extra Commands Access Mock Location Access Network State Access Wi State Internet GoalKicker.com Xamarin.Forms Notes for Professionals 89 Although, the last two permissions are required to download Maps data. Read about Android permissions to learn more. That's all the steps for Android conguration. Note: if you want to run your app on android simulator, you have to install Google Play Services on it. Follow this tutorial to install Play Services on Xamarin Android Player. If you can't nd google play services update after the play store installation, you can update it directly from your app, where you have dependency on maps services Adding a map Adding map view to your crossplatform project is quite simple. Here is an example of how you can do it (I'm using PCL project without XAML). PCL project File MapExample.cs public class App : Application GoalKicker.com Xamarin.Forms Notes for Professionals 90 { public App() { var map = new Map(); map.IsShowingUser = true; var rootPage = new ContentPage(); rootPage.Content = map; MainPage = rootPage; } } That's all. Now if you'll run your app on iOS or Android, it will show you the map view: GoalKicker.com Xamarin.Forms Notes for Professionals 91 Chapter 18: Custom Fonts in Styles Section 18.1: Accessing custom Fonts in Syles Xamarin.Forms provide great mechanism for styling your cross-platforms application with global styles. In mobile world your application must be pretty and stand out from the other applications. One of this characters is Custom Fonts used in application. With power support of XAML Styling in Xamarin.Forms just created base style for all labels with yours custom fonts. To include custom fonts into you iOS and Android project follow the guide in Using custom fonts on iOS and Android with Xamarin.Forms post written by Gerald. Declare Style in App.xaml le resource section. This make all styles globally visible. 
{
    public App()
    {
        var map = new Map();
        map.IsShowingUser = true;
        var rootPage = new ContentPage();
        rootPage.Content = map;
        MainPage = rootPage;
    }
}
That's all. Now if you run your app on iOS or Android, it will show you the map view:
Chapter 18: Custom Fonts in Styles
Section 18.1: Accessing custom Fonts in Styles
Xamarin.Forms provides a great mechanism for styling your cross-platform application with global styles. In the mobile world your application must be pretty and stand out from other applications. One of these characteristics is the custom fonts used in the application. With the XAML styling support in Xamarin.Forms you can simply create a base style for all labels that uses your custom fonts. To include custom fonts in your iOS and Android projects, follow the guide in the "Using custom fonts on iOS and Android with Xamarin.Forms" post written by Gerald. Declare the style in the App.xaml file's resource section. This makes all styles globally visible.
using Android.App; using Android.Content; using Gcm.Client; using Java.Lang; using System; using WindowsAzure.Messaging; using XamarinNotifications.Helpers; // These attributes are to register the right permissions for our app concerning push messages [assembly: Permission(Name = "com.versluisit.xamarinnotifications.permission.C2D_MESSAGE")] [assembly: UsesPermission(Name = "com.versluisit.xamarinnotifications.permission.C2D_MESSAGE")] [assembly: UsesPermission(Name = "com.google.android.c2dm.permission.RECEIVE")] //GET_ACCOUNTS is only needed for android versions 4.0.3 and below [assembly: UsesPermission(Name = "android.permission.GET_ACCOUNTS")] [assembly: UsesPermission(Name = "android.permission.INTERNET")] [assembly: UsesPermission(Name = "android.permission.WAKE_LOCK")] namespace XamarinNotifications.Droid.PlatformSpecifics { // These attributes belong to the BroadcastReceiver, they register for the right intents [BroadcastReceiver(Permission = Constants.PERMISSION_GCM_INTENTS)] [IntentFilter(new[] { Constants.INTENT_FROM_GCM_MESSAGE }, Categories = new[] { "com.versluisit.xamarinnotifications" })] [IntentFilter(new[] { Constants.INTENT_FROM_GCM_REGISTRATION_CALLBACK }, Categories = new[] { "com.versluisit.xamarinnotifications" })] [IntentFilter(new[] { Constants.INTENT_FROM_GCM_LIBRARY_RETRY }, Categories = new[] { "com.versluisit.xamarinnotifications" })] // This is the broadcast receiver public class NotificationsBroadcastReceiver : GcmBroadcastReceiverBase<PushHandlerService> { // TODO add your project number here GoalKicker.com Xamarin.Forms Notes for Professionals 94 public static string[] SenderIDs = { "96688------" }; } [Service] // Don't forget this one! This tells Xamarin that this class is a Android Service public class PushHandlerService : GcmServiceBase { // TODO add your own access key private string _connectionString = ConnectionString.CreateUsingSharedAccessKeyWithListenAccess( new Java.Net.URI("sb://xamarinnotifications-ns.servicebus.windows.net/"), "<your key here>"); // TODO add your own hub name private string _hubName = "xamarinnotifications"; public static string RegistrationID { get; private set; } public PushHandlerService() : base(NotificationsBroadcastReceiver.SenderIDs) { } // This is the entry point for when a notification is received protected override void OnMessage(Context context, Intent intent) { var title = "XamarinNotifications"; if (intent.Extras.ContainsKey("title")) title = intent.Extras.GetString("title"); var messageText = intent.Extras.GetString("message"); if (!string.IsNullOrEmpty(messageText)) CreateNotification(title, messageText); } // The method we use to compose our notification private void CreateNotification(string title, string desc) { // First we make sure our app will start when the notification is pressed const int pendingIntentId = 0; const int notificationId = 0; var startupIntent = new Intent(this, typeof(MainActivity)); var stackBuilder = TaskStackBuilder.Create(this); stackBuilder.AddParentStack(Class.FromType(typeof(MainActivity))); stackBuilder.AddNextIntent(startupIntent); var pendingIntent = stackBuilder.GetPendingIntent(pendingIntentId, PendingIntentFlags.OneShot); // Here we start building our actual notification, this has some more // interesting customization options! 
var builder = new Notification.Builder(this) .SetContentIntent(pendingIntent) .SetContentTitle(title) .SetContentText(desc) .SetSmallIcon(Resource.Drawable.icon); // Build the notification var notification = builder.Build(); notification.Flags = NotificationFlags.AutoCancel; GoalKicker.com Xamarin.Forms Notes for Professionals 95 // Get the notification manager var notificationManager = GetSystemService(NotificationService) as NotificationManager; // Publish the notification to the notification manager notificationManager.Notify(notificationId, notification); } // Whenever an error occurs in regard to push registering, this fires protected override void OnError(Context context, string errorId) { Console.Out.WriteLine(errorId); } // This handles the successful registration of our device to Google // We need to register with Azure here ourselves protected override void OnRegistered(Context context, string registrationId) { var hub = new NotificationHub(_hubName, _connectionString, context); Settings.DeviceToken = registrationId; // TODO set some tags here if you want and supply them to the Register method var tags = new string[] { }; hub.Register(registrationId, tags); } // This handles when our device unregisters at Google // We need to unregister with Azure protected override void OnUnRegistered(Context context, string registrationId) { var hub = new NotificationHub(_hubName, _connectionString, context); hub.UnregisterAll(registrationId); } } } A sample notication on Android looks like this. Section 19.2: Push notications for iOS with Azure To start the registration for push notications you need to execute the below code. // registers for push var settings = UIUserNotificationSettings.GetSettingsForTypes( UIUserNotificationType.Alert | UIUserNotificationType.Badge GoalKicker.com Xamarin.Forms Notes for Professionals 96 | UIUserNotificationType.Sound, new NSSet()); UIApplication.SharedApplication.RegisterUserNotificationSettings(settings); UIApplication.SharedApplication.RegisterForRemoteNotifications(); This code can either be ran directly when the app starts up in the FinishedLaunching in the AppDelegate.cs le. Or you can do it whenever a user decides that they want to enable push notications. Running this code will trigger an alert to prompt the user if they will accept that the app can send them notications. So also implement a scenario where the user denies that! GoalKicker.com Xamarin.Forms Notes for Professionals 97 These are the events that need implementation for implementing push notications on iOS. You can nd them in the AppDelegate.cs le. // We've successfully registered with the Apple notification service, or in our case Azure public override void RegisteredForRemoteNotifications(UIApplication application, NSData deviceToken) { // Modify device token for compatibility Azure var token = deviceToken.Description; token = token.Trim('<', '>').Replace(" ", ""); // You need the Settings plugin for this! Settings.DeviceToken = token; var hub = new SBNotificationHub("Endpoint=sb://xamarinnotificationsns.servicebus.windows.net/;SharedAccessKeyName=DefaultListenSharedAccessSignature;SharedAccessKey=< your own key>", "xamarinnotifications"); NSSet tags = null; // create tags if you want, not covered for now hub.RegisterNativeAsync(deviceToken, tags, (errorCallback) => { if (errorCallback != null) { var alert = new UIAlertView("ERROR!", errorCallback.ToString(), null, "OK", null); alert.Show(); } }); } // We've received a notification, yay! 
public override void ReceivedRemoteNotification(UIApplication application, NSDictionary userInfo) { NSObject inAppMessage; var success = userInfo.TryGetValue(new NSString("inAppMessage"), out inAppMessage); if (success) { var alert = new UIAlertView("Notification!", inAppMessage.ToString(), null, "OK", null); alert.Show(); } } // Something went wrong while registering! public override void FailedToRegisterForRemoteNotifications(UIApplication application, NSError error) { var alert = new UIAlertView("Computer says no", "Notification registration failed! Try again!", null, "OK", null); alert.Show(); } When a notication is received this is what it looks like. GoalKicker.com Xamarin.Forms Notes for Professionals 98 Section 19.3: iOS Example 1. You will need a development device 2. Go to your Apple Developer Account and create a provisioning prole with Push Notications enabled 3. You will need some sort of way to notify your phone (AWS, Azure..etc) We will use AWS here public override bool FinishedLaunching(UIApplication app, NSDictionary options) { global::Xamarin.Forms.Forms.Init(); //after typical Xamarin.Forms Init Stuff //variable to set-up the style of notifications you want, iOS supports 3 types var pushSettings = UIUserNotificationSettings.GetSettingsForTypes( UIUserNotificationType.Alert | UIUserNotificationType.Badge | UIUserNotificationType.Sound, null ); //both of these methods are in iOS, we have to override them and set them up //to allow push notifications app.RegisterUserNotificationSettings(pushSettings); notifications settings to register app in settings page //pass the supported push } public override async void RegisteredForRemoteNotifications(UIApplication application, NSData token) { AmazonSimpleNotificationServiceClient snsClient = new AmazonSimpleNotificationServiceClient("your AWS credentials here"); // This contains the registered push notification token stored on the phone. var deviceToken = token.Description.Replace("<", "").Replace(">", "").Replace(" ", ""); if (!string.IsNullOrEmpty(deviceToken)) { GoalKicker.com Xamarin.Forms Notes for Professionals 99 //register with SNS to create an endpoint ARN, this means AWS can message your phone var response = await snsClient.CreatePlatformEndpointAsync( new CreatePlatformEndpointRequest { Token = deviceToken, PlatformApplicationArn = "yourARNwouldgohere" /* insert your platform application ARN here */ }); var endpoint = response.EndpointArn; //AWS lets you create topics, so use subscribe your app to a topic, so you can easily send out one push notification to all of your users var subscribeResponse = await snsClient.SubscribeAsync(new SubscribeRequest { TopicArn = "YourTopicARN here", Endpoint = endpoint, Protocol = "application" }); } } GoalKicker.com Xamarin.Forms Notes for Professionals 100 Chapter 20: Eects Eects simplies platform specic customizations. When there is a need to modify a Xamarin Forms Control's properties, Eects can be used. When there is a need to override the Xamarin Forms Control's methods, Custom renderers can be used Section 20.1: Adding platform specic Eect for an Entry control 1. Create a new Xamarin Forms app using PCL File -> New Solution -> Multiplatform App -> Xamarin Forms -> Forms App; Name the project as EffectsDemo 2. 
Under the iOS project, add a new Effect class that inherits from PlatformEffect class and overrides the methods OnAttached, OnDetached and OnElementPropertyChanged Notice the two attributes ResolutionGroupName and ExportEffect, these are required for consuming this eect from the PCL/shared project. OnAttached is the method where the logic for customization goes in OnDetached is the method where the clean up and de-registering happens OnElementPropertyChanged is the method which gets triggered upon property changes of dierent elements. To identify the right property, check for the exact property change and add your logic. In this example, OnFocus will give the Blue color and OutofFocus will give Red Color using System; using EffectsDemo.iOS; using UIKit; using Xamarin.Forms; using Xamarin.Forms.Platform.iOS; [assembly: ResolutionGroupName("xhackers")] [assembly: ExportEffect(typeof(FocusEffect), "FocusEffect")] namespace EffectsDemo.iOS { public class FocusEffect : PlatformEffect { public FocusEffect() { } UIColor backgroundColor; protected override void OnAttached() { try { Control.BackgroundColor = backgroundColor = UIColor.Red; } catch (Exception ex) { Console.WriteLine("Cannot set attacked property" + ex.Message); } } protected override void OnDetached() { throw new NotImplementedException(); } GoalKicker.com Xamarin.Forms Notes for Professionals 101 protected override void OnElementPropertyChanged(System.ComponentModel.PropertyChangedEventArgs args) { base.OnElementPropertyChanged(args); try { if (args.PropertyName == "IsFocused") { if (Control.BackgroundColor == backgroundColor) { Control.BackgroundColor = UIColor.Blue; } else { Control.BackgroundColor = backgroundColor; } } } catch (Exception ex) { Console.WriteLine("Cannot set property " + ex.Message); } } }} 3. To Consume this eect in the application, Under the PCL project, create a new class named FocusEffect which inherits from RoutingEffect. This is essential to make the PCL instantiate the platform specic implementation of the eect. Sample code below: using Xamarin.Forms; namespace EffectsDemo { public class FocusEffect : RoutingEffect { public FocusEffect() : base("xhackers.FocusEffect") { } } } 4. Add the eect to Entry control in the XAML <?xml version="1.0" encoding="utf-8"?> <ContentPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:local="clrnamespace:EffectsDemo" x:Class="EffectsDemo.EffectsDemoPage"> <StackLayout Orientation="Horizontal" HorizontalOptions="Center" VerticalOptions="Center"> <Label Text="Effects Demo" HorizontalOptions="StartAndExpand" VerticalOptions="Center" ></Label> <Entry Text="Controlled by effects" HorizontalOptions="FillAndExpand" VerticalOptions="Center"> <Entry.Effects> <local:FocusEffect> </local:FocusEffect> </Entry.Effects> </Entry> GoalKicker.com Xamarin.Forms Notes for Professionals 102 </StackLayout> </ContentPage> GoalKicker.com Xamarin.Forms Notes for Professionals 103 Since the Eect was implemented only in iOS version, when the app runs in iOS Simulator upon focusing the Entry background color changes and nothing happens in Android Emulator as the Effect wasn't created under Droid project GoalKicker.com Xamarin.Forms Notes for Professionals 104 Chapter 21: Triggers & Behaviours Section 21.1: Xamarin Forms Trigger Example Triggers are an easy way to add some UX responsiveness to your application. 
One easy way to do this is to add a Trigger which changes a Label's TextColor based on whether its related Entry has text entered into it or not. Using a Trigger for this allows the Label.TextColor to change from gray (when no text is entered) to black (as soon as the users enters text): Converter (each converter is given an Instance variable which is used in the binding so that a new instance of the class is not created each time it is used): /// <summary> /// Used in a XAML trigger to return <c>true</c> or <c>false</c> based on the length of <c>value</c>. /// </summary> public class LengthTriggerConverter : Xamarin.Forms.IValueConverter { /// <summary> /// Used so that a new instance is not created every time this converter is used in the XAML code. /// </summary> public static LengthTriggerConverter Instance = new LengthTriggerConverter(); /// <summary> /// If a `ConverterParameter` is passed in, a check to see if <c>value</c> is greater than <c>parameter</c> is made. Otherwise, a check to see if <c>value</c> is over 0 is made. /// </summary> /// <param name="value">The length of the text from an Entry/Label/etc.</param> /// <param name="targetType">The Type of object/control that the text/value is coming from.</param> /// <param name="parameter">Optional, specify what length to test against (example: for 3 Letter Name, we would choose 2, since the 3 Letter Name Entry needs to be over 2 characters), if not specified, defaults to 0.</param> /// <param name="culture">The current culture set in the device.</param> /// <returns><c>object</c>, which is a <c>bool</c> (<c>true</c> if <c>value</c> is greater than 0 (or is greater than the parameter), <c>false</c> if not).</returns> public object Convert(object value, System.Type targetType, object parameter, CultureInfo culture) { return DoWork(value, parameter); } public object ConvertBack(object value, System.Type targetType, object parameter, CultureInfo culture) { return DoWork(value, parameter); } private static object DoWork(object value, object parameter) { int parameterInt = 0; if(parameter != null) { //If param was specified, convert and use it, otherwise, 0 is used string parameterString = (string)parameter; if(!string.IsNullOrEmpty(parameterString)) { int.TryParse(parameterString, out parameterInt); } } return (int)value > parameterInt; } } XAML (the XAML code uses the x:Name of the Entry to gure out in the Entry.Text property is over 3 characters GoalKicker.com Xamarin.Forms Notes for Professionals 105 long.): <StackLayout> <Label Text="3 Letter Name"> <Label.Triggers> <DataTrigger TargetType="Label" Binding="{Binding Source={x:Reference NameEntry}, Path=Text.Length, Converter={x:Static helpers:LengthTriggerConverter.Instance}, ConverterParameter=2}" Value="False"> <Setter Property="TextColor" Value="Gray"/> </DataTrigger> </Label.Triggers> </Label> <Entry x:Name="NameEntry" Text="{Binding MealAmount}" HorizontalOptions="StartAndExpand"/> </StackLayout> Section 21.2: Multi Triggers MultiTrigger is not needed frequently but there are some situations where it is very handy. MultiTrigger behaves similarly to Trigger or DataTrigger but it has multiple conditions. All the conditions must be true for a Setters to re. 
Here is a simple example:

<!-- Text field needs to be initialized in order for the trigger to work at start -->
<Entry x:Name="email" Placeholder="Email" Text="" />
<Entry x:Name="phone" Placeholder="Phone" Text="" />
<Button Text="Submit">
    <Button.Triggers>
        <MultiTrigger TargetType="Button">
            <MultiTrigger.Conditions>
                <BindingCondition Binding="{Binding Source={x:Reference email}, Path=Text.Length}" Value="0" />
                <BindingCondition Binding="{Binding Source={x:Reference phone}, Path=Text.Length}" Value="0" />
            </MultiTrigger.Conditions>
            <Setter Property="IsEnabled" Value="False" />
        </MultiTrigger>
    </Button.Triggers>
</Button>

The example has two different entries, phone and email, and at least one of them is required to be filled. The MultiTrigger disables the submit button when both fields are empty.

Chapter 22: AppSettings Reader in Xamarin.Forms

Section 22.1: Reading an app.config file in a Xamarin.Forms XAML project

While each mobile platform does offer its own settings management API, there is no built-in way to read settings from a good old .NET-style app.config XML file. This is due to a bunch of good reasons, notably the .NET Framework configuration management API being on the heavyweight side, and each platform having its own file system API. So we built a simple PCLAppConfig library, nicely NuGet-packaged for your immediate consumption. This library makes use of the lovely PCLStorage library.

This example assumes you are developing a Xamarin.Forms XAML project, where you would need to access settings from your shared viewmodel.

1. Initialize ConfigurationManager.AppSettings on each of your platform projects, just after the 'Xamarin.Forms.Forms.Init' statement, as per below:

iOS (AppDelegate.cs)

global::Xamarin.Forms.Forms.Init();
ConfigurationManager.Initialise(PCLAppConfig.FileSystemStream.PortableStream.Current);
LoadApplication(new App());

Android (MainActivity.cs)

global::Xamarin.Forms.Forms.Init(this, bundle);
ConfigurationManager.Initialise(PCLAppConfig.FileSystemStream.PortableStream.Current);
LoadApplication(new App());

UWP / Windows 8.1 / WP 8.1 (App.xaml.cs)

Xamarin.Forms.Forms.Init(e);
ConfigurationManager.Initialise(PCLAppConfig.FileSystemStream.PortableStream.Current);

2. Add an app.config file to your shared PCL project, and add your appSettings entries, as you would do with any app.config file:

<configuration>
    <appSettings>
        <add key="config.text" value="hello from app.settings!" />
    </appSettings>
</configuration>

3. Add this PCL app.config file as a linked file on all your platform projects. For Android, make sure to set the build action to 'AndroidAsset'; for UWP set the build action to 'Content'.

4. Access your setting:

ConfigurationManager.AppSettings["config.text"];

Chapter 23: Creating custom controls

Every Xamarin.Forms view has an accompanying renderer for each platform that creates an instance of a native control. When a View is rendered on the specific platform, the ViewRenderer class is instantiated. The process for doing this is as follows:

Create a Xamarin.Forms custom control.
Consume the custom control from Xamarin.Forms.
Create the custom renderer for the control on each platform.
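As a compact reference for these three steps before the full examples below, a bare-bones skeleton could look like the following. All names here are placeholders, the two parts live in separate files/projects, and the renderer is shown for Android only; an iOS renderer follows the same pattern with a UIKit control:

// Step 1 (shared/PCL project, e.g. MyControl.cs): the custom control.
public class MyControl : Xamarin.Forms.View
{
}

// Steps 2 and 3 (Android project, e.g. MyControlRenderer.cs):
// subclass ViewRenderer and register it with ExportRenderer.
[assembly: Xamarin.Forms.ExportRenderer(typeof(MyControl), typeof(MyControlRenderer))]

public class MyControlRenderer
    : Xamarin.Forms.Platform.Android.ViewRenderer<MyControl, Android.Views.View>
{
    protected override void OnElementChanged(
        Xamarin.Forms.Platform.Android.ElementChangedEventArgs<MyControl> e)
    {
        base.OnElementChanged(e);

        if (Control == null && e.NewElement != null)
        {
            // Create and attach the native control exactly once.
            SetNativeControl(new Android.Views.View(Context));
        }
    }
}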
Section 23.1: Label with bindable collection of Spans I created custom label with wrapper around FormattedText property: public class MultiComponentLabel : Label { public IList<TextComponent> Components { get; set; } public MultiComponentLabel() { var components = new ObservableCollection<TextComponent>(); components.CollectionChanged += OnComponentsChanged; Components = components; } private void OnComponentsChanged(object sender, NotifyCollectionChangedEventArgs e) { BuildText(); } private void OnComponentPropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) { BuildText(); } private void BuildText() { var formattedString = new FormattedString(); foreach (var component in Components) { formattedString.Spans.Add(new Span { Text = component.Text }); component.PropertyChanged -= OnComponentPropertyChanged; component.PropertyChanged += OnComponentPropertyChanged; } FormattedText = formattedString; } } I added collection of custom TextComponents: public class TextComponent : BindableObject { public static readonly BindableProperty TextProperty = GoalKicker.com Xamarin.Forms Notes for Professionals 108 BindableProperty.Create(nameof(Text), typeof(string), typeof(TextComponent), default(string)); public string Text { get { return (string)GetValue(TextProperty); } set { SetValue(TextProperty, value); } } } And when collection of text components changes or Text property of separate component changes I rebuild FormattedText property of base Label. And how I used it in XAML: <ContentPage x:Name="Page" xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:controls="clr-namespace:SuperForms.Controls;assembly=SuperForms.Controls" x:Class="SuperForms.Samples.MultiComponentLabelPage"> <controls:MultiComponentLabel Margin="0,20,0,0"> <controls:MultiComponentLabel.Components> <controls:TextComponent Text="Time"/> <controls:TextComponent Text=": "/> <controls:TextComponent Text="{Binding CurrentTime, Source={x:Reference Page}}"/> </controls:MultiComponentLabel.Components> </controls:MultiComponentLabel> </ContentPage> Codebehind of page: public partial class MultiComponentLabelPage : ContentPage { private string _currentTime; public string CurrentTime { get { return _currentTime; } set { _currentTime = value; OnPropertyChanged(); } } public MultiComponentLabelPage() { InitializeComponent(); BindingContext = this; } protected override void OnAppearing() { base.OnAppearing(); Device.StartTimer(TimeSpan.FromSeconds(1), () => { CurrentTime = DateTime.Now.ToString("hh : mm : ss"); GoalKicker.com Xamarin.Forms Notes for Professionals 109 return true; }); } } Section 23.2: Implementing a CheckBox Control In this example we will implement a custom Checkbox for Android and iOS. 
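(A quick aside on the MultiComponentLabel from the previous section before building the checkbox: because Components is created as an ObservableCollection in the constructor, the same label can also be assembled from code-behind. A small sketch using only the members shown above:)

var label = new MultiComponentLabel();

// Each Add raises CollectionChanged, which rebuilds FormattedText.
label.Components.Add(new TextComponent { Text = "Time" });
label.Components.Add(new TextComponent { Text = ": " });

var clock = new TextComponent { Text = DateTime.Now.ToString("hh:mm:ss") };
label.Components.Add(clock);

// Changing a component's Text later also rebuilds the label, because
// MultiComponentLabel subscribes to each component's PropertyChanged.
clock.Text = DateTime.Now.ToString("HH:mm:ss");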
Creating the Custom Control

namespace CheckBoxCustomRendererExample
{
    public class Checkbox : View
    {
        public static readonly BindableProperty IsCheckedProperty =
            BindableProperty.Create<Checkbox, bool>(p => p.IsChecked, true, propertyChanged: (s, o, n) =>
            {
                (s as Checkbox).OnChecked(new EventArgs());
            });

        public static readonly BindableProperty ColorProperty =
            BindableProperty.Create<Checkbox, Color>(p => p.Color, Color.Default);

        public bool IsChecked
        {
            get { return (bool)GetValue(IsCheckedProperty); }
            set { SetValue(IsCheckedProperty, value); }
        }

        public Color Color
        {
            get { return (Color)GetValue(ColorProperty); }
            set { SetValue(ColorProperty, value); }
        }

        public event EventHandler Checked;

        protected virtual void OnChecked(EventArgs e)
        {
            if (Checked != null)
                Checked(this, e);
        }
    }
}

We'll start off with the Android custom renderer by creating a new class (CheckboxCustomRenderer) in the Android portion of our solution. A few important details to note:

We need to mark the top of our class with the ExportRenderer attribute so that the renderer is registered with Xamarin.Forms. This way, Xamarin.Forms will use this renderer when it's trying to create our Checkbox object on Android.
We're doing most of our work in the OnElementChanged method, where we instantiate and set up our native control.

Consuming the Custom Control

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:CheckBoxCustomRendererExample"
             x:Class="CheckBoxCustomRendererExample.CheckBoxCustomRendererExamplePage">
    <StackLayout Padding="20">
        <local:Checkbox Color="Aqua" />
    </StackLayout>
</ContentPage>

Creating the Custom Renderer on each Platform

The process for creating the custom renderer class is as follows:

1. Create a subclass of the ViewRenderer<T1,T2> class that renders the custom control. The first type argument should be the custom control the renderer is for, in this case Checkbox. The second type argument should be the native control that will implement the custom control.
2. Override the OnElementChanged method that renders the custom control and write logic to customize it. This method is called when the corresponding Xamarin.Forms control is created.
3. Add an ExportRenderer attribute to the custom renderer class to specify that it will be used to render the Xamarin.Forms custom control. This attribute is used to register the custom renderer with Xamarin.Forms.
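Before the platform implementations, here is a short sketch of consuming the same Checkbox from shared C# code rather than XAML; it relies only on the members defined on the control above:

var checkbox = new Checkbox { Color = Color.Aqua };

// Checked is raised from OnChecked whenever IsChecked changes.
checkbox.Checked += (sender, e) =>
{
    var cb = (Checkbox)sender;
    System.Diagnostics.Debug.WriteLine("IsChecked is now " + cb.IsChecked);
};

// IsChecked defaults to true, so setting it to false
// changes the property and raises the event.
checkbox.IsChecked = false;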
Creating the Custom Renderer for Android [assembly: ExportRenderer(typeof(Checkbox), typeof(CheckBoxRenderer))] namespace CheckBoxCustomRendererExample.Droid { public class CheckBoxRenderer : ViewRenderer<Checkbox, CheckBox> { private CheckBox checkBox; protected override void OnElementChanged(ElementChangedEventArgs<Checkbox> e) { base.OnElementChanged(e); var model = e.NewElement; checkBox = new CheckBox(Context); checkBox.Tag = this; CheckboxPropertyChanged(model, null); checkBox.SetOnClickListener(new ClickListener(model)); SetNativeControl(checkBox); } private void CheckboxPropertyChanged(Checkbox model, String propertyName) { if (propertyName == null || Checkbox.IsCheckedProperty.PropertyName == propertyName) { checkBox.Checked = model.IsChecked; } if (propertyName == null || Checkbox.ColorProperty.PropertyName == propertyName) { int[][] states = { new int[] { Android.Resource.Attribute.StateEnabled}, // enabled new int[] {Android.Resource.Attribute.StateEnabled}, // disabled new int[] {Android.Resource.Attribute.StateChecked}, // unchecked GoalKicker.com Xamarin.Forms Notes for Professionals 111 new int[] { Android.Resource.Attribute.StatePressed} // pressed }; var checkBoxColor = (int)model.Color.ToAndroid(); int[] colors = { checkBoxColor, checkBoxColor, checkBoxColor, checkBoxColor }; var myList = new Android.Content.Res.ColorStateList(states, colors); checkBox.ButtonTintList = myList; } } protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e) { if (checkBox != null) { base.OnElementPropertyChanged(sender, e); CheckboxPropertyChanged((Checkbox)sender, e.PropertyName); } } public class ClickListener : Java.Lang.Object, IOnClickListener { private Checkbox _myCheckbox; public ClickListener(Checkbox myCheckbox) { this._myCheckbox = myCheckbox; } public void OnClick(global::Android.Views.View v) { _myCheckbox.IsChecked = !_myCheckbox.IsChecked; } } } } Creating the Custom Renderer for iOS Since in iOS the is no built in checkbox, we will create a CheckBoxView rst a then create a renderer for our Xamarin.Forms checkbox. The CheckBoxView is based in two images the checked_checkbox.png and the unchecked_checkbox.png so the property Color will be ignored. 
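One side note on the Android renderer above before moving on to iOS: in the ColorStateList state arrays, the entry commented as "disabled" repeats StateEnabled and the one commented as "unchecked" uses StateChecked. Because all four entries map to the same color this has no visible effect here, but if you wanted different colors per state, Android expects the absent states to be expressed as negated attribute values, roughly like this (a sketch, not part of the original sample):

int[][] states =
{
    new int[] {  Android.Resource.Attribute.StateEnabled },   // enabled
    new int[] { -Android.Resource.Attribute.StateEnabled },   // disabled
    new int[] { -Android.Resource.Attribute.StateChecked },   // unchecked
    new int[] {  Android.Resource.Attribute.StatePressed }    // pressed
};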
The CheckBox view: namespace CheckBoxCustomRendererExample.iOS { [Register("CheckBoxView")] public class CheckBoxView : UIButton { public CheckBoxView() { Initialize(); } GoalKicker.com Xamarin.Forms Notes for Professionals 112 public CheckBoxView(CGRect bounds) : base(bounds) { Initialize(); } public string CheckedTitle { set { SetTitle(value, UIControlState.Selected); } } public string UncheckedTitle { set { SetTitle(value, UIControlState.Normal); } } public bool Checked { set { Selected = value; } get { return Selected; } } void Initialize() { ApplyStyle(); TouchUpInside += (sender, args) => Selected = !Selected; // set default color, because type is not UIButtonType.System SetTitleColor(UIColor.DarkTextColor, UIControlState.Normal); SetTitleColor(UIColor.DarkTextColor, UIControlState.Selected); } void ApplyStyle() { SetImage(UIImage.FromBundle("Images/checked_checkbox.png"), UIControlState.Selected); SetImage(UIImage.FromBundle("Images/unchecked_checkbox.png"), UIControlState.Normal); } } } The CheckBox custom renderer: [assembly: ExportRenderer(typeof(Checkbox), typeof(CheckBoxRenderer))] namespace CheckBoxCustomRendererExample.iOS { public class CheckBoxRenderer : ViewRenderer<Checkbox, CheckBoxView> { /// <summary> /// Handles the Element Changed event /// </summary> /// <param name="e">The e.</param> protected override void OnElementChanged(ElementChangedEventArgs<Checkbox> e) { base.OnElementChanged(e); GoalKicker.com Xamarin.Forms Notes for Professionals 113 if (Element == null) return; BackgroundColor = Element.BackgroundColor.ToUIColor(); if (e.NewElement != null) { if (Control == null) { var checkBox = new CheckBoxView(Bounds); checkBox.TouchUpInside += (s, args) => Element.IsChecked = Control.Checked; SetNativeControl(checkBox); } Control.Checked = e.NewElement.IsChecked; } Control.Frame = Frame; Control.Bounds = Bounds; } /// <summary> /// Handles the <see cref="E:ElementPropertyChanged" /> event. /// </summary> /// <param name="sender">The sender.</param> /// <param name="e">The <see cref="PropertyChangedEventArgs"/> instance containing the event data.</param> protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e) { base.OnElementPropertyChanged(sender, e); if (e.PropertyName.Equals("Checked")) { Control.Checked = Element.IsChecked; } } } } Result: GoalKicker.com Xamarin.Forms Notes for Professionals 114 GoalKicker.com Xamarin.Forms Notes for Professionals 115 Section 23.3: Create an Xamarin Forms custom input control (no native required) Below is an example of a pure Xamarin Forms custom control. No custom rendering is being done for this but could easily be implemented, in fact, in my own code, I use this very same control along with a custom renderer for both the Label and Entry. The custom control is a ContentView with a Label, Entry, and a BoxView within it, held in place using 2 StackLayouts. We also dene multiple bindable properties as well as a TextChanged event. The custom bindable properties work by being dened as they are below and having the elements within the control (in this case a Label and an Entry) being bound to the custom bindable properties. A few on the bindable properties need to also implement a BindingPropertyChangedDelegate in order to make the bounded elements change their values. 
public class InputFieldContentView : ContentView { #region Properties /// <summary> /// Attached to the <c>InputFieldContentView</c>'s <c>ExtendedEntryOnTextChanged()</c> event, but returns the <c>sender</c> as <c>InputFieldContentView</c>. /// </summary> public event System.EventHandler<TextChangedEventArgs> OnContentViewTextChangedEvent; //In OnContentViewTextChangedEvent() we return our custom InputFieldContentView control as the sender but we could have returned the Entry itself as the sender if we wanted to do that instead. public static readonly BindableProperty LabelTextProperty = BindableProperty.Create("LabelText", typeof(string), typeof(InputFieldContentView), string.Empty); public string LabelText { get { return (string)GetValue(LabelTextProperty); } set { SetValue(LabelTextProperty, value); } } public static readonly BindableProperty LabelColorProperty = BindableProperty.Create("LabelColor", typeof(Color), typeof(InputFieldContentView), Color.Default); public Color LabelColor { get { return (Color)GetValue(LabelColorProperty); } set { SetValue(LabelColorProperty, value); } } public static readonly BindableProperty EntryTextProperty = BindableProperty.Create("EntryText", typeof(string), typeof(InputFieldContentView), string.Empty, BindingMode.TwoWay, null, OnEntryTextChanged); public string EntryText { get { return (string)GetValue(EntryTextProperty); } set { SetValue(EntryTextProperty, value); } } public static readonly BindableProperty PlaceholderTextProperty = BindableProperty.Create("PlaceholderText", typeof(string), typeof(InputFieldContentView), string.Empty); public string PlaceholderText { get { return (string)GetValue(PlaceholderTextProperty); } set { SetValue(PlaceholderTextProperty, value); } GoalKicker.com Xamarin.Forms Notes for Professionals 116 } public static readonly BindableProperty UnderlineColorProperty = BindableProperty.Create("UnderlineColor", typeof(Color), typeof(InputFieldContentView), Color.Black, BindingMode.TwoWay, null, UnderlineColorChanged); public Color UnderlineColor { get { return (Color)GetValue(UnderlineColorProperty); } set { SetValue(UnderlineColorProperty, value); } } private BoxView _underline; #endregion public InputFieldContentView() { BackgroundColor = Color.Transparent; HorizontalOptions = LayoutOptions.FillAndExpand; Label label = new Label { BindingContext = this, HorizontalOptions = LayoutOptions.StartAndExpand, VerticalOptions = LayoutOptions.Center, TextColor = Color.Black }; label.SetBinding(Label.TextProperty, (InputFieldContentView view) => view.LabelText, BindingMode.TwoWay); label.SetBinding(Label.TextColorProperty, (InputFieldContentView view) => view.LabelColor, BindingMode.TwoWay); Entry entry = new Entry { BindingContext = this, HorizontalOptions = LayoutOptions.End, TextColor = Color.Black, HorizontalTextAlignment = TextAlignment.End }; entry.SetBinding(Entry.PlaceholderProperty, (InputFieldContentView view) => view.PlaceholderText, BindingMode.TwoWay); entry.SetBinding(Entry.TextProperty, (InputFieldContentView view) => view.EntryText, BindingMode.TwoWay); entry.TextChanged += OnTextChangedEvent; _underline = new BoxView { BackgroundColor = Color.Black, HeightRequest = 1, HorizontalOptions = LayoutOptions.FillAndExpand }; Content = new StackLayout { Spacing = 0, HorizontalOptions = LayoutOptions.FillAndExpand, Children = { new StackLayout { Padding = new Thickness(5, 0), Spacing = 0, HorizontalOptions = LayoutOptions.FillAndExpand, Orientation = StackOrientation.Horizontal, Children = { label, entry } GoalKicker.com 
Xamarin.Forms Notes for Professionals 117 }, _underline } }; SizeChanged += (sender, args) => entry.WidthRequest = Width * 0.5 - 10; } private static void OnEntryTextChanged(BindableObject bindable, object oldValue, object newValue) { InputFieldContentView contentView = (InputFieldContentView)bindable; contentView.EntryText = (string)newValue; } private void OnTextChangedEvent(object sender, TextChangedEventArgs args) { if(OnContentViewTextChangedEvent != null) { OnContentViewTextChangedEvent(this, new TextChangedEventArgs(args.OldTextValue, args.NewTextValue)); } //Here is where we pass in 'this' (which is the InputFieldContentView) instead of 'sender' (which is the Entry control) } private static void UnderlineColorChanged(BindableObject bindable, object oldValue, object newValue) { InputFieldContentView contentView = (InputFieldContentView)bindable; contentView._underline.BackgroundColor = (Color)newValue; } } And here is a picture of the nal product on iOS (the image shows what it looks like when using a custom renderer for the Label and Entry which is being used to remove the border on iOS and to specify a custom font for both elements): One issue I ran into was getting the BoxView.BackgroundColor to change when UnderlineColor changed. Even after binding the BoxView's BackgroundColor property, it would not change until I added the UnderlineColorChanged delegate. Section 23.4: Creating a custom Entry control with a MaxLength property The Xamarin Forms Entry control does not have a MaxLength property. To achieve this you can extend Entry as below, by adding a Bindable MaxLength property. Then you just need to subscribe to the TextChanged event on Entry and validate the length of the Text when this is called: class CustomEntry : Entry { public CustomEntry() { base.TextChanged += Validate; } public static readonly BindableProperty MaxLengthProperty = BindableProperty.Create(nameof(MaxLength), typeof(int), typeof(CustomEntry), 0); public int MaxLength { get { return (int)GetValue(MaxLengthProperty); } set { SetValue(MaxLengthProperty, value); } } GoalKicker.com Xamarin.Forms Notes for Professionals 118 public void Validate(object sender, TextChangedEventArgs args) { var e = sender as Entry; var val = e?.Text; if (string.IsNullOrEmpty(val)) return; if (MaxLength > 0 && val.Length > MaxLength) val = val.Remove(val.Length - 1); e.Text = val; } } Usage in XAML: <ContentView xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:customControls="clr-namespace:CustomControls;assembly=CustomControls" x:Class="Views.TestView"> <ContentView.Content> <customControls:CustomEntry MaxLength="10" /> </ContentView.Content> Section 23.5: Creating custom Button /// <summary> /// Button with some additional options /// </summary> public class TurboButton : Button { public static readonly BindableProperty StringDataProperty = BindableProperty.Create( propertyName: "StringData", returnType: typeof(string), declaringType: typeof(ButtonWithStorage), defaultValue: default(string)); public static readonly BindableProperty IntDataProperty = BindableProperty.Create( propertyName: "IntData", returnType: typeof(int), declaringType: typeof(ButtonWithStorage), defaultValue: default(int)); /// <summary> /// You can put here some string data /// </summary> public string StringData { get { return (string)GetValue(StringDataProperty); } set { SetValue(StringDataProperty, value); } } /// <summary> /// You can put here some int data /// </summary> public int IntData { get { 
return (int)GetValue(IntDataProperty); } GoalKicker.com Xamarin.Forms Notes for Professionals 119 set { SetValue(IntDataProperty, value); } } public TurboButton() { PropertyChanged += CheckIfPropertyLoaded; } /// <summary> /// Called when one of properties is changed /// </summary> private void CheckIfPropertyLoaded(object sender, PropertyChangedEventArgs e) { //example of using PropertyChanged if(e.PropertyName == "IntData") { //IntData is now changed, you can operate on updated value } } } Usage in XAML: <?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" x:Class="SomeApp.Pages.SomeFolder.Example" xmlns:customControls="clr-namespace:SomeApp.CustomControls;assembly=SomeApp"> <StackLayout> <customControls:TurboButton x:Name="exampleControl" IntData="2" StringData="Test" /> </StackLayout> </ContentPage> Now, you can use your properties in c#: exampleControl.IntData Note that you need to specify by yourself where your TurboButton class is placed in your project. I've done it in this line: xmlns:customControls="clr-namespace:SomeApp.CustomControls;assembly=SomeApp" You can freely change "customControls" to some other name. It's up to you how you will call it. GoalKicker.com Xamarin.Forms Notes for Professionals 120 Chapter 24: Working with local databases Section 24.1: Using SQLite.NET in a Shared Project SQLite.NET is an open source library which makes it possible to add local-databases support using SQLite version 3 in a Xamarin.Forms project. The steps below demonstrate how to include this component in a Xamarin.Forms Shared Project: 1. Download the latest version of the SQLite.cs class and add it to the Shared Project. 2. Every table that will be included in the database needs to be modeled as a class in the Shared Project. A table is dened by adding at least two attributes in the class: Table (for the class) and PrimaryKey (for a property). For this example, a new class named Song is added to the Shared Project, dened as follows: using System; using SQLite; namespace SongsApp { [Table("Song")] public class Song { [PrimaryKey] public string ID { get; set; } public string SongName { get; set; } public string SingerName { get; set; } } } 3. Next, add a new class called Database, which inherits from the SQLiteConnection class (included in SQLite.cs). In this new class, the code for database access, tables creation and CRUD operations for each table is dened. Sample code is shown below: using System; using System.Linq; using System.Collections.Generic; using SQLite; namespace SongsApp { public class BaseDatos : SQLiteConnection { public BaseDatos(string path) : base(path) { Initialize(); } void Initialize() { CreateTable<Song>(); } public List<Song> GetSongs() { return Table<Song>().ToList(); } GoalKicker.com Xamarin.Forms Notes for Professionals 121 public Song GetSong(string id) { return Table<Song>().Where(t => t.ID == id).First(); } public bool AddSong(Song song) { Insert(song); } public bool UpdateSong(Song song) { Update(song); } public void DeleteSong(Song song) { Delete(song); } } } 4. As you could see in the previous step, the constructor of our Database class includes a path parameter, which represents the location of the le that stores the SQLite database le. A static Database object can be declared in App.cs. 
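(One compile-level detail in the Database class above before wiring it into App.cs: AddSong and UpdateSong are declared as returning bool but never return a value, so as written they will not build. A minimal fix is to return a result based on the row count that SQLite-net's Insert and Update give back, for example:)

public bool AddSong(Song song)
{
    // Insert returns the number of rows added.
    return Insert(song) > 0;
}

public bool UpdateSong(Song song)
{
    // Update returns the number of rows changed.
    return Update(song) > 0;
}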
The path is platform-specic: public class App : Application { public static Database DB; public App () { string dbFile = "SongsDB.db3"; #if __ANDROID__ string docPath = Environment.GetFolderPath(Environment.SpecialFolder.Personal); var dbPath = System.IO.Path.Combine(docPath, dbFile); #else #if __IOS__ string docPath = Environment.GetFolderPath(Environment.SpecialFolder.Personal); string libPath = System.IO.Path.Combine(docPath, "..", "Library"); var dbPath = System.IO.Path.Combine(libPath, dbFile); #else var dbPath = System.IO.Path.Combine(ApplicationData.Current.LocalFolder.Path, dbFile); #endif #endif DB = new Database(dbPath); // The root page of your application MainPage = new SongsPage(); } } 5. Now simply call the DB object through the App class anytime you need to perform a CRUD operation to the Songs table. For example, to insert a new Song after the user has clicked a button, you can use the following code: void AddNewSongButton_Click(object sender, EventArgs a) { Song s = new Song(); GoalKicker.com Xamarin.Forms Notes for Professionals 122 s.ID = Guid.NewGuid().ToString(); s.SongName = songNameEntry.Text; s.SingerName = singerNameEntry.Text; App.DB.AddSong(song); } Section 24.2: Working with local databases using xamarin.forms in visual studio 2015 SQlite example Step by step Explanation 1. The steps below demonstrate how to include this component in a Xamarin.Forms Shared Project: to add packages in (pcl,Andriod,Windows,Ios) Add References Click on Manage Nuget packages ->click on Browse to install SQLite.Net.Core-PCL , SQLite Net Extensions after installation is completed check it once in references then 2. To add Class Employee.cs below code using SQLite.Net.Attributes; namespace DatabaseEmployeeCreation.SqlLite { public class Employee { [PrimaryKey,AutoIncrement] public int Eid { get; set; } public string Ename { get; set; } public string Address { get; set; } public string phonenumber { get; set; } public string email { get; set; } } } 3. To add one interface ISQLite using SQLite.Net; //using SQLite.Net; namespace DatabaseEmployeeCreation.SqlLite.ViewModel { public interface ISQLite { SQLiteConnection GetConnection(); } } 4. Create a one class for database logics and methods below code is follow . using SQLite.Net; using System.Collections.Generic; using System.Linq; using Xamarin.Forms; namespace DatabaseEmployeeCreation.SqlLite.ViewModel { public class DatabaseLogic { static object locker = new object(); SQLiteConnection database; public DatabaseLogic() { database = DependencyService.Get<ISQLite>().GetConnection(); // create the tables database.CreateTable<Employee>(); } GoalKicker.com Xamarin.Forms Notes for Professionals 123 public IEnumerable<Employee> GetItems() { lock (locker) { return (from i in database.Table<Employee>() select i).ToList(); } } public IEnumerable<Employee> GetItemsNotDone() { lock (locker) { return database.Query<Employee>("SELECT * FROM [Employee]"); } } public Employee GetItem(int id) { lock (locker) { return database.Table<Employee>().FirstOrDefault(x => x.Eid == id); } } public int SaveItem(Employee item) { lock (locker) { if (item.Eid != 0) { database.Update(item); return item.Eid; } else { return database.Insert(item); } } } public int DeleteItem(int Eid) { lock (locker) { return database.Delete<Employee>(Eid); } } } } 5. 
to Create a xaml.forms EmployeeRegistration.xaml <?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" x:Class="DatabaseEmployeeCreation.SqlLite.EmployeeRegistration" Title="{Binding Name}" > <StackLayout VerticalOptions="StartAndExpand" Padding="20"> <Label Text="Ename" /> <Entry x:Name="nameEntry" Text="{Binding Ename}"/> GoalKicker.com Xamarin.Forms Notes for Professionals 124 <Label Text="Address" /> <Editor x:Name="AddressEntry" Text="{Binding Address}"/> <Label Text="phonenumber" /> <Entry x:Name="phonenumberEntry" Text="{Binding phonenumber}"/> <Label Text="email" /> <Entry x:Name="emailEntry" Text="{Binding email}"/> <Button Text="Add" Clicked="addClicked"/> <!-- <Button Text="Delete" Clicked="deleteClicked"/>--> <Button Text="Details" Clicked="DetailsClicked"/> <!-- <Button Text="Edit" Clicked="speakClicked"/>--> </StackLayout> </ContentPage> EmployeeRegistration.cs using DatabaseEmployeeCreation.SqlLite.ViewModel; using DatabaseEmployeeCreation.SqlLite.Views; using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using Xamarin.Forms; namespace DatabaseEmployeeCreation.SqlLite { public partial class EmployeeRegistration : ContentPage { private int empid; private Employee obj; public EmployeeRegistration() { InitializeComponent(); } public EmployeeRegistration(Employee obj) { this.obj = obj; var eid = obj.Eid; Navigation.PushModalAsync(new EmployeeRegistration()); var Address = obj.Address; var email = obj.email; var Ename = obj.Ename; var phonenumber = obj.phonenumber; AddressEntry. = Address; emailEntry.Text = email; nameEntry.Text = Ename; //AddressEntry.Text = obj.Address; //emailEntry.Text = obj.email; //nameEntry.Text = obj.Ename; //phonenumberEntry.Text = obj.phonenumber; GoalKicker.com Xamarin.Forms Notes for Professionals 125 Employee empupdate = new Employee(); //updateing Values empupdate.Address = AddressEntry.Text; empupdate.Ename = nameEntry.Text; empupdate.email = emailEntry.Text; empupdate.Eid = obj.Eid; App.Database.SaveItem(empupdate); Navigation.PushModalAsync(new EmployeeRegistration()); } public EmployeeRegistration(int empid) { this.empid = empid; Employee lst = App.Database.GetItem(empid); //var Address = lst.Address; //var email = lst.email; //var Ename = lst.Ename; //var phonenumber = lst.phonenumber; //AddressEntry.Text = Address; //emailEntry.Text = email; //nameEntry.Text = Ename; //phonenumberEntry.Text = phonenumber; // to retriva values based on id to AddressEntry.Text = lst.Address; emailEntry.Text = lst.email; nameEntry.Text = lst.Ename; phonenumberEntry.Text = lst.phonenumber; Employee empupdate = new Employee(); //updateing Values empupdate.Address = AddressEntry.Text; empupdate.email = emailEntry.Text; App.Database.SaveItem(empupdate); Navigation.PushModalAsync(new EmployeeRegistration()); } void addClicked(object sender, EventArgs e) { //var createEmp = (Employee)BindingContext; Employee emp = new Employee(); emp.Address = AddressEntry.Text; emp.email = emailEntry.Text; emp.Ename = nameEntry.Text; emp.phonenumber = phonenumberEntry.Text; App.Database.SaveItem(emp); this.Navigation.PushAsync(new EmployeeDetails()); } //void deleteClicked(object sender, EventArgs e) //{ // var emp = (Employee)BindingContext; // App.Database.DeleteItem(emp.Eid); // this.Navigation.PopAsync(); //} void DetailsClicked(object sender, EventArgs e) { var empcancel = (Employee)BindingContext; 
this.Navigation.PushAsync(new EmployeeDetails()); } // void speakClicked(object sender, EventArgs e) // { // var empspek = (Employee)BindingContext; GoalKicker.com Xamarin.Forms Notes for Professionals 126 // empspek.Ename); // } } //DependencyService.Get<ITextSpeak>().Speak(empspek.Address + " " + } 6. to display EmployeeDetails below code behind using DatabaseEmployeeCreation; using DatabaseEmployeeCreation.SqlLite; using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using Xamarin.Forms; namespace DatabaseEmployeeCreation.SqlLite.Views { public partial class EmployeeDetails : ContentPage { ListView lv = new ListView(); IEnumerable<Employee> lst; public EmployeeDetails() { InitializeComponent(); displayemployee(); } private void displayemployee() { Button btn = new Button() { Text = "Details", BackgroundColor = Color.Blue, }; btn.Clicked += Btn_Clicked; //IEnumerable<Employee> lst = App.Database.GetItems(); //IEnumerable<Employee> lst1 = App.Database.GetItemsNotDone(); //IEnumerable<Employee> lst2 = App.Database.GetItemsNotDone(); Content = new StackLayout() { Children = { btn }, }; } private void Btn_Clicked(object sender, EventArgs e) { lst = App.Database.GetItems(); lv.ItemsSource = lst; lv.HasUnevenRows = true; lv.ItemTemplate = new DataTemplate(typeof(OptionsViewCell)); Content = new StackLayout() { Children = { lv }, }; GoalKicker.com Xamarin.Forms Notes for Professionals 127 } } public class OptionsViewCell : ViewCell { int empid; Button btnEdit; public OptionsViewCell() { } protected override void OnBindingContextChanged() { base.OnBindingContextChanged(); if (this.BindingContext == null) return; dynamic obj = BindingContext; empid = Convert.ToInt32(obj.Eid); var lblname = new Label { BackgroundColor = Color.Lime, Text = obj.Ename, }; var lblAddress = new Label { BackgroundColor = Color.Yellow, Text = obj.Address, }; var lblphonenumber = new Label { BackgroundColor = Color.Pink, Text = obj.phonenumber, }; var lblemail = new Label { BackgroundColor = Color.Purple, Text = obj.email, }; var lbleid = new Label { BackgroundColor = Color.Silver, Text = (empid).ToString(), }; //var lblname = new Label //{ // BackgroundColor = Color.Lime, // // HorizontalOptions = LayoutOptions.Start //}; //lblname.SetBinding(Label.TextProperty, "Ename"); //var lblAddress = new Label //{ // BackgroundColor = Color.Yellow, // //HorizontalOptions = LayoutOptions.Center, GoalKicker.com Xamarin.Forms Notes for Professionals 128 //}; //lblAddress.SetBinding(Label.TextProperty, "Address"); //var lblphonenumber = new Label //{ // BackgroundColor = Color.Pink, // //HorizontalOptions = LayoutOptions.CenterAndExpand, //}; //lblphonenumber.SetBinding(Label.TextProperty, "phonenumber"); //var lblemail = new Label //{ // BackgroundColor = Color.Purple, // // HorizontalOptions = LayoutOptions.CenterAndExpand //}; //lblemail.SetBinding(Label.TextProperty, "email"); //var lbleid = new Label //{ // BackgroundColor = Color.Silver, // // HorizontalOptions = LayoutOptions.CenterAndExpand //}; //lbleid.SetBinding(Label.TextProperty, "Eid"); Button btnDelete = new Button { BackgroundColor = Color.Gray, Text = "Delete", //WidthRequest = 15, //HeightRequest = 20, TextColor = Color.Red, HorizontalOptions = LayoutOptions.EndAndExpand, }; btnDelete.Clicked += BtnDelete_Clicked; //btnDelete.PropertyChanged += BtnDelete_PropertyChanged; btnEdit = new Button { BackgroundColor = Color.Gray, Text = "Edit", TextColor = Color.Green, }; // lbleid.SetBinding(Label.TextProperty, 
"Eid"); btnEdit.Clicked += BtnEdit_Clicked1; ; //btnEdit.Clicked += async (s, e) =>{ // await App.Current.MainPage.Navigation.PushModalAsync(new EmployeeRegistration()); //}; View = new StackLayout() { Orientation = StackOrientation.Horizontal, BackgroundColor = Color.White, Children = { lbleid, lblname, lblAddress, lblemail, lblphonenumber, btnDelete, btnEdit }, }; //View = new StackLayout() //{ HorizontalOptions = LayoutOptions.Center, WidthRequest = 10, BackgroundColor = Color.Yellow, Children = { lblAddress } }; //View = new StackLayout() //{ HorizontalOptions = LayoutOptions.End, WidthRequest = 30, BackgroundColor = GoalKicker.com Xamarin.Forms Notes for Professionals 129 Color.Yellow, Children = { lblemail } }; //View = new StackLayout() //{ HorizontalOptions = LayoutOptions.End, BackgroundColor = Color.Green, Children = { lblphonenumber } }; //string Empid =c.eid ; } private async void BtnEdit_Clicked1(object sender, EventArgs e) { Employee obj= App.Database.GetItem(empid); if (empid > 0) { await App.Current.MainPage.Navigation.PushModalAsync(new EmployeeRegistration(obj)); } else { await App.Current.MainPage.Navigation.PushModalAsync(new EmployeeRegistration(empid)); } } private void BtnDelete_Clicked(object sender, EventArgs e) { // var eid = Convert.ToInt32(empid); // var item = (Xamarin.Forms.Button)sender; int eid = empid; App.Database.DeleteItem(eid); } //private void BtnDelete_PropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) //{ // var ename= e.PropertyName; //} } //private void BtnDelete_Clicked(object sender, EventArgs e) //{ // var eid = 8; // var item = (Xamarin.Forms.Button)sender; // //} App.Database.DeleteItem(eid); } 7. To implement method in Android and ios GetConnection() method using System; using Xamarin.Forms; using System.IO; using DatabaseEmployeeCreation.Droid; using DatabaseEmployeeCreation.SqlLite.ViewModel; using SQLite; using SQLite.Net; GoalKicker.com Xamarin.Forms Notes for Professionals 130 [assembly: Dependency(typeof(SQLiteEmployee_Andriod))] namespace DatabaseEmployeeCreation.Droid { public class SQLiteEmployee_Andriod : ISQLite { public SQLiteEmployee_Andriod() { } #region ISQLite implementation public SQLiteConnection GetConnection() { //var sqliteFilename = "EmployeeSQLite.db3"; //string documentsPath = System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal); // Documents folder //var path = Path.Combine(documentsPath, sqliteFilename); //// This is where we copy in the prepopulated database //Console.WriteLine(path); //if (!File.Exists(path)) //{ // var s = Forms.Context.Resources.OpenRawResource(Resource.Raw.EmployeeSQLite); // RESOURCE NAME ### // // FileAccess.Write); // // //} // create a write stream FileStream writeStream = new FileStream(path, FileMode.OpenOrCreate, // write to the stream ReadWriteStream(s, writeStream); //var conn = new SQLiteConnection(path); //// Return the database connection //return conn; var filename = "DatabaseEmployeeCreationSQLite.db3"; var documentspath = Environment.GetFolderPath(Environment.SpecialFolder.Personal); var path = Path.Combine(documentspath, filename); var platform = new SQLite.Net.Platform.XamarinAndroid.SQLitePlatformAndroid(); var connection = new SQLite.Net.SQLiteConnection(platform, path); return connection; } //public SQLiteConnection GetConnection() //{ // var filename = "EmployeeSQLite.db3"; // var documentspath = Environment.GetFolderPath(Environment.SpecialFolder.Personal); // var path = Path.Combine(documentspath, filename); // var 
platform = new SQLite.Net.Platform.XamarinAndroid.SQLitePlatformAndroid(); // var connection = new SQLite.Net.SQLiteConnection(platform, path); // return connection; //} #endregion /// <summary> /// helper method to get the database out of /raw/ and into the user filesystem /// </summary> void ReadWriteStream(Stream readStream, Stream writeStream) { int Length = 256; Byte[] buffer = new Byte[Length]; GoalKicker.com Xamarin.Forms Notes for Professionals 131 int bytesRead = readStream.Read(buffer, 0, Length); // write the required bytes while (bytesRead > 0) { writeStream.Write(buffer, 0, bytesRead); bytesRead = readStream.Read(buffer, 0, Length); } readStream.Close(); writeStream.Close(); } } } I hope this above example is very easy way i explained GoalKicker.com Xamarin.Forms Notes for Professionals 132 Chapter 25: CarouselView - Pre-release version Section 25.1: Import CarouselView The easiest way to import CarouselView is to use the NuGet-Packages Manager in Xamarin / Visual studio: To use pre-release packages, make sure you enable the 'Show pre-release packages' checkbox at the left corner. Each sub-project (.iOS/.droid./.WinPhone) must import this package. Section 25.2: Import CarouselView into a XAML Page The basics In the heading of ContentPage, insert following line: xmlns:cv="clr-namespace:Xamarin.Forms;assembly=Xamarin.Forms.CarouselView" Between the <ContentPage.Content> tags place the CarouselView: <cv:CarouselView x:Name="DemoCarouselView"> </cv:CarouselView> GoalKicker.com Xamarin.Forms Notes for Professionals 133 x:Name will give your CarouselView a name, which can be used in the C# code behind le. This is the basics you need to do for integrating CarouselView into a view. The given examples will not show you anything because the CarouselView is empty. Creating bindable source As example of an ItemSource, I will be using a ObservableCollection of strings. public ObservableCollection<TechGiant> TechGiants { get; set; } TechGiant is a class that will host names of Technology Giants public class TechGiant { public string Name { get; set; } public TechGiant(string Name) { this.Name = Name; } } After the InitializeComponent of your page, create and ll the ObservableCollection TechGiants = new ObservableCollection<TechGiant>(); TechGiants.Add(new TechGiant("Xamarin")); TechGiants.Add(new TechGiant("Microsoft")); TechGiants.Add(new TechGiant("Apple")); TechGiants.Add(new TechGiant("Google")); At last, set TechGiants to be the ItemSource of the DemoCarouselView DemoCarouselView.ItemsSource = TechGiants; DataTemplates In the XAML - le, give the CarouselView a DataTemplate: <cv:CarouselView.ItemTemplate> </cv:CarouselView.ItemTemplate> Dene a DataTemplate. In this case, this will be a Label with text bind to the itemsource and a green background: <DataTemplate> <Label Text="{Binding Name}" BackgroundColor="Green"/> </DataTemplate> That's it! Run the program and see the result! 
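For reference, the same ItemsSource and ItemTemplate wiring can also be done entirely in code-behind instead of XAML. A short sketch reusing the TechGiant class and the DemoCarouselView name from above:

DemoCarouselView.ItemsSource = TechGiants;

DemoCarouselView.ItemTemplate = new DataTemplate(() =>
{
    // Mirrors the XAML DataTemplate: a green Label bound to TechGiant.Name.
    var label = new Label { BackgroundColor = Color.Green };
    label.SetBinding(Label.TextProperty, "Name");
    return label;
});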
GoalKicker.com Xamarin.Forms Notes for Professionals 134 Chapter 26: Exception handling Section 26.1: One way to report about exceptions on iOS Go to Main.cs le in iOS project and change existed code, like presented below: static void Main(string[] args) { try { UIApplication.Main(args, null, "AppDelegate"); } catch (Exception ex) { Debug.WriteLine("iOS Main Exception: {0}", ex); var watson = new LittleWatson(); watson.SaveReport(ex); } } ILittleWatson interface, used in portable code, could look like this: public interface ILittleWatson { Task<bool> SendReport(); void SaveReport(Exception ex); } Implementation for iOS project: [assembly: Xamarin.Forms.Dependency(typeof(LittleWatson))] namespace SomeNamespace { public class LittleWatson : ILittleWatson { private const string FileName = "Report.txt"; private readonly static string DocumentsFolder; private readonly static string FilePath; private TaskCompletionSource<bool> _sendingTask; static LittleWatson() { DocumentsFolder = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments); FilePath = Path.Combine(DocumentsFolder, FileName); } public async Task<bool> SendReport() { _sendingTask = new TaskCompletionSource<bool>(); try { var text = File.ReadAllText(FilePath); File.Delete(FilePath); if (MFMailComposeViewController.CanSendMail) GoalKicker.com Xamarin.Forms Notes for Professionals 135 { var email = ""; // Put receiver email here. var mailController = new MFMailComposeViewController(); mailController.SetToRecipients(new string[] { email }); mailController.SetSubject("iPhone error"); mailController.SetMessageBody(text, false); mailController.Finished += (object s, MFComposeResultEventArgs args) => { args.Controller.DismissViewController(true, null); _sendingTask.TrySetResult(true); }; ShowViewController(mailController); } } catch (FileNotFoundException) { // No errors found. _sendingTask.TrySetResult(false); } return await _sendingTask.Task; } public void SaveReport(Exception ex) { var exceptionInfo = $"{ex.Message} - {ex.StackTrace}"; File.WriteAllText(FilePath, exceptionInfo); } private static void ShowViewController(UIViewController controller) { var topController = UIApplication.SharedApplication.KeyWindow.RootViewController; while (topController.PresentedViewController != null) { topController = topController.PresentedViewController; } topController.PresentViewController(controller, true, null); } } } And then, somewhere, where app starts, put: var watson = DependencyService.Get<ILittleWatson>(); if (watson != null) { await watson.SendReport(); } GoalKicker.com Xamarin.Forms Notes for Professionals 136 Chapter 27: SQL Database and API in Xamarin Forms. 
Section 27.1: Create API using SQL database and implement in Xamarin forms, Source Code Blog GoalKicker.com Xamarin.Forms Notes for Professionals 137 Chapter 28: Contact Picker - Xamarin Forms (Android and iOS) Section 28.1: contact_picker.cs using System; using Xamarin.Forms; namespace contact_picker { public class App : Application { public App () { // The root page of your application MainPage = new MyPage(); } protected override void OnStart () { // Handle when your app starts } protected override void OnSleep () { // Handle when your app sleeps } protected override void OnResume () { // Handle when your app resumes } } } Section 28.2: MyPage.cs using System; using Xamarin.Forms; namespace contact_picker { public class MyPage : ContentPage { Button button; public MyPage () { button = new Button { Text = "choose contact" }; button.Clicked += async (object sender, EventArgs e) => { if (Device.OS == TargetPlatform.iOS) { await Navigation.PushModalAsync (new ChooseContactPage ()); } GoalKicker.com Xamarin.Forms Notes for Professionals 138 else if (Device.OS == TargetPlatform.Android) { MessagingCenter.Send (this, "android_choose_contact", "number1"); } }; Content = new StackLayout { Children = { new Label { Text = "Hello ContentPage" }, button } }; } protected override void OnSizeAllocated (double width, double height) { base.OnSizeAllocated (width, height); MessagingCenter.Subscribe<MyPage, string> (this, "num_select", (sender, arg) => { DisplayAlert ("contact", arg, "OK"); }); } } } Section 28.3: ChooseContactPicker.cs using System; using Xamarin.Forms; namespace contact_picker { public class ChooseContactPage : ContentPage { public ChooseContactPage () { } } } Section 28.4: ChooseContactActivity.cs using Android.App; using Android.OS; using Android.Content; using Android.Database; using Xamarin.Forms; namespace contact_picker.Droid { [Activity (Label = "ChooseContactActivity")] public class ChooseContactActivity : Activity { public string type_number = ""; GoalKicker.com Xamarin.Forms Notes for Professionals 139 protected override void OnCreate (Bundle savedInstanceState) { base.OnCreate (savedInstanceState); Intent intent = new Intent(Intent.ActionPick, Android.Provider.ContactsContract.CommonDataKinds.Phone.ContentUri); StartActivityForResult(intent, 1); } protected override void OnActivityResult (int requestCode, Result resultCode, Intent data) { // TODO Auto-generated method stub base.OnActivityResult (requestCode, resultCode, data); if (requestCode == 1) { if (resultCode == Result.Ok) { Android.Net.Uri contactData = data.Data; ICursor cursor = ContentResolver.Query(contactData, null, null, null, null); cursor.MoveToFirst(); string number = cursor.GetString(cursor.GetColumnIndexOrThrow(Android.Provider.ContactsContract.CommonDataKinds.Pho ne.Number)); var twopage_renderer = new MyPage(); MessagingCenter.Send<MyPage, string> (twopage_renderer, "num_select", number); Finish (); Xamarin.Forms.Application.Current.MainPage.Navigation.PopModalAsync (); } else if (resultCode == Result.Canceled) { Finish (); } } } } } Section 28.5: MainActivity.cs using System; using Android.App; using Android.Content; using Android.Content.PM; using Android.Runtime; using Android.Views; using Android.Widget; using Android.OS; using Xamarin.Forms; namespace contact_picker.Droid { [Activity (Label = "contact_picker.Droid", Icon = "@drawable/icon", MainLauncher = true, ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation)] GoalKicker.com Xamarin.Forms Notes for Professionals 140 public 
class MainActivity : global::Xamarin.Forms.Platform.Android.FormsApplicationActivity { protected override void OnCreate (Bundle bundle) { base.OnCreate (bundle); global::Xamarin.Forms.Forms.Init (this, bundle); LoadApplication (new App ()); MessagingCenter.Subscribe<MyPage, string>(this, "android_choose_contact", (sender, args) => { Intent i = new Intent (Android.App.Application.Context, typeof(ChooseContactActivity)); i.PutExtra ("number1", args); StartActivity (i); }); } } } Section 28.6: ChooseContactRenderer.cs using UIKit; using AddressBookUI; using Xamarin.Forms; using Xamarin.Forms.Platform.iOS; using contact_picker; using contact_picker.iOS; [assembly: ExportRenderer (typeof(ChooseContactPage), typeof(ChooseContactRenderer))] namespace contact_picker.iOS { public partial class ChooseContactRenderer : PageRenderer { ABPeoplePickerNavigationController _contactController; public string type_number; protected override void OnElementChanged (VisualElementChangedEventArgs e) { base.OnElementChanged (e); var page = e.NewElement as ChooseContactPage; if (e.OldElement != null || Element == null) { return; } } public override void ViewDidLoad () { base.ViewDidLoad (); _contactController = new ABPeoplePickerNavigationController (); GoalKicker.com Xamarin.Forms Notes for Professionals 141 this.PresentModalViewController (_contactController, true); //display contact chooser _contactController.Cancelled += delegate { Xamarin.Forms.Application.Current.MainPage.Navigation.PopModalAsync (); this.DismissModalViewController (true); }; _contactController.SelectPerson2 += delegate(object sender, ABPeoplePickerSelectPerson2EventArgs e) { var getphones = e.Person.GetPhones(); string number = ""; if (getphones == null) { number = "Nothing"; } else if (getphones.Count > 1) { //il ya plus de 2 num de telephone foreach(var t in getphones) { number = t.Value + "/" + number; } } else if (getphones.Count == 1) { //il ya 1 num de telephone foreach(var t in getphones) { number = t.Value; } } Xamarin.Forms.Application.Current.MainPage.Navigation.PopModalAsync (); var twopage_renderer = new MyPage(); MessagingCenter.Send<MyPage, string> (twopage_renderer, "num_select", number); this.DismissModalViewController (true); }; } public override void ViewDidUnload () { base.ViewDidUnload (); // Clear any references to subviews of the main view in order to // allow the Garbage Collector to collect them sooner. // // e.g. myOutlet.Dispose (); myOutlet = null; this.DismissModalViewController (true); } public override bool ShouldAutorotateToInterfaceOrientation (UIInterfaceOrientation toInterfaceOrientation) GoalKicker.com Xamarin.Forms Notes for Professionals 142 { // Return true for supported orientations return (toInterfaceOrientation != UIInterfaceOrientation.PortraitUpsideDown); } } } GoalKicker.com Xamarin.Forms Notes for Professionals 143 Chapter 29: Xamarin Plugin Section 29.1: Media Plugin Take or pick photos and videos from a cross platform API. 
Available Nuget : [https://www.nuget.org/packages/Xam.Plugin.Media/][1] XAML <StackLayout Spacing="10" Padding="10"> <Button x:Name="takePhoto" Text="Take Photo"/> <Button x:Name="pickPhoto" Text="Pick Photo"/> <Button x:Name="takeVideo" Text="Take Video"/> <Button x:Name="pickVideo" Text="Pick Video"/> <Label Text="Save to Gallery"/> <Switch x:Name="saveToGallery" IsToggled="false" HorizontalOptions="Center"/> <Label Text="Image will show here"/> <Image x:Name="image"/> <Label Text=""/> </StackLayout> Code namespace PluginDemo { public partial class MediaPage : ContentPage { public MediaPage() { InitializeComponent(); takePhoto.Clicked += async (sender, args) => { if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported) { await DisplayAlert("No Camera", ":( No camera avaialble.", "OK"); return; } try { var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions { Directory = "Sample", Name = "test.jpg", SaveToAlbum = saveToGallery.IsToggled }); if (file == null) return; await DisplayAlert("File Location", (saveToGallery.IsToggled ? file.AlbumPath : file.Path), "OK"); image.Source = ImageSource.FromStream(() => { GoalKicker.com Xamarin.Forms Notes for Professionals 144 var stream = file.GetStream(); file.Dispose(); return stream; }); } catch //(Exception ex) { // Xamarin.Insights.Report(ex); // await DisplayAlert("Uh oh", "Something went wrong, but don't worry we captured it in Xamarin Insights! Thanks.", "OK"); } }; pickPhoto.Clicked += async (sender, args) => { if (!CrossMedia.Current.IsPickPhotoSupported) { await DisplayAlert("Photos Not Supported", ":( Permission not granted to photos.", "OK"); return; } try { Stream stream = null; var file = await CrossMedia.Current.PickPhotoAsync().ConfigureAwait(true); if (file == null) return; stream = file.GetStream(); file.Dispose(); image.Source = ImageSource.FromStream(() => stream); } catch //(Exception ex) { // Xamarin.Insights.Report(ex); // await DisplayAlert("Uh oh", "Something went wrong, but don't worry we captured it in Xamarin Insights! Thanks.", "OK"); } }; takeVideo.Clicked += async (sender, args) => { if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakeVideoSupported) { await DisplayAlert("No Camera", ":( No camera avaialble.", "OK"); return; } try { var file = await CrossMedia.Current.TakeVideoAsync(new Plugin.Media.Abstractions.StoreVideoOptions { Name = "video.mp4", Directory = "DefaultVideos", SaveToAlbum = saveToGallery.IsToggled }); GoalKicker.com Xamarin.Forms Notes for Professionals 145 if (file == null) return; await DisplayAlert("Video Recorded", "Location: " + (saveToGallery.IsToggled ? file.AlbumPath : file.Path), "OK"); file.Dispose(); } catch //(Exception ex) { // Xamarin.Insights.Report(ex); // await DisplayAlert("Uh oh", "Something went wrong, but don't worry we captured it in Xamarin Insights! Thanks.", "OK"); } }; pickVideo.Clicked += async (sender, args) => { if (!CrossMedia.Current.IsPickVideoSupported) { await DisplayAlert("Videos Not Supported", ":( Permission not granted to videos.", "OK"); return; } try { var file = await CrossMedia.Current.PickVideoAsync(); if (file == null) return; await DisplayAlert("Video Selected", "Location: " + file.Path, "OK"); file.Dispose(); } catch //(Exception ex) { //Xamarin.Insights.Report(ex); //await DisplayAlert("Uh oh", "Something went wrong, but don't worry we captured it in Xamarin Insights! 
Thanks.", "OK"); } }; } } } Section 29.2: Share Plugin Simple way to share a message or link, copy text to clipboard, or open a browser in any Xamarin or Windows app. Available on NuGet : https://www.nuget.org/packages/Plugin.Share/ XAML <StackLayout Padding="20" Spacing="20"> <Button StyleId="Text" Text="Share Text" Clicked="Button_OnClicked"/> <Button StyleId="Link" Text="Share Link" Clicked="Button_OnClicked"/> <Button StyleId="Browser" Text="Open Browser" Clicked="Button_OnClicked"/> <Label Text=""/> GoalKicker.com Xamarin.Forms Notes for Professionals 146 </StackLayout> C# async void Button_OnClicked(object sender, EventArgs e) { switch (((Button)sender).StyleId) { case "Text": await CrossShare.Current.Share("Follow @JamesMontemagno on Twitter", "Share"); break; case "Link": await CrossShare.Current.ShareLink("http://motzcod.es", "Checkout my blog", "MotzCod.es"); break; case "Browser": await CrossShare.Current.OpenBrowser("http://motzcod.es"); break; } } Section 29.3: ExternalMaps External Maps Plugin Open external maps to navigate to a specic geolocation or address. Option to launch with navigation option on iOS as well. Available on NuGet :[https://www.nuget.org/packages/Xam.Plugin.ExternalMaps/][1] XAML <StackLayout Spacing="10" Padding="10"> <Button x:Name="navigateAddress" Text="Navigate to Address"/> <Button x:Name="navigateLatLong" Text="Navigate to Lat|Long"/> <Label Text=""/> </StackLayout> Code namespace PluginDemo { public partial class ExternalMaps : ContentPage { public ExternalMaps() { InitializeComponent(); navigateLatLong.Clicked += (sender, args) => { CrossExternalMaps.Current.NavigateTo("Space Needle", 47.6204, -122.3491); }; navigateAddress.Clicked += (sender, args) => { CrossExternalMaps.Current.NavigateTo("Xamarin", "394 pacific ave.", "San Francisco", "CA", "94111", "USA", "USA"); }; } } GoalKicker.com Xamarin.Forms Notes for Professionals 147 } Section 29.4: Geolocator Plugin Easly access geolocation across Xamarin.iOS, Xamarin.Android and Windows. Available Nuget: [https://www.nuget.org/packages/Xam.Plugin.Geolocator/][1] XAML <StackLayout Spacing="10" Padding="10"> <Button x:Name="buttonGetGPS" Text="Get GPS"/> <Label x:Name="labelGPS"/> <Button x:Name="buttonTrack" Text="Track Movements"/> <Label x:Name="labelGPSTrack"/> <Label Text=""/> </StackLayout> Code namespace PluginDemo { public partial class GeolocatorPage : ContentPage { public GeolocatorPage() { InitializeComponent(); buttonGetGPS.Clicked += async (sender, args) => { try { var locator = CrossGeolocator.Current; locator.DesiredAccuracy = 1000; labelGPS.Text = "Getting gps"; var position = await locator.GetPositionAsync(timeoutMilliseconds: 10000); if (position == null) { labelGPS.Text = "null gps :("; return; } labelGPS.Text = string.Format("Time: {0} \nLat: {1} \nLong: {2} \nAltitude: {3} \nAltitude Accuracy: {4} \nAccuracy: {5} \nHeading: {6} \nSpeed: {7}", position.Timestamp, position.Latitude, position.Longitude, position.Altitude, position.AltitudeAccuracy, position.Accuracy, position.Heading, position.Speed); } catch //(Exception ex) { // Xamarin.Insights.Report(ex); // await DisplayAlert("Uh oh", "Something went wrong, but don't worry we captured it in Xamarin Insights! 
Thanks.", "OK"); } }; buttonTrack.Clicked += async (object sender, EventArgs e) => { GoalKicker.com Xamarin.Forms Notes for Professionals 148 try { if (CrossGeolocator.Current.IsListening) { await CrossGeolocator.Current.StopListeningAsync(); labelGPSTrack.Text = "Stopped tracking"; buttonTrack.Text = "Stop Tracking"; } else { if (await CrossGeolocator.Current.StartListeningAsync(30000, 0)) { labelGPSTrack.Text = "Started tracking"; buttonTrack.Text = "Track Movements"; } } } catch //(Exception ex) { //Xamarin.Insights.Report(ex); // await DisplayAlert("Uh oh", "Something went wrong, but don't worry we captured it in Xamarin Insights! Thanks.", "OK"); } }; } protected override void OnAppearing() { base.OnAppearing(); try { CrossGeolocator.Current.PositionChanged += CrossGeolocator_Current_PositionChanged; CrossGeolocator.Current.PositionError += CrossGeolocator_Current_PositionError; } catch { } } void CrossGeolocator_Current_PositionError(object sender, Plugin.Geolocator.Abstractions.PositionErrorEventArgs e) { labelGPSTrack.Text = "Location error: " + e.Error.ToString(); } void CrossGeolocator_Current_PositionChanged(object sender, Plugin.Geolocator.Abstractions.PositionEventArgs e) { var position = e.Position; labelGPSTrack.Text = string.Format("Time: {0} \nLat: {1} \nLong: {2} \nAltitude: {3} \nAltitude Accuracy: {4} \nAccuracy: {5} \nHeading: {6} \nSpeed: {7}", position.Timestamp, position.Latitude, position.Longitude, position.Altitude, position.AltitudeAccuracy, position.Accuracy, position.Heading, position.Speed); } protected override void OnDisappearing() { base.OnDisappearing(); GoalKicker.com Xamarin.Forms Notes for Professionals 149 try { CrossGeolocator.Current.PositionChanged -= CrossGeolocator_Current_PositionChanged; CrossGeolocator.Current.PositionError -= CrossGeolocator_Current_PositionError; } catch { } } } } Section 29.5: Messaging Plugin Messaging plugin for Xamarin and Windows to make a phone call, send a sms or send an e-mail using the default messaging applications on the dierent mobile platforms. 
Available Nuget : [https://www.nuget.org/packages/Xam.Plugins.Messaging/][1] XAML <StackLayout Spacing="10" Padding="10"> <Entry Placeholder="Phone Number" x:Name="phone"/> <Button x:Name="buttonSms" Text="Send SMS"/> <Button x:Name="buttonCall" Text="Call Phone Number"/> <Entry Placeholder="E-mail Address" x:Name="email"/> <Button x:Name="buttonEmail" Text="Send E-mail"/> <Label Text=""/> </StackLayout> Code namespace PluginDemo { public partial class MessagingPage : ContentPage { public MessagingPage() { InitializeComponent(); buttonCall.Clicked += async (sender, e) => { try { // Make Phone Call var phoneCallTask = MessagingPlugin.PhoneDialer; if (phoneCallTask.CanMakePhoneCall) phoneCallTask.MakePhoneCall(phone.Text); else await DisplayAlert("Error", "This device can't place calls", "OK"); } catch { // await DisplayAlert("Error", "Unable to perform action", "OK"); } }; buttonSms.Clicked += async (sender, e) => { try GoalKicker.com Xamarin.Forms Notes for Professionals 150 { var smsTask = MessagingPlugin.SmsMessenger; if (smsTask.CanSendSms) smsTask.SendSms(phone.Text, "Hello World"); else await DisplayAlert("Error", "This device can't send sms", "OK"); } catch { // await DisplayAlert("Error", "Unable to perform action", "OK"); } }; buttonEmail.Clicked += async (sender, e) => { try { var emailTask = MessagingPlugin.EmailMessenger; if (emailTask.CanSendEmail) emailTask.SendEmail(email.Text, "Hello there!", "This was sent from the Xamrain Messaging Plugin from shared code!"); else await DisplayAlert("Error", "This device can't send emails", "OK"); } catch { //await DisplayAlert("Error", "Unable to perform action", "OK"); } }; } } } Section 29.6: Permissions Plugin Check to see if your users have granted or denied permissions for common permission groups on iOS and Android. Additionally, you can request permissions with a simple cross-platform async/awaitied API. 
Available Nuget : https://www.nuget.org/packages/Plugin.Permissions enter link description here XAML XAML <StackLayout Padding="30" Spacing="10"> <Button Text="Get Location" Clicked="Button_OnClicked"></Button> <Label x:Name="LabelGeolocation"></Label> <Button Text="Calendar" StyleId="Calendar" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Camera" StyleId="Camera" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Contacts" StyleId="Contacts" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Microphone" StyleId="Microphone" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Phone" StyleId="Phone" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Photos" StyleId="Photos" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Reminders" StyleId="Reminders" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Sensors" StyleId="Sensors" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Sms" StyleId="Sms" Clicked="ButtonPermission_OnClicked"></Button> <Button Text="Storage" StyleId="Storage" Clicked="ButtonPermission_OnClicked"></Button> <Label Text=""/> </StackLayout> GoalKicker.com Xamarin.Forms Notes for Professionals 151 Code bool busy; async void ButtonPermission_OnClicked(object sender, EventArgs e) { if (busy) return; busy = true; ((Button)sender).IsEnabled = false; var status = PermissionStatus.Unknown; switch (((Button)sender).StyleId) { case "Calendar": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Calendar); break; case "Camera": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Camera); break; case "Contacts": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Contacts); break; case "Microphone": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Microphone); break; case "Phone": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Phone); break; case "Photos": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Photos); break; case "Reminders": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Reminders); break; case "Sensors": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Sensors); break; case "Sms": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Sms); break; case "Storage": status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Storage); break; } await DisplayAlert("Results", status.ToString(), "OK"); if (status != PermissionStatus.Granted) { switch (((Button)sender).StyleId) { GoalKicker.com Xamarin.Forms Notes for Professionals 152 case "Calendar": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Calendar))[Permission.Calendar]; break; case "Camera": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Camera))[Permission.Camera]; break; case "Contacts": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Contacts))[Permission.Contacts]; break; case "Microphone": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Microphone))[Permission.Microphone]; break; case "Phone": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Phone))[Permission.Phone]; break; case "Photos": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Photos))[Permission.Photos]; break; case "Reminders": status 
= (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Reminders))[Permission.Reminders]; break; case "Sensors": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Sensors))[Permission.Sensors]; break; case "Sms": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Sms))[Permission.Sms]; break; case "Storage": status = (await CrossPermissions.Current.RequestPermissionsAsync(Permission.Storage))[Permission.Storage]; break; } await DisplayAlert("Results", status.ToString(), "OK"); } busy = false; ((Button)sender).IsEnabled = true; } async void Button_OnClicked(object sender, EventArgs e) { if (busy) return; busy = true; ((Button)sender).IsEnabled = false; try { var status = await CrossPermissions.Current.CheckPermissionStatusAsync(Permission.Location); GoalKicker.com Xamarin.Forms Notes for Professionals 153 if (status != PermissionStatus.Granted) { if (await CrossPermissions.Current.ShouldShowRequestPermissionRationaleAsync(Permission.Location)) { await DisplayAlert("Need location", "Gunna need that location", "OK"); } var results = await CrossPermissions.Current.RequestPermissionsAsync(Permission.Location); status = results[Permission.Location]; } if (status == PermissionStatus.Granted) { var results = await CrossGeolocator.Current.GetPositionAsync(10000); LabelGeolocation.Text = "Lat: " + results.Latitude + " Long: " + results.Longitude; } else if (status != PermissionStatus.Unknown) { await DisplayAlert("Location Denied", "Can not continue, try again.", "OK"); } } catch (Exception ex) { LabelGeolocation.Text = "Error: " + ex; } ((Button)sender).IsEnabled = true; busy = false; } GoalKicker.com Xamarin.Forms Notes for Professionals 154 Chapter 30: OAuth2 Section 30.1: Authentication by using Plugin 1. First, Go to Tools > NuGet Package Manager > Package Manager Console. 2. Enter this Command "Install-Package Plugin.Facebook" in Package Manger Console. 3. Now all the le is automatically created. GoalKicker.com Xamarin.Forms Notes for Professionals 155 Video : Login with Facebook in Xamarin Forms Other Authentication by using Plugin. Please place the command in Package Manager Console as shown in Step 2. 1. Youtube : Install-Package Plugin.Youtube 2. Twitter : Install-Package Plugin.Twitter 3. Foursquare : Install-Package Plugin.Foursquare 4. Google : Install-Package Plugin.Google 5. Instagram : Install-Package Plugin.Instagram 6. Eventbrite : Install-Package Plugin.Eventbrite GoalKicker.com Xamarin.Forms Notes for Professionals 156 Chapter 31: MessagingCenter Xamarin.Forms has a built-in messaging mechanism to promote decoupled code. This way, view models and other components do not need to know each other. They can communicate by a simple messaging contract. There a basically two main ingredients for using the MessagingCenter. Subscribe; listen for messages with a certain signature (the contract) and execute code when a message is received. A message can have multiple subscribers. Send; sending a message for subscribers to act upon. Section 31.1: Simple example Here we will see a simple example of using the MessagingCenter in Xamarin.Forms. First, let's have a look at subscribing to a message. In the FooMessaging model we subscribe to a message coming from the MainPage. The message should be "Hi" and when we receive it, we register a handler which sets the property Greeting. Lastly this means the current FooMessaging instance is registering for this message. 
public class FooMessaging { public string Greeting { get; set; } public FooMessaging() { MessagingCenter.Subscribe<MainPage> (this, "Hi", (sender) => { this.Greeting = "Hi there!"; }); } } To send a message triggering this functionality, we need to have a page called MainPage, and implement code like underneath. public class MainPage : Page { private void OnButtonClick(object sender, EventArgs args) { MessagingCenter.Send<MainPage> (this, "Hi"); } } In our MainPage we have a button with a handler that sends a message. this should be an instance of MainPage. Section 31.2: Passing arguments You can also pass arguments with a message to work with. We will use the classed from our previous example and extend them. In the receiving part, right behind the Subscribe method call add the type of the argument you are expecting. Also make sure you also declare the arguments in the handler signature. public class FooMessaging { GoalKicker.com Xamarin.Forms Notes for Professionals 157 public string Greeting { get; set; } public FooMessaging() { MessagingCenter.Subscribe<MainPage, string> (this, "Hi", (sender, arg) => { this.Greeting = arg; }); } } When sending a message, make sure to include the argument value. Also, here you add the type right behind the Send method and add the argument value. public class MainPage : Page { private void OnButtonClick(object sender, EventArgs args) { MessagingCenter.Send<MainPage, string> (this, "Hi", "Hi there!"); } } In this example a simple string is used, but you can also use any other type of (complex) objects. Section 31.3: Unsubscribing When you no longer need to receive messages, you can simply unsubscribe. You can do it like this: MessagingCenter.Unsubscribe<MainPage> (this, "Hi"); When you are supplying arguments, you have to unsubscribe from the complete signature, like this: MessagingCenter.Unsubscribe<MainPage, string> (this, "Hi"); GoalKicker.com Xamarin.Forms Notes for Professionals 158 Chapter 32: Generic Xamarin.Forms app lifecycle? Platform-dependant! Section 32.1: Xamarin.Forms lifecycle is not the actual app lifecycle but a cross-platform representation of it Lets have a look at the native app lifecycle methods for dierent platforms. Android. //Xamarin.Forms.Platform.Android.FormsApplicationActivity lifecycle methods: protected override void OnCreate(Bundle savedInstanceState); protected override void OnDestroy(); protected override void OnPause(); protected override void OnRestart(); protected override void OnResume(); protected override void OnStart(); protected override void OnStop(); iOS. //Xamarin.Forms.Platform.iOS.FormsApplicationDelegate lifecycle methods: public override void DidEnterBackground(UIApplication uiApplication); public override bool FinishedLaunching(UIApplication uiApplication, NSDictionary launchOptions); public override void OnActivated(UIApplication uiApplication); public override void OnResignActivation(UIApplication uiApplication); public override void WillEnterForeground(UIApplication uiApplication); public override bool WillFinishLaunching(UIApplication uiApplication, NSDictionary launchOptions); public override void WillTerminate(UIApplication uiApplication); Windows. 
//Windows.UI.Xaml.Application lifecycle methods: public event EventHandler<System.Object> Resuming; public event SuspendingEventHandler Suspending; protected virtual void OnActivated(IActivatedEventArgs args); protected virtual void OnFileActivated(FileActivatedEventArgs args); protected virtual void OnFileOpenPickerActivated(FileOpenPickerActivatedEventArgs args); protected virtual void OnFileSavePickerActivated(FileSavePickerActivatedEventArgs args); protected virtual void OnLaunched(LaunchActivatedEventArgs args); protected virtual void OnSearchActivated(SearchActivatedEventArgs args); protected virtual void OnShareTargetActivated(ShareTargetActivatedEventArgs args); protected virtual void OnWindowCreated(WindowCreatedEventArgs args); //Windows.UI.Xaml.Window lifecycle methods: public event WindowActivatedEventHandler Activated; public event WindowClosedEventHandler Closed; public event WindowVisibilityChangedEventHandler VisibilityChanged; And now Xamarin.Forms app lifecycle methods: //Xamarin.Forms.Application lifecycle methods: protected virtual void OnResume(); protected virtual void OnSleep(); protected virtual void OnStart(); GoalKicker.com Xamarin.Forms Notes for Professionals 159 What you can easily tell from merely observing the lists, the Xamarin.Forms cross-platform app lifecycle perspective is greatly simplied. It gives you the generic clue about what state your app is in but in most production cases you will have to build some platform-dependant logic. GoalKicker.com Xamarin.Forms Notes for Professionals 160 Chapter 33: Platform-specic behaviour Section 33.1: Removing icon in navigation header in Anroid Using a small transparent image called empty.png public class MyPage : ContentPage { public Page() { if (Device.OS == TargetPlatform.Android) NavigationPage.SetTitleIcon(this, "empty.png"); } } Section 33.2: Make label's font size smaller in iOS Label label = new Label { Text = "text" }; if(Device.OS == TargetPlatform.iOS) { label.FontSize = label.FontSize - 2; GoalKicker.com Xamarin.Forms Notes for Professionals 161 } GoalKicker.com Xamarin.Forms Notes for Professionals 162 Chapter 34: Platform specic visual adjustments Section 34.1: Idiom adjustments Idiom specic adjustments can be done from C# code, for example for changing the layout orientation whether the view is shown or a phone or a tablet. if (Device.Idiom == TargetIdiom.Phone) { this.panel.Orientation = StackOrientation.Vertical; } else { this.panel.Orientation = StackOrientation.Horizontal; } Those functionalities are also available directly from XAML code : <StackLayout x:Name="panel"> <StackLayout.Orientation> <OnIdiom x:TypeArguments="StackOrientation"> <OnIdiom.Phone>Vertical</OnIdiom.Phone> <OnIdiom.Tablet>Horizontal</OnIdiom.Tablet> </OnIdiom> </StackLayout.Orientation> </StackLayout> Section 34.2: Platform adjustments Adjustments can be done for specic platforms from C# code, for example for changing padding for all the targeted platforms. 
if (Device.OS == TargetPlatform.iOS) { panel.Padding = new Thickness (10); } else { panel.Padding = new Thickness (20); } An helper method is also available for shortened C# declarations : panel.Padding = new Thickness (Device.OnPlatform(10,20,0)); Those functionalities are also available directly from XAML code : <StackLayout x:Name="panel"> <StackLayout.Padding> <OnPlatform x:TypeArguments="Thickness" iOS="10" Android="20" /> </StackLayout.Padding> </StackLayout> GoalKicker.com Xamarin.Forms Notes for Professionals 163 Section 34.3: Using styles When working with XAML, using a centralized Style allows you to update a set of styled views from one place. All the idiom and platform adjustements can also be integrated to your styles. <Style TargetType="StackLayout"> <Setter Property="Padding"> <Setter.Value> <OnPlatform x:TypeArguments="Thickness" iOS="10" Android="20"/> </Setter.Value> </Setter> </Style> Section 34.4: Using custom views You can create custom views that can be integrated to your page thanks to those adjustment tools. Select File > New > File... > Forms > Forms ContentView (Xaml) and create a view for each specic layout : TabletHome.xamland PhoneHome.xaml. Then select File > New > File... > Forms > Forms ContentPage and create a HomePage.cs that contains : using Xamarin.Forms; public class HomePage : ContentPage { public HomePage() { if (Device.Idiom == TargetIdiom.Phone) { Content = new PhoneHome(); } else { Content = new TabletHome(); } } } You now have a HomePage that creates a dierent view hierarchy for Phone and Tablet idioms. GoalKicker.com Xamarin.Forms Notes for Professionals 164 Chapter 35: Dependency Services Section 35.1: Access Camera and Gallery https://github.com/vDoers/vDoersCameraAccess GoalKicker.com Xamarin.Forms Notes for Professionals 165 Chapter 36: Unit Testing Section 36.1: Testing the view models Before we start... In terms of application layers your ViewModel is a class containing all the business logic and rules making the app do what it should according to the requirements. It's also important to make it as much independent as possible reducing references to UI, data layer, native features and API calls etc. All of these makes your VM be testable. In short, your ViewModel: Should not depend on UI classes (views, pages, styles, events); Should not use static data of another classes (as much as you can); Should implement the business logic and prepare data to be should on UI; Should use other components (database, HTTP, UI-specic) via interfaces being resolved using Dependency Injection. Your ViewModel may have properties of another VMs types as well. 
For example ContactsPageViewModel will have propery of collection type like ObservableCollection<ContactListItemViewModel> Business requirements Let's say we have the following functionality to implement: As an unauthorized user I want to log into the app So that I will access the authorized features After clarifying the user story we dened the following scenarios: Scenario: trying to log in with valid non-empty creds Given the user is on Login screen When the user enters 'user' as username And the user enters 'pass' as password And the user taps the Login button Then the app shows the loading indicator And the app makes an API call for authentication Scenario: trying to log in empty username Given the user is on Login screen When the user enters ' ' as username And the user enters 'pass' as password And the user taps the Login button Then the app shows an error message saying 'Please, enter correct username and password' And the app doesn't make an API call for authentication We will stay with only these two scenarios. Of course, there should be much more cases and you should dene all of them before actual coding, but it's pretty enough for us now to get familiar with unit testing of view models. Let's follow the classical TDD approach and start with writing an empty class being tested. Then we will write tests and will make them green by implementing the business functionality. GoalKicker.com Xamarin.Forms Notes for Professionals 166 Common classes public abstract class BaseViewModel : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null) { PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); } } Services Do you remember our view model must not utilize UI and HTTP classes directly? You should dene them as abstractions instead and not to depend on implementation details. /// <summary> /// Provides authentication functionality. /// </summary> public interface IAuthenticationService { /// <summary> /// Tries to authenticate the user with the given credentials. /// </summary> /// <param name="userName">UserName</param> /// <param name="password">User's password</param> /// <returns>true if the user has been successfully authenticated</returns> Task<bool> Login(string userName, string password); } /// <summary> /// UI-specific service providing abilities to show alert messages. /// </summary> public interface IAlertService { /// <summary> /// Show an alert message to the user. 
/// </summary> /// <param name="title">Alert message title</param> /// <param name="message">Alert message text</param> Task ShowAlert(string title, string message); } Building the ViewModel stub Ok, we're gonna have the page class for Login screen, but let's start with ViewModel rst: public class LoginPageViewModel : BaseViewModel { private readonly IAuthenticationService authenticationService; private readonly IAlertService alertService; private string userName; private string password; private bool isLoading; private ICommand loginCommand; public LoginPageViewModel(IAuthenticationService authenticationService, IAlertService alertService) { this.authenticationService = authenticationService; GoalKicker.com Xamarin.Forms Notes for Professionals 167 this.alertService = alertService; } public string UserName { get { return userName; } set { if (userName!= value) { userName= value; OnPropertyChanged(); } } } public string Password { get { return password; } set { if (password != value) { password = value; OnPropertyChanged(); } } } public bool IsLoading { get { return isLoading; } set { if (isLoading != value) { isLoading = value; OnPropertyChanged(); } } } public ICommand LoginCommand => loginCommand ?? (loginCommand = new Command(Login)); private void Login() { authenticationService.Login(UserName, Password); } } We dened two string properties and a command to be bound on UI. We won't describe how to build a page class, XAML markup and bind ViewModel to it in this topic as they have nothing specic. GoalKicker.com Xamarin.Forms Notes for Professionals 168 How to create a LoginPageViewModel instance? I think you were probably creating the VMs just with constructor. Now as you can see our VM depends on 2 services being injected as constructor parameters so can't just do var viewModel = new LoginPageViewModel(). If you're not familiar with Dependency Injection it's the best moment to learn about it. Proper unit-testing is impossible without knowing and following this principle. Tests Now let's write some tests according to use cases listed above. First of all you need to create a new assembly (just a class library or select a special testing project if you want to use Microsoft unit testing tools). Name it something like ProjectName.Tests and add reference to your original PCL project. I this example I'm going to use NUnit and Moq but you can go on with any testing libs of your choise. There will be nothing special with them. Ok, that's the test class: [TestFixture] public class LoginPageViewModelTest { } Writing tests Here's the test methods for the rst two scenarios. Try keeping 1 test method per 1 expected result and not to check everything in one test. That will help you to receive clearer reports about what has failed in the code. 
[TestFixture] public class LoginPageViewModelTest { private readonly Mock<IAuthenticationService> authenticationServiceMock = new Mock<IAuthenticationService>(); private readonly Mock<IAlertService> alertServiceMock = new Mock<IAlertService>(); [TestCase("user", "pass")] public void LogInWithValidCreds_LoadingIndicatorShown(string userName, string password) { LoginPageViewModel model = CreateViewModelAndLogin(userName, password); Assert.IsTrue(model.IsLoading); } [TestCase("user", "pass")] public void LogInWithValidCreds_AuthenticationRequested(string userName, string password) { CreateViewModelAndLogin(userName, password); authenticationServiceMock.Verify(x => x.Login(userName, password), Times.Once); } [TestCase("", "pass")] [TestCase(" ", "pass")] [TestCase(null, "pass")] public void LogInWithEmptyuserName_AuthenticationNotRequested(string userName, string password) { CreateViewModelAndLogin(userName, password); GoalKicker.com Xamarin.Forms Notes for Professionals 169 authenticationServiceMock.Verify(x => x.Login(It.IsAny<string>(), It.IsAny<string>()), Times.Never); } [TestCase("", "pass", "Please, enter correct username and password")] [TestCase(" ", "pass", "Please, enter correct username and password")] [TestCase(null, "pass", "Please, enter correct username and password")] public void LogInWithEmptyUserName_AlertMessageShown(string userName, string password, string message) { CreateViewModelAndLogin(userName, password); alertServiceMock.Verify(x => x.ShowAlert(It.IsAny<string>(), message)); } private LoginPageViewModel CreateViewModelAndLogin(string userName, string password) { var model = new LoginPageViewModel( authenticationServiceMock.Object, alertServiceMock.Object); model.UserName = userName; model.Password = password; model.LoginCommand.Execute(null); return model; } } And here we go: Now the goal is to write correct implementation for ViewModel's Login method and that's it. Business logic implementation private async void Login() { if (String.IsNullOrWhiteSpace(UserName) || String.IsNullOrWhiteSpace(Password)) { await alertService.ShowAlert("Warning", "Please, enter correct username and password"); } else { IsLoading = true; GoalKicker.com Xamarin.Forms Notes for Professionals 170 bool isAuthenticated = await authenticationService.Login(UserName, Password); } } And after running the tests again: Now you can keep covering your code with new tests making it more stable and regression-safe. GoalKicker.com Xamarin.Forms Notes for Professionals 171 Chapter 37: BDD Unit Testing in Xamarin.Forms Section 37.1: Simple Specow to test commands and navigation with NUnit Test Runner Why do we need this? The current way to do unit testing in Xamarin.Forms is via a platform runner, so your test will have to run within an ios, android, windows or mac UI environment : Running Tests in the IDE Xamarin also provides awesome UI testing with the Xamarin.TestCloud oering, but when wanting to implement BDD dev practices, and have the ability to test ViewModels and Commands, while running cheaply on a unit test runners locally or on a build server, there is not built in way. I developed a library that allows to use Specow with Xamarin.Forms to easily implement your features from your Scenarios denitions up to the ViewModel, independently of any MVVM framework used for the App (such as XLabs, MVVMCross, Prism) If you are new to BDD, check Specow out. 
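Before any C# is written, the behaviour itself lives in a plain Gherkin .feature file added to the test project. The sketch below is illustrative only (the feature name and the expected label text are made up), but its steps match the step definitions shown further down in this section:

Feature: Main page
  Scenario: Tapping the button updates the label
    Given I am on the main view
    When I click on the button
    Then I can see a Label with text "Hello from the ViewModel"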
Usage: If you don't have it yet, install the specow visual studio extension from here (or from you visual studio IDE): https://visualstudiogallery.msdn.microsoft.com/c74211e7-cb6e-4dfa-855d-df0ad4a37dd6 Add a Class library to your Xamarin.Forms project. That's your test project. Add SpecFlow.Xamarin.Forms package from nuget to your test projects. Add a class to you test project that inherits 'TestApp', and register your views/viewmodels pairs as well as adding any DI registration, as per below: public class DemoAppTest : TestApp { protected override void SetViewModelMapping() { TestViewFactory.EnableCache = false; // register your views / viewmodels below RegisterView<MainPage, MainViewModel>(); } protected override void InitialiseContainer() { // add any di registration here // Resolver.Instance.Register<TInterface, TType>(); base.InitialiseContainer(); } } Add a SetupHook class to your test project, in order to add you Specow hooks. You will need to bootstrap the test application as per below, providing the class you created above, and the your app initial viewmodel: [Binding] public class SetupHooks : TestSetupHooks GoalKicker.com Xamarin.Forms Notes for Professionals 172 { /// <summary> /// The before scenario. /// </summary> [BeforeScenario] public void BeforeScenario() { // bootstrap test app with your test app and your starting viewmodel new TestAppBootstrap().RunApplication<DemoAppTest, MainViewModel>(); } } You will need to add a catch block to your xamarin.forms views codebehind in order to ignore xamarin.forms framework forcing you to run the app ui (something we don't want to do): public YourView() { try { InitializeComponent(); } catch (InvalidOperationException soe) { if (!soe.Message.Contains("MUST")) throw; } } Add a specow feature to your project (using the vs specow templates shipped with the vs specow extension) Create/Generate a step class that inherits TestStepBase, passing the scenarioContext parameter to the base. Use the navigation services and helpers to navigate, execute commands, and test your view models: [Binding] public class GeneralSteps : TestStepBase { public GeneralSteps(ScenarioContext scenarioContext) : base(scenarioContext) { // you need to instantiate your steps by passing the scenarioContext to the base } [Given(@"I am on the main view")] public void GivenIAmOnTheMainView() { Resolver.Instance.Resolve<INavigationService>().PushAsync<MainViewModel>(); Resolver.Instance.Resolve<INavigationService>().CurrentViewModelType.ShouldEqualType<MainViewModel> (); } [When(@"I click on the button")] public void WhenIClickOnTheButton() { GetCurrentViewModel<MainViewModel>().GetTextCommand.Execute(null); } [Then(@"I can see a Label with text ""(.*)""")] public void ThenICanSeeALabelWithText(string text) GoalKicker.com Xamarin.Forms Notes for Professionals 173 { GetCurrentViewModel<MainViewModel>().Text.ShouldEqual(text); } } Section 37.2: Advanced Usage for MVVM To add to the rst example, in order to test navigation statements that occurs within the application, we need to provide the ViewModel with a hook to the Navigation. To achieve this: Add the package SpecFlow.Xamarin.Forms.IViewModel from nuget to your PCL Xamarin.Forms project Implement the IViewModel interface in your ViewModel. 
This will simply expose the Xamarin.Forms INavigation property:

public class MainViewModel : INotifyPropertyChanged, IViewModel.IViewModel
{
    public INavigation Navigation { get; set; }
}

The test framework will pick that up and manage internal navigation. You can use any MVVM framework for your application (such as XLabs, MVVMCross or Prism, to name a few). As long as the IViewModel interface is implemented in your ViewModel, the framework will pick it up.
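With the Navigation property in place, a view-model command can drive navigation directly and still be exercised from a SpecFlow step. The snippet below is a minimal sketch rather than part of the library: GoToDetailCommand and DetailPage are hypothetical names, and DetailPage is assumed to be registered with its view model via RegisterView in your test app class.

// Inside MainViewModel: a hypothetical command for illustration. The test
// framework supplies Navigation, so a step can execute this command and then
// assert the resulting view model through the navigation service, as in the
// steps of the previous section.
public ICommand GoToDetailCommand =>
    new Command(async () => await Navigation.PushAsync(new DetailPage()));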
github.com/blockloop/scan
go
Go
README [¶](#section-readme) --- ### Scan [![GoDoc](https://godoc.org/github.com/blockloop/scan?status.svg)](https://godoc.org/github.com/blockloop/scan) [![Travis](https://img.shields.io/travis/blockloop/scan.svg)](https://travis-ci.org/blockloop/scan) [![Coveralls github](https://img.shields.io/coveralls/github/blockloop/scan.svg)](https://coveralls.io/github/blockloop/scan) [![Report Card](https://goreportcard.com/badge/github.com/blockloop/scan)](https://goreportcard.com/report/github.com/blockloop/scan) [![Dependabot Status](https://api.dependabot.com/badges/status?host=github&repo=blockloop/scan)](https://dependabot.com) Scan provides the ability to use database/sql/rows to scan datasets directly to structs or slices. For the most comprehensive and up-to-date docs see the [godoc](https://godoc.org/github.com/blockloop/scan) #### Examples ##### Multiple Rows ``` db, _ := sql.Open("sqlite3", "database.sqlite") rows, _ := db.Query("SELECT * FROM persons") var persons []Person err := scan.Rows(&persons, rows) fmt.Printf("%#v", persons) // []Person{ // {ID: 1, Name: "brett"}, // {ID: 2, Name: "fred"}, // {ID: 3, Name: "stacy"}, // } ``` ##### Multiple rows of primitive type ``` rows, _ := db.Query("SELECT name FROM persons") var names []string err := scan.Rows(&names, rows) fmt.Printf("%#v", names) // []string{ // "brett", // "fred", // "stacy", // } ``` ##### Single row ``` rows, _ := db.Query("SELECT * FROM persons where name = 'brett' LIMIT 1") var person Person err := scan.Row(&person, rows) fmt.Printf("%#v", person) // Person{ ID: 1, Name: "brett" } ``` ##### Scalar value ``` rows, _ := db.Query("SELECT age FROM persons where name = 'brett' LIMIT 1") var age int8 err := scan.Row(&age, row) fmt.Printf("%d", age) // 100 ``` ##### Strict Scanning Both `Rows` and `Row` have strict alternatives to allow scanning to structs *strictly* based on their `db` tag. To avoid unwanted behavior you can use `RowsStrict` or `RowStrict` to scan without using field names. Any fields not tagged with the `db` tag will be ignored even if columns are found that match the field names. ##### Columns `Columns` scans a struct and returns a string slice of the assumed column names based on the `db` tag or the struct field name respectively. To avoid assumptions, use `ColumnsStrict` which will *only* return the fields tagged with the `db` tag. Both `Columns` and `ColumnsStrict` are variadic. They both accept a string slice of column names to exclude from the list. It is recommended that you cache this slice. ``` package main type User struct { ID int64 Name string Age int BirthDate string `db:"bday"` Zipcode string `db:"-"` Store struct { ID int // ... } } var nobody = new(User) var userInsertCols = scan.Columns(nobody, "ID") // []string{ "Name", "Age", "bday" } var userSelectCols = scan.Columns(nobody) // []string{ "ID", "Name", "Age", "bday" } ``` ##### Values `Values` scans a struct and returns the values associated with the provided columns. Values uses a `sync.Map` to cache fields of structs to greatly improve the performance of scanning types. The first time a struct is scanned it's **exported** fields locations are cached. When later retrieving values from the same struct it should be much faster. See [Benchmarks](#readme-Benchmarks) below. 
``` user := &User{ ID: 1, Name: "Brett", Age: 100, } vals := scan.Values([]string{"ID", "Name"}, user) // []interface{}{ 1, "Brett" } ``` I find that the usefulness of both Values and Columns lies within using a library such as [sq](https://github.com/Masterminds/squirrel "Squirrel"). ``` sq.Insert(userCols...). Into("users"). Values(scan.Values(userCols, &user)...) ``` #### Configuration AutoClose: Automatically call `rows.Close()` after scan completes (default true) #### Why While many other projects support similar features (i.e. [sqlx](https://github.com/jmoiron/sqlx)) scan allows you to use any database lib such as the stdlib or [squirrel](https://github.com/Masterminds/squirrel "Squirrel") to write fluent SQL statements and pass the resulting `rows` to `scan` for scanning. #### Benchmarks ``` λ go test -bench=. -benchtime=10s ./... goos: linux goarch: amd64 pkg: github.com/blockloop/scan BenchmarkColumnsLargeStruct-8 50000000 272 ns/op BenchmarkValuesLargeStruct-8 2000000 8611 ns/op BenchmarkScanRowOneField-8 2000000 8528 ns/op BenchmarkScanRowFiveFields-8 1000000 12234 ns/op BenchmarkScanTenRowsOneField-8 1000000 16802 ns/op BenchmarkScanTenRowsTenFields-8 100000 104587 ns/op PASS ok github.com/blockloop/scan 116.055s ``` Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package scan provides functionality for scanning database/sql rows into slices, structs, and primitive types dynamically ### Index [¶](#pkg-index) * [Variables](#pkg-variables) * [func Columns(v interface{}, excluded ...string) ([]string, error)](#Columns) * [func ColumnsStrict(v interface{}, excluded ...string) ([]string, error)](#ColumnsStrict) * [func Row(v interface{}, r RowsScanner) error](#Row) * [func RowStrict(v interface{}, r RowsScanner) error](#RowStrict) * [func Rows(v interface{}, r RowsScanner) (outerr error)](#Rows) * [func RowsStrict(v interface{}, r RowsScanner) (outerr error)](#RowsStrict) * [func Values(cols []string, v interface{}) ([]interface{}, error)](#Values) * [type RowsScanner](#RowsScanner) #### Examples [¶](#pkg-examples) * [Columns](#example-Columns) * [Columns (Exclude)](#example-Columns-Exclude) * [ColumnsStrict](#example-ColumnsStrict) * [Row](#example-Row) * [Row (Scalar)](#example-Row-Scalar) * [RowStrict](#example-RowStrict) * [Rows](#example-Rows) * [Rows (Primitive)](#example-Rows-Primitive) * [RowsStrict](#example-RowsStrict) * [Values](#example-Values) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) ``` var ( // ErrTooManyColumns indicates that a select query returned multiple columns and // attempted to bind to a slice of a primitive type. For example, trying to bind // `select col1, col2 from mutable` to []string ErrTooManyColumns = [errors](/errors).[New](/errors#New)("too many columns returned for primitive slice") // ErrSliceForRow occurs when trying to use Row on a slice ErrSliceForRow = [errors](/errors).[New](/errors#New)("cannot scan Row into slice") // AutoClose is true when scan should automatically close Scanner when the scan // is complete. 
If you set it to false, then you must defer rows.Close() manually AutoClose = [true](/builtin#true) ) ``` ### Functions [¶](#pkg-functions) #### func [Columns](https://github.com/blockloop/scan/blob/v1.3.0/columns.go#L18) [¶](#Columns) added in v1.1.0 ``` func Columns(v interface{}, excluded ...[string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error)) ``` Columns scans a struct and returns a list of strings that represent the assumed column names based on the db struct tag, or the field name. Any field or struct tag that matches a string within the excluded list will be excluded from the result Example [¶](#example-Columns) ``` package main import ( "fmt" "github.com/blockloop/scan" ) func main() { var person struct { ID int `db:"person_id"` Name string } cols, _ := scan.Columns(&person) fmt.Printf("%+v", cols) } ``` ``` Output: [person_id Name] ``` Share Format Run Example (Exclude) [¶](#example-Columns-Exclude) ``` package main import ( "fmt" "github.com/blockloop/scan" ) func main() { var person struct { ID int `db:"id"` Name string `db:"name"` Age string `db:"-"` } cols, _ := scan.Columns(&person) fmt.Printf("%+v", cols) } ``` ``` Output: [id name] ``` Share Format Run #### func [ColumnsStrict](https://github.com/blockloop/scan/blob/v1.3.0/columns.go#L25) [¶](#ColumnsStrict) added in v1.1.0 ``` func ColumnsStrict(v interface{}, excluded ...[string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error)) ``` ColumnsStrict is identical to Columns, but it only searches struct tags and excludes fields not tagged with the db struct tag Example [¶](#example-ColumnsStrict) ``` package main import ( "fmt" "github.com/blockloop/scan" ) func main() { var person struct { ID int `db:"id"` Name string Age string `db:"age"` } cols, _ := scan.ColumnsStrict(&person) fmt.Printf("%+v", cols) } ``` ``` Output: [id age] ``` Share Format Run #### func [Row](https://github.com/blockloop/scan/blob/v1.3.0/scanner.go#L32) [¶](#Row) ``` func Row(v interface{}, r [RowsScanner](#RowsScanner)) [error](/builtin#error) ``` Row scans a single row into a single variable. It requires that you use db.Query and not db.QueryRow, because QueryRow does not return column names. There is no performance impact in using one over the other. QueryRow only defers returning err until Scan is called, which is an unnecessary optimization for this library. 
Example [¶](#example-Row) ``` package main import ( "database/sql" "encoding/json" "os" "testing" "github.com/blockloop/scan" _ "github.com/mattn/go-sqlite3" "github.com/stretchr/testify/require" ) func mustDB(t testing.TB, schema string) *sql.DB { db, err := sql.Open("sqlite3", ":memory:") require.NoError(t, err) _, err = db.Exec(schema) require.NoError(t, err) return db } func exampleDB() *sql.DB { return mustDB(&testing.T{}, `CREATE TABLE persons ( id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR(120) NOT NULL DEFAULT '' ); INSERT INTO PERSONS (name) VALUES ('brett'), ('fred');`) } func main() { db := exampleDB() rows, err := db.Query("SELECT id,name FROM persons LIMIT 1") if err != nil { panic(err) } var person struct { ID int `db:"id"` Name string `db:"name"` } err = scan.Row(&person, rows) if err != nil { panic(err) } json.NewEncoder(os.Stdout).Encode(&person) } ``` ``` Output: {"ID":1,"Name":"brett"} ``` Share Format Run Example (Scalar) [¶](#example-Row-Scalar) ``` package main import ( "database/sql" "fmt" "testing" "github.com/blockloop/scan" _ "github.com/mattn/go-sqlite3" "github.com/stretchr/testify/require" ) func mustDB(t testing.TB, schema string) *sql.DB { db, err := sql.Open("sqlite3", ":memory:") require.NoError(t, err) _, err = db.Exec(schema) require.NoError(t, err) return db } func exampleDB() *sql.DB { return mustDB(&testing.T{}, `CREATE TABLE persons ( id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR(120) NOT NULL DEFAULT '' ); INSERT INTO PERSONS (name) VALUES ('brett'), ('fred');`) } func main() { db := exampleDB() rows, err := db.Query("SELECT name FROM persons LIMIT 1") if err != nil { panic(err) } var name string err = scan.Row(&name, rows) if err != nil { panic(err) } fmt.Printf("%q", name) } ``` ``` Output: "brett" ``` Share Format Run #### func [RowStrict](https://github.com/blockloop/scan/blob/v1.3.0/scanner.go#L38) [¶](#RowStrict) added in v1.3.0 ``` func RowStrict(v interface{}, r [RowsScanner](#RowsScanner)) [error](/builtin#error) ``` RowStrict scans a single row into a single variable. 
It is identical to Row, but it ignores fields that do not have a db tag Example [¶](#example-RowStrict) ``` package main import ( "database/sql" "encoding/json" "os" "testing" "github.com/blockloop/scan" _ "github.com/mattn/go-sqlite3" "github.com/stretchr/testify/require" ) func mustDB(t testing.TB, schema string) *sql.DB { db, err := sql.Open("sqlite3", ":memory:") require.NoError(t, err) _, err = db.Exec(schema) require.NoError(t, err) return db } func exampleDB() *sql.DB { return mustDB(&testing.T{}, `CREATE TABLE persons ( id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR(120) NOT NULL DEFAULT '' ); INSERT INTO PERSONS (name) VALUES ('brett'), ('fred');`) } func main() { db := exampleDB() rows, err := db.Query("SELECT id,name FROM persons LIMIT 1") if err != nil { panic(err) } var person struct { ID int Name string `db:"name"` } err = scan.RowStrict(&person, rows) if err != nil { panic(err) } json.NewEncoder(os.Stdout).Encode(&person) } ``` ``` Output: {"ID":0,"Name":"brett"} ``` Share Format Run #### func [Rows](https://github.com/blockloop/scan/blob/v1.3.0/scanner.go#L72) [¶](#Rows) ``` func Rows(v interface{}, r [RowsScanner](#RowsScanner)) (outerr [error](/builtin#error)) ``` Rows scans sql rows into a slice (v) Example [¶](#example-Rows) ``` package main import ( "database/sql" "encoding/json" "os" "testing" "github.com/blockloop/scan" _ "github.com/mattn/go-sqlite3" "github.com/stretchr/testify/require" ) func mustDB(t testing.TB, schema string) *sql.DB { db, err := sql.Open("sqlite3", ":memory:") require.NoError(t, err) _, err = db.Exec(schema) require.NoError(t, err) return db } func exampleDB() *sql.DB { return mustDB(&testing.T{}, `CREATE TABLE persons ( id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR(120) NOT NULL DEFAULT '' ); INSERT INTO PERSONS (name) VALUES ('brett'), ('fred');`) } func main() { db := exampleDB() rows, err := db.Query("SELECT id,name FROM persons ORDER BY name") if err != nil { panic(err) } var persons []struct { ID int `db:"id"` Name string `db:"name"` } err = scan.Rows(&persons, rows) if err != nil { panic(err) } json.NewEncoder(os.Stdout).Encode(&persons) } ``` ``` Output: [{"ID":1,"Name":"brett"},{"ID":2,"Name":"fred"}] ``` Share Format Run Example (Primitive) [¶](#example-Rows-Primitive) ``` package main import ( "database/sql" "encoding/json" "os" "testing" "github.com/blockloop/scan" _ "github.com/mattn/go-sqlite3" "github.com/stretchr/testify/require" ) func mustDB(t testing.TB, schema string) *sql.DB { db, err := sql.Open("sqlite3", ":memory:") require.NoError(t, err) _, err = db.Exec(schema) require.NoError(t, err) return db } func exampleDB() *sql.DB { return mustDB(&testing.T{}, `CREATE TABLE persons ( id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR(120) NOT NULL DEFAULT '' ); INSERT INTO PERSONS (name) VALUES ('brett'), ('fred');`) } func main() { db := exampleDB() rows, err := db.Query("SELECT name FROM persons ORDER BY name") if err != nil { panic(err) } var names []string err = scan.Rows(&names, rows) if err != nil { panic(err) } json.NewEncoder(os.Stdout).Encode(&names) } ``` ``` Output: ["brett","fred"] ``` Share Format Run #### func [RowsStrict](https://github.com/blockloop/scan/blob/v1.3.0/scanner.go#L77) [¶](#RowsStrict) added in v1.3.0 ``` func RowsStrict(v interface{}, r [RowsScanner](#RowsScanner)) (outerr [error](/builtin#error)) ``` RowsStrict scans sql rows into a slice (v) only using db tags Example [¶](#example-RowsStrict) ``` package main import ( "database/sql" "encoding/json" "os" "testing" 
"github.com/blockloop/scan" _ "github.com/mattn/go-sqlite3" "github.com/stretchr/testify/require" ) func mustDB(t testing.TB, schema string) *sql.DB { db, err := sql.Open("sqlite3", ":memory:") require.NoError(t, err) _, err = db.Exec(schema) require.NoError(t, err) return db } func exampleDB() *sql.DB { return mustDB(&testing.T{}, `CREATE TABLE persons ( id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR(120) NOT NULL DEFAULT '' ); INSERT INTO PERSONS (name) VALUES ('brett'), ('fred');`) } func main() { db := exampleDB() rows, err := db.Query("SELECT id,name FROM persons ORDER BY name") if err != nil { panic(err) } var persons []struct { ID int Name string `db:"name"` } err = scan.Rows(&persons, rows) if err != nil { panic(err) } json.NewEncoder(os.Stdout).Encode(&persons) } ``` ``` Output: [{"ID":0,"Name":"brett"},{"ID":0,"Name":"fred"}] ``` Share Format Run #### func [Values](https://github.com/blockloop/scan/blob/v1.3.0/values.go#L14) [¶](#Values) added in v1.2.0 ``` func Values(cols [][string](/builtin#string), v interface{}) ([]interface{}, [error](/builtin#error)) ``` Values scans a struct and returns the values associated with the columns provided. Only simple value types are supported (i.e. Bool, Ints, Uints, Floats, Interface, String) Example [¶](#example-Values) ``` package main import ( "fmt" "github.com/blockloop/scan" ) func main() { person := struct { ID int `db:"id"` Name string `db:"name"` }{ ID: 1, Name: "Brett", } cols := []string{"id", "name"} vals, _ := scan.Values(cols, &person) fmt.Printf("%+v", vals) } ``` ``` Output: [1 Brett] ``` Share Format Run ### Types [¶](#pkg-types) #### type [RowsScanner](https://github.com/blockloop/scan/blob/v1.3.0/interface.go#L7) [¶](#RowsScanner) ``` type RowsScanner interface { Close() [error](/builtin#error) Scan(dest ...interface{}) [error](/builtin#error) Columns() ([][string](/builtin#string), [error](/builtin#error)) ColumnTypes() ([]*[sql](/database/sql).[ColumnType](/database/sql#ColumnType), [error](/builtin#error)) Err() [error](/builtin#error) Next() [bool](/builtin#bool) } ``` RowsScanner is a database scanner for many rows. It is most commonly the result of *sql.DB Query(...)
github.com/jellydator/ttlcache
go
Go
README [¶](#section-readme) --- ### TTLCache - an in-memory cache with expiration TTLCache is a simple key/value cache in golang with the following functions: 1. Thread-safe 2. Individual expiring time or global expiring time, you can choose 3. Auto-Extending expiration on `Get` -or- DNS style TTL, see `SkipTtlExtensionOnHit(bool)` 4. Fast and memory efficient 5. Can trigger callback on key expiration [![Build Status](https://travis-ci.org/ReneKroon/ttlcache.svg?branch=master)](https://travis-ci.org/ReneKroon/ttlcache) ##### Usage ``` import ( "time" "fmt" "github.com/ReneKroon/ttlcache" ) func main () { newItemCallback := func(key string, value interface{}) { fmt.Printf("New key(%s) added\n", key) } checkExpirationCallback := func(key string, value interface{}) bool { if key == "key1" { // if the key equals "key1", the value // will not be allowed to expire return false } // all other values are allowed to expire return true } expirationCallback := func(key string, value interface{}) { fmt.Printf("This key(%s) has expired\n", key) } cache := ttlcache.NewCache() cache.SetTTL(time.Duration(10 * time.Second)) cache.SetExpirationCallback(expirationCallback) cache.Set("key", "value") cache.SetWithTTL("keyWithTTL", "value", 10 * time.Second) value, exists := cache.Get("key") count := cache.Count() result := cache.Remove("key") } ``` ##### Original Project TTLCache was forked from [wunderlist/ttlcache](https://github.com/wunderlist/ttlcache) to add extra functions not avaiable in the original scope. The main differences are: 1. A item can store any kind of object, previously, only strings could be saved 2. Optionally, you can add callbacks to: check if a value should expire, be notified if a value expires, and be notified when new values are added to the cache 3. The expiration can be either global or per item 4. Can exist items without expiration time 5. Expirations and callbacks are realtime. Don't have a pooling time to check anymore, now it's done with a heap. Documentation [¶](#section-documentation) --- ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [type Cache](#Cache) * + [func NewCache() *Cache](#NewCache) * + [func (cache *Cache) Count() int](#Cache.Count) + [func (cache *Cache) Get(key string) (interface{}, bool)](#Cache.Get) + [func (cache *Cache) Purge()](#Cache.Purge) + [func (cache *Cache) Remove(key string) bool](#Cache.Remove) + [func (cache *Cache) Set(key string, data interface{})](#Cache.Set) + [func (cache *Cache) SetCheckExpirationCallback(callback checkExpireCallback)](#Cache.SetCheckExpirationCallback) + [func (cache *Cache) SetExpirationCallback(callback expireCallback)](#Cache.SetExpirationCallback) + [func (cache *Cache) SetNewItemCallback(callback expireCallback)](#Cache.SetNewItemCallback) + [func (cache *Cache) SetTTL(ttl time.Duration)](#Cache.SetTTL) + [func (cache *Cache) SetWithTTL(key string, data interface{}, ttl time.Duration)](#Cache.SetWithTTL) + [func (cache *Cache) SkipTtlExtensionOnHit(value bool)](#Cache.SkipTtlExtensionOnHit) ### Constants [¶](#pkg-constants) ``` const ( // ItemNotExpire Will avoid the item being expired by TTL, but can still be exired by callback etc. ItemNotExpire [time](/time).[Duration](/time#Duration) = -1 // ItemExpireWithGlobalTTL will use the global TTL when set. ItemExpireWithGlobalTTL [time](/time).[Duration](/time#Duration) = 0 ) ``` ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) This section is empty. 
### Types [¶](#pkg-types) #### type [Cache](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L15) [¶](#Cache) ``` type Cache struct { // contains filtered or unexported fields } ``` Cache is a synchronized map of items that can auto-expire once stale #### func [NewCache](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L238) [¶](#NewCache) ``` func NewCache() *[Cache](#Cache) ``` NewCache is a helper to create instance of the Cache struct #### func (*Cache) [Count](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L192) [¶](#Cache.Count) ``` func (cache *[Cache](#Cache)) Count() [int](/builtin#int) ``` Count returns the number of items in the cache #### func (*Cache) [Get](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L162) [¶](#Cache.Get) ``` func (cache *[Cache](#Cache)) Get(key [string](/builtin#string)) (interface{}, [bool](/builtin#bool)) ``` Get is a thread-safe way to lookup items Every lookup, also touches the item, hence extending it's life #### func (*Cache) [Purge](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L230) [¶](#Cache.Purge) ``` func (cache *[Cache](#Cache)) Purge() ``` Purge will remove all entries #### func (*Cache) [Remove](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L177) [¶](#Cache.Remove) ``` func (cache *[Cache](#Cache)) Remove(key [string](/builtin#string)) [bool](/builtin#bool) ``` #### func (*Cache) [Set](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L123) [¶](#Cache.Set) ``` func (cache *[Cache](#Cache)) Set(key [string](/builtin#string), data interface{}) ``` Set is a thread-safe way to add new items to the map #### func (*Cache) [SetCheckExpirationCallback](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L213) [¶](#Cache.SetCheckExpirationCallback) ``` func (cache *[Cache](#Cache)) SetCheckExpirationCallback(callback checkExpireCallback) ``` SetCheckExpirationCallback sets a callback that will be called when an item is about to expire in order to allow external code to decide whether the item expires or remains for another TTL cycle #### func (*Cache) [SetExpirationCallback](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L207) [¶](#Cache.SetExpirationCallback) ``` func (cache *[Cache](#Cache)) SetExpirationCallback(callback expireCallback) ``` SetExpirationCallback sets a callback that will be called when an item expires #### func (*Cache) [SetNewItemCallback](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L218) [¶](#Cache.SetNewItemCallback) ``` func (cache *[Cache](#Cache)) SetNewItemCallback(callback expireCallback) ``` SetNewItemCallback sets a callback that will be called when a new item is added to the cache #### func (*Cache) [SetTTL](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L199) [¶](#Cache.SetTTL) ``` func (cache *[Cache](#Cache)) SetTTL(ttl [time](/time).[Duration](/time#Duration)) ``` #### func (*Cache) [SetWithTTL](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L128) [¶](#Cache.SetWithTTL) ``` func (cache *[Cache](#Cache)) SetWithTTL(key [string](/builtin#string), data interface{}, ttl [time](/time).[Duration](/time#Duration)) ``` SetWithTTL is a thread-safe way to add new items to the map with individual ttl #### func (*Cache) [SkipTtlExtensionOnHit](https://github.com/jellydator/ttlcache/blob/v1.5.0/cache.go#L225) [¶](#Cache.SkipTtlExtensionOnHit) ``` func (cache *[Cache](#Cache)) SkipTtlExtensionOnHit(value [bool](/builtin#bool)) ``` SkipTtlExtensionOnHit allows the user to change the 
cache behaviour. When this flag is set to true it will no longer extend TTL of items when they are retrieved using Get, or when their expiration condition is evaluated using SetCheckExpirationCallback.
RweaveExtra
cran
R
Package ‘RweaveExtra’ October 12, 2022 Type Package Title Sweave Drivers with Extra Tricks Up their Sleeve Version 1.0-0 Date 2021-05-27 Description Weave and tangle drivers for Sweave extending the standard drivers with additional code chunk options. Currently, these are only options to completely ignore, or skip, code chunks on weaving, tangling, or both. Chunks ignored on weaving are not parsed and are written out verbatim on tangling. Chunks ignored on tangling are processed as usual on weaving, but completely left out of the tangled scripts. Depends R (>= 4.1-0) Imports utils License GPL (>= 2) URL https://gitlab.com/vigou3/RweaveExtra BugReports https://gitlab.com/vigou3/RweaveExtra/-/issues Encoding UTF-8 NeedsCompilation no Author <NAME> [cre, aut] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2021-05-28 09:30:12 UTC R topics documented: RweaveExtra-packag... 2 RtangleExtr... 2 RweaveExtraLate... 4 RweaveExtra-package Sweave Drivers with Extra Tricks Up their Sleeve Description Weave and tangle drivers for Sweave extending the standard drivers with additional code chunk options. Currently, these are only options to completely ignore, or skip, code chunks on weaving, tangling, or both. Chunks ignored on weaving are not parsed and are written out verbatim on tangling. Chunks ignored on tangling are processed as usual on weaving, but completely left out of the tangled scripts. Details The RweaveExtraLatex and RtangleExtra drivers extend the standard Sweave drivers RweaveLatex and Rtangle, respectively. They are selected through the driver argument of Sweave. Currently, the drivers provide additional options to completely ignore code chunks on weaving, tangling, or both. Chunks ignored on weaving are not parsed and are written out verbatim on tangling. Chunks ignored on tangling are processed as usual on weaving, but completely left out of the tangled scripts. In a literate programming workflow, the additional options allow to include code chunks in a file such as: • code that is not parsable by R (say, because of errors inserted for educational purposes); • code in another programming language entirely (say, a shell script to start an analysis); • code for a Shiny app. With the standard drivers, using option eval = FALSE results in code being commented out in tan- gled scripts files. Furthermore, there is no provision to process a chunk on weaving but leave it out on tangling. Author(s) NA Maintainer: NA RtangleExtra R Driver for Stangle with Additional Options Description A driver for Sweave extending the standard driver Rtangle with additional code chunk options. Usage RtangleExtra() RtangleExtraSetup(file, syntax, output = NULL, annotate = TRUE, split = FALSE, quiet = FALSE, drop.evalFALSE = FALSE, ignore.on.tangle = FALSE, ignore = FALSE, ...) Arguments file Name of Sweave source file. See the description of the corresponding argument of Sweave. syntax An object of class SweaveSyntax. See Rtangle. output Name of output file used unless split = TRUE. See Rtangle. annotate A logical or function. See Rtangle. split Split output into a file for each code chunk? quiet Logical to suppress all progress messages. drop.evalFALSE See Rtangle. ignore.on.tangle If TRUE, the code chunk is ignored entirely. ignore An alternative way to set both ignore.on.weave of RweaveExtraLatex and ignore.on.tangle. ... See RweaveLatex. Details Chunks ignored on tangling are not written out to script files, but they are processed normally on weaving (unless ignore=TRUE). 
If ignore.on.tangle or ignore is FALSE, the code chunk is processed using the standard driver Rtangle with its options. Value Named list of five functions; see Sweave or the ‘Sweave User Manual’ vignette in the utils pack- age. Author(s) <NAME>, based on work by <NAME> and R-core. See Also RweaveExtraLatex, Rtangle, Sweave. Examples testfile <- system.file("examples", "example-extra.Rnw", package = "RweaveExtra") ## Check the contents of the file if(interactive()) file.show(testfile) ## Tangle the file in the current working directory Stangle(testfile, driver = RtangleExtra()) ## View tangled file if(interactive()) file.show("example-extra.R") ## Use 'ignore.on.weave=TRUE' with the 'split=TRUE' option of Stangle ## to extract the shell script in a separate file Stangle(testfile, split = TRUE, annotate = FALSE) file.rename("example-extra-hello.R", "example-extra-hello.sh") # change extension if(interactive()) file.show("example-extra-hello.sh") RweaveExtraLatex R/LaTeX Driver for Sweave with Additional Options Description A driver for Sweave extending the standard driver RweaveLatex with additional code chunk options. Usage RweaveExtraLatex() RweaveExtraLatexSetup(file, syntax, output = NULL, quiet = FALSE, debug = FALSE, stylepath, ignore.on.weave = FALSE, ignore = FALSE, ...) Arguments file Name of Sweave source file. See the description of the corresponding argument of Sweave. syntax An object of class SweaveSyntax. See RweaveLatex. output Name of output file. See RweaveLatex. quiet If TRUE all progress messages are suppressed. See RweaveLatex. debug If TRUE, input and output of all code chunks is copied to the console. See RweaveLatex. stylepath See RweaveLatex. ignore.on.weave If TRUE, the code chunk is ignored entirely, i.e., neither parsed nor evaluated. ignore An alternative way to set both ignore.on.weave and ignore.on.tangle of RtangleExtra. ... See RweaveLatex. Details Chunks ignored on weaving are not parsed and are not evaluated, but they are written out on tangling as normal code chunks (unless ignore=TRUE). If ignore.on.weave or ignore is FALSE, the code chunk is processed using the standard driver RweaveLatex with its options. Value Named list of five functions; see Sweave or the ‘Sweave User Manual’ vignette in the utils pack- age. Author(s) <NAME>, based on work by <NAME> and R-core. See Also RtangleExtra, RweaveLatex, Sweave. Examples testfile <- system.file("examples", "example-extra.Rnw", package = "RweaveExtra") ## Check the contents of the file if(interactive()) file.show(testfile) ## Weave, then tangle the file in the current working directory Sweave(testfile, driver = RweaveExtraLatex()) Stangle(testfile, driver = RtangleExtra()) ## View weaved and tangled files if(interactive()) file.show("example-extra.tex") if(interactive()) file.show("example-extra.R")
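To make the new chunk options concrete, here is a minimal round-trip sketch in R (the file name and chunk labels are hypothetical, chosen only for illustration): it writes a small Sweave source file and then weaves and tangles it with the extended drivers. The chunk marked ignore.on.weave=TRUE is never parsed on weaving but is copied verbatim on tangling, while the chunk marked ignore.on.tangle=TRUE is evaluated on weaving but left out of the tangled script.
library(RweaveExtra)
## Hypothetical Sweave source illustrating the extra chunk options
rnw <- c("\\documentclass{article}",
         "\\begin{document}",
         "<<keep>>=",
         "x <- 1:10",
         "mean(x)",
         "@",
         "<<not-R, ignore.on.weave=TRUE>>=",
         "echo 'this chunk is never parsed on weaving'",
         "@",
         "<<weave-only, ignore.on.tangle=TRUE>>=",
         "sessionInfo()",
         "@",
         "\\end{document}")
writeLines(rnw, "extra-demo.Rnw")
## Weave, then tangle with the extended drivers
Sweave("extra-demo.Rnw", driver = RweaveExtraLatex())
Stangle("extra-demo.Rnw", driver = RtangleExtra())
if (interactive()) file.show("extra-demo.R")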
QAIG
cran
R
Package ‘QAIG’ October 12, 2022 Type Package Title Automatic Item Generator for Quantitative Multiple-Choice Items Version 0.1.7 Date 2020-05-19 Maintainer <NAME> (Shubh) <<EMAIL>> Description A tool for automatic generation of sibling items from a parent item model defined by the user. It is an implementation of the process automatic item generation (AIG) focused on generating quantitative multiple-choice type of items (see Embretson, Kingston (2018) <doi:10.1111/jedm.12166>). URL https://github.com/shubh-b/QAIG BugReports https://github.com/shubh-b/QAIG/issues Depends R (>= 3.1.0) License GPL-3 Imports stringr, Formula, stats, utils Encoding UTF-8 LazyData TRUE RoxygenNote 6.1.1 Suggests knitr, rmarkdown VignetteBuilder knitr NeedsCompilation no Author <NAME> (Shubh) [aut, cre], <NAME> (Aiden) [aut] Repository CRAN Date/Publication 2020-05-20 17:20:05 UTC R topics documented: itemge... 2 itemgen Automatic item generator from a parent item model. Description itemgen function generates group of sibling items from a parent item model defined by user. Usage itemgen(stem_text = stem_text, formulae = formulae, N = N, C, options_affix, ans_key, save.csv) Arguments stem_text The stem of the parent item with specified number-variables and character- variables. formulae A raw text that contains necessary formulae for the options (response choices) along with necessary values or functions that help to calculate the numeric value for each option. N A list of numeric input vector(s) for the number variable(s) in the stem. C (Optional) A list of character input vector(s) for the character variable(s) in the stem if there is any. options_affix (Optional) A list that consists of vectors with prefixes and suffixes (if there is any) of the numeric values in the options along with any text that can be included as an option of the items. ans_key (Optional) A text that indicates the correct response if it is NOT specified within formulae by ’~’. save.csv (Optional) A name text given by the user for the output .csv file, if user wants to save the newly generated sibling items in working directory as a data.frame. Details User has to develop a short schema for the parent item model that contains formation of stem along with formula for each of the response choices of the parent item as the input. Number-variables and character-variables must be specified in particular manner in the stem. Each formula must be written in new line as text and should be declared together as an object. itemgen function delivers the changes in the positions of the variables in stem and calculates the response choices automatically by taking members from the input vectors given by user in the schema. As a result, several permutations of changes in the variables lead to generation of new group of items. Please see vignette of ’QAIG’ for more details. Value This function returns a data frame that contains stem, options, answer key etc. for all the generated sibling items within its rows to display in console and within its columns in the saved .csv file if the input for the argument ’save.csv’ is given in itemgen function. Note The formula model for each option must be distinct. itemgen function does NOT permit same numeric value as two or more response choices and hence it will throw an error. If same numeric value needs to be produced as more than one response choices, those models can be made different by adding 0 or multiplying 1 with the terms in the model. 
The model for each distractor option in formulae must be written using "?". The correct response option can be written using EITHER "~" OR "?". If "?" is used, the correct response must be indicated through the function argument "ans_key" to keep the itemgen function from throwing an error. Please see sections 2 and 3 in the vignette of 'QAIG'. Author(s) <NAME> and <NAME> References <NAME>., <NAME>. (2011). The Role of Item Models in Automatic Item Generation. <NAME>., <NAME>. (2018). Automatic Item Generation: A More Efficient Process for Developing Mathematics Achievement Items? Examples stem_text <- "The sum value of all the odd [C1] between [N1] and [N2] is" n1 <- c(20, 24, 28, 32) n2 <- c(48, 52, 56) c1 <- c("natural numbers", "integers") N <- list(n1 = n1, n2 = n2) C <- list(c1 = c1) formulae <- "Option_A ? sum((n1+1) : (n2-1))/2
Option_B ~ (length(seq(n1+1, n2-1, by = 2)))*(n1+n2)/2
Option_C ? sum(n1 : n2)/2
Option_D ? (length(seq(n1, n2, by = 2)))*(n1+n2)/2
" options_affix <- list(Option_A = c("", ""), Option_B = c("", ""), Option_C = c("", ""), Option_D = c("", ""), Difficulty = "MEDIUM") # The itemgen() function can be used as: itemgen(stem_text = stem_text, formulae = formulae, N = N, C = C, options_affix = options_affix)
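As a complement to the Note above, here is a minimal sketch of the alternative key specification, in which every option is written with "?" and the key is supplied through 'ans_key'. It is hedged: it reuses stem_text, N, C and options_affix from the example above, and it assumes that 'ans_key' takes the label of the correct option.
formulae2 <- "Option_A ? sum((n1+1) : (n2-1))/2
Option_B ? (length(seq(n1+1, n2-1, by = 2)))*(n1+n2)/2
Option_C ? sum(n1 : n2)/2
Option_D ? (length(seq(n1, n2, by = 2)))*(n1+n2)/2
"
# 'ans_key' flags Option_B as the correct response since no formula uses "~"
itemgen(stem_text = stem_text, formulae = formulae2, N = N, C = C,
        options_affix = options_affix, ans_key = "Option_B")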
biosensors
cran
R
Package ‘biosensors.usc’ October 12, 2022 Encoding UTF-8 Type Package Title Distributional Data Analysis Techniques for Biosensor Data Version 1.0 Date 2022-04-11 Description Unified and user-friendly framework for using new distributional representations of biosensors data in different statistical modeling tasks: regression models, hypothesis testing, cluster analysis, visualization, and descriptive analysis. Distributional representations are a functional extension of compositional time-range metrics and we have used them successfully so far in modeling glucose profiles and accelerometer data. However, these functional representations can be used to represent any biosensor data such as ECG or medical imaging such as fMRI. <NAME>, <NAME>, <NAME>, <NAME>. ``Glucodensities: A new representation of glucose profiles using distributional data analysis'' (2021) <doi:10.1177/0962280221998064>. Imports Rcpp, graphics, stats, methods, utils, energy, fda.usc, parallelDist, osqp, truncnorm Depends R (>= 2.15) LinkingTo Rcpp, RcppArmadillo LazyLoad Yes License GPL-2 NeedsCompilation yes RoxygenNote 7.1.1 Suggests rmarkdown, knitr VignetteBuilder knitr Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-8682-6772>), <NAME> [aut] (<https://orcid.org/0000-0003-3841-4447>), <NAME> [ctb] (<https://orcid.org/0000-0001-5889-3970>) Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2022-05-05 05:40:02 UTC R topics documented: biosensors.us... 2 clusterin... 3 clustering_predictio... 3 generate_dat... 4 hypothesis_testin... 5 load_dat... 6 nadayara_predictio... 7 nadayara_regressio... 7 regmod_predictio... 8 regmod_regressio... 9 ridge_regressio... 10 biosensors.usc biosensors.usc Package Description Biosensor data have the potential to improve disease control and detection. However, the analysis of these data under free-living conditions is not feasible with current statistical techniques. To address this challenge, we introduce a new functional representation of biosensor data, termed the gluco- density, together with a data analysis framework based on distances between them. The new data analysis procedure is illustrated through an application in diabetes with continuous-time glucose monitoring (CGM) data. In this domain, we show marked improvement with respect to state-of- the-art analysis methods. In particular, our findings demonstrate that (i) the glucodensity possesses an extraordinary clinical sensitivity to capture the typical biomarkers used in the standard clinical practice in diabetes; (ii) previous biomarkers cannot accurately predict glucodensity, so that the latter is a richer source of information and; (iii) the glucodensity is a natural generalization of the time in range metric, this being the gold standard in the handling of CGM data. Furthermore, the new method overcomes many of the drawbacks of time in range metrics and provides more indepth insight into assessing glucose metabolism. Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>> clustering clustering Description Performs energy clustering with Wasserstein distance using quantile distributional representations as covariates. Usage clustering(data, clusters=3, iter_max=10, restarts=1) Arguments data A biosensor object. clusters Number of clusters. iter_max Maximum number of iterations. restarts Number of restarts. Value An object of class bclustering: data A data frame with biosensor raw data. result A kgroups object (see energy library). 
Examples # Data extracted from the paper: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., # <NAME>., <NAME>., Glucotypes reveal new patterns of glucose dysregulation, PLoS # biology 16(7), 2018. file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc") file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc") data = load_data(file1, file2) clus = clustering(data, clusters=3) clustering_prediction clustering_prediction Description Predicts the cluster of each element of the objects list Usage clustering_prediction(clustering, objects) Arguments clustering A gl.clustering object. objects Matrix of objects to cluster. Value The clusters to which these objects are assigned. generate_data generate_data Description Generates a quantile regression model V + V2 * v + tau * V2 * Q0 where Q0 is a truncated random variable, v = 2 * X, tau = 2 * X, V ~ Unif(-1, 1), V2 ~ Unif(-1, -1), V3 ~ Unif(0.8, 1.2), and E(V|X) = tau * Q0; Usage generate_data(n = 100, Qp = 100, Xp = 5) Arguments n Sample size. Qp Dimension of the quantile. Xp Dimension of covariates where X_i~Unif(0,1). Value A biosensor object: data NULL. densities NULL. quantiles A functional data object (fdata) with the empirical quantile estimation. variables A data frame with Xp covariates. Examples data = generate_data(n=100, Qp=100, Xp=5) names(data) head(data$quantiles) head(data$variables) plot(data$quantiles, main="Quantile curves") hypothesis_testing hypothesis_testing Description Hypothesis testing between two random samples of distributional representations to detect differ- ences in scale and localization (ANOVA test) or distributional differences (Energy distance). Usage hypothesis_testing(data1, data2, permutations=100) Arguments data1 A biosensor object. First population. data2 A biosensor object. Second population. permutations Number of permutations used in the energy distance calibration test. Value An object of class biotest: p1_mean Quantile mean of the first population. p1_variance Quantile variance of the first population. p2_mean Quantile mean of the second population. p2_variance Quantile variance of the second population. energy_pvalue P-value of the energy distance test. anova_pvalue P-value of the ANOvA-Fréchet test. Examples # Data extracted from the paper: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., # <NAME>., <NAME>., Glucotypes reveal new patterns of glucose dysregulation, PLoS # biology 16(7), 2018. file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc") file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc") data1 = load_data(file1, file2) file3 = system.file("extdata", "data_2.csv", package = "biosensors.usc") file4 = system.file("extdata", "variables_2.csv", package = "biosensors.usc") data2 = load_data(file3, file4) htest = hypothesis_testing(data1, data2) load_data load_data Description R function to read biosensors data from a csv files. Usage load_data(filename_fdata, filename_variables = NULL) Arguments filename_fdata A csv file with the functional data. The csv file must have long format with, at least, the following three columns: id, time, and value, where the id identifies the individual, the time indicates the moment in which the data was captured, and the value is a monitor measure. filename_variables A csv file with the clinical variables. The csv file contains a row per individual and must have a column id identifying this individual. Value A biosensor object: data A data frame with biosensor raw data. 
densities A functional data object (fdata) with a non-parametric density estimation. quantiles A functional data object (fdata) with the empirical quantile estimation. variables A data frame with the covariates. Examples # Data extracted from the paper: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., # <NAME>., <NAME>., Glucotypes reveal new patterns of glucose dysregulation, PLoS # biology 16(7), 2018. file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc") file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc") data = load_data(file1, file2) names(data) head(data$quantiles) head(data$variables) plot(data$quantiles, main="Quantile curves") nadayara_prediction nadayara_prediction Description Functional non-parametric Nadaraya-Watson prediction with 2-Wasserstein distance. Usage nadayara_prediction(nadaraya, Qpred, hs=NULL) Arguments nadaraya A Nadaraya regression object. Qpred Quantile curves that will be used in the predictions hs Smoothing parameters for the predictions, by default hs = seq(0.8, 15, length = 200) Value An object of class bnadarayapred: prediction The Nadaraya-Watson prediction for the test data at each value of hs. hs Hs values used for the prediction. Examples # Data extracted from the paper: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., # <NAME>., <NAME>., Glucotypes reveal new patterns of glucose dysregulation, PLoS # biology 16(7), 2018. file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc") file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc") data = load_data(file1, file2) nada = nadayara_regression(data, "BMI") # Example of prediction with the column mean of quantiles npre = nadayara_prediction(nada, t(colMeans(data$quantiles$data))) nadayara_regression nadayara_regression Description Functional non-parametric Nadaraya-Watson regression with 2-Wasserstein distance, using as pre- dictor the distributional representation and as response a scalar outcome. Usage nadayara_regression(data, response) Arguments data A biosensor object. response The name of the scalar response. The response must be a column name in data$variables. Value An object of class bnadaraya: prediction The Nadaraya-Watson prediction for each point of the training data at each h=seq(0.8, 15, length=200). r2 R2 estimation for the training data at each h=seq(0.8, 15, length=200). error Standard mean-squared error after applying leave-one-out cross-validation for the training data at each h=seq(0.8, 15, length=200). data A data frame with biosensor raw data. response The name of the scalar response. Examples # Data extracted from the paper: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., # <NAME>., <NAME>., Glucotypes reveal new patterns of glucose dysregulation, PLoS # biology 16(7), 2018. file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc") file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc") data = load_data(file1, file2) nada = nadayara_regression(data, "BMI") regmod_prediction regmod_prediction Description Performs the Wasserstein regression using quantile functions. Usage regmod_prediction(data, xpred) Arguments data A bregmod object. xpred A kxp matrix of input values for regressors for prediction, where k is the number of points we do the prediction and p is the dimension of the input variables. Value A kxm array. Qpred(l, :) is the regression prediction of Q given X = xpred(l, :)’ where m is the dimension of the grid of quantile function. 
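Since these components are reported over the default bandwidth grid h = seq(0.8, 15, length = 200), a common follow-up is to pick the bandwidth with the lowest cross-validated error. A minimal sketch, hedged on the assumption that the 'error' component is a plain numeric vector aligned with that grid:
file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc")
file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc")
data = load_data(file1, file2)
nada = nadayara_regression(data, "BMI")
h_grid = seq(0.8, 15, length = 200)
h_best = h_grid[which.min(nada$error)]  # bandwidth minimizing leave-one-out MSE
h_best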
Examples # Data extracted from the paper: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., # <NAME>., <NAME>., Glucotypes reveal new patterns of glucose dysregulation, PLoS # biology 16(7), 2018. file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc") file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc") data = load_data(file1, file2) regm = regmod_regression(data, "BMI") # Example of prediction xpred = as.matrix(25) g1rmp = regmod_prediction(regm, xpred) regmod_regression regmod_regression Description Performs the Wasserstein regression using quantile functions. Usage regmod_regression(data, response) Arguments data A biosensor object. response The name of the scalar response. The response must be a column name in data$variables. Value An object of class bregmod containing the components: beta The beta coefficient functions of the fitting. prediction The prediction for each training data. residuals The residuals for each prediction value. Examples # Data extracted from the paper: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., # <NAME>., <NAME>., Glucotypes reveal new patterns of glucose dysregulation, PLoS # biology 16(7), 2018. file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc") file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc") data = load_data(file1, file2) regm = regmod_regression(data, "BMI") ridge_regression ridge_regression Description Performs a Ridge regression. Usage ridge_regression(data, response, w=NULL, method="manhattan", type="gaussian") Arguments data A biosensor object. response The name of the scalar response. The response must be a column name in data$variables. w A weight function. method The distance measure to be used (@seealso parallelDist::parDist). By default manhattan distance. type The kernel type ("gaussian" or "lapla"). By default gaussian distance. Value An object containing the components: best_alphas Best coefficients obtained with leave-one-out cross-validation criteria. best_kernel The kernel matrix of the best solution. best_sigma The sigma parameter of the best solution. best_lambda The lambda parameter of the best solution. sigmas The sigma parameters used in the fitting according to the median heuristic fitting criteria. predictions A matrix of predictions. r2 R-square of the different models fitted. error Mean squared-error of the different models fitted. predictions_cross A matrix of predictions obtained with leave-one-out cross-validation criteria. Examples # Data extracted from the paper: <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., # <NAME>., <NAME>., Glucotypes reveal new patterns of glucose dysregulation, PLoS # biology 16(7), 2018. file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc") file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc") data = load_data(file1, file2) regm = ridge_regression(data, "BMI")
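The clustering_prediction topic above has no worked example; the following is a minimal sketch under stated assumptions, namely that the bclustering object returned by clustering() is accepted as the 'clustering' argument and that new observations are passed as a matrix of quantile curves laid out like data$quantiles$data.
file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc")
file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc")
data = load_data(file1, file2)
clus = clustering(data, clusters = 3)
# Assign curves (here, the training curves themselves) to the fitted clusters
labels = clustering_prediction(clus, data$quantiles$data)
table(labels)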
TDApplied
cran
R
Package ‘TDApplied’ January 25, 2023 Type Package Title Machine Learning and Inference for Topological Data Analysis Version 2.0.4 Author <NAME> [aut, cre], Dr. <NAME> [aut, fnd] Maintainer <NAME> <<EMAIL>> Description Topological data analysis is a powerful tool for finding non-linear global structure in whole datasets. 'TDApplied' aims to bridge topological data analysis with data, statistical and machine learning practitioners so that more analyses may benefit from the power of topological data analysis. The main tool of topological data analysis is persistent homology, which computes a shape descriptor of a dataset, called a persistence diagram. There are five goals of this package: (1) deliver a fast implementation of persistent homology via a python interface, (2) convert persistence diagrams computed using the two main R packages for topological data analysis into a data frame, (3) implement fast versions of both distance and kernel calculations for pairs of persistence diagrams, (4) contribute tools for the interpretation of persistence diagrams, and (5) provide parallelized methods for machine learning and inference for persistence diagrams. Depends R (>= 3.2.2) Imports parallel, doParallel, foreach, clue, rdist, parallelly, kernlab, iterators, methods, stats, utils License GPL-3 URL https://github.com/shaelebrown/TDApplied BugReports https://github.com/shaelebrown/TDApplied/issues Encoding UTF-8 NeedsCompilation yes RoxygenNote 7.1.2 Suggests rmarkdown, knitr, testthat (>= 3.0.0), TDA, TDAstats, reticulate VignetteBuilder knitr Config/testthat/edition 3 Repository CRAN Date/Publication 2023-01-25 11:40:05 UTC R topics documented: bootstrap_persistence_threshold... 2 check_PyH_setu... 4 check_ripse... 5 diagram_distanc... 5 diagram_kerne... 7 diagram_kkmean... 8 diagram_kpc... 10 diagram_ksv... 12 diagram_md... 15 diagram_to_d... 17 distance_matri... 18 gram_matri... 19 import_ripse... 21 independence_tes... 21 permutation_tes... 23 plot_diagra... 25 predict_diagram_kkmean... 27 predict_diagram_kpc... 28 predict_diagram_ksv... 30 Py... 31 TDApplie... 33 bootstrap_persistence_thresholds Estimate persistence threshold(s) for topological features in a data set using bootstrapping. Description Bootstrapping is used to find a conservative estimate of a "confidence interval" around each point in the persistence diagram of the data set, and points whose (open) intervals do not overlap with the diagonal (birth = death) would be considered "significant" or "real". One threshold is computed for each dimension in the diagram. Usage bootstrap_persistence_thresholds( X, FUN = "calculate_homology", maxdim = 0, thresh, distance_mat = FALSE, ripser = NULL, ignore_infinite_cluster = TRUE, calculate_representatives = FALSE, num_samples = 30, alpha = 0.05, return_subsetted = FALSE, return_diag = TRUE, num_workers = parallelly::availableCores(omit = 1) ) Arguments X the input dataset, must either be a matrix or data frame. FUN a string representing the persistent homology function to use, either ’calcu- late_homology’ (the default) or ’ripsDiag’. maxdim the integer maximum homological dimension for persistent homology, default 0. thresh the positive numeric maximum radius of the Vietoris-Rips filtration. distance_mat a boolean representing if ‘X‘ is a distance matrix (TRUE) or not (FALSE, de- fault). dimensions together (TRUE, the default) or if one threshold should be calculated for each dimension separately (FALSE). ripser the imported ripser module when ‘FUN‘ is ‘PyH‘. 
ignore_infinite_cluster a boolean indicating whether or not to ignore the infinitely lived cluster when ‘FUN‘ is ‘PyH‘. calculate_representatives a boolean representing whether to calculate representative (co)cycles, default FALSE. Note that representatives cant be calculated when using the ’calcu- late_homology’ function. num_samples the positive integer number of bootstrap samples, default 30. alpha the type-1 error threshold, default 0.05. return_subsetted a boolean representing whether or not to return the subsetted persistence dia- gram (with or without representatives), default FALSE. return_diag a boolean representing whether or not to return the calculated persistence dia- gram, default TRUE. num_workers the integer number of cores used for parallelizing (over bootstrap samples), de- fault one less the maximum amount of cores on the machine. Details The thresholds are determined by calculating the 1-alpha percentile of the bottleneck distance values between the real persistence diagram and other diagrams obtained by bootstrap resampling the data. Note that since calculate_homology can ignore the longest-lived cluster, fewer "real" clusters may be found. To avoid this possibility try setting ‘FUN‘ equal to ’ripsDiag’. Value a numeric vector of threshold values ,with one for each dimension 0..‘maxdim‘ (in that order). Author(s) <NAME> - <<EMAIL>> References Chazal F et al (2017). "Robust Topological Inference: Distance to a Measure and Kernel Distance." https://www.jmlr.org/papers/volume18/15-484/15-484.pdf. Examples if(require("TDA")) { # create a persistence diagram from a sample of the unit circle df = TDA::circleUnif(n = 50) # calculate persistence thresholds for alpha = 0.05 # and return the calculated diagram as well as the subsetted diagram bootstrapped_diagram <- bootstrap_persistence_thresholds(X = df, FUN = "calculate_homology",maxdim = 1,thresh = 2,num_workers = 2) } check_PyH_setup Make sure that python has been configured correctly for persistent ho- mology calculations. Description Ensures that the reticulate package has been installed, that python is available to be used by reticu- late functions, and that the python module "ripser" has been installed. Usage check_PyH_setup() Details An error message will be thrown if any of the above conditions are not met. Author(s) <NAME> - <<EMAIL>> check_ripser Verify an imported ripser module. Description Verify an imported ripser module. Usage check_ripser(ripser) Arguments ripser the ripser module object. Author(s) <NAME> - <<EMAIL>> diagram_distance Calculate distance between a pair of persistence diagrams. Description Calculates the distance between a pair of persistence diagrams, either the output from a diagram_to_df function call or from a persistent homology calculation like ripsDiag/calculate_homology/PyH, in a particular homological dimension. Usage diagram_distance( D1, D2, dim = 0, p = 2, distance = "wasserstein", sigma = NULL ) Arguments D1 the first persistence diagram. D2 the second persistence diagram. dim the non-negative integer homological dimension in which the distance is to be computed, default 0. p a number representing the wasserstein power parameter, at least 1 and default 2. distance a string which determines which type of distance calculation to carry out, either "wasserstein" (default) or "fisher". 
sigma either NULL (default) or a positive number representing the bandwidth for the Fisher information metric Details The most common distance calculations between persistence diagrams are the wasserstein and bot- tleneck distances, both of which "match" points between their two input diagrams and compute the "loss" of the optimal matching (see http://www.geometrie.tugraz.at/kerber/kerber_ papers/kmn-ghtcpd_journal.pdf for details). Another method for computing distances, the Fisher information metric, converts the two diagrams into distributions defined on the plane, and calculates a distance between the resulting two distributions (https://proceedings.neurips. cc/paper/2018/file/959ab9a0695c467e7caf75431a872e5c-Paper.pdf). If the ‘distance‘ pa- rameter is "fisher" then ‘sigma‘ must not be NULL. Value the numeric value of the distance calculation. Author(s) <NAME> - <<EMAIL>> References <NAME>, <NAME> and <NAME> (2017). "Geometry Helps to Compare Persistence Di- agrams." http://www.geometrie.tugraz.at/kerber/kerber_papers/kmn-ghtcpd_journal. pdf. <NAME>, <NAME> (2018). "Persistence fisher kernel: a riemannian manifold kernel for persistence di- agrams." https://proceedings.neurips.cc/paper/2018/file/959ab9a0695c467e7caf75431a872e5c-Paper. pdf. Examples if(require("TDA")) { # create two diagrams D1 <- TDA::ripsDiag(TDA::circleUnif(n = 20,r = 1), maxdimension = 1,maxscale = 2) D2 <- TDA::ripsDiag(TDA::sphereUnif(n = 20,d = 2,r = 1), maxdimension = 1,maxscale = 2) # calculate 2-wasserstein distance between D1 and D2 in dimension 1 diagram_distance(D1,D2,dim = 1,p = 2,distance = "wasserstein") # calculate bottleneck distance between D1 and D2 in dimension 0 diagram_distance(D1,D2,dim = 0,p = Inf,distance = "wasserstein") # Fisher information metric calculation between D1 and D2 for sigma = 1 in dimension 1 diagram_distance(D1,D2,dim = 1,distance = "fisher",sigma = 1) } diagram_kernel Calculate persistence Fisher kernel value between a pair of persis- tence diagrams. Description Returns the persistence Fisher kernel value between a pair of persistence diagrams in a particular homological dimension, each of which is either the output from a diagram_to_df function call or from a persistent homology calculation like ripsDiag/calculate_homology/PyH. Usage diagram_kernel(D1, D2, dim = 0, sigma = 1, t = 1) Arguments D1 the first persistence diagram. D2 the second persistence diagram. dim the non-negative integer homological dimension in which the distance is to be computed, default 0. sigma a positive number representing the bandwidth for the Fisher information metric, default 1. t a positive number representing the scale for the persistence Fisher kernel, default 1. Details The persistence Fisher kernel is calculated from the Fisher information metric according to the formula kP F (D1 , D2 ) = exp(−t ∗ dF IM (D1 , D2 )), resembling a radial basis kernel for standard Euclidean spaces. Value the numeric kernel value. Author(s) <NAME> - <<EMAIL>> References Le T, Yamada M (2018). "Persistence fisher kernel: a riemannian manifold kernel for persistence di- agrams." https://proceedings.neurips.cc/paper/2018/file/959ab9a0695c467e7caf75431a872e5c-Paper. pdf. Murphy, K. "Machine learning: a probabilistic perspective", MIT press (2012). 
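Before the worked examples below, the kernel formula in the Details can be checked numerically against diagram_distance; a minimal sketch, with the diagrams built the same way as elsewhere in this manual:
if(require("TDA") & require("TDAstats")) { # create two diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 20,r = 1), dim = 1,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::sphereUnif(n = 20,d = 2,r = 1), dim = 1,threshold = 2) # k_PF(D1,D2) should equal exp(-t * d_FIM(D1,D2)) up to numerical error; here sigma = 1, t = 1 d_fim <- diagram_distance(D1,D2,dim = 1,distance = "fisher",sigma = 1) k_val <- diagram_kernel(D1,D2,dim = 1,sigma = 1,t = 1) all.equal(k_val, exp(-1 * d_fim)) }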
Examples if(require("TDA")) { # create two diagrams D1 <- TDA::ripsDiag(TDA::circleUnif(n = 20,r = 1), maxdimension = 1,maxscale = 2) D2 <- TDA::ripsDiag(TDA::sphereUnif(n = 20,d = 2,r = 1), maxdimension = 1,maxscale = 2) # calculate the kernel value between D1 and D2 with sigma = 2, t = 2 in dimension 1 diagram_kernel(D1,D2,dim = 1,sigma = 2,t = 2) # calculate the kernel value between D1 and D2 with sigma = 2, t = 2 in dimension 0 diagram_kernel(D1,D2,dim = 0,sigma = 2,t = 2) } diagram_kkmeans Cluster a group of persistence diagrams using kernel k-means. Description Finds latent cluster labels for a group of persistence diagrams, using a kernelized version of the popular k-means algorithm. An optimal number of clusters may be determined by analyzing the withinss field of the clustering object over several values of k. Usage diagram_kkmeans( diagrams, centers, dim = 0, t = 1, sigma = 1, num_workers = parallelly::availableCores(omit = 1), ... ) Arguments diagrams a list of n>=2 persistence diagrams which are either the output of a persistent ho- mology calculation like ripsDiag/calculate_homology/PyH, or the diagram_to_df function. centers number of clusters to initialize, no more than the number of diagrams although smaller values are recommended. dim the non-negative integer homological dimension in which the distance is to be computed, default 0. t a positive number representing the scale for the persistence Fisher kernel, default 1. sigma a positive number representing the bandwidth for the Fisher information metric, default 1 num_workers the number of cores used for parallel computation, default is one less than the number of cores on the machine. ... additional parameters for the kkmeans kernlab function. Details Returns the output of kkmeans on the desired Gram matrix of a group of persistence diagrams in a particular dimension. The additional list elements stored in the output are needed to estimate cluster labels for new persistence diagrams in the ‘predict_diagram_kkmeans‘ function. Value a ’diagram_kkmeans’ S3 object containing the output of kkmeans on the Gram matrix, i.e. a list containing the elements clustering an S4 object of class specc, the output of a kkmeans function call. The ‘.Data‘ slot of this object contains cluster memberships, ‘withinss‘ contains the within-cluster sum of squares for each cluster, etc. diagrams the input ‘diagrams‘ argument. dim the input ‘dim‘ argument. t the input ‘t‘ argument. sigma the input ‘sigma‘ argument. Author(s) <NAME> - <<EMAIL>> References <NAME> and Guan, Y and <NAME> (2004). "A Unified View of Kernel k-means , Spectral Clus- tering and Graph Cuts." https://people.bu.edu/bkulis/pubs/spectral_techreport.pdf. See Also predict_diagram_kkmeans for predicting cluster labels of new diagrams. Examples if(require("TDA") & require("TDAstats")) { # create two diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) g <- list(D1,D1,D2,D2) # calculate kmeans clusters with centers = 2, and sigma = t = 2 in dimension 0 clust <- diagram_kkmeans(diagrams = g,centers = 2,dim = 0,t = 2,sigma = 2,num_workers = 2) } diagram_kpca Calculate the kernel PCA embedding of a group of persistence dia- grams. Description Project a group of persistence diagrams into a low-dimensional embedding space using a kernelized version of the popular PCA algorithm. 
Usage diagram_kpca( diagrams, dim = 0, t = 1, sigma = 1, features = 1, num_workers = parallelly::availableCores(omit = 1), th = 1e-04 ) Arguments diagrams a list of persistence diagrams which are either the output of a persistent homol- ogy calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df. dim the non-negative integer homological dimension in which the distance is to be computed, default 0. t a positive number representing the scale for the persistence Fisher kernel, default 1. sigma a positive number representing the bandwidth for the Fisher information metric, default 1 features number of features (principal components) to return, default 1. num_workers the number of cores used for parallel computation, default is one less than the number of cores on the machine. th the threshold value under which principal components are ignored (default 0.0001). Details Returns the output of kernlab’s kpca function on the desired Gram matrix of a group of persistence diagrams in a particular dimension. The prediction function predict_diagram_kpca can be used to project new persistence diagrams using an old embedding, and this could be one practical advantage of using diagram_kpca over diagram_mds. The embedding coordinates can also be used for further analysis, or simply as a data visualization tool for persistence diagrams. Value a list containing the elements pca the output of kernlab’s kpca function on the Gram matrix: an S4 object containing the slots ‘pcv‘ (a matrix containing the principal component vectors (column wise)), ‘eig‘ (the corre- sponding eigenvalues), ‘rotated‘ (the original data projected (rotated) on the principal compo- nents) and ‘xmatrix‘ (the original data matrix). diagrams the input ‘diagrams‘ argument. t the input ‘t‘ argument. sigma the input ‘sigma‘ argument. dim the input ‘dim‘ argument. Author(s) <NAME> - <<EMAIL>> References Scholkopf, B and Smola, A and M<NAME> (1998). "Nonlinear Component Analysis as a Kernel Eigenvalue Problem." https://www.mlpack.org/papers/kpca.pdf. See Also predict_diagram_kpca for predicting embedding coordinates of new diagrams. Examples if(require("TDA") & require("TDAstats")) { # create six diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 50,r = 1), dim = 1,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::sphereUnif(n = 50,d = 2,r = 1), dim = 1,threshold = 2) D3 <- TDAstats::calculate_homology(TDA::torusUnif(n = 50,a = 0.25,c = 0.75), dim = 1,threshold = 2) D4 <- TDAstats::calculate_homology(TDA::circleUnif(n = 50,r = 1), dim = 1,threshold = 2) D5 <- TDAstats::calculate_homology(TDA::sphereUnif(n = 50,d = 2,r = 1), dim = 1,threshold = 2) D6 <- TDAstats::calculate_homology(TDA::torusUnif(n = 50,a = 0.25,c = 0.75), dim = 1,threshold = 2) g <- list(D1,D2,D3,D4,D5,D6) # calculate their 2D PCA embedding with sigma = t = 2 in dimension 1 pca <- diagram_kpca(diagrams = g,dim = 1,t = 2,sigma = 2,features = 2,num_workers = 2) } diagram_ksvm Fit a support vector machine model where each training set instance is a persistence diagram. Description Returns the output of kernlab’s ksvm function on the Gram matrix of the list of persistence diagrams in a particular dimension. 
Usage diagram_ksvm( diagrams, cv = 1, dim, t = 1, sigma = 1, y, type = NULL, C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE, class.weights = NULL, fit = TRUE, cache = 40, tol = 0.001, shrinking = TRUE, num_workers = parallelly::availableCores(omit = 1) ) Arguments diagrams a list of persistence diagrams which are either the output of a persistent homology calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df. cv a positive number at most the length of ‘diagrams‘ which determines the number of cross validation splits to be performed (default 1, aka no cross-validation). dim a non-negative integer vector of homological dimensions in which the model is to be fit. t a vector of positive numbers representing the grid of values for the scale of the persistence Fisher kernel, default 1. sigma a vector of positive numbers representing the grid of values for the bandwidth of the Fisher information metric, default 1. y a response vector with one label for each persistence diagram. Must be either numeric or factor. type a string representing the type of task to be performed. C a number representing the cost of constraints violation (default 1); this is the 'C'-constant of the regularization term in the Lagrange formulation. nu numeric parameter needed for nu-svc, one-svc and nu-svr. The ‘nu‘ parameter sets the upper bound on the training error and the lower bound on the fraction of data points to become Support Vectors (default 0.2). epsilon epsilon in the insensitive-loss function used for eps-svr, nu-svr and eps-bsvm (default 0.1). prob.model if set to TRUE builds a model for calculating class probabilities or, in case of regression, calculates the scaling parameter of the Laplacian distribution fitted on the residuals. Fitting is done on output data created by performing a 3-fold cross-validation on the training data. For details see references (default FALSE). class.weights a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named. fit indicates whether the fitted values should be computed and included in the model or not (default TRUE). cache cache memory in MB (default 40). tol tolerance of termination criteria (default 0.001). shrinking option whether to use the shrinking-heuristics (default TRUE). num_workers the number of cores used for parallel computation, default is one less than the number of cores on the machine. Details Cross validation is carried out in parallel, using a trick noted in doi: 10.1007/s41468-017-0008-7 - since the persistence Fisher kernel can be written as k_PF(D1, D2) = exp(-t * d_FIM(D1, D2)) = exp(-d_FIM(D1, D2))^t, we can store the Fisher information metric distance matrix for each sigma value in the parameter grid to avoid recomputing distances, and cross validation is therefore performed in parallel. Note that the response parameter ‘y‘ must be a factor for classification - a character vector for instance will throw an error. Value a list containing the elements models the cross-validation results - a matrix storing the parameters for each model in the tuning grid and its mean cross-validation error over all splits. best_model the output of ksvm run on the whole dataset with the optimal model parameters found during cross-validation. See the help page for ksvm for more details about this object. diagrams the diagrams which were support vectors in the ‘best_model‘. These are used for downstream prediction.
dim the input ‘dim‘ argument. t the input ‘t‘ argument. sigma the input ‘sigma‘ argument. Author(s) <NAME> - <<EMAIL>> References Murphy, K. "Machine learning: a probabilistic perspective." MIT press (2012). See Also predict_diagram_ksvm for predicting labels of new diagrams. Examples if(require("TDA") & require("TDAstats")) { # create four diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D3 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D4 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) g <- list(D1,D2,D3,D4) # create response vector y <- as.factor(c("circle","sphere","circle","sphere")) # fit model without cross validation model_svm <- diagram_ksvm(diagrams = g,cv = 1,dim = c(0), y = y,sigma = c(1),t = c(1), num_workers = 2) } diagram_mds Dimension reduction of a group of persistence diagrams via metric multidimensional scaling. Description Projects a group of persistence diagrams into a low-dimensional embedding space via metric mul- tidimensional scaling. Such a projection can be used for visualization of data, or a static analysis of the embedding dimensions. Usage diagram_mds( diagrams, k = 2, distance = "wasserstein", dim = 0, p = 2, sigma = NULL, eig = FALSE, add = FALSE, x.ret = FALSE, list. = eig || add || x.ret, num_workers = parallelly::availableCores(omit = 1) ) Arguments diagrams a list of n>=2 persistence diagrams which are either the output of a persistent ho- mology calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df. k the dimension of the space which the data are to be represented in; must be in 1,2,...,n-1. distance a string representing the desired distance metric to be used, either ’wasserstein’ (default) or ’fisher’. dim the non-negative integer homological dimension in which the distance is to be computed, default 0. p a positive number representing the wasserstein power, a number at least 1 (in- finity for the bottleneck distance), default 2. sigma a positive number representing the bandwidth for the Fisher information metric, default NULL. eig a boolean indicating whether the eigenvalues should be returned. add a boolean indicating if an additive constant c* should be computed, and added to the non-diagonal dissimilarities such that the modified dissimilarities are Eu- clidean. x.ret a boolean indicating whether the doubly centered symmetric distance matrix should be returned. list. a boolean indicating if a list should be returned or just the n*k matrix. num_workers the number of cores used for parallel computation, default is one less than the number of cores on the machine. Details Returns the output of cmdscale on the desired distance matrix of a group of persistence diagrams in a particular dimension. If ‘distance‘ is "fisher" then ‘sigma‘ must not be NULL. Value the output of cmdscale on the diagram distance matrix. If ‘list.‘ is false (as per default), a matrix with ‘k‘ columns whose rows give the coordinates of the points chosen to represent the dissimilari- ties. Otherwise, a list containing the following components. points a matrix with ‘k‘ columns whose rows give the coordinates of the points chosen to represent the dissimilarities. eig the n eigenvalues computed during the scaling process if ‘eig‘ is true. x the doubly centered distance matrix if ‘x.ret‘ is true. ac the additive constant c∗, 0 if ‘add‘ = FALSE. 
GOF the numeric vector of length 2, representing the sum of all the eigenvalues divided by the sum of their absolute values (first vector element) or by the sum of the max of each eigenvalue and 0 (second vector element). Author(s) <NAME> - <<EMAIL>> References Cox M and Cox F (2008). "Multidimensional Scaling." doi: 10.1007/9783540330370_14. Examples if(require("TDA") & require("TDAstats")) { # create two diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) g <- list(D1,D2) # calculate their 1D MDS embedding in dimension 0 with the bottleneck distance mds <- diagram_mds(diagrams = g,k = 1,dim = 0,p = Inf,num_workers = 2) } diagram_to_df Convert a TDA/TDAstats persistence diagram to a data frame. Description The output of homology calculations from the R packages TDA and TDAstats are not dataframes. This function converts these outputs into a data frame either for further usage in this package or for personalized analyses. Usage diagram_to_df(d) Arguments d the output of a TDA/TDAstats homology calculation, like ripsDiag or calculate_homology. Details If a diagram is constructed using a TDA function like ripsDiag with the ‘location‘ parameter set to true then the return value will ignore the location information. Value a 3-column data frame, with each row representing a topological feature. The first column is the feature dimension (a non-negative integer), the second column is the birth radius of the feature and the third column is the death radius. Author(s) <NAME> - <<EMAIL>> Examples if(require("TDA") & require("TDAstats")) { # create a persistence diagram from a 2D Gaussian df = data.frame(x = rnorm(n = 20,mean = 0,sd = 1),y = rnorm(n = 20,mean = 0,sd = 1)) # compute persistence diagram with ripsDiag from package TDA phom_TDA = TDA::ripsDiag(X = df,maxdimension = 0,maxscale = 1) # convert to data frame phom_TDA_df = diagram_to_df(d = phom_TDA) # compute persistence diagram with calculate_homology from package TDAstats phom_TDAstats = TDAstats::calculate_homology(mat = df,dim = 0,threshold = 1) # convert to data frame phom_TDAstats_df = diagram_to_df(d = phom_TDAstats) } distance_matrix Compute a distance matrix from a list of persistence diagrams. Description Calculate the distance matrix d for either a single list of persistence diagrams (D1 , D2 , . . . , Dn ), i.e. d[i, j] = d(Di , Dj ), or between two lists, (D1 , D2 , . . . , Dn ) and (D10 , D20 , . . . , Dn0 ), d[i, j] = d(Di , Dj0 ), in parallel. Usage distance_matrix( diagrams, other_diagrams = NULL, dim = 0, distance = "wasserstein", p = 2, sigma = NULL, num_workers = parallelly::availableCores(omit = 1) ) Arguments diagrams a list of persistence diagrams, either the output of persistent homology calcula- tions like ripsDiag/calculate_homology/PyH, or diagram_to_df. other_diagrams either NULL (default) or another list of persistence diagrams to compute a cross- distance matrix. dim the non-negative integer homological dimension in which the distance is to be computed, default 0. distance a character determining which metric to use, either "wasserstein" (default) or "fisher". p a number representing the wasserstein power parameter, at least 1 and default 2. sigma a positive number representing the bandwidth of the Fisher information metric, default NULL. num_workers the number of cores used for parallel computation, default is one less than the number of cores on the machine. 
Details Distance matrices of persistence diagrams are used in downstream analyses, like in the diagram_mds, permutation_test and diagram_ksvm functions. If ‘distance‘ is "fisher" then ‘sigma‘ must not be NULL. Value the numeric distance matrix. Author(s) <NAME> - <<EMAIL>> Examples if(require("TDA") & require("TDAstats")) { # create two diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) g <- list(D1,D2) # calculate their distance matrix in dimension 0 with the 2-wasserstein metric # using 2 cores D <- distance_matrix(diagrams = g,dim = 0,distance = "wasserstein",p = 2,num_workers = 2) # now do the cross distance matrix, which is the same as the original D_cross <- distance_matrix(diagrams = g,other_diagrams = g, dim = 0,distance = "wasserstein", p = 2,num_workers = 2) } gram_matrix Compute the Gram matrix for a group of persistence diagrams. Description Calculate the Gram matrix K for either a single list of persistence diagrams (D1, D2, . . . , Dn), i.e. K[i, j] = k_PF(Di, Dj), or between two lists of persistence diagrams, (D1, D2, . . . , Dn) and (D1', D2', . . . , Dn'), K[i, j] = k_PF(Di, Dj'), in parallel. Usage gram_matrix( diagrams, other_diagrams = NULL, dim = 0, sigma = 1, t = 1, num_workers = parallelly::availableCores(omit = 1) ) Arguments diagrams a list of persistence diagrams, where each diagram is either the output of a persistent homology calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df. other_diagrams either NULL (default) or another list of persistence diagrams to compute a cross-Gram matrix. dim the non-negative integer homological dimension in which the distance is to be computed, default 0. sigma a positive number representing the bandwidth for the Fisher information metric, default 1. t a positive number representing the scale for the kernel, default 1. num_workers the number of cores used for parallel computation, default is one less than the number of cores on the machine. Details Gram matrices are used in downstream analyses, like in the ‘diagram_kkmeans‘, ‘diagram_nearest_cluster‘, ‘diagram_kpca‘, ‘predict_diagram_kpca‘, ‘predict_diagram_ksvm‘ and ‘independence_test‘ functions. Value the numeric (cross) Gram matrix of class 'kernelMatrix'. Author(s) <NAME> - <<EMAIL>> Examples if(require("TDA") & require("TDAstats")) { # create two diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) g <- list(D1,D2) # calculate the Gram matrix in dimension 0 with sigma = 2, t = 2 G <- gram_matrix(diagrams = g,dim = 0,sigma = 2,t = 2,num_workers = 2) # calculate cross-Gram matrix, which is the same as G G_cross <- gram_matrix(diagrams = g,other_diagrams = g,dim = 0,sigma = 2, t = 2,num_workers = 2) } import_ripser Import the python module ripser. Description The ripser module is needed for fast persistent cohomology calculations with the PyH function. Usage import_ripser() Details Same as "reticulate::import("ripser")", just with additional checks. Value the python ripser module. Author(s) <NAME> - <<EMAIL>> Examples ## Not run: # import ripser ripser <- import_ripser() ## End(Not run) independence_test Independence test for two groups of persistence diagrams.
Description Carries out inference to determine if two groups of persistence diagrams are independent or not based on kernel calculations (see (https://proceedings.neurips.cc/paper/2007/file/d5cfead94f5350c12c322b5b6 pdf) for details). A small p-value in a certain dimension suggests that the groups are not indepen- dent in that dimension. Usage independence_test( g1, g2, dims = c(0, 1), sigma = 1, t = 1, num_workers = parallelly::availableCores(omit = 1), verbose = FALSE ) Arguments g1 the first group of persistence diagrams, where each diagram was either the output from a persistent homology calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df. g2 the second group of persistence diagrams, where each diagram was either the output from a persistent homology calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df. dims a non-negative integer vector of the homological dimensions in which the test is to be carried out, default c(0,1). sigma a positive number representing the bandwidth for the Fisher information metric, default 1. t a positive number representing the scale for the persistence Fisher kernel, default 1. num_workers the number of cores used for parallel computation, default is one less than the number of cores on the machine. verbose a boolean flag for if the time duration of the function call should be printed, default FALSE Details The test is carried out with a parametric null distribution, making it much faster than non-parametric approaches. If all of the diagrams in either g1 or g2 are the same in some dimension, then some p-values may be NaN. Value a list with the following elements: dimensions the input ‘dims‘ argument. test_statisics a numeric vector of the test statistic value in each dimension. p_values a numeric vector of the p-values in each dimension. run_time the run time of the function call, containing time units. Author(s) <NAME> - <<EMAIL>> References Gretton A et al. (2007). "A Kernel Statistical Test of Independence." https://proceedings. neurips.cc/paper/2007/file/d5cfead94f5350c12c322b5b664544c1-Paper.pdf. Examples if(require("TDA") & require("TDAstats")) { # create two independent groups of diagrams of length 6, which # is the minimum length D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) g1 <- list(D1,D2,D2,D2,D2,D2) g2 <- list(D2,D1,D1,D1,D1,D1) # do independence test with sigma = t = 1 in dimension 0 indep_test <- independence_test(g1,g2,dims = c(0),num_workers = 2) } permutation_test Permutation test for finding group differences between persistence di- agrams. Description A non-parametric ANOVA-like test for persistence diagrams (see https://link.springer.com/ article/10.1007/s41468-017-0008-7 for details). In each desired dimension a test statistic (loss) is calculated, then the group labels are shuffled for some number of iterations and the loss is recomputed each time thereby generating a null distribution for the test statistic. This test generates a p-value in each desired dimension. Usage permutation_test( ..., iterations = 20, p = 2, q = 2, dims = c(0, 1), paired = FALSE, distance = "wasserstein", sigma = NULL, num_workers = parallelly::availableCores(omit = 1), verbose = FALSE ) Arguments ... lists of persistence diagrams which are either the output of persistent homol- ogy calculations like ripsDiag/calculate_homology/PyH, or diagram_to_df. Each list must contain at least 2 diagrams. 
iterations   the number of iterations for permuting group labels, default 20.
p            a positive number representing the wasserstein power parameter, a number at least 1 (and Inf if using the bottleneck distance), default 2.
q            a finite number at least 1 for exponentiation in the Turner loss function, default 2.
dims         a non-negative integer vector of the homological dimensions in which the test is to be carried out, default c(0,1).
paired       a boolean flag for whether there is a second-order pairing between diagrams at the same index in different groups, default FALSE.
distance     a string which determines which type of distance calculation to carry out, either "wasserstein" (default) or "fisher".
sigma        the positive bandwidth for the Fisher information metric, default NULL.
num_workers  the number of cores used for parallel computation, default is one less than the number of cores on the machine.
verbose      a boolean flag for whether the time duration of the function call should be printed, default FALSE.

Details
The test is carried out in parallel and optimized in order to not recompute already-calculated distances. As such, memory issues may occur when the number of persistence diagrams is very large. Like in https://github.com/hassan-abdallah/Statistical_Inference_PH_fMRI/blob/main/Abdallah_et_al_Statistical_Inference_PH_fMRI.pdf, an option is provided for pairing diagrams between groups to reduce variance (in order to boost statistical power), and as suggested in the original paper, functionality is provided for an arbitrary number of groups (not just 2). A small p-value in a dimension suggests that the groups are different (separated) in that dimension. If 'distance' is "fisher" then 'sigma' must not be NULL. TDAstats also has a 'permutation_test' function, so care should be taken to use the desired function when using TDApplied with TDAstats.

Value
a list with the following elements:
dimensions       the input 'dims' argument.
permvals         a numeric vector of length 'iterations' with the permuted loss value for each iteration (permutation).
test_statistics  a numeric vector of the test statistic value in each dimension.
p_values         a numeric vector of the p-values in each dimension.
run_time         the run time of the function call, containing time units.

Author(s)
<NAME> - <<EMAIL>>

References
<NAME>, <NAME> (2017). "Hypothesis testing for topological data analysis." https://link.springer.com/article/10.1007/s41468-017-0008-7.
Abdallah H et al. (2021). "Statistical Inference for Persistent Homology applied to fMRI." https://github.com/hassan-abdallah/Statistical_Inference_PH_fMRI/blob/main/Abdallah_et_al_Statistical_Inference_PH_fMRI.pdf.

Examples

if(require("TDA") & require("TDAstats")) {

  # create two groups of diagrams
  D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  g1 <- list(D1,D2)
  g2 <- list(D1,D2)

  # run test in dimension 0 with 1 iteration
  perm_test <- TDApplied::permutation_test(g1,g2,iterations = 1,
                                           num_workers = 2)

}

plot_diagram              Plot persistence diagrams

Description
Plots a persistence diagram outputted from either a persistent homology calculation or from diagram_to_df, with maximum homological dimension no more than 12 (otherwise the legend doesn't fit in the plot). Each homological dimension has its own color and point type (with colors chosen to be clear and distinct from each other), and the main plot title can be altered via the 'title' parameter.
Usage
plot_diagram(
  D,
  title = NULL,
  max_radius = NULL,
  legend = TRUE,
  thresholds = NULL
)

Arguments
D           a persistence diagram, either the output of a persistent homology calculation like ripsDiag/calculate_homology/PyH or of diagram_to_df, with maximum dimension at most 12.
title       the character string plot title, default NULL.
max_radius  the x and y limits of the plot are defined as 'c(0,max_radius)', and the default value of 'max_radius' is the maximum death value in 'D'.
legend      a logical indicating whether to include a legend of feature dimensions, default TRUE.
thresholds  either a numeric vector with one persistence threshold for each dimension in 'D' or the output of a bootstrap_persistence_thresholds function call, default NULL.

Details
The 'thresholds' parameter, if not NULL, can either be a user-defined numeric vector, with one entry (persistence threshold) for each dimension in 'D', or the output of bootstrap_persistence_thresholds. Points whose persistence is greater than or equal to their dimension's threshold will be plotted in their dimension's color, and in gray otherwise.

Author(s)
<NAME> - <<EMAIL>>

Examples

if(require("TDA") & require("TDAstats")) {

  # create a sample diagram from the unit circle
  df <- TDA::circleUnif(n = 50)
  diag <- TDAstats::calculate_homology(df,threshold = 2)

  # plot without title
  plot_diagram(diag)

  # plot with title
  plot_diagram(diag,title = "Example diagram")

  # determine persistence thresholds
  thresholds <- bootstrap_persistence_thresholds(X = df,maxdim = 1,
                                                 thresh = 2,num_samples = 3,
                                                 num_workers = 2)

  # plot with bootstrap persistence thresholds
  plot_diagram(diag,title = "Example diagram with thresholds",thresholds = thresholds)

  # plot with personalized persistence thresholds
  plot_diagram(diag,title = "Example diagram with personalized thresholds",thresholds = c(0.5,1))

}

predict_diagram_kkmeans   Predict the cluster labels for new persistence diagrams using a pre-computed clustering.

Description
Returns the nearest (highest kernel value) kkmeans cluster center label for new persistence diagrams. This allows for reusing old cluster models for new tasks, or to perform cross validation.

Usage
predict_diagram_kkmeans(
  new_diagrams,
  clustering,
  num_workers = parallelly::availableCores(omit = 1)
)

Arguments
new_diagrams  a list of persistence diagrams which are either the output of a persistent homology calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df.
clustering    the output of a diagram_kkmeans function call.
num_workers   the number of cores used for parallel computation, default is one less than the number of cores on the machine.

Value
a vector of the predicted cluster labels for the new diagrams.

Author(s)
<NAME> - <<EMAIL>>

See Also
diagram_kkmeans for clustering persistence diagrams.
Examples if(require("TDA") & require("TDAstats")) { # create two diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) g <- list(D1,D1,D2,D2) # calculate kmeans clusters with centers = 2, and sigma = t = 2 in dimension 0 clust <- diagram_kkmeans(diagrams = g,centers = 2,dim = 0,t = 2,sigma = 2,num_workers = 2) # create two new diagrams D4 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) D5 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1), dim = 0,threshold = 2) g_new <- list(D4,D5) # predict cluster labels predict_diagram_kkmeans(new_diagrams = g_new,clustering = clust,num_workers = 2) } predict_diagram_kpca Project persistence diagrams into a low-dimensional space via a pre- computed kernel PCA embedding. Description Compute the location in low-dimensional space of each element of a list of new persistence dia- grams using a previously-computed kernel PCA embedding (from the diagram_kpca function). Usage predict_diagram_kpca( new_diagrams, embedding, num_workers = parallelly::availableCores(omit = 1) ) Arguments new_diagrams a list of persistence diagrams which are either the output of a persistent homol- ogy calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df. embedding the output of a diagram_kpca function call. num_workers the number of cores used for parallel computation, default is one less than the number of cores on the machine. Value the data projection (rotation), stored as a numeric matrix. Each row corresponds to the same-index diagram in ‘new_diagrams‘. Author(s) <NAME> - <<EMAIL>> See Also diagram_kpca for embedding persistence diagrams into a low-dimensional space. Examples if(require("TDA") & require("TDAstats")) { # create six diagrams D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 50,r = 1), dim = 1,threshold = 2) D2 <- TDAstats::calculate_homology(TDA::sphereUnif(n = 50,d = 2,r = 1), dim = 1,threshold = 2) D3 <- TDAstats::calculate_homology(TDA::torusUnif(n = 50,a = 0.25,c = 0.75), dim = 1,threshold = 2) D4 <- TDAstats::calculate_homology(TDA::circleUnif(n = 50,r = 1), dim = 1,threshold = 2) D5 <- TDAstats::calculate_homology(TDA::sphereUnif(n = 50,d = 2,r = 1), dim = 1,threshold = 2) D6 <- TDAstats::calculate_homology(TDA::torusUnif(n = 50,a = 0.25,c = 0.75), dim = 1,threshold = 2) g <- list(D1,D2,D3,D4,D5,D6) # calculate their 2D PCA embedding with sigma = t = 2 in dimension 0 pca <- diagram_kpca(diagrams = g,dim = 1,t = 2,sigma = 2,features = 2,num_workers = 2) # project two new diagrams onto old model D7 <- TDAstats::calculate_homology(TDA::circleUnif(n = 50,r = 1), dim = 0,threshold = 2) D8 <- TDAstats::calculate_homology(TDA::circleUnif(n = 50,r = 1), dim = 0,threshold = 2) g_new <- list(D4,D5) # calculate new embedding coordinates new_pca <- predict_diagram_kpca(new_diagrams = g_new,embedding = pca,num_workers = 2) } predict_diagram_ksvm Predict the outcome labels for a list of persistence diagrams using a pre-trained diagram ksvm model. Description Returns the predicted response vector of the model on the new diagrams. Usage predict_diagram_ksvm( new_diagrams, model, num_workers = parallelly::availableCores(omit = 1) ) Arguments new_diagrams a list of persistence diagrams which are either the output of a persistent homol- ogy calculation like ripsDiag/calculate_homology/PyH, or diagram_to_df. model the output of a diagram_ksvm function call. 
num_workers   the number of cores used for parallel computation, default is one less than the number of cores on the machine.

Details
This function is a wrapper of the kernlab predict function.

Value
a vector containing the output of predict.ksvm on the cross Gram matrix of the new diagrams and the support vector diagrams stored in the model.

Author(s)
<NAME> - <<EMAIL>>

See Also
diagram_ksvm for training an SVM model on a training set of persistence diagrams.

Examples

if(require("TDA") & require("TDAstats")) {

  # create four diagrams
  D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  D3 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  D4 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  g <- list(D1,D2,D3,D4)

  # create response vector
  y <- as.factor(c("circle","sphere","circle","sphere"))

  # fit model without cross validation
  model_svm <- diagram_ksvm(diagrams = g,cv = 1,dim = c(0),
                            y = y,sigma = c(1),t = c(1),
                            num_workers = 2)

  # create two new diagrams
  D5 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  D6 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  g_new <- list(D5,D6)

  # predict
  predict_diagram_ksvm(new_diagrams = g_new,model = model_svm,num_workers = 2)

}

PyH                       Fast persistent homology calculations with python.

Description
This function is a wrapper of the python wrapper of the ripser engine for persistent cohomology, but is still faster than using the R package TDAstats (see the TDApplied package vignette for details).

Usage
PyH(
  X,
  maxdim = 1,
  thresh,
  distance_mat = FALSE,
  ripser,
  ignore_infinite_cluster = TRUE,
  calculate_representatives = FALSE
)

Arguments
X                           either a matrix or dataframe, representing either point cloud data or a distance matrix. In either case there must be at least two rows and 1 column.
maxdim                      the non-negative integer maximum dimension for persistent homology, default 1.
thresh                      the non-negative numeric radius threshold for the Vietoris-Rips filtration.
distance_mat                a boolean representing whether the input X is a distance matrix or not, default FALSE.
ripser                      the ripser python module.
ignore_infinite_cluster     a boolean representing whether to remove clusters (0 dimensional cycles) which die at the threshold value. Default is TRUE as this is the default for TDAstats homology calculations, but can be set to FALSE which is the default for python ripser.
calculate_representatives   a boolean representing whether to return a list of representative cocycles for the topological features found in the persistence diagram, default FALSE.

Details
If 'distance_mat' is 'TRUE' then 'X' must be a square matrix. The 'ripser' parameter should be the result of an 'import_ripser' function call, but since that function is slow the ripser object should be explicitly created before a PyH function call (see examples). Cohomology is computed over Z2, as is the case for the TDAstats function calculate_homology (this is also the default for ripser in C++). If representative cocycles are returned, then they are stored in a list with one element for each point in the persistence diagram, ignoring dimension 0 points. Each representative of a dimension d cocycle (1 for loops, 2 for voids, etc.) is a k x d matrix/array containing the row number-labelled edges, triangles etc. in the cocycle.
Value
Either a dataframe containing the persistence diagram if 'calculate_representatives' is 'FALSE' (the default), otherwise a list with two elements: diagram of class diagram, containing the persistence diagram, and representatives, a list containing the edges, triangles etc. contained in each representative cocycle.

Author(s)
<NAME> - <<EMAIL>>

Examples
## Not run:
# create sample data
df <- data.frame(x = 1:10,y = 1:10)

# import the ripser module
ripser <- import_ripser()

# calculate persistence diagram up to dimension 1 with a maximum
# radius of 5
phom <- PyH(X = df,thresh = 5,ripser = ripser)

## End(Not run)

TDApplied                 Machine learning and inference for persistence diagrams

Description
Topological data analysis is a powerful tool for finding non-linear global structure in whole datasets. 'TDApplied' aims to bridge topological data analysis with data, statistical and machine learning practitioners so that more analyses may benefit from the power of topological data analysis. The main tool of topological data analysis is persistent homology, which computes a shape descriptor of a dataset, called a persistence diagram. There are five goals of this package: (1) deliver a fast implementation of persistent homology via a python interface, (2) convert persistence diagrams computed using the two main R packages for topological data analysis into a data frame, (3) implement fast versions of both distance and kernel calculations for pairs of persistence diagrams, (4) contribute tools for the interpretation of persistence diagrams, and (5) provide parallelized methods for machine learning and inference for persistence diagrams.
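The entries above document each function in isolation; the short sketch below strings a few of them together into a typical workflow. It is a minimal illustration, assuming the TDA and TDAstats packages are installed, and the argument values simply mirror the examples above rather than recommended settings.

# Minimal workflow sketch; assumes TDA and TDAstats are installed, and argument
# values mirror the examples above rather than recommended settings.
library(TDApplied)

if(require("TDA") & require("TDAstats")) {

  # compute persistence diagrams for two sampled circles
  D1 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)
  D2 <- TDAstats::calculate_homology(TDA::circleUnif(n = 10,r = 1),
                                     dim = 0,threshold = 2)

  # convert one diagram to a data frame and plot it
  df_diag <- diagram_to_df(D1)
  plot_diagram(D1,title = "Sampled circle")

  # pairwise 2-wasserstein distances and a Gram matrix in dimension 0
  D <- distance_matrix(diagrams = list(D1,D2),dim = 0,
                       distance = "wasserstein",p = 2,num_workers = 2)
  G <- gram_matrix(diagrams = list(D1,D2),dim = 0,
                   sigma = 1,t = 1,num_workers = 2)

}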
Visual Studio Add-Ins Succinctly
By <NAME>
by <NAME>

Copyright 2013 by Syncfusion Inc.
2501 Aerial Center Parkway, Suite 200
Morrisville, NC 27560
USA
All rights reserved.

Important licensing information. Please read.
This book is available for free download from www.syncfusion.com on completion of a registration form. If you obtained this book from any other source, please register and download a free copy from www.syncfusion.com. This book is licensed for reading only if obtained from www.syncfusion.com. This book is licensed strictly for personal or educational use. Redistribution in any form is prohibited. The authors and copyright holders provide absolutely no warranty for any information provided. The authors and copyright holders shall not be liable for any claim, damages, or any other liability arising from, out of, or in connection with the information in this book. Please do not use this book if the listed terms are unacceptable. Use shall constitute acceptance of the terms listed.
SYNCFUSION, SUCCINCTLY, DELIVER INNOVATION WITH EASE, ESSENTIAL, and .NET ESSENTIALS are the registered trademarks of Syncfusion, Inc.

Technical Reviewer: <NAME>, senior product manager, Syncfusion, Inc.
Copy Editor: <NAME>
Acquisitions Coordinator: <NAME>, senior marketing strategist, Syncfusion, Inc.
Proofreader: <NAME>, content producer, Syncfusion, Inc.

Table of Contents

The Story behind the Succinctly Series of Books
About the Author
Preface
Chapter 1 Microsoft Visual Studio: Visual Studio add-ins; IDTExtensibility2 Interface; IDTCommandTarget Interface; Assemblies; Wizard
Chapter 2 Add-in Hello World: Create the project; Select your language; Application hosts; Name and Description; Add-in options; About Information; Summary; Connection Code; Exec Code; Query Status code; Generated files
Chapter 3 Hooking into the IDE: OnConnection Method; Linking to menu items; Linking to other windows; Adding to the code window; Other IDE windows; Adding a toolbar button; QueryStatus Method; Other methods; OnAddInsUpdate method; OnBeginShutdown method; OnDisconnection method; OnStartupComplete; A few caveats
Chapter 4 Application and Add-in Objects: Application Object; ActiveDocument; ActiveWindow; Debugger; Documents; Edition; ItemOperations; LocaleID; MainWindow; Mode; Solution; ToolWindows; Windows; AddIn Object; Add-in properties; Collection; Connected; Description; GUID; Name; Object; ProgID; SatelliteDLLPath; Assemblies; Extensibility.dll; CommandBars.dll; EnvDTE.dll; VSLangProj.dll
Chapter 5 Save Some Files Add-In: SaveSomeFiles add-in; Designing the selection form; Implementing the Exec() method; But not while debugging; Summary
Chapter 6 Testing Your Add-In: Configuration files; For Testing.AddIn; Add-in settings; LoadBehavior; CommandPreload; Add-in life cycle; 0: Call Manually; 1: Load at Start-up; Debugging; Common mistakes; Add-in not enabled on menu; Add-in never invoked; Events not triggering; Not seeing code changes; Removing an add-in module; Pesky Unable to delete message
Chapter 7 Visual Studio Environment: VS Info Wizard; VS Info Form; Exec() method; Getting options; Getting add-ins installed; Environment information; Getting an OS-friendly name; Displaying the form; Final results
Chapter 8 Solution: Solution Info Wizard; Updating the menu icon; Exec() method; Solution info; Totaling project information; Properties; Displaying the results; Solution methods; Close; FindProjectItem; SaveAs; SolutionBuild; Build; Clean; Run; BuildState
Chapter 9 Projects: Project Info Wizard; Exec() method; Getting each project; Project type; VSProject type; References; Project Items; ITEMS; Adding the JavaScript; Showing the Results; Styling the HTML
Chapter 10 IDE Windows: Windows; Tool windows; Document windows; Window object; Properties; Methods; ActiveWindow; MainWindow; Windows; Window Kind constants; Tool windows; Document windows; Is AJAX being used?; Getting the active window; Making sure it is HTML code; Parsing the HTML code; Showing our findings; Summary
Chapter 11 Documents: Getting the document; Document object; Text document object; Converting C# to VB; Summary
Chapter 12 Code Window: Simple code manipulation; Attaching to the code window; Responding to the click; Getting selected code; Tweak the code fragment; Putting the code back; Moving the code around; Text Document; Edit point; More complex code manipulation
Chapter 13 Code Model: Using the code model; Get the code model of a source file; Code element properties; Putting it all together; Class documenter; Attaching to the code editor window; Getting the code model; Finding the class elements; Building our header; Organizing the code elements; Variables; Enums; Properties; Methods; Writing the header back to the source window; Summary
Chapter 14 Tool Windows: Error List; Task List; Solution Explorer; Output Window; Searching for bad words; Bad words scan; Using a tool button; Only if a solution is open; Getting tool windows; Looping through the project; Marking bad words; Adding a clean-up task; Summary
Chapter 15 Source Code Generation: Source code helper class; Standardized headers; Wizard settings; Moving to File menu; Options screen; Generate the header; Add sub/function call; Add standard variables; Open a new window; Item Operations object; Summary
Chapter 16 Deploying Your Add-In: Installing the add-in; Add-in Manager; Summary
Chapter 17 Object Reference: Application Object (DTE2); Windows and documents; Document; Window; Solution and projects; Solution; Project; Project Item; Code manipulation; Text Document; Edit Point; Code Model; Code Element
Chapter 18 Add-in Helper Class: MakeEmptySolution; GetVSProjectsFolder; FindMenuIndex
Chapter 19 Third-Party Add-Ins: Microsoft add-ins; Community add-ins; Indent Guides

The Story behind the Succinctly Series of Books

<NAME>, Vice President, Syncfusion, Inc.

Staying on the cutting edge

As many of you may know, Syncfusion is a provider of software components for the Microsoft platform. This puts us in the exciting but challenging position of always being on the cutting edge. Whenever platforms or tools are shipping out of Microsoft, which seems to be about every other week these days, we have to educate ourselves, quickly.

Information is plentiful but harder to digest

In reality, this translates into a lot of book orders, blog searches, and Twitter scans. While more information is becoming available on the Internet and more and more books are being published, even on topics that are relatively new, one aspect that continues to inhibit us is the inability to find concise technology overview books. We are usually faced with two options: read several 500+ page books or scour the web for relevant blog posts and other articles. Just as everyone else who has a job to do and customers to serve, we find this quite frustrating.

The Succinctly series

This frustration translated into a deep desire to produce a series of concise technical books that would be targeted at developers working on the Microsoft platform. We firmly believe, given the background knowledge such developers have, that most topics can be translated into books that are between 50 and 100 pages. This is exactly what we resolved to accomplish with the Succinctly series. Isn't everything wonderful born out of a deep desire to change things for the better?

The best authors, the best content

Each author was carefully chosen from a pool of talented experts who shared our vision. The book you now hold in your hands, and the others available in this series, are a result of the authors' tireless work. You will find original content that is guaranteed to get you up and running in about the time it takes to drink a few cups of coffee.

Free forever

Syncfusion will be working to produce books on several topics. The books will always be free. Any updates we publish will also be free.

Free? What is the catch?

There is no catch here.
Syncfusion has a vested interest in this effort. As a component vendor, our unique claim has always been that we offer deeper and broader frameworks than anyone else on the market. Developer education greatly helps us market and sell against competing vendors who promise to enable AJAX support with one click, or turn the moon to cheese! Let us know what you think If you have any topics of interest, thoughts, or feedback, please feel free to send them to us at <EMAIL>. We sincerely hope you enjoy reading this book and that it helps you better understand the topic of study. Thank you for reading. Please follow us on Twitter and Like us on Facebook to help us spread the word about the Succinctly series! 14 About the Author <NAME> has been programming since 1981 in a variety of languages, including BASIC, Clipper, FoxPro, Delphi, Classic ASP, Visual Basic, and Visual C#. He has also worked in various database platforms, including DBASE, Paradox, Oracle, and SQL-Server from version 6.5 up through SQL 2012. He is the author of six computer books on Clipper and FoxPro programming, Network Programming, and Client/Server development with Delphi. He also wrote several third-party developer tools, including CLIPWKS, which allowed the ability to programmatically create and read native Lotus and Excel spreadsheet files from Clipper applications. Joe has worked for a number of companies including Sperry Univac, MCI-WorldCom, Ronin, Harris Interactive, Thomas Jefferson University, People Metrics, and Investor Force. He is one of the primary authors of Results for Research (market research software), PEPSys (industrial distribution software) and a key contributor to AccuBuild (accounting software for the construction industry). He has a background in accounting as well, having worked as a controller for several years in the industrial distribution field, although his real passion is computer programming. In his spare time, Joe is an avid tennis player and a crazy soccer player (he plays goalie). He also practices yoga and martial arts (holding a brown belt in Judo). 15 Preface Target Audience This book is for developers who are currently using Microsoft Visual Studio and want to add their own custom features to that development environment. It assumes you are comfortable programming in C# and are also comfortable writing classes and class methods to implement interfaces. It is designed to provide a quick overview of how to create an add-in, how to test your add-in, and how to install and share it. There are a number of add-in modules to provide working examples to whet your appetite. The focus of this book is the add-in ability in Visual Studio; it does not cover the more powerful, but substantially more complex, package add-in feature of Visual Studio. Tools Needed In order to be able to follow along with all of the examples in this book, you will need Microsoft Visual Studio 2010 or Visual Studio 2012. Many of the examples may work in older versions of Visual Studio as well. The extensibility features have been in the IDE since Visual Studio release in 1997. Note, however, that add-in modules are not supported in Express editions of Visual Studio. Formatting Throughout the book, I have used several formatting conventions. Note: Ideas and notes about the current topic. Tip: Ideas, tips, and suggestions. 
16 Code Blocks public void Exec(string commandName, vsCommandExecOption executeOption, ref object varIn, ref object varOut, ref bool handled) { handled = false; } Using Code Examples All code samples in this book are available at https://bitbucket.org/syncfusion/visualstudio-addins_succinctly/. 17 Chapter 1 Microsoft Visual Studio Microsofts Visual Studio is one of the most popular integrated development environments (IDE) available today. Yet as popular and powerful as Visual Studio is, there may be times when you want to add your own quirks to the tool. And fortunately, Microsoft makes it pretty easy to do just that. You can create add-ins to the various menus and toolbars to perform a variety of tasks, pretty much anything you can program. The add-in can be written in Visual Basic, C#, or C++; there are no arcane or additional languages to learn. Visual Studio has been around in various incarnations since the late 1990s. Early Microsoft IDE products were separate for the language you were working in; Visual Basic was one tool, Visual C++ another, etc. However, with the release of Visual Studio 97, Microsoft began to bundle the various programming languages into the same IDE. Visual Studio 97 included Visual Basic, Visual C++, Visual J++, Visual FoxPro, and Visual Interdev. When Microsoft created Visual Studio 97, it was built as an extensible core platform, making it easier for Microsoft developers to integrate new features into the IDE. They also allowed outside developers to write add-ins to enhance the product using the same extensible platform that the Visual Studio engineers worked in. As the Visual Studio platform continued to grow, third-party developers continually wrote add-ins to integrate tools into Visual Studio. Shortly after the release of Visual Studio 2008, Microsoft created a website called the Visual Studio Gallery. New tools and enhancements are added, and as of this writing, there are more than 3,000 add-ins listed in the gallery. The extensibility built into Visual Studio makes it an excellent environment to start and build your own improvements to the IDE. Getting started down that path is what this book is all about. Visual Studio add-ins To build a Visual Studio add-in, you will need to create a new class that will provide implementation methods for two interfaces from the Extensibility and EnvDTE namespaces. An interface is a module containing declarations of methods and events, but with no implementation provided. This approach allows your add-in to plug and play into the Visual Studio IDE. You will also need to generate an XML configuration file, which tells Visual Studio how to load your add-in and where your add-ins assembly code file (DLL) can be found. IDTExtensibility2 Interface This interface from the Extensibility namespace is used to hook your add-in into the Visual Studio IDE. Although you will need to create method implementations for each of the interface events, only the OnConnection method is needed to get your add-in loaded into the IDE. 18 IDTCommandTarget Interface This interface from the EnvDTE namespace is used to process a request from the IDE to run your add-in. The first parameter to both methods is the command name, so your add-in code knows which (of possibly multiple) commands Visual Studio is asking about. Assemblies When implementing an add-in module, the following assemblies need to be included in the project: Extensibility EnvDTE There are later versions of the EnvDTE assembly, which add on additional classes, enums, and interfaces. 
If you choose to implement an add-in that interacts with some of the later versions of Visual Studio, you may need to include these assemblies in your project as well: EnvDTE: All versions of Visual Studio. EnvDTE80: VS 2005 and above, interfaces typically ending with 2, e.g., EnvDTE2. EnvDTE90: VS 2008 and above, interfaces ending with 3, e.g., HTMLWindow3. EnvDTE100: VS 2010 and above. When you create an add-in module using the Add-in Wizard, EnvDTE and EnvDTE80 are typically included for you. Wizard Visual Studios New Project menu includes a wizard that will generate most of the code you need to integrate your add-in into the IDE. It will also generate the XML configuration file to allow the IDE to find and load your add-in program. In the next chapter, we will use this wizard to create the famous Hello World programming example. 19 Chapter 2 Add-in Hello World Ever since the classic example in the book The C Programming Language, the Hello World program has been the starting point for new example programs. In this chapter, we will use the project wizard to create a Visual Studio add-in version of this classic example. Create the project To create a new add-in project, we will use the Add-in Wizard built into Visual Studio: 1. Open Visual Studio and select New Project on the File menu. 2. Choose Other Project Types from the Installed Templates list. 3. Choose Extensibility. Figure 1: Creating a new project There are two types of add-ins you can create, one that can be loaded into Visual Studio (which is the focus of this book), as well as a shared add-in that can be used across different Microsoft products (such as Word, Excel, Outlook, etc.). Note: All of the add-in modules we create in this book will start with the wizard screen, so you will use it quite a bit. It is definitely a time-saver compared to creating the implementation class code and XML files manually. 20 Select your language After the wizard splash screen, you will be given the option to select the programming language you want the code to be generated in. The options are: Visual C# Visual Basic Visual C++ /CLR Visual C++/ATL For the examples in this book, we will work in Visual C#, but you may use whichever language you are most comfortable programming in. This choice determines the language in which the add-in project will be generated, but does not impact running or using the add-in. Application hosts The application hosts selection screen lets you indicate which host applications can run your add-in. The options are: Visual Studio Visual Studio Macros You can select either or both options. For the examples in this book, we only need to select Visual Studio. The add-in XML file will contain a <HostApplication> entry for each option selected. Most add-ins in this book will have a UI component, so you shouldn't need to select Visual Studio Macros. Note: When using Visual Studio macros, interactive commands such as LaunchWizard, OpenFile, etc. are not available. Other commands, such as Quit, behave differently. For example, the Quit command closes the Macros IDE and returns to Visual Studio, rather than shutting down Visual Studio itself. Name and Description You can provide a name and description for your add-in. The name will be used as the menu label as well as the internal name for the command your code implements. The description is stored as tooltip text, which the IDE displays when the user selects the add-in from the Add-in Manager window. 
21 Figure 2: Adding a name and description to the add-in Note: The wizard will generate a unique, qualified command name consisting of <filename>.Connect.<commandName> when referencing your add-in modules commands. Add-in options The add-in options screen helps control the generated code for your add-in. In our examples, we are going to hook our class into the Tools menu, so we select the first option to generate code on the Connection method to load our add-in. 22 Figure 3: Add-in options For debugging purposes, do not select the Load Add-in check box when the host application starts. Ignoring this option will make debugging easier. When you are ready to deploy your application, it is an easy update to have your add-in load at start-up time. Note: In some examples, we might connect to a different menu or toolbar, but it is still beneficial to let the wizard generate the default method, even if we tweak its code. About Information The About Information option lets you specify text to display in the Visual Studio About box. When you run the About Visual Studio menu option, the dialog box displays a list of all installed products, including add-ins. When a user navigates to your add-in, any information you provide will be displayed below the add-in list. Summary After you have filled in all of the information, a summary screen will be shown: Figure 4: Summary of add-in options Double-check your selections, and if they all look okay, click Finish. The wizard will work for a bit, and then produce a source file that provides implementation methods for the IDTExtensibility2 and IDTCommandTarget interfaces. It will also generate the XML configuration files with your add-in load instructions. Connection Code The code generated by the OnConnection method will look like the following code sample (it may be different depending upon language and settings): 23 public void OnConnection(object application, ext_ConnectMode connectMode, object addInInst, ref Array custom) { _applicationObject = (DTE2)application; _addInInstance = (AddIn)addInInst; if(connectMode == ext_ConnectMode.ext_cm_UISetup) { object []contextGUIDS = new object[] { }; Commands2 commands = (Commands2)_applicationObject.Commands; string toolsMenuName = "Tools"; //Place the command on the tools menu. //Find the MenuBar command bar, CommandBars.CommandBar menuBarCommandBar = ((CommandBars.CommandBars)_applicationObject.CommandBars)["MenuBar"]; //Find the Tools command bar on the MenuBar command bar: CommandBarControl toolsControl = menuBarCommandBar.Controls[toolsMenuName]; CommandBarPopup toolsPopup = (CommandBarPopup)toolsControl; try { //Add a command to the Commands collection: Command command = commands.AddNamedCommand2(_addInInstance, "HelloWorld", "HelloWorld", "Executes the command for HelloWorld", true, 59, ref contextGUIDS, (int)vsCommandStatus.vsCommandStatusSupported+ (int)vsCommandStatus.vsCommandStatusEnabled, (int)vsCommandStyle.vsCommandStylePictAndText, vsCommandControlType.vsCommandControlTypeButton); //Add a control for the command to the tools menu: if((command != null) && (toolsPopup != null)) { command.AddControl(toolsPopup.CommandBar, 1); } } catch(System.ArgumentException) { // If here, the exception is probably because a command with that name // already exists. If so there is no need to re-create the command and we // can safely ignore the exception. } } } The code checks to see the connect mode, and only installs the add-in during the UI Setup call. 
This event is called during the splash screen display when Visual Studio starts. OnConnection will be called other times by Visual Studio, but there is no need to install the command into the menu structure more than once. The code will search the IDEs command menu structure to find the Tools menu, and then add your add-in to that menu. Most add-in modules are placed on the Tools menu, but you are free to put them anywhere youd like. 24 Notice that the Application parameter is assigned to _applicationObject, a private variable in the class. This variable and the _addInInstance variable will allow your add-in code to interact with various Visual Studio elements. Note: The private class variables (_applicationObject and _AddInInstance) are populated during the connection routine, so they can be referred to during your Exec() and Query Status() method calls. Exec Code The generated code includes an Exec() method, which is where youll add the code you want your add-in to execute. The Handled variable, passed by reference, should be set to true to inform Visual Studio that the particular command was processed by the add-in method. public void Exec(string commandName, vsCommandExecOption executeOption, ref object varIn, ref object varOut, ref bool handled) { handled = false; if(executeOption == vsCommandExecOption.vsCommandExecOptionDoDefault) { if(commandName == "HelloWorld.Connect.HelloWorld") { MessageBox.Show("Hello World!", "Hello" ,MessageBoxButtons.OK); handled = true; return; } } } Note: You will need to add a reference to System.Windows.Forms to include the MessageBox code in your add-in. Query Status code This method returns a status code to Visual Studio when it requests the current status of your method. public void QueryStatus(string commandName, vsCommandStatusTextWanted neededText, ref vsCommandStatus status, ref object commandText) { if(neededText == vsCommandStatusTextWanted.vsCommandStatusTextWantedNone) { 25 if(commandName == "HelloWorld.Connect.HelloWorld") { status = (vsCommandStatus)vsCommandStatus.vsCommandStatusSupported |vsCommandStatus.vsCommandStatusEnabled; return; } } } Although the generated code provides the basic query result, you may need to adjust the code to return a Not Supported status if the add-in module should not be called during the debugging process of Visual Studio. If your code does not update the status variable, the default behavior is to disable the menu or toolbar item. Generated files The wizard will generate the standard project files (Project and AssemblyInfo.cs), as well as the source code files containing your add-in code: Connect.cs: Generated source code of the add-in. <YourName>.AddIn: XML Configuration file for your add-in. <YourName> - For Testing.AddIn: XML configuration file to test your add-in. 26 Chapter 3 Hooking into the IDE In this chapter, we will look at the code to hook your add-in module into Visual Studio, and see how you can find the menus and tool windows to integrate with your add-in. OnConnection Method The OnConnection method is the method used to load your add-in to the Visual Studio IDE. The wizard-generated code searches the GUI controls for the Tools menu item and adds your add-in module as the first item in that drop-down menu. Tip: The wizard will run the connection code when ConnectMode is ext_cm_UISetup. 
If you want the add-in module to attach to Tool windows or other items, rather than the standard bar or menu, you might want to wait to connect until ConnectMode is ext_cm_AfterStartup to ensure the control you want to connect to is created. Linking to menu items Visual Studio contains a large collection of commands to perform the IDE functions and a set of controls to provide the user with access to these commands. To link your add-in, youll need to add your command to Visual Studios collection and youll need to add a control into the GUI elements of Visual Studio. We can review how to do these steps by exploring the code in the OnConnection method generated by the wizard. Commands2 commands = (Commands2)_applicationObject.Commands; string toolsMenuName = "Tools"; These two lines put a reference to the command collection into a variable and define the menu (from the top bar) that we want to hook into. You can easily replace the string with the File, Edit, View, Help, or some other menu caption, whichever is the best spot for your add-in. In our example program in Chapter 5, we are going to move our add-in module into the File menu, rather than the Tools menu. VisualStudio.CommandBars.CommandBar menuBarCommandBar = ((VisualStudio.CommandBars.CommandBars) _applicationObject.CommandBars)["MenuBar"]; CommandBarControl toolsControl = menuBarCommandBar.Controls[toolsMenuName]; CommandBarPopup toolsPopup = (CommandBarPopup)toolsControl; 27 These next lines get the top menu bar and find the drop-down menu associated with the string we specified previously. The menu control is placed in the toolsPopup variable. At this point, we have both the commands collection and the toolsPopup GUI control. The following two lines add our add-in to the command collection and the GUI control. Command command = commands.AddNamedCommand2(_addInInstance, "HelloWorld", "Hello World", "Classic Hello World example", true, 59, ref contextGUIDS, (int)vsCommandStatus.vsCommandStatusSupported+ (int)vsCommandStatus.vsCommandStatusEnabled, (int)vsCommandStyle.vsCommandStylePictAndText, vsCommandControlType.vsCommandControlTypeButton); //Add a control for the command to the tools menu: if((command != null) && (toolsPopup != null)) { command.AddControl(toolsPopup.CommandBar, 1); } The AddNamedCommand2() method has a number of arguments which we can adjust to control our new menu item. After the add-in instance and command name, the next two parameters are the button text (Hello World) and the tooltip text (Classic Hello World example). The next parameter is the MSOButton flag, which indicates how the bitmap parameter is interpreted. The default value of true means the bitmap parameter is an integer ID of a bitmap installed in the application (Visual Studio in this case). The hard-coded 59 is the bitmap parameter which is used to choose the icon to add next to the menu; text.59 is the add-in default icon (a smiley face). However, there are a lot of other options available. A few selected ones are shown in the following code and can be defined as constants in your add-in code. const int CLOCK_ICON = 33; const int DEFAULT_ICON = 59; const int EXCEL_ICON = 263; const int FOXPRO_ICON = 266; const int TOOLS_ICON = 642; const int PUSHPIN_ICON = 938; const int PRINTER_ICON = 986; const int LIGHTBULB_ICON = 1000; const int PAPERCLIP_ICON = 1079; const int DOCUMENTS_ICON = 1197; const int RED_STAR_ICON = 6743; 28 Tip: There are thousands of icon resources embedded within Visual Studio. 
You can use a resource editor to preview some of the icons you might want to include on your add-ins menu. The other parameters are: Optional list of GUIDs indicating when the command can be called (typically an empty array is passed). Command Status: Typically Supported and Enabled. Command Style: How the button is presented (icon and text). Control Type: Usually a button control. You can tweak the command line, for example, to have your add-in module initially disabled and later have your code enable it during the Query Status event. The AddControl() method attaches the newly created command object to the pop-up menu youve chosen. The second parameter, the 1 in this example, refers to the menu position where the new item should be placed. Note: 1 puts the new object at the top of the menu. You can also get the count of controls on the command bar pop-up menu and add 1, which will put the option at the end of the menu. Linking to other windows In addition to the main menu structure, you can also attach your add-in to the various context menus of various IDE windows, such as the Code window or the Solution Explorer. However, if you do this, you should typically load your command during the AfterStartup connection mode, rather than during UI setup, just to ensure the window you are attempting to attach to is created already in Visual Studio. if (connectMode == ext_ConnectMode.ext_cm_AfterStartup) { } Adding to the code window The following code sample shows how to add a pop-up menu item to the Code Window tool window of Visual Studio. Note we are using Code Window rather than Menu Bar. 29 // Create the command object. object[] contextGUIDS = new object[] { }; Commands2 commands = (Commands2)_applicationObject.Commands; Command cmd = commands.AddNamedCommand2(_addInInstance, "HelloWorld", "Hello World", "Hello from Code Window ", true, 59, ref contextGUIDS, (int)vsCommandStatus.vsCommandStatusSupported+ (int)vsCommandStatus.vsCommandStatusEnabled, (int)vsCommandStyle.vsCommandStylePictAndText, vsCommandControlType.vsCommandControlTypeButton); // Create a command bar on the code window. CommandBar CmdBar = ((CommandBars)_applicationObject.CommandBars)["Code Window"]; // Add a command to the Code window's shortcut menu. CommandBarControl cmdBarCtl = (CommandBarControl)cmd.AddControl(CmdBar, CmdBar.Controls.Count + 1); cmdBarCtl.Caption = "HelloWorld"; Note that in this example, we are adding our module to the end of the context menu, not the first item. We will cover creating an add-in attached to the code window in Chapter 12. Other IDE windows There is a large number of other command bar windows you can interact with, including: Formatting Image Editor Debug Table Designer You can find all the available Command Bar windows with the following code added to your Exec() method. CommandBars commandBars = (CommandBars)_applicationObject.CommandBars; StringBuilder sb = new StringBuilder(); foreach (CommandBar cb in commandBars) { sb.AppendLine(cb.Name); } MessageBox.Show(sb.ToString(), "Windows" ,MessageBoxButtons.OK); 30 The command bar object has both a Name and NameLocal property (holding localized menu names for international versions of Visual Studio). However, when you search for menus and windows, you can use the English name, which is how they are stored internally. Adding a toolbar button The following code sample shows how to add a toolbar button to the standard toolbar of Visual Studio. Note we are using Standard instead of Menu Bar. // Add the command. 
Command cmd = (Command)_applicationObject.Commands.AddNamedCommand(_addInInstance,
    "HelloCommand", "HelloCommand", "Hello World", true, 59, null,
    (int)vsCommandStatus.vsCommandStatusSupported +
    (int)vsCommandStatus.vsCommandStatusEnabled);

CommandBar stdCmdBar = null;

// Reference the Visual Studio standard toolbar.
CommandBars commandBars = (CommandBars)_applicationObject.CommandBars;
foreach (CommandBar cb in commandBars)
{
    if (cb.Name == "Standard")
    {
        stdCmdBar = cb;
        break;
    }
}

// Add a button to the standard toolbar.
CommandBarControl stdCmdBarCtl = (CommandBarControl)cmd.AddControl(stdCmdBar,
    stdCmdBar.Controls.Count + 1);
stdCmdBarCtl.Caption = "Hello World";

// Set the toolbar's button style to an icon button.
CommandBarButton cmdBarBtn = (CommandBarButton)stdCmdBarCtl;
cmdBarBtn.Style = MsoButtonStyle.msoButtonIcon;

When adding to a toolbar, the MsoButtonStyle value controls how the button appears on the toolbar. Some options are:

msoButtonIcon: Only show the icon.
msoButtonIconAndCaption: Show icon and caption text.
msoButtonIconAndWrapCaption: Show icon and wrap caption text.

QueryStatus Method

The QueryStatus method is called by Visual Studio whenever the IDE wants to display your menu item. The method returns a status code to the IDE, indicating whether the menu option is currently supported or enabled. Visual Studio then uses this status to determine the menu's appearance and whether or not the user can activate it.

Note that there is no Command Status Disabled option. If you want your command to be disabled, simply do not update the status variable, since the default status is disabled.

Other methods

There are other methods you can use to interact with Visual Studio. While these methods are generated as empty modules by the wizard, you might need them depending on your add-in's behavior.

OnAddInsUpdate method

This method is called when add-ins are loaded into the Visual Studio environment (as well as when the user clicks OK from the Add-in Manager window). If your add-in is dependent on other add-ins, you can check those dependencies during this method.

OnBeginShutdown method

When the user begins to close Visual Studio, this method is called. This is the time to clean up any resources your add-in has created, and save any user configuration information needed for the next time the add-in is loaded.

OnDisconnection method

This method is called when Visual Studio unloads your add-in. If you created or locked any resources when your add-in was connected, this is the method you can use to unlock or free those resources.

OnStartupComplete method

This method is called once Visual Studio has completed the start-up process. If your add-in is not loaded due to a component dependency, you could install your add-in during this method to ensure all components within Visual Studio have been loaded.

A few caveats

Before we dig in and design some add-in modules, there are a couple of tips to keep in mind.

Tip: Avoid admin rights. When designing your add-in module, keep in mind that starting with Windows Vista, Windows employs User Account Control (UAC), which means it is very likely that Visual Studio will not have admin rights.

Tip: Be careful about storing setting information in non-writable folders or registry entries. Use the APPDATA system variable to find a folder to store your settings.

By keeping the new security model in mind, you can prevent your add-in modules from requiring Visual Studio to be run in admin mode or seeing the access denied error.
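To make the APPDATA tip above concrete, here is a minimal sketch of a helper that resolves a per-user settings file. The folder name "MyAddIn" and the file name "settings.xml" are hypothetical placeholders rather than values produced by the wizard, and this is just one way to follow the tip.

// Minimal sketch of the settings-folder tip; "MyAddIn" and "settings.xml" are
// hypothetical placeholder names, not wizard-generated values.
using System;
using System.IO;

private string GetSettingsFilePath()
{
    // APPDATA resolves to a per-user, writable location, so no admin rights are needed.
    string folder = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
        "MyAddIn");

    // Safe to call even if the folder already exists.
    Directory.CreateDirectory(folder);

    return Path.Combine(folder, "settings.xml");
}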
33 Chapter 4 Application and Add-in Objects In this chapter, we will give a quick overview of the two main object classes that Visual Studio provides to add-in authors to interact with the IDE and with other add-ins. Application Object The _applicationObject variable contains a DTE2 object reference, which provides properties and methods to allow you to interact with the Visual Studio IDE. Many of these properties will be explored in subsequent chapters and examples. Some of the more commonly used ones are: ActiveDocument This property returns a document object reference to the document that currently has focus. The object contains information such as the file name, whether the document has been saved, the selected text, the kind of document being edited, etc. ActiveWindow This property returns a window object reference to the currently active window. The window object contains the caption, kind of window (tool or document window), the size (width and height), and position (left and top). It also contains a reference to the document currently in the window. You can do some basic manipulation of the window, such as hiding it, moving it, closing it, etc. Debugger This property returns a reference to the debugger object, which allows you to find out the current breakpoints, processes running on the machine, the current program and process, etc. You can also move to various code points, evaluate expressions, etc. Documents The Documents property is a collection of all currently open documents within Visual Studio. Each individual item refers to a document within the IDE. In Chapter 11, we will work with document objects and their contents. 34 Edition This property contains a string indicating the edition of Visual Studio, i.e. Professional, Enterprise, etc. It can be useful if your add-in should not be run in certain editions, for example. ItemOperations This property provides an object class that allows you to add new or existing items to the current project. You can also navigate to a URL and have the IDE open a browser window. We will use this object in Chapter 15 when we generate source code files. LocaleID This property returns the locale in which the development IDE is running. You might use this to customize your add-in for various countries and languages. MainWindow This property returns the main parent window for the IDE. It contains all of the various window properties and you can explore its LinkedWindows collection to find the various other windows linked to it. Mode The Mode property indicates whether the IDE is in design (vsIDEModeDesign) or debug mode (vsIDEModeDebug). You might want to disable your add-in from running while the user is debugging code. Solution This property returns a reference object to the currently open solution in the IDE. The solution object contains a collection of all projects in the solution, the file name, the global settings for the solution, whether it has been saved, etc. In addition, you can add and remove files from the solution, iterate projects, save the solution as another name, etc. We will explore the solution object in Chapter 8. ToolWindows This property returns an object that makes it easier to search for some of the common tool windows, such as the Task List, the Solution Explorer, the Error list, etc. We explore tool windows in Chapter 14. 35 Windows This property is a collection of windows currently open within Visual Studio. 
Each item in the collection is a window object, allowing you to resize and move windows, update captions, change focus, etc. We explore the windows collection in detail in Chapter 10. AddIn Object The _addInInstance object is an instance of the AddIn class. The _addInInst parameter is passed to your add-in during the onConnection method and it is assigned to the private class variable _addInInstance. This variable provides details specific to this instance of your add-in. Add-in properties The following properties are available for your add-in. Collection The Collection property returns a reference to the collection of add-in objects currently installed in Visual Studio. You can use this property to check for any dependencies your add-in may have. Connected This is a Boolean value indicating whether your add-in is loaded and connected within Visual Studio. You can connect your add-in programmatically by setting this property to True if not already connected, i.e.: if (_addinInstance.Connected) { } Description This string property contains the descriptive text that is displayed in the Add-in Manager and sometimes as tooltip text. The property is read/write, so you can dynamically update the title in your add-in. 36 GUID This read-only string contains the CLSID of the add-in from the add-in registry entry. Name This read-only string property holds the command name associated with the add-in. It is the name parameter passed to the AddNamedCommand method of the Visual Studio Commands collection. Object The Object property is a reference to the instance of the actual object containing your add-ins implementation code. You can use this property to access any additional information youve stored in your object class that is needed to implement your add-in module. ProgID This read-only string property contains the unique program name for your add-ins command, typically the file name, class name, and command name delimited by periods. SatelliteDLLPath This read-only string is the full path name where the DLL containing the code implementing the add-in is located. Assemblies The following assemblies are used by the add-in modules and can be added into your code as necessary. Keep in mind that some features are only available in later versions of Visual Studio, so only use them if you know the minimum version your add-in will run in. Extensibility.dll This assembly contains all versions of Visual Studio-IDTExtensibility2 and enums for connection purposes. CommandBars.dll Starting in VS 2005, Microsoft.VisualStudio.CommandBars.dll contains the command bar model. Early versions used the command bar model from Office.dll. 37 EnvDTE.dll This assembly contains the extensibility model of Visual Studio to manage the IDE, solutions, projects, files, code, etc. Later versions are all additive to provide more version specific features: 80 (VS 2005, 2008, 2010) 90 (VS 2008, 2010) 100 (VS 2010) VSLangProj.dll This assembly contains more detailed extensibility models, specifically for VB.NET and C# projects. 38 Chapter 5 Save Some Files Add-In Now that we have explored the various parts of an add-in module, we can put them all together and write a simple add-in project. We can start by creating a basic add-in using the wizard. Be sure to have the wizard generate our starting code and the code to hook it into the Tools menu. Our add-in is going to look at all documents that have been edited, but not saved, and display them in a check box list. 
Users can then mark the ones they want to save and click to save only those files. Our add-in will be called SaveSomeFiles. SaveSomeFiles add-in We can start our add-in using the Add-in Wizard described in Chapter 2. Use the following settings while running the wizard: Visual C# (or your preferred language). Application Host: Only Visual Studio. Name/Description:SaveSomeFiles and Selectively save open files. Create UI Menu and make sure load at start-up is not selected. Verify the settings in the Summary screen, and if they look okay, generate the code. Note: Add a reference to System.Windows.Form in your add-in projects references. Youll need this for the GUI screen we will build. You will want to include this for most add-ins you create. Designing the selection form The selection form will be a standard Windows form with a CheckedListBox control, Save, and Cancel buttons. Our add-in will populate the list box and then display it to the user. Once the user clicks Save, the code will save the selected files. If the user clicks Cancel, the dialog box will close and no action will take place. 39 Figure 5: Selection form Create a Windows form as shown in Figure 5. Name the checked list box control on the form CLB. Be sure to set the Modifiers property to Public, so we can access the checked list box from within our add-in code. In general, any control on the form that will be populated by your add-in will need to be set to public. Figure 6: Setting the Modifiers property to Public Throughout this book, we will create several Windows forms for our add-ins. Feel free to indulge your creative talents to make these screens look nice. The code samples will provide the name and type of control the add-in will interact with. Other than that, we wont spend too much time detailing how to create forms. Implementing the Exec() method The code in the Exec() method first needs to find out which files need to be saved. It does this by iterating through the documents collection of the _applicationObject variable, as shown in the following code sample. Any file that has been modified but has not been saved is added to the check box on the form we created. 40 if(commandName == "SaveSomeFiles.Connect.SaveSomeFiles") { SaveFiles theForm = new SaveFiles(); // Create the form. theForm.CLB.Items.Clear(); // Clear out the items stack. // Iterate through each document currently open in the IDE. foreach (Document theDoc in _applicationObject.Documents) { if (theDoc.Saved==false) { theForm.CLB.Items.Add(theDoc.FullName.ToString()); } } // Show the form with files to be saved. theForm.ShowDialog(); After the user closes the dialog box, we need to step through the checked items, and call the Save method on the corresponding document object. if (theForm.DialogResult == DialogResult.OK) { foreach (int xx in theForm.CLB.CheckedIndices) { foreach (Document theDoc in _applicationObject.Documents) { if (theDoc.FullName.ToString() == theForm.CLB.Items[xx].ToString()) { theDoc.Save(); } } } } theForm.Dispose(); Once weve completed the Save operation for the requested files, we can dispose of the form we created earlier in the code. But not while debugging We want to adapt our code so that the Save Some Files option is not available if the IDE is in debug mode. To implement this action, we need to update the Query Status method. 
public void QueryStatus(string commandName, vsCommandStatusTextWanted neededText, ref vsCommandStatus status, ref object commandText) { if(neededText == vsCommandStatusTextWanted.vsCommandStatusTextWantedNone) { 41 if(commandName == "SaveSomeFiles.Connect.SaveSomeFiles") { status = (vsCommandStatus)vsCommandStatus.vsCommandStatusSupported | vsCommandStatus.vsCommandStatusEnabled; } } return; } After weve identified our command (SaveSomeFiles.Connect.SaveSomeFiles), we need to add an additional IF test to determine which status code to return. If we are in debug mode, then we return the default status; otherwise, we return the standard enabled and supported result. The following code can be added into the Query Status method. if (commandName == "SaveSomeFiles.Connect.SaveSomeFiles") { if (_applicationObject.Mode == vsIDEMode.vsIDEModeDebug) { status = (vsCommandStatus)vsCommandStatus.vsCommandStatusSupported; } else { status = (vsCommandStatus)vsCommandStatus.vsCommandStatusSupported | vsCommandStatus.vsCommandStatusEnabled; } return; } else { status = (vsCommandStatus)vsCommandStatus.vsCommandStatusSupported | vsCommandStatus.vsCommandStatusEnabled; } return; } When the IDE attempts to build the File menu, it will ask any add-ins whether they are enabled. If we are in debug mode at the time, the menu option will be disabled. Summary In this chapter, we designed a simple add-in module to show how to interact with Windows forms and to extract information from the _applicationObject variable. Many add-in modules will have similar approaches, collecting and displaying information to the user, and then interacting through the variable directly with Visual Studio. 42 Chapter 6 Testing Your Add-In Once youve coded your add-in, you can easily test it using the Visual Studio debugger. However, debugging can be a bit easier once you review the XML configuration files and understand the add-in life cycle. Configuration files When you first create an add-in using the wizard, the wizard generates your class code, and also two XML configuration files that control your add-ins behavior. These are: <Add-in Name> - For Testing.Addin <Add-in Name>.AddIn For Testing.AddIn When you generate an add-in using the wizard, it creates the XML file and places it into the add-in folder. This file allows Visual Studio to load your add-in, but refers to the assembly DLL in your project folder. Other than the assembly location, this file has the same content as your actual AddIn file youll use to install the add-in. Of course, this can create a catch-22 for future debugging sessions. When Visual Studio loads the add-in, the DLL containing the add-in code is locked by the IDE. Hence, you might see this message when you attempt to tweak and build your add-in: Unable to delete file ".\bin\CodeWindow.dll". Access to the path 'C:\Users\..\documents\visual studio 2010\Projects\CodeWindow\CodeWindow\bin\CodeWindow.dll' is denied. Add-in settings The add-in configuration file contains some settings that control how and when the add-in is loaded. These settings can be found in the <Addin> element in the XML file. LoadBehavior This value indicates when the add-in is loaded. The available values are: 43 0: Add-in must be loaded manually. 1: Add-in automatically loads when IDE starts. 4: Add-in loads when started from command prompt. You will rarely see option 4, since most add-ins provide a UI and would not make sense when run from the command line. 
Option 0 is good for debugging, because the add-ins DLL is only loaded when you open the add-in, not every time you open the IDE. CommandPreload This value determines if the add-in is loaded via the Add-in Manager or automatically when Visual Studio starts (the first time after the add-in file is installed): 0: Add-in must be manually started by Add-in Manager. 1: Add-in is loaded first time when Visual Studio starts after install. Sometimes, when you are debugging an add-in, you might need to set Command Preload to 0 in the add-in files to allow you to update the DLL when you compile your add-in. If you set it to 1, you might get the Unable to delete error when you attempt to rebuild your add-in file. I recommend generating the Add-in Wizard without the add-in being loaded at start-up to make it easier for testing and debugging. Once you are ready to deploy your add-in, you can manually change the Load Behavior flag to 1 if you want your add-in loaded at start-up time. Add-in life cycle The Load Behavior setting controls which events from your connect class are called and when. Regardless of the setting, the first two events are always: OnConnect with the cm_UISetup connect mode. Disconnect with the dm_UISetup disconnect mode. This is why when you attach your add-in module to Visual Studio during the CM_UISetup mode, it is always called. 0: Call Manually When set to Call Manually, the add-in does not get called again until you actually request it from the menu. The command has been added to the IDE, and the menu updated, but the code is not loaded. Once you select the item from the menu or toolbar, Connect with cm_AfterStartup is called. At this point, the add-in is loaded to memory. You can manually unload it using the Add-in Manager, in which case Disconnect with dm_userShutdown is called. If you dont close it manually, the events OnShutdown and Disconnect with dm_HostShutdown are called. If youve closed it manually, these events will not be called since the add-in is no longer in memory. 44 If you plan on loading the add-in manually, be sure to set any configuration information you want to save during the Disconnect method. 1: Load at Start-up When set to load at start-up, additional events are triggered since the IDE is loading your add-in as part of the start-up code: Connect with cm_Startup. Adds-in update event. Start-up complete event. This means that once the IDE makes its appearance, your add-in module is loaded in memory. Unless you unload it manually using the Add-in Manager, the following events will be triggered when the user shuts down Visual Studio: Begin Shutdown. Disconnect with dm_HostShutdown. The events that your add-in will respond to are handled differently. During debugging sessions, I generally only load the add-in when called from the menu. However, once the code is ready for deployment, I typically set the flag to 1, load at start-up. Debugging When you debug your add-in by pressing F5, a second instance of Visual Studio will be loaded. When this instance loads, it will see the newly added XML file and load the add-in module into Visual Studio. At this point, you can step through the add-in and debug the code or you can run it and test its behavior. Common mistakes Here are some common mistakes that might pop up while using your add-in. Add-in not enabled on menu If your add-in is not available on the menu, check the QueryStatus method and ensure that the return status variable contains both Command Supported and Command Enabled. 
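As a quick reference, the enabled state is the combination of both flags. A minimal sketch of the assignment you would expect to find inside QueryStatus(), matching the code shown in the previous chapter, is:
status = (vsCommandStatus)vsCommandStatus.vsCommandStatusSupported |
    vsCommandStatus.vsCommandStatusEnabled;
If either flag is missing, the command will show up grayed out on the menu, or not appear at all.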
45 Add-in never invoked If the command name in the Exec command does not match the add-in class name and friendly name in the add-in XML file, your exec method will never reach your code. Events not triggering Be sure the events you are expecting to be called are compatible with the Load Behavior mode set in the XML file. Not seeing code changes Sometimes, your add-in code appears not to recognize recent code changes. If this is the case, the most likely culprit is that the wrong DLL version was loaded. I would recommend making sure the Load Behavior flag is set to 0, restarting the IDE, and running the add-in from the menu. This should load the most recent version of the DLL. Removing an add-in module There are times you might need to remove an add-in module entirely. The easiest way to do this is to find and delete (or rename) the Addin XML file in any of the paths specified in the Add-in and Macros properties. In the Tools menu, select Options, and then Environment. Pesky Unable to delete message Occasionally, you might not be able to shake that Unable to Delete message, no matter how many times you restart the IDE and tweak settings. If your add-in is not marked as load on startup, the DLL should not be loaded. However, in the event you cannot unload it, you can start the IDE with the /SafeMode switch, which loads Visual Studio without any add-in modules at all. Even if you start the IDE in Safe Mode, you can still debug since the second instance of the IDE will start in regular mode without the /SafeMode switch being applied. If you are having trouble working with an add-in, consider making an add-in free shortcut on your desktop to run the IDE with add-ins. 46 Chapter 7 Visual Studio Environment In this chapter we will create an add-in module to provide some details about the Visual Studio installed version and the computer that the IDE is currently running on. Although the collected information will be displayed in a Windows form, you could also add logic to create a text file of the information, allowing a user to duplicate the Visual Studio environment on another computer if desired. VS Info Wizard Start your VS info add-in by using the wizard and the following settings: Visual C# (or your preferred language). Application Host: Only Visual Studio, because we wouldnt use this in a macro. Name/Description: VS_Info and Info about Visual Studio and Dev Environment. Create UI Menu and load at start-up is not selected. Verify the settings at the Summary screen, and if they look okay, generate the code. VS Info Form We need to create a form to hold our Visual Studio information, so we will need to add a Windows form to the project. In addition, we will be using the String Builder object to assemble our information, so add the following line to your connect.cs file: using System.Text; Create a form similar to the following figure, but feel free to add your own artistic touches. 47 Figure 7: Form for Visual Studio information However, be sure the two text boxes have PUBLIC modifiers. Ive named the Visual Studio text box VSINFO and the Environment text box ENVINFO. If you use different names, youll need to tweak the code in your Exec() method. Name the form that you create VSInfoForm; I recommend using a monospace font so the generated text will line up nicely. Exec() method In our Exec() method, we are going to gather information and build lines of text to transfer to the forms windows. 
We will start simply by grabbing some simple string properties from the _applicationObject variable, as seen in the following code: if(commandName == "VS_Info.Connect.VS_Info") { VSInfoForm theForm = new VSInfoForm(); StringBuilder sb = new StringBuilder(); // Create the form. // Get information specifically about Visual Studio. sb.AppendLine("Visual Studio " + _applicationObject.Edition + " edition"); sb.AppendLine(" Version " + _applicationObject.Version.ToString()); sb.AppendLine(""); sb.AppendLine("Full EXE Name " + _applicationObject.FullName); sb.AppendLine(" Parameters " + _applicationObject.CommandLineArguments.ToString()); sb.AppendLine(""); sb.AppendLine("Registry Root " + _applicationObject.RegistryRoot.ToString()); sb.AppendLine(""); 48 Getting options Visual Studio has an options menu to allow you to tweak the settings and behavior of the IDE. It can be found in the Tools menu under Options. Figure 8: Visual Studio options You can access any of the Visual Studio options by using the get_Properties method of the _applicationObject variable. The method takes two parameters, the category name and the page name. The example code that follows shows how to get a collection of the options from the Environment category, General page. // Gives you access to various IDE options (see Tools | Options menu). Properties theSection = _applicationObject.get_Properties("Environment", "General"); This method will return a properties collection object with the options from the indicated section. We can then iterate through the properties collection to find the individual options. Note: You need to know the exact names of the categories and pages; otherwise, youll encounter an error message. 49 Figure 9: Error result of mismatched option categories and pages If you plan on using the options, you can go to the following registry key to get the actual category names: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\<ver>\AutomationPropertie s. This registry location will show you the actual text fields to use with the get_Properties method. Figure 10: Text fields to use with Get_properties method For our code, we are going to get the list of folders that Visual Studio looks in when loading addins. // Gives you access to various IDE options (see Tools | Options menu). Properties theSection = _applicationObject.get_Properties("Environment", "AddinMacrosSecurity"); foreach (Property theProp in theSection) { // Add-in locations are handled specially. 50 if (theProp.Name == "AddinFileLocations") { object[] theArr = (object[])theProp.Value; for (int i = 0; i < theProp.Collection.Count;i++) { string s = (string)theArr[i]; sb.AppendLine(s); } } } To access the properties, we assign the results of get_Properties to a variable of type Properties. We then iterate through this collection, getting an individual property object for every entry within the category and page. The property contents will vary a bit, from simple name and value pairs to the slightly more complex code in the previous code sample to iterate the path list for add-ins. Getting add-ins installed Once we have gathered the settings and paths, we also want to report on the currently installed add-ins. The following code sample shows how to accomplish that: sb.AppendLine(""); foreach (AddIn theItem in _applicationObject.AddIns) { sb.AppendLine(theItem.Name.ToString()+" (" + theItem.Description.ToString() + ") "); sb.AppendLine(" "+theItem.SatelliteDllPath.ToString()); sb.AppendLine(""); } // And put the results into the forms edit box. 
theForm.VSINFO.Text = sb.ToString(); And the final step is to put the string weve just assembled into the forms Edit box. Environment information We can take a similar approach to provide users some information about the development environment Visual Studio is running in. sb.Clear(); // Get information about development machine. sb.AppendLine(" Machine Name: "+Environment.MachineName.ToString()); sb.AppendLine(" User: "+Environment.UserDomainName + "/" + Environment.UserName.ToString()); sb.AppendLine("Operating System: "+ OSVersionToFriendlyName(Environment.OSVersion.Version.Major, Environment.OSVersion.Version.Minor)); 51 sb.AppendLine(" "+Environment.OSVersion.ToString() + " with " + Environment.ProcessorCount.ToString() + " processors"); if (Environment.OSVersion.Platform == PlatformID.Win32Windows || Environment.OSVersion.Platform == PlatformID.Win32Windows) { sb.AppendLine("You are using an older, unsupported OS, you should consider upgrading to a later version"); } if (System.Windows.Forms.SystemInformation.MonitorCount > 1) { sb.AppendLine("Multiple monitors setup"); } theForm.ENVINFO.Text = sb.ToString(); Getting an OS-friendly name You can use the version information supplied in the environment class to convert the version into a friendlier name (such as Windows XP, Windows Vista, etc.) The OSVersionToFriendlyName() function handles that task. public string OSVersionToFriendlyName(int MajorVer,int MinorVer) { string OsName = "Unknown"; switch (MajorVer) { case 1 : { OsName="Windows 1.0"; break; } case 2 : { OsName ="Windows 2.0"; break; } case 3: { switch (MinorVer) { case 10: { OsName = "Windows NT 3.1"; break; } case 11: { OsName = "Windows for Workgroups 3.11"; break; } case 5: { OsName = "Windows NT Workstation 3.5"; break; } case 51: { OsName = "Windows NT Workstation 3.51"; break; } } } break; case 4: { switch (MinorVer) { case 0: { OsName = "Windows 95"; break; } case 1: { OsName = "Windows 98"; break; } case 90: { OsName = "Windows Me"; break; } } } break; case 5: 52 { switch (MinorVer) { case 0: { OsName = "Windows 2000 Professional"; break; } case 1: { OsName = "Windows XP"; break; } case 2: { OsName = "Windows XP Professional x64"; break; } } } break; case 6: { switch (MinorVer) { case 0: { OsName = "Windows Vista"; break; } case 1: { OsName = "Windows 7"; break; } } } break; default: break; } return OsName; } Displaying the form Once the information has been gathered and transferred to the form, we now simply display the form. theForm.ShowDialog(); handled = true; return; Final results Once you build and run the add-in, your screen should look something like this: 53 Figure 11: Completed information form You can adjust the code to include different button options, and even add a print button to print the contents of the Visual Studio setup to a printer or to a text file. 54 Chapter 8 Solution In this chapter, we are going to create an add-in module to explore the current solution open in the IDE, and present a summary screen of some key statistics about the solution. It provides a basic example of how to programmatically access various aspects of the solution. Solution Info Wizard Start your Solution Info add-in using the wizard and the following settings: Visual C# (or your preferred language). Application Host: Only Visual Studio, since it wouldnt make sense in a macro. Name/Description: SolutionInfo, Info, and Statistics about solution. Create UI Menu and do not load at start-up. Verify the settings at the Summary screen, and if they look okay, generate the code. 
Updating the menu icon To help distinguish our add-in a bit, we are going to change the default icon to the lightbulb symbol. You can add the constant to the top of your connect.cs class file: const int LIGHTBULB_ICON = 1000; And revise the OnConnection method to use this constant, rather than the hard-code 59 the wizard generates. Command command = commands.AddNamedCommand2(_addInInstance, "SolutionInfo", "Solution Info", "Info & Statistics about solution", true, LIGHTBULB_ICON, ref contextGUIDS, (int)vsCommandStatus.vsCommandStatusSupported + (int)vsCommandStatus.vsCommandStatusEnabled, (int)vsCommandStyle.vsCommandStylePictAndText, VsCommandControlType.vsCommandControlTypeButton); 55 Exec() method The Exec() method will collect a variety of solution information and present it to the user. But first, we need to make sure a solution is open. // Only makes sense if a solution is open. if (_applicationObject.Solution.IsOpen==false) { MessageBox.Show("No solution is open","ERROR"); handled = true; return; } Note: Be sure to add a reference to System.Windows.Forms to your project. Solution info We can now use the Solution object to explore some aspects of the currently open solution. We will use a similar approach of building the information in a string builder variable and then transferring the text to our GUI form. In the following code sample, we are taking some of the simple properties of the solution object and displaying them: StringBuilder sb = new StringBuilder(); Solution theSol = _applicationObject.Solution; sb.AppendLine(" Solution: " + Path.GetFileName(theSol.FullName.ToString())); sb.AppendLine(" Full Path: " + theSol.FullName.ToString()); sb.AppendLine("Start-up projects: "); foreach (String s in (Array)theSol.SolutionBuild.StartupProjects) { sb.AppendLine(" " + s); } Totaling project information We also want to report on the number of projects and code files the solution contains. To do so, we need to iterate through the projects associated with the solution. Project theProj; int NumVBprojects = 0; int NumVBmodules = 0; int NumCSprojects = 0; // Generic project item. 56 int NumCSmodules = 0; int NumOtherProjects = 0; // Iterate through the projects to determine number of each kind. for (int x = 1; x <= theSol.Count; x++) { theProj = theSol.Item(x); switch (theProj.Kind) { case PrjKind.prjKindVBProject : { NumVBprojects++; // Increment number of VB projects. NumVBmodules += theProj.ProjectItems.Count; break; } case PrjKind.prjKindCSharpProject : { NumCSprojects++; // Increment number of C# projects. NumCSmodules += theProj.ProjectItems.Count; break; } default: { NumOtherProjects++; break; } } } The Kind property of the project object is a GUID string, so we need a little help in determining the project type. To this end, we need to add a reference to VSLangProj into our add-in project. You might see a compiler error stating that prjKind cannot be embedded when you attempt to reference the project kinds. To solve this error, right-click on the VSLangProj reference and bring up the Properties dialog. Set the Embed Interop Types to false to prevent the types from being embedded in the assembly. 
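Keep in mind that ProjectItems.Count only reflects the top level of a project's tree; files nested under folders (or under a parent item, such as a designer file beneath a form) live in their own ProjectItems collections. If you want a deeper total, a small recursive helper along the following lines can walk the tree. This is only a sketch: the CountItems name is ours, and it assumes the standard EnvDTE ProjectItem.ProjectItems property.
private int CountItems(ProjectItems items)
{
    // Count every item at this level plus anything nested beneath it.
    int total = 0;
    if (items == null) return total;
    foreach (ProjectItem item in items)
    {
        total++;
        total += CountItems(item.ProjectItems); // Nested folders and child items, if any.
    }
    return total;
}
You could then call CountItems(theProj.ProjectItems) inside the project loop above instead of reading the Count property directly.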
Now that weve collected the information, we need to format it for display to our user, which the following code sample shows: sb.AppendLine("Visual Basic code"); sb.AppendLine(" " + NumVBprojects.ToString() + " projects containing " + NumVBmodules.ToString() + " modules"); sb.AppendLine(""); sb.AppendLine("Visual C# code"); sb.AppendLine(" " + NumCSprojects.ToString() + " projects containing " + NumCSmodules.ToString() + " modules"); sb.AppendLine(""); sb.AppendLine("Miscellaneous projects"); sb.AppendLine(" " + NumOtherProjects.ToString() + " other projects"); sb.AppendLine(""); 57 Project GUIDS are stored in a registry key, HKLM\Software\Microsoft\VisualStudio\<vers> Projects. You can copy the GUID to additional constants if you need to report on other project types during the Solution add-in. The following are some sample constants for VS projects: // Constants for additional project types. const string WEB_APPLICATION_PROJECT = "{E24C65DC-7377-472b-9ABA-BC803B73C61A}"; const string DEPLOYMENT_PROJECT = "{54435603-DBB4-11D2-8724-00A0C9A8B90C}"; Properties After weve gathered some of the solution info, we can iterate through the properties associated with the solution, and append them to our string builder variable. // Get properties. sb.AppendLine("Solutuion Properties"); Properties props = theSol.Properties; foreach (Property prop in props) { sb.Append(" " + prop.Name + " = "); try { sb.AppendLine(prop.Value.ToString()); } catch { sb.AppendLine("(Nothing)"); } } // Put built string onto form. SolInfoForm theForm = new SolInfoForm(); theForm.SOLINFO.Text = sb.ToString(); theForm.ShowDialog(); handled = true; return; Displaying the results After weve built our string builder variables, we need to create a form to display them to the end user. 58 Figure 12: Solution Information form You can download the project code or create your own form similar to the one in Figure 12. The add-in code assumes the text box control is named SOLINFO. Be sure to name it the same and mark its modifier as PUBLIC so the add-in can place the solution information on the form. Solution methods In addition to displaying information about the solution, you can also perform certain operations on the solution, much like the IDE does. Some of these include: Close This method closes the solution, with an optional Boolean parameter to save the solution first. FindProjectItem This method searches the project space, looking for an item by file name. If the item is found, the method returns a ProjectItem reference to the file you were searching for. SaveAs This method allows you to save the solution under a different file name. 59 SolutionBuild The solution object also provides information about the active configuration and the last build state. You can use the Solution Build object of the solution to perform solution-level operations, such as: Build Build the solution with an optional parameter to wait for the build to complete. You might decide to do automated solution builds in the background as part of your testing cycle. Clean This method cleans up extra files used by the solution, and features an option to wait for completion before continuing. Run This option runs the start-up project associated with the solution. BuildState This property reports the current state of the build and is an enumerated type from the following list: vsBuildStateDone: Build is complete. vsBuildStateInProgress: Solution currently being built. vsBuldStateNotStarted: Solution has not been built. 
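To illustrate these members, the following sketch kicks off a build of the whole solution, waits for it to finish, and reports the result. It assumes the standard EnvDTE SolutionBuild members listed above and omits any error handling.
SolutionBuild theBuild = _applicationObject.Solution.SolutionBuild;
theBuild.Build(true); // true = wait for the build to complete.
if (theBuild.BuildState == vsBuildState.vsBuildStateDone)
{
    // LastBuildInfo reports the number of projects that failed to build.
    MessageBox.Show("Build finished. Failed projects: " + theBuild.LastBuildInfo.ToString());
}
A build started this way behaves as if the user had chosen Build Solution from the menu, so the Output and Error List windows are updated as usual.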
60 Chapter 9 Projects In this chapter, we are going to create an add-in module which will generate an HTML document providing technical details about the project. It will save the HTML file to disk and open it in a browser window to be viewed from within Visual Studio. Project Info Wizard Start your Project Info add-in using the wizard and the following settings: Visual C# (or your preferred language). Application Host: Only Visual Studio. Name/Description: ProjectInfo and Generate HTML project documentation. Create UI Menu and do not load at start-up. Verify the settings at the Summary screen, and if they look okay, generate the code. Exec() method The Exec() method will collect the projects associated with the solution and present them to the user, but first, we need to make sure a solution is open. // Only makes sense if a solution is open. if (_applicationObject.Solution.IsOpen==false) { MessageBox.Show("No solution is open","ERROR"); handled = true; return; } Note: Be sure to add a reference to System.Windows.Forms to your project. You will also need to add VSLangProj and disable the Embed Interop types property. Once we know we have an open solution, we can start to build HTML documentation. // Find all project information. Solution theSol = _applicationObject.Solution; StringBuilder sb = new StringBuilder(); string shortName = Path.GetFileNameWithoutExtension(theSol.FullName); sb.AppendLine("<html>"); sb.AppendLine("<head>"); 61 sb.AppendLine("<title>" + shortName + "</title>"); AddJavaScript(sb); sb.AppendLine("</head>"); sb.AppendLine("<body>"); sb.AppendLine("<h1>" + shortName + " solution</h1>"); Notice the AddJavaScript() function in the middle, which we will use to write some simple JavaScript functionality in our webpage. We will add this function toward the end of the chapter. Tip: If you want to learn JavaScript quickly, be sure to download JavaScript Succinctly from the Syncfusion website. jQuery Succinctly is another excellent reference book. Getting each project Using the solution object as a starting point, we can write code that will loop through all projects and output some basic project information to the HTML file we are building. VSProject theVSProj = null; Project theProj; for (int xx=1;xx<=theSol.Projects.Count;xx++) { theProj = theSol.Projects.Item(xx); theVSProj = (VSProject)theSol.Projects.Item(xx).Object; sb.AppendLine("<h2 onclick='ToggleDiv(\"proj"+xx.ToString()+"\");'>" + theProj.Name+"</h2>"); sb.AppendLine("<div id='proj" + xx.ToString() + "' style='display:none;'>"); sb.AppendLine("<h3 onclick='ToggleDiv(\"info" + xx.ToString() + "\");'>INFO</h3>"); sb.AppendLine("<div id='info" + xx.ToString() + "'>"); sb.AppendLine("<p>Unique Name: " + theProj.UniqueName + "</br>"); sb.AppendLine("Full Path: " + theProj.FullName + "</br>"); // Report language if (theProj.Kind==VSLangProj.PrjKind.prjKindCSharpProject) { sb.AppendLine(" Language: C#</p>"); } if (theProj.Kind == VSLangProj.PrjKind.prjKindVBProject) { sb.AppendLine(" Language: Visual Basic</p>"); } sb.AppendLine("</div>"); We get the Item() project and create two project variables from it. The first, the Project object type, is a generic project reference, providing us with basic information such as name, path name, unique name, etc. The second variable, the VSProject object type, is a Visual Studio project, and contains additional properties and methods unique to Visual Studio. Project type The project type has general properties we can use to display the project info: 62 Name: Short name of project. 
FullName: Full name and path of project. Unique Name: Namespace and unique project name. VSProject type The VSProject type has more detailed information, specifically in Visual Studio. We can access this information to display all the references that the project uses. The following code sample places the references into a list structure in our HTML documentation. References This example code includes the reference name and description, version number, and whether the reference is an ActiveX control or an assembly. sb.AppendLine("<h3 onclick='ToggleDiv(\"ref" + xx.ToString() + "\");'>REFERENCES</h3>"); sb.AppendLine("<div id='ref" + xx.ToString() + "'>"); sb.AppendLine("<ul>"); foreach (Reference theRef in theVSProj.References) { string theVer = theRef.BuildNumber.ToString(); if (theVer == "0") {theVer = theRef.MajorVersion.ToString() + "." + theRef.MinorVersion.ToString(); } sb.Append("<li>" + theRef.Name + " (" + theVer + ")"); if (theRef.Description.Length > 0) { sb.Append(" -" + theRef.Description); } if (theRef.Type == prjReferenceType.prjReferenceTypeActiveX) { sb.Append(" [ActiveX] "); } sb.AppendLine("</li>"); } sb.AppendLine("</ul>"); sb.AppendLine("</div>"); Project Items Each project contains a list of all of the items that make up that project. These can be accessed via the Project Items property. The project items consist of the various source code files that make up the project. These can include source files, XML files, etc. If the file is a source code file, Visual Studio provides a code model which allows our code to access the namespace, classes, etc. within that file. We will use the code model to show the classes with a source file in this chapter, but cover the code model in more depth in Chapter 13. 63 sb.AppendLine("<h3 onclick='ToggleDiv(\"items" + xx.ToString() + "\");'>ITEMS</h3>"); sb.AppendLine("<div id='items" + xx.ToString() + "' style='display:none;'>"); sb.AppendLine("<ul>"); FileCodeModel theCM = null; foreach (ProjectItem theItem in theProj.ProjectItems) { sb.AppendLine("<li>" + theItem.Name); theCM = theItem.FileCodeModel; if (theCM != null) { sb.AppendLine("<ul>"); foreach (CodeElement theElt in theCM.CodeElements) { // List all the classes we find within the code file. if (theElt.Kind == vsCMElement.vsCMElementClass) { sb.AppendLine("<li>"+theElt.Name+"</li>"); } // If we find a namespace, there may be a class in there as well. if (theElt.Kind == vsCMElement.vsCMElementNamespace) { foreach (CodeElement theInnerElt in theElt.Children) { string theNameSpace = theElt.Name; if (theInnerElt.Kind == vsCMElement.vsCMElementClass) { sb.AppendLine("<li>" + theNameSpace+"/"+ theInnerElt.Name + "</li>"); } } } } sb.AppendLine("</ul>"); } sb.AppendLine("</li>"); } sb.AppendLine("</ul>"); sb.AppendLine("</div>"); For every project item, we first include the name in the list we are building. We also check to see if a code model exists for the current file (which one should for C# and VB modules). If we find a code model, we iterate through the code elements, looking for either class entities or classes within namespaces. For each class we encounter, we include the class name and optionally the namespace. This allows our project HTML display to drill down to the class level for a given project. A sample list of items that will be generated is shown in the following list. This is a C# project that has assembly information and two source files. 
One file has a single class called ComputePayrollAmount and the other source file has a class called Connect within a namespace called CodeModelSample: ITEMS AssemblyInfo.cs 64 Class1.cs o ComputePayrollAmount Connect.cs o CodeModelSample/Connect If your add-in is going to do any type of source code manipulation, be sure to explore the code model object described in Chapter 13.
Adding the JavaScript The following function adds the JavaScript to the HTML header to allow us to toggle various project elements. private void AddJavaScript(StringBuilder sb) { sb.AppendLine("<script type='text/javascript'>"); sb.AppendLine("function ToggleDiv(theID) "); sb.AppendLine("{"); sb.AppendLine("var e = document.getElementById(theID);"); sb.AppendLine("if(e.style.display == 'block')"); sb.AppendLine(" e.style.display = 'none';"); sb.AppendLine("else"); sb.AppendLine(" e.style.display = 'block';"); sb.AppendLine("}"); sb.AppendLine("</script>"); }
Showing the Results Once the HTML file is generated, the following code saves it and then displays it. string theFile = Path.ChangeExtension(theSol.FullName, "html"); System.IO.File.WriteAllText(theFile, sb.ToString()); System.Diagnostics.Process.Start("file://"+theFile);
Styling the HTML The generated HTML is rather plain looking, but you can add some style sheet commands to improve the look and feel of the document. The following code is a function to add style commands to the HTML document. 65 private void AddStyles(StringBuilder sb) { sb.AppendLine("h2 {"); sb.AppendLine("font: bold italic 2em/1em \"Times New Roman\", \"MS Serif\", \"New York\", serif;"); sb.AppendLine("margin: 0;"); sb.AppendLine("padding: 0;"); sb.AppendLine("border-top: solid #e7ce00 medium;"); sb.AppendLine("border-bottom: dotted #e7ce00 thin;"); sb.AppendLine("width: 600px;"); sb.AppendLine("color: #e7ce00;"); sb.AppendLine("}"); } In you want to use such a function, insert the function call immediately after the AddJavaScript() function call. 66
Chapter 10 IDE Windows In the prior chapters, weve looked at the Visual Studio environment and the solution and projects that a user can edit. In this chapter, we are going to explore the open windows and files that a user can be working with in Visual Studio at any given time.
Windows When you open Visual Studio, even without a solution open, a number of windows are created by the application for helping with the development tasks. These are referred to as your tool windows. When you edit a source code file or a form, these are open in an editor window, and referred to as document windows. You can do some interaction with these windows, such as resizing them, moving them, activating and closing them, etc. Tool windows When you open Visual Studio, the following tool windows are generally open even before you open a solution. Solution Explorer Start Page Properties Find and Replace Object Browser Class View You can find the associated window by stepping through the Windows Collection property on the application object. The window object will have basic window manipulation properties, but you can cast it to a specific window type, such as the Solution Explorer, the Error List, etc. We will cover some of the tool windows in more detail in Chapter 14. Document windows Once you open a solution in Visual Studio, every open source-file will be placed into a document window. The type of window varies depending on the type of file being edited. The Code Window holds source code files, such as VB or C# code. HTML or ASPX files have another editor, visual design tools, etc. 67 Window object The Window object contains properties and methods to do basic manipulation of the window itself, not its contents. Properties You can manipulate the windows location and behavior using the coordinates and some Boolean options: AutoHides(Boolean): Sets whether or not the window can be hidden. Caption(string): Caption on window title bar. Document(object): Associated document object if document window. Height and Width (integer): Window size in pixels. Left and Top (integer): Window location from edge of container. Visible (Boolean): Sets whether or not the window is currently displayed to the user. Methods You can manipulate the window using a few methods: Activate: Give the window focus. Close: Close the document with the option to save it. ActiveWindow The ActiveWindow property returns a reference to the window object that currently has focus in Visual Studio. Typically, this will be a document window containing the code currently being edited, but can be any window the user clicked on. If no solution is open, the ActiveWindow will return the main window. You can test the Kind property to see if a document or tool window is currently active. MainWindow The main window is a special window distinct from other windows in the environment. It is typically maximized and the docking window for all the other tool and document windows the user might have open. The Boolean properties of AutoHides and IsFloating are not defined for the main window. 68 The main windows Kind property will be Tool and its Type property will be vsWindowTypeMainWindow. Windows The windows collection contains a list of all windows in the IDE (some of which may not be visible). You can iterate through this list or search for a particular toll window using the vsWindowsKind constants. 
For example, the following code sample gets a few selected windows and saves them to variables for easier reference:
DTE2 theApp = _applicationObject;
Window theSolExplorer = theApp.Windows.Item(Constants.vsWindowKindSolutionExplorer);
Window theProperties = theApp.Windows.Item(Constants.vsWindowKindProperties);
Window theCallStack = theApp.Windows.Item(Constants.vsWindowKindCallStack);
You can also walk through the windows collection, or search for all document windows as the following code sample illustrates:
foreach (Window theWind in _applicationObject.Windows)
{
    if (theWind.Kind == "Document")
    {
        MessageBox.Show(theWind.Caption);
    }
}
This can be useful if you want your add-in to provide an option to scan all open documents rather than the entire project. Window Kind constants The following WindowKind constants are available to find particular windows in the collection: 69 vsWindowKindCallStack: The Call Stack window. vsWindowKindClassView: The Class View window. vsWindowKindCommandWindow: The Command window. vsWindowKindFindReplace: The Find Replace dialog. vsWindowKindFindResults1: The Find Results 1 window. vsWindowKindMainWindow: The Visual Studio IDE window. vsWindowKindObjectBrowser: The Object Browser window. vsWindowKindOutput: The Output window. vsWindowKindProperties: The Properties window. vsWindowKindResourceView: The Resource Editor. vsWindowKindServerExplorer: The Server Explorer. vsWindowKindSolutionExplorer: The Solution Explorer. vsWindowKindTaskList: The Task List window. vsWindowKindToolbox: The Toolbox. vsWindowKindWatch: The Watch window. You can use these constants to find a particular window, or test the ObjectKind property of a window against the constant to know the window you are interacting with, as follows:
if (theWind.ObjectKind == EnvDTE.Constants.vsWindowKindCallStack)
{
    // Do something with call stack.
}
Tool windows Several of the tool windows can be cast to object types that expose properties and methods specific to that particular tool. These include:
CommandWindow theCMD = _applicationObject.ToolWindows.CommandWindow;
ErrorList theErrs = _applicationObject.ToolWindows.ErrorList;
UIHierarchy theSolExplore = _applicationObject.ToolWindows.SolutionExplorer;
OutputWindow theOutput = _applicationObject.ToolWindows.OutputWindow;
TaskList theTaskList = _applicationObject.ToolWindows.TaskList;
ToolBox theToolBox = _applicationObject.ToolWindows.ToolBox;
The tool window types are explored in more detail in Chapter 14. Document windows In addition to the supporting tool windows, there are a number of different editors that might be used when a source file is open. The most common is the code window (which we discuss in Chapter 12 and Chapter 13); however, you can use the window object to gather some information about the source code being edited. 70 When you obtain a window object, either through the ActiveWindow property or by searching the Windows collection, a Kind property value of "Document" tells you it is a source file being edited in the IDE. The window will also have an Object property associated with it, and you can test the type of this property to determine what kind of source window is being looked at. For example, the following code can test whether Visual Studio is looking at some sort of HTML code:
Window ActiveWin = _applicationObject.ActiveWindow; // Grab the active window.
if (ActiveWin.Object is HTMLWindow)
{
    HTMLWindow theHTML = (HTMLWindow)ActiveWin.Object; // Cast as an HTML window.
} You can use the Visual Basic Assemblies, which are automatically included in VB projects but need to be added manually to C# projects, to determine the type of a window. The following code shows the type of window, which you can then cast the object to: Microsoft.VisualBasic.Information.TypeName(ActiveWin.Object) If you want to have your add-in manipulate different kinds of windows, this can help you find the window type to cast the object property to. Is AJAX being used? For a simple example of how to use the window object, we are going to write an add-in that will determine whether the HTML or ASPX code currently open in the active window appears to be using AJAX. AJAX requires a script manager object and at least one Update Panel. This wizard will look at the HTML code in the designer window and see if both elements are found. We are still going to use the wizard to create our basic add-in, so fire up the wizard with the following: Visual C# (or your preferred language). Application Host: Only Visual Studio. Name/Description: IsAjaxEnabled and Check for HTML and AJAX. Create UI Menu and do not load at start-up. Although this is a simple wizard, it will show the basics of how to parse HTML code being edited in Visual Studio. 71 Getting the active window In our Exec() method of the code, we need to get the active window and make sure it is not a tool window. The following code does that: if(commandName == "IsAjaxEnabled.Connect.IsAjaxEnabled") { handled = true; Window ActiveWin = _applicationObject.ActiveWindow; // Grab the active window. if (ActiveWin.Kind != "Document") { // Tell user only wants on document windows. MessageBox.Show("Please select an HTML or ASPX page to check...."); return; } Note: Dont forget to add the reference to System.Windows.Forms for the MesageBox(). Making sure it is HTML code Once we know it is a document window, we want to confirm it is a HTML window and if so, typecast the window object to HTML Window. // And only for HTML modules (ASPX, HTML, etc.). if (ActiveWin.Object is HTMLWindow) { HTMLWindow theHTML = (HTMLWindow)ActiveWin.Object; Boolean FoundSM = false; Boolean FoundUP = false; // Cast as an HTML window. We are also setting up our flags to check the AJAX code samples we are searching for. Parsing the HTML code We can write our own HTML code parser if we are feeling particularly ambitious, but to keep things simple, we will use Microsofts parser instead. The HTML Window object we created previously has a property called CurrentTab, which can be: vsHTMLTabsDesign vsHTMLTabsSource 72 If the HTML window is on the design page, the CurrentTabObject property will contain a reference to the HTML document object from the page. So we are going to save the user's current mode, and if need be, switch to the design tab so we can grab that HTML document for our parsing purposes. vsHTMLTabs PriorMode = theHTML.CurrentTab; // See if in Design mode, and if not, switch to it. if (theHTML.CurrentTab != vsHTMLTabs.vsHTMLTabsDesign) { theHTML.CurrentTab = vsHTMLTabs.vsHTMLTabsDesign; } if (theHTML.CurrentTab == vsHTMLTabs.vsHTMLTabsDesign) { // Get an HTML document from the current object in the design window. IHTMLDocument2 theHTMLDoc = (IHTMLDocument2)theHTML.CurrentTabObject; Note: Youll need to add a reference to Microsoft.mshtml to create the HTML document object. Once you have the HTML document object, you can use the object to walk through the entire document model. 
For our code, we are simply searching each element to see if we find a script manager and an update panel. foreach (IHTMLElement element in theHTMLDoc.all) { try { if (element.outerHTML.ToUpper().Contains("ASP:SCRIPTMANAGER")) { FoundSM = true; } if (element.outerHTML.ToUpper().Contains("ASP:UPDATEPANEL")) { FoundUP = true; } } catch { } } There is a lot of functionality built into the HTML document object. You could simply set the BGColor property and the IDE will add the appropriate element to the document and mark the source HTML file as edited. The scope of the HTML document is beyond this book, but can be a very useful starting point if you want to manipulate HTML or ASPX code opened in Visual Studio. Showing our findings Once weve made our loop through the HTML elements, we want to return the designer back to its original mode, and then report our findings. 73 if (theHTML.CurrentTab != PriorMode) { theHTML.CurrentTab = PriorMode; } // Evaluate flags to see if we are using AJAX. if (FoundSM && FoundUP) { MessageBox.Show("AJAX appears to be in use on this form"); } else { if (FoundSM) { MessageBox.Show("Script Manager found, but no update panels..."); } if (FoundUP) { MessageBox.Show("Update Panel(s) found, but missing Script Manager..."); } } if (FoundSM == false && FoundUP == false) { MessageBox.Show("AJAX does not appears to be used on this form"); } Summary While this add-in module performs a very simple task, it does provide an example of how to easily parse HTML code. Microsoft .NET provides a nice collection of tools to manipulate HTML, and plugging that into the windows of the add-in code should give you a good starting point to develop tools for your HTML code. 74 Chapter 11 Documents Each source file being edited is represented in Visual Studio as a document object, which has the necessary properties to access the file content, save it, etc. In this chapter, we will look at how to use the document object. Getting the document The document object can be obtained from any source window or by asking Visual Studio for the active document. Any Document type window will include a reference to the document object associated with it. The following code sample illustrates a few ways to get the document object. Document ActiveDoc = _applicationObject.ActiveDocument; foreach (Document CurrentDoc in _applicationObject.Documents) { } foreach(Window CurWindow in _applicationObject.Windows) { if (CurWindow.Kind == "Document") { Document CurDoc = CurWindow.Document; } } Document object The document object has some basic properties to allow you to determine the file and path name, which windows the document is loaded in, which project item it is associated with, etc.: 75 ActiveWindow: Window the document object is actively displayed in. FullName: Path and file name of document. Language: String containing language, e.g., CSHARP, CSS, XML, VB, etc. Path: Folder document is located in. ProjectItem: Associated project item the document is from. Saved: Has the document been changed since last opened? Selection: Selected text object associated with the document. Windows: All windows the document is displayed in. With these basic properties, you can navigate from the document to windows or to the project item within the solution. You could back up a copy of the document before you make any changes, etc. In addition, there are a few methods you can use with the document object to save the file, activate its window, close the document, etc.: Activate(): Move focus to this document. 
Close(): Close with the option to save. Redo(): Redo the last operation. Save(): Save the document with an optional "Save As" file name. Undo(): Undo the last operation. You can use the basic properties and methods to perform operations on the document as a whole. In later chapters, we will explore getting the code and text from the document and manipulating it as well. Text document object Each document object has an object method which provides access to the text content of the document and allows you to do some basic edit operations in the document. You can get the associated text document with the following code sample. Document theDoc = _applicationObject.ActiveDocument; TextDocument theTextDoc = (TextDocument)theDoc.Object("TextDocument"); The text document object has two properties of interest to help editing the text in the document window. The start and end points are objects representing the first and last points in the file. You can create an edit point from either of these objects if you want to do some basic editing. We cover how to edit with edit points in the next chapter. An edit point is the programmatic equivalent of the users cursor location while editing. Edit operations take place from the edit point in the document. The text document has some basic manipulation methods for the document. These include: ClearBookMarks(): Clear any bookmarks from the documents margin. CreateEditPoint(): Position the programmatic cursor for editing. MarkText(): Search for a pattern and mark lines containing the pattern. ReplacePattern(): Search and replace text in the document. These methods allow you to easily manipulate the text content. 76 Converting C# to VB There is a great number of websites that will convert your C# code to VB.NET, but lets assume we wanted to tackle such a beast ourselves (which is way beyond the scope of this book). The following code sample shows how we could use the MarkText and ReplacePattern methods to get started. Document ActiveDoc = _applicationObject.ActiveDocument; TextDocument TextDoc = (TextDocument)ActiveDoc.Object("TextDocument"); if (TextDoc != null) { TextDoc.MarkText("public void"); // Bookmark all the lines we are going to tweak. TextDoc.ReplacePattern("public void", "sub"); } In this simple example, weve searched for all C# void methods and converted them to sub calls (the VB equivalent). Weve also marked the line weve changed so the user can navigate to the bookmarked lines to review the code changes. Summary The document and text document objects can perform some basic file manipulations and global text updates, but in the next two chapters we explore code modification in more detail, including the built-in Visual Studio code-parser class, the code model. 77 Chapter 12 Code Window One of the common features that add-in modules offer is the ability to manipulate the code in windows. In this chapter, we will create an add-in to interact with the code in an open document window. We will explore how to pull code from the window, manipulate it, and write it back. Simple code manipulation We are still going to use the wizard to create our basic add-in, but rather than attach our code to the Tools menu, this time we will attach it to the context menu of the code window. So lets start up the wizard with the following: Visual C# (or your preferred language). Application Host: Only Visual Studio. Name/Description: CodeHelp and Send sample of code to a guru Create UI Menu and do not load at start-up. 
What this add-in will do is allow you to send an email with a sample of code to your local guru (hopefully you have one) and ask him or her what the code is doing. The add-in will then leave a [TODO] comment indicating that the code was sent to a guru and we need to document what is being accomplished by this code when the guru replies. Attaching to the code window Since our add-in module is going to attach to the code window instead of the main menu, we need to code our connection logic slightly differently.
object []contextGUIDS = new object[] { };
Commands2 commands = (Commands2)_applicationObject.Commands; // Create the command object.
Command cmd = commands.AddNamedCommand2(_addInInstance, "CodeHelp", "CodeHelp", "Send sample to Guru..", true, 59, ref contextGUIDS, (int)vsCommandStatus.vsCommandStatusSupported + (int)vsCommandStatus.vsCommandStatusEnabled, (int)vsCommandStyle.vsCommandStylePictAndText, vsCommandControlType.vsCommandControlTypeButton);
// Create a command bar on the code window.
CommandBar CmdBar = ((CommandBars)_applicationObject.CommandBars)["Code Window"];
// Add a command to the Code Window's shortcut menu.
CommandBarControl cmdBarCtl = (CommandBarControl)cmd.AddControl(CmdBar, CmdBar.Controls.Count + 1);
cmdBarCtl.Caption = "Send sample to Guru...";
78 We are searching for the Code Window (precise spelling is important) and adding a pop-up menu control to it, rather than to the usual menu. Responding to the click When the user selects the menu item, the Exec() method will be called to process the Send code to the Guru menu option. Because the code will only be called from a code window (our menu item is attached to its context menu), we can assume an active document will always be available. Getting selected code First, we grab the selected text and confirm that some text was actually selected:
// See if any text is selected.
Document theDoc = _applicationObject.ActiveDocument;
TextSelection sel = (TextSelection)theDoc.Selection;
if (sel.Text == "")
{
    MessageBox.Show("Please select some text...", "Error");
    handled = true;
    return;
}
We can build a mail message and send it to our guru.
// Let's make an e-mail for the guru.
string subjectLine = "Having trouble understanding this code";
string msgbody = "<p>Can you review it and tell me what the #$@* it is doing<p>" + Environment.NewLine + Environment.NewLine + "<pre>"+sel.Text +"</pre>"+ Environment.NewLine + "<p>Thanks...";
MailMessage mail = new MailMessage();
mail.To.Add(GuruEmail);
mail.From = new MailAddress("xxxx@"+fromDomain);
mail.Subject = subjectLine;
mail.Body = msgbody;
mail.IsBodyHtml = true;
SmtpClient smtp = new SmtpClient();
smtp.Host = "smtp.gmail.com"; //Or your SMTP server address.
smtp.Credentials = new System.Net.NetworkCredential ("xxxxxxxx <EMAIL>", "password ");
smtp.EnableSsl = true;
bool MailSent = false;
try
{
    Cursor.Current = Cursors.WaitCursor; 79
    smtp.Send(mail);
    MailSent = true;
}
catch
{
    MessageBox.Show("Error sending mail", "ERROR", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
Cursor.Current = Cursors.Default;
Note: You will need to add a reference to System.Net.Mail and define a string constant GuruEmail and a string constant fromDomain with your e-mail domain. Tweak the code fragment Now that we've sent our code to the guru for his or her comments and review, we want to mark the code with a to-do comment so we can add documentation when the guru replies to our email. // Now we add the comment back to the code.
string commentChar = "//"; string theTodoText = _applicationObject.ToolWindows.TaskList.DefaultCommentToken.ToString(); string revisedCode = commentChar + " " + theTodoText + " GURU - sent to guru on " + DateTime.Now.ToString() + Environment.NewLine + commentChar + " < Guru answer here >" + Environment.NewLine + sel.Text + commentChar + " ********" + Environment.NewLine; if (MailSent == false) { revisedCode = commentChar + " " + theTodoText + " Ask guru about this"+ Environment.NewLine+ sel.Text + Environment.NewLine+ commentChar + " ********" + Environment.NewLine; } In this example, we are adding the comment and the to-do token that the IDE currently uses for tasks. This will allow our comment to be found easily using the task list window within Visual Studio. Putting the code back And now that weve assembled our revised code string, we need to update the editor with the revision. This is accomplished by copying the revised text to the Windows clipboard as text, and then pasting that text back to the editor. 80 DataObject theObj = new System.Windows.Forms.DataObject(); try { theObj.SetData(System.Windows.Forms.DataFormats.Text, revisedCode); System.Windows.Forms.Clipboard.SetDataObject(theObj); sel.Paste(); } catch { MessageBox.Show("couldn't paste comment, sorry"); } This is just a simple example of basic interaction with the contents of the code window. We dont perform any analysis of the content. We simply grab the code, tweak it somehow, and then put it back. However, this is only part of the capabilities built into Visual Studio. Moving the code around You do not have to put the samples or code revisions back where they came from. You can add code to the top or beginning of the file, or even at a random spot in the middle (although users are not likely to appreciate that). Text Document In order to manipulate the document, you need access to the TextDocument object associated with the current document. This is accomplished with the following command: Document theDoc = _applicationObject.ActiveDocument; TextDocument theText = (TextDocument)theDoc.Object(); The TextDocument object provides two point objects (StartPoint and EndPoint) referring to the expected locations in the body of text. In addition, there are some simple methods to mark and replace text within the document that matches a particular pattern. For example, the following code sample will replace all occurrences of double with float in the document associated with the text document. theText.ReplacePattern("double", "float"); 81 You can also add FindOptions as the third parameter to control case matching, where to start searching the document, whether to match the whole word, etc. If you need to perform simple replacements on the entire document, the Text Document object provides the methods to do just that. Edit point The Text Document object also allows you to create an EditPoint object, which can be used to position your code anywhere within the document and make edits, deletions, insertions, etc. To create the edit point variable, use the following command: EditPoint thePoint = theText.CreateEditPoint(); This allows a much finer degree of control over text manipulation. An edit point is the location in the file where you want to manipulate text. If you create an EditPoint object with no parameters, it is the same as the starting point from the text document. You can also specify a point as a parameter, so you could create an edit point based on the last location in the document by using EndPoint. 
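For example, this short sketch (an illustration, not part of the chapter's add-in) creates an edit point at the document's EndPoint and appends a comment line at the bottom of the file:

// Create the edit point from EndPoint so the edit happens at the end of the document.
EditPoint endPoint = theText.CreateEditPoint(theText.EndPoint);
endPoint.Insert(Environment.NewLine + "// Reviewed by the CodeHelp add-in" + Environment.NewLine);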
Once you have the EditPoint object, you have a variety of navigation functions to move through the text, such as: CharLeft(n): Move to the left n characters. CharRight(n): Move to the right n characters. EndOfLine(): Move to the end of the current line. LineDown(n): Move down n lines. LineUp(n): Move up n lines. WordLeft(n): Move to the left n words. WordRight(n): Move to the right n words. When the point is positioned, you can insert text into the document. You can also use the Get methods to extract text from the document. For example, the following sample looks for lines beginning with using and adds a comment indicating the line needs to be converted to imports when converting from C# to Visual Basic. EditPoint thePoint = theText.CreateEditPoint(); for (int x = 1; x < theText.EndPoint.Line; x++) { if (thePoint.GetText(5).ToString() == "using") { thePoint.EndOfLine(); thePoint.Insert(" // convert to imports statement"); } thePoint.LineDown(1); 82 } More complex code manipulation While the methods and examples in this chapter allow simple text manipulations, they do not provide much in the way of parsing your source code. Fortunately, Visual Studio has a built-in class system that allows us to analyze code windows much more efficiently, without writing our own code parsers or worrying about VB.NET or C#. This system is the code model, and is described in the next chapter. 83 Chapter 13 Code Model The code model is a language-independent view of a source code file. You can use this view to extract code elements from the classes found within a namespace down to the variables and methods within a class. Using the code model In order to illustrate how the code model system works, lets build a very simple class source file. using System; public class ComputePayrollAmount { public string PersonName; public double payRate; private double TaxRate = 0.28F; // Internally used in class. public void GetWeeklyPay() { double TotalPay = 40.0 * payRate; string checkLine = WriteCheck(PersonName, TotalPay); } private string WriteCheck(string forWhom,double Amount) { string result = "Pay to the order of " + forWhom + " $" + Amount.ToString(); return result; } } You can access the code model for the active document using the following code sample: FileCodeModel fileCM = dte.ActiveDocument.ProjectItem.FileCodeModel; Once you have the code model available, one of the properties is a variable called code elements, which contains the code pieces at the current level. In our previous example, two code elements are returned: Import element. Class element. The import element occurs once for each import or using statement in the file. Whether you are using VB, C++, or C#, each statement to import a module is included. 84 The second code element is the class statement and will contain the full name of the class, as well as an object property called Children. This object contains the code elements within the class structure. In this case, there will be five elements. The three variables and two methods are included in the Children object. The fifth child of the class element refers to the WriteCheck() method, and it has two children elements, representing the parameters in the function. Through recursive calls, you can easily trace a source file from its namespace, to classes within the file, to methods within the classes, to parameters of the methods. The code element object can represent variables, methods, parameters, namespaces, etc. 
It has a kind property to know what you are working with, and point properties to keep track of location with the source file. Get the code model of a source file The code model is associated with a project item (see Chapter 9). Every document object that is part of a project has a ProjecItem property associated with it. Once you have a reference to the document, you can get the code model by using the following code. In this sample, we are getting the CodeModel property of the currently active document: FileCodeModel fileCM = dte.ActiveDocument.ProjectItem.FileCodeModel; The FileCodeModel class has two properties of interest; one is the language property that contains a GUID string indicating the type of language, C#, C++, or VB. You can use the following constants to determine what language the module is written in: vsCMLanguageCSharp vsCMLanguageVC vsCMLanguageVB The other property is the collection of code elements. Each code element in this list can have a child collection of additional associated code elements, and can contain children elements as deep as the source code structure goes. Each item in the collection represents a single code element from the source file. Code element properties The key properties for working with individual code elements are listed in the following table: 85 Property Children Data Type Collection Description Collection of nested code elements, if any. FullName String Fully qualified (class and variable name) name of element. Property Kind Data Type Enumerated Description The type of element, such as: vsCMElementVariable, vsCMElementClass, etc. Name String Name of the code element. StartPoint TextPoint An object pointing to the beginning of the element. EndPoint TextPoint An object referring to the end of the code element. With these key properties, you can determine the type of code element you are working with, and you can extract it or write it back to the source code window using the point properties. In addition to providing code elements, the file code model also allows you to add variable elements (classes, variables, namespaces, etc.), get a code element at a particular point in the code, and remove a code element from the source file. Putting it all together For our example project, we are going to create an add-in that will read a source code file and document the class details, as well as all the public variables and methods within the class. To look at what the code will do, consider the following class example, before and after: using System; public class ComputePayrollAmount { public string PersonName; public double payRate; enum Payclass { FullTime = 1, PartTime = 2, Consultants = 4 }; private string _firstName; public string FirstName { get { return _firstName; } set { _firstName = value; } } private double TaxRate = 0.28F; // Internally used in class. private string _LastName; public string LastName { 86 get { return _LastName; } set { _LastName = value; } } enum MaritalStatus { Single = 1, Married = 2, Seperated = 4 }; public void GetWeeklyPay() { double TotalPay = 40.0 * payRate; string checkLine = WriteCheck(PersonName, TotalPay); } public ComputePayrollAmount() { } private string WriteCheck(string forWhom,double Amount) { string result = "Pay to the order of " + forWhom + " $" + Amount.ToString(); return result; } } This is often how classes built over time and by multiple developers end up: public variables, methods, properties, etc., all intermixed in the source code file. 
After running our add-in, the following comment code is generated and added to the top of the class definition:

// [====================================================================================
// Class: ComputePayrollAmount
// Author: jbooth
// Date: 10/4/2012
//
// Class Information:
//   Inherits: Object
//
// Public Interface:
//
// Variables:  PersonName (string)
//             payRate (double) [40.0F]
// Properties: FirstName (string)
//             LastName (string)
// Methods:    GetWeeklyPay
//             WriteCheck(forWhom:string,Amount:double) ==> string
// =====================================================================================]

You can see in this example that the code model distinguished between public and private variables and methods, and also discerned enough to not include the constructor in the list of public methods. For simplicity's sake, our code assumes a single class in a source file, and will only process the first class it finds. However, you can use this concept as a starting point for enforcing coding standards, making code more readable, etc.

Note: The code will overwrite the existing comment if you run it multiple times.

Class documenter

We are still going to use the wizard to create our basic add-in, and then attach our module to the context menu of the code window. So let's start up the wizard with the following:

Visual C# (or your preferred language).
Application Host: Only Visual Studio.
Name/Description: DocumentClass and Document a class file.
Create UI Menu and do not load at start-up.

Attaching to the code editor window

The first change we want to make is to move the menu option to appear on the Code window context menu rather than the Tools menu. Our connect code should be changed to the following:

if(connectMode == ext_ConnectMode.ext_cm_UISetup)
{
    // Create the command object.
    object[] contextGUIDS = new object[] { };
    Commands2 commands = (Commands2)_applicationObject.Commands;
    try
    {
        Command cmd = commands.AddNamedCommand2(_addInInstance, "DocumentClass",
            "Class Documenter", "Document your class module", true, 59, ref contextGUIDS,
            (int)vsCommandStatus.vsCommandStatusSupported +
            (int)vsCommandStatus.vsCommandStatusEnabled,
            (int)vsCommandStyle.vsCommandStylePictAndText,
            vsCommandControlType.vsCommandControlTypeButton);
        // Create a command bar on the code window.
        CommandBar CmdBar = ((CommandBars)_applicationObject.CommandBars)["Code Window"];
        // Add a command to the Code Window's shortcut menu.
        CommandBarControl cmdBarCtl = (CommandBarControl)cmd.AddControl(CmdBar,
            CmdBar.Controls.Count + 1);
        cmdBarCtl.Caption = "Class Doc";
    }
    catch (System.ArgumentException)
    {
    }
}

Getting the code model

We now need to add our code to the Exec() function to get the code model and use its parsing ability to create a documentation header.

if(commandName == "DocumentClass.Connect.DocumentClass")
{
    FileCodeModel2 fileCM = null;
    // Make sure there is an open source-code file.
    try
    {
        fileCM = (FileCodeModel2)_applicationObject.ActiveDocument.
            ProjectItem.FileCodeModel;
    }
    catch
    {
        MessageBox.Show("No active source file is open...");
        handled = true;
        return;
    }
    // Some files (such as XML) will not have an associated code model.
    if (fileCM == null)
    {
        MessageBox.Show("Not a valid programming language source file...");
        handled = true;
        return;
    }

Assuming we've gotten a code model (a valid source file), we need to make sure it is a language we can work with, in this case, either VB or C#.
string CommentChar = ""; switch (fileCM.Language) { case CodeModelLanguageConstants.vsCMLanguageCSharp: { CommentChar = "//"; break; } case CodeModelLanguageConstants.vsCMLanguageVB: { CommentChar = "'"; break; } } if (CommentChar == "") { MessageBox.Show("Only works with VB or C# class modules"); handled = true; return; } 89 Finding the class elements Once we know weve got a valid code module, we need to find the code element to locate the first class. Typically, a class declaration is either in the first level, or one level down (child level of the Namespace level). The following code will search two levels deep for a class code element: // Scan the file, looking for a class construct may require two passes. CodeElements elts = fileCM.CodeElements; CodeElement elt = null; int xClassElt = 0; int xNameSpace = 0; int nLevels = 0; while (xClassElt == 0) { nLevels++; for (int i = 1; i <= elts.Count; i++) { elt = elts.Item(i); if (elt.Kind == vsCMElement.vsCMElementClass) { xClassElt = i; break; } if (elt.Kind == vsCMElement.vsCMElementNamespace) { xNameSpace = i; break; } } // Found namespace and no class, let's work through the namespace looking for a class. if (xNameSpace != 0 && xClassElt == 0) { elts = elts.Item(xNameSpace).Children; } // Don't search deeper than three levels. if (nLevels > 2) { break; } } // If no class found, exit. if (xClassElt == 0) { MessageBox.Show("No class module found in source file..."); handled = true; return; } Once weve found a class element, we grab the child elements (i.e. the variables, methods, etc. that we want to document.) // Now we are ready to document our class. CodeClass theclass = (CodeClass)elts.Item(xClassElt); 90 object[] interfaces = {}; object[] bases = {}; Notice that weve cast the generic CodeElement to the more specific CodeClass object, which gives us access to the particular class details. This type casting is necessary to pull additional information from the various code elements, rather than just relying on the properties in the generic CodeElement class. Building our header We now create a string builder object and extract some information from our class variable to display in the documentation header text. // Some initial header info. StringBuilder sb = new StringBuilder(); sb.AppendLine(CommentChar+"[========================================================="); if (theclass.Namespace != null) { sb.AppendLine(CommentChar + " Namespace: " + theclass.Namespace.Name.ToString()); } if (theclass.IsAbstract) { sb.Append( CommentChar+" Abstract "); } else { sb.Append( CommentChar+" "); } sb.AppendLine("Class: "+theclass.Name); sb.AppendLine( CommentChar+" Author: "+Environment.UserName); sb.AppendLine( CommentChar+" Date: "+DateTime.Now.ToShortDateString()); sb.AppendLine( CommentChar+" "); sb.AppendLine( CommentChar+" Class Information:"); // Information about the class. string docCategory = " Inherits:"; foreach (CodeElement theBase in theclass.Bases) { sb.AppendLine(CommentChar + docCategory + " " + theBase.Name); docCategory = " "; } docCategory = " Implements:"; foreach (CodeElement theImpl in theclass.ImplementedInterfaces) { sb.AppendLine(CommentChar + docCategory + " " + theImpl.Name); docCategory = " "; } sb.AppendLine( CommentChar+" "); sb.AppendLine( CommentChar+" Public Interface:"); sb.AppendLine( CommentChar+" "); 91 You can look to the CodeClass object for more information you might want to display. 
Our next step is to collect the code elements making up the class and store them in a collection so that we can display them grouped by element type later in the module. Organizing the code elements This next section of the code loops through the class elements and organizes them by type into queue structures. Note that we are testing the generic code elements type and then storing the appropriate typed element type (i.e. CodeVariable, CodeEnum, etc.) into the queue structure for the particular element kind. elts = theclass.Children; // Build queues to hold various elements. Queue<CodeEnum> EnumStack = new Queue<CodeEnum>(); Queue<CodeVariable> VarStack = new Queue<CodeVariable>(); Queue<CodeProperty> PropStack = new Queue<CodeProperty>(); Queue<CodeFunction> FuncStack = new Queue<CodeFunction>(); foreach (CodeElement oneElt in elts) { // Get the code element, and determine its type. switch (oneElt.Kind) { case vsCMElement.vsCMElementEnum : { EnumStack.Enqueue((CodeEnum)oneElt); break; } case vsCMElement.vsCMElementVariable: { VarStack.Enqueue((CodeVariable)oneElt); break; } case vsCMElement.vsCMElementProperty: { PropStack.Enqueue((CodeProperty)oneElt); break; } case vsCMElement.vsCMElementFunction: { FuncStack.Enqueue((CodeFunction)oneElt); break; } default : { break; }; } } Once weve collected and built our queue collections, we can now write them back to the documentation comment text organized by code element type. 92 Variables For each public variable, we want to report the variable name and data type, as well as any initial value it might be set to. Note that the variables (and all code elements) are documented in the order they are encountered; they are not sorted in the queue. // Iterate through the variables looking for public variables. foreach (CodeVariable theVar in VarStack) { if (theVar.Access == vsCMAccess.vsCMAccessPublic) { sb.Append(CommentChar + docCategory + " " + theVar.Name + " (" + theVar.Type.AsString + ")"); docCategory = " "; if (!(theVar.InitExpression == null)) { sb.Append(" ["+theVar.InitExpression.ToString()+"]"); } sb.AppendLine(""); } } Enums For the enumerated types, we want to report the name of the enumeration and all of the elements (stored as children variables to the enum itself). The following code illustrates how to walk through the enum and its children elements. docCategory = " Enums:"; foreach (CodeEnum theEnum in EnumStack) { if (theEnum.Access == vsCMAccess.vsCMAccessPublic) { sb.Append(CommentChar + docCategory + " " + theEnum.Name + " "); docCategory = " "; if (theEnum.Children.Count > 0) { sb.Append("("); for (int xx = 1; xx <= theEnum.Children.Count; xx++) { int yy = theEnum.Children.Count - xx + 1; CodeVariable theVar = (CodeVariable)theEnum.Children.Item(yy); sb.Append(theVar.Name); if (yy > 1) { sb.Append(","); } } sb.Append(")"); } sb.AppendLine(""); } } 93 Properties For each public property, we want to display the property name and the propertys data type. The following code loops through the collected CodeProperty elements and does just that. docCategory = " Properties:"; foreach (CodeProperty theProp in PropStack) { if (theProp.Access == vsCMAccess.vsCMAccessPublic) { sb.Append(CommentChar + docCategory + " " + theProp.Name + " (" + theProp.Type.AsString + ")"); sb.AppendLine(""); docCategory = " "; } } Methods As we process the class methods, we want to only report on public methods and exclude the constructor from the documentation. We need to handle parameters and the return type (if not VOID). 
docCategory = " Methods:"; foreach (CodeFunction theFunc in FuncStack) { if (theFunc.FunctionKind != vsCMFunction.vsCMFunctionConstructor && theFunc.Access==vsCMAccess.vsCMAccessPublic) { sb.Append(CommentChar + docCategory + " " + theFunc.Name); docCategory = " "; if (theFunc.Parameters.Count > 0) { int yy = theFunc.Parameters.Count; sb.Append("("); foreach (CodeParameter theParam in theFunc.Parameters) { sb.Append(theParam.Name+":"+theParam.Type.AsString); yy--; if (yy > 0) { sb.Append(","); } } sb.Append(")"); } if (theFunc.Type.AsString.ToUpper().EndsWith("VOID")==false) { sb.Append(" ==> " + theFunc.Type.AsString); } sb.AppendLine(""); } } 94 Writing the header back to the source window By this point, we have a nicely formatted documentation block of text, showing all the public elements of the class. We are going to use our text document and edit point objects to either write the text to the top of the file, or update the prior version of the documentation. This allows you to run the add-in as often as you want after youve added new public code elements to the class. TextDocument theText = (TextDocument)_applicationObject.ActiveDocument.Object(); EditPoint thePoint = theText.CreateEditPoint(); // Check and see if a comment already exists. string theLine = thePoint.GetText(thePoint.LineLength); bool FoundOldComment = false; string OldComment = theLine+Environment.NewLine; if (theLine.StartsWith(CommentChar + " [===")) // Start of delimiter. { while (thePoint.AtEndOfDocument == false && FoundOldComment==false) { thePoint.LineDown(1); theLine = thePoint.GetText(thePoint.LineLength); OldComment+= theLine+Environment.NewLine; FoundOldComment = theLine.StartsWith(CommentChar) && theLine.EndsWith("==]"); } } if (FoundOldComment) { thePoint = theText.CreateEditPoint(theText.StartPoint); thePoint.ReplacePattern(theText.EndPoint, OldComment, sb.ToString()); } else { thePoint.Insert(sb.ToString()); } Summary The code model features of Visual Studio allow you to determine code elements without the need to write your own parsing routines. Although not all elements are returned in the code collection (such as compiler directives), the model gives you a great starting point for writing add-in modules to work with the code in a source file. 95 Chapter 14 Tool Windows The Visual Studio IDE consists of a number of different tool windows to manage the solution. You can access these windows through the ToolWindows property of your _applicationObject variable. A few of the commonly used windows have classes written specifically for those windows, but every window is accessible, either through one of the common classes or through the GetToolWindow() method. Error List The Error List window contains all errors, warnings, and messages that the most recent compile or build step encountered. Figure 13: Error List You can programmatically access the error messages using the Error List window. EnvDTE80.ErrorList errList = _applicationObject.ToolWindows.ErrorList; The Error List object contains three Boolean properties, indicating which messages are included in the error list: ShowErrors ShowMessages ShowWarnings 96 You can toggle these properties to control the content of the error items list. Task List Visual Studio provides a Task List Manager which allows developers to build a task list by entering tasks or by adding TODO comments in the source code. 
Figure 14: Task List Manager

You can programmatically access the Task List window and all tasks with your add-in using the following code:

EnvDTE.TaskList TaskList = _applicationObject.ToolWindows.TaskList;

The task list object allows you to read or set the Default Comment Token (which is usually TODO) via the DefaultCommentToken string property. The primary interface with the task list is the TaskItems object. You can find the number of items in the list using the integer Count property. You can also get details about any single task by using the Item() indexed property. This will return an individual task item entry, with the following properties:

Category (String): Comment or User Task.
Checked (Boolean): Is the task item checked?
Description (String): Descriptive text of the task.
Displayed (Boolean): Is the task item currently displayed?
FileName (String): If a file is associated with the task, its fully qualified path name is provided.
Line (Integer): Line number in the file where the TODO comment is found.
Priority (Enum): vsTaskPriorityLow, vsTaskPriorityMedium, vsTaskPriorityHigh.

Solution Explorer

The Solution Explorer window shows a tree view UI element displaying the currently open solution, as the following figure illustrates:

Figure 15: Solution Explorer

You can access the Solution Explorer using the following code:

EnvDTE.UIHierarchy SolExplore = _applicationObject.ToolWindows.SolutionExplorer;

You can traverse the tree of the Solution Explorer using the UIHierarchyItems property, which provides access to each level of the tree display. Each item represents a single element in the view. In our previous example, the first hierarchy item would be the SaveSomeFiles level. That item would have a UIHierarchyItems collection as well, which would contain the My Project item, the AssemblyInfo.vb item, Connect.vb, etc.

Output Window

The Output Window is a text window showing the output of various IDE tools, such as the build process, the debug process, etc. You can use the ToolWindows object to gain access to the Output Window, as shown in the following example:

OutputWindow outWnd = _applicationObject.ToolWindows.OutputWindow;

Each tool has its own pane, which is selected by the user via a drop-down menu. You can add your own pane if you want a place to collect and display messages from your own tool. For example, the following code adds a pane to keep track of user interface issues your tool might discover:

OutputWindowPane OutputPane = outWnd.OutputWindowPanes.Add("UI issues");

Searching for bad words

Another useful add-in module we could write would search a solution's source projects looking for a list of "forbidden words." A famous CAD design software application once contained a message (probably left over by a programmer) that said, "this is a message, you idiot." While the programmer might have thought it was humorous, the company that had to write an apology letter and send out a patched version of the software probably didn't see the humor.

In order to prevent a similar incident, we can write an add-in to search all files within a project and report any occurrences of the forbidden words. I'll define the word list as a regular expression constant so you can create your own list of words. For this add-in, we will add a button to the standard toolbar. If a solution is open, we will scan all the projects and add the bad words and locations to our own pane in the output window.
We will also add an entry into the task list with a high priority to clean the words up. Bad words scan We are still going to use the wizard to create our basic add-in, and then attach our module to the standard toolbar of the IDE. So lets start up the wizard with the following: Visual C# (or your preferred language). Application Host: Only Visual Studio. Name/Description: Bad Words and Scan files for bad words. Create UI Menu and do not load at start-up. After the wizard generates your class source file, add the following variables to the class definition. You can customize your list of words by adding them to the BAD_WORD_LIST string. const int RED_STAR_ICON = 6743; const string BAD_WORD_LIST = "(stupid|idiot|fool)"; bool AddedToTaskList = false; 99 Using a tool button For this example, we are going to call our add-in module from the toolbar rather than a menu item. We will also disable the item if a solution is not open in Visual Studio, since this add-in searches all project items with a solution in order to mark the occurrences of your bad word list. Change your OnConnection method code to add an icon to the standard toolbar instead, as shown in the following code sample: // Add the command. Command cmd = (Command)_applicationObject.Commands.AddNamedCommand(_addInInstance, "BadWords", "BadWords", "Search for bad words", true, RED_STAR_ICON, null, (int)vsCommandStatus.vsCommandStatusSupported + (int)vsCommandStatus.vsCommandStatusEnabled); CommandBar stdCmdBar = null; // Reference the Visual Studio standard toolbar. CommandBars commandBars = (CommandBars)_applicationObject.CommandBars; foreach (CommandBar cb in commandBars) { if(cb.Name=="Standard") { stdCmdBar = cb; break; } } // Add a button to the standard toolbar. CommandBarControl stdCmdBarCtl = (CommandBarControl)cmd.AddControl(stdCmdBar, stdCmdBar.Controls.Count + 1); // Set a caption for the toolbar button. stdCmdBarCtl.Caption = "Search for bad words"; // Set the toolbar's button style to an icon button. CommandBarButton cmdBarBtn = (CommandBarButton)stdCmdBarCtl; cmdBarBtn.Style = MsoButtonStyle.msoButtonIcon; Only if a solution is open Since we only want to display the icon if a solution is open, we need to add code to check that condition during our QueryStatus method call. if(commandName == "BadWords.Connect.BadWords") { if (_applicationObject.Solution.Count > 0) { status = (vsCommandStatus)vsCommandStatus.vsCommandStatusSupported | vsCommandStatus.vsCommandStatusEnabled; } return; } 100 Getting tool windows In our Exec() method, we want to make sure a solution is open and if so, get references to the Output window and the task list. We also want to add our custom pane called Bad Words and activate the Output window, as shown in the following code sample: handled = true; if (_applicationObject.Solution.Count <1) { MessageBox.Show("Please open a solution to scan..."); return; } // Need to get all project items and search for "bad words". OutputWindow outWnd = _applicationObject.ToolWindows.OutputWindow; TaskList theTasks = _applicationObject.ToolWindows.TaskList; OutputWindowPane OutputPane = outWnd.OutputWindowPanes.Add("Bad words"); OutputPane.Clear(); bool FoundBadWords = false; // Activate the output window. Window win = _applicationObject.Windows.Item(EnvDTE.Constants.vsWindowKindOutput); win.Activate(); Looping through the project We are now ready to loop through the solution and all projects within. Within each project, we search through the project items. 
If the project item has a document object attached to it, and the document contains a text document object, we've found a file with text (source, XML configuration file, etc.) that we should scan for entries. Some files, such as the Assembly file, will not have a text document object, so we will skip scanning those files.

foreach (Project CurProject in _applicationObject.Solution)
{
    foreach (ProjectItem CurItem in CurProject.ProjectItems)
    {
        Document theDoc = null;
        try
        {
            theDoc = CurItem.Document;
        }
        catch
        {
        }
        if (theDoc != null)
        {
            TextDocument theText = (TextDocument)theDoc.Object("TextDocument");
            if (theText != null)
            {

Marking bad words

Using the MarkText method of the Text Document object, we apply a regular expression search to see if the file contains any words from the list. The lines are bookmarked and the file name is added to our Output window pane. The following code performs the search task:

if (theText.MarkText(BAD_WORD_LIST, (int)vsFindOptions.vsFindOptionsRegularExpression))
{
    OutputPane.OutputString(CurItem.Name + " contains bad words" + Environment.NewLine);
    FoundBadWords = true;
}

Adding a clean-up task

Our final step in the process is to add an entry to the task list, reminding the programmer to clean up the code that contains the bad words. We only do this if the add-in found bad words and we've not yet added the task. The task priority and task icon control the appearance of the entry in the task list, with red indicating a high-priority item.

if (FoundBadWords && AddedToTaskList == false)
{
    TaskItems2 TLItems = (TaskItems2)theTasks.TaskItems;
    TLItems.Add("Bad Words", "Bad Words",
        "Remove bad words " + BAD_WORD_LIST + " from source files",
        vsTaskPriority.vsTaskPriorityHigh, vsTaskIcon.vsTaskIconNone,
        true, null, 10, true, true);
    AddedToTaskList = true;
}

When the add-in completes, it will have bookmarked all lines containing words from your bad word list and added the list of offending files to our Bad words pane in the Output window.

Summary

While the tool windows have a narrower focus than the general source-editing windows, the object types available provide your add-in with the ability to integrate easily with them, so you can add your custom output, save to-do items, etc.

Chapter 15 Source Code Generation

One time-saving option you can add to the Visual Studio IDE is the ability to generate source code. As developers, there are often common code-writing tasks we need to perform, and by designing an input screen and code generator, we can create an add-in module to save time in the development cycle.

Source code helper class

To assist in the generation of source code, it can be beneficial to create a helper class, which is basically a collection of routines to perform common tasks that are likely to occur when generating code. We start our class definition by defining the different programming languages we want to support and providing a property to allow users to decide which language they want to generate code in.

// <summary>
// Code Gen class: Helper class to generate code for add-ins.
// </summary>
public class CodeGen
{
    // <summary>
    // List of programming languages generator works with.
    // </summary>
    public enum ProgrammingLanguages { VisualBasic = 1, CSharp = 2 }

    private ProgrammingLanguages theLang = ProgrammingLanguages.VisualBasic;
    private bool inComment = false;

    // <summary>
    // Programming language to generate code for.
// </summary> public ProgrammingLanguages Programming_Language { get { return theLang; } set { theLang = value; } } } Once the basic class is created, we can add some methods to generate the appropriate comment text. The simplest method is SingleLineComment() which generates the appropriate syntax and comment text for a comment in a line of code. public string SingleLineComment(string theComment) 103 { string res = string.Empty; switch (theLang) { case ProgrammingLanguages.CSharp: res = "// " + theComment; break; case ProgrammingLanguages.VisualBasic: res = "' " + theComment; break; } return res; } The code determines the appropriate delimiter based on the chosen programming language, and then returns a string of the delimiter and the comment text. We can also add a method called StartComment(), which writes a multi-line comment starting delimiter and sets a flag to indicate we are in commented code. public string StartComment() { return StartComment(""); } public string StartComment(string theComment) { string res = string.Empty; switch (theLang) { case ProgrammingLanguages.CSharp: res = "/* " + theComment; break; case ProgrammingLanguages.VisualBasic: res = " " + theComment; break; } inComment = true; return res; } The StopComment() method writes the appropriate ending comment delimiter and turns off the commenting flag. public string StopComment() { string res = string.Empty; switch (theLang) { case ProgrammingLanguages.CSharp: res = "*/ "; break; case ProgrammingLanguages.VisualBasic: break; 104 } inComment = false; return res; } There are a couple of additional methods to round out our helper class. These include MakeFileName(), which is used to append the appropriate extension to a file name. public string MakeFileName(string theName) { string res = theName; switch (theLang) { case ProgrammingLanguages.CSharp: res += ".cs"; break; case ProgrammingLanguages.VisualBasic: res += ".vb"; break; } return res; } We can also use DeclareVariable() to create a variable in any of the languages. public string DeclareVariable(string varName, string DataType, string DefaultValue) { string res = string.Empty; switch (theLang) { case ProgrammingLanguages.CSharp: res = DataType + " " + varName; if (DefaultValue.Length > 0) { res += " = " + DefaultValue; } res += ";"; break; case ProgrammingLanguages.VisualBasic: res = "DIM " + varName + " AS " + DataType; if (DefaultValue.Length > 0) { res += " = " + DefaultValue; } break; } return res; } Our final function, StartRoutine(), returns a function declaration and delimiter shell. public string StartRoutine(string typeOfCall, string RoutineName,string ReturnType) { 105 string res = string.Empty; switch (theLang) { case ProgrammingLanguages.CSharp: if (typeOfCall.StartsWith("P")) { res = "public void "; } else { res = "public "+ReturnType+" "; } res += RoutineName+"()"+Environment.NewLine; res += "{"; break; case ProgrammingLanguages.VisualBasic: if (typeOfCall.StartsWith("P")) { res = "sub " + RoutineName; } else { res = "function " + RoutineName + "() as " + ReturnType; } res += Environment.NewLine; break; } return res; } With this simple class library available to help out code generation, we can now begin our add-in code. Standardized headers Imagine your company has a set of standard headers that every code module must include. These headers include the date and time the program was created, as well as the version of Visual Studio used to create the file. 
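Before wiring the helper class into an add-in, it can be worth exercising it on its own. The following sketch is illustrative only; the strings shown in the comments follow from the method bodies listed above:

CodeGen gen = new CodeGen();
gen.Programming_Language = CodeGen.ProgrammingLanguages.CSharp;
string comment = gen.SingleLineComment("Generated header");
// comment == "// Generated header"
string csVariable = gen.DeclareVariable("StartTime", "DateTime", "DateTime.Now");
// csVariable == "DateTime StartTime = DateTime.Now;"
gen.Programming_Language = CodeGen.ProgrammingLanguages.VisualBasic;
string vbVariable = gen.DeclareVariable("StartTime", "DateTime", "DateTime.Now");
// vbVariable == "DIM StartTime AS DateTime = DateTime.Now"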
Wizard settings

Start your standard headers add-in using the wizard and the following settings:

Visual C# (or your preferred language).
Application Host: Only Visual Studio.
Name/Description: StdHeaders and Generate a standard heading module.
Create UI Menu and do not load at start-up.

Verify the settings at the Summary screen, and if they look okay, generate the code.

Moving to the File menu

For our standard headers add-in, we would rather have the menu item attached to the File menu, using an icon, instead of the Tools menu. We need to change a couple of lines in our OnConnection method:

public void OnConnection(object application, ext_ConnectMode connectMode,
    object addInInst, ref Array custom)
{
    _applicationObject = (DTE2)application;
    _addInInstance = (AddIn)addInInst;
    if(connectMode == ext_ConnectMode.ext_cm_UISetup)
    {
        object []contextGUIDS = new object[] { };
        Commands2 commands = (Commands2)_applicationObject.Commands;
        string toolsMenuName = "File";

In this case, we are changing the toolsMenuName variable from Tools to File.

const int DOCUMENTS_ICON = 1197;

We will also add the constant for the documents icon, and use that value rather than the hardcoded 59 in the AddNamedCommand2() call.

//Add a command to the Commands collection:
Command command = commands.AddNamedCommand2(_addInInstance, "StdHeaders",
    "StdHeaders", "Standardized headers", true, DOCUMENTS_ICON, ref contextGUIDS,
    (int)vsCommandStatus.vsCommandStatusSupported +
    (int)vsCommandStatus.vsCommandStatusEnabled,
    (int)vsCommandStyle.vsCommandStylePictAndText,
    vsCommandControlType.vsCommandControlTypeButton);

Options screen

The options screen is a Windows form that asks users a few questions about the type of code header they want to generate. As a starting point, we will need to know the file to save, the programming language to use, and whether to generate a function or subroutine call.

Figure 16: Options screen for standard header add-in

Generate the header

If the user clicks OK to generate the header code, we will pull the selected options from the Windows form and use them to build our generated header and code template.

StringBuilder sb = new StringBuilder();  // String to hold header.
CodeGen Gen = new CodeGen();             // Code generation helper.
StdHeaderForm theForm = new StdHeaderForm();
theForm.ShowDialog();
if (theForm.DialogResult == DialogResult.OK)
{
    string cFile = theForm.CODEFILE.Text;
    // Get programming language choice.
    switch (theForm.LANGCOMBO.SelectedIndex)
    {
        case 0:
        {
            Gen.Programming_Language = CodeGen.ProgrammingLanguages.CSharp;
            break;
        }
        case 1:
        {
            Gen.Programming_Language = CodeGen.ProgrammingLanguages.VisualBasic;
            break;
        }
    }
    sb.AppendLine(Gen.StartComment());
    sb.AppendLine(Gen.WriteCode("=============================================="));
    sb.AppendLine(Gen.WriteCode(" Program: " + cFile));
    sb.AppendLine(Gen.WriteCode(" Author: " + Environment.UserName));
    sb.AppendLine(Gen.WriteCode(" Date/Time: " + DateTime.Now.ToShortDateString() +
        "/" + DateTime.Now.ToShortTimeString()));
    sb.AppendLine(Gen.WriteCode(" Environment: Visual Studio " +
        _applicationObject.Edition));
    sb.AppendLine(Gen.WriteCode("=============================================="));
    sb.AppendLine(Gen.StopComment());
    sb.AppendLine(Gen.WriteCode(""));
sb.AppendLine(Gen.StartRoutine(theForm.TYPECOMBO.Text.ToUpper(), "TBD", "string")); Add standard variables In some applications, standard variables are used so that every programmer uses the same variable names and meanings. In our example here, we use variables called SourceModified and StartTime to track modifications and monitor performance. // Optionally, write standard variables. if (theForm.INCLUDECHECK.Checked) { sb.AppendLine(Gen.DeclareVariable("SourceModified","string")); sb.AppendLine(Gen.DeclareVariable("StartTime","DateTime","DateTime.Now()")); } sb.AppendLine(Gen.EndRoutine(theForm.TYPECOMBO.Text.ToUpper())); Open a new window Once the string builder variable is created with the headers, we need to save it to a file and then open it within the IDE. cFile = Gen.MakeFileName(cFile); StreamWriter objWriter = new System.IO.StreamWriter(cfile); objWriter.Write(sb.ToString()); objWriter.Close(); ItemOperations itemOp; itemOp = _applicationObject.ItemOperations; itemOp.OpenFile(cfile, Constants.vsViewKindCode); 109 After the add-in completes, a new window will be open with code similar to the following example: ' ======================================================== ' Program: Sample ' Author: Joe ' Date/Time: 10/13/2012/9:24 AM ' Environment: Visual Studio Professional ' ======================================================== ' Function TBD() As String Dim SourceModified As String Dim StartTime As DateTime = DateTime.Now() End Function Item Operations object The ItemOperations object of the _applicationObject provides methods to open and add files in the Visual Studio IDE. In the previous example, weve created the file, and using the ItemOp variable, instructed Visual Studio to open the file in a code editor window. The object allows you to programmatically perform some of the options from the File menu. Other methods of the Item Operations object include: AddExistingItem(): Add an existing file to the project. AddNewItem(): Add a new item to the project. You can pass two parameters: the category name/item name (such as General/XML File), and the display name in the project window. IsFileOpen(): Is the file name passed as a parameter open in an IDE window? Navigate(): Open a browser window to a specified URL. NewFile(): Create a new file using the virtual path indicating the type of file. You can optionally specify the file name for the item and the view kind to open the file in. OpenFile(): Open an existing file in the editor using a specified view kind. In our example code, we created a file and opened it in a code view. Summary This chapter demonstrated how to build a source code file by pulling information from Visual Studio and the environment to create a standardized header. It also showed how to open the source file in a Visual Studio document window to allow the user to start programming immediately. 110 Chapter 16 Deploying Your Add-In Once the add-in module you have developed is completed, debugged, and ready to go, you probably will want to share it with your fellow developers. In this chapter, we will cover what needs to be done to install your add-in and to interact with it through the Add-in Manager. Installing the add-in To install your add-in, youll need to copy two files to one of the folders where Visual Studio looks for add-in modules. This is usually \Documents\Visual Studio 2010\Addins\ in Visual Studio 2010, or \Documents\Visual Studio 2012\Addins\ in Visual Studio 2012. 
You can also look in Visual Studios Options dialog, under the Environment node's Add-in/Macro Security page for the Add-in File Paths list. Figure 17: Installing add-ins Tip: You might need to select Show All Settings if the Add-in/Macros Security page does not appear. If you copy the Assembly DLL file and the .AddIn XML files to this folder, Visual Studio will discover it and possibly load it the next time Visual Studio is started. (The .AddIn XML files have options to control when the add-in is loaded. See Chapter 6 for details.) 111 Add-in Manager The Add-in Manager is a tool window under the Tools menu that lets you interact with all IDEinstalled add-ins. You can change when the add-in is loaded, and you can disable the add-in as well. The descriptive text youve been entering in various add-in modules in this book will appear in the Add-in Manager tool window. Figure 18: Add-in Manager Whether the add-in module is enabled, whether it loads at start-up, and whether it can run from the command line is stored in the XML file. You can manipulate the XML file using the Add-in Manager window shown in the previous figure. Tip: Clearing the add-in name does not immediately unload it from memory. You will most likely need to exit and restart Visual Studio to remove the DLL from memory. If you install an add-in that does not behave and causes problems, you can start the IDE with the /SafeMode switch, which loads Visual Studio without any add-in modules at all. Summary Add-in module installation in Visual Studio versions 2008 and newer is very simple using the XML configuration option. You can make an install program or script if need be, but with an audience of primarily programmers, you can probably simply ask them to copy the two files to the appropriate folder. 112 Chapter 17 Object Reference This chapter contains a summary of some of the basic object classes you can use to interact with Visual Studio. Application Object (DTE2) The application object (typically stored in the variable _applicationObject) is an encapsulation of everything within the Visual Studio IDE. 113 Property ActiveDocument Data Type Document Description Currently active document. ActiveSolutionProjects Collection of projects Collection of all projects in current solution. ActiveWindow Window Currently active or topmost window. Addins Collection Collection of all available add-ins. CommandBars Command Bar Access to Visual Studio commands and menus. CommandLineArgument String Command-line arguments passed to Visual Studio when it was started. Debugger Debugger Access to Visual Studio debugger object. DisplayMode Enum DisplayMDI or DisplayMDITabs. Documents Collection Collection of open documents in the IDE. Edition String Ultimate, Premium, Professional, or Express. FullName String Full path and file name. ItemOperations Object Allows file manipulation within Visual Studio. LocaleID Integer Geographic region, 1033-United States, etc. MainWindow Window Main window of the development environment. Mode Enum IDE Mode Design or IDE Mode Debug. RegistryRoot String Root key in registry where settings are stored. Solution Solution Current solution object. ToolWindows Tool Window Shortcut access object to IDE tool windows. Property Version Data Type String Description 10.0, 12.0, etc. Windows Collection Collection of all open IDE windows. Windows and documents Windows represent tool windows or editing forms used by Visual Studio. Tool windows include the Solution Explorer, Properties, the Tool Box, etc. 
Document windows are editing windows containing document objects that represent the source code being edited by the user. See Chapters 9 and 10 for examples and more details. Document Property ActiveWindow Data Type Window Description Window the document is open in. FullName String Full path and file name of file in the document. Kind String GUID string indicating type of document. Name String Name of the document. Path String Full path, without the file name. ProjectItem ProjectItem Item within the project associated with the document. Saved Boolean True if the document has not been modified since last open. Selection Selection Current selection text in document. Methods Activate Move focus to the document. Close Close, and optionally save the document. NewWindow Create a new window to view the document. Object Run-time object associated with the document. Redo Re-execute last action that was undone. Save Save the document to disk. Undo Reverse last action performed on document. 114 Window The window represents either a tool window or a document window that contains text being edited. Property AutoHides Data Type Boolean Description Can the tool window be hidden? Caption String The window title. Document Document The document in the window (if one exists). Height Integer Height of the window in pixels. Kind String Either Tool or Document. Left Integer Distance from left edge of the container in pixels. Object Object Allow run-time access to contents in the window, most of Object(TextDocument). ObjectKind String A GUID representing the tool contained in the window. Project Project The project associated with the window. ProjectItem ProjectItem The project element associated with the window. Selection Selection object The currently selected text in the window. Top Integer Distance from the top edge of the container in pixels. Visible Boolean Is the window currently visible? Width Integer Width of window, in pixels. WindowState Enum vsWindowStateNormal, StateMinimize, or StateMaximize. Methods Activate Move focus to the window. Close Close and optionally save the document. SetTabPicture Set a picture to display for a tool window. Solution and projects The solution object represents a solution and its component projects, which you can manipulate easily. See Chapters 8 and 9 for more information. 115 Solution The following table lists the solution properties of _applicationObject. Property Data Type Description Collection of add-in objects associated with the solution. AddIns Collection FullName String Path and file name of the solution. Globals Collection Global variables saved with solution. IsOpen Boolean Is the solution open? Projects Collection Collection of all projects in the solution. Properties Collection Names and values of all solution properties. Saved Boolean Has the solution been saved? SolutionBuild SolutionBuild An object with build information of the solution. TemplatePath String object Template path for type of project, i.e. C#, VB. Methods AddFromFile Add an existing file to the project. AddFromTemplate Copy a template file and add it to the solution. Close Close the solution. Create Create an empty solution. FindProjectItem Find an item in a project. Item Get a project in the solution. Open Open the solution. ProjectItemsTemplatePath Return location of project item templates for specific project types. Remove Remove specified project from solution. SaveAs Save solution under another name. 
Project You can iterate through the solution's Projects collection to get details on any given project in the solution space. 116 Property CodeModel Data Type Object Description Code Model (access to source elements). FullName String Full path and file name of the project. Globals Collection Global add-in values associated with the project. Kind GUID String Type of solution, VB or C#, for example. Name String Short project name. ProjectItems Collection Collection of items making up the project. Properties Collection Properties associated with the project. Saved Boolean Has the project been saved? UniqueName String Unique name for the project. Methods Save Save the project or project item. SaveAs Save as a new file name. Project Item Project items are the files (source, XML files, etc.) that make up the project. 117 Property ContainingProject Data Type Project Description The project hosting the project item or file. Document Object A document object (if any exist) for the file. FileCodeModel Code Model Allows you to access high-level code elements within the source file. FileCount Integer Number of files associated with the project item. FileNames Collection File names associated with the item. IsOpen Boolean Is the project item open? Kind GUID Type of the item. Name String Name of the project item. Object Object Run-time object associated with the project item. Properties Collection Properties associated with the item. Saved Boolean Has project item been modified since last open? Methods Delete Remove project from solution and storage. ExpandView Expand solution explorer view to show project. Open Open the project item. Remove Remove item from project and delete from disk. Save Save project item. SaveAs Save project item under another file name. Code manipulation These objects are the basic tools to manipulate text in source windows, both using simple string manipulations and the more complex code model parser. See Chapters 12 and 13 for some example usage. Text Document Property EndPoint Data Type Text point Description A point referring to the end of the document. Selection Object The currently selected text. StartPoint Text point A point referring to the start of the document. Methods ClearBookmarks Remove all unnamed bookmarks from the text document. CreateEditPoint Create a point object to edit text within the document. MarkText Create unnamed bookmarks for all found text in document. ReplacePattern Replace patterns within entire document or range. ReplaceText Simple text replacement with document. Edit Point Property AtEndOfDocument Data Type Boolean Description Is the point positioned at the documents end? 118 Property AtEndOfLine Data Type Boolean Description Is the object at the end of the line? AtStartOfDocument Boolean Is the point positioned at the start of the document? AtStartOfLine Boolean Is the object at the start of a line? CodeElement Object Return the code element at the current position. DisplayColumn Integer The current column containing the point. Line Integer The current line number in the document. LineLength Integer Number of characters in the current line. Methods ChangeCase Change case of selected text. CharLeft Move edit point specified number of characters to the left. CharRight Move edit point specified number of characters to the right. ClearBookmark Clear any unnamed bookmark on the current line. Copy Copy range of text to clipboard. Cut Copy text to clipboard and delete from document. Delete Delete text from document. 
GetLines Get lines of text between two lines. GetText Get string of text. Insert Insert text into document. LineDown Move down one line. LineUp Move up one line. MoveToLineAndOffset Move to a line and character offset. Paste Paste contents of clipboard at current point. ReplaceText Replace selected text with given text. WordLeft Move specified number of words to the left. WordRight Move specified number of words to the right. Code Model Using the code model is discussed in Chapter 13. 119 Property CodeElements Data Type Collection Description All the code element objects at this level. IsCaseSenstive Boolean Is the current language case sensitive? Language String Language the file is coded in. Methods AddClass Create a code class construct. AddEnum Create an enum construct in the code. AddFunction Create new function code. AddInterface Create new interface code. AddNamespace Create a new namespace in the module. AddVariable Create new variable code. IsValidID Is the specified identifier valid in the current language? Remove Remove code element from file. Code Element Property Children Data Type Collection Description Collection of child code elements. EndPoint Text Point Ending location for this element in file. FullName String Fully qualified code element name. InfoLocation Enum Is code element in project or external. IsCodeType Boolean Can a code type object be obtained from the element? Kind Enum Type of code element, i.e. class, function, etc. Language String Language the code element is written in. Name String Short name of the code element. ProjectItem ProjectItem The project item associated with the code element. StartPoint Text Point Starting location of the code element within the file. 120 Chapter 18 Add-in Helper Class As you begin to work with add-ins, you might find yourself writing your own library of helper routines. Here is a sample class library to get you started. We begin by declaring a variable in the helper class to hold a reference to the DTE2 object (_applicationObject), so we do not have to pass it around as a parameter. using System; using EnvDTE80; using Microsoft.VisualStudio.CommandBars; using System.Windows.Forms; public class AddInsHelper { public DTE2 app { get; set; } MakeEmptySolution While the solution object allows you to create an empty solution, it has a couple of changes to be aware of. The directory will not be made if it does not exist, and you need to save the new solution file. To isolate these behaviors, we can create our own method call to make an empty solution. public void MakeEmptySolution(string folder, string SolName) { string FullFolder; // Close solution if open. if (app.Solution.IsOpen) { app.Solution.Close(true); } // Get or make the folder for the solution. FullFolder = System.IO.Path.Combine(GetVSProjectsFolder(), folder); if (!System.IO.Directory.Exists(FullFolder)) { System.IO.Directory.CreateDirectory(FullFolder); } string tempFile = System.IO.Path.Combine(FullFolder, "TempSolution.sln"); app.Solution.Create(FullFolder, tempFile); tempFile = System.IO.Path.Combine(FullFolder, SolName); app.Solution.SaveAs(tempFile); } GetVSProjectsFolder Another useful function is a wrapper to the get_properties method to return a path where Visual Studio stores new projects. 
    public string GetVSProjectsFolder()
    {
        EnvDTE.Properties theProp = app.get_Properties("Environment", "ProjectsAndSolution");
        return theProp.Item("ProjectsLocation").Value.ToString();
    }

FindMenuIndex

The FindMenuIndex method finds the index of a particular menu item on one of the main menu bar's menus. This allows you to control where to place your add-in module if you don't want it as the first or last item on the menu.

    public int FindMenuIndex(string MainMenu, string subMenu)
    {
        int res = 1;
        try
        {
            CommandBar menuBar = ((CommandBars)app.CommandBars)["MenuBar"];
            foreach (CommandBarControl cb in menuBar.Controls)
            {
                if (MainMenu.ToUpper() == cb.Caption.ToUpper().Replace("&", ""))
                {
                    CommandBarPopup toolsPopup = (CommandBarPopup)cb;
                    for (int xx = 1; xx <= toolsPopup.Controls.Count; xx++)
                    {
                        if (toolsPopup.Controls[xx].Caption.ToUpper().Replace("&", "") == subMenu.ToUpper())
                        {
                            res = toolsPopup.Controls[xx].Index;
                            break;
                        }
                    }
                    break;
                }
            }
        }
        catch { }
        return res;
    }

Hopefully a few of these functions will help you get started writing your own tools and support your add-in development projects.

Chapter 19 Third-Party Add-Ins

Ever since Microsoft gave outside parties the ability to create add-ins, thousands of third-party add-ins have been written and shared among developers. Many of the add-in modules are available for free or at little cost. A good starting point is the Microsoft Visual Studio Gallery at http://visualstudiogallery.msdn.microsoft.com/.

Microsoft add-ins

Microsoft developers have provided a number of add-ins to the gallery. A few sample add-in programs available include:

Color Printing: Allows code files to be printed in color.
Regex Editor: IntelliSense, syntax coloring, and testing for regular expressions. If you work with regular expressions, this add-in is a great addition to the IDE.
PowerCommands: Useful extensions to the Visual Studio IDE. Adds commands such as Undo Close, Insert GUID, Extract Constant, etc. to the IDE.
Productivity Power Tools: Developer productivity extensions. Enhancements to the IDE, such as organizing Visual Basic imports (similar to Organize Usings in C#), aligning assignment statements, customizing document tabs, etc.

It is not unusual to see the functionality of add-in modules developed internally by Microsoft make it into future releases of Visual Studio.

Community add-ins

There is a large community of programmers writing and sharing their add-ins on the website. Some useful add-ins from the community include:

Indent Guides

The Indent Guides add-in displays vertical lines to show indentation levels. It provides a useful visual guide for aligning statements.

Figure 19: Indent Guide Add-in

Routing Assistant

The Routing Assistant add-in enables users to browse, define, match, and filter ASP.NET MVC routes for ASP.NET applications and websites with ease, directly from within Visual Studio. With the popularity of the MVC framework, this add-in module is a great time-saver and a good way to understand MVC routing behavior.

devColor

The devColor add-in underlines the colors in style sheets and includes a color picker dialog.

Figure 20: devColor Add-in

The Visual Studio Gallery is a worthwhile site to visit and bookmark. You can also contribute your own add-ins if you create one that could be useful to other programmers. Fame and glory await you!
Package ‘BradleyTerry2’ October 12, 2022 Version 1.1-2 Title Bradley-Terry Models URL https://github.com/hturner/BradleyTerry2 BugReports https://github.com/hturner/BradleyTerry2/issues Description Specify and fit the Bradley-Terry model, including structured versions in which the pa- rameters are related to explanatory variables through a linear predictor and versions with contest- specific effects, such as a home advantage. Depends R (>= 2.10) Imports brglm, gtools, lme4 (>= 1.0), qvcalc, stats Suggests prefmod, testthat Enhances gnm License GPL (>= 2) LazyData yes Encoding UTF-8 RoxygenNote 7.0.2 Language en-GB NeedsCompilation no Author <NAME> [aut, cre], <NAME> [aut] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2020-02-03 13:50:04 UTC R topics documented: add1.BT... 2 anova.BT... 4 basebal... 5 BTabilitie... 6 BT... 8 CEM... 12 chameleon... 15 citation... 17 countsToBinomia... 18 flatlizard... 19 footbal... 22 GenDavidso... 24 glmmPQ... 27 glmmPQL.contro... 31 icehocke... 32 plotProportion... 34 predict.BTglmmPQ... 39 predict.BT... 41 qvcalc.BTabilitie... 44 residuals.BT... 46 seed... 48 sound.field... 49 springal... 50 add1.BTm Add or Drop Single Terms to/from a Bradley Terry Model Description Add or drop single terms within the limit specified by the scope argument. For models with no random effects, compute an analysis of deviance table, otherwise compute the Wald statistic of the parameters that have been added to or dropped from the model. Usage ## S3 method for class 'BTm' add1(object, scope, scale = 0, test = c("none", "Chisq", "F"), x = NULL, ...) Arguments object a fitted object of class inheriting from "BTm". scope a formula specifying the model including all terms to be considered for adding or dropping. scale an estimate of the dispersion. Not implemented for models with random effects. test should a p-value be returned? The F test is only appropriate for models with no random effects for which the dispersion has been estimated. The Chisq test is a likelihood ratio test for models with no random effects, otherwise a Wald test. x a model matrix containing columns for all terms in the scope. Useful if add1 is to be called repeatedly. Warning: no checks are done on its validity. ... further arguments passed to add1.glm(). Details The hierarchy is respected when considering terms to be added or dropped: all main effects con- tained in a second-order interaction must remain, and so on. In a scope formula ‘.’ means ‘what is already there’. For drop1, a missing scope is taken to mean that all terms in the model may be considered for dropping. If scope includes player covariates and there are players with missing values over these covariates, then a separate ability will be estimated for these players in all fitted models. Similarly if there are missing values in any contest-level variables in scope, the corresponding contests will be omitted from all models. If formula includes random effects, the same random effects structure will apply to all models. Value An object of class "anova" summarizing the differences in fit between the models. Author(s) <NAME> See Also BTm(), anova.BTm() Examples result <- rep(1, nrow(flatlizards$contests)) BTmodel1 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + (1|..), data = flatlizards, tol = 1e-4, sigma = 2, trace = TRUE) drop1(BTmodel1) add1(BTmodel1, ~ . + head.length[..] + SVL[..], test = "Chisq") BTmodel2 <- update(BTmodel1, formula = ~ . 
+ head.length[..]) drop1(BTmodel2, test = "Chisq") anova.BTm Compare Nested Bradley Terry Models Description Compare nested models inheriting from class "BTm". For models with no random effects, compute analysis of deviance table, otherwise compute Wald tests of additional terms. Usage ## S3 method for class 'BTm' anova(object, ..., dispersion = NULL, test = NULL) Arguments object a fitted object of class inheriting from "BTm". ... additional "BTm" objects. dispersion a value for the dispersion. Not implemented for models with random effects. test optional character string (partially) matching one of "Chisq", "F" or "Cp" to specify that p-values should be returned. The Chisq test is a likelihood ratio test for models with no random effects, otherwise a Wald test. Options "F" and "Cp" are only applicable to models with no random effects, see stat.anova(). Details For models with no random effects, an analysis of deviance table is computed using anova.glm(). Otherwise, Wald tests are computed as detailed here. If a single object is specified, terms are added sequentially and a Wald statistic is computed for the extra parameters. If the full model includes player covariates and there are players with missing values over these covariates, then the NULL model will include a separate ability for these players. If there are missing values in any contest-level variables in the full model, the corresponding contests will be omitted throughout. The random effects structure of the full model is assumed for all sub- models. For a list of objects, consecutive pairs of models are compared by computing a Wald statistic for the extra parameters in the larger of the two models. The Wald statistic is always based on the variance-covariance matrix of the larger of the two models being compared. Value An object of class "anova" inheriting from class "data.frame". Warning The comparison between two or more models will only be valid if they are fitted to the same dataset. This may be a problem if there are missing values and ’s default of na.action = na.omit is used. An error will be returned in this case. The same problem will occur when separate abilities have been estimated for different subsets of players in the models being compared. However no warning is given in this case. Author(s) <NAME> See Also BTm(), add1.BTm() Examples result <- rep(1, nrow(flatlizards$contests)) BTmodel <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + (1|..), data = flatlizards, trace = TRUE) anova(BTmodel) baseball Baseball Data from Agresti (2002) Description Baseball results for games in the 1987 season between 7 teams in the Eastern Division of the Amer- ican League. Usage baseball Format A data frame with 42 observations on the following 4 variables. home.team a factor with levels Baltimore, Boston, Cleveland, Detroit, Milwaukee, New York, Toronto. away.team a factor with levels Baltimore, Boston, Cleveland, Detroit, Milwaukee, New York, Toronto. home.wins a numeric vector. away.wins a numeric vector. Note This dataset is in a simpler format than the one described in Firth (2005). Source Page 438 of Agresti, A. (2002) Categorical Data Analysis (2nd Edn.). New York: Wiley. References <NAME>. (2005) Bradley-Terry models in R. Journal of Statistical Software, 12(1), 1–12. <NAME>. and <NAME>. (2012) Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9), 1–21. See Also BTm() Examples ## This reproduces the analysis in Sec 10.6 of Agresti (2002). 
data(baseball) # start with baseball data as provided by package ## Simple Bradley-Terry model, ignoring home advantage: baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, data = baseball, id = "team") ## Now incorporate the "home advantage" effect baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) ## Compare the fit of these two models: anova(baseballModel1, baseballModel2) BTabilities Estimated Abilities from a Bradley-Terry Model Description Computes the (baseline) ability of each player from a model object of class "BTm". Usage BTabilities(model) Arguments model a model object for which inherits(model, "BTm") is TRUE Details The player abilities are either directly estimated by the model, in which case the appropriate pa- rameter estimates are returned, otherwise the abilities are computed from the terms of the fitted model that involve player covariates only (those indexed by model$id in the model formula). Thus parameters in any other terms are assumed to be zero. If one player has been set as the reference, then predict.BTm() can be used to obtain ability estimates with non-player covariates set to other values, see examples for predict.BTm(). If the abilities are structured according to a linear predictor, and if there are player covariates with missing values, the abilities for the corresponding players are estimated as separate parameters. In this event the resultant matrix has an attribute, named "separate", which identifies those players whose ability was estimated separately. For an example, see flatlizards(). Value A two-column numeric matrix of class c("BTabilities", "matrix"), with columns named "ability" and "se"; has one row for each player; has attributes named "vcov", "modelcall", "factorname" and (sometimes — see below) "separate". The first three attributes are not printed by the method print.BTabilities. Author(s) <NAME> and <NAME> References <NAME>. (2005) Bradley-Terry models in R. Journal of Statistical Software, 12(1), 1–12. <NAME>. and <NAME>. (2012) Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9), 1–21. See Also BTm(), residuals.BTm() Examples ### citations example ## Convert frequencies to success/failure data citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") ## Fit the "standard" Bradley-Terry model citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) BTabilities(citeModel) ### baseball example data(baseball) # start with baseball data as provided by package ## Fit mode with home advantage baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) baseballModel2 <- BTm(cbind(home.wins, away.wins), home.team, away.team, formula = ~ team + at.home, id = "team", data = baseball) ## Estimate abilities for each team, relative to Baltimore, when ## playing away from home: BTabilities(baseballModel2) BTm Bradley-Terry Model and Extensions Description Fits Bradley-Terry models for pair comparison data, including models with structured scores, or- der effect and missing covariate data. 
Fits by either maximum likelihood or maximum penalized likelihood (with Jeffreys-prior penalty) when abilities are modelled exactly, or by penalized quasi- likelihood when abilities are modelled by covariates. Usage BTm( outcome = 1, player1, player2, formula = NULL, id = "..", separate.ability = NULL, refcat = NULL, family = "binomial", data = NULL, weights = NULL, subset = NULL, na.action = NULL, start = NULL, etastart = NULL, mustart = NULL, offset = NULL, br = FALSE, model = TRUE, x = FALSE, contrasts = NULL, ... ) Arguments outcome the binomial response: either a numeric vector, a factor in which the first level denotes failure and all others success, or a two-column matrix with the columns giving the numbers of successes and failures. player1 either an ID factor specifying the first player in each contest, or a data.frame containing such a factor and possibly other contest-level variables that are spe- cific to the first player. If given in a data.frame, the ID factor must have the name given in the id argument. If a factor is specified it will be used to create such a data.frame. player2 an object corresponding to that given in player1 for the second player in each contest, with identical structure – in particular factors must have identical levels. formula a formula with no left-hand-side, specifying the model for player ability. See details for more information. id the name of the ID factor. separate.ability (if formula does not include the ID factor as a separate term) a character vec- tor giving the names of players whose abilities are to be modelled individually rather than using the specification given by formula. refcat (if formula includes the ID factor as a separate term) a character specifying which player to use as a reference, with the first level of the ID factor as the default. Overrides any other contrast specification for the ID factor. family a description of the error distribution and link function to be used in the model. Only the binomial family is implemented, with either"logit", "probit" , or "cauchit" link. (See stats::family() for details of family functions.) data an optional object providing data required by the model. This may be a single data frame of contest-level data or a list of data frames. Names of data frames are ignored unless they refer to data frames specified by player1 and player2. The rows of data frames that do not contain contest-level data must correspond to the levels of a factor used for indexing, i.e. row 1 corresponds to level 1, etc. Note any rownames are ignored. Objects are searched for first in the data object if provided, then in the environment of formula. If data is a list, the data frames are searched in the order given. weights an optional numeric vector of ‘prior weights’. subset an optional logical or numeric vector specifying a subset of observations to be used in the fitting process. na.action a function which indicates what should happen when any contest-level variables contain NAs. The default is the na.action setting of options. See details for the handling of missing values in other variables. start a vector of starting values for the fixed effects. etastart a vector of starting values for the linear predictor. mustart a vector of starting values for the vector of means. offset an optional offset term in the model. A vector of length equal to the number of contests. br logical. 
If TRUE fitting will be by penalized maximum likelihood as in Firth (1992, 1993), using brglm::brglm(), rather than maximum likelihood using glm(), when abilities are modelled exactly or when the abilities are modelled by covariates and the variance of the random effects is estimated as zero. model logical: whether or not to return the model frame. x logical: whether or not to return the design matrix for the fixed effects. contrasts an optional list specifying contrasts for the factors in formula. See the contrasts.arg of model.matrix(). ... other arguments for fitting function (currently either glm(), brglm::brglm(), or glmmPQL()) Details In each comparison to be modelled there is a ’first player’ and a ’second player’ and it is assumed that one player wins while the other loses (no allowance is made for tied comparisons). The countsToBinomial() function is provided to convert a contingency table of wins into a data frame of wins and losses for each pair of players. The formula argument specifies the model for player ability and applies to both the first player and the second player in each contest. If NULL a separate ability is estimated for each player, equivalent to setting formula = reformulate(id). Contest-level variables can be specified in the formula in the usual manner, see formula(). Player covariates should be included as variables indexed by id, see examples. Thus player covariates must be ordered according to the levels of the ID factor. If formula includes player covariates and there are players with missing values over these covari- ates, then a separate ability will be estimated for those players. When player abilities are modelled by covariates, then random player effects should be added to the model. These should be specified in the formula using the vertical bar notation of lme4::lmer(), see examples. When specified, it is assumed that random player effects arise from a N (0,σ 2 ) distribution and model parameters, including σ, are estimated using PQL (Breslow and Clayton, 1993) as imple- mented in the glmmPQL() function. Value An object of class c("BTm", "x"), where "x" is the class of object returned by the model fitting function (e.g. glm). Components are as for objects of class "x", with additionally id the id argument. separate.ability the separate.ability argument. refcat the refcat argument. player1 a data frame for the first player containing the ID factor and any player-specific contest-level variables. player2 a data frame corresponding to that for player1. assign a numeric vector indicating which coefficients correspond to which terms in the model. term.labels labels for the model terms. random for models with random effects, the design matrix for the random effects. Author(s) <NAME>, <NAME> References <NAME>. (2002) Categorical Data Analysis (2nd ed). New York: Wiley. <NAME>. (1992) Bias reduction, the Jeffreys prior and GLIM. In Advances in GLIM and Statistical Modelling, Eds. Fahrmeir, L., <NAME>., <NAME>. and <NAME>., pp91–100. New York: Springer. <NAME>. (1993) Bias reduction of maximum likelihood estimates. Biometrika 80, 27–38. <NAME>. (2005) Bradley-Terry models in R. Journal of Statistical Software, 12(1), 1–12. <NAME>. (1994) Citation patterns in the journals of statistics and probability. Statistical Science 9, 94–108. <NAME>. and <NAME>. (2012) Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9), 1–21. 
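As a small, hedged illustration of the points above (the players and win counts are hypothetical, not taken from the package), the following sketch converts a toy contingency table of wins with countsToBinomial() and fits the default model, in which formula = NULL gives each player its own ability:

## Toy data: three hypothetical players A, B and C
toy.wins <- matrix(c(0, 6, 3,
                     4, 0, 5,
                     7, 2, 0), nrow = 3, byrow = TRUE,
                   dimnames = list(winner = c("A", "B", "C"),
                                   loser = c("A", "B", "C")))
toy.sf <- countsToBinomial(as.table(toy.wins))
## formula = NULL estimates a separate ability per player,
## equivalent to formula = reformulate(id) with the default id = ".."
toyModel <- BTm(cbind(win1, win2), player1, player2, data = toy.sf)
BTabilities(toyModel)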
See Also countsToBinomial(), glmmPQL(), BTabilities(), residuals.BTm(), add1.BTm(), anova.BTm() Examples ######################################################## ## Statistics journal citation data from Stigler (1994) ## -- see also Agresti (2002, p448) ######################################################## ## Convert frequencies to success/failure data citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") ## First fit the "standard" Bradley-Terry model citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) ## Now the same thing with a different "reference" journal citeModel2 <- update(citeModel, refcat = "JASA") BTabilities(citeModel2) ################################################################## ## Now an example with an order effect -- see Agresti (2002) p438 ################################################################## data(baseball) # start with baseball data as provided by package ## Simple Bradley-Terry model, ignoring home advantage: baseballModel1 <- BTm(cbind(home.wins, away.wins), home.team, away.team, data = baseball, id = "team") ## Now incorporate the "home advantage" effect baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1) baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0) baseballModel2 <- update(baseballModel1, formula = ~ team + at.home) ## Compare the fit of these two models: anova(baseballModel1, baseballModel2) ## ## For a more elaborate example with both player-level and contest-level ## predictor variables, see help(chameleons). ## CEMS Dittrich, Hatzinger and Katzenbeisser (1998, 2001) Data on Manage- ment School Preference in Europe Description Community of European management schools (CEMS) data as used in the paper by Dittrich et al. (1998, 2001), re-formatted for use with BTm() Usage CEMS Format A list containing three data frames, CEMS$preferences, CEMS$students and CEMS$schools. 
The CEMS$preferences data frame has 303 * 15 = 4545 observations (15 possible comparisons, for each of 303 students) on the following 8 variables: student a factor with levels 1:303 school1 a factor with levels c("Barcelona", "London", "Milano", "Paris", "St.Gallen", "Stockholm"); the first management school in a comparison school2 a factor with the same levels as school1; the second management school in a comparison win1 integer (value 0 or 1) indicating whether school1 was preferred to school2 win2 integer (value 0 or 1) indicating whether school2 was preferred to school1 tied integer (value 0 or 1) indicating whether no preference was expressed win1.adj numeric, equal to win1 + tied/2 win2.adj numeric, equal to win2 + tied/2
The CEMS$students data frame has 303 observations (one for each student) on the following 8 variables: STUD a factor with levels c("other", "commerce"), the student’s main discipline of study ENG a factor with levels c("good", "poor"), indicating the student’s knowledge of English FRA a factor with levels c("good", "poor"), indicating the student’s knowledge of French SPA a factor with levels c("good", "poor"), indicating the student’s knowledge of Spanish ITA a factor with levels c("good", "poor"), indicating the student’s knowledge of Italian WOR a factor with levels c("no", "yes"), whether the student was in full-time employment while studying DEG a factor with levels c("no", "yes"), whether the student intended to take an international degree SEX a factor with levels c("female", "male")
The CEMS$schools data frame has 6 observations (one for each management school) on the following 7 variables: Barcelona numeric (value 0 or 1) London numeric (value 0 or 1) Milano numeric (value 0 or 1) Paris numeric (value 0 or 1) St.Gallen numeric (value 0 or 1) Stockholm numeric (value 0 or 1) LAT numeric (value 0 or 1) indicating a ’Latin’ city
Details The variables win1.adj and win2.adj are provided in order to allow a simple way of handling ties (in which a tie counts as half a win and half a loss), which is slightly different numerically from the Davidson (1970) method that is used by Dittrich et al. (1998): see the examples.
Author(s) <NAME>
Source Royal Statistical Society datasets website, at https://rss.onlinelibrary.wiley.com/hub/journal/14679876/series-c-datasets/pre_2016.
References Davidson, <NAME>. (1970) Extending the Bradley-Terry model to accommodate ties in paired comparison experiments. Journal of the American Statistical Association 65, 317–328. Dittrich, R., <NAME>. and <NAME>. (1998) Modelling the effect of subject-specific covariates in paired comparison studies with an application to university rankings. Applied Statistics 47, 511–525. <NAME>., <NAME>. and <NAME>. (2001) Corrigendum: Modelling the effect of subject-specific covariates in paired comparison studies with an application to university rankings. Applied Statistics 50, 247–249. <NAME>. and <NAME>. (2012) Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9), 1–21.
Examples ## ## Fit the standard Bradley-Terry model, using the simple 'add 0.5' ## method to handle ties: ## table3.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~..
, refcat = "Stockholm", data = CEMS) ## The results in Table 3 of Dittrich et al (2001) are reproduced ## approximately by a simple re-scaling of the estimates: table3 <- summary(table3.model)$coef[, 1:2]/1.75 print(table3) ## ## Now fit the 'final model' from Table 6 of Dittrich et al.: ## table6.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * Paris[..] + WOR[student] * Milano[..] + WOR[student] * Barcelona[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) ## ## Again re-scale to reproduce approximately Table 6 of Dittrich et ## al. (2001): ## table6 <- summary(table6.model)$coef[, 1:2]/1.75 print(table6) ## ## Not run: ## Now the slightly simplified model of Table 8 of Dittrich et al. (2001): ## table8.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * LAT[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) table8 <- summary(table8.model)$coef[, 1:2]/1.75 ## ## Notice some larger than expected discrepancies here (the coefficients ## named "..Barcelona", "..Milano" and "..Paris") from the results in ## Dittrich et al. (2001). Apparently a mistake was made in Table 8 of ## the published Corrigendum note (<NAME> personal communication, ## February 2010). ## print(table8) ## End(Not run) chameleons Male Cape Dwarf Chameleons: Measured Traits and Contest Out- comes Description Data as used in the study by Stuart-Fox et al. (2006). Physical measurements made on 35 male Cape dwarf chameleons, and the results of 106 inter-male contests. Usage chameleons Format A list containing three data frames: chameleons$winner, chameleons$loser and chameleons$predictors. The chameleons$winner and chameleons$loser data frames each have 106 observations (one per contest) on the following 4 variables: ID a factor with 35 levels C01, C02, ... , C43, the identity of the winning (or losing) male in each contest prev.wins.1 integer (values 0 or 1), did the winner/loser of this contest win in an immediately previous contest? prev.wins.2 integer (values 0, 1 or 2), how many of his (maximum) previous 2 contests did each male win? prev.wins.all integer, how many previous contests has each male won? The chameleons$predictors data frame has 35 observations, one for each male involved in the contests, on the following 7 variables: ch.res numeric, residuals of casque height regression on SVL, i.e. 
relative height of the bony part on the top of the chameleons’ heads jl.res numeric, residuals of jaw length regression on SVL tl.res numeric, residuals of tail length regression on SVL mass.res numeric, residuals of body mass regression on SVL (body condition) SVL numeric, snout-vent length (body size) prop.main numeric, proportion (arcsin transformed) of area of the flank occupied by the main pink patch on the flank prop.patch numeric, proportion (arcsin transformed) of area of the flank occupied by the entire flank patch Details The published paper mentions 107 contests, but only 106 contests are included here. Contest num- ber 16 was deleted from the data used to fit the models, because it involved a male whose predictor- variables were incomplete (and it was the only contest involving that lizard, so it is uninformative). Author(s) <NAME> Source The data were obtained by Dr <NAME>, https://devistuartfox.com/, and they are re- produced here with her kind permission. These are the same data that were used in Stuart-Fox, <NAME>., <NAME>., <NAME>. and <NAME>. (2006) Multiple signals in chameleon contests: designing and analysing animal contests as a tournament. Animal Behaviour 71, 1263– 1271. Examples ## ## Reproduce Table 3 from page 1268 of the above paper: ## summary(chameleon.model <- BTm(player1 = winner, player2 = loser, formula = ~ prev.wins.2 + ch.res[ID] + prop.main[ID] + (1|ID), id = "ID", data = chameleons)) head(BTabilities(chameleon.model)) ## ## Note that, although a per-chameleon random effect is specified as in the ## above [the term "+ (1|ID)"], the estimated variance for that random ## effect turns out to be zero in this case. The "prior experience" ## effect ["+ prev.wins.2"] in this analysis has explained most of the ## variation, leaving little for the ID-specific predictors to do. ## Despite that, two of the ID-specific predictors do emerge as ## significant. ## ## Test whether any of the other ID-specific predictors has an effect: ## add1(chameleon.model, ~ . + jl.res[ID] + tl.res[ID] + mass.res[ID] + SVL[ID] + prop.patch[ID]) citations Statistics Journal Citation Data from Stigler (1994) Description Extracted from a larger table in Stigler (1994). Inter-journal citation counts for four journals, “Biometrika”, “Comm Statist.”, “JASA” and “JRSS-B”, as used on p448 of Agresti (2002). Usage citations Format A 4 by 4 contingency table of citations, cross-classified by the factors cited and citing each with levels Biometrika, Comm Statist, JASA, and JRSS-B. Details In the context of paired comparisons, the ‘winner’ is the cited journal and the ‘loser’ is the one doing the citing. Source Agrest<NAME>. (2002) Categorical Data Analysis (2nd ed). New York: Wiley. References <NAME>. (2005) Bradley-Terry models in R. Journal of Statistical Software 12(1), 1–12. <NAME>. and <NAME>. (2012) Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9), 1–21. <NAME>. (1994) Citation patterns in the journals of statistics and probability. Statistical Science 9, 94–108. 
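The following short sketch (added here for illustration; it is not part of the package examples) makes the direction of comparison concrete: after conversion with countsToBinomial(), win1 is the number of "wins" for player1, i.e. the number of times it was the cited journal in that pairing.

## Wins are citations received: inspect one pair of journals
cit.sf <- countsToBinomial(citations)
subset(cit.sf, player1 == "Biometrika" & player2 == "JASA")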
See Also BTm() Examples ## Data as a square table, as in Agresti p448 citations ## ## Convert frequencies to success/failure data: ## citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") ## Standard Bradley-Terry model fitted to these data citeModel <- BTm(cbind(win1, win2), journal1, journal2, data = citations.sf) countsToBinomial Convert Contingency Table of Wins to Binomial Counts Description Convert a contingency table of wins to a four-column data frame containing the number of wins and losses for each pair of players. Usage countsToBinomial(xtab) Arguments xtab a contingency table of wins cross-classified by “winner” and “loser” Value A data frame with four columns player1 the first player in the contest. player2 the second player in the contest. win1 the number of times player1 won. win2 the number of times player2 won. Author(s) <NAME> See Also BTm() Examples ######################################################## ## Statistics journal citation data from Stigler (1994) ## -- see also Agresti (2002, p448) ######################################################## citations ## Convert frequencies to success/failure data citations.sf <- countsToBinomial(citations) names(citations.sf)[1:2] <- c("journal1", "journal2") citations.sf flatlizards Augrabies Male Flat Lizards: Contest Results and Predictor Variables Description Data collected at Augrabies Falls National Park (South Africa) in September-October 2002, on the contest performance and background attributes of 77 male flat lizards (Platysaurus broadleyi). The results of exactly 100 contests were recorded, along with various measurements made on each lizard. Full details of the study are in Whiting et al. (2006). Usage flatlizards Format This dataset is a list containing two data frames: flatlizards$contests and flatlizards$predictors. The flatlizards$contests data frame has 100 observations on the following 2 variables: winner a factor with 77 levels lizard003 ... lizard189. loser a factor with the same 77 levels lizard003 ... lizard189. The flatlizards$predictors data frame has 77 observations (one for each of the 77 lizards) on the following 18 variables: id factor with 77 levels (3 5 6 ... 189), the lizard identifiers. throat.PC1 numeric, the first principal component of the throat spectrum. throat.PC2 numeric, the second principal component of the throat spectrum. throat.PC3 numeric, the third principal component of the throat spectrum. frontleg.PC1 numeric, the first principal component of the front-leg spectrum. frontleg.PC2 numeric, the second principal component of the front-leg spectrum. frontleg.PC3 numeric, the third principal component of the front-leg spectrum. badge.PC1 numeric, the first principal component of the ventral colour patch spectrum. badge.PC2 numeric, the second principal component of the ventral colour patch spectrum. badge.PC3 numeric, the third principal component of the ventral colour patch spectrum. badge.size numeric, a measure of the area of the ventral colour patch. testosterone numeric, a measure of blood testosterone concentration. SVL numeric, the snout-vent length of the lizard. head.length numeric, head length. head.width numeric, head width. head.height numeric, head height. condition numeric, a measure of body condition. repro.tactic a factor indicating reproductive tactic; levels are resident and floater. 
Details There were no duplicate contests (no pair of lizards was seen fighting more than once), and there were no tied contests (the result of each contest was clear). The variables head.length, head.width, head.height and condition were all computed as residuals (of directly measured head length, head width, head height and body mass index, re- spectively) from simple least-squares regressions on SVL. Values of some predictors are missing (NA) for some lizards, ‘at random’, because of instrument problems unconnected with the value of the measurement being made. Source The data were collected by Dr <NAME>, http://whitinglab.com/people/martin-whiting/, and they appear here with his kind permission. References <NAME>. and <NAME>. (2012) Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9), 1–21. <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2006). Ultraviolet signals ultra-aggression in a lizard. Animal Behaviour 72, 353–363. See Also BTm() Examples ## ## Fit the standard Bradley-Terry model, using the bias-reduced ## maximum likelihood method: ## result <- rep(1, nrow(flatlizards$contests)) BTmodel <- BTm(result, winner, loser, br = TRUE, data = flatlizards$contests) summary(BTmodel) ## ## That's fairly useless, though, because of the rather small ## amount of data on each lizard. And really the scientific ## interest is not in the abilities of these particular 77 ## lizards, but in the relationship between ability and the ## measured predictor variables. ## ## So next fit (by maximum likelihood) a "structured" B-T model in ## which abilities are determined by a linear predictor. ## ## This reproduces results reported in Table 1 of Whiting et al. (2006): ## Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..], data = flatlizards) summary(Whiting.model) ## ## Equivalently, fit the same model using glmmPQL: ## Whiting.model <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), sigma = 0, sigma.fixed = TRUE, data = flatlizards) summary(Whiting.model) ## ## But that analysis assumes that the linear predictor formula for ## abilities is _perfect_, i.e., that there is no error in the linear ## predictor. This will always be unrealistic. ## ## So now fit the same predictor but with a normally distributed error ## term --- a generalized linear mixed model --- by using the BTm ## function instead of glm. ## Whiting.model2 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] + (1|..), data = flatlizards, trace = TRUE) summary(Whiting.model2) ## ## The estimated coefficients (of throat.PC1, throat.PC3, ## head.length and SVL are not changed substantially by ## the recognition of an error term in the model; but the estimated ## standard errors are larger, as expected. The main conclusions from ## Whiting et al. (2006) are unaffected. ## ## With the normally distributed random error included, it is perhaps ## at least as natural to use probit rather than logit as the link ## function: ## require(stats) Whiting.model3 <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] 
+ (1|..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) summary(Whiting.model3) BTabilities(Whiting.model3) ## Note the "separate" attribute here, identifying two lizards with ## missing values of at least one predictor variable ## ## Modulo the usual scale change between logit and probit, the results ## are (as expected) very similar to Whiting.model2. football English Premier League Football Results 2008/9 to 2012/13 Description The win/lose/draw results for five seasons of the English Premier League football results, from 2008/9 to 2012/13 Usage football Format A data frame with 1881 observations on the following 4 variables. season a factor with levels 2008-9, 2009-10, 2010-11, 2011-12, 2012-13 home a factor specifying the home team, with 29 levels Ars (Arsenal), ... , Wol (Wolverhampton) away a factor specifying the away team, with the same levels as home. result a numeric vector giving the result for the home team: 1 for a win, 0 for a draw, -1 for a loss. Details In each season, there are 20 teams, each of which plays one home game and one away game against all the other teams in the league. The results in 380 games per season. Source These data were downloaded from http://soccernet.espn.go.com in 2013. The site has since moved and the new site does not appear to have an equivalent source. References Davidson, <NAME>. (1970). On extending the Bradley-Terry model to accommodate ties in paired comparison experiments. Journal of the American Statistical Association, 65, 317–328. See Also GenDavidson() Examples ### example requires gnm if (require(gnm)) { ### convert to trinomial counts football.tri <- expandCategorical(football, "result", idvar = "match") head(football.tri) ### add variable to indicate whether team playing at home football.tri$at.home <- !logical(nrow(football.tri)) ### fit Davidson model for ties ### - subset to first and last season for illustration Davidson <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri, subset = season %in% c("2008-9", "2012-13")) ### see ?GenDavidson for further analysis } GenDavidson Specify a Generalised Davidson Term in a gnm Model Formula Description GenDavidson is a function of class "nonlin" to specify a generalised Davidson term in the formula argument to gnm::gnm(), providing a model for paired comparison data where ties are a possible outcome. Usage GenDavidson( win, tie, loss, player1, player2, home.adv = NULL, tie.max = ~1, tie.mode = NULL, tie.scale = NULL, at.home1 = NULL, at.home2 = NULL ) Arguments win a logical vector: TRUE if player1 wins, FALSE otherwise. tie a logical vector: TRUE if the outcome is a tie, FALSE otherwise. loss a logical vector: TRUE if player1 loses, FALSE otherwise. player1 an ID factor specifying the first player in each contest, with the same set of levels as player2. player2 an ID factor specifying the second player in each contest, with the same set of levels as player2. home.adv a formula for the parameter corresponding to the home advantage effect. If NULL, no home advantage effect is estimated. tie.max a formula for the parameter corresponding to the maximum tie probability. tie.mode a formula for the parameter corresponding to the location of maximum tie prob- ability, in terms of the probability that player1 wins, given the outcome is not a draw. 
tie.scale a formula for the parameter corresponding to the scale of dependence of the tie probability on the probability that player1 wins, given the outcome is not a draw. at.home1 a logical vector: TRUE if player1 is at home, FALSE otherwise. at.home2 a logical vector: TRUE if player2 is at home, FALSE otherwise.
Details GenDavidson specifies a generalisation of the Davidson model (1970) for paired comparisons where a tie is a possible outcome. It is designed for modelling trinomial counts corresponding to the win/draw/loss outcome for each contest, which are assumed Poisson conditional on the total count for each match. Since this total must be one, the expected counts are equivalently the probabilities for each possible outcome, which are modelled on the log scale:

log(p(i beats j)_k) = θ_ijk + log(µα_i)
log(p(draw)_k) = θ_ijk + δ + c + σ(π log(µα_i) + (1 − π) log(α_j)) + (1 − σ) log(µα_i + α_j)
log(p(j beats i)_k) = θ_ijk + log(α_j)

Here θ_ijk is a structural parameter to fix the trinomial totals; µ is the home advantage parameter; α_i and α_j are the abilities of players i and j respectively; c is a function of the parameters such that expit(δ) is the maximum probability of a tie; σ scales the dependence of the probability of a tie on the relative abilities; and π allows for asymmetry in this dependence.
For parameters that must be positive (α_i, σ, µ), the log is estimated, while for parameters that must be between zero and one (δ, π), the logit is estimated, as illustrated in the example.
Value A list with the anticipated components of a "nonlin" function: predictors the formulae for the different parameters and the ID factors for player 1 and player 2. variables the outcome variables and the “at home” variables, if specified. common an index to specify that common effects are to be estimated for the players. term a function to create a deparsed mathematical expression of the term, given labels for the predictors. start a function to generate starting values for the parameters.
Author(s) <NAME>
References Davidson, R. R. (1970). On extending the Bradley-Terry model to accommodate ties in paired comparison experiments. Journal of the American Statistical Association, 65, 317–328.
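As a point of reference for the Details above, the generalisation reduces to the original Davidson (1970) model when σ = 1, π = 1/2 and there is no home advantage: the tie score is then proportional to the geometric mean of the two win scores. A toy sketch (the ability and tie-parameter values below are made up, and this is an illustration rather than package code):

## Davidson (1970) special case: outcome probabilities for one contest
a1 <- 2; a2 <- 1          # hypothetical abilities of player1 and player2
nu <- 0.8                 # hypothetical tie parameter
scores <- c(win1 = a1, tie = nu * sqrt(a1 * a2), win2 = a2)
round(scores / sum(scores), 3)   # P(player1 wins), P(tie), P(player2 wins)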
See Also football(), plotProportions() Examples ### example requires gnm if (require(gnm)) { ### convert to trinomial counts football.tri <- expandCategorical(football, "result", idvar = "match") head(football.tri) ### add variable to indicate whether team playing at home football.tri$at.home <- !logical(nrow(football.tri)) ### fit shifted & scaled Davidson model ### - subset to first and last season for illustration shifScalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, tie.scale = ~1, tie.mode = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri, subset = season %in% c("2008-9", "2012-13")) ### look at coefs coef <- coef(shifScalDav) ## home advantage exp(coef["home.adv"]) ## max p(tie) plogis(coef["tie.max"]) ## mode p(tie) plogis(coef["tie.mode"]) ## scale relative to Davidson of dependence of p(tie) on p(win|not a draw) exp(coef["tie.scale"]) ### check model fit alpha <- names(coef[-(1:4)]) plotProportions(result == 1, result == 0, result == -1, home:season, away:season, abilities = coef[alpha], home.adv = coef["home.adv"], tie.max = coef["tie.max"], tie.scale = coef["tie.scale"], tie.mode = coef["tie.mode"], at.home1 = at.home, at.home2 = !at.home, data = football.tri, subset = count == 1) } ### analyse all five seasons ### - takes a little while to run, particularly likelihood ratio tests ## Not run: ### fit Davidson model Dav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ### fit scaled Davidson model scalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, tie.scale = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ### fit shifted & scaled Davidson model shifScalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, tie.scale = ~1, tie.mode = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ### compare models anova(Dav, scalDav, shifScalDav, test = "Chisq") ### diagnostic plots main <- c("Davidson", "Scaled Davidson", "Shifted & Scaled Davidson") mod <- list(Dav, scalDav, shifScalDav) names(mod) <- main ## use football.tri data so that at.home can be found, ## but restrict to actual match results par(mfrow = c(2,2)) for (i in 1:3) { coef <- parameters(mod[[i]]) plotProportions(result == 1, result == 0, result == -1, home:season, away:season, abilities = coef[alpha], home.adv = coef["home.adv"], tie.max = coef["tie.max"], tie.scale = coef["tie.scale"], tie.mode = coef["tie.mode"], at.home1 = at.home, at.home2 = !at.home, main = main[i], data = football.tri, subset = count == 1) } ## End(Not run) glmmPQL PQL Estimation of Generalized Linear Mixed Models Description Fits GLMMs with simple random effects structure via Breslow and Clayton’s PQL algorithm. The GLMM is assumed to be of the form g(µ) = Xβ + Ze where g is the link function, µ is the vector of means and X, Z are design matrices for the fixed effects β and random effects e respectively. Furthermore the random effects are assumed to be i.i.d. N (0, σ 2 ). 
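As a hedged sketch of this setup (the data, grouping factor and matrix Z below are hypothetical, not from the package examples), the random-effects design matrix can be built by hand as a cluster indicator matrix, so that Ze contributes one i.i.d. N(0, σ^2) intercept per cluster:

## Hypothetical clustered binomial data
set.seed(1)
cluster <- gl(4, 5)                          # 4 clusters of 5 observations each
y <- rbinom(20, size = 10, prob = 0.3)       # made-up successes out of 10 trials
Z <- model.matrix(~ 0 + cluster)             # 20 x 4 indicator matrix for Ze
fit <- glmmPQL(cbind(y, 10 - y) ~ 1, random = Z,
               family = "binomial", data = data.frame(y = y))
summary(fit)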
Usage glmmPQL( fixed, random = NULL, family = "binomial", data = NULL, subset = NULL, weights = NULL, offset = NULL, na.action = NULL, start = NULL, etastart = NULL, mustart = NULL, control = glmmPQL.control(...), sigma = 0.1, sigma.fixed = FALSE, model = TRUE, x = FALSE, contrasts = NULL, ... ) Arguments fixed a formula for the fixed effects. random a design matrix for the random effects, with number of rows equal to the length of variables in formula. family a description of the error distribution and link function to be used in the model. This can be a character string naming a family function, a family function or the result of a call to a family function. (See family() for details of family functions.) data an optional data frame, list or environment (or object coercible by as.data.frame() to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which glmmPQL called. subset an optional logical or numeric vector specifying a subset of observations to be used in the fitting process. weights an optional vector of ‘prior weights’ to be used in the fitting process. offset an optional numeric vector to be added to the linear predictor during fitting. One or more offset terms can be included in the formula instead or as well, and if more than one is specified their sum is used. See model.offset(). na.action a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options(), and is na.fail() if that is unset. start starting values for the parameters in the linear predictor. etastart starting values for the linear predictor. mustart starting values for the vector of means. control a list of parameters for controlling the fitting process. See the glmmPQL.control() for details. sigma a starting value for the standard deviation of the random effects. sigma.fixed logical: whether or not the standard deviation of the random effects should be fixed at its starting value. model logical: whether or not the model frame should be returned. x logical: whether or not the design matrix for the fixed effects should be returned. contrasts an optional list. See the contrasts.arg argument of model.matrix(). ... arguments to be passed to glmmPQL.control(). Value An object of class "BTglmmPQL" which inherits from "glm" and "lm": coefficients a named vector of coefficients, with a "random" attribute giving the estimated random effects. residuals the working residuals from the final iteration of the IWLS loop. random the design matrix for the random effects. fitted.values the fitted mean values, obtained by transforming the linear predictors by the inverse of the link function. rank the numeric rank of the fitted linear model. family the family object used. linear.predictors the linear fit on link scale. deviance up to a constant, minus twice the maximized log-likelihood. aic a version of Akaike’s An Information Criterion, minus twice the maximized log-likelihood plus twice the number of parameters, computed by the aic com- ponent of the family. null.deviance the deviance for the null model, comparable with deviance. iter the numer of iterations of the PQL algorithm. weights the working weights, that is the weights in the final iteration of the IWLS loop. prior.weights the weights initially supplied, a vector of 1’s if none were. df.residual the residual degrees of freedom. df.null the residual degrees of freedom for the null model. 
y if requested (the default) the y vector used. (It is a vector even for a binomial model.) x if requested, the model matrix. model if requested (the default), the model frame. converged logical. Was the PQL algorithm judged to have converged? call the matched call. formula the formula supplied. terms the terms object used. data the data argument used. offset the offset vector used. control the value of the control argument used. contrasts (where relevant) the contrasts used. xlevels (where relevant) a record of the levels of the factors used in fitting. na.action (where relevant) information returned by model.frame on the special handling of NAs. sigma the estimated standard deviation of the random effects sigma.fixed logical: whether or not sigma was fixed varFix the variance-covariance matrix of the fixed effects varSigma the variance of sigma Author(s) <NAME> References <NAME>. and <NAME>. (1993) Approximate inference in Generalized Linear Mixed Models. Journal of the American Statistical Association 88(421), 9–25. <NAME>. (1977) Maximum likelihood approaches to variance component estimation and to related problems. Journal of the American Statistical Association 72(358), 320–338. See Also predict.BTglmmPQL(),glmmPQL.control(),BTm() Examples ############################################### ## Crowder seeds example from Breslow & Clayton ############################################### summary(glmmPQL(cbind(r, n - r) ~ seed + extract, random = diag(nrow(seeds)), family = "binomial", data = seeds)) summary(glmmPQL(cbind(r, n - r) ~ seed*extract, random = diag(nrow(seeds)), family = "binomial", data = seeds)) glmmPQL.control Control Aspects of the glmmPQL Algorithm Description Set control variables for the glmmPQL algorithm. Usage glmmPQL.control(maxiter = 50, IWLSiter = 10, tol = 1e-06, trace = FALSE) Arguments maxiter the maximum number of outer iterations. IWLSiter the maximum number of iterated weighted least squares iterations used to esti- mate the fixed effects, given the standard deviation of the random effects. tol the tolerance used to determine convergence in the IWLS iterations and over all (see details). trace logical: whether or not to print the score for the random effects variance at the end of each iteration. Details This function provides an interface to control the PQL algorithm used by BTm() for fitting Bradley Terry models with random effects. The algorithm iterates between a series of iterated weighted least squares iterations to update the fixed effects and a single Fisher scoring iteration to update the standard deviation of the random effects. Convergence of both the inner and outer iterations are judged by comparing the squared components of the relevant score vector with corresponding elements of the diagonal of the Fisher information matrix. If, for all components of the relevant score vector, the ratio is less than tolerance^2, or the corresponding diagonal element of the Fisher information matrix is less than 1e-20, iterations cease. Value A list with the arguments as components. Author(s) <NAME> References <NAME>. and <NAME>. (1993), Approximate inference in Generalized Linear Mixed Models. Journal of the American Statistical Association 88(421), 9–25. See Also glmmPQL(), BTm() Examples ## Variation on example(flatlizards) result <- rep(1, nrow(flatlizards$contests)) ## BTm passes arguments on to glmmPQL.control() args(BTm) BTmodel <- BTm(result, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] 
+ (1|..), data = flatlizards, tol = 1e-3, trace = TRUE) summary(BTmodel) icehockey College Hockey Men’s Division I 2009-10 results Description Game results from American College Hockey Men’s Division I composite schedule 2009-2010. Usage icehockey Format A data frame with 1083 observations on the following 6 variables. date a numeric vector visitor a factor with 58 levels Alaska Anchorage ... Yale v_goals a numeric vector opponent a factor with 58 levels Alaska Anchorage ... Yale o_goals a numeric vector conference a factor with levels AH, CC, CH, EC, HE, NC, WC result a numeric vector: 1 if visitor won, 0.5 for a draw and 0 if visitor lost home.ice a logical vector: 1 if opponent on home ice, 0 if game on neutral ground Details The Division I ice hockey teams are arranged in six conferences: Atlantic Hockey, Central Colle- giate Hockey Association, College Hockey America, ECAC Hockey, Hockey East and the Western Collegiate Hockey Association, all part of the National Collegiate Athletic Association. The com- posite schedule includes within conference games and between conference games. The data set here contains only games from the regular season, the results of which determine the teams that play in the NCAA national tournament. There are six automatic bids that go to the conference tournament champions, the remaining 10 teams are selected based upon ranking under the NCAA’s system of pairwise comparisons (https://www.collegehockeynews.com/info/?d= pwcrpi). Some have argued that Bradley-Terry rankings would be fairer (https://www.collegehockeynews. com/info/?d=krach). Source http://www.collegehockeystats.net/0910/schedules/men. References <NAME>. Build your own rankings: http://www.elynah.com/tbrw/2010/rankings.diy. shtml. College Hockey News https://www.collegehockeynews.com/. Selections for 2010 NCAA tournament: https://www.espn.com/college-sports/news/story? id=5012918. Examples ### Fit the standard Bradley-Terry model standardBT <- BTm(outcome = result, player1 = visitor, player2 = opponent, id = "team", data = icehockey) ## Bradley-Terry abilities abilities <- exp(BTabilities(standardBT)[,1]) ## Compute round-robin winning probability and KRACH ratings ## (scaled abilities such that KRACH = 100 for a team with ## round-robin winning probability of 0.5) rankings <- function(abilities){ probwin <- abilities/outer(abilities, abilities, "+") diag(probwin) <- 0 nteams <- ncol(probwin) RRWP <- rowSums(probwin)/(nteams - 1) low <- quantile(abilities, 0.45) high <- quantile(abilities, 0.55) middling <- uniroot(function(x) {sum(x/(x+abilities)) - 0.5*nteams}, lower = low, upper = high)$root KRACH <- abilities/middling*100 cbind(KRACH, RRWP) } ranks <- rankings(abilities) ## matches those produced by Joe Schlobotnik's Build Your Own Rankings head(signif(ranks, 4)[order(ranks[,1], decreasing = TRUE),]) ## At one point the NCAA rankings gave more credit for wins on ## neutral/opponent's ground. Home ice effects are easily ## incorporated into the Bradley-Terry model, comparing teams ## on a "level playing field" levelBT <- BTm(result, data.frame(team = visitor, home.ice = 0), data.frame(team = opponent, home.ice = home.ice), ~ team + home.ice, id = "team", data = icehockey) abilities <- exp(BTabilities(levelBT)[,1]) ranks2 <- rankings(abilities) ## Look at movement between the two rankings change <- factor(rank(ranks2[,1]) - rank(ranks[,1])) barplot(xtabs(~change), xlab = "Change in Rank", ylab = "No. 
Teams") ## Take out regional winners and look at top 10 regional <- c("RIT", "Alabama-Huntsville", "Michigan", "Cornell", "Boston College", "North Dakota") ranks <- ranks[!rownames(ranks) %in% regional] ranks2 <- ranks2[!rownames(ranks2) %in% regional] ## compare the 10 at-large selections under both rankings ## with those selected under NCAA rankings cbind(names(sort(ranks, decr = TRUE)[1:10]), names(sort(ranks2, decr = TRUE)[1:10]), c("Miami", "Denver", "Wisconsin", "St. Cloud State", "Bemidji State", "Yale", "Northern Michigan", "New Hampshire", "Alsaka", "Vermont")) plotProportions Plot Proportions of Tied Matches and Non-tied Matches Won Description Plot proportions of tied matches and non-tied matches won by the first player, within matches binned by the relative player ability, as expressed by the probability that the first player wins, given the match is not a tie. Add fitted lines for each set of matches, as given by the generalized Davidson model. Usage plotProportions( win, tie = NULL, loss, player1, player2, abilities = NULL, home.adv = NULL, tie.max = NULL, tie.scale = NULL, tie.mode = NULL, at.home1 = NULL, at.home2 = NULL, data = NULL, subset = NULL, bin.size = 20, xlab = "P(player1 wins | not a tie)", ylab = "Proportion", legend = NULL, col = 1:2, ... ) Arguments win a logical vector: TRUE if player1 wins, FALSE otherwise. tie a logical vector: TRUE if the outcome is a tie, FALSE otherwise (NULL if there are no ties). loss a logical vector: TRUE if player1 loses, FALSE otherwise. player1 an ID factor specifying the first player in each contest, with the same set of levels as player2. player2 an ID factor specifying the second player in each contest, with the same set of levels as player2. abilities the fitted abilities from a generalized Davidson model (or a Bradley-Terry model). home.adv if applicable, the fitted home advantage parameter from a generalized Davidson model (or a Bradley-Terry model). tie.max the fitted parameter from a generalized Davidson model corresponding to the maximum tie probability. tie.scale if applicable, the fitted parameter from a generalized Davidson model corre- sponding to the scale of dependence of the tie probability on the probability that player1 wins, given the outcome is not a draw. tie.mode if applicable, the fitted parameter from a generalized Davidson model corre- sponding to the location of maximum tie probability, in terms of the probability that player1 wins, given the outcome is not a draw. at.home1 a logical vector: TRUE if player1 is at home, FALSE otherwise. at.home2 a logical vector: TRUE if player2 is at home, FALSE otherwise. data an optional data frame providing variables required by the model, with one ob- servation per match. subset an optional logical or numeric vector specifying a subset of observations to in- clude in the plot. bin.size the approximate number of matches in each bin. xlab the label to use for the x-axis. ylab the label to use for the y-axis. legend text to use for the legend. col a vector specifying colours to use for the proportion of non-tied matches won and the proportion of tied matches. ... further arguments passed to plot. Details If home.adv is specified, the results are re-ordered if necessary so that the home player comes first; any matches played on neutral ground are omitted. 
First the probability that the first player wins given that the match is not a tie is computed: expit(home.adv + abilities[player1] − abilities[player2]) where home.adv and abilities are parameters from a generalized Davidson model that have been estimated on the log scale. The matches are then binned according to this probability, grouping together matches with similar relative ability between the first player and the second player. Within each bin, the proportion of tied matches is computed and these proportions are plotted against the mid-point of the bin. Then the bins are re-computed omitting the tied games and the proportion of non-tied matches won by the first player is found and plotted against the new mid-point. Finally curves are added for the probability of a tie and the conditional probability of win given the match is not a tie, under a generalized Davidson model with parameters as specified by tie.max, tie.scale and tie.mode. The function can also be used to plot the proportions of wins along with the fitted probability of a win under the Bradley-Terry model. Value A list of data frames: win a data frame comprising prop.win, the proportion of non-tied matches won by the first player in each bin and bin.win, the mid-point of each bin. tie (when ties are present) a data frame comprising prop.tie, the proportion of tied matches in each bin and bin.tie, the mid-point of each bin. Note This function is designed for single match outcomes, therefore data aggregated over player pairs will need to be expanded. Author(s) <NAME> See Also GenDavidson(), BTm() Examples #### A Bradley-Terry example using icehockey data ## Fit the standard Bradley-Terry model, ignoring home advantage standardBT <- BTm(outcome = result, player1 = visitor, player2 = opponent, id = "team", data = icehockey) ## comparing teams on a "level playing field" levelBT <- BTm(result, data.frame(team = visitor, home.ice = 0), data.frame(team = opponent, home.ice = home.ice), ~ team + home.ice, id = "team", data = icehockey) ## compare fit to observed proportion won ## exclude tied matches as not explicitly modelled here par(mfrow = c(1, 2)) plotProportions(win = result == 1, loss = result == 0, player1 = visitor, player2 = opponent, abilities = BTabilities(standardBT)[,1], data = icehockey, subset = result != 0.5, main = "Without home advantage") plotProportions(win = result == 1, loss = result == 0, player1 = visitor, player2 = opponent, home.adv = coef(levelBT)["home.ice"], at.home1 = 0, at.home2 = home.ice, abilities = BTabilities(levelBT)[,1], data = icehockey, subset = result != 0.5, main = "With home advantage") #### A generalized Davidson example using football data if (require(gnm)) { ## subset to first and last season for illustration football <- subset(football, season %in% c("2008-9", "2012-13")) ## convert to trinomial counts football.tri <- expandCategorical(football, "result", idvar = "match") ## add variable to indicate whether team playing at home football.tri$at.home <- !logical(nrow(football.tri)) ## fit Davidson model Dav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = match, family = poisson, data = football.tri) ## fit shifted & scaled Davidson model shifScalDav <- gnm(count ~ GenDavidson(result == 1, result == 0, result == -1, home:season, away:season, home.adv = ~1, tie.max = ~1, tie.scale = ~1, tie.mode = ~1, at.home1 = at.home, at.home2 = !at.home) - 1, eliminate = 
match, family = poisson, data = football.tri)

## diagnostic plots
main <- c("Davidson", "Shifted & Scaled Davidson")
mod <- list(Dav, shifScalDav)
names(mod) <- main
alpha <- names(coef(Dav)[-(1:2)])
## use football.tri data so that at.home can be found,
## but restrict to actual match results
par(mfrow = c(1,2))
for (i in 1:2) {
    coef <- parameters(mod[[i]])
    plotProportions(result == 1, result == 0, result == -1,
                    home:season, away:season,
                    abilities = coef[alpha], home.adv = coef["home.adv"],
                    tie.max = coef["tie.max"], tie.scale = coef["tie.scale"],
                    tie.mode = coef["tie.mode"],
                    at.home1 = at.home, at.home2 = !at.home,
                    main = main[i], data = football.tri, subset = count == 1)
}
}

predict.BTglmmPQL Predict Method for BTglmmPQL Objects

Description
Obtain predictions and optionally standard errors of those predictions from a "BTglmmPQL" object.

Usage
## S3 method for class 'BTglmmPQL'
predict( object, newdata = NULL, newrandom = NULL, level = ifelse(object$sigma == 0, 0, 1), type = c("link", "response", "terms"), se.fit = FALSE, terms = NULL, na.action = na.pass, ... )

Arguments
object a fitted object of class "BTglmmPQL"
newdata (optional) a data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used.
newrandom if newdata is provided, a corresponding design matrix for the random effects, with columns corresponding to the random effects estimated in the original model.
level an integer vector giving the level(s) at which predictions are required. Level zero corresponds to population-level predictions (fixed effects only), whilst level one corresponds to the individual-level predictions (full model) which are NA for contests involving individuals not in the original data. By default level = 0 if the model converged to a fixed effects model, 1 otherwise.
type the type of prediction required. The default is on the scale of the linear predictors; the alternative "response" is on the scale of the response variable. Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and type = "response" gives the predicted probabilities. The "terms" option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale (fixed effects only).
se.fit logical switch indicating if standard errors are required.
terms with type = "terms" by default all terms are returned. A character vector specifies which terms are to be returned.
na.action function determining what should be done with missing values in newdata. The default is to predict NA.
... further arguments passed to or from other methods.

Details
If newdata is omitted the predictions are based on the data used for the fit. In that case how cases with missing values in the original fit are treated is determined by the na.action argument of that fit. If na.action = na.omit omitted cases will not appear in the residuals, whereas if na.action = na.exclude they will appear (in predictions and standard errors), with residual value NA. See also napredict.
Standard errors for the predictions are approximated assuming the variance of the random effects is known, see Booth and Hobert (1998).

Value
If se.fit = FALSE, a vector or matrix of predictions. If se.fit = TRUE, a list with components
fit Predictions
se.fit Estimated standard errors

Author(s)
<NAME>

References
<NAME>. and <NAME>. (1998). Standard errors of prediction in Generalized Linear Mixed Models. Journal of the American Statistical Association 93(441), 262–272.
See Also predict.glm(), predict.BTm() Examples seedsModel <- glmmPQL(cbind(r, n - r) ~ seed + extract, random = diag(nrow(seeds)), family = binomial, data = seeds) pred <- predict(seedsModel, level = 0) predTerms <- predict(seedsModel, type = "terms") all.equal(pred, rowSums(predTerms) + attr(predTerms, "constant")) predict.BTm Predict Method for Bradley-Terry Models Description Obtain predictions and optionally standard errors of those predictions from a fitted Bradley-Terry model. Usage ## S3 method for class 'BTm' predict( object, newdata = NULL, level = ifelse(is.null(object$random), 0, 1), type = c("link", "response", "terms"), se.fit = FALSE, dispersion = NULL, terms = NULL, na.action = na.pass, ... ) Arguments object a fitted object of class "BTm" newdata (optional) a data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used. level for models with random effects: an integer vector giving the level(s) at which predictions are required. Level zero corresponds to population-level predictions (fixed effects only), whilst level one corresponds to the player-level predictions (full model) which are NA for contests involving players not in the original data. By default, level = 0 for a fixed effects model, 1 otherwise. type the type of prediction required. The default is on the scale of the linear predic- tors; the alternative "response" is on the scale of the response variable. Thus for a default Bradley-Terry model the default predictions are of log-odds (proba- bilities on logit scale) and type = "response" gives the predicted probabilities. The "terms" option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale (fixed effects only). se.fit logical switch indicating if standard errors are required. dispersion a value for the dispersion, not used for models with random effects. If omitted, that returned by summary applied to the object is used, where applicable. terms with type ="terms" by default all terms are returned. A character vector speci- fies which terms are to be returned. na.action function determining what should be done with missing values in newdata. The default is to predict NA. ... further arguments passed to or from other methods. Details If newdata is omitted the predictions are based on the data used for the fit. In that case how cases with missing values in the original fit are treated is determined by the na.action argument of that fit. If na.action = na.omit omitted cases will not appear in the residuals, whereas if na.action = na.exclude they will appear (in predictions and standard errors), with residual value NA. See also napredict. Value If se.fit = FALSE, a vector or matrix of predictions. If se = TRUE, a list with components fit Predictions se.fit Estimated standard errors Author(s) <NAME> See Also predict.glm(), predict.glmmPQL() Examples ## The final model in example(flatlizards) result <- rep(1, nrow(flatlizards$contests)) Whiting.model3 <- BTm(1, winner, loser, ~ throat.PC1[..] + throat.PC3[..] + head.length[..] + SVL[..] 
+ (1|..), family = binomial(link = "probit"), data = flatlizards, trace = TRUE) ## `new' data for contests between four of the original lizards ## factor levels must correspond to original levels, but unused levels ## can be dropped - levels must match rows of predictors newdata <- list(contests = data.frame( winner = factor(c("lizard048", "lizard060"), levels = c("lizard006", "lizard011", "lizard048", "lizard060")), loser = factor(c("lizard006", "lizard011"), levels = c("lizard006", "lizard011", "lizard048", "lizard060")) ), predictors = flatlizards$predictors[c(3, 6, 27, 33), ]) predict(Whiting.model3, level = 1, newdata = newdata) ## same as predict(Whiting.model3, level = 1)[1:2] ## introducing a new lizard newpred <- rbind(flatlizards$predictors[c(3, 6, 27), c("throat.PC1","throat.PC3", "SVL", "head.length")], c(-5, 1.5, 1, 0.1)) rownames(newpred)[4] <- "lizard059" newdata <- list(contests = data.frame( winner = factor(c("lizard048", "lizard059"), levels = c("lizard006", "lizard011", "lizard048", "lizard059")), loser = factor(c("lizard006", "lizard011"), levels = c("lizard006", "lizard011", "lizard048", "lizard059")) ), predictors = newpred) ## can only predict at population level for contest with new lizard predict(Whiting.model3, level = 0:1, se.fit = TRUE, newdata = newdata) ## predicting at specific levels of covariates ## consider a model from example(CEMS) table6.model <- BTm(outcome = cbind(win1.adj, win2.adj), player1 = school1, player2 = school2, formula = ~ .. + WOR[student] * Paris[..] + WOR[student] * Milano[..] + WOR[student] * Barcelona[..] + DEG[student] * St.Gallen[..] + STUD[student] * Paris[..] + STUD[student] * St.Gallen[..] + ENG[student] * St.Gallen[..] + FRA[student] * London[..] + FRA[student] * Paris[..] + SPA[student] * Barcelona[..] + ITA[student] * London[..] + ITA[student] * Milano[..] + SEX[student] * Milano[..], refcat = "Stockholm", data = CEMS) ## estimate abilities for a combination not seen in the original data ## same schools schools <- levels(CEMS$preferences$school1) ## new student data students <- data.frame(STUD = "other", ENG = "good", FRA = "good", SPA = "good", ITA = "good", WOR = "yes", DEG = "no", SEX = "female", stringsAsFactors = FALSE) ## set levels to be the same as original data for (i in seq_len(ncol(students))){ students[,i] <- factor(students[,i], levels(CEMS$students[,i])) } newdata <- list(preferences = data.frame(student = factor(500), # new id matching with `students[1,]` school1 = factor("London", levels = schools), school2 = factor("Paris", levels = schools)), students = students, schools = CEMS$schools) ## warning can be ignored as model specification was over-parameterized predict(table6.model, newdata = newdata) ## if treatment contrasts are use (i.e. 
one player is set as the reference ## category), then predicting the outcome of contests against the reference ## is equivalent to estimating abilities with specific covariate values ## add student with all values at reference levels students <- rbind(students, data.frame(STUD = "other", ENG = "good", FRA = "good", SPA = "good", ITA = "good", WOR = "no", DEG = "no", SEX = "female", stringsAsFactors = FALSE)) ## set levels to be the same as original data for (i in seq_len(ncol(students))){ students[,i] <- factor(students[,i], levels(CEMS$students[,i])) } newdata <- list(preferences = data.frame(student = factor(rep(c(500, 502), each = 6)), school1 = factor(schools, levels = schools), school2 = factor("Stockholm", levels = schools)), students = students, schools = CEMS$schools) predict(table6.model, newdata = newdata, se.fit = TRUE) ## the second set of predictions (elements 7-12) are equivalent to the output ## of BTabilities; the first set are adjust for `WOR` being equal to "yes" BTabilities(table6.model) qvcalc.BTabilities Quasi Variances for Estimated Abilities Description A method for qvcalc::qvcalc() to compute a set of quasi variances (and corresponding quasi standard errors) for estimated abilities from a Bradley-Terry model as returned by BTabilities(). Usage ## S3 method for class 'BTabilities' qvcalc(object, ...) Arguments object a "BTabilities" object as returned by BTabilities(). ... additional arguments, currently ignored. Details For details of the method see Firth (2000), Firth (2003) or Firth and de Menezes (2004). Quasi variances generalize and improve the accuracy of “floating absolute risk” (Easton et al., 1991). This device for economical model summary was first suggested by Ridout (1989). Ordinarily the quasi variances are positive and so their square roots (the quasi standard errors) exist and can be used in plots, etc. Value A list of class "qv", with components covmat The full variance-covariance matrix for the estimated abilities. qvframe A data frame with variables estimate, SE, quasiSE and quasiVar, the last two being a quasi standard error and quasi-variance for each ability. dispersion NULL (dispersion is fixed to 1). relerrs Relative errors for approximating the standard errors of all simple contrasts. factorname The name of the ID factor identifying players in the BTm formula. coef.indices NULL (no required for this method). modelcall The call to BTm to fit the Bradley-Terry model from which the abilities were estimated. Author(s) <NAME> References <NAME>, <NAME>. and <NAME>. (1991) Floating absolute risk: an alternative to relative risk in survival and case-control analysis avoiding an arbitrary reference group. Statistics in Medicine 10, 1025–1035. <NAME>. (2000) Quasi-variances in Xlisp-Stat and on the web. Journal of Statistical Software 5.4, 1–13. https://www.jstatsoft.org/article/view/v005i04. <NAME>. (2003) Overcoming the reference category problem in the presentation of statistical mod- els. Sociological Methodology 33, 1–18. <NAME>. and <NAME>. (2004) Quasi-variances. Biometrika 91, 65–80. <NAME>. de (1999) More useful standard errors for group and factor effects in generalized linear models. D.Phil. Thesis, Department of Statistics, University of Oxford. Ridout, M.S. (1989). Summarizing the results of fitting generalized linear models to data from designed experiments. In: Statistical Modelling: Proceedings of GLIM89 and the 4th International Workshop on Statistical Modelling held in Trento, Italy, July 17–21, 1989 (A. 
Decarli et al., eds.), pp 262–269. New York: Springer. See Also qvcalc::worstErrors(), qvcalc::plot.qv(). Examples example(baseball) baseball.qv <- qvcalc(BTabilities(baseballModel2)) print(baseball.qv) plot(baseball.qv, xlab = "team", levelNames = c("Bal", "Bos", "Cle", "Det", "Mil", "NY", "Tor")) residuals.BTm Residuals from a Bradley-Terry Model Description Computes residuals from a model object of class "BTm". In additional to the usual options for objects inheriting from class "glm", a "grouped" option is implemented to compute player-specific residuals suitable for diagnostic checking of a predictor involving player-level covariates. Usage ## S3 method for class 'BTm' residuals( object, type = c("deviance", "pearson", "working", "response", "partial", "grouped"), by = object$id, ... ) Arguments object a model object for which inherits(model, "BTm") is TRUE. type the type of residuals which should be returned. The alternatives are: "deviance" (default), "pearson", "working", "response", and "partial". by the grouping factor to use when type = "grouped". ... arguments to pass on other methods. Details For type other than "grouped" see residuals.glm(). For type = "grouped" the residuals returned are weighted means of working residuals, with weights equal to the binomial denominators in the fitted model. These are suitable for diagnostic model checking, for example plotting against candidate predictors. Value A numeric vector of length equal to the number of players, with a "weights" attribute. Author(s) <NAME> and <NAME> References <NAME>. (2005) Bradley-Terry models in R. Journal of Statistical Software 12(1), 1–12. <NAME>. and <NAME>. (2012) Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9), 1–21. See Also BTm(), BTabilities() Examples ## ## See ?springall ## springall.model <- BTm(cbind(win.adj, loss.adj), col, row, ~ flav[..] + gel[..] + flav.2[..] + gel.2[..] + flav.gel[..] + (1 | ..), data = springall) res <- residuals(springall.model, type = "grouped") with(springall$predictors, plot(flav, res)) with(springall$predictors, plot(gel, res)) ## Weighted least-squares regression of these residuals on any variable ## already included in the model yields slope coefficient zero: lm(res ~ flav, weights = attr(res, "weights"), data = springall$predictors) lm(res ~ gel, weights = attr(res, "weights"), data = springall$predictors) seeds Seed Germination Data from Crowder (1978) Description Data from Crowder(1978) giving the proportion of seeds germinated for 21 plates that were ar- ranged according to a 2x2 factorial layout by seed variety and type of root extract. Usage seeds Format A data frame with 21 observations on the following 4 variables. r the number of germinated seeds. n the total number of seeds. seed the seed variety. extract the type of root extract. Source Crowder, M. (1978) Beta-Binomial ANOVA for proportions. Applied Statistics, 27, 34–37. References <NAME>. and <NAME>. (1993) Approximate inference in Generalized Linear Mixed Models. Journal of the American Statistical Association, 88(421), 9–25. See Also glmmPQL() Examples summary(glmmPQL(cbind(r, n - r) ~ seed + extract, random = diag(nrow(seeds)), family = binomial, data = seeds)) sound.fields Kousgaard (1984) Data on Pair Comparisons of Sound Fields Description The results of a series of factorial subjective room acoustic experiments carried out at the Technical University of Denmark by <NAME>. 
Usage
sound.fields

Format
A list containing two data frames, sound.fields$comparisons and sound.fields$design.
The sound.fields$comparisons data frame has 84 observations on the following 8 variables:
field1 a factor with levels c("000", "001", "010", "011", "100", "101", "110", "111"), the first sound field in a comparison
field2 a factor with the same levels as field1; the second sound field in a comparison
win1 integer, the number of times that field1 was preferred to field2
tie integer, the number of times that no preference was expressed when comparing field1 and field2
win2 integer, the number of times that field2 was preferred to field1
win1.adj numeric, equal to win1 + tie/2
win2.adj numeric, equal to win2 + tie/2
instrument a factor with 3 levels, c("cello", "flute", "violin")
The sound.fields$design data frame has 8 observations (one for each of the sound fields compared in the experiment) on the following 3 variables:
a a factor with levels c("0", "1"), the direct sound factor (0 for obstructed sight line, 1 for free sight line); contrasts are sum contrasts
b a factor with levels c("0", "1"), the reflection factor (0 for -26dB, 1 for -20dB); contrasts are sum contrasts
c a factor with levels c("0", "1"), the reverberation factor (0 for -24dB, 1 for -20dB); contrasts are sum contrasts

Details
The variables win1.adj and win2.adj are provided in order to allow a simple way of handling ties (in which a tie counts as half a win and half a loss), which is slightly different numerically from the Davidson (1970) method that is used by Kousgaard (1984): see the examples.

Author(s)
<NAME>

Source
Kousgaard, N. (1984) Analysis of a Sound Field Experiment by a Model for Paired Comparisons with Explanatory Variables. Scandinavian Journal of Statistics 11, 51–57.

References
<NAME>. (1970) Extending the Bradley-Terry model to accommodate ties in paired comparison experiments. Journal of the American Statistical Association 65, 317–328.

Examples
##
## Fit the Bradley-Terry model to data for flutes, using the simple
## 'add 0.5' method to handle ties:
##
flutes.model <- BTm(cbind(win1.adj, win2.adj), field1, field2, ~ field,
                    id = "field", subset = (instrument == "flute"),
                    data = sound.fields)
##
## This agrees (after re-scaling) quite closely with the estimates given
## in Table 3 of Kousgaard (1984):
##
table3.flutes <- c(-0.581, -1.039, 0.347, 0.205, 0.276, 0.347, 0.311, 0.135)
plot(c(0, coef(flutes.model)), table3.flutes)
abline(lm(table3.flutes ~ c(0, coef(flutes.model))))
##
## Now re-parameterise that model in terms of the factorial effects, as
## in Table 5 of Kousgaard (1984):
##
flutes.model.reparam <- update(flutes.model,
                               formula = ~ a[field] * b[field] * c[field])
table5.flutes <- c(.267, .250, -.088, -.294, .062, .009, -0.070)
plot(coef(flutes.model.reparam), table5.flutes)
abline(lm(table5.flutes ~ coef(flutes.model.reparam)))

springall Springall (1973) Data on Subjective Evaluation of Flavour Strength

Description
Data from Section 7 of the paper by Springall (1973) on Bradley-Terry response surface modelling. An experiment to assess the effects of gel and flavour concentrations on the subjective assessment of flavour strength by pair comparisons.

Usage
springall

Format
A list containing two data frames, springall$contests and springall$predictors.
The springall$contests data frame has 36 observations (one for each possible pairwise compar- ison of the 9 treatments) on the following 7 variables: row a factor with levels 1:9, the row number in Springall’s dataset# col a factor with levels 1:9, the column number in Springall’s dataset win integer, the number of wins for column treatment over row treatment loss integer, the number of wins for row treatment over column treatment tie integer, the number of ties between row and column treatments win.adj numeric, equal to win + tie/2 loss.adj numeric, equal to loss + tie/2 The predictors data frame has 9 observations (one for each treatment) on the following 5 vari- ables: flav numeric, the flavour concentration gel numeric, the gel concentration flav.2 numeric, equal to flav^2 gel.2 numeric, equal to gel^2 flav.gel numeric, equal to flav * gel Details The variables win.adj and loss.adj are provided in order to allow a simple way of handling ties (in which a tie counts as half a win and half a loss), which is slightly different numerically from the Rao and Kupper (1967) model that Springall (1973) uses. Author(s) <NAME> Source Springall, A (1973) Response surface fitting using a generalization of the Bradley-Terry paired comparison method. Applied Statistics 22, 59–68. References <NAME>. and <NAME>. (1967) Ties in paired-comparison experiments: a generalization of the Bradley-Terry model. Journal of the American Statistical Association, 63, 194–204. Examples ## ## Fit the same response-surface model as in section 7 of ## Springall (1973). ## ## Differences from Springall's fit are minor, arising from the ## different treatment of ties. ## ## Springall's model in the paper does not include the random effect. ## In this instance, however, that makes no difference: the random-effect ## variance is estimated as zero. ## summary(springall.model <- BTm(cbind(win.adj, loss.adj), col, row, ~ flav[..] + gel[..] + flav.2[..] + gel.2[..] + flav.gel[..] + (1 | ..), data = springall))
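Both sound.fields and springall supply adjusted counts (win.adj = win + tie/2, loss.adj = loss + tie/2) so that a tie is treated as half a win and half a loss, as noted in their Details sections. The short sketch below is not part of the original manual; it only illustrates how those adjusted counts could be recomputed from the raw win/loss/tie columns before fitting BTm().

## Minimal sketch (not from the BradleyTerry2 manual): rebuild the
## adjusted counts in which a tie counts as half a win and half a loss.
library(BradleyTerry2)
contests <- springall$contests
contests$win.adj <- contests$win + contests$tie / 2
contests$loss.adj <- contests$loss + contests$tie / 2
## the recomputed columns agree with those shipped in the data
all.equal(contests$win.adj, springall$contests$win.adj)
all.equal(contests$loss.adj, springall$contests$loss.adj)

The same half-win adjustment underlies the win1.adj and win2.adj columns used for the flutes model fitted from sound.fields above.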
pygbif Documentation
Release 0.6.3
<NAME>
May 25, 2023

Installation
2.1 Installation guide
3.1 Frequently Asked Questions
3.2 Usecases
4.1 pygbif modules
4.2 caching module
4.3 occurrence module
4.4 registry module
4.5 species module
4.6 maps module
4.7 utils module
5.1 Changelog
5.2 Contributors
5.3 Contributing
5.4 Contributor Code of Conduct
5.5 LICENSE
5.6 Indices and tables

Python client for the GBIF API
Source on GitHub at gbif/pygbif

CHAPTER 1 Getting help
Having trouble? Or want to know how to get started?
• Try the FAQ – it’s got answers to some common questions.
• Looking for specific information? Try the genindex
• Report bugs with pygbif in our issue tracker.

CHAPTER 2 Installation
2.1 Installation guide
2.1.1 Installing pygbif
Stable from pypi
pip install pygbif
Development version
[sudo] pip install git+git://github.com/gbif/pygbif.git#egg=pygbif
Installation guide How to install pygbif.

CHAPTER 3 Docs
3.1 Frequently Asked Questions
3.1.1 What other GBIF clients are out there?
• R: rgbif
• Ruby: gbifrb
3.2 Usecases
3.2.1 Use case 1: Get occurrence data for a set of taxonomic names
Load library
from pygbif import species as species
from pygbif import occurrences as occ
First, get GBIF backbone taxonomic keys
splist = ['Cyanocitta stelleri', 'Junco hyemalis', 'Aix sponsa',
          'Ursus americanus', 'Pinus contorta', 'Poa annua']
keys = [ species.name_backbone(x)['usageKey'] for x in splist ]
Then, get a count of occurrence records for each taxon, and pull out the number of records found for each taxon
out = [ occ.search(taxonKey = x, limit=0)['count'] for x in keys ]
Make a dict of species names and number of records, sorting in descending order
x = dict(zip(splist, out))
sorted(x.items(), key=lambda z:z[1], reverse=True)
Frequently Asked Questions Frequently asked questions.
Usecases Usecases for pygbif.

CHAPTER 4 Modules
4.1 pygbif modules
pygbif is split up into modules for each of the major groups of API methods.
• Registry - Datasets, Nodes, Installations, Networks, Organizations
• Species - Taxonomic names
• Occurrences - Occurrence data, including the download API
• Maps - Make maps
You can import the entire library, or each module individually as needed. In addition, the caching method allows you to manage whether HTTP requests are cached or not.
4.2 caching module
caching module API:
• pygbif.caching
Example usage:
import pygbif
pygbif.caching(True)
4.2.1 caching API
pygbif.caching(cache=False, name=None, backend='sqlite', expire_after=86400, allowable_codes=(200, ), allowable_methods=('GET', ))
pygbif caching management
Parameters
• cache – [bool] if True all http requests are cached. if False (default), no http requests are cached.
• name – [str] the cache name. when backend=sqlite, this is the path for the sqlite file, ignored if sqlite not used.
if not set, the file is put in your temporary directory, and therefore is cleaned up/deleted after closing your python session • backend – [str] the backend, one of: – sqlite sqlite database (default) – memory not persistent, stores all data in Python dict in memory – mongodb (experimental) MongoDB database (pymongo < 3.0 required and configured) – redis stores all data on a redis data store (redis required and configured) • expire_after – [str] timedelta or number of seconds after cache will be expired or None (default) to ignore expiration. default: 86400 seconds (24 hrs) • allowable_codes – [tuple] limit caching only for response with this codes (default: 200) • allowable_methods – [tuple] cache only requests of this methods (default: ‘GET’) Returns sets options to be used by pygbif, returns the options you selected in a hash Note: setting cache=False will turn off caching, but the backend data still persists. thus, you can turn caching back on without losing your cache. this also means if you want to delete your cache you have to do it yourself. Note: on loading pygbif, we clean up expired responses Usage: import pygbif # caching is off by default from pygbif import occurrences %time z=occurrences.search(taxonKey = 3329049) %time w=occurrences.search(taxonKey = 3329049) # turn caching on pygbif.caching(True) %time z=occurrences.search(taxonKey = 3329049) %time w=occurrences.search(taxonKey = 3329049) # set a different backend pygbif.caching(cache=True, backend="redis") %time z=occurrences.search(taxonKey = 3329049) %time w=occurrences.search(taxonKey = 3329049) # set a different backend pygbif.caching(cache=True, backend="mongodb") %time z=occurrences.search(taxonKey = 3329049) %time w=occurrences.search(taxonKey = 3329049) # set path to a sqlite file pygbif.caching(name = "some/path/my_file") pygbif Documentation, Release 0.6.3 4.3 occurrence module occurrence module API: • search • get • get_verbatim • get_fragment • count • count_basisofrecord • count_year • count_datasets • count_countries • count_schema • count_publishingcountries • download • download_meta • download_list • download_get Example usage: from pygbif import occurrences as occ occ.search(taxonKey = 3329049) occ.get(key = 1986559641) occ.count(isGeoreferenced = True) occ.download('basisOfRecord = PRESERVED_SPECIMEN') occ.download('taxonKey = 3119195') occ.download('decimalLatitude > 50') occ.download_list(user = "sckott", limit = 5) occ.download_meta(key = "0000099-140929101555934") occ.download_get("0000066-140928181241064") 4.3.1 occurrences API occurrences.search(repatriated=None, kingdomKey=None, phylumKey=None, classKey=None, orderKey=None, familyKey=None, genusKey=None, subgenusKey=None, scien- tificName=None, country=None, publishingCountry=None, hasCoordinate=None, typeStatus=None, recordNumber=None, lastInterpreted=None, continent=None, geometry=None, recordedBy=None, recordedByID=None, identifiedByID=None, basisOfRecord=None, datasetKey=None, eventDate=None, catalogNum- ber=None, year=None, month=None, decimalLatitude=None, decimalLongi- tude=None, elevation=None, depth=None, institutionCode=None, collection- Code=None, hasGeospatialIssue=None, issue=None, q=None, spellCheck=None, mediatype=None, limit=300, offset=0, establishmentMeans=None, facet=None, facetMincount=None, facetMultiselect=None, **kwargs) Search GBIF occurrences pygbif Documentation, Release 0.6.3 Parameters • taxonKey – [int] A GBIF occurrence identifier • q – [str] Simple search parameter. 
The value for this parameter can be a simple word or a phrase. • spellCheck – [bool] If True ask GBIF to check your spelling of the value passed to the search parameter. IMPORTANT: This only checks the input to the search parameter, and no others. Default: False • repatriated – [str] Searches for records whose publishing country is different to the country where the record was recorded in • kingdomKey – [int] Kingdom classification key • phylumKey – [int] Phylum classification key • classKey – [int] Class classification key • orderKey – [int] Order classification key • familyKey – [int] Family classification key • genusKey – [int] Genus classification key • subgenusKey – [int] Subgenus classification key • scientificName – [str] A scientific name from the GBIF backbone. All included and synonym taxa are included in the search. • datasetKey – [str] The occurrence dataset key (a uuid) • catalogNumber – [str] An identifier of any form assigned by the source within a physical collection or digital dataset for the record which may not unique, but should be fairly unique in combination with the institution and collection code. • recordedBy – [str] The person who recorded the occurrence. • recordedByID – [str] Identifier (e.g. ORCID) for the person who recorded the occur- rence • identifiedByID – [str] Identifier (e.g. ORCID) for the person who provided the taxo- nomic identification of the occurrence. • collectionCode – [str] An identifier of any form assigned by the source to identify the physical collection or digital dataset uniquely within the text of an institution. • institutionCode – [str] An identifier of any form assigned by the source to identify the institution the record belongs to. Not guaranteed to be que. • country – [str] The 2-letter country code (as per ISO-3166-1) of the country in which the occurrence was recorded. See here http://en.wikipedia.org/wiki/ISO_3166-1_alpha-2 • basisOfRecord – [str] Basis of record, as defined in our BasisOfRecord enum here http://gbif.github.io/gbif-api/apidocs/org/gbif/api/vocabulary/BasisOfRecord.html Accept- able values are: – FOSSIL_SPECIMEN An occurrence record describing a fossilized specimen. – HUMAN_OBSERVATION An occurrence record describing an observation made by one or more people. – LIVING_SPECIMEN An occurrence record describing a living specimen. pygbif Documentation, Release 0.6.3 – MACHINE_OBSERVATION An occurrence record describing an observation made by a machine. – MATERIAL_CITATION An occurrence record based on a reference to a scholarly pub- lication. – OBSERVATION An occurrence record describing an observation. – OCCURRENCE An existence of an organism at a particular place and time. No more specific basis. – PRESERVED_SPECIMEN An occurrence record describing a preserved specimen. • eventDate – [date] Occurrence date in ISO 8601 format: yyyy, yyyy-MM, yyyy-MM- dd, or MM-dd. Supports range queries, smaller,larger (e.g., 1990,1991, whereas 1991, 1990 wouldn’t work) • year – [int] The 4 digit year. A year of 98 will be interpreted as AD 98. Supports range queries, smaller,larger (e.g., 1990,1991, whereas 1991,1990 wouldn’t work) • month – [int] The month of the year, starting with 1 for January. Supports range queries, smaller,larger (e.g., 1,2, whereas 2,1 wouldn’t work) • decimalLatitude – [float] Latitude in decimals between -90 and 90 based on WGS 84. Supports range queries, smaller,larger (e.g., 25,30, whereas 30,25 wouldn’t work) • decimalLongitude – [float] Longitude in decimals between -180 and 180 based on WGS 84. 
Supports range queries (e.g., -0.4,-0.2, whereas -0.2,-0.4 wouldn’t work). • publishingCountry – [str] The 2-letter country code (as per ISO-3166-1) of the coun- try in which the occurrence was recorded. • elevation – [int/str] Elevation in meters above sea level. Supports range queries, smaller,larger (e.g., 5,30, whereas 30,5 wouldn’t work) • depth – [int/str] Depth in meters relative to elevation. For example 10 meters below a lake surface with given elevation. Supports range queries, smaller,larger (e.g., 5,30, whereas 30,5 wouldn’t work) • geometry – [str] Searches for occurrences inside a polygon described in Well Known Text (WKT) format. A WKT shape written as either POINT, LINESTRING, LINEAR- RING POLYGON, or MULTIPOLYGON. Example of a polygon: ((30.1 10.1, 20, 20 40, 40 40, 30.1 10.1)) would be queried as http://bit.ly/1BzNwDq. Poly- gons must have counter-clockwise ordering of points. • hasGeospatialIssue – [bool] Includes/excludes occurrence records which contain spatial issues (as determined in our record interpretation), i.e. hasGeospatialIssue=TRUE returns only those records with spatial issues while hasGeospatialIssue=FALSE includes only records without spatial issues. The absence of this parameter returns any record with or without spatial issues. • issue – [str] One or more of many possible issues with each occurrence record. See Details. Issues passed to this parameter filter results by the issue. • hasCoordinate – [bool] Return only occurence records with lat/long data (True) or all records (False, default). • typeStatus – [str] Type status of the specimen. One of many options. See ?typestatus • recordNumber – [int] Number recorded by collector of the data, different from GBIF record number. See http://rs.tdwg.org/dwc/terms/#recordNumber} for more info pygbif Documentation, Release 0.6.3 • lastInterpreted – [date] Date the record was last modified in GBIF, in ISO 8601 format: yyyy, yyyy-MM, yyyy-MM-dd, or MM-dd. Supports range queries, smaller,larger (e.g., 1990,1991, whereas 1991,1990 wouldn’t work) • continent – [str] Continent. One of africa, antarctica, asia, europe, north_america (North America includes the Caribbean and reachies down and includes Panama), oceania, or south_america • fields – [str] Default (all) returns all fields. minimal returns just taxon name, key, latitude, and longitude. Or specify each field you want returned by name, e.g. fields = ['name','latitude','elevation']. • mediatype – [str] Media type. Default is NULL, so no filtering on mediatype. Options: NULL, MovingImage, Sound, and StillImage • limit – [int] Number of results to return. Default: 300 • offset – [int] Record to start at. Default: 0 • facet – [str] a character vector of length 1 or greater • establishmentMeans – [str] EstablishmentMeans, possible values include: INTRO- DUCED, INVASIVE, MANAGED, NATIVE, NATURALISED, UNCERTAIN • facetMincount – [int] minimum number of records to be included in the faceting results • facetMultiselect – [bool] Set to True to still return counts for values that are not currently filtered. See examples. Default: False Returns A dictionary Usage: from pygbif import occurrences occurrences.search(taxonKey = 3329049) # Return 2 results, this is the default by the way occurrences.search(taxonKey=3329049, limit=2) # Instead of getting a taxon key first, you can search for a name directly # However, note that using this approach (with `scientificName="..."`) # you are getting synonyms too. 
The results for using `scientifcName` and # `taxonKey` parameters are the same in this case, but I wouldn't be surprised if ˓→for some # names they return different results occurrences.search(scientificName = 'Ursus americanus') from pygbif import species key = species.name_backbone(name = 'Ursus americanus', rank='species')['usageKey'] occurrences.search(taxonKey = key) # Search by dataset key occurrences.search(datasetKey='7b5d6a48-f762-11e1-a439-00145eb45e9a', limit=20) # Search by catalog number occurrences.search(catalogNumber="49366", limit=20) # occurrences.search(catalogNumber=["49366","Bird.27847588"], limit=20) # Use paging parameters (limit and offset) to page. Note the different results # for the two queries below. occurrences.search(datasetKey='7b5d6a48-f762-11e1-a439-00145eb45e9a', offset=10, ˓→limit=5) (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) occurrences.search(datasetKey='7b5d6a48-f762-11e1-a439-00145eb45e9a', offset=20, ˓→limit=5) # Many dataset keys # occurrences.search(datasetKey=["50c9509d-22c7-4a22-a47d-8c48425ef4a7", ˓→"7b5d6a48-f762-11e1-a439-00145eb45e9a"], limit=20) # Search by collector name res = occurrences.search(recordedBy="smith", limit=20) [ x['recordedBy'] for x in res['results'] ] # Many collector names # occurrences.search(recordedBy=["smith","<NAME>"], limit=20) # recordedByID occurrences.search(recordedByID="https://orcid.org/0000-0003-1691-239X", limit = ˓→3) # identifiedByID occurrences.search(identifiedByID="https://orcid.org/0000-0003-1691-239X", limit ˓→= 3) # Search for many species splist = ['Cyanocitta stelleri', 'Junco hyemalis', 'Aix sponsa'] keys = [ species.name_suggest(x)[0]['key'] for x in splist ] out = [ occurrences.search(taxonKey = x, limit=1) for x in keys ] [ x['results'][0]['speciesKey'] for x in out ] # Search - q parameter occurrences.search(q = "kingfisher", limit=20) ## spell check - only works with the `search` parameter ### spelled correctly - same result as above call occurrences.search(q = "kingfisher", limit=20, spellCheck = True) ### spelled incorrectly - stops with suggested spelling occurrences.search(q = "kajsdkla", limit=20, spellCheck = True) ### spelled incorrectly - stops with many suggested spellings ### and number of results for each occurrences.search(q = "helir", limit=20, spellCheck = True) # Search on latitidue and longitude occurrences.search(decimalLatitude=50, decimalLongitude=10, limit=2) # Search on a bounding box ## in well known text format occurrences.search(geometry='POLYGON((30.1 10.1, 10 20, 20 40, 40 40, 30.1 10.1)) ˓→', limit=20) from pygbif import species key = species.name_suggest(q='Aesculus hippocastanum')[0]['key'] occurrences.search(taxonKey=key, geometry='POLYGON((30.1 10.1, 10 20, 20 40, 40 ˓→40, 30.1 10.1))', limit=20) ## multipolygon wkt = 'MULTIPOLYGON(((-123 38, -123 43, -116 43, -116 38, -123 38)),((-97 41, -97 ˓→45, -93 45, -93 41, -97 41)))' occurrences.search(geometry = wkt, limit = 20) # Search on country occurrences.search(country='US', limit=20) (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) occurrences.search(country='FR', limit=20) occurrences.search(country='DE', limit=20) # Get only occurrences with lat/long data occurrences.search(taxonKey=key, hasCoordinate=True, limit=20) # Get only occurrences that were recorded as living specimens occurrences.search(taxonKey=key, basisOfRecord="LIVING_SPECIMEN", ˓→hasCoordinate=True, limit=20) # Get occurrences for a particular eventDate 
occurrences.search(taxonKey=key, eventDate="2013", limit=20) occurrences.search(taxonKey=key, year="2013", limit=20) occurrences.search(taxonKey=key, month="6", limit=20) # Get occurrences based on depth key = species.name_backbone(name='Salmo salar', kingdom='animals')['usageKey'] occurrences.search(taxonKey=key, depth="5", limit=20) # Get occurrences based on elevation key = species.name_backbone(name='Puma concolor', kingdom='animals')['usageKey'] occurrences.search(taxonKey=key, elevation=50, hasCoordinate=True, limit=20) # Get occurrences based on institutionCode occurrences.search(institutionCode="TLMF", limit=20) # Get occurrences based on collectionCode occurrences.search(collectionCode="Floristic Databases MV - Higher Plants", ˓→limit=20) # Get only those occurrences with spatial issues occurrences.search(taxonKey=key, hasGeospatialIssue=True, limit=20) # Search using a query string occurrences.search(q="kingfisher", limit=20) # Range queries ## See Detail for parameters that support range queries ### this is a range depth, with lower/upper limits in character string occurrences.search(depth='50,100') ## Range search with year occurrences.search(year='1999,2000', limit=20) ## Range search with latitude occurrences.search(decimalLatitude='29.59,29.6') # Search by specimen type status ## Look for possible values of the typeStatus parameter looking at the typestatus ˓→dataset occurrences.search(typeStatus = 'allotype') # Search by specimen record number ## This is the record number of the person/group that submitted the data, not GBIF ˓→'s numbers ## You can see that many different groups have record number 1, so not super ˓→helpful (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) occurrences.search(recordNumber = 1) # Search by last time interpreted: Date the record was last modified in GBIF ## The lastInterpreted parameter accepts ISO 8601 format dates, including ## yyyy, yyyy-MM, yyyy-MM-dd, or MM-dd. Range queries are accepted for ˓→lastInterpreted occurrences.search(lastInterpreted = '2014-04-01') # Search by continent ## One of africa, antarctica, asia, europe, north_america, oceania, or south_ ˓→america occurrences.search(continent = 'south_america') occurrences.search(continent = 'africa') occurrences.search(continent = 'oceania') occurrences.search(continent = 'antarctica') # Search for occurrences with images occurrences.search(mediatype = 'StillImage') occurrences.search(mediatype = 'MovingImage') x = occurrences.search(mediatype = 'Sound') [z['media'] for z in x['results']] # Query based on issues occurrences.search(taxonKey=1, issue='DEPTH_UNLIKELY') occurrences.search(taxonKey=1, issue=['DEPTH_UNLIKELY','COORDINATE_ROUNDED']) # Show all records in the Arizona State Lichen Collection that cant be matched to ˓→the GBIF # backbone properly: occurrences.search(datasetKey='84c0e1a0-f762-11e1-a439-00145eb45e9a', issue=[ ˓→'TAXON_MATCH_NONE','TAXON_MATCH_HIGHERRANK']) # If you pass in an invalid polygon you get hopefully informative errors ### the WKT string is fine, but GBIF says bad polygon wkt = 'POLYGON((-178.59375 64.83258989321493,-165.9375 59.24622380205539, -147.3046875 59.065977905449806,-130.78125 51.04484764446178,-125.859375 36. ˓→70806354647625, -112.1484375 23.367471303759686,-105.1171875 16.093320185359257,-86.8359375 9. ˓→23767076398516, -82.96875 2.9485268155066175,-82.6171875 -14.812060061226388,-74.8828125 -18. ˓→849111862023985, -77.34375 -47.661687803329166,-84.375 -49.975955187343295,174.7265625 -50. 
˓→649460483096114, 179.296875 -42.19189902447192,-176.8359375 -35.634976650677295,176.8359375 -31. ˓→835565983656227, 163.4765625 -6.528187613695323,152.578125 1.894796132058301,135.703125 4. ˓→702353722559447, 127.96875 15.077427674847987,127.96875 23.689804541429606,139.921875 32. ˓→06861069132688, 149.4140625 42.65416193033991,159.2578125 48.3160811030533,168.3984375 57. ˓→019804336633165, 178.2421875 59.95776046458139,-179.6484375 61.16708631440347,-178.59375 64. ˓→83258989321493))' occurrences.search(geometry = wkt) # Faceting ## return no occurrence records with limit=0 x = occurrences.search(facet = "country", limit = 0) (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) x['facets'] ## also return occurrence records x = occurrences.search(facet = "establishmentMeans", limit = 10) x['facets'] x['results'] ## multiple facet variables x = occurrences.search(facet = ["country", "basisOfRecord"], limit = 10) x['results'] x['facets'] x['facets']['country'] x['facets']['basisOfRecord'] x['facets']['basisOfRecord']['count'] ## set a minimum facet count x = occurrences.search(facet = "country", facetMincount = 30000000L, limit = 0) x['facets'] ## paging per each faceted variable ### do so by passing in variables like "country" + "_facetLimit" = "country_ ˓→facetLimit" ### or "country" + "_facetOffset" = "country_facetOffset" x = occurrences.search( facet = ["country", "basisOfRecord", "hasCoordinate"], country_facetLimit = 3, basisOfRecord_facetLimit = 6, limit = 0 ) x['facets'] # requests package options ## There's an acceptable set of requests options (['timeout', 'cookies', 'auth', ## 'allow_redirects', 'proxies', 'verify', 'stream', 'cert']) you can pass ## in via **kwargs, e.g., set a timeout x = occurrences.search(timeout = 1) occurrences.get(**kwargs) Gets details for a single, interpreted occurrence Parameters key – [int] A GBIF occurrence key Returns A dictionary, of results Usage: from pygbif import occurrences occurrences.get(key = 1258202889) occurrences.get(key = 1227768771) occurrences.get(key = 1227769518) occurrences.get_verbatim(**kwargs) Gets a verbatim occurrence record without any interpretation Parameters key – [int] A GBIF occurrence key Returns A dictionary, of results Usage: pygbif Documentation, Release 0.6.3 from pygbif import occurrences occurrences.get_verbatim(key = 1258202889) occurrences.get_verbatim(key = 1227768771) occurrences.get_verbatim(key = 1227769518) occurrences.get_fragment(**kwargs) Get a single occurrence fragment in its raw form (xml or json) Parameters key – [int] A GBIF occurrence key Returns A dictionary, of results Usage: from pygbif import occurrences occurrences.get_fragment(key = 1052909293) occurrences.get_fragment(key = 1227768771) occurrences.get_fragment(key = 1227769518) occurrences.count(basisOfRecord=None, country=None, isGeoreferenced=None, datasetKey=None, publishingCountry=None, typeStatus=None, issue=None, year=None, **kwargs) Returns occurrence counts for a predefined set of dimensions For all parameters below, only one value allowed per function call. See search() for passing more than one value per parameter. 
Parameters • taxonKey – [int] A GBIF occurrence identifier • basisOfRecord – [str] A GBIF occurrence identifier • country – [str] A GBIF occurrence identifier • isGeoreferenced – [bool] A GBIF occurrence identifier • datasetKey – [str] A GBIF occurrence identifier • publishingCountry – [str] A GBIF occurrence identifier • typeStatus – [str] A GBIF occurrence identifier • issue – [str] A GBIF occurrence identifier • year – [int] A GBIF occurrence identifier Returns dict Usage: from pygbif import occurrences occurrences.count(taxonKey = 3329049) occurrences.count(country = 'CA') occurrences.count(isGeoreferenced = True) occurrences.count(basisOfRecord = 'OBSERVATION') occurrences.count_basisofrecord() Lists occurrence counts by basis of record. Returns dict Usage: pygbif Documentation, Release 0.6.3 from pygbif import occurrences occurrences.count_basisofrecord() occurrences.count_year(**kwargs) Lists occurrence counts by year Parameters year – [int] year range, e.g., 1990,2000. Does not support ranges like asterisk, 2010 Returns dict Usage: from pygbif import occurrences occurrences.count_year(year = '1990,2000') occurrences.count_datasets(country=None, **kwargs) Lists occurrence counts for datasets that cover a given taxon or country Parameters • taxonKey – [int] Taxon key • country – [str] A country, two letter code Returns dict Usage: from pygbif import occurrences occurrences.count_datasets(country = "DE") occurrences.count_countries(**kwargs) Lists occurrence counts for all countries covered by the data published by the given country Parameters publishingCountry – [str] A two letter country code Returns dict Usage: from pygbif import occurrences occurrences.count_countries(publishingCountry = "DE") occurrences.count_schema() List the supported metrics by the service Returns dict Usage: from pygbif import occurrences occurrences.count_schema() occurrences.count_publishingcountries(**kwargs) Lists occurrence counts for all countries that publish data about the given country Parameters country – [str] A country, two letter code Returns dict Usage: pygbif Documentation, Release 0.6.3 from pygbif import occurrences occurrences.count_publishingcountries(country = "DE") occurrences.download(format=’SIMPLE_CSV’, user=None, pwd=None, email=None, pred_type=’and’) Spin up a download request for GBIF occurrence data. Parameters • queries (str, list or dictionary) – One or more of query arguments to kick of a download job. See Details. • format – (character) One of the GBIF accepted download formats https://www.gbif.org/ faq?question=download-formats • pred_type – (character) One of equals (=), and (&), or‘ (|), lessThan (<), lessThanOrEquals (<=), greaterThan (>), greaterThanOrEquals (>=), in, within, not (!), like • user – (character) User name within GBIF’s website. Required. Set in your env vars with the option GBIF_USER • pwd – (character) User password within GBIF’s website. Required. Set in your env vars with the option GBIF_PWD • email – (character) Email address to recieve download notice done email. Required. Set in your env vars with the option GBIF_EMAIL Argument passed have to be passed as character (e.g., country = US), with a space between key (country), operator (=), and value (US). See the type parameter for possible options for the operator. This character string is parsed internally. Acceptable arguments to ... 
(args) are: • taxonKey = TAXON_KEY • scientificName = SCIENTIFIC_NAME • country = COUNTRY • publishingCountry = PUBLISHING_COUNTRY • hasCoordinate = HAS_COORDINATE • hasGeospatialIssue = HAS_GEOSPATIAL_ISSUE • typeStatus = TYPE_STATUS • recordNumber = RECORD_NUMBER • lastInterpreted = LAST_INTERPRETED • continent = CONTINENT • geometry = GEOMETRY • basisOfRecord = BASIS_OF_RECORD • datasetKey = DATASET_KEY • eventDate = EVENT_DATE • catalogNumber = CATALOG_NUMBER • year = YEAR • month = MONTH pygbif Documentation, Release 0.6.3 • decimalLatitude = DECIMAL_LATITUDE • decimalLongitude = DECIMAL_LONGITUDE • elevation = ELEVATION • depth = DEPTH • institutionCode = INSTITUTION_CODE • collectionCode = COLLECTION_CODE • issue = ISSUE • mediatype = MEDIA_TYPE • recordedBy = RECORDED_BY • repatriated = REPATRIATED • classKey = CLASS_KEY • coordinateUncertaintyInMeters = COORDINATE_UNCERTAINTY_IN_METERS • crawlId = CRAWL_ID • datasetId = DATASET_ID • datasetName = DATASET_NAME • distanceFromCentroidInMeters = DISTANCE_FROM_CENTROID_IN_METERS • establishmentMeans = ESTABLISHMENT_MEANS • eventId = EVENT_ID • familyKey = FAMILY_KEY • format = FORMAT • fromDate = FROM_DATE • genusKey = GENUS_KEY • geoDistance = GEO_DISTANCE • identifiedBy = IDENTIFIED_BY • identifiedByID = IDENTIFIED_BY_ID • kingdomKey = KINGDON_KEY • license = LICENSE • locality = LOCALITY • modified = MODIFIED • networkKey = NETWORK_KEY • occurrenceId = OCCURRENCE_ID • occurrenceStatus = OCCURRENCE_STATUS • orderKey = ORDER_KEY • organismId = ORGANISM_ID • organismQuantity = ORGANISM_QUANTITY • organismQuantityType = ORGANISM_QUANTITY_TYPE pygbif Documentation, Release 0.6.3 • otherCatalogNumbers = OTHER_CATALOG_NUMBERS • phylumKey = PHYLUM_KEY • preparations = PREPARATIONS • programme = PROGRAMME • projectId = PROJECT_ID • protocol = PROTOCOL • publishingCountry = PUBLISHING_COUNTRY • publishingOrg = PUBLISHING_ORG • publishingOrgKey = PUBLISHING_ORG_KEY • recordedByID = RECORDED_BY_ID • recordNumber = RECORD_NUMBER • relativeOrganismQuantity = RELATIVE_ORGANISM_QUANTITY • sampleSizeUnit = SAMPLE_SIZE_UNIT • sampleSizeValue = SAMPLE_SIZE_VALUE • samplingProtocol = SAMPLING_PROTOCOL • speciesKey = SPECIES_KEY • stateProvince = STATE_PROVINCE • subgenusKey = SUBGENUS_KEY • taxonId = TAXON_ID • toDate = TO_DATE • userCountry = USER_COUNTRY • verbatimScientificName = VERBATIM_SCIENTIFIC_NAME • waterBody = WATER_BODY See the API docs http://www.gbif.org/developer/occurrence#download and the predicates docs http://www.gbif. org/developer/occurrence#predicates for more info. GBIF has a limit of 100,000 predicates and 10,000 points (in within predicates) for download queries – so if your download request is particularly complex, you may need to split it into multiple requests by one factor or another. 
Returns A dictionary of results
Usage:
from pygbif import occurrences as occ
occ.download('basisOfRecord = PRESERVED_SPECIMEN')
occ.download('taxonKey = 3119195')
occ.download('decimalLatitude > 50')
occ.download('elevation >= 9000')
occ.download('decimalLatitude >= 65')
occ.download('country = US')
occ.download('institutionCode = TLMF')
occ.download('catalogNumber = Bird.27847588')
res = occ.download(['taxonKey = 7264332', 'hasCoordinate = TRUE'])
# pass output to download_meta for more information
occ.download_meta(occ.download('decimalLatitude > 75'))
# multiple queries
gg = occ.download(['decimalLatitude >= 65', 'decimalLatitude <= -65'], pred_type = 'or')
gg = occ.download(['depth = 80', 'taxonKey = 2343454'], pred_type = 'or')
# repatriated data for Costa Rica
occ.download(['country = CR', 'repatriated = true'])
# turn off logging
import logging
logger = logging.getLogger()
logger.disabled = True
z = occ.download('elevation >= 95000')
logger.disabled = False
w = occ.download('elevation >= 10000')
# nested and complex queries with multiple predicates
## For more complex queries, it may be advantageous to format the query in JSON format.
## It must follow the predicate format described in the API documentation
## (https://www.gbif.org/developer/occurrence#download):
query = { "type": "and", "predicates": [
    { "type": "in", "key": "TAXON_KEY", "values": ["2387246", "2399391", "2364604"]},
    { "type": "isNotNull", "parameter": "YEAR"},
    { "type": "not", "predicate": { "type": "in", "key": "ISSUE",
        "values": ["RECORDED_DATE_INVALID", "TAXON_MATCH_FUZZY", "TAXON_MATCH_HIGHERRANK"] }}
]}
occ.download(query)
# The same query can also be applied in the occ.download function (including the download format specified):
occ.download(['taxonKey in ["2387246", "2399391", "2364604"]', 'year !Null',
    "issue !in ['RECORDED_DATE_INVALID', 'TAXON_MATCH_FUZZY', 'TAXON_MATCH_HIGHERRANK']"], "DWCA")

occurrences.download_meta(key, **kwargs)
Retrieves the occurrence download metadata by its unique key. Further named arguments passed on to requests.get can be included as additional arguments.
Parameters key – [str] A key generated from a request, like that from download
Usage:
from pygbif import occurrences as occ
occ.download_meta(key = "0003970-140910143529206")
occ.download_meta(key = "0000099-140929101555934")

occurrences.download_list(user=None, pwd=None, limit=20, offset=0)
Lists the downloads created by a user.
Parameters
• user – [str] A user name, look at env var GBIF_USER first
• pwd – [str] Your password, look at env var GBIF_PWD first
• limit – [int] Number of records to return. Default: 20
• offset – [int] Record number to start at. Default: 0
Usage:
from pygbif import occurrences as occ
occ.download_list(user = "sckott")
occ.download_list(user = "sckott", limit = 5)
occ.download_list(user = "sckott", offset = 21)

occurrences.download_get(key, path='.', **kwargs)
Get a download from GBIF.
Parameters
• key – [str] A key generated from a request, like that from download
• path – [str] Path to write zip file to. Default: ".", with a .zip appended to the end.
• kwargs – Further named arguments passed on to requests.get
Downloads the zip file to a directory you specify on your machine. The speed of this function is of course proportional to the size of the file to download, and affected by your internet connection speed.
This function only downloads the file. To open and read it, see https://github.com/BelgianBiodiversityPlatform/ python-dwca-reader Usage: from pygbif import occurrences as occ x=occ.download_get("0000066-140928181241064") occ.download_get("0003983-140910143529206") # turn off logging import logging logger = logging.getLogger() logger.disabled = True x = occ.download_get("0000066-140928181241064") # turn back on logger.disabled = False x = occ.download_get("0000066-140928181241064") 4.4 registry module registry module API: pygbif Documentation, Release 0.6.3 • organizations • nodes • networks • installations • datasets • dataset_metrics • dataset_suggest • dataset_search Example usage: from pygbif import registry registry.dataset_metrics(uuid='3f8a1297-3259-4700-91fc-acc4170b27ce') 4.4.1 registry API registry.datasets(type=None, uuid=None, query=None, id=None, limit=100, offset=None, **kwargs) Search for datasets and dataset metadata. Parameters • data – [str] The type of data to get. Default: all • type – [str] Type of dataset, options include OCCURRENCE, etc. • uuid – [str] UUID of the data node provider. This must be specified if data is anything other than all. • query – [str] Query term(s). Only used when data = 'all' • id – [int] A metadata document id. References http://www.gbif.org/developer/registry#datasets Usage: from pygbif import registry registry.datasets(limit=5) registry.datasets(type="OCCURRENCE") registry.datasets(uuid="a6998220-7e3a-485d-9cd6-73076bd85657") registry.datasets(data='contact', uuid="a6998220-7e3a-485d-9cd6-73076bd85657") registry.datasets(data='metadata', uuid="a6998220-7e3a-485d-9cd6-73076bd85657") registry.datasets(data='metadata', uuid="a6998220-7e3a-485d-9cd6-73076bd85657", ˓→id=598) registry.datasets(data=['deleted','duplicate']) registry.datasets(data=['deleted','duplicate'], limit=1) registry.dataset_metrics() Get details on a GBIF dataset. Parameters uuid – [str] One or more dataset UUIDs. See examples. References: http://www.gbif.org/developer/registry#datasetMetrics Usage: pygbif Documentation, Release 0.6.3 from pygbif import registry registry.dataset_metrics(uuid='3f8a1297-3259-4700-91fc-acc4170b27ce') registry.dataset_metrics(uuid='66dd0960-2d7d-46ee-a491-87b9adcfe7b1') registry.dataset_metrics(uuid=['3f8a1297-3259-4700-91fc-acc4170b27ce', '66dd0960- ˓→2d7d-46ee-a491-87b9adcfe7b1']) registry.dataset_suggest(type=None, keyword=None, owningOrg=None, publishingOrg=None, hostingOrg=None, publishingCountry=None, decade=None, limit=100, offset=None, **kwargs) Search that returns up to 20 matching datasets. Results are ordered by relevance. Parameters • q – [str] Query term(s) for full text search. The value for this parameter can be a simple word or a phrase. Wildcards can be added to the simple word parameters only, e.g. q=*puma* • type – [str] Type of dataset, options include OCCURRENCE, etc. • keyword – [str] Keyword to search by. Datasets can be tagged by keywords, which you can search on. The search is done on the merged collection of tags, the dataset keyword- Collections and temporalCoverages. SEEMS TO NOT BE WORKING ANYMORE AS OF 2016-09-02. • owningOrg – [str] Owning organization. A uuid string. See organizations() • publishingOrg – [str] Publishing organization. A uuid string. See organizations() • hostingOrg – [str] Hosting organization. A uuid string. See organizations() • publishingCountry – [str] Publishing country. • decade – [str] Decade, e.g., 1980. Filters datasets by their temporal coverage bro- ken down to decades. 
Decades are given as a full year, e.g. 1880, 1960, 2000, etc, and will return datasets wholly contained in the decade as well as those that cover the entire decade or more. Facet by decade to get the break down, e.g. /search? facet=DECADE&facet_only=true (see example below) • limit – [int] Number of results to return. Default: 300 • offset – [int] Record to start at. Default: 0 Returns A dictionary References: http://www.gbif.org/developer/registry#datasetSearch Usage: from pygbif import registry registry.dataset_suggest(q="Amazon", type="OCCURRENCE") # Suggest datasets tagged with keyword "france". registry.dataset_suggest(keyword="france") # Suggest datasets owned by the organization with key # "<KEY>" (UK NBN). registry.dataset_suggest(owningOrg="07f617d0-c688-11d8-bf62-b8a03c50a862") # Fulltext search for all datasets having the word "amsterdam" somewhere in # its metadata (title, description, etc). registry.dataset_suggest(q="amsterdam") (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) # Limited search registry.dataset_suggest(type="OCCURRENCE", limit=2) registry.dataset_suggest(type="OCCURRENCE", limit=2, offset=10) # Return just descriptions registry.dataset_suggest(type="OCCURRENCE", limit = 5, description=True) # Search by decade registry.dataset_suggest(decade=1980, limit = 30) registry.dataset_search(type=None, keyword=None, owningOrg=None, publishingOrg=None, hostingOrg=None, decade=None, publishingCountry=None, facet=None, facetMincount=None, facetMultiselect=None, hl=False, limit=100, off- set=None, **kwargs) Full text search across all datasets. Results are ordered by relevance. Parameters • q – [str] Query term(s) for full text search. The value for this parameter can be a simple word or a phrase. Wildcards can be added to the simple word parameters only, e.g. q=*puma* • type – [str] Type of dataset, options include OCCURRENCE, etc. • keyword – [str] Keyword to search by. Datasets can be tagged by keywords, which you can search on. The search is done on the merged collection of tags, the dataset keyword- Collections and temporalCoverages. SEEMS TO NOT BE WORKING ANYMORE AS OF 2016-09-02. • owningOrg – [str] Owning organization. A uuid string. See organizations() • publishingOrg – [str] Publishing organization. A uuid string. See organizations() • hostingOrg – [str] Hosting organization. A uuid string. See organizations() • publishingCountry – [str] Publishing country. • decade – [str] Decade, e.g., 1980. Filters datasets by their temporal coverage bro- ken down to decades. Decades are given as a full year, e.g. 1880, 1960, 2000, etc, and will return datasets wholly contained in the decade as well as those that cover the entire decade or more. Facet by decade to get the break down, e.g. /search? facet=DECADE&facet_only=true (see example below) • facet – [str] A list of facet names used to retrieve the 100 most frequent values for a field. Allowed facets are: type, keyword, publishingOrg, hostingOrg, decade, and publish- ingCountry. Additionally subtype and country are legal values but not yet implemented, so data will not yet be returned for them. • facetMincount – [str] Used in combination with the facet parameter. Set facetMin- count={#} to exclude facets with a count less than {#}, e.g. http://api.gbif.org/v1/dataset/ search?facet=type&limit=0&facetMincount=10000 only shows the type value ‘OCCUR- RENCE’ because ‘CHECKLIST’ and ‘METADATA’ have counts less than 10000. 
• facetMultiselect – [bool] Used in combination with the facet parameter. Set facetMultiselect=True to still return counts for values that are not currently fil- tered, e.g. http://api.gbif.org/v1/dataset/search?facet=type&limit=0&type=CHECKLIST& facetMultiselect=true still shows type values ‘OCCURRENCE’ and ‘METADATA’ even though type is being filtered by type=CHECKLIST pygbif Documentation, Release 0.6.3 • hl – [bool] Set hl=True to highlight terms matching the query when in fulltext search fields. The highlight will be an emphasis tag of class ‘gbifH1’ e.g. http://api.gbif.org/ v1/dataset/search?q=plant&hl=true Fulltext search fields include: title, keyword, country, publishing country, publishing organization title, hosting organization title, and description. One additional full text field is searched which includes information from metadata docu- ments, but the text of this field is not returned in the response. • limit – [int] Number of results to return. Default: 300 • offset – [int] Record to start at. Default: 0 Note Note that you can pass in additional faceting parameters on a per field basis. For exam- ple, if you want to limit the numbef of facets returned from a field foo to 3 results, pass in foo_facetLimit = 3. GBIF does not allow all per field parameters, but does allow some. See also examples. Returns A dictionary References: http://www.gbif.org/developer/registry#datasetSearch Usage: from pygbif import registry # Gets all datasets of type "OCCURRENCE". registry.dataset_search(type="OCCURRENCE", limit = 10) # Fulltext search for all datasets having the word "amsterdam" somewhere in # its metadata (title, description, etc). registry.dataset_search(q="amsterdam", limit = 10) # Limited search registry.dataset_search(type="OCCURRENCE", limit=2) registry.dataset_search(type="OCCURRENCE", limit=2, offset=10) # Search by decade registry.dataset_search(decade=1980, limit = 10) # Faceting ## just facets registry.dataset_search(facet="decade", facetMincount=10, limit=0) ## data and facets registry.dataset_search(facet="decade", facetMincount=10, limit=2) ## many facet variables registry.dataset_search(facet=["decade", "type"], facetMincount=10, limit=0) ## facet vars ### per variable paging x = registry.dataset_search( facet = ["decade", "type"], decade_facetLimit = 3, type_facetLimit = 3, limit = 0 ) ## highlight x = registry.dataset_search(q="plant", hl=True, limit = 10) [ z['description'] for z in x['results'] ] pygbif Documentation, Release 0.6.3 registry.installations(uuid=None, q=None, identifier=None, identifierType=None, limit=100, off- set=None, **kwargs) Installations metadata. Parameters • data – [str] The type of data to get. Default is all data. If not all, then one or more of contact, endpoint, dataset, comment, deleted, nonPublishing. • uuid – [str] UUID of the data node provider. This must be specified if data is anything other than all. • q – [str] Query nodes. Only used when data='all'. Ignored otherwise. • identifier – [fixnum] The value for this parameter can be a simple string or integer, e.g. identifier=120 • identifierType – [str] Used in combination with the identifier parameter to fil- ter identifiers by identifier type: DOI, FTP, GBIF_NODE, GBIF_PARTICIPANT, GBIF_PORTAL, HANDLER, LSID, UNKNOWN, URI, URL, UUID • limit – [int] Number of results to return. Default: 100 • offset – [int] Record to start at. 
Default: 0 Returns A dictionary References: http://www.gbif.org/developer/registry#installations Usage: from pygbif import registry registry.installations(limit=5) registry.installations(q="france") registry.installations(uuid="b77901f9-d9b0-47fa-94e0-dd96450aa2b4") registry.installations(data='contact', uuid="b77901f9-d9b0-47fa-94e0-dd96450aa2b4 ˓→") registry.installations(data='contact', uuid="2e029a0c-87af-42e6-87d7-f38a50b78201 ˓→") registry.installations(data='endpoint', uuid="b77901f9-d9b0-47fa-94e0-dd96450aa2b4 ˓→") registry.installations(data='dataset', uuid="b77901f9-d9b0-47fa-94e0-dd96450aa2b4 ˓→") registry.installations(data='deleted') registry.installations(data='deleted', limit=2) registry.installations(data=['deleted','nonPublishing'], limit=2) registry.installations(identifierType='DOI', limit=2) registry.networks(uuid=None, q=None, identifier=None, identifierType=None, limit=100, off- set=None, **kwargs) Networks metadata. Note: there’s only 1 network now, so there’s not a lot you can do with this method. Parameters • data – [str] The type of data to get. Default: all • uuid – [str] UUID of the data network provider. This must be specified if data is anything other than all. • q – [str] Query networks. Only used when data = 'all'. Ignored otherwise. pygbif Documentation, Release 0.6.3 • identifier – [fixnum] The value for this parameter can be a simple string or integer, e.g. identifier=120 • identifierType – [str] Used in combination with the identifier parameter to fil- ter identifiers by identifier type: DOI, FTP, GBIF_NODE, GBIF_PARTICIPANT, GBIF_PORTAL, HANDLER, LSID, UNKNOWN, URI, URL, UUID • limit – [int] Number of results to return. Default: 100 • offset – [int] Record to start at. Default: 0 Returns A dictionary References: http://www.gbif.org/developer/registry#networks Usage: from pygbif import registry registry.networks(limit=1) registry.networks(uuid='2b7c7b4f-4d4f-40d3-94de-c28b6fa054a6') registry.nodes(uuid=None, q=None, identifier=None, identifierType=None, limit=100, offset=None, isocode=None, **kwargs) Nodes metadata. Parameters • data – [str] The type of data to get. Default: all • uuid – [str] UUID of the data node provider. This must be specified if data is anything other than all. • q – [str] Query nodes. Only used when data = 'all' • identifier – [fixnum] The value for this parameter can be a simple string or integer, e.g. identifier=120 • identifierType – [str] Used in combination with the identifier parameter to fil- ter identifiers by identifier type: DOI, FTP, GBIF_NODE, GBIF_PARTICIPANT, GBIF_PORTAL, HANDLER, LSID, UNKNOWN, URI, URL, UUID • limit – [int] Number of results to return. Default: 100 • offset – [int] Record to start at. Default: 0 • isocode – [str] A 2 letter country code. Only used if data = 'country'. 
Returns A dictionary References http://www.gbif.org/developer/registry#nodes Usage: from pygbif import registry registry.nodes(limit=5) registry.nodes(identifier=120) registry.nodes(uuid="1193638d-32d1-43f0-a855-8727c94299d8") registry.nodes(data='identifier', uuid="03e816b3-8f58-49ae-bc12-4e18b358d6d9") registry.nodes(data=['identifier','organization','comment'], uuid="03e816b3-8f58- ˓→49ae-bc12-4e18b358d6d9") uuids = ["8cb55387-7802-40e8-86d6-d357a583c596","02c40d2a-1cba-4633-90b7- ˓→e36e5e97aba8", "7a17efec-0a6a-424c-b743-f715852c3c1f","b797ce0f-47e6-4231-b048-6b62ca3b0f55", (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) "1193638d-32d1-43f0-a855-8727c94299d8","d3499f89-5bc0-4454-8cdb-60bead228a6d", "cdc9736d-5ff7-4ece-9959-3c744360cdb3","a8b16421-d80b-4ef3-8f22-098b01a89255", "8df8d012-8e64-4c8a-886e-521a3bdfa623","b35cf8f1-748d-467a-adca-4f9170f20a4e", "03e816b3-8f58-49ae-bc12-4e18b358d6d9","073d1223-70b1-4433-bb21-dd70afe3053b", "07dfe2f9-5116-4922-9a8a-3e0912276a72","086f5148-c0a8-469b-84cc-cce5342f9242", "0909d601-bda2-42df-9e63-a6d51847ebce","0e0181bf-9c78-4676-bdc3-54765e661bb8", "109aea14-c252-4a85-96e2-f5f4d5d088f4","169eb292-376b-4cc6-8e31-9c2c432de0ad", "1e789bc9-79fc-4e60-a49e-89dfc45a7188","1f94b3ca-9345-4d65-afe2-4bace93aa0fe"] [ registry.nodes(data='identifier', uuid=x) for x in uuids ] registry.organizations(uuid=None, q=None, identifier=None, identifierType=None, limit=100, off- set=None, **kwargs) Organizations metadata. Parameters • data – [str] The type of data to get. Default is all data. If not all, then one or more of contact, endpoint, identifier, tag, machineTag, comment, hostedDataset, ownedDataset, deleted, pending, nonPublishing. • uuid – [str] UUID of the data node provider. This must be specified if data is anything other than all. • q – [str] Query nodes. Only used when data='all'. Ignored otherwise. • identifier – [fixnum] The value for this parameter can be a simple string or integer, e.g. identifier=120 • identifierType – [str] Used in combination with the identifier parameter to fil- ter identifiers by identifier type: DOI, FTP, GBIF_NODE, GBIF_PARTICIPANT, GBIF_PORTAL, HANDLER, LSID, UNKNOWN, URI, URL, UUID • limit – [int] Number of results to return. Default: 100 • offset – [int] Record to start at. Default: 0 Returns A dictionary References: http://www.gbif.org/developer/registry#organizations Usage: from pygbif import registry registry.organizations(limit=5) registry.organizations(q="france") registry.organizations(identifier=120) registry.organizations(uuid="e2e717bf-551a-4917-bdc9-4fa0f342c530") registry.organizations(data='contact', uuid="e2e717bf-551a-4917-bdc9-4fa0f342c530 ˓→") registry.organizations(data='deleted') registry.organizations(data='deleted', limit=2) registry.organizations(data=['deleted','nonPublishing'], limit=2) registry.organizations(identifierType='DOI', limit=2) 4.5 species module species module API: pygbif Documentation, Release 0.6.3 • name_backbone • name_suggest • name_usage • name_lookup • name_parser Example usage: from pygbif import species species.name_suggest(q='Puma concolor') 4.5.1 species API species.name_backbone(rank=None, kingdom=None, phylum=None, clazz=None, order=None, fam- ily=None, genus=None, strict=False, verbose=False, offset=None, limit=100, **kwargs) Lookup names in the GBIF backbone taxonomy. Parameters • name – [str] Full scientific name potentially with authorship (required) • rank – [str] The rank given as our rank enum. 
(optional) • kingdom – [str] If provided default matching will also try to match against this if no direct match is found for the name alone. (optional) • phylum – [str] If provided default matching will also try to match against this if no direct match is found for the name alone. (optional) • class – [str] If provided default matching will also try to match against this if no direct match is found for the name alone. (optional) • order – [str] If provided default matching will also try to match against this if no direct match is found for the name alone. (optional) • family – [str] If provided default matching will also try to match against this if no direct match is found for the name alone. (optional) • genus – [str] If provided default matching will also try to match against this if no direct match is found for the name alone. (optional) • strict – [bool] If True it (fuzzy) matches only the given name, but never a taxon in the upper classification (optional) • verbose – [bool] If True show alternative matches considered which had been rejected. • offset – [int] Record to start at. Default: 0 • limit – [int] Number of results to return. Default: 100 If you are looking for behavior similar to the GBIF website when you search for a name, name_backbone may be what you want. For example, a search for Lantanophaga pusillidactyla on the GBIF website and with name_backbone will give back as a first result the correct name Lantanophaga pusillidactylus. A list for a single taxon with many slots (with verbose=False - default), or a list of length two, first element for the suggested taxon match, and a data.frame with alternative name suggestions resulting from fuzzy matching (with verbose=True). pygbif Documentation, Release 0.6.3 If you don’t get a match GBIF gives back a list of length 3 with slots synonym, confidence, and matchType='NONE'. reference: https://www.gbif.org/developer/species#searching Usage: from pygbif import species species.name_backbone(name='Helianthus annuus', kingdom='plants') species.name_backbone(name='Helianthus', rank='genus', kingdom='plants') species.name_backbone(name='Poa', rank='genus', family='Poaceae') # Verbose - gives back alternatives species.name_backbone(name='Helianthus annuus', kingdom='plants', verbose=True) # Strictness species.name_backbone(name='Poa', kingdom='plants', verbose=True, strict=False) species.name_backbone(name='Helianthus annuus', kingdom='plants', verbose=True, ˓→strict=True) # Non-existent name species.name_backbone(name='Aso') # Multiple equal matches species.name_backbone(name='Oenante') species.name_suggest(datasetKey=None, rank=None, limit=100, offset=None, **kwargs) A quick and simple autocomplete service that returns up to 20 name usages by doing prefix matching against the scientific name. Results are ordered by relevance. Parameters • q – [str] Simple search parameter. The value for this parameter can be a simple word or a phrase. Wildcards can be added to the simple word parameters only, e.g. q=*puma* (Required) • datasetKey – [str] Filters by the checklist dataset key (a uuid, see examples) • rank – [str] A taxonomic rank. 
One of class, cultivar, cultivar_group, domain, family, form, genus, informal, infrageneric_name, infraorder, infraspecific_name, infrasubspecific_name, kingdom, order, phylum, section, series, species, strain, subclass, subfamily, subform, subgenus, subkingdom, suborder, subphylum, subsection, subseries, subspecies, subtribe, subvariety, superclass, superfamily, superorder, superphylum, suprageneric_name, tribe, unranked, or variety. • limit – [fixnum] Number of records to return. Maximum: 1000. (optional) • offset – [fixnum] Record number to start at. (optional) Returns A dictionary References: http://www.gbif.org/developer/species#searching Usage: from pygbif import species species.name_suggest(q='Puma concolor') x = species.name_suggest(q='Puma') (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) species.name_suggest(q='Puma', rank="genus") species.name_suggest(q='Puma', rank="subspecies") species.name_suggest(q='Puma', rank="species") species.name_suggest(q='Puma', rank="infraspecific_name") species.name_suggest(q='Puma', limit=2) species.name_lookup(rank=None, higherTaxonKey=None, status=None, isExtinct=None, habi- tat=None, nameType=None, datasetKey=None, nomenclaturalStatus=None, limit=100, offset=None, facet=False, facetMincount=None, facetMultise- lect=None, type=None, hl=False, verbose=False, **kwargs) Lookup names in all taxonomies in GBIF. This service uses fuzzy lookup so that you can put in partial names and you should get back those things that match. See examples below. Parameters • q – [str] Query term(s) for full text search (optional) • rank – [str] CLASS, CULTIVAR, CULTIVAR_GROUP, DOMAIN, FAMILY, FORM, GENUS, INFORMAL, INFRAGENERIC_NAME, INFRAORDER, INFRASPECIFIC_NAME, INFRASUBSPECIFIC_NAME, KINGDOM, ORDER, PHYLUM, SECTION, SERIES, SPECIES, STRAIN, SUBCLASS, SUBFAMILY, SUBFORM, SUBGENUS, SUBKINGDOM, SUBORDER, SUBPHYLUM, SUBSECTION, SUBSERIES, SUBSPECIES, SUBTRIBE, SUBVARIETY, SUPERCLASS, SUPERFAMILY, SUPERORDER, SUPERPHYLUM, SUPRAGENERIC_NAME, TRIBE, UNRANKED, VARIETY (optional) • verbose – [bool] If True show alternative matches considered which had been rejected. • higherTaxonKey – [str] Filters by any of the higher Linnean rank keys. Note this is within the respective checklist and not searching nub keys across all checklists (optional) • status – [str] (optional) Filters by the taxonomic status as one of: – ACCEPTED – DETERMINATION_SYNONYM Used for unknown child taxa referred to via spec, ssp, . . . – DOUBTFUL Treated as accepted, but doubtful whether this is correct. – HETEROTYPIC_SYNONYM More specific subclass of SYNONYM. – HOMOTYPIC_SYNONYM More specific subclass of SYNONYM. – INTERMEDIATE_RANK_SYNONYM Used in nub only. – MISAPPLIED More specific subclass of SYNONYM. – PROPARTE_SYNONYM More specific subclass of SYNONYM. – SYNONYM A general synonym, the exact type is unknown. • isExtinct – [bool] Filters by extinction status (e.g. isExtinct=True) • habitat – [str] Filters by habitat. One of: marine, freshwater, or terrestrial (optional) • nameType – [str] (optional) Filters by the name type as one of: – BLACKLISTED surely not a scientific name. – CANDIDATUS Candidatus is a component of the taxonomic name for a bacterium that cannot be maintained in a Bacteriology Culture Collection. pygbif Documentation, Release 0.6.3 – CULTIVAR a cultivated plant name. – DOUBTFUL doubtful whether this is a scientific name at all. – HYBRID a hybrid formula (not a hybrid name). 
– INFORMAL a scientific name with some informal addition like “cf.” or indetermined like Abies spec. – SCINAME a scientific name which is not well formed. – VIRUS a virus name. – WELLFORMED a well formed scientific name according to present nomenclatural rules. • datasetKey – [str] Filters by the dataset’s key (a uuid) (optional) • nomenclaturalStatus – [str] Not yet implemented, but will eventually allow for fil- tering by a nomenclatural status enum • limit – [fixnum] Number of records to return. Maximum: 1000. (optional) • offset – [fixnum] Record number to start at. (optional) • facet – [str] A list of facet names used to retrieve the 100 most frequent values for a field. Allowed facets are: datasetKey, higherTaxonKey, rank, status, isExtinct, habitat, and nameType. Additionally threat and nomenclaturalStatus are legal values but not yet implemented, so data will not yet be returned for them. (optional) • facetMincount – [str] Used in combination with the facet parameter. Set facetMincount={#} to exclude facets with a count less than {#}, e.g. http://bit.ly/ 1bMdByP only shows the type value ACCEPTED because the other statuses have counts less than 7,000,000 (optional) • facetMultiselect – [bool] Used in combination with the facet parameter. Set facetMultiselect=True to still return counts for values that are not currently fil- tered, e.g. http://bit.ly/19YLXPO still shows all status values even though status is being filtered by status=ACCEPTED (optional) • type – [str] Type of name. One of occurrence, checklist, or metadata. (op- tional) • hl – [bool] Set hl=True to highlight terms matching the query when in fulltext search fields. The highlight will be an emphasis tag of class gbifH1 e.g. q='plant', hl=True. Fulltext search fields include: title, keyword, country, publishing country, publishing organization title, hosting organization title, and description. One additional full text field is searched which includes information from metadata documents, but the text of this field is not returned in the response. 
(optional) Returns A dictionary References http://www.gbif.org/developer/species#searching Usage: from pygbif import species # Look up names like mammalia species.name_lookup(q='mammalia') # Paging species.name_lookup(q='mammalia', limit=1) (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) species.name_lookup(q='mammalia', limit=1, offset=2) # large requests, use offset parameter first = species.name_lookup(q='mammalia', limit=1000) second = species.name_lookup(q='mammalia', limit=1000, offset=1000) # Get all data and parse it, removing descriptions which can be quite long species.name_lookup('Helianthus annuus', rank="species", verbose=True) # Get all data and parse it, removing descriptions field which can be quite long out = species.name_lookup('Helianthus annuus', rank="species") res = out['results'] [ z.pop('descriptions', None) for z in res ] res # Fuzzy searching species.name_lookup(q='Heli', rank="genus") # Limit records to certain number species.name_lookup('Helianthus annuus', rank="species", limit=2) # Query by habitat species.name_lookup(habitat = "terrestrial", limit=2) species.name_lookup(habitat = "marine", limit=2) species.name_lookup(habitat = "freshwater", limit=2) # Using faceting species.name_lookup(facet='status', limit=0, facetMincount='70000') species.name_lookup(facet=['status', 'higherTaxonKey'], limit=0, facetMincount= ˓→'700000') species.name_lookup(facet='nameType', limit=0) species.name_lookup(facet='habitat', limit=0) species.name_lookup(facet='datasetKey', limit=0) species.name_lookup(facet='rank', limit=0) species.name_lookup(facet='isExtinct', limit=0) # text highlighting species.name_lookup(q='plant', hl=True, limit=30) # Lookup by datasetKey species.name_lookup(datasetKey='3f8a1297-3259-4700-91fc-acc4170b27ce') species.name_usage(name=None, data=’all’, language=None, datasetKey=None, uuid=None, sour- ceId=None, rank=None, shortname=None, limit=100, offset=None, **kwargs) Lookup details for specific names in all taxonomies in GBIF. Parameters • key – [fixnum] A GBIF key for a taxon • name – [str] Filters by a case insensitive, canonical namestring, e.g. ‘Puma concolor’ • data – [str] The type of data to get. Default: all. Options: all, verbatim, name, parents, children, related, synonyms, descriptions, distributions, media, references, speciesProfiles, vernacularNames, typeSpecimens, root • language – [str] Language. Expects a ISO 639-1 language codes using 2 lower case pygbif Documentation, Release 0.6.3 letters. Languages returned are 3 letter codes. The language parameter only applies to the /species, /species/{int}, /species/{int}/parents, /species/ {int}/children, /species/{int}/related, /species/{int}/synonyms routes (here routes are determined by the data parameter). • datasetKey – [str] Filters by the dataset’s key (a uuid) • uuid – [str] A uuid for a dataset. Should give exact same results as datasetKey. • sourceId – [fixnum] Filters by the source identifier. • rank – [str] Taxonomic rank. Filters by taxonomic rank as one of: CLASS, CULTIVAR, CULTIVAR_GROUP, DOMAIN, FAMILY, FORM, GENUS, INFORMAL, INFRAGENERIC_NAME, INFRAORDER, INFRASPECIFIC_NAME, INFRASUBSPECIFIC_NAME, KINGDOM, ORDER, PHYLUM, SECTION, SERIES, SPECIES, STRAIN, SUBCLASS, SUBFAMILY, SUBFORM, SUBGENUS, SUBKINGDOM, SUBORDER, SUBPHYLUM, SUBSECTION, SUBSERIES, SUBSPECIES, SUBTRIBE, SUBVARIETY, SUPERCLASS, SUPERFAMILY, SUPERORDER, SUPERPHYLUM, SUPRAGENERIC_NAME, TRIBE, UNRANKED, VARIETY • shortname – [str] A short name..need more info on this? 
• limit – [fixnum] Number of records to return. Default: 100. Maximum: 1000. (op- tional) • offset – [fixnum] Record number to start at. (optional) References: See http://www.gbif.org/developer/species#nameUsages for details Usage: from pygbif import species species.name_usage(key=1) # Name usage for a taxonomic name species.name_usage(name='Puma', rank="GENUS") # All name usages species.name_usage() # References for a name usage species.name_usage(key=2435099, data='references') # Species profiles, descriptions species.name_usage(key=5231190, data='speciesProfiles') species.name_usage(key=5231190, data='descriptions') species.name_usage(key=2435099, data='children') # Vernacular names for a name usage species.name_usage(key=5231190, data='vernacularNames') # Limit number of results returned species.name_usage(key=5231190, data='vernacularNames', limit=3) # Search for names by dataset with datasetKey parameter species.name_usage(datasetKey="<KEY>") species.name_parser(**kwargs) Parse taxon names using the GBIF name parser pygbif Documentation, Release 0.6.3 Parameters name – [str] A character vector of scientific names. (required) reference: http://www.gbif.org/developer/species#parser Usage: from pygbif import species species.name_parser('x Agropogon littoralis') species.name_parser(['Arrhenatherum elatius var. elatius', 'Secale cereale subsp. cereale', 'Secale cereale ssp. cereale', 'Vanessa atalanta (Linnaeus, 1758)']) 4.6 maps module maps module API: • map Example usage: from pygbif import maps maps.map(taxonKey = 2435098) 4.6.1 maps API maps.map(z=0, x=0, y=0, format=’@1x.png’, srs=’EPSG:4326’, bin=None, hexPerTile=None, style=’classic.point’, taxonKey=None, country=None, publishingCountry=None, pub- lisher=None, datasetKey=None, year=None, basisOfRecord=None, **kwargs) GBIF maps API Parameters • source – [str] Either density for fast, precalculated tiles, or adhoc for any search • z – [str] zoom level • x – [str] longitude • y – [str] latitude • format – [str] format of returned data. One of: – .mvt - vector tile – @Hx.png - 256px raster tile (for legacy clients) – @1x.png - 512px raster tile, @2x.png for a 1024px raster tile – @2x.png - 1024px raster tile – @3x.png - 2048px raster tile – @4x.png - 4096px raster tile • srs – [str] Spatial reference system. One of: – EPSG:3857 (Web Mercator) – EPSG:4326 (WGS84 plate caree) – EPSG:3575 (Arctic LAEA) pygbif Documentation, Release 0.6.3 – EPSG:3031 (Antarctic stereographic) • bin – [str] square or hex to aggregate occurrence counts into squares or hexagons. Points by default. • hexPerTile – [str] sets the size of the hexagons (the number horizontally across a tile) • squareSize – [str] sets the size of the squares. Choose a factor of 4096 so they tessalate correctly: probably from 8, 16, 32, 64, 128, 256, 512. • style – [str] for raster tiles, choose from the available styles. Defaults to classic.point. • taxonKey – [int] A GBIF occurrence identifier • datasetKey – [str] The occurrence dataset key (a uuid) • country – [str] The 2-letter country code (as per ISO-3166-1) of the country in which the occurrence was recorded. See here http://en.wikipedia.org/wiki/ISO_3166-1_alpha-2 • basisOfRecord – [str] Basis of record, as defined in the BasisOfRecord enum http://gbif.github.io/ gbif-api/apidocs/org/gbif/api/vocabulary/BasisOfRecord.html Acceptable values are – FOSSIL_SPECIMEN An occurrence record describing a fossilized specimen. – HUMAN_OBSERVATION An occurrence record describing an observation made by one or more people. 
– LIVING_SPECIMEN An occurrence record describing a living specimen. – MACHINE_OBSERVATION An occurrence record describing an observation made by a machine. – MATERIAL_CITATION An occurrence record based on a reference to a scholarly pub- lication. – OBSERVATION An occurrence record describing an observation. – OCCURRENCE An existence of an organism at a particular place and time. No more specific basis. – PRESERVED_SPECIMEN An occurrence record describing a preserved specimen. • year – [int] The 4 digit year. A year of 98 will be interpreted as AD 98. Supports range queries, smaller,larger (e.g., 1990,1991, whereas 1991,1990 wouldn’t work) • publishingCountry – [str] The 2-letter country code (as per ISO-3166-1) of the coun- try in which the occurrence was recorded. Returns An object of class GbifMap For mvt format, see https://github.com/tilezen/mapbox-vector-tile to decode, and example below Usage: from pygbif import maps out = maps.map(taxonKey = 2435098) out.response out.path out.img out.plot() out = maps.map(taxonKey = 2480498, year = range(2008, 2011+1)) (continues on next page) pygbif Documentation, Release 0.6.3 (continued from previous page) out.response out.path out.img out.plot() # srs maps.map(taxonKey = 2480498, year = 2010, srs = "EPSG:3857") # bin maps.map(taxonKey = 212, year = 1998, bin = "hex", hexPerTile = 30, style = "classic-noborder.poly") # style maps.map(taxonKey = 2480498, style = "purpleYellow.point").plot() # basisOfRecord maps.map(taxonKey = 2480498, year = 2010, basisOfRecord = "HUMAN_OBSERVATION", bin = "hex", hexPerTile = 500).plot() maps.map(taxonKey = 2480498, year = 2010, basisOfRecord = ["HUMAN_OBSERVATION", "LIVING_SPECIMEN"], hexPerTile = 500, bin = "hex").plot() # map vector tiles, gives back raw bytes from pygbif import maps x = maps.map(taxonKey = 2480498, year = 2010, format = ".mvt") x.response x.path x.img # None import mapbox_vector_tile mapbox_vector_tile.decode(x.response.content) 4.7 utils module utils module API: • wkt_rewind Example usage: from pygbif import utils x = 'POLYGON((144.6 13.2, 144.6 13.6, 144.9 13.6, 144.9 13.2, 144.6 13.2))' utils.wkt_rewind(x) 4.7.1 utils API utils.wkt_rewind(digits=None) reverse WKT winding order Parameters • x – [str] WKT string • digits – [int] number of digits after decimal to use for the return string. by default, we use the mean number of digits in your string. Returns a string pygbif Documentation, Release 0.6.3 Usage: from pygbif import wkt_rewind x = 'POLYGON((144.6 13.2, 144.6 13.6, 144.9 13.6, 144.9 13.2, 144.6 13.2))' wkt_rewind(x) wkt_rewind(x, digits = 0) wkt_rewind(x, digits = 3) wkt_rewind(x, digits = 7) pygbif modules Introduction to pygbif modules. occurrence module The occurrence module: core GBIF occurrence data, including count, search, and download APIs. registry module The registry module: including datasets, installations, networks, nodes, and organizations. species module The species module: including name search, lookup, suggest, usage, and backbone search. maps module The maps module: including map. utils module The utils module: including wkt_rewind. 
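As a connective example (not part of the pygbif reference itself), the short sketch below chains the modules summarized above: it resolves a scientific name against the GBIF backbone, counts occurrences for the matched taxon, and fetches a map tile. The species name, the 'usageKey' field access, and the map style are illustrative assumptions based on the examples earlier in this manual.
from pygbif import species, occurrences, maps

# 1. resolve a name to a GBIF backbone key ('usageKey' is assumed to be the
#    backbone match key returned by name_backbone)
match = species.name_backbone(name='Helianthus annuus', kingdom='plants')
key = match['usageKey']

# 2. count how many occurrence records GBIF holds for that taxon
n = occurrences.count(taxonKey=key)
print(key, n)

# 3. request a precalculated raster map tile for the same taxon and plot it
out = maps.map(taxonKey=key, style='classic.point')
out.plot()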
CHAPTER 5 All the rest 5.1 Changelog 5.1.1 0.6.3 (2023-05-25) • added support for predicates: isNull, isNotNull, in and not #92, #102 and #103 • added support for nested queries/dictionaries #104 • deprecated the add_predicate function and added add_pred_dict to accomodate for newly supported predicates to ensure that the arguments that are sent are added in the payload function #108 • added support for multiple download formats #105 • updated operators and look-up tables #107 • included documentation on newly supported predicates and dictionaries #106 5.1.2 0.6.2 (2023-01-24) • update to fix requesting GBIF downloads • minor documentation updates #95 and #99 5.1.3 0.6.1 (2022-06-23) • update to fix broken dependencies #93 • minor documentation updates pygbif Documentation, Release 0.6.3 5.1.4 0.6.0 (2021-07-08) • Fix for occurrences.download when giving geometry as a string rather than using add_geometry; predicates were being split on whitespace, which doesn’t work for WKT #81 #84 • Moved to using the logging module instead of print() for giving information on occurrence download methods #78 • Clarify that occurrences.count for length 1 inputs only; see occurrences.search for > 1 value #75 #77 • Improved documentation for species.name_usage method, mostly for the language parameter #68 • Gains download method download_cancel for cancelling/deleting a download request #59 5.1.5 0.5.0 (2020-09-29) • occurrences.search now supports recordedByID and identifiedByID search parameters #62 • clean up the Contributing file, thanks @niconoe #64 • clean up internal imports in the library, thanks @niconoe #65 • fix usage of is and ==, was using them inappropriately sometimes (via https://realpython.com/ python-is-identity-vs-equality/), #69 • remove redundant parameter in a doc string, thanks @faroit #71 • make a test for internal fxn gbif_GET_write more general to avoid errors if GBIF changes content type response header slightly #72 5.1.6 0.4.0 (2019-11-20) • changed base url to https for all requests; was already https for maps and downloads in previous versions • occurrences, species, and registry modules gain docstrings with brief summary of each method • pygbif gains ability to cache http requests. caching is off by default. See ?pygbif.caching for all the details #52 #56 via @nleguillarme • made note in docs that if you are trying to get the same behavior as the GBIF website for name searching, species.name_backbone is likely what you want #55 thanks @qgroom • for parameters that expect a bool, convert them to lowercase strings internally before doing HTTP requests 5.1.7 0.3.0 (2019-01-25) • pygbif is Python 3 only now #19 • Gains maps module with maps.map method for working with the GBIF maps API #41 #49 • Gains new module utils with one method wkt_rewind #46 thanks @aubreymoore for the inspiration • Fixed bug in registry.installations: typo in one of the parameters identifierTyp instead of identifierType #48 thanks @data-biodiversity-aq • Link to GitHub issues from Changelog • Fix a occurrence download test #47 • Much more thorough docs #25 pygbif Documentation, Release 0.6.3 5.1.8 0.2.0 (2016-10-18) • Download methods much improved #16 #27 thanks @jlegind @stijnvanhoey @peterdesmet ! 
• MULTIPOLYGON now supported in geometry parameter #35 • Fixed docs for occurrences.get, and occurrences.get_verbatim, occurrences.get_fragment and demo that used occurrence keys that no longer exist in GBIF #39 • Added organizations method to registry module #12 • Added remainder of datasets methods: registry.dataset_search (including faceting support #37) and reg- istry.dataset_suggest, for the /dataset/search and /dataset/suggest routes, respectively #40 • Added remainder of species methods: species.name_lookup (including faceting support #38) and species.name_usage, for the /species/search and /species routes, respectively #18 • Added more tests to cover new methods • Changed species.name_suggest to give back data stucture as received from GBIF. We used to parse out the classification data, but for simplicity and speed, that is left up to the user now. • start parameter in species.name_suggest, occurrences.download_list, registry.organizations, registry.nodes, reg- istry.networks, and registry.installations, changed to offset to match GBIF API and match usage throughout remainder of pygbif 5.1.9 0.1.5.4 (2016-10-01) • Added many new occurrence.search parameters, including repatriated, kingdomKey, phylumKey, classKey, or- derKey, familyKey, genusKey, subgenusKey, establishmentMeans, facet, facetMincount, facetMultiselect, and support for facet paging via **kwargs #30 #34 • Fixes to **kwargs in occurrence.search so that facet parameters can be parsed correctly and requests GET request options are collected correctly #36 • Added spellCheck parameter to occurrence.search that goes along with the q parameter to optionally spell check full text searches #31 5.1.10 0.1.4 (2016-06-04) • Added variable types throughout docs • Changed default limit value to 300 for occurrences.search method • tox now included, via @xrotwang #20 • Added more registry methods #11 • Started occurrence download methods #16 • Added more names methods #18 • All requests now send user-agent headers with requests and pygbif versions #13 • Bug fix for occurrences.download_get #23 • Fixed bad example for occurrences.get #22 • Fixed wheel to be universal for 2 and 3 #10 • Improved documentation a lot, autodoc methods now pygbif Documentation, Release 0.6.3 5.1.11 0.1.1 (2015-11-03) • Fixed distribution for pypi 5.1.12 0.1.0 (2015-11-02) • First release 5.2 Contributors • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> 5.3 Contributing 5.3.1 Bug reports Please report bug reports on our issue tracker. 5.3.2 Feature requests Please put feature requests on our issue tracker. 5.3.3 Pull requests When you submit a PR you’ll see a template that pops up - it’s reproduced here. • Provide a general summary of your changes in the Title • Describe your changes in detail • If the PR closes an issue make sure include e.g., fix #4 or similar, or if just relates to an issue make sure to mention it like #4 • If introducing a new feature or changing behavior of existing methods/functions, include an example if possible to do in brief form • Did you remember to include tests? Unless you’re changing docs/grammar, please include new tests for your change pygbif Documentation, Release 0.6.3 5.3.4 Writing tests We’re using nose for testing. See the nose docs for help on contributing to or writing tests. Before running tests for the first time, you’ll need install pygbif dependencies, but also nose and a couple other packages: $ pip install -e . 
$ pip install nose vcrpy coverage The Makefile has a task for testing under Python 3: $ make test 5.3.5 Code formatting We’re using the Black formatter, so make sure you use that before submitting code - there’s lots of text editor integra- tions, a command line tool, etc. 5.4 Contributor Code of Conduct As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion. Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory com- ments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers. This Code of Conduct is adapted from the Contributor Covenant (http:contributor-covenant.org), version 1.0.0, avail- able at http://contributor-covenant.org/version/1/0/0/ 5.5 LICENSE Copyright (C) 2019 <NAME> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documen- tation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. pygbif Documentation, Release 0.6.3 THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PAR- TICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFT- WARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Changelog See what has changed in recent pygbif versions. Contributors pygbif contributors. Contributing Learn how to contribute to the pygbif project. Contributor Code of Conduct Expected behavior in this community. By participating in this project you agree to abide by its terms. LICENSE The pygbif license. 5.6 Indices and tables • genindex • modindex • search Python Module Index p pygbif, 33 49 pygbif Documentation, Release 0.6.3 50 Python Module Index
nplplot
cran
R
Package ‘nplplot’ October 13, 2022
Version 4.6
Date 2022-05-18
Title Plotting Linkage and Association Results
Author <NAME> <<EMAIL>>, <NAME>, <NAME>, <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description Provides routines for plotting linkage and association results along a chromosome, with marker names displayed along the top border. There are also routines for generating BED and BedGraph custom tracks for viewing in the UCSC genome browser. The data reformatting program Mega2 uses this package to plot output from a variety of programs.
License GPL (>= 3)
URL https://watson.hgen.pitt.edu/register/
NeedsCompilation no
RoxygenNote 7.1.2
Repository CRAN
Date/Publication 2022-05-18 22:30:03 UTC

R topics documented:
bedplot
genomeplot
lods1
lods2
nplplot
nplplot.multi
nplplot.old
prepareplot

bedplot Creation of BED and BedGraph custom tracks
Description
Generates matched sets of files for linkage or association statistics along a chromosome for viewing in the UCSC genome browser, from an input file containing a table of marker names, physical positions and one or more statistical scores.
Usage
bedplot(bed.data)
Arguments
bed.data File containing a table of marker names, physical positions and scores.
Details
bed.data example:
Marker Position TRAIT_ALL
M1 144255 0.670
- 144305 0.640
M3 144355 0.590
- 144378 0.600
M2 144400 0.610
Bedplot creates two types of files: a BED.* file containing a custom BED annotation track, and a BedGraph.* file containing a custom BedGraph annotation track. These files have the same suffix as the input bed.data file. When there are multiple scores in the bed.data file, a matched pair of BED and BedGraph track files is created for each score, labelled with the score names as well as the chromosome numbers, e.g. BedGraph.score1.* and BED.score1.*, BedGraph.score2.* and BED.score2.*, etc.
Value
TRUE or FALSE depending on whether the run succeeds.
Examples
## Not run: bedplot("bed.data.05")

genomeplot Creation of Genome Graph files
Description
The genomeplot function generates two formatted files, one containing "chromosome base" formatted genome data and the other containing marker-specific results with dbSNP SNP IDs, for displaying genome-wide data sets in the UCSC genome browser.
Usage
genomeplot(gg.data)
Arguments
gg.data A file containing chromosome, marker, physical position and scores.
Details
gg.data example:
Chromosome Marker Position TRAIT_ALL
5 M1 0.000 0.670
5 - 2.500 0.640
5 M3 5.000 0.590
5 - 6.500 0.600
5 M2 8.000 0.610
8 M4 0.000 0.670
8 - 2.500 0.640
8 M6 5.000 0.590
8 - 6.500 0.600
8 M5 8.000 0.610
Two files are created, "GG.positions.all" for the "chromosome base" format, and "GG.markers.all" for the marker-names based format. When there are multiple scores in the gg.data file, this results in matched pairs of files, one for each score, labelled with the score names, e.g. GG.positions.score1.all and GG.markers.score1.all, GG.positions.score2.all and GG.markers.score2.all, and so on.
Value
TRUE or FALSE depending on whether the run succeeds.
Examples
## Not run: genomeplot("GG.data.all")

lods1 LOD score table for chromosome 1
Description
This is a data frame with the first two columns containing marker names and positions, followed by three columns of LOD scores.
Usage
data(lods1)
Format
There are 100 markers in the table.

lods2 LOD score table for chromosome 2
Description
This is a data frame with the first two columns containing marker names and positions, followed by three columns of LOD scores.
Usage
data(lods2)
Format
There are 87 markers in the table.

nplplot Plotting statistics along a chromosome
Description
Plots linkage or association statistics along a chromosome, contained within a data frame or a file. Marker names are displayed along the top border.
Usage
nplplot(plotdata=NULL, filename=NULL, yline=2.0, ymin=0, ymax=3.0,
        header=TRUE, yfix=FALSE, title=NULL, draw.lgnd=TRUE,
        xlabl="", ylabl="", lgndx=NULL, lgndy=NULL, lgndtxt=NULL,
        cex.legend = 0.7, cex.axis=0.7, tcl=1, bw=TRUE, my.colors=NULL,
        ltypes=NULL, ptypes=NULL, na.rm=TRUE, plot.width=0.0, ...)
Arguments
plotdata A data frame containing marker names in the first column, marker map positions in the second column, and statistical scores in column 3 onwards.
filename A table format file containing the plot data as described above.
header TRUE or FALSE depending on whether the plotdata or file has a header line.
yline Y-value for displaying a horizontal cut-off line. If yfix is set to TRUE and yline falls outside of [ymin, ymax], then the cut-off line is omitted.
ymin, ymax Y-axis minimum and maximum values. If non-NULL values are provided, and yfix is set to TRUE, then the plot area will be cropped to these values. If yfix is set to FALSE, then ymin and ymax values are ignored.
yfix TRUE or FALSE to denote whether the plot area should be cropped to the ymin, ymax values. This has no effect if ymin, ymax values are NULL.
title Used as the subtitle of the plot.
xlabl X-axis label. May interfere with the display of the subtitle provided as the title argument.
ylabl Y-axis label.
draw.lgnd TRUE or FALSE denoting whether a plot legend should be displayed.
lgndx X coordinate for the legend box, passed to the legend command. Ignored if draw.lgnd is set to FALSE. If set to NULL with draw.lgnd set to TRUE, the X-coordinate is automatically calculated.
lgndy Y coordinate for the legend box, passed to the legend command. Ignored if draw.lgnd is set to FALSE. If set to NULL with draw.lgnd set to TRUE, the Y-coordinate is automatically calculated.
lgndtxt Vector of strings to use in the legend.
cex.legend Character scaling for the legend, passed as the cex argument to the legend command.
cex.axis Character scaling for the axis, passed to the axis command for drawing the top border.
tcl Length of ticks for the top border, passed to the axis command.
bw TRUE or FALSE depending on whether plots should be drawn in black and white or in color. If set to FALSE, then the colors defined by my.colors are used.
my.colors Vector of color specifications as described in the par command. Ignored if bw above is set to TRUE. If bw is set to FALSE and my.colors is set to NULL, the rainbow palette will be used.
ltypes Vector of line types for the plots. Each non-zero line type is passed on to a lines command. Use 0 or 'none' if a line is to be skipped. If NULL, no lines will be drawn. For line types see the par command. If set to "default", line types 1 through the number of plots are used.
ptypes Vector of characters giving the point types, to be passed onto the points command. Use 'none' if no points are to be drawn for a score column. If NULL, no points will be displayed. If both the line-type and point-type specifications for a results column are set to 'none', that column will not be plotted.
na.rm TRUE or FALSE depending on whether points with Y-coordinates set to NA should be skipped. Setting na.rm to TRUE eliminates discontinuities in the plots.
plot.width A number giving the width of the plot in inches.
This is used to decide whether some marker names should be dynamically hidden if they are too close to each other along the top border. If set to 0, the default page-size is used to set the width.
... Further graphical parameters to be passed onto the 'plot', 'lines' and 'points' commands.
Details
The nplplot function draws multiple curves within a single plot by automatically calling 'plot', 'lines', and 'points' multiple times, thus making it easy for the user to plot many columns of results using a single plot command. It is intended for the display of linkage and association analysis results such as LOD scores and P-values. It allows the marker names to be displayed along the top border of the plot, as well as a significance threshold line. The input plot data has to be in a specific tabular format with each column separated by white-space. Here is an example:
Marker Position score1 score2 score3
d1s228 0.00 0.546 0.345 0.142
d1s429 1.00 0.346 0.335 0.252
d1s347 2.00 0.446 0.245 0.342
This example file contains a header, therefore the header argument should be set to TRUE. Lines 2-4 contain scores at various marker positions. Missing scores can be denoted with either "." or "NA". The position column cannot have missing data. There can be any number of score columns within a file; these will be plotted as separate curves within the same plot. Each file is plotted as a separate plot.
Value
TRUE or FALSE depending on whether the plot data was successfully plotted.
See Also
nplplot.multi, nplplot.old
Examples
# plot with legend
par(omi=c(0.05, 0.05, 0.5, 0.05))
data(lods1, package="nplplot")
nplplot(plotdata=lods1, draw.lgnd=TRUE)
# plot without legend
data(lods2, package="nplplot")
nplplot(plotdata=lods2, draw.lgnd=FALSE)
# plotting from a data file
datadir <- paste(system.file("data", package="nplplot"), .Platform$file.sep, sep="")
nplplot(filename=paste(datadir, "lods2.txt.gz", sep=""))

nplplot.multi Plotting linkage or association statistics for multiple results files
Description
Wrapper function for the 'nplplot' function. Creates multiple plots from a list of plot files, with custom graphical parameters set by header files.
Usage
nplplot.multi(filenames, plotdata = NULL, col=2, row=2, mode="l",
              output="screen", headerfiles=NULL, lgnd="page",
              customtracks=FALSE, mega2mapfile=NULL,
              pagewidth=NULL, pageheight=NULL, topmargin=0.25, ...)
Arguments
filenames Vector of strings giving file names containing tables of linkage analysis results. See nplplot for a description of the file format.
plotdata List of data frames by chromosome containing tables of linkage analysis results. See nplplot for a description of the format.
col Integer indicating the number of columns of plots to be drawn on a page.
row Integer indicating the number of rows of plots to be drawn on a page.
mode 'p' or 'l' to denote 'portrait' or 'landscape' mode.
output String giving the file name to save plots in. If set to 'screen', plots will be displayed and not saved. The file format is determined by the filename extension: '.pdf' for PDF, or '.ps' for postscript. If no extension is provided, or is not recognized, a PDF file will be produced with '.pdf' appended to the file name.
headerfiles Files containing R language commands to set various plot parameters, which are passed onto the nplplot command. The recommended use is to have one headerfile per plot file. For a list of parameters, consult the nplplot documentation.
If the number of headerfiles is fewer than the number of plot files, the last header file will be reused as many times as needed. If more headerfiles are provided than necessary, the extra ones will be ignored. lgnd TRUE, FALSE, 'page' or a list consisting of plot numbers. If a single value is given, TRUE causes legends to be drawn inside every plot, FALSE omits legends altogether, and 'page' causes a legend to be drawn inside the first plot on every page. If a list of numbers is provided, only the plots corresponding to these numbers will have legends. customtracks TRUE or FALSE. If set to TRUE, data files are created to draw custom tracks within the UCSC genome browser in BED format, as well as a combined data file to add a genome-wide track over all chromosomes present in the data. If set to TRUE, a mega2mapfile also needs to be supplied (see below). mega2mapfile Mega2 annotated format map file containing physical positions for all the markers present in the nplplot input data files. Rather than a file name, the name of a data.frame containing what would have been read from the file may be given. pagewidth A number denoting the width of the plot page in inches. If set to NULL, a width of 7.0 is used for the plot area. Assumes that a margin of 0.5 will be available around the plot area for axis annotations. pageheight A number denoting the height of the plot page in inches. If set to NULL, a height of 10.0 is used for the plot area. Assumes that a margin of 0.5 will be available around the plot area for axis annotations. topmargin A number denoting the width of the outside top margin of each plot. Since this contains marker names, it may need to be increased to accommodate long names. ... Further graphical parameters to be passed on to the 'plot', 'lines' and 'points' commands within nplplot. Details This function is designed for use within the Mega2 software to generate graphical output for some of the target analysis options, namely Merlin, SimWalk2 and Allegro. It calls nplplot repeatedly to create plots corresponding to each input file. The input arguments control characteristics of all plots together, whereas the header files allow customization within each plot. Thus, it is expected that there should be as many header files as there are plot data files. This function can also be used to create custom tracks within the UCSC genome browser, as well as a genome-wide plot. To use this feature, make sure that the names of the nplplot input data files each have a "Mega2-style" chromosome extension (01 through 09 for chromosomes 1 through 9, otherwise the chromosome number, or X for the human X chromosome, 23). To make this function more useful to other R programs, you may directly supply a data.frame for the mega2mapfile argument and a list of data.frames for the plotdata argument, with NULL for the filenames argument, as shown in the sketch below. (The name of each list element is the corresponding chromosome.) Value TRUE or FALSE depending on whether all plot commands were successful.
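The following is a minimal sketch, not taken from the package manual, of the in-memory usage described above. The score values, marker names and chromosome labels are invented for illustration; the columns follow the nplplot format (marker name, map position, then one or more score columns).

library(nplplot)
## two small, made-up score tables, one per chromosome
chr5 <- data.frame(MARKER   = c("M1", "M3", "M2"),
                   POSITION = c(0, 5, 8),
                   LOD      = c(0.8, 2.4, 1.1))
chr8 <- data.frame(MARKER   = c("M4", "M6", "M5"),
                   POSITION = c(0, 5, 8),
                   LOD      = c(0.2, 0.9, 3.0))
## list element names give the chromosome of each table;
## filenames is NULL because the data are supplied in memory
nplplot.multi(filenames = NULL,
              plotdata = list("5" = chr5, "8" = chr8),
              col = 1, row = 2, output = "screen", lgnd = FALSE)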
See Also nplplot, nplplot.old Examples
datadir <- paste(system.file("data", package="nplplot"), .Platform$file.sep, sep="")
f1 <- paste(datadir, "lods1.txt.gz", sep="")
f2 <- paste(datadir, "lods2.txt.gz", sep="")
h1 <- system.file("extdata","lods1header.R",package="nplplot")
h2 <- system.file("extdata","lods2header.R",package="nplplot")
nplplot.multi(c(f1, f2), col=1, row=2, output="screen", headerfiles=c(h1, h2), topmargin=0.5)
nplplot.old LOD score plotting (old version of nplplot) Description Plots score curves contained within one or more specified results files. Usage nplplot.old(files, col=2, row=2, mode="p", output="screen", yline=2.0, ymin=NULL, ymax=NULL, yfix=FALSE, batch=FALSE, headerfiles=NULL, titles=NULL, xlabl="", ylabl="", lgnd="page", lgndx=NULL, lgndy=NULL, bw=TRUE, na.rm=TRUE) Arguments files List of file names (strings). Each file produces a separate plot. col For multiple plots on a single page of pdf or postscript output, this item defines the number of columns of plots, and should be an integer greater than or equal to 1. Default is set to 2. row For multiple plots on a page of pdf or postscript output, this defines the number of rows of plots (value should be 1 or greater). Default value is set to 2. mode Orientation for pdf or postscript output: "p" for portrait, "l" for landscape. output File name for saving plots; "screen", the default, causes the plots to be displayed on the screen. To produce a pdf file use the extension .pdf. To produce a postscript file, use the .ps file name extension. If no extension is given, a pdf file is produced. yline Y-value for displaying a horizontal cut-off line. ymin, ymax Y-axis minimum and maximum values with default values NULL. If non-NULL values are provided, and yfix is set to TRUE, then the plot area will be cropped to these values. yfix Set to TRUE or FALSE depending on whether ymin and ymax should be enforced across all plots irrespective of whether the plot data lie within these bounds. Ignored if ymin or ymax are set to NULL. batch TRUE or FALSE, to determine whether the display screen should be closed. If nplplot is called within R, this should be set to FALSE. headerfiles List of file names, one for each data file specified above. Each headerfile contains a string with column names corresponding to the columns in the data file. These column names are used in the plot legend. If set to NULL (the default), nplplot uses the first item in each column of a data file as the plot legend. If a headerfile is provided, then nplplot will attempt to read in the first line of the data file as data, so the user should be careful not to supply a header line as well as a headerfile. titles Array of strings denoting titles for each plot. If there are not enough titles, the last string is recycled for the remaining plots. Default is an empty string. xlabl Array of strings to use as the x-axis label on each plot. ylabl Array of strings to use as the y-axis label on each plot. lgnd TRUE, FALSE, "page" or a list of plot numbers denoting whether the legend should be drawn in all plots, none, the first plot on a page, or specific plot numbers. Default "page". lgndx NULL or a real value if a specific x-coordinate should be used to position the legend. Default NULL. lgndy NULL or a real value if a specific y-coordinate should be used to position the legend. Default NULL. bw TRUE or FALSE depending on whether plots should be drawn in black and white or in color.
A list of six colors is defined within nplplot; these are used in turn to draw each curve and reused as necessary. The order in which these colors are used is: magenta, lightblue, grey, navyblue, lightcyan and pink. The 7th color, reserved for black-and-white plots, is black. na.rm TRUE or FALSE depending on whether NAs should be removed prior to plotting the data. Including NAs will produce broken plots when lines are drawn. This may be desirable in some cases, if missing data needs to be reported. Details This function is targeted at p-values or LOD scores obtained at various marker positions from statistical analysis of genetic data; usually these results would be LOD scores, p-values, or log10(p-values). A results file has to be in a specific tabular format with each column separated by white-space: A) First line = header line B) Next set of lines = any number of data lines C) Final two lines = line type & point type definition. Here is an example:

marker location score1 score2 score3
d1s228 0.00 0.546 0.345 0.142
d1s429 1.00 0.346 0.335 0.252
d1s347 2.00 0.446 0.245 0.342
ltype -99.99 1 2 3
ptype -99.99 15 16 17

In this example, line 1 contains the column headers; the headers of the score columns may be used as labels within the legend, as described in the usage of the "headerfiles" argument. The first two headers are ignored. Lines 2-4 contain scores at various marker positions. Missing scores can be denoted with either "." or "NA". The position column cannot have missing data. There can be any number of score columns within a file; they will be plotted as separate curves within the same plot. Each file is plotted as a separate plot. The last two lines give line types and point types for each curve. A zero line type or point type will not plot lines or points, respectively, for that score column. For allowable ptype values, consult the R documentation for "points". For line types, consult the documentation on "par". The names in the first column are used as axis labels on the top of the plot border. Setting a name in the marker column to "-" will result in no label at that position. Value TRUE or FALSE depending on whether the input files were read in successfully. See Also nplplot, nplplot.multi Examples
## Not run: nplplot.old("lod.1", output="lod.1.ps", batch=T, headerfiles="hdr.1")
## Not run: nplplot.old(c("lod.1", "lod.2"), col=1, row=2, headerfiles=c("hdr.1","hdr.2"))
prepareplot Prepare input data files for bedplot and genomeplot Description The prepareplot function prepares input data files for the bedplot and genomeplot functions from nplplot-formatted score files and a Mega2 annotated format map file with physical positions. Usage prepareplot(prefix, chrlist=c(1:23,25), mapfile, output="both") Arguments prefix Prefix of the names of R table files, e.g. “RMERLINDATA” for R table files “RMERLINDATA.01”, “RMERLINDATA.02”, etc. Using chrlist below, it automatically finds R table files with the specified prefix and chromosome-specific extensions to convert. Alternatively, prefix may be a list of data.frames named by the chromosomes supplied in chrlist. chrlist List of chromosome numbers to create plots for, default 1 through 23 and 25. Chromosomes 23 and 25 both produce files for the X chromosome, with 25 denoting pseudo-autosomal markers on chromosome X. mapfile Mega2 annotated format map file, containing marker names and exactly one set of physical positions. mapfile may instead be a data.frame containing the same information as the map file, viz. the marker names and physical positions.
output Which plotting function to generate data for: “both” for both the bedplot and genomeplot functions, “bed” for generating input files for the bedplot function, “GG” for generating the input file for the genomeplot function. The default is “both”. Details mapfile example:

Chromosome Map.h.a Name Map.h.m Map.h.f Build52.p
5 0.0 M1 0.0 0.0 144255
5 5.0 M3 2.0 7.0 144355
5 8.0 M2 4.0 12.0 144400
8 0.0 M4 0.0 0.0 144255
8 5.0 M6 2.0 7.0 144355
8 8.0 M5 4.0 12.0 144400

The names of R table files should be linkage or association analysis score files in nplplot format with Mega2-style file names, i.e., having a common specified prefix and 01-09, 11-24, X, or XY as suffixes. The list of suffixes is determined by the chromosome list. If this list includes 23 or X, R table files with either the “23” suffix or the “X” suffix are accepted. If both files exist, the one with the “X” suffix is read in and the user is warned. If the XY chromosome is chosen, R table files can have either “24” or “XY” as a suffix, with the “XY”-suffixed file taking precedence. The prepareplot function generates chromosome-specific formatted score files “bed.data.#” for use by bedplot, with the same suffix as the R table file. If the X chromosome is chosen, the output file is named “bed.data.23”. If the XY chromosome is chosen, those records on the XY chromosome are included in the “bed.data.23” file. The output file “bed.data.#” contains marker names and physical positions followed by one or more score columns. The header is taken from the input score file(s). Prepareplot also generates a combined file over all chromosomes, “GG.data.all”, for genomeplot. For pseudo-autosomal markers denoted by chromosome XY or 24, these scores are assigned to the X chromosome. The output file “GG.data.all” contains four or more columns with headings. The first, second and third columns contain chromosomes, marker names and physical positions respectively, followed by one or more score columns with score names as headers. Value TRUE or FALSE depending on whether prepareplot runs successfully. Examples ## Not run: prepareplot("RMERLINDATA", c(5,8), "map.all", "GG")
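A minimal sketch, not from the manual, of calling prepareplot with in-memory data frames rather than files, as allowed by the Arguments section above. All values are invented, and the column names of the map data frame are an assumption modelled on the mapfile example; the exact required columns may differ.

library(nplplot)
## invented nplplot-format scores for chromosome 5
scores5 <- data.frame(MARKER   = c("M1", "M3", "M2"),
                      POSITION = c(0, 5, 8),
                      score1   = c(1.2, 0.4, 2.1))
## invented Mega2-style map information (marker names + physical positions);
## column names follow the mapfile example and are assumed, not verified
map5 <- data.frame(Chromosome = c(5, 5, 5),
                   Name       = c("M1", "M3", "M2"),
                   Build52.p  = c(144255, 144355, 144400))
## prefix is a list of data frames named by chromosome; output "GG"
## asks for the combined "GG.data.all" file used by genomeplot
prepareplot(prefix = list("5" = scores5), chrlist = 5,
            mapfile = map5, output = "GG")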
1000 € | | --- | 1000 € | <NAME> | 200 € | <NAME> | 150 € | <NAME> | 150 € | <NAME> | 111 € | Anonymous | 100 € | <NAME> | 100 € | <NAME> | 100 € | <NAME> | 100 € | <NAME> | 100 € | <NAME> | 75 € | <NAME> | 75 € | <NAME> | 66 € | t.m.k. | 65 € | <NAME> | 60 € | <NAME> | 50 € | <NAME> | 50 € | Anonymous | 50 € | ×5 | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | Chandler | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | jake.ro | 50 € | JS Cheerleader | 50 € | <NAME> | 50 € | <NAME> | 50 € | Jeremy Whitbred | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | Techit U. | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | aravind mohan | 50 € | jsm | 50 € | <NAME> | 45 € | Anonymous | 40 € | <NAME> | 40 € | <NAME> | 33 € | Anonymous | 30 € | <NAME> | 30 € | <NAME> | 30 € | <NAME> | 26 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | Alper | 25 € | <NAME> | 25 € | Anonymous | 25 € | ×23 | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | C.H.A.D. | 25 € | Camilo | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | Colby | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | ECAD Labs Inc. | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | Flaki | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | Ignacy | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | Johan | 25 € | <NAME> | 25 € | <NAME> | 25 € | Kashyap | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | Lev Izraelit | 25 € | Lev Izraelit | 25 € | <NAME> | 25 € | Maarten | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | Marshall | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | SunnyByte | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | YOU,ZONGYAN | 25 € | <NAME> | 25 € | <NAME> | 25 € | <NAME> | 25 € | theo bousquet | 25 € | www.actioncy.co.uk | 25 € | Anonymous | < 25 | ×107 | These are the kind souls who have contributed towards making the second edition of Eloquent JavaScript possible. 10,000 $ | | --- | 7000 € | <NAME> | 1000 € | 1000 $ | Donation from dev team at Ghostery! Thanks for the awesome work! | Anonymous | 1000 $ | <NAME> | 200 $ | <NAME> | 100 € | Anonymous | 100 € | Anonymous | 100 € | I love your book. 
Thank you very much! | <NAME> | 100 € | We need a second edition of the best, free JavaScript book | <NAME> | 100 € | Anonymous | 100 $ | <NAME> | 100 $ | <NAME> | 100 $ | I fully support this effort. We need more decentralized sources of education for technology related matters. | <NAME> | 100 $ | <NAME>, Franz Inc. | 100 $ | <NAME> | 100 $ | Raynos | 100 $ | <NAME> | 100 $ | I especially liked the intro chapter in which you describe programming as a creative art more than a cold technical task. | <NAME> | 75 $ | tiffon | 75 $ | Anonymous | 56 € | [o]_O | 50 € | I am a T-SQL monkey with the occasional C# app or python script. Much to my dismay, I was assigned the task of fixing some Javascript bugs in an internal deployment tool. All I knew at the time about Javascript was that it sucked, so I tried reading the 1st edition of Eloquent Javascript. Unfortunately, it worked too well: not only did I fix the problems, I now think Javascript is cool :-( | <NAME> | 50 € | <NAME> | 50 € | <NAME> | 50 € | looking forward to see the new edition... | <NAME> | 50 € | Not too interested in the book itself, donating to promote the funding model. I would advise to keep the bitcoins as Bitpay provides a very bad exchange rate. | Ko<NAME> | 50 € | First edition helped me a lot. Looking forward for the next one. | <NAME> | 50 € | Siltaar | 50 € | tmk | 50 € | <NAME> | 50 € | <NAME> | 50 € | Thanks for the great work! I found the 1st edition of Eloquent Javascript to be one of the most pleasant to read and useful programming books I've ever come across. Eagerly awaiting the 2nd edition! | Vincent | 50 € | <NAME> | 40 € | <NAME> | 50 $ | <NAME> | 50 $ | Anonymous | 50 $ | Anonymous | 50 $ | Anonymous | 50 $ | Anonymous | 50 $ | Anonymous | 50 $ | <NAME> | 50 $ | This is one of the best JavaScript books around. Would love to see an updated version- | <NAME> | 50 $ | <NAME> | 50 $ | <NAME> | 50 $ | <NAME> | 50 $ | I loved the first edition. Looking forward to another :) | Anonymous | 40 $ | <NAME> | 40 $ | <NAME> | 40 $ | <NAME> | 40 $ | Keep up the good work. I brag about your current book, and I can't wait to see the new one. | Anonymous | 30 € | Anonymous | 30 € | Anonymous | 30 € | <NAME> | 30 € | <NAME> | 30 € | Anonymous | 25 € | <NAME> | 25 € | Marijn's CodeMirror project is the foundation upon which I've been realizing my own vision of a better way to learn programming, and Eloquent JavaScript was truly a pioneering work. I feel it is extremely important to provide Marijn with the encouragement and financial support he needs. | <NAME> | 25 € | fronx | 25 € | <NAME> | 25 € | One of the best books on programming ever, if you ask me; even if you know your way around programming, it is still worth it to read even the beginner chapters. Can hardly wait to buy the new edition! Best of luck, man! | <NAME> | 25 € | <NAME> | 25 € | Tom | 25 € | More of this sort of thing. | <NAME> | 30 $ | <NAME> | 30 $ | Eloquent JS was my intro to programming as a way of thinking. I've gone back to it a bunch of times and learned new things every time. A beautiful updated version is worth every penyn. | <NAME> | 30 $ | Your first book is amazing. This is my humble way of saying thanks for creating amazing things such as CodeMirror and the first edition of this book. | <NAME> | 30 $ | <NAME> | 30 $ | <NAME> | 30 $ | @fwg | 20 € | Eloquent Javascript has been an invaluable resource both to my learning and teaching of JS over the years. 
| Anonymous | 20 € | The only useful JavaScript Tutorial ;-) | Anonymous | 20 € | Anonymous | 20 € | Anonymous | 20 € | The first edition is a wonderful book, something I recommend to my employees as a must-read. Thanks for all the hard work! | Anonymous | 20 € | Anonymous | 20 € | Anonymous | 20 € | Anonymous | 20 € | Anonymous | 20 € | Bundyo | 20 € | d4kris | 20 € | <NAME> | 20 € | <NAME> | 20 € | The Web need this. Every good JS developer I have met said he had read your book. An updated edition will continue help everyone writing good Javascript. | <NAME> | 20 € | My most preferred task "author's discretion"; but a Node.js section will be great. Remember, the API documentation indicates the permanence ("stability") of the API. | <NAME> | 20 € | <NAME> | 20 € | It's the #1 i recommend to people wanting to learn JS | larz | 20 € | <NAME>. | 20 € | <NAME> | 20 € | <NAME> | 20 € | nerdess | 20 € | <NAME> | 20 € | <NAME> | 20 € | My preferred task really is "all"! | <NAME> | 20 € | <NAME> | 20 € | Eloquent JavaScript is one of the best resource to learn programming and JavaScript. I'd love to read a new edition. | <NAME> | 20 € | I recommend Eloquent JavaScript to everyone wanting to learn JS as the first book they look at and people usually love it. Hopefully a new edition will get even more people to learn JS! | <NAME> | 20 € | <NAME> | 20 € | Loved the first book. Go hard mate! =) | <NAME> | 20 € | Whenever someone tells me they'd like to learn programming, I point them to Eloquent Javascript! | <NAME> | 25 $ | Eloquent JS is not only the greatest introduction to Javascript that exists, it's one of the most illuminating books about how to reason about your programs and how to start thinking functionally. I think it's the book responsible for most developers making the jump from curious HTML prodders to full-stop intelligent programmers. | <NAME> | 25 $ | Awesome work. I recommend your book to lots of people who are new to programming. I think it is easily one of the best books out there. Thank you and thank you on behalf of everyone I've recommended it to. Also, thank you for Tern.js, CodeMirror and the Haskell code I have perused. I love it when people like you <NAME>, Substack, Fogus and others help build bridges between JavaScript and functional programming programming languages like Haskell and Clojure, and grow the body of work in JavaScript code that use functional paradigms and idioms. Best, Andrew engineer @ famo.us | Anonymous | 25 $ | Anonymous | 25 $ | Anonymous | 25 $ | Anonymous | 25 $ | Anonymous | 25 $ | <NAME> | 25 $ | <NAME> | 25 $ | <NAME> | 25 $ | <NAME> | 25 $ | The original is such a wonderful resource. Thank you very much. Please take $20 for the project and spend the other $5 on a pint of something refreshing. You've earned it again and again. | Hartley | 25 $ | Despite its age, this book provides a wonderful introduction into Javascript. Would love to see it updated for a new generation of developers. | <NAME> | 25 $ | <NAME> | 25 $ | EJS is by far the best JavaScript book I have come across. Really looking forward to the 2nd Edition. | <NAME> | 25 $ | I'm just past being a beginner to JavaScript, but do have older experience in Perl. I think this is a great project. Best wishes for success. I'm retired now, but will try to donate again later. | <NAME> | 25 $ | <NAME> | 25 $ | I've been exposed to Javascript for 10 years and thought I had a reasonable grasp of things but today your book confused me. 
This is a good thing because I realised there is an awful lot I don't know and am working through your book to start again with a better foundation. | Sean | 25 $ | <NAME> | 25 $ | <NAME> | 25 $ | Trae | 25 $ | <NAME> | 25 $ | <NAME> | 25 $ | This is how education should happen to begin with. Also, love this book. Keep doing what you do. | <NAME> | 25 $ | Anonymous | 18 € | Anonymous | 20 $ | When I use eloquent javascript people hardly notice my disfigurement. | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | Thanks! Love the first edition. | <NAME> | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | I'm going through the first edition right now. It's excellent. Can't wait for the second edition! | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Anonymous | 20 $ | Thanks for your great works! The practice-based introduction is what distinguishes this book from other JS books, so I think more practical chapters would be great. | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | Hi Marijn, I can't tell you how many people I have steered to your book and online version. I still think it's the best introduction to JavaScript around, and am enthusiastic about it becoming even better. | <NAME> | 20 $ | <NAME> | 20 $ | Reading this book helped me understand functional programming for the first time! | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | I love the work you have done, I continue to try to improve my knowledge of objects, loops as I rushed through the book to grasp the concepts as fast as possible. Now working with backbone.js I often refer to the Javascript fundamentals in your book. Plus the Aunt with Cats email is epic. Great work, continue doing awesome stuff :) | <NAME> | 20 $ | The first edition was amazing. Can't wait for the second edition! | <NAME> | 20 $ | I appreciate your work, Marijn, and the way you're funding and licensing it. | <NAME> | 20 $ | Javascript is the world's most popular language -- I'd love to write it as gracefully and minimally as possible -- thanks! | <NAME> | 20 $ | Awesome book, awesome legacy, let's keep it alive. | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | This guy. | <NAME> | 20 $ | LadyMartel | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | I paid about $20 for the paper version and I'm happy to kick in another $20 to see an updated, generally available version. It's excellent work that I have referred others to multiple times. | <NAME> | 20 $ | <NAME> | 20 $ | Loved the first book, really hope you make your goal! | <NAME> | 20 $ | pmThompson | 20 $ | I initially wanted to mark task:None, but I was worried that un-targeted funds could become un-directed work. Besides, *EVERYONE* needs to hire an artist ;-) | RicheTheBuddha | 20 $ | Thank you for working to update this excellent work and putting it out there in a free way. Thank you for the first edition! 
| <NAME> | 20 $ | I am always in a hurry and first time when I read this book very quick I felt kind of satisfaction and enlightenment with this book more than with any other book...I knew that I had to read this book again and I did...The title Eloquent is not an overstatement...Really the modern introduction on javascript...The definite way to go for writing a book...not only best javascript beginner book but best programming book for any programmer wanna be...Thanks for this book! | <NAME> | 20 $ | Thank you! | Scott Lesser | 20 $ | <NAME> | 20 $ | Sergii | 20 $ | TehShrike | 20 $ | Chapter 6 helped me wrap my head around functional programming. I am in your debt! | Timur | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 20 $ | <NAME> | 15 € | Rock on! | <NAME> | 15 € | Anonymous | 15 € | Anonymous | 15 € | Anonymous | 15 € | Anonymous | 15 € | Anonymous | 15 € | Anonymous | 15 € | It would be incredibly great to be able to not only read the book, but use the book sandbox on iPhone. With v1 AFAIK it was not practical and/or not working. An offline manifest would also be great for smartphones, in particular the (upcoming) Firefox OS based ones. | Anonymous | 15 € | <NAME> | 15 € | <NAME> | 15 € | manichord | 15 € | <NAME> | 15 € | <NAME> | 15 € | Tchesko | 15 € | Great job. It's better than all the other books i have bought far away... Signed : A french reader. | <NAME> | 15 € | <NAME> | 13 € | <NAME> | 12 € | Have fun writing the new book. I look forward to the results! | @partyfists | 15 $ | The book that taught me how Javascript can be beautiful and well written needs this update and I am proud to support it. | Anonymous | 15 $ | I read a large part of your book online. It was awesome, I intended to buy a hardcopy but never did. Hope this compensates you adequately and get you running on the second edition. Looking forward to it. | Anonymous | 15 $ | Anonymous | 15 $ | Anonymous | 15 $ | Like the first book, so would use the second book. Good luck on the rewrite! | <NAME> | 15 $ | Thanks bro. You're teaching this baby bird how to take flight. I guess, "Thanks Daddy!" would be more appropriate! | <NAME> | 15 $ | thanks | <NAME> | 15 $ | <NAME> | 15 $ | Miroki | 15 $ | Patrick | 15 $ | Ramkumar | 15 $ | <NAME> | 15 $ | Sheldon | 15 $ | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | For beginners chapter 6-8 are where you are losing them, when it comes to functions, recursion, method chaining, oop and so on within a few pages. | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | yay i am so excited | Anonymous | 10 € | Anonymous | 10 € | Anonymous | 10 € | bastian | 10 € | Bema | 10 € | Thanks for doing this excellent work, I read the first edition online and leaned a lot. I would like to help translating it to Portuguese, I am Brazilian, so if by any chance you are contacted by other Brazilians willing to do the same job please put us in contact so we can build a working group for this task. Best wishes, Bema | <NAME> | 10 € | this is one of the best programming books ever written regardless of language, thanks! | Bundyo | 10 € | <NAME> | 10 € | I love this book! Still working my way through the first edition but it has been good enough to convince me to contribute to the 2nd. Great job man! 
| <NAME> | 10 € | Eloquent Javascript is a really good book. Thanks for the hard work. | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | Keep up the great work. I loved the first book. | gasper | 10 € | yay for giving money to people to do cool stuff! | <NAME> | 10 € | GZiolo | 10 € | Good luck! Waiting for updated version. | Jag | 10 € | Great work! | <NAME> | 10 € | <NAME> | 10 € | I hope this book will help to change world! :-) | jjjmmmhhh | 10 € | <NAME> | 10 € | <NAME> | 10 € | Hi. Thank you for all the work so far, looking forward to 2nd edition. | <NAME> | 10 € | Lucas | 10 € | Thank you! | <NAME> | 10 € | Eloquent JavaScript is one of the very few books I did not sell before me and my family moved into another city last year. I really love having it. | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | I learned so much from the first book. It broke down many JavaScript walls for me. Really looking forward to the new edition! | <NAME> | 10 € | I'd prefer as much of a functional programming lean as possible in a rewrite. But, you know, it's your book ;) | <NAME> | 10 € | Eloquent JavaScript has long been my go-to recommendation for people that want to have a good coverage of modern JavaScript programming. I'm really happy that the author is planning a second revision. | <NAME> | 10 € | A great starting point to learn JavaScript! | pixelkritzel | 10 € | PurplePilot | 10 € | qgi | 10 € | Richard | 10 € | Great book! I'm a beginner but it helped me understand so much about Javascript and the fundamentals of the language. | <NAME> | 10 € | It's been 6 whole years. JavaScript and the DOM have changed quite a bit since then. This new version should address that. | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | <NAME> | 10 € | <NAME>. | 10 € | Loved the first one. Keep up the good work! | <NAME> | 10 € | whostolemyhat | 10 € | Great book, incredibly useful! | <NAME> | 12 $ | I used your guide to learn JavaScript, and it was great! Thanks alot! | Anonymous | 11 $ | <NAME> | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | You have written an excellent book on my favourite language... You havema made a difference to my life... I can afford only 10 $.. good luck | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | I used the original Eloquent Javascript to learn JS and I am grateful to it. It's a classic that was made available to everyone. I can't wait to see the next version and am happy to support its continued existence as a goto for learning JS especially as JS becomes more and more ubiquitous. | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | still going through the (very well-done) first version, excited to see the updates! | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Thank you so much for your work. I've been working with javascript daily for about a year now, and your tutelage, especially in the way of data structures and programming paradigms, has been a huge boon to my abilities! Keep up the terrific work. | Anonymous | 10 $ | Thanks a lot for writing this. 
| Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Eloquent Javascript served as my intro to JS and it's been with me the whole way. Thanks a bunch. | Anonymous | 10 $ | Anonymous | 10 $ | Anonymous | 10 $ | Audrey | 10 $ | Great work with 1st edition. Please keep up with the good work. | <NAME> | 10 $ | I think the current intro is beautiful, don't change it too much :) Thanks for your efforts! | <NAME> | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | Eloquent JavaScript is my favorite JS book. I'm looking forward to seeing an update. | <NAME> | 10 $ | CyberAP | 10 $ | First edition helped me a lot. Looking forward for a new one. | <NAME> | 10 $ | davewhat | 10 $ | <NAME> | 10 $ | The first edition is how I get people excited about JS. | <NAME> | 10 $ | Your book has been invaluable to me. I recommend it anytime I can, especially to people learning how to program. Thanks a lot! | <NAME> | 10 $ | This book taught me Javascript. | erutan | 10 $ | <NAME> | 10 $ | Great idea! | James | 10 $ | <NAME> | 10 $ | Jan | 10 $ | Jason | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | JavaScript FTW! | max borghino | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | Eloquent JavaScript got me over a lot of my early hurdles, and is still a resource I return to. I've also recommended it to friends who were looking to learn programming. | <NAME> | 10 $ | Heck, I'd have paid just to have one of the bugs speak my bubble! *So* much cooler than any Kickstarter bonus. Also, can't wait to see an updated version of this classic :-) | <NAME> | 10 $ | I loved the original web-version, and have recommended it to colleagues learning JS. For the record, I was always impressed with the in-page editor/executor environment. Being able to test ideas in the same location I was reading about kept me from jarring context-shifts. Making the sample code vanilla-js is a good idea, though. | <NAME> | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | Have been wanting to learn Javascript for a while and something about this seems like a better use of any $ I'd spend on a finished book. | <NAME> | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | Rolf | 10 $ | I bought your first book and it was great :-) | <NAME> | 10 $ | I took <NAME>'s advice and started learning Javascript recently with your online book. I am onto 3rd chapter now and already feel that your book is useful not only for novice programmers but for experienced ones as well. I am mighty impressed with current version and can only imagine what can you do with a re-written version. All the best. Also your background animation rocks. | Sandy | 10 $ | I love the first edition. Your writing made me rediscover the joy of learning to code. | Scott | 10 $ | <NAME> | 10 $ | <NAME> | 10 $ | The first edition of the book really helped me wrap my head around JavaScript. I'm looking forward to the second version. | <NAME> | 10 $ | <NAME> | 10 $ | Thank you for teaching programming via JavaScript. I look forward to buying the book again. | <NAME> | 10 $ | Thank Tou Marijn... I would like to express my gratitude for the work you are doing. Great Job. | <NAME> | 10 $ | Can't wait for the new version! | 谢彪 | 10 $ | Love your book and open source works <3 :-) | Anonymous | 8 $ | Anonymous | 8 $ | Anonymous | 8 $ | Congratulations for the initiative and the book. 
I home you get all the money you need for write this second version and make an ePub version too :) | <NAME> | 8 $ | <NAME> | 8 $ | @gdi2290 | vobi | 6 € | <NAME> | 8 $ | Anonymous | 5 € | Anonymous | 5 € | Anonymous | 5 € | Anonymous | 5 € | Anonymous | 5 € | Anonymous | 5 € | Anonymous | 5 € | Anonymous | 5 € | Anonymous | 5 € | Anonymous | 5 € | <NAME> | 5 € | <NAME> | 5 € | <NAME> | 5 € | <NAME> | 5 € | <NAME> | 5 € | Loved the original. Consider including jquery, it's a pretty standard requirement these days for a JS programmer. | <NAME> | 5 € | Thanks and looking forward to the release! | Jacob | 5 € | <NAME> | 5 € | Lasse | 5 € | <NAME> | 5 € | <NAME> | 5 € | <NAME> | 5 € | <NAME> | 5 € | Sergi | 5 € | Awesome stuff, make it awesomer! | <NAME> | 5 € | Rock on dude. You're doing an awesome thing. | <NAME> | 5 € | <NAME> | 5 € | Wojtek | 5 € | Wolfgang | 5 € | Anonymous | 4 € | Anonymous | 5 $ | <NAME> | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | Anonymous | 5 $ | <NAME> | 5 $ | The first edition of 'Eloquent JavaScript' helped me at the beginning of my development career so much, that I hope for the next generation of JavaScripters to gain the same solid skills. | <NAME> | 5 $ | <NAME> | 5 $ | The original is a classic. Excited to share the new edition with new JS devs! | <NAME> | 5 $ | <NAME> | 5 $ | <NAME> | 5 $ | <NAME> | 5 $ | <NAME> | 5 $ | <NAME> | 5 $ | Thank you for the first book. Glad to see you're working on an update. Good luck with your progress. | <NAME> | 5 $ | Enjoyed the first edition and looking forward to an updated second edition. | <NAME> | 5 $ | Amazing contribution to the javascript community. | <NAME> | 5 $ | <NAME> | 5 $ | Probably the best introduction to Javascript I have encountered! | <NAME> | 5 $ | <NAME> | 5 $ | Stanislav | 5 $ | EloquentJS rocks ! | <NAME> | 5 $ | Thomas | 5 $ | Can't wait! | tucaz | 5 $ | wannianchuan | 5 $ | Like your book, look forward to the second edition. | Wyatt | 5 $ | Yours is the best JS resource available. Keep up the good work. | <NAME> | 5 $ | I am only half-way the 1st edition and I think you have done a great job! Thank you! | Anonymous | 4 $ | Anonymous | 3 € | <NAME> | 3 € | Good job! : ) | <NAME> | 3 € | <NAME> | 3 $ | Anonymous | 2 € | Anonymous | 2 € | Anonymous | 2 $ | Toby | 2 $ | <NAME> | 2 $ | Anonymous | 1 € | Anonymous | 1 € | bzlm | 1 € | hi guys | <NAME> | 1 € | robalarcon | 1 € | I have never read the first edition, but I have used a lot of your software and I'll looking forward for this edition | Anonymous | 1 $ | <NAME> | 1 $ | s5s5 | 1 $ | good job | <NAME> | 1 $ | Good luck & Thank you | Nosotros creemos que estamos creando el sistema para nuestros propios propósitos. Creemos que lo estamos haciendo a nuestra propia imagen... Pero la computadora no es realmente como nosotros. Es una proyección de una parte muy delgada de nosotros mismos: esa porción dedicada a la lógica, el orden, la reglas y la claridad. Este es un libro acerca de instruir computadoras. Hoy en dia las computadoras son tan comunes como los destornilladores (aunque bastante más complejas que estos), y hacer que hagan exactamente lo que quieres que hagan no siempre es fácil. Si la tarea que tienes para tu computadora es común, y bien entendida, tal y como mostrarte tu correo electrónico o funcionar como una calculadora, puedes abrir la aplicación apropiada y ponerte a trabajar en ella. 
But to perform unique or open-ended tasks, there is often no application available. That is where programming may come in. Programming is the act of constructing a program: a set of precise instructions telling a computer what to do. Because computers are dumb, pedantic beasts, programming is fundamentally tedious and frustrating. Fortunately, if you can get over that, and maybe even enjoy the rigor of thinking in terms that dumb machines can deal with, programming can be very rewarding. It allows you to do things in seconds that would take forever by hand. It is a way to make your computer tool do things it couldn't do before. And it provides a wonderful exercise in abstract thinking. Most programming is done with programming languages. A programming language is an artificially constructed language used to instruct computers. It is interesting that the most effective way we've found to communicate with a computer is rather similar to the way we communicate with each other. Like human languages, computer languages allow words and phrases to be combined in new ways, making it possible to express ever new concepts. Language-based interfaces, which at one point were the main way most people interacted with computers (think of the BASIC and DOS prompts of the 1980s and 1990s), have largely been replaced with visual interfaces, which are simpler and more limited: easier to learn, but offering less freedom. Computer languages are still there, though, if you know where to look. One such language, JavaScript, is built into every modern web browser and is thus available on almost every device. This book will try to make you familiar enough with this language to do useful and amusing things with it. ## About programming Besides explaining JavaScript, I will also introduce the basic principles of programming. Programming, it turns out, is hard. The fundamental rules are typically simple and clear, but programs built on top of these rules tend to become complex enough to introduce their own rules and complexity. You're building your own maze, in a way, and you might just get lost in it. There will be times when reading this book feels terribly frustrating. If you are new to programming, there will be a lot of new material to digest. Much of this material will then be combined in ways that require you to make additional connections. It is up to you to make the necessary effort. When you are struggling to follow the book, do not jump to any conclusions about your own capabilities. You're fine; you just need to keep at it. Take a break, reread some material, and make sure you read and understand the example programs and exercises. Learning is hard work, but everything you learn is yours and will make subsequent learning easier. When action grows unprofitable, gather information; when information grows unprofitable, sleep.
A program is many things. It is a piece of text typed by a programmer, it is the directing force that makes the computer do what it does, it is data in the computer's memory, and yet it controls the actions performed on that same memory. Analogies that try to compare programs to objects we are familiar with tend to fall short. A superficially fitting one is that of a machine: lots of separate parts tend to be involved, and to make the whole thing tick, we have to consider the ways in which these parts interconnect and contribute to the operation of the whole. A computer is a physical machine that acts as a host for these immaterial machines. Computers themselves can do only stupidly straightforward things. The reason they are so useful is that they do these things at an incredibly high speed. A program can ingeniously combine an enormous number of these simple actions to do very complicated things. A program is a building of thought. It costs nothing to build, it weighs nothing, and it grows easily under our typing hands. But without care, a program's size and complexity will grow out of control, confusing even the person who created it. Keeping programs under control is the main problem of programming. When a program works, it is beautiful. The art of programming is the skill of controlling complexity. A great program is subdued, made simple in its complexity. Some programmers believe that this complexity is best managed by using only a small set of well-understood techniques in their programs. They have composed strict rules ("best practices") prescribing the form programs should have, and they carefully stay within their safe little zone. This is not only boring, it is also ineffective. New problems often require new solutions. The field of programming is young and still developing rapidly, and it is varied enough to have room for wildly different approaches. There are many terrible mistakes to make in program design, so go ahead and make them so that you understand them better. A sense of what a good program looks like is developed with practice, not learned from a list of rules. ## Why language matters In the beginning, at the birth of computing, there were no programming languages. Programs looked something like this:

> 00110001 00000000 00000000
> 00110001 00000001 00000001
> 00110011 00000001 00000010
> 01010001 00001011 00000010
> 00100010 00000010 00001000
> 01000011 00000001 00000000
> 01000001 00000001 00000001
> 00010000 00000010 00000000
> 01100010 00000000 00000000

That is a program to add the numbers from 1 to 10 together and print out the result: `1 + 2 + ... + 10 = 55`. It could run on a simple hypothetical machine. To program early computers, it was necessary to set large arrays of switches in the right position or punch holes in strips of cardboard and feed them to the computer. You can probably imagine how tedious and error-prone this procedure was. Even writing simple programs required much cleverness and discipline. Complex ones were nearly inconceivable.
Of course, manually entering these arcane patterns of bits (the ones and zeros) did give the programmer a profound sense of being a mighty wizard. And that has to be worth something in terms of job satisfaction. Each line of the previous program contains a single instruction. It could be written out like this:

1. Store the number 0 in memory location 0.
2. Store the number 1 in memory location 1.
3. Store the value of memory location 1 in memory location 2.
4. Subtract the number 11 from the value in memory location 2.
5. If the value in memory location 2 is the number 0, continue with instruction 9.
6. Add the value of memory location 1 to memory location 0.
7. Add the number 1 to the value of memory location 1.
8. Continue with instruction 3.
9. Output the value of memory location 0.

Although that is already more readable than the soup of bits, it is still hard to follow. Using names instead of numbers for the instructions and memory locations helps:

> Set "total" to 0.
> Set "cuenta" to 1.
> [loop]
> Set "comparar" to "cuenta".
> Subtract 11 from "comparar".
> If "comparar" is zero, continue at [fin].
> Add "cuenta" to "total".
> Add 1 to "cuenta".
> Continue at [loop].
> [fin]
> Output "total".

Can you see how the program works at this point? The first two lines give two memory locations their starting values: `total` will be used to build up the result of the computation, and `cuenta` will keep track of the number that we are currently looking at. The lines using `comparar` are probably the strangest ones. The program wants to see whether `cuenta` is equal to 11 in order to decide whether it can stop running. Because our hypothetical machine is rather primitive, it can only test whether a number is zero and make a decision (or jump) based on that. So it uses the memory location labeled `comparar` to compute the value of `cuenta - 11` and makes a decision based on that value. The next two lines add the value of `cuenta` to the result and increment `cuenta` by 1 every time the program has decided that `cuenta` is not 11 yet. Here is the same program in JavaScript:

> let total = 0, cuenta = 1;
> while (cuenta <= 10) {
>   total += cuenta;
>   cuenta += 1;
> }
> console.log(total);
> // → 55

This version gives us a few more improvements. Most importantly, there is no longer any need to specify the way we want the program to jump back and forth. The `while` language construct takes care of that. It keeps executing the block of code (wrapped in braces) below it as long as the condition it was given holds. That condition is `cuenta <= 10`, meaning "cuenta is less than or equal to 10". We no longer have to create a temporary value and compare that to zero, which was an uninteresting detail. Part of the power of programming languages is that they take care of uninteresting details for us. At the end of the program, after the `while` construct has finished, the `console.log` operation is used to write out the result. Finally, here is what the program could look like if we happened to have the convenient operations `rango` and `suma` available, which respectively create a collection of numbers within a range and compute the sum of a collection of numbers:

> console.log(suma(rango(1, 10)));
> // → 55

The moral of this story is that the same program can be expressed in both long and short, unreadable and readable ways.
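As a brief, non-authoritative aside: the book defines operations like these in later chapters, but here is one way the hypothetical `rango` and `suma` used above could be written (the parameter names are my own):

> function rango(inicio, fin) {
>   let resultado = [];
>   for (let n = inicio; n <= fin; n++) {
>     resultado.push(n);
>   }
>   return resultado;
> }
>
> function suma(numeros) {
>   let total = 0;
>   for (let n of numeros) {
>     total += n;
>   }
>   return total;
> }
>
> console.log(suma(rango(1, 10)));
> // → 55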
The first version of the program was extremely obscure, whereas this last one is almost English: `log` to the console the `suma` of the `rango` of numbers 1 to 10. (We will see in later chapters how to define operations like `suma` and `rango`.) A good programming language helps the programmer by allowing them to talk about the actions the computer has to perform on a higher level. It helps omit uninteresting details, provides convenient building blocks (such as `while` and `console.log`), allows you to define your own building blocks (such as `suma` and `rango`), and makes those blocks easy to compose. ## What is JavaScript? JavaScript was introduced in 1995 as a way to add programs to web pages in the Netscape Navigator browser. The language has since been adopted by all other major web browsers. It has made modern web applications possible: applications with which you can interact directly, without doing a page reload for every action. JavaScript is also used in more traditional websites to provide various forms of interactivity and cleverness. It is important to note that JavaScript has almost nothing to do with the programming language named Java. The similar name was inspired by marketing considerations rather than good judgment. When JavaScript was being introduced, the Java language was being heavily marketed and was gaining popularity. Someone thought it was a good idea to try to ride along on this success. Now we are stuck with the name. After its adoption outside of Netscape, a standard document was written to describe the way the JavaScript language should work, so that the various pieces of software that claimed to support JavaScript were actually talking about the same language. This is called the ECMAScript standard, after the Ecma International organization that did the standardization. In practice, the terms ECMAScript and JavaScript can be used interchangeably; they are two names for the same language. There are those who will say terrible things about JavaScript. Many of these things are true. When I was first starting to write something in JavaScript, I quickly came to despise it. The language would accept almost anything I typed but interpret it in a way that was completely different from what I meant. This had a lot to do with the fact that I did not have a clue what I was doing, of course, but there is a real issue here: JavaScript is ridiculously liberal in what it allows. The idea behind this design was that it would make programming in JavaScript easier for beginners. In actuality, it mostly makes finding problems in your programs harder, because the system will not point them out to you. This flexibility also has its advantages, though. It leaves room for many techniques that are impossible in more rigid languages, and as you will see (for example in Chapter 10), it can be used to overcome some of JavaScript's shortcomings. After learning the language properly and working with it for a while, I have learned to actually like JavaScript. There have been several versions of JavaScript. ECMAScript version 3 was the widely supported version during JavaScript's ascent to dominance, roughly between 2000 and 2010.
During this time, work was underway on an ambitious version 4, which planned a number of radical improvements and extensions to the language. Changing a living, widely used language in such a radical way turned out to be politically difficult, and work on version 4 was abandoned in 2008, leading to the much less ambitious version 5, which came out in 2009. Then, in 2015, a major update was made, including some of the ideas that had been planned for version 4. Since then we have had new, small updates every year. The fact that the language is evolving means that browsers have to constantly keep up, and if you are using an older one, it may not support all the improvements. The language designers are careful not to make any changes that could break existing programs, so new browsers can still run old programs. In this book, I am using the 2017 version of JavaScript. Web browsers are not the only platforms on which JavaScript is used. Some databases, such as MongoDB and CouchDB, use JavaScript as their scripting and query language. Several platforms for desktop and server programming, most notably the Node.js project (the subject of Chapter 20), provide an environment for programming JavaScript outside of the browser. ## Code, and what to do with it Code is the text that makes up programs. Most chapters in this book contain quite a lot of it. I believe reading code and writing code are indispensable parts of learning to program. Try to not just glance over the examples; read them attentively and understand them. This may be slow and confusing at first, but I promise you will quickly get the hang of it. The same goes for the exercises. Do not assume you understand them until you have actually written a working solution. I recommend you try your solutions to the exercises in an actual JavaScript interpreter. That way, you will get immediate feedback on whether what you are doing is working, and, I hope, you will be tempted to experiment and go beyond the exercises. When reading this book in your browser, you can edit (and run) all the example programs by clicking them. If you want to run the programs defined in this book outside of the book's sandbox, some care is required. Many examples stand on their own and should work in any JavaScript environment. But code in later chapters is often written for a specific environment (the browser or Node.js) and can run only there. In addition, many chapters define bigger programs, and the pieces of code that appear in them depend on other pieces or on external files. The sandbox on the website provides links to Zip files containing all the scripts and data files necessary to run the code for a given chapter. ## An overview of this book This book contains roughly three parts. The first 12 chapters discuss the JavaScript language itself. The next seven chapters are about web browsers and the way JavaScript is used to program them. Finally, two chapters are devoted to Node.js, another environment in which to program JavaScript.
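As a small, non-authoritative illustration of the earlier advice about trying code outside the sandbox: a self-contained snippet such as the summing program from this introduction can be saved to a file and run with Node.js, the environment introduced in Chapter 20. The file name here is invented.

> // saved as, e.g., sumar.js (made-up file name); run from a terminal with:
> //   node sumar.js
> let total = 0, cuenta = 1;
> while (cuenta <= 10) {
>   total += cuenta;
>   cuenta += 1;
> }
> console.log(total);
> // → 55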
Throughout the book there are five project chapters, which describe larger example programs to give you a taste of real programming. In order of appearance, we will work on building a delivery robot, a programming language, a platform game, a paint program, and a dynamic website.

The language part of the book starts with four chapters that introduce the basic structure of the JavaScript language. They introduce control structures (such as the `while` word you already saw in this introduction), functions (writing your own building blocks), and data structures. After these, you will be able to write simple programs. Next, Chapters 5 and 6 introduce techniques for using functions and objects to write more abstract code and keep complexity under control.

After a first project chapter, the first part of the book continues with chapters on error handling and bug fixing, on regular expressions (an important tool for working with text), on modularity (another defense against complexity), and on asynchronous programming (dealing with events that take time). A second project chapter concludes the first part of the book.

The second part, Chapters 13 to 19, describes the tools that JavaScript in a browser has access to. You will learn to display things on the screen (Chapters 14 and 17), respond to user input (Chapter 15), and communicate over the network (Chapter 18). There are again two project chapters in this part. After that, Chapter 20 describes Node.js, and Chapter 21 builds a small web system using that tool.

## Typographic conventions

In this book, text written in a `monospaced` font represents elements of programs; sometimes these are self-contained fragments, and sometimes they just refer to parts of a nearby program. Programs (of which you have already seen a few) are written like this:

> const factorial = function(numero) {
>   if (numero == 0) {
>     return 1;
>   } else {
>     return factorial(numero - 1) * numero;
>   }
> };

Sometimes, to show the output that a program produces, the expected output is written after it, with two slashes and an arrow in front:

> console.log(factorial(8));
> // → 40320

Good luck!

Below the surface of the machine, the program moves. Without effort, it expands and contracts. In great harmony, electrons scatter and regroup. The figures on the monitor are but ripples on the water. The essence stays invisibly below the surface.

Inside the computer's world, there is only data. You can read data, modify data, create new data, but anything that isn't data cannot be mentioned. All this data is stored as long sequences of bits and is therefore fundamentally alike.

Bits are any kind of thing that can have two values, usually described as zeros and ones. Inside the computer, they take forms such as a high or low electrical charge, a strong or weak signal, or a shiny or dull spot on the surface of a CD. Any piece of discrete information can be reduced to a sequence of zeros and ones and thus represented in bits.

For example, we can express the number 13 in bits.
It works the same way as a decimal number, but instead of 10 different digits you have only 2, and the weight of each increases by a factor of 2 from right to left. Here are the bits that make up the number 13, with the weight of each digit shown below it:

>    0   0   0   0   1   1   0   1
>  128  64  32  16   8   4   2   1

So that is the binary number 00001101, or 8 + 4 + 1, or 13.

## Values

Imagine a sea of bits, an ocean of them. A typical modern computer has more than 30 billion bits in its volatile data storage (working memory). Nonvolatile storage (the hard disk or equivalent) tends to have a few more orders of magnitude.

To be able to work with such quantities of bits without getting lost, we must separate them into chunks that represent pieces of information. In a JavaScript environment, those chunks are called values. Although all values are made of bits, they play different roles. Every value has a type that determines its role. Some values are numbers, some are pieces of text, some are functions, and so on.

To create a value, you merely have to invoke its name. This is convenient. You don't have to gather building material for your values or pay for them. You just call for one and, whoosh, there you have it. They are not really created out of thin air, of course. Every value has to be stored somewhere, and if you want to use a gigantic number of them at the same time, you might run out of memory. Fortunately, that is a problem only if you need them all at the same time. As soon as you stop using a value, it will dissipate, leaving behind its bits to be recycled as building material for the next generation of values.

This chapter introduces the atomic elements of JavaScript programs, that is, the simple value types and the operators that act on such values.

## Numbers

Values of the number type are, as you might expect, numeric values. In a JavaScript program, they are written like this: `13` Use that in a program and it will cause the bit pattern that represents the number 13 to be created inside the computer's memory.

JavaScript uses a fixed number of bits, specifically 64 of them, to store a single number value. There is only a finite amount of patterns you can make with 64 bits, which means that the number of different numbers that can be represented is limited. For N decimal digits, the number of values that can be represented is 10^N. Likewise, given 64 binary digits, we can represent 2^64 different numbers, which is about 18 quintillion (an 18 with 18 zeros after it). That is a lot.

Computer memory used to be much smaller than it is today, and people tended to use groups of 8 or 16 bits to represent their numbers. It was easy to accidentally overflow that limitation, ending up with a number that did not fit into the given number of bits. Today, even computers that fit in your pocket have plenty of memory, so you are free to use 64-bit chunks, and you only need to worry about overflow when dealing with truly astronomical numbers.

Not all whole numbers below 18 quintillion fit in a JavaScript number, though. Those bits also store negative numbers, so one bit indicates the sign of the number. A bigger problem is that nonwhole numbers must be represented as well. To do this, some of the bits are used to store the position of the decimal point. The largest whole number that can actually be stored is in the range of nine quadrillion (15 zeros), which is still immense.
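To make that limit tangible, here is a small sketch of my own (not part of the original text). It uses the standard constant `Number.MAX_SAFE_INTEGER`, which the book has not introduced at this point, to show where whole numbers stop being exact:

> // The largest whole number that is guaranteed to be stored exactly
> console.log(Number.MAX_SAFE_INTEGER);
> // → 9007199254740991
> // Past that boundary, adding 1 may not change the stored value
> console.log(9007199254740992 + 1);
> // → 9007199254740992

That loss of exactness is the same kind of imprecision the following paragraphs ask you to keep in mind.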
Fractional numbers are written using a dot: `9.81` For very large or very small numbers, you can also use scientific notation by adding an e (for "exponent") followed by the exponent of the number: `2.998e8` That is 2.998 × 10^8 = 299,800,000.

Calculations with whole numbers (also called integers) smaller than the aforementioned nine quadrillion are guaranteed to always be precise. Unfortunately, calculations with fractional numbers generally are not. Just as π (pi) cannot be precisely expressed by a finite number of decimal digits, many numbers lose some precision when only 64 bits are available to store them. This is a shame, but it causes practical problems only in specific situations. The important thing is to be aware of these limitations and to treat fractional numbers as approximations, not as precise values.

### Arithmetic

The main thing to do with numbers is arithmetic. Arithmetic operations such as addition and multiplication take two number values and produce a new value from them. This is what they look like in JavaScript:

> 100 + 4 * 11

The `+` and `*` symbols are called operators. The first stands for addition and the second for multiplication. Putting an operator between two values will apply the associated operation to those values and produce a new value.

But does the example mean "add 4 and 100, and multiply the result by 11", or is the multiplication applied before the addition? As you may have guessed, the multiplication happens first. But as in mathematics, you can change this order by wrapping the addition in parentheses:

> (100 + 4) * 11

For subtraction there is the `-` operator, and division can be done with the `/` operator. When operators appear together without parentheses, the order in which they are applied is determined by the precedence of the operators. The example shows that multiplication is applied before addition. The `/` operator has the same precedence as `*`. The same goes for `+` and `-`. When operators with the same precedence appear next to each other, as in `1 - 2 + 1`, they are applied from left to right: `(1 - 2) + 1`.

These precedence rules are not something you should worry about. When in doubt, just add parentheses.

There is one more arithmetic operator that you might not recognize right away. The `%` symbol is used to represent the remainder operation. `X % Y` is the remainder of dividing `X` by `Y`. For example, `314 % 100` produces `14`, and `144 % 12` produces `0`. The precedence of the remainder operator is the same as that of multiplication and division. You will often see this operator referred to as modulo.
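As a quick check, here is a short sketch of my own that runs the expressions above through `console.log`; each result follows the precedence and remainder rules just described:

> console.log(100 + 4 * 11);   // multiplication first
> // → 144
> console.log((100 + 4) * 11); // parentheses change the order
> // → 1144
> console.log(1 - 2 + 1);      // same precedence, left to right
> // → 0
> console.log(314 % 100);      // remainder of the division
> // → 14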
### Special numbers

There are three special values in JavaScript that are considered numbers but do not behave like normal numbers. The first two are `Infinity` and `-Infinity`, which represent the positive and negative infinities. `Infinity - 1` is still `Infinity`, and so on. Do not put too much trust in computations that rely on infinities, though. They are not mathematically sound, and they will quickly lead us to the next special number: `NaN`.

`NaN` stands for "not a number", even though it is a value of the number type. You will get this result when, for example, you try to calculate `0 / 0` (zero divided by zero), `Infinity - Infinity`, or any number of other numeric operations that do not produce a meaningful result.

## Strings

The next basic data type is the string. Strings are used to represent text. They are written by enclosing their content in quotes:

> `Debajo en el mar`
> "Descansa en el océano"
> 'Flota en el océano'

You can use single quotes, double quotes, or backticks to mark strings, as long as the quotes at the start and at the end of the string match.

Almost anything can be put between quotes, and JavaScript will make a string value out of it. But a few characters are more difficult. You can imagine how putting quotes between quotes could be tricky. Newlines (the characters you get when you press Enter) can be included only when the string is quoted with backticks (`` ` ``).

To make it possible to include such characters in a string, the following notation is used: whenever a backslash (`\`) is found inside quoted text, it indicates that the character after it has a special meaning. This is known as escaping the character. A quote that is preceded by a backslash will not end the string but will be part of it. When an `n` character is preceded by a backslash, it is interpreted as a newline. Likewise, a `t` after a backslash is interpreted as a tab character. Take the following string as a reference:

``` "Esta es la primera linea\nY esta es la segunda" ```

The actual text it contains is this:

> Esta es la primera linea
> Y esta es la segunda

There are, of course, situations where we want a backslash in a string to be just a backslash, not a special code. If two backslashes follow each other, they collapse together and only one remains in the resulting string value. This is how the string “Un carácter de salto de linea es escrito así: "\n".” can be expressed:

> "Un carácter de salto de linea es escrito así: \"\\n\"."

Strings, too, have to be modeled as a series of bits to be able to exist inside the computer. The way JavaScript does this is based on the Unicode standard. This standard assigns a number to every character you could ever need, including characters from Greek, Arabic, Japanese, Armenian, and so on. If we have a number for every character, a string can be described as a sequence of numbers. And that is what JavaScript does. But there is a complication: JavaScript's representation uses 16 bits per string element, which can hold 2^16 different numbers. But Unicode defines more characters than that, about twice as many at this point. So some characters, such as many emoji, take up two "character positions" in JavaScript strings. We will come back to this subject in Chapter 5.
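The following is a small sketch of my own (the example strings are made up) showing the escape sequences described above, along with the two-position emoji behavior, using the standard `length` property of strings:

> console.log("Esta es la primera linea\nY esta es la segunda");
> // → Esta es la primera linea
> // → Y esta es la segunda
> console.log("Una barra invertida: \\");
> // → Una barra invertida: \
> console.log("abc".length);  // three 16-bit string elements
> // → 3
> console.log("🍎".length);   // one emoji, but two string elements
> // → 2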
Strings cannot be divided, multiplied, or subtracted, but the `+` operator can be used on them. It does not add them, it concatenates them: it glues two strings together. The following line will produce the string `"concatenar"`:

> "con" + "cat" + "e" + "nar"

String values have a set of associated functions (methods) that can be used to perform operations on them. We will come back to these in Chapter 4.

Strings written with single or double quotes behave almost exactly the same way; the only difference is which type of quote you need to escape inside them. Backtick-quoted strings, usually called template literals, can do a few more tricks. Besides allowing line breaks, they can also embed other values.

> `la mitad de 100 es ${100 / 2}`

When you write something inside `${}` in a template literal, its result will be computed, converted to a string, and included at that position. The example above produces "la mitad de 100 es 50".

## Unary operators

Not all operators are symbols. Some are written as words. One example is the `typeof` operator, which produces a string with the name of the type of the value you give it.

> console.log(typeof 4.5)
> // → number
> console.log(typeof "x")
> // → string

We will use `console.log` in the code examples to indicate that we want to see the result of some evaluation. More about that in the next chapter.

The other operators we have seen so far all operated on two values, but `typeof` takes only one. Operators that use two values are called binary operators, while those that take one are called unary operators. The minus operator can be used both as a binary operator and as a unary operator.

> console.log(- (10 - 2))
> // → -8

## Boolean values

It is often useful to have a value that distinguishes between only two possibilities, such as "yes" and "no", or "on" and "off". For this purpose, JavaScript has the Boolean type, which has only two values, true and false, written as those very words.

### Comparison

Here is one way to produce Boolean values:

> console.log(3 > 2)
> // → true
> console.log(3 < 2)
> // → false

The `>` and `<` signs are the traditional symbols for "greater than" and "less than", respectively. Both are binary operators. Applying them results in a Boolean value that indicates whether the condition they express holds.

Strings can be compared in the same way.

> console.log("Aardvark" < "Zoroaster")
> // → true

The way strings are ordered is roughly alphabetic, although not really what you would expect to see in a dictionary: uppercase letters are always "less than" lowercase letters, so `"Z" < "a"`, and nonalphabetic characters (such as `!`, `-`, and so on) are also included in the ordering. When comparing strings, JavaScript goes over the characters from left to right, comparing the Unicode codes one by one.

Other similar operators are `>=` (greater than or equal to), `<=` (less than or equal to), `==` (equal to), and `!=` (not equal to).

> console.log("Itchy" != "Scratchy")
> // → true
> console.log("Manzana" == "Naranja")
> // → false

There is only one value in JavaScript that is not equal to itself, and that is `NaN` ("not a number").

> console.log(NaN == NaN)
> // → false

`NaN` is supposed to denote the result of a nonsensical computation, and as such, it is not equal to the result of any other nonsensical computation.
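A side note of my own, not from the original text: since `NaN` is the only value that is not equal to itself, comparing a value with itself is one way to detect it.

> console.log(NaN != NaN); // only NaN behaves this way
> // → true
> console.log(3 != 3);
> // → false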
### Logical operators

There are also some operations that can be applied to Boolean values themselves. JavaScript supports three logical operators: and, or, and not. They can be used to "reason" about Booleans.

The `&&` operator represents logical and. It is a binary operator, and its result is true only if both of the values given to it are true.

> console.log(true && false)
> // → false
> console.log(true && true)
> // → true

The `||` operator represents logical or. It produces true if either of the values given to it is true.

> console.log(false || true)
> // → true
> console.log(false || false)
> // → false

Not is written as an exclamation mark (`!`). It is a unary operator that flips the value given to it; `!true` produces `false` and `!false` produces `true`.

When these Boolean operators are mixed with arithmetic and other operators, it is not always obvious when parentheses are needed. In practice, you can usually get by with knowing that, of the operators we have seen so far, `||` has the lowest precedence, then comes `&&`, then the comparison operators (`>`, `==`, and so on), and then the rest. This order has been chosen so that, in expressions like the following one, as few parentheses as possible are needed:

> 1 + 1 == 2 && 10 * 10 > 50

The last logical operator we will discuss is not unary, not binary, but ternary, that is, it operates on three values. It is written with a question mark and a colon, like this:

> console.log(true ? 1 : 2);
> // → 1
> console.log(false ? 1 : 2);
> // → 2

This one is called the conditional operator (or sometimes simply the ternary operator, since it is the only such operator in the language). The value to the left of the question mark "decides" which of the other two values will come out. When it is true, it chooses the middle value, and when it is false, it chooses the value on the right.

## Empty values

There are two special values, written `null` and `undefined`, that are used to denote the absence of a meaningful value. They are themselves values, but they carry no information. Many operations in the language that do not produce a meaningful value (we will see some later on) produce `undefined` simply because they have to produce some value.

The difference in meaning between `undefined` and `null` is an accident of JavaScript's design, and it does not matter most of the time. In the cases where you actually have to concern yourself with these values, I mostly recommend treating them as interchangeable.

## Automatic type conversion

In the introduction, I mentioned that JavaScript goes out of its way to accept almost any program you give it, even programs that do odd things. This is nicely demonstrated by the following expressions:

> console.log(8 * null)
> // → 0
> console.log("5" - 1)
> // → 4
> console.log("5" + 1)
> // → 51
> console.log("cinco" * 2)
> // → NaN
> console.log(false == 0)
> // → true

When an operator is applied to the "wrong" type of value, JavaScript will quietly convert that value to the type it needs, using a set of rules that often do not give the result you want or expect. This is called type coercion. The `null` in the first expression becomes `0`, and the `"5"` in the second expression becomes `5` (from string to number).
In the third expression, however, `+` tries string concatenation before numeric addition, so the `1` is converted to `"1"` (from number to string).

When something that does not map to a number in an obvious way (such as `"cinco"` or `undefined`) is converted to a number, you get the value `NaN`. Further arithmetic operations on `NaN` keep producing `NaN`, so if you find yourself getting one of those in an unexpected place, look for accidental type coercions.

When using `==` to compare values of the same type, the outcome is easy to predict: you should get true when both values are the same, except in the case of `NaN`. But when the types differ, JavaScript uses a complicated and confusing set of rules to decide what to do. In most cases, it will just try to convert one of the values to the other value's type. However, when `null` or `undefined` occurs on either side of the operator, it produces true only if both sides are either `null` or `undefined`.

> console.log(null == undefined);
> // → true
> console.log(null == 0);
> // → false

This behavior is often useful. When you want to test whether a value has a real value instead of `null` or `undefined`, you can compare it to `null` with the `==` (or `!=`) operator.

But what if we want to test whether something refers precisely to the value `false`? The rules for converting strings and numbers to Boolean values say that `0`, `NaN`, and the empty string (`""`) count as `false`, while all other values count as `true`. Because of this, expressions like `0 == false` and `"" == false` are also true. When we do not want any automatic type conversion, there are two additional operators: `===` and `!==`. The first tests whether a value is precisely equal to the other, and the second tests whether it is not precisely equal. So `"" === false` is false, as expected.

I recommend using the three-character comparison operators defensively, to prevent unexpected type conversions from tripping you up. But when you are sure the types on both sides will be the same, there is no problem in using the shorter operators.
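To make the difference concrete, here is a short sketch of my own contrasting `==` and `===` on the coerced values just mentioned:

> console.log(0 == false);    // coerced, so equal
> // → true
> console.log(0 === false);   // different types, so not equal
> // → false
> console.log("" == false);
> // → true
> console.log("" === false);
> // → false
> console.log("5" == 5);
> // → true
> console.log("5" === 5);
> // → false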
### Short-circuiting of logical operators

The logical operators `&&` and `||` handle values of different types in a peculiar way. They will convert the value on their left side to Boolean type in order to decide what to do, but depending on the operator and the result of that conversion, they return either the original left-hand value or the right-hand value.

The `||` operator, for example, will return the value to its left when that value can be converted to true and will return the value on its right otherwise. This has the expected effect when the values are Boolean and does something analogous for values of other types.

> console.log(null || "usuario")
> // → usuario
> console.log("Agnes" || "usuario")
> // → Agnes

We can use this functionality as a way to fall back on a default value. If you have a value that might be empty, you can put `||` after it with a replacement value. If the initial value can be converted to false, you will get the replacement instead.

The `&&` operator works similarly but the other way around. When the value to its left is something that converts to false, it returns that value, and otherwise it returns the value on its right.

Another important property of these two operators is that the part to their right is evaluated only when necessary. In the case of `true || X`, no matter what `X` is, even if it is a piece of the program that does something terrible, the result will be true and `X` is never evaluated. The same goes for `false && X`, which is false and will ignore `X`. This is called short-circuit evaluation.

The conditional operator works in a similar way. Of the second and third values, only the one that is selected is evaluated.

We looked at four types of JavaScript values in this chapter: numbers, strings, Booleans, and undefined values. Such values are created by writing their name (`true`, `null`) or value (`13`, `"abc"`). You can combine and transform values with operators. We saw binary operators for arithmetic (`+`, `-`, `*`, `/`, and `%`), string concatenation (`+`), comparison (`==`, `!=`, `===`, `!==`, `<`, `>`, `<=`, `>=`), and logic (`&&`, `||`), as well as several unary operators (`-` to negate a number, `!` to negate logically, and `typeof` to find a value's type) and a ternary operator (`?:`) to pick one of two values based on a third value.

This gives you enough information to use JavaScript as a pocket calculator, but not for much more. The next chapter will start tying these expressions together into basic programs.

And my heart glows bright red under my transparent, translucent skin, and they have to administer 10cc of JavaScript to get me to come back. (I respond well to toxins in the blood.) Man, that stuff is amazing!

In this chapter, we will start doing things that can actually be called programming. We will expand our command of the JavaScript language beyond the nouns and sentence fragments we have seen so far, to the point where we can express meaningful prose.

## Expressions and statements

In Chapter 1, we made values and applied operators to them to get new values. Creating values like this is the main substance of any JavaScript program. But that substance has to be framed in a larger structure to be useful. So that is what we will cover next.

A fragment of code that produces a value is called an expression. Every value that is written literally (such as `22` or `"psicoanálisis"`) is an expression. An expression between parentheses is also an expression, as is a binary operator applied to two expressions or a unary operator applied to a single one.

This shows part of the beauty of a language-based interface. Expressions can contain other expressions in a way very similar to how subsentences in human languages are nested: a subsentence can contain its own subsentences, and so on. This allows us to build expressions that describe arbitrarily complex computations.

If an expression corresponds to a sentence fragment, a JavaScript statement corresponds to a full sentence. A program is a list of statements.

The simplest kind of statement is an expression with a semicolon after it. This is a program:

> 1;
> !false;

It is a useless program, though.
An expression can be content with just producing a value, which can then be used by the enclosing code. A statement stands on its own, so it amounts to something only if it affects the world. It could show something on the screen (that counts as changing the world), or it could change the internal state of the machine in a way that will affect the statements that come after it. These changes are called side effects. The statements in the previous example just produce the values `1` and `true` and then immediately throw them away. This leaves no trace in the world at all. When you run this program, nothing observable happens.

In some cases, JavaScript allows you to omit the semicolon at the end of a statement. In other cases, it has to be there, or the next line will be treated as part of the same statement. The rules for when it can be safely omitted are somewhat complex and error-prone. So in this book, every statement that needs a semicolon will always get one. I recommend you do the same, at least until you have learned more about the subtleties of missing semicolons.

## Bindings

How does a program keep an internal state? How does it remember things? So far we have seen how to produce new values from old ones, but this does not change the old values, and the new value has to be used immediately or it will dissipate again. To catch and hold values, JavaScript provides a thing called a binding, or variable:

> let atrapado = 5 * 5;

That is a second kind of statement. The special word (keyword) `let` indicates that this sentence is going to define a binding. It is followed by the name of the binding and, if we want to immediately give it a value, by an `=` operator and an expression.

The previous statement creates a binding called `atrapado` and uses it to grab hold of the number that is produced by multiplying 5 by 5.

After a binding has been defined, its name can be used as an expression. The value of such an expression is the value the binding currently holds. Here is an example:

> let diez = 10;
> console.log(diez * diez);
> // → 100

When a binding points at a value, that does not mean it is tied to that value forever. The `=` operator can be used at any time on existing bindings to disconnect them from their current value and have them point to a new one:

> let humor = "ligero";
> console.log(humor);
> // → ligero
> humor = "oscuro";
> console.log(humor);
> // → oscuro

You should imagine bindings as tentacles rather than boxes. They do not contain values; they grasp them, so two bindings can refer to the same value. A program can access only the values that it still has a reference to. When you need to remember something, you grow a tentacle to hold on to it or you reattach one of your existing tentacles to it.

Let's look at another example. To remember the number of dollars that Luigi still owes you, you create a binding. And then, when he pays back $35, you give this binding a new value:

> let deudaLuigi = 140;
> deudaLuigi = deudaLuigi - 35;
> console.log(deudaLuigi);
> // → 105

When you define a binding without giving it a value, the tentacle has nothing to grasp, so it ends in thin air. If you ask for the value of an empty binding, you will get the value `undefined`.

A single `let` statement may define multiple bindings.
The definitions must be separated by commas.

> let uno = 1, dos = 2;
> console.log(uno + dos);
> // → 3

The words `var` and `const` can also be used to create bindings, in a way similar to `let`.

> var nombre = "Ayda";
> const saludo = "Hola ";
> console.log(saludo + nombre);
> // → Hola Ayda

The first, `var` (short for "variable"), is the way bindings were declared in pre-2015 JavaScript. We will get back to the precise way it differs from `let` in the next chapter. For now, remember that it mostly does the same thing, but we will rarely use it in this book because it has some confusing properties.

The word `const` stands for constant. It defines a constant binding, which points at the same value for as long as it lives. This is useful for bindings that give a name to a value so that you can easily refer to it later.

## Binding names

Binding names can be any word. Digits can be part of binding names (`catch22` is a valid name, for example), but the name must not start with a digit. A binding name may include dollar signs (`$`) or underscores (`_`), but no other punctuation or special characters.

Words with a special meaning, such as `let`, are keywords, and they may not be used as binding names. There are also a number of words that are "reserved for use" in future versions of JavaScript, which also cannot be used as binding names. The full list of keywords and reserved words is rather long:

> break case catch class const continue debugger default delete do else enum export extends false finally for function if implements import interface in instanceof let new package private protected public return static super switch this throw true try typeof var void while with yield

Don't worry about memorizing this list. When creating a binding produces an unexpected syntax error, check whether you are trying to define a reserved word.

## The environment

The collection of bindings and their values that exist at a given time is called the environment. When a program starts up, this environment is not empty. It always contains bindings that are part of the language standard, and most of the time it also has bindings that provide ways to interact with the surrounding system. For example, in a browser, there are functions to interact with the currently loaded website and to read mouse and keyboard input.

## Functions

A lot of the values provided by the default environment have the type function. A function is a piece of program wrapped in a value. Such values can be applied in order to run the wrapped program. For example, in a browser environment, the binding `prompt` holds a function that shows a little dialog box asking for user input. It is used like this:

> prompt("Introducir contraseña");

Executing a function is also known as invoking, calling, or applying it. You can call a function by putting parentheses after an expression that produces a function value. Usually you will directly use the name of the binding that holds the function. The values between the parentheses are given to the program inside the function. In the example, the `prompt` function uses the string that we give it as the text to show in the dialog box.
Values given to functions are called arguments. Different functions might need a different number of arguments or different types of arguments.

The `prompt` function is not used much in modern web programming, mostly because you have no control over the way the resulting dialog looks, but it can be helpful in toy programs and experiments.

## The console.log function

In the examples, I used `console.log` to output values. Most JavaScript systems (including all modern web browsers and Node.js) provide a `console.log` function that writes out its arguments to some text output device. In browsers, that output lands in the JavaScript console. This part of the browser interface is hidden by default, but most browsers open it when you press F12 or, on a Mac, Command-Option-I. If that does not work, search through the menus for an item named "developer tools" or something similar.

When running the examples (or your own code) on the pages of this book, `console.log` output will be shown after the example, instead of in the browser's JavaScript console.

> let x = 30;
> console.log("el valor de x es", x);
> // → el valor de x es 30

Although binding names cannot contain period characters, `console.log` does have one. This is because `console.log` is not a simple binding. It is actually an expression that retrieves the `log` property from the value held by the `console` binding. We will find out exactly what that means in Chapter 4.

## Return values

Showing a dialog box or writing text to the screen is a side effect. Many functions are useful because of the side effects they produce. Functions may also produce values, in which case they do not need to have a side effect to be useful. For example, the function `Math.max` takes any number of number arguments and returns the greatest of them.

> console.log(Math.max(2, 4));
> // → 4

When a function produces a value, it is said to return that value. Anything that produces a value is an expression in JavaScript, which means function calls can be used within larger expressions. Here a call to `Math.min`, which is the opposite of `Math.max`, is used as part of an addition expression:

> console.log(Math.min(2, 4) + 100);
> // → 102

The next chapter explains how to write your own functions.

## Control flow

When your program contains more than one statement, the statements are executed as if they were a story, from top to bottom. This example program has two statements. The first asks the user for a number, and the second, which is executed after the first, shows the square of that number.

> let elNumero = Number(prompt("Elige un numero"));
> console.log("Tu número es la raiz cuadrada de " + elNumero * elNumero);

The `Number` function converts a value to a number. We need that conversion because the result of `prompt` is a string value, and we want a number. There are similar functions called `String` and `Boolean` that convert values to those types.
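To make these conversion functions concrete, here is a small sketch of my own showing what `Number`, `String`, and `Boolean` produce for a few illustrative inputs:

> console.log(Number("32"));    // a string that looks like a number
> // → 32
> console.log(Number("loro"));  // not a valid number
> // → NaN
> console.log(String(23));      // number to string
> // → 23
> console.log(Boolean(""));     // the empty string converts to false
> // → false
> console.log(Boolean("loro")); // any other string converts to true
> // → true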
Here is the (rather trivial) schematic representation of straight-line control flow:

## Conditional execution

Not all programs are straight roads. We may, for example, want to create a branching road, where the program takes the proper branch based on the situation at hand. This is called conditional execution.

Conditional execution is created with the `if` keyword in JavaScript. In the simple case, we want some code to be executed if, and only if, a certain condition holds. We might, for example, want to show the square of the input only if the input is actually a number.

> let elNumero = Number(prompt("Elige un numero"));
> if (!Number.isNaN(elNumero)) {
>   console.log("Tu número es la raiz cuadrada de " + elNumero * elNumero);
> }

With this modification, if you enter the word "loro", no output is shown.

The `if` keyword executes or skips a statement depending on the value of a Boolean expression. The deciding expression is written after the keyword, between parentheses, followed by the statement to execute.

The `Number.isNaN` function is a standard JavaScript function that returns `true` only if the argument it is given is `NaN`. It so happens that the `Number` function returns `NaN` when you give it a string that does not represent a valid number. Thus, the condition translates to "unless `elNumero` is not-a-number, do this".

The statement below the `if` is wrapped in braces (`{` and `}`) in this example. These can be used to group any number of statements into a single statement, called a block. You could also have omitted them in this case, since they hold only a single statement, but to avoid having to think about whether they are needed, most JavaScript programmers use them in every wrapped statement like this. We will follow that convention in most of this book, except for the occasional one-liner.

> if (1 + 1 == 2) console.log("Es verdad");
> // → Es verdad

You often won't just have code that runs when a condition holds true, but also code that handles the other case. This alternative path is represented by the second arrow in the diagram. The `else` keyword can be used, together with `if`, to create two separate, alternative execution paths.

> let elNumero = Number(prompt("Elige un numero"));
> if (!Number.isNaN(elNumero)) {
>   console.log("Tu número es la raiz cuadrada de " + elNumero * elNumero);
> } else {
>   console.log("Ey. Por qué no me diste un número?");
> }

If we have more than two paths to choose from, multiple `if`/`else` pairs can be "chained" together. Here is an example:

> let numero = Number(prompt("Elige un numero"));
> if (numero < 10) {
>   console.log("Pequeño");
> } else if (numero < 100) {
>   console.log("Mediano");
> } else {
>   console.log("Grande");
> }

The program first checks whether `numero` is less than 10. If it is, it chooses that branch, shows `"Pequeño"`, and is done. If it isn't, it takes the `else` branch, which itself contains a second `if`. If the second condition (`< 100`) holds, that means the number is between 10 and 100, and `"Mediano"` is shown. If it doesn't, the second and last `else` branch is chosen. The schema for this program looks like this:

## while and do loops

Consider a program that outputs all even numbers from 0 to 12. One way to write this is as follows:

> console.log(0);
> console.log(2);
> console.log(4);
> console.log(6);
> console.log(8);
> console.log(10);
> console.log(12);

That works, but the idea of writing a program is to make something less work, not more. If we needed all even numbers less than 1,000, this approach would be unworkable. What we need is a way to run a piece of code multiple times.
This form of control flow is called a loop. Looping control flow allows us to go back to some point in the program where we were before and repeat it with our current program state. If we combine this with a binding that counts, we can do something like this:

> let numero = 0;
> while (numero <= 12) {
>   console.log(numero);
>   numero = numero + 2;
> }
> // → 0
> // → 2
> // … etcetera

A statement starting with the keyword `while` creates a loop. The word `while` is followed by an expression in parentheses and then a statement, much like `if`. The loop keeps entering that statement as long as the expression produces a value that gives `true` when converted to Boolean.

The `numero` binding demonstrates the way a binding can track the progress of a program. Every time the loop repeats, `numero` gets a value that is 2 more than its previous value. At the beginning of every repetition, it is compared with the number 12 to decide whether the program's work is finished.

As an example that actually does something useful, we can now write a program that calculates and shows the value of 2^10 (2 to the 10th power). We use two bindings: one to keep track of our result and one to count how often we have multiplied this result by 2. The loop tests whether the second binding has reached 10 yet and, if not, updates both bindings.

> let resultado = 1;
> let contador = 0;
> while (contador < 10) {
>   resultado = resultado * 2;
>   contador = contador + 1;
> }
> console.log(resultado);
> // → 1024

The counter could also have started at `1` and checked for `<= 10`, but for reasons that will become apparent in Chapter 4, it is a good idea to get used to counting from 0.

A `do` loop is a control structure similar to a `while` loop. It differs only on one point: a `do` loop always executes its body at least once, and it starts testing whether it should stop only after that first execution. To reflect this, the test appears after the body of the loop:

> let tuNombre;
> do {
>   tuNombre = prompt("Quien eres?");
> } while (!tuNombre);
> console.log(tuNombre);

This program will force you to enter a name. It will ask again and again until it gets something that is not an empty string. Applying the `!` operator will convert a value to Boolean type before negating it, and all strings except `""` convert to `true`. This means the loop keeps going round until you provide a non-empty name.

## Indenting code

In the examples, I have been adding spaces in front of statements that are part of some larger statement. These are not required; the computer will accept the program just fine without them. In fact, even the line breaks in programs are optional. You could write a program as a single immense line if you felt like it.

The role of this indentation inside blocks is to make the structure of the code stand out. In code where new blocks are opened inside other blocks, it can become hard to see where one block ends and another begins. With proper indentation, the visual shape of a program corresponds to the shape of the blocks inside it. I like to use two spaces for every open block, but tastes differ: some people use four spaces, and some people use tab characters. The important thing is that each new block adds the same amount of space.
> if (false != true) {
>   console.log("Esto tiene sentido.");
>   if (1 < 2) {
>     console.log("Ninguna sorpresa alli.");
>   }
> }

Most code editors (including the one in this book) will help by automatically indenting new lines the proper amount.

## for loops

Many loops follow the pattern shown in the `while` examples. First a "counter" binding is created to track the progress of the loop. Then comes a `while` loop, usually with a test expression that checks whether the counter has reached its end value. At the end of the loop body, the counter is updated to track progress.

Because this pattern is so common, JavaScript and similar languages provide a slightly shorter and more comprehensive form, the `for` loop:

> for (let numero = 0; numero <= 12; numero = numero + 2) {
>   console.log(numero);
> }
> // → 0
> // → 2
> // … etcetera

This program is exactly equivalent to the earlier even-number-printing example. The only change is that all the statements that are related to the "state" of the loop are grouped together after the `for`.

The parentheses after a `for` keyword must contain two semicolons. The part before the first semicolon initializes the loop, usually by defining a binding. The second part is the expression that checks whether the loop must continue. The final part updates the state of the loop after every iteration. In most cases, this is shorter and clearer than a `while` construct.

This is the code that computes 2^10 using `for` instead of `while`:

> let resultado = 1;
> for (let contador = 0; contador < 10; contador = contador + 1) {
>   resultado = resultado * 2;
> }
> console.log(resultado);
> // → 1024

## Breaking out of a loop

Having the looping condition produce `false` is not the only way a loop can finish. There is a special statement called `break` that has the effect of immediately jumping out of the enclosing loop.

This program illustrates the `break` statement. It finds the first number that is both greater than or equal to 20 and divisible by 7.

> for (let actual = 20; ; actual = actual + 1) {
>   if (actual % 7 == 0) {
>     console.log(actual);
>     break;
>   }
> }
> // → 21

Using the remainder operator (`%`) is an easy way to test whether a number is divisible by another number. If it is, the remainder of their division is zero.

The `for` construct in the example does not have a part that checks for the end of the loop. This means the loop will never stop unless the `break` statement inside it is executed. If you were to remove that `break` statement or accidentally write an end condition that always produces `true`, your program would get stuck in an infinite loop. A program stuck in an infinite loop will never finish running, which is usually a bad thing.

If you create an infinite loop in one of the examples on these pages, you will usually be asked whether you want to stop the script after a few seconds. If that fails, you will have to close the tab you are working in, or on some browsers close your whole browser, in order to recover.

The `continue` keyword is similar to `break` in that it influences the progress of a loop. When `continue` is encountered in a loop body, control jumps out of the body and continues with the loop's next iteration.
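The text gives no example for `continue`, so here is a small sketch of my own: it skips the numbers divisible by 3 and prints the rest.

> for (let numero = 0; numero <= 10; numero = numero + 1) {
>   if (numero % 3 == 0) {
>     continue; // skip this iteration and go on to the next one
>   }
>   console.log(numero);
> }
> // → 1
> // → 2
> // → 4
> // … etcetera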
## Updating bindings succinctly

Especially when looping, a program often needs to "update" a binding to hold a value based on that binding's previous value.

> contador = contador + 1;

JavaScript provides a shortcut for this:

> contador += 1;

Similar shortcuts work for many other operators, such as `resultado *= 2` to double `resultado` or `contador -= 1` to count downward. This allows us to shorten our counting example a little more:

> for (let numero = 0; numero <= 12; numero += 2) {
>   console.log(numero);
> }

For `contador += 1` and `contador -= 1`, there are even shorter equivalents: `contador++` and `contador--`.

## Dispatching on a value with switch

It is not uncommon for code to look like this:

> if (x == "valor1") accion1();
> else if (x == "valor2") accion2();
> else if (x == "valor3") accion3();
> else accionPorDefault();

There is a construct called `switch` that is intended to express such a "dispatch" in a more direct way. Unfortunately, the syntax JavaScript uses for this (which it inherited from the C/Java line of programming languages) is somewhat awkward; a chain of `if` statements may well look better. Here is an example:

> switch (prompt("Como esta el clima?")) {
>   case "lluvioso":
>     console.log("Recuerda salir con un paraguas.");
>     break;
>   case "soleado":
>     console.log("Vistete con poca ropa.");
>   case "nublado":
>     console.log("Ve afuera.");
>     break;
>   default:
>     console.log("Tipo de clima desconocido!");
>     break;
> }

You may put any number of `case` labels inside the block opened by `switch`. The program will start executing at the label that corresponds to the value that `switch` was given, or at `default` if no matching value is found. It will continue executing, even across other labels, until it reaches a `break` statement. In some cases, such as the `"soleado"` case in the example, this can be used to share some code between cases (it recommends going outside for both sunny and cloudy weather). But be careful: it is easy to forget a `break`, which will cause the program to execute code you do not want executed.

## Capitalization

Binding names may not contain spaces, yet it is often helpful to use multiple words to clearly describe what the binding represents. These are pretty much your choices for writing a binding name with several words in it:

> pequeñatortugaverde pequeña_tortuga_verde PequeñaTortugaVerde pequeñaTortugaVerde

The first style can be hard to read. I rather like the look of the underscores, though that style is a little painful to type. The standard JavaScript functions, and most JavaScript programmers, follow the bottom style: they capitalize every word except the first. It is not hard to get used to little things like that, and code with mixed naming styles can be jarring to read, so we will follow this convention.

In a few cases, such as the `Number` function, a binding's first letter is also capitalized. This was done to mark the function as a constructor. What a constructor is will become clear in Chapter 6. For now, the important thing is not to be bothered by this apparent lack of consistency.
## Comments

Often, raw code does not convey all the information you want a program to convey to human readers, or it conveys it in such a cryptic way that people might not understand it. At other times, you might just want to include some related thoughts as part of your program. This is what comments are for.

A comment is a piece of text that is part of a program but is completely ignored by the computer. JavaScript has two ways of writing comments. To write a single-line comment, you can use two slash characters (`//`) and then the comment text after them.

> let balanceDeCuenta = calcularBalance(cuenta);
> // Es un claro del bosque donde canta un río
> balanceDeCuenta.ajustar();
> // Cuelgan enloquecidamente de las hierbas harapos de plata
> let reporte = new Reporte();
> // Donde el sol de la orgullosa montaña luce:
> añadirAReporte(balanceDeCuenta, reporte);
> // Un pequeño valle espumoso de luz.

A `//` comment goes only to the end of the line. A section of text between `/*` and `*/` will be ignored in its entirety, regardless of whether it contains line breaks. This is useful for adding blocks of information about a file or a chunk of program.

> /*
>   Primero encontré este número garabateado en la parte posterior de un viejo
>   cuaderno. Desde entonces, a menudo lo he visto, apareciendo en números de
>   teléfono y en los números de serie de productos que he comprado.
>   Obviamente me gusta, así que decidí quedármelo.
> */
> const miNumero = 11213;

You now know that a program is built out of statements, which themselves sometimes contain more statements. Statements tend to contain expressions, which in turn can be built out of smaller expressions.

Putting statements one after another gives you a program that is executed from top to bottom. You can introduce disturbances in the flow of control by using conditional statements (`if`, `else`, and `switch`) and loops (`while`, `do`, and `for`).

Bindings can be used to file pieces of data under a name, and they are useful for tracking state in your program. The environment is the set of bindings that are defined. JavaScript systems always put a number of useful standard bindings into your environment.

Functions are special values that encapsulate a piece of program. You can invoke them by writing `nombreDeLaFuncion(argumento1, argumento2)`. Such a function call is an expression and may produce a value.

If you are unsure how to test your solutions to the exercises, refer to the introduction. Each exercise starts with a problem description. Read it and try to solve the exercise. If you run into problems, consider reading the hints after the exercise. Full solutions to the exercises are not included in this book, but you can find them online at eloquentjavascript.net/code. If you want to learn something from the exercises, I recommend looking at the solutions only after you have solved the exercise, or at least after you have attacked it long and hard enough to have a slight headache.
### Looping a triangle

Write a loop that makes seven calls to `console.log` to output the following triangle:

> #
> ##
> ###
> ####
> #####
> ######
> #######

It may be useful to know that you can find the length of a string by writing `.length` after it:

> let abc = "abc";
> console.log(abc.length);
> // → 3

Most exercises contain a piece of code that you can modify to solve the exercise. Remember that you can click code blocks to edit them.

`// Tu código aqui.`

You can start with a program that prints out the numbers 1 to 7, which you can derive by making a few modifications to the even-number-printing example given earlier in the chapter, where the `for` loop was introduced. Now consider the equivalence between numbers and strings of hash characters. You can go from 1 to 2 by adding 1 (`+= 1`). You can go from `"#"` to `"##"` by adding a character (`+= "#"`). Thus, your solution can closely follow the number-printing program.

### FizzBuzz

Write a program that uses `console.log` to print all the numbers from 1 to 100, with two exceptions. For numbers divisible by 3, print `"Fizz"` instead of the number, and for numbers divisible by 5 (and not 3), print `"Buzz"` instead.

When you have that working, modify your program to print `"FizzBuzz"` for numbers that are divisible by both 3 and 5 (and still print `"Fizz"` or `"Buzz"` for numbers divisible by only one of them).

(This is actually an interview question that has been claimed to weed out a significant percentage of programmer candidates. So if you can solve it, your labor market value just went up.)

`// Tu código aquí.`

Going over the numbers is clearly a looping job, and selecting what to print is a matter of conditional execution. Remember the trick of using the remainder operator (`%`) to check whether a number is divisible by another number (has a remainder of zero).

In the first version, there are three possible outcomes for every number, so you will have to create an `if`/`else if`/`else` chain. The second version of the program has a straightforward solution and a clever one. The simple way is to add another conditional "branch" to precisely test the given condition. For the clever method, build up a string containing the word or words to output, and print either this word or the number if there is no word, potentially making good use of the `||` operator.

### Chessboard

Write a program that creates a string that represents an 8×8 grid, using newline characters to separate lines. At each position of the grid there is either a space or a "#" character. The characters should form a chessboard. Passing this string to `console.log` should show something like this:

>  # # # #
> # # # # 
>  # # # #
> # # # # 
>  # # # #
> # # # # 
>  # # # #
> # # # # 

When you have a program that generates this pattern, define a binding `tamaño = 8` and change the program so that it works for any `tamaño`, outputting a grid of the given width and height.

`// Tu código aquí.`

You can build the string by starting with an empty string (`""`) and repeatedly adding characters. A newline character is written `"\n"`.

To work with two dimensions, you will need a loop inside of a loop. Put braces around the bodies of both loops to make it easy to see where they start and end. Try to properly indent these bodies.
The order of the loops must follow the order in which we build up the string (line by line, left to right, top to bottom). So the outer loop handles the lines, and the inner loop handles the characters on a single line. You will need two bindings to track your progress. To know whether to put a space or a hash sign at a given position, you could test whether the sum of the two counters is even ( `% 2` ).

Terminating a line by adding a newline character must happen after the line has been built up, so do this after the inner loop but inside the outer loop.

People think that computer science is the art of geniuses, but the actual reality is the opposite: it is just many people doing things that build on each other, like a wall of small stones.

Functions are the bread and butter of JavaScript programming. The concept of wrapping a piece of program in a value has many uses. It gives us a way to structure larger programs, to reduce repetition, to associate names with subprograms, and to isolate these subprograms from each other.

The most obvious application of functions is defining new vocabulary. Creating new words in prose is usually bad style, but in programming it is indispensable. On average, a typical adult Spanish speaker has some 20,000 words in their vocabulary. Few programming languages come with 20,000 commands built in. And the vocabulary that is available tends to be more precisely defined, and thus less flexible, than in human language. Therefore, we usually have to introduce new concepts to avoid repeating ourselves too much.

## Defining a function

A function definition is a regular binding where the value of the binding is a function. For example, this code defines `cuadrado` to refer to a function that produces the square of a given number:

> const cuadrado = function(x) { return x * x; }; console.log(cuadrado(12)); // → 144

A function is created with an expression that starts with the keyword `function` . Functions have a set of parameters (in this case, only `x` ) and a body, which contains the statements that are to be executed when the function is called. The function body of a function created this way must always be wrapped in braces, even when it consists of only a single statement.

A function can have multiple parameters or no parameters at all. In the following example, `hacerSonido` does not list any parameter names, whereas `potencia` lists two:

> const hacerSonido = function() { console.log("Pling!"); };
> hacerSonido(); // → Pling!
> const potencia = function(base, exponente) { let resultado = 1; for (let cuenta = 0; cuenta < exponente; cuenta++) { resultado *= base; } return resultado; };
> console.log(potencia(2, 10)); // → 1024

Some functions produce a value, such as `potencia` and `cuadrado` , and some do not, such as `hacerSonido` , whose only result is a side effect. A `return` statement determines the value the function returns. When control comes across such a statement, it immediately jumps out of the current function and gives the returned value to the code that called the function. A `return` statement without an expression after it causes the function to return `undefined` .
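As a quick, hedged illustration of that last point (this sketch is not from the book, and the function name is invented):

> function clasificar(x) {
>   if (x < 0) return;      // a bare return: the call produces undefined
>   return x * 2;
> }
> console.log(clasificar(5));  // → 10
> console.log(clasificar(-1)); // → undefined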
Functions that do not have a `return` statement at all, such as `hacerSonido` , similarly return `undefined` .

Parameters to a function behave like regular bindings, but their initial values are given by the caller of the function, not by the code in the function itself.

## Bindings and scopes

Each binding has a scope, which is the part of the program in which the binding is visible. For bindings defined outside of any function or block, the scope is the whole program: you can refer to such bindings wherever you want. These are called global.

But bindings created as function parameters or declared inside a function can be referenced only in that function. These are called local. Every time the function is called, new instances of these bindings are created. This provides some isolation between functions: each function call acts in its own little world (its local environment) and can often be understood without knowing a lot about what is going on in the global environment.

Bindings declared with `let` and `const` are in fact local to the block in which they are declared, so if you create one of those inside a loop, the code before and after the loop cannot "see" them. In pre-2015 JavaScript, only functions created new scopes, so old-style bindings, created with the `var` keyword, are visible throughout the whole function in which they appear, or throughout the global scope if they are not inside a function.

> let x = 10; if (true) { let y = 20; var z = 30; console.log(x + y + z); // → 60
> } // y no es visible desde aqui
> console.log(x + z); // → 40

Each scope can "look out" into the scope around it, so `x` is visible inside the block in the example. The exception is when multiple bindings have the same name: in that case, the code can see only the innermost one. For example, when the code inside the `dividirEnDos` function refers to `numero` , it is seeing its own `numero` , not the `numero` in the global scope.

> const dividirEnDos = function(numero) { return numero / 2; }; let numero = 10;
> console.log(dividirEnDos(100)); // → 50
> console.log(numero); // → 10

### Nested scope

JavaScript distinguishes not just between global and local bindings. Blocks and functions can be created inside other blocks and functions, producing multiple degrees of locality.

For example, this function, which outputs the ingredients needed to make a batch of hummus, has another function inside it:

> const humus = function(factor) { const ingrediente = function(cantidad, unidad, nombre) { let cantidadIngrediente = cantidad * factor; if (cantidadIngrediente > 1) { unidad += "s"; } console.log(`${cantidadIngrediente} ${unidad} ${nombre}`); }; ingrediente(1, "lata", "garbanzos"); ingrediente(0.25, "taza", "tahini"); ingrediente(0.25, "taza", "jugo de limón"); ingrediente(1, "clavo", "ajo"); ingrediente(2, "cucharada", "aceite de oliva"); ingrediente(0.5, "cucharadita", "comino"); };

The code inside the `ingrediente` function can see the `factor` binding from the outer function, but its local bindings, such as `unidad` or `cantidadIngrediente` , are not visible to the outer function. In short, each local scope can also see all the local scopes that contain it.
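To make the earlier remark about `let` and `const` being local to their block concrete, here is a small sketch (not from the book; the names are invented):

> for (let paso = 0; paso < 3; paso++) {
>   let mensaje = "paso " + paso;   // local to the loop body
>   console.log(mensaje);
> }
> // console.log(paso);    // would fail: paso is not visible outside the loop
> // console.log(mensaje); // likewise not visible here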
The set of bindings visible inside a block is determined by the place of that block in the program text. Each local scope can also see all the local scopes that contain it, and all scopes can see the global scope. This approach to binding visibility is called lexical scoping.

## Functions as values

A function binding simply acts as a name for a specific piece of the program. Such a binding is defined once and never changed. This makes it easy to confuse the function and its name.

But the two are different. A function value can do all the things that other values can do: you can use it in arbitrary expressions, not just call it. It is possible to store a function value in a new binding, pass it as an argument to a function, and so on. Similarly, a binding that holds a function is still just a regular binding and can be assigned a new value, like so:

> let lanzarMisiles = function() { sistemaDeMisiles.lanzar("ahora"); }; if (modoSeguro) { lanzarMisiles = function() {/* no hacer nada */}; }

In Chapter 5, we will discuss the interesting things that can be done by passing function values to other functions.

## Declaration notation

There is a slightly shorter way to create a function binding. When the `function` keyword is used at the start of a statement, it works differently.

> function cuadrado(x) { return x * x; }

This is a function declaration. The statement defines the binding `cuadrado` and points it at the given function. It is slightly easier to write and does not require a semicolon after the function.

There is one subtlety with this form of function definition.

> console.log("El futuro dice:", futuro()); function futuro() { return "Nunca tendran autos voladores"; }

This code works, even though the function is defined below the code that uses it. Function declarations are not part of the regular top-to-bottom flow of control. They are conceptually moved to the top of their scope and can be used by all the code in that scope. This is sometimes useful because it gives us the freedom to order code in a way that seems meaningful, without worrying about having to define all functions before they are used.

## Arrow functions

There is a third notation for functions, which looks very different from the others. Instead of the `function` keyword, it uses an arrow ( `=>` ) made up of an equal sign and a greater-than character (not to be confused with the greater-than-or-equal operator, which is written `>=` ).

> const potencia = (base, exponente) => { let resultado = 1; for (let cuenta = 0; cuenta < exponente; cuenta++) { resultado *= base; } return resultado; };

The arrow comes after the list of parameters and is followed by the function's body. It expresses something like "this input (the parameters) produces this result (the body)".

When there is only one parameter name, you can omit the parentheses around the parameter list. If the body is a single expression rather than a block in braces, that expression will be returned from the function. So these two definitions of `cuadrado` do the same thing:

> const cuadrado1 = (x) => { return x * x; }; const cuadrado2 = x => x * x;

When an arrow function has no parameters at all, its parameter list is just an empty set of parentheses.
> const bocina = () => { console.log("Toot"); };

There is no deep reason to have both arrow functions and `function` expressions in the language. Apart from a minor detail, which we will discuss in Chapter 6, they do the same thing. Arrow functions were added in 2015, mostly to make it possible to write small function expressions in a less verbose way. We will be using them a lot in Chapter 5.

## The call stack

The way control flows through functions is somewhat involved. Let us take a closer look at it. Here is a simple program that makes a few function calls:

> function saludar(quien) { console.log("Hola " + quien); } saludar("Harry"); console.log("Adios");

A run through this program goes roughly like this: the call to `saludar` causes control to jump to the start of that function (line 2). The function calls `console.log` , which takes control, does its job, and then returns control to line 2. There it reaches the end of the `saludar` function, so it returns to the place that called it, which is line 4. The line after that calls `console.log` again. After that returns, the program reaches its end.

We could show the flow of control schematically like this:

> not in a function
>    in saludar
>       in console.log
>    in saludar
> not in a function
>    in console.log
> not in a function

Because a function has to jump back to the place that called it when it returns, the computer must remember the context from which the call happened. In one case, `console.log` has to return to the `saludar` function when it is done. In the other case, it returns to the end of the program.

The place where the computer stores this context is the call stack. Every time a function is called, the current context is stored on top of this "stack". When a function returns, it removes the top context from the stack and uses that context to continue execution.

Storing this stack requires space in the computer's memory. When the stack grows too big, the computer will fail with a message like "out of stack space" or "too much recursion". The following code illustrates this by asking the computer a really hard question that causes an infinite back-and-forth between two functions. Or rather, it would be infinite if the computer had an infinite stack. As it is, we will run out of space, or "blow the stack".

> function gallina() { return huevo(); } function huevo() { return gallina(); } console.log(gallina() + " vino primero."); // → ??

## Optional arguments

The following code is allowed and executes without any problem:

> function cuadrado(x) { return x * x; } console.log(cuadrado(4, true, "erizo")); // → 16

We defined `cuadrado` with only one parameter. Yet when we call it with three, the language does not complain. It ignores the extra arguments and computes the square of the first one.

JavaScript is extremely broad-minded about the number of arguments you can pass to a function. If you pass too many, the extra ones are ignored. If you pass too few, the missing parameters are assigned the value `undefined` .

The downside of this is that it is possible, even likely, that you will accidentally pass the wrong number of arguments to functions. And nobody will tell you about it.
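A small, hedged sketch of that pitfall (the function name is invented; this is not from the book):

> function resta(a, b) { return a - b; }
> console.log(resta(10, 4)); // → 6
> console.log(resta(10));    // → NaN, because b is undefined and 10 - undefined is NaN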
The upside is that this behavior can be used to allow a function to be called with different numbers of arguments. For example, this `menos` function tries to imitate the `-` operator by acting on either one or two arguments:

> function menos(a, b) { if (b === undefined) return -a; else return a - b; }
> console.log(menos(10)); // → -10
> console.log(menos(10, 5)); // → 5

If you write an `=` operator after a parameter, followed by an expression, the value of that expression will replace the argument when it is not given. For example, this version of `potencia` makes its second argument optional. If you do not provide it or pass the value `undefined` , it will default to two, and the function will behave like `cuadrado` .

> function potencia(base, exponente = 2) { let resultado = 1; for (let cuenta = 0; cuenta < exponente; cuenta++) { resultado *= base; } return resultado; }
> console.log(potencia(4)); // → 16
> console.log(potencia(2, 6)); // → 64

In the next chapter, we will see a way in which a function body can get at the whole list of arguments it was passed. This is helpful because it makes it possible for a function to accept any number of arguments. For example, `console.log` does this: it outputs all the values it is given.

> console.log("C", "O", 2); // → C O 2

## Closure

The ability to treat functions as values, combined with the fact that local bindings are re-created every time a function is called, brings up an interesting question. What happens to local bindings when the function call that created them is no longer active?

The following code shows an example of this. It defines a function, `envolverValor` , that creates a local binding. It then returns a function that accesses and returns this local binding.

> function envolverValor(n) { let local = n; return () => local; }
> let envolver1 = envolverValor(1);
> let envolver2 = envolverValor(2);
> console.log(envolver1()); // → 1
> console.log(envolver2()); // → 2

This is allowed and works as you would hope: both instances of the binding can still be accessed. This situation is a good demonstration of the fact that local bindings are created anew for every call, and different calls cannot trample on one another's local bindings.

This feature, being able to reference a specific instance of a local binding in an enclosing scope, is called closure. A function that references bindings from local scopes around it is called a closure. This behavior not only frees you from having to worry about the lifetimes of bindings but also makes it possible to use function values in some creative ways.

With a slight change, we can turn the previous example into a way to create functions that multiply by an arbitrary amount.

> function multiplicador(factor) { return numero => numero * factor; } let duplicar = multiplicador(2); console.log(duplicar(5)); // → 10

The explicit `local` binding from the `envolverValor` example is not really needed, since a parameter is itself a local binding.

Thinking about programs like this takes some practice. A good mental model is to think of function values as containing both the code in their body and the environment in which they are created.
When called, the function body sees its original environment, not the environment in which the call is made. In the example, `multiplicador` is called and creates an environment in which its `factor` parameter is bound to 2. The function value it returns, which is stored in `duplicar` , remembers this environment. So when that is called, it multiplies its argument by 2.

## Recursion

It is perfectly okay for a function to call itself, as long as it does not do so so often that it overflows the stack. A function that calls itself is called recursive. Recursion allows some functions to be written in a different style. Take, for example, this alternative implementation of `potencia` :

> function potencia(base, exponente) { if (exponente == 0) { return 1; } else { return base * potencia(base, exponente - 1); } } console.log(potencia(2, 3)); // → 8

This is rather close to the way mathematicians define exponentiation and arguably describes the concept more clearly than the looping variant. The function calls itself repeatedly with ever smaller exponents to achieve the repeated multiplication.

But this implementation has one problem: in typical JavaScript implementations, it is about three times slower than the version that uses a loop. Running through a simple loop is generally cheaper than calling a function multiple times.

The dilemma of speed versus elegance is an interesting one. You can see it as a kind of continuum between human-friendliness and machine-friendliness. Almost any program can be made faster by making it bigger and more convoluted. The programmer has to decide on an appropriate balance.

In the case of the `potencia` function, the inelegant (looping) version is still fairly simple and easy to read. It does not make much sense to replace it with the recursive version. Often, though, a program deals with such complex concepts that giving up some efficiency in order to make the program more straightforward is helpful.

Worrying about efficiency can be a distraction. It is yet another factor that complicates program design, and when you are doing something that is already difficult, that extra thing to worry about can be paralyzing.

Therefore, always start by writing something that is correct and easy to understand. If you are worried that it is too slow, which it usually is not, since most code simply is not executed often enough to take any significant amount of time, you can measure afterward and improve it if necessary.

Recursion is not always just an inefficient alternative to looping. Some problems really are easier to solve with recursion than with loops. Most often these are problems that require exploring or processing several "branches", each of which might branch out again into even more branches.

Consider this puzzle: by starting from the number 1 and repeatedly either adding 5 or multiplying by 3, an infinite set of numbers can be produced. How would you write a function that, given a number, tries to find a sequence of such additions and multiplications that produces that number? For example, the number 13 could be reached by first multiplying by 3 and then adding 5 twice, whereas the number 15 cannot be reached at all.
Here is a recursive solution:

> function encontrarSolucion(objetivo) { function encontrar(actual, historia) { if (actual == objetivo) { return historia; } else if (actual > objetivo) { return null; } else { return encontrar(actual + 5, `(${historia} + 5)`) || encontrar(actual * 3, `(${historia} * 3)`); } } return encontrar(1, "1"); } console.log(encontrarSolucion(24)); // → (((1 * 3) + 5) * 3)

Note that this program does not necessarily find the shortest sequence of operations. It is satisfied when it finds any sequence that works.

It is okay if you do not see how this program works right away. Let us work through it, since it makes for a great exercise in recursive thinking.

The inner function `encontrar` is the one doing the actual recursing. It takes two arguments: the current number and a string that records how we reached this number. If it finds a solution, it returns a string that shows how to get to the target. If it can find no solution starting from this number, it returns `null` .

To do this, the function performs one of three actions. If the current number is the target number, the current history is a way to reach that target, so it is returned. If the current number is greater than the target, there is no sense in further exploring this branch, since both adding and multiplying will only make the number bigger, so it returns `null` . And finally, if we are still below the target number, the function tries both possible paths that start from the current number by calling itself twice, once for addition and once for multiplication. If the first call returns something that is not `null` , it is returned. Otherwise, the second call is returned, regardless of whether it produces a string or the value `null` .

To better understand how this function produces the effect we are looking for, let us look at all the calls to `encontrar` that are made when searching for a solution for the number 13.

> encontrar(1, "1")
>   encontrar(6, "(1 + 5)")
>     encontrar(11, "((1 + 5) + 5)")
>       encontrar(16, "(((1 + 5) + 5) + 5)")
>         too big
>       encontrar(33, "(((1 + 5) + 5) * 3)")
>         too big
>     encontrar(18, "((1 + 5) * 3)")
>       too big
>   encontrar(3, "(1 * 3)")
>     encontrar(8, "((1 * 3) + 5)")
>       encontrar(13, "(((1 * 3) + 5) + 5)")
>         found!

The indentation indicates the depth of the call stack. The first time `encontrar` is called, it starts by calling itself to explore the solution that starts with `(1 + 5)` . That call will recurse further to explore every continued solution that yields a number less than or equal to the target number. Since it does not find one that hits the target, it returns `null` back to the first call. There the `||` operator causes the call that explores `(1 * 3)` to happen. This search has more luck: its first recursive call, through yet another recursive call, hits upon the target number. That innermost call returns a string, and each of the `||` operators in the intermediate calls passes that string along, ultimately returning the solution.

## Growing functions

There are two more or less natural ways for functions to be introduced into programs.

The first is that you find yourself writing similar code multiple times. We would prefer not to do that. Having more code means more space for mistakes to hide and more material to read for people trying to understand the program.
So we take the repeated functionality, find a good name for it, and put it into a function.

The second way is that you find you need some functionality that you have not written yet and that sounds like it deserves its own function. You will start by naming the function, and then you will write its body. You might even start writing code that uses the function before you actually define the function itself.

How difficult it is to find a good name for a function is a good indication of how clear the concept is that you are trying to wrap. Let us go through an example.

We want to write a program that prints two numbers: the numbers of cows and chickens on a farm, with the words `Vacas` and `Pollos` after them and zeros padded before both numbers so that they are always three digits long.

> 007 Vacas
> 011 Pollos

This asks for a function of two arguments: the number of cows and the number of chickens. Let us get coding.

> function imprimirInventarioGranja(vacas, pollos) { let stringVaca = String(vacas); while (stringVaca.length < 3) { stringVaca = "0" + stringVaca; } console.log(`${stringVaca} Vacas`); let stringPollos = String(pollos); while (stringPollos.length < 3) { stringPollos = "0" + stringPollos; } console.log(`${stringPollos} Pollos`); } imprimirInventarioGranja(7, 11);

Writing `.length` after a string expression will give us the length of that string. Thus, the `while` loops keep adding zeros in front of the number string until it is at least three characters long.

Mission accomplished! But just as we are about to send the farmer the code (along with a hefty invoice), she calls and tells us she has also started keeping pigs, and could we please extend the software to also print pigs?

We sure can. But just as we are in the process of copying and pasting those four lines one more time, we stop and reconsider. There has to be a better way. Here is a first attempt:

> function imprimirEtiquetaAlcochadaConCeros(numero, etiqueta) { let stringNumero = String(numero); while (stringNumero.length < 3) { stringNumero = "0" + stringNumero; } console.log(`${stringNumero} ${etiqueta}`); } function imprimirInventarioGranja(vacas, pollos, cerdos) { imprimirEtiquetaAlcochadaConCeros(vacas, "Vacas"); imprimirEtiquetaAlcochadaConCeros(pollos, "Pollos"); imprimirEtiquetaAlcochadaConCeros(cerdos, "Cerdos"); } imprimirInventarioGranja(7, 11, 3);

It works! But that name, `imprimirEtiquetaAlcochadaConCeros` , is a little awkward. It conflates three things, printing, zero-padding, and adding a label, into a single function.

Instead of lifting out the repeated part of our program wholesale, let us try to pick out a single concept.

> function alcocharConCeros(numero, amplitud) { let string = String(numero); while (string.length < amplitud) { string = "0" + string; } return string; } function imprimirInventarioGranja(vacas, pollos, cerdos) { console.log(`${alcocharConCeros(vacas, 3)} Vacas`); console.log(`${alcocharConCeros(pollos, 3)} Pollos`); console.log(`${alcocharConCeros(cerdos, 3)} Cerdos`); } imprimirInventarioGranja(7, 16, 3);

A function with a nice, obvious name like `alcocharConCeros` makes it easier for someone who reads the code to figure out what it does. And such a function is useful in more situations than just this specific program. For example, you could use it to help print nicely aligned tables of numbers.

How smart and versatile should our function be?
We could write anything from a terribly simple function that can only pad a number to be three characters wide to a complicated generalized number-formatting system that handles fractional numbers, negative numbers, alignment of decimal dots, padding with different characters, and so on.

A useful principle is not to add cleverness unless you are absolutely sure you are going to need it. It can be tempting to write general "frameworks" for every bit of functionality you come across. Resist that urge. You will not get any real work done that way; you will just be writing code that you never use.

## Functions and side effects

Functions can be roughly divided into those that are called for their side effects and those that are called for their return value. (Though it is definitely also possible to both have side effects and return a value from the same function.)

The first helper function in the farm example, `imprimirEtiquetaAlcochadaConCeros` , is called for its side effect: it prints a line. The second version, `alcocharConCeros` , is called for its return value. It is no coincidence that the second is useful in more situations than the first. Functions that create values are easier to combine in new ways than functions that directly perform side effects.

A pure function is a specific kind of value-producing function that not only has no side effects but also does not rely on side effects from other code; for example, it does not read global bindings whose value might change. A pure function has the pleasant property that, when called with the same arguments, it always produces the same value (and does not do anything else). A call to such a function can be substituted by its return value without changing the meaning of the code. When you are not sure that a pure function is working correctly, you can test it by simply calling it and know that if it works in that context, it will work in any context. Non-pure functions tend to require more scaffolding to test.

Still, there is no need to feel bad when writing functions that are not pure or to wage a holy war to purge them from your code. Side effects are often useful. There would be no way to write a pure version of `console.log` , for example, and `console.log` is good to have. Some operations are also easier to express in an efficient way when we use side effects, so computing speed can be a reason to avoid purity.

This chapter taught you how to write your own functions. The `function` keyword, when used as an expression, can create a function value. When used as a statement, it can be used to declare a binding and give it a function as its value. Arrow functions are yet another way to create functions.

> // Define f para sostener un valor de función
> const f = function(a) { console.log(a + 2); };
> // Declara g para ser una función
> function g(a, b) { return a * b * 3.5; }
> // Un valor de función menos verboso
> let h = a => a % 3;

A key aspect of understanding functions is understanding scopes. Each block creates a new scope. Parameters and bindings declared in a given scope are local and not visible from the outside.
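To restate that last point in code, here is a small sketch (not from the book; the names are invented). A parameter and a binding declared inside a function exist only while that function runs and cannot be reached from outside it.

> function duplicarLocal(n) {      // n is local to this function
>   let resultado = n * 2;         // so is resultado
>   return resultado;
> }
> console.log(duplicarLocal(7));   // → 14
> // console.log(resultado);       // would fail: resultado is not visible out here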
Bindings declared with `var` behave differently: they end up in the nearest function scope or in the global scope.

Separating the tasks your program performs into different functions is helpful. You will not have to repeat yourself as much, and functions can help organize a program by grouping the code into pieces that do specific things.

### Minimum

The previous chapter introduced the standard function `Math.min` that returns its smallest argument. We can build something like that now. Write a function `min` that takes two arguments and returns their minimum.

> // Tu codigo aqui.
> console.log(min(0, 10)); // → 0
> console.log(min(0, -10)); // → -10

### Recursion

We have seen that `%` (the remainder operator) can be used to test whether a number is even or odd by using `% 2` to see whether it is divisible by two. Here is another way to define whether a positive whole number is even or odd: zero is even, one is odd, and for any other number N, its evenness is the same as that of N - 2.

Define a recursive function `esPar` corresponding to this description. The function should accept a single parameter (a positive, whole number) and return a Boolean.

Test it on 50 and 75. See how it behaves on -1. Why? Can you think of a way to fix this?

> // Tu codigo aqui.
> console.log(esPar(50)); // → true
> console.log(esPar(75)); // → false
> console.log(esPar(-1)); // → ??

Your function will likely look somewhat similar to the inner `encontrar` function in the recursive `encontrarSolucion` example in this chapter, with an `if` / `else if` / `else` chain that tests which of the three cases applies. The final `else` , corresponding to the third case, makes the recursive call. Each of the branches should contain a `return` statement or be arranged in some other way so that a specific value is returned.

When given a negative number, the function will recurse again and again, passing itself an ever more negative number, thus getting further and further away from returning a result. It will eventually run out of stack space and abort the program.

### Bean counting

You can get the Nth character, or letter, of a string by writing `"string"[N]` . The returned value will be a string containing only one character (for example, `"f"` ). The first character has position zero, which causes the last one to be found at position `string.length - 1` . In other words, a two-character string has length 2, and its characters have positions 0 and 1.

Write a function `contarFs` that takes a string as its only argument and returns a number that indicates how many uppercase "F" characters there are in the string.

Next, write a function called `contarCaracteres` that behaves like `contarFs` , except it takes a second argument that indicates the character that is to be counted (rather than counting only uppercase "F" characters). Rewrite `contarFs` to make use of this new function.

> // Tu código aquí.
> console.log(contarFs("FFC")); // → 2
> console.log(contarCaracteres("kakkerlak", "k")); // → 4

Your function will need a loop that looks at every character in the string. It can run from an index of zero to one below the string's length ( `< string.length` ). If the character at the current position is the same as the one the function is looking for, it adds 1 to a counter variable. Once the loop has finished, the counter can be returned.

Take care to make all the bindings used in the function local to the function by properly declaring them with the `let` or `const` keyword.
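Following the hint above, one possible shape for these two functions is sketched below. This is a hedged sketch rather than the book's official solution, and your own version may well differ.

> function contarCaracteres(string, caracter) {
>   let contador = 0;
>   for (let i = 0; i < string.length; i++) {
>     if (string[i] == caracter) contador += 1;  // count matching characters
>   }
>   return contador;
> }
> function contarFs(string) {
>   return contarCaracteres(string, "F");
> }
> console.log(contarFs("FFC"));                    // → 2
> console.log(contarCaracteres("kakkerlak", "k")); // → 4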
On two occasions I have been asked, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" [...] I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

Numbers, Booleans, and strings are the atoms that data structures are built from. Many types of information require more than one atom, though. Objects allow us to group values, including other objects, to build more complex structures.

The programs we have built so far have been limited by the fact that they were operating only on simple data types. This chapter will introduce basic data structures. By the end of it, you will know enough to start writing useful programs.

The chapter will work through a more or less realistic programming example, introducing new concepts as they apply to the problem at hand. The example code will often build on functions and bindings that were introduced earlier in the text.

## The weresquirrel

Every now and then, usually between eight and ten in the evening, Jacques finds himself transforming into a small furry rodent with a bushy tail.

On the one hand, Jacques is quite glad that he does not have classic lycanthropy. Turning into a squirrel causes fewer problems than turning into a wolf. Instead of having to worry about accidentally eating the neighbor (that would be awkward), he worries about being eaten by the neighbor's cat. After two occasions where he woke up on a precariously thin branch in the crown of an oak, naked and disoriented, he has taken to locking the doors and windows of his room at night and putting a few walnuts on the floor to keep himself busy.

That takes care of the cat and tree problems. But Jacques would prefer to get rid of his condition entirely. The irregular occurrences of the transformation make him suspect that they might be triggered by something in particular. For a while, he believed that it happened only on days when he had been near oak trees. But avoiding oak trees did not stop the problem.

Switching to a more scientific approach, Jacques has started keeping a daily log of everything he does on a given day and whether he changed form. With this data he hopes to narrow down the conditions that trigger the transformations.

The first thing he needs is a data structure to store this information.

## Data sets

To work with a chunk of digital data, we first have to find a way to represent it in our machine's memory. Say, for example, that we want to represent a collection of the numbers 2, 3, 5, 7, and 11.

We could get creative with strings; after all, strings can have any length, so we can put a lot of data into them, and use `"2 3 5 7 11"` as our representation. But this is awkward. You would have to somehow extract the digits and convert them back to numbers to access them.

Fortunately, JavaScript provides a data type specifically for storing sequences of values. It is called an array and is written as a list of values between square brackets, separated by commas.
> let listaDeNumeros = [2, 3, 5, 7, 11];
> console.log(listaDeNumeros[2]); // → 5
> console.log(listaDeNumeros[0]); // → 2
> console.log(listaDeNumeros[2 - 1]); // → 3

The notation for getting at the elements inside an array also uses square brackets. A pair of square brackets immediately after an expression, with another expression inside of them, will look up the element in the left-hand expression that corresponds to the index given by the expression in the brackets.

The first index of an array is zero, not one. So the first element is retrieved with `listaDeNumeros[0]` . Zero-based counting has a long tradition in technology and in certain ways makes a lot of sense, but it takes some getting used to. Think of the index as the number of items to skip, counting from the start of the array.

## Properties

We have seen a few suspicious-looking expressions like `miString.length` (to get the length of a string) and `Math.max` (the maximum function) in past chapters. These are expressions that access a property of some value. In the first case, we access the `length` property of the value in `miString` . In the second, we access the property named `max` in the `Math` object (which is a collection of mathematics-related constants and functions).

Almost all JavaScript values have properties. The exceptions are `null` and `undefined` . If you try to access a property on one of these nonvalues, you get an error.

> null.length; // → TypeError: null has no properties

The two main ways to access properties in JavaScript are with a dot and with square brackets. Both `valor.x` and `valor[x]` access a property on `valor` , but not necessarily the same property. The difference is in how `x` is interpreted. When using a dot, the word after the dot is the literal name of the property. When using square brackets, the expression between the brackets is evaluated to get the property name. Whereas `valor.x` fetches the property of `valor` named "x", `valor[x]` tries to evaluate the expression `x` and uses the result, converted to a string, as the property name.

So if you know that the property you are interested in is called color, you say `valor.color` . If you want to extract the property named by the value held in the binding `i` , you say `valor[i]` .

Property names are strings. They can be any string, but the dot notation works only with names that look like valid binding names. So if you want to access a property named 2 or John Doe, you must use square brackets: `valor[2]` or `valor["John Doe"]` .

The elements in an array are stored as properties of the array, using numbers as property names. Because you cannot use the dot notation with numbers and usually want to use a binding that holds the index anyway, you have to use the bracket notation to get at them.

The `length` property of an array tells us how many elements it has. This property name is a valid binding name, and we know its name in advance, so to find the length of an array, you typically write `array.length` because that is easier to write than `array["length"]` .

Both string and array objects contain, in addition to the `length` property, a number of properties that hold function values.
> let ouch = "Ouch";
> console.log(typeof ouch.toUpperCase); // → function
> console.log(ouch.toUpperCase()); // → OUCH

Every string has a `toUpperCase` property. When called, it will return a copy of the string in which all letters have been converted to uppercase. There is also `toLowerCase` , which goes the other way.

Interestingly, even though the call to `toUpperCase` does not pass any arguments, the function somehow has access to the string `"Ouch"` , the value whose property we called. How this works is described in Chapter 6.

Properties that contain functions are generally called methods of the value they belong to, as in " `toUpperCase` is a method of a string".

This example demonstrates two methods you can use to manipulate arrays:

> let secuencia = [1, 2, 3]; secuencia.push(4); secuencia.push(5);
> console.log(secuencia); // → [1, 2, 3, 4, 5]
> console.log(secuencia.pop()); // → 5
> console.log(secuencia); // → [1, 2, 3, 4]

The `push` method adds values to the end of an array, and the `pop` method does the opposite, removing the last value in the array and returning it.

These somewhat silly names are the traditional terms for operations on a stack. A stack, in programming, is a data structure that allows you to push values into it and pop them out again in the opposite order, so that whatever was added last is removed first. Stacks are common in programming; you might remember the call stack from the previous chapter, which is an instance of the same idea.

## Objects

Back to the weresquirrel. A set of daily log entries can be represented as an array. But the entries do not consist of just a number or a string: each entry needs to store a list of activities and a Boolean value that indicates whether Jacques turned into a squirrel or not. Ideally, we would like to group these together into a single value and then put those grouped values into an array of log entries.

Values of the type object are arbitrary collections of properties. One way to create an object is by using braces as an expression.

> let dia1 = { ardilla: false, eventos: ["trabajo", "toque un arbol", "pizza", "salir a correr"] };
> console.log(dia1.ardilla); // → false
> console.log(dia1.lobo); // → undefined
> dia1.lobo = false;
> console.log(dia1.lobo); // → false

Inside the braces, there is a list of properties separated by commas. Each property has a name followed by a colon and a value. When an object is written over multiple lines, indenting it like in the example helps with readability. Properties whose names are not valid binding names or valid numbers have to be quoted.

> let descripciones = { trabajo: "Fui a trabajar", "toque un arbol": "Toque un arbol" };

This means that braces have two meanings in JavaScript. At the start of a statement, they start a block of statements. In any other position, they describe an object. Fortunately, it is rarely useful to start a statement with an object in braces, so the ambiguity between these two is not much of a problem.

Reading a property that does not exist will give you the value `undefined` . It is possible to assign a value to a property expression with the `=` operator. This will replace the property's value if it already existed or create a new property on the object if it did not.
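As an aside, the difference between dot and bracket access described earlier can be seen in a small sketch (the object and its names are invented for illustration):

> let gato = {nombre: "Michi", edad: 3};
> let clave = "edad";
> console.log(gato.nombre);    // the literal property "nombre" → Michi
> console.log(gato[clave]);    // evaluates clave, so it reads "edad" → 3
> console.log(gato["nombre"]); // same property as gato.nombre → Michi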
To briefly return to our tentacle model of bindings: property bindings are similar. They grasp values, but other bindings and properties might be holding onto those same values. You may think of objects as octopuses with any number of tentacles, each of which has a name tattooed on it.

The `delete` operator cuts off a tentacle from such an octopus. It is a unary operator that, when applied to the property of an object, will remove the named property from that object. This is not a common thing to do, but it is possible.

> let unObjeto = {izquierda: 1, derecha: 2};
> console.log(unObjeto.izquierda); // → 1
> delete unObjeto.izquierda;
> console.log(unObjeto.izquierda); // → undefined
> console.log("izquierda" in unObjeto); // → false
> console.log("derecha" in unObjeto); // → true

The binary `in` operator, when applied to a string and an object, tells you whether that object has a property with that name. The difference between setting a property to `undefined` and actually deleting it is that, in the first case, the object still has the property (it just does not have a very interesting value), whereas in the second case the property is no longer present and `in` will return `false` .

To find out what properties an object has, you can use the `Object.keys` function. You give it an object, and it returns an array of strings: the object's property names.

> console.log(Object.keys({x: 0, y: 0, z: 2})); // → ["x", "y", "z"]

There is an `Object.assign` function that copies all properties from one object into another.

> let objetoA = {a: 1, b: 2}; Object.assign(objetoA, {b: 3, c: 4}); console.log(objetoA); // → {a: 1, b: 3, c: 4}

Arrays, then, are just a kind of object specialized for storing sequences of things. If you evaluate `typeof []` , it produces `"object"` . You can picture them as long, flat octopuses with all their tentacles in a neat row, labeled with numbers.

We will represent Jacques's journal as an array of objects.

> let diario = [ {eventos: ["trabajo", "toque un arbol", "pizza", "sali a correr", "television"], ardilla: false}, {eventos: ["trabajo", "helado", "coliflor", "lasaña", "toque un arbol", "me cepille los dientes"], ardilla: false}, {eventos: ["fin de semana", "monte la bicicleta", "descanso", "nueces", "cerveza"], ardilla: true}, /* y asi sucesivamente... */ ];

## Mutability

We will get to actual programming real soon now. But first, there is one more piece of theory to understand.

We saw that object values can be modified. The types of values discussed in earlier chapters, such as numbers, strings, and Booleans, are all immutable: it is impossible to change values of those types. You can combine them and derive new values from them, but when you take a specific string value, that value will always remain the same. The text inside it cannot be changed. If you have a string that contains `"gato"` , it is not possible for other code to change a character in your string to make it spell `"rato"` .

Objects work differently. You can change their properties, causing a single object value to have different content at different times.

When we have two numbers, 120 and 120, we can consider them precisely the same number, whether or not they refer to the same physical bits.
With objects, there is a difference between having two references to the same object and having two different objects that contain the same properties. Consider the following code:

> let objeto1 = {valor: 10}; let objeto2 = objeto1; let objeto3 = {valor: 10};
> console.log(objeto1 == objeto2); // → true
> console.log(objeto1 == objeto3); // → false
> objeto1.valor = 15;
> console.log(objeto2.valor); // → 15
> console.log(objeto3.valor); // → 10

The `objeto1` and `objeto2` bindings grasp the same object, which is why changing `objeto1` also changes the value of `objeto2` . They are said to have the same identity. The binding `objeto3` points to a different object, which initially contains the same properties as `objeto1` but lives a separate life.

Bindings can also be changeable or constant, but this is separate from the way their values behave. Even though number values do not change, you can use a `let` binding to keep track of a changing number by changing the value the binding points at. Similarly, though a `const` binding to an object can itself not be changed and will continue to point at the same object, the contents of that object might change.

> const puntuacion = {visitantes: 0, locales: 0};
> // Esto esta bien
> puntuacion.visitantes = 1;
> // Esto no esta permitido
> puntuacion = {visitantes: 1, locales: 1};

When you compare objects with JavaScript's `==` operator, it compares by identity: it will produce `true` only if both objects are precisely the same value. Comparing different objects will return `false` , even if they have identical properties. There is no "deep" comparison operation built into JavaScript that compares objects by contents, but it is possible to write it yourself (which is one of the exercises at the end of this chapter).

## The lycanthrope's log

So Jacques starts up his JavaScript interpreter and sets up the environment he needs to keep his journal.

> let diario = []; function añadirEntrada(eventos, ardilla) { diario.push({eventos, ardilla}); }

Note that the object added to the journal looks a little odd. Instead of declaring properties like `eventos: eventos` , it just gives a property name. This is shorthand that means the same thing: if a property name in brace notation is not followed by a value, its value is taken from the binding with the same name.

So then, every evening at ten, or sometimes the next morning after climbing down from the top shelf of his bookcase, Jacques records the day.

> añadirEntrada(["trabajo", "toque un arbol", "pizza", "sali a correr", "television"], false); añadirEntrada(["trabajo", "helado", "coliflor", "lasaña", "toque un arbol", "me cepille los dientes"], false); añadirEntrada(["fin de semana", "monte la bicicleta", "descanso", "nueces", "cerveza"], true);

Once he has enough data points, he intends to use statistics to find out which of these events may be related to the squirrelifications.

Correlation is a measure of dependence between statistical variables. A statistical variable is not quite the same as a programming variable. In statistics you typically have a set of measurements, and each variable is measured for every measurement.
The correlation between variables is usually expressed as a value that ranges from -1 to 1. Zero correlation means the variables are not related. A correlation of one indicates that the two are perfectly related: if you know one, you also know the other. Negative one also means that the variables are perfectly related but that they are opposites: when one is true, the other is false.

To compute the measure of correlation between two Boolean variables, we can use the phi coefficient (ϕ). This is a formula whose input is a frequency table containing the number of times the different combinations of the variables were observed. The output of the formula is a number between -1 and 1 that describes the correlation.

We could take the event of eating pizza and put that in a frequency table like this, where each number indicates the number of times that combination occurred in our measurements:

| | Pizza | No pizza |
| --- | --- | --- |
| Squirrel | 1 | 4 |
| No squirrel | 9 | 76 |

If we call that table n, we can compute ϕ using the following formula:

ϕ = (n11·n00 − n10·n01) / √(n1•·n0•·n•1·n•0)

(If at this point you are putting the book down to focus on a terrible flashback to 10th grade math class: hold on! I do not intend to torture you with endless pages of cryptic notation; it is just this one formula for now. And even with this one, all we do is turn it into JavaScript.)

The notation n01 indicates the number of measurements where the first variable (squirrelness) is false (0) and the second variable (pizza) is true (1). In the pizza table, n01 is 9.

The value n1• refers to the sum of all measurements where the first variable is true, which is 5 in the example table. Likewise, n•0 refers to the sum of the measurements where the second variable is false.

So for the pizza table, the part above the division line (the dividend) would be 1×76 − 4×9 = 40, and the part below it (the divisor) would be the square root of 5×85×10×80, or √340000. This comes out to ϕ ≈ 0.069, which is tiny. Eating pizza does not appear to have any influence on the transformations.

## Computing correlation

We can represent a two-by-two table in JavaScript with a four-element array ( `[76, 9, 4, 1]` ). We could also use other representations, such as an array containing two two-element arrays ( `[[76, 9], [4, 1]]` ) or an object with property names like `"11"` and `"01"` , but the flat array is simple and makes the expressions that access the table pleasantly short.

We will interpret the indices to the array as two-bit binary numbers, where the leftmost (most significant) digit refers to the squirrel variable and the rightmost (least significant) digit refers to the event variable. For example, the binary number `10` refers to the case where Jacques did turn into a squirrel but the event (say, "pizza") did not occur. This happened four times. And since binary `10` is 2 in decimal notation, we will store this number at index 2 of the array.

This is the function that computes the ϕ coefficient from such an array:

> function phi(tabla) { return (tabla[3] * tabla[0] - tabla[2] * tabla[1]) / Math.sqrt((tabla[2] + tabla[3]) * (tabla[0] + tabla[1]) * (tabla[1] + tabla[3]) * (tabla[0] + tabla[2])); } console.log(phi([76, 9, 4, 1])); // → 0.068599434

This is a direct translation of the ϕ formula into JavaScript. `Math.sqrt` is the square root function, as provided by the `Math` object in a standard JavaScript environment.
We have to add two fields from the table to get fields like n1• because the sums of rows or columns are not stored directly in our data structure.

Jacques kept his journal for three months. The resulting data set is available in the coding sandbox for this chapter, where it is stored in the `JOURNAL` binding, and in a downloadable file.

To extract a two-by-two table for a specific event from the journal, we must loop over all the entries and tally how many times the event occurs in relation to squirrel transformations.

> function tablaPara(evento, diario) { let tabla = [0, 0, 0, 0]; for (let i = 0; i < diario.length; i++) { let entrada = diario[i], index = 0; if (entrada.eventos.includes(evento)) index += 1; if (entrada.ardilla) index += 2; tabla[index] += 1; } return tabla; } console.log(tablaPara("pizza", JOURNAL)); // → [76, 9, 4, 1]

Arrays have an `includes` method that checks whether a given value exists in the array. The function uses that to determine whether the event name it is interested in is part of the event list for a given day.

The body of the loop in `tablaPara` figures out which box in the table each journal entry falls into by checking whether the entry contains the specific event it is interested in and whether the event happens alongside a squirrel incident. The loop then adds one to the correct box in the table.

We now have the tools we need to compute individual correlations. The only remaining step is to find a correlation for every type of event that was recorded in the journal and see whether anything stands out.

## Array loops

In the `tablaPara` function, there is a loop like this:

> for (let i = 0; i < DIARIO.length; i++) {
>   let entrada = DIARIO[i];
>   // Hacer algo con la entrada
> }

This kind of loop is common in classical JavaScript: going over arrays one element at a time is something that comes up a lot, and to do that you would run a counter over the length of the array and pick out each element in turn.

There is a simpler way to write such loops in modern JavaScript.

> for (let entrada of DIARIO) { console.log(`${entrada.eventos.length} eventos.`); }

When a `for` loop looks like this, with the word `of` after a variable definition, it will loop over the elements of the value given after `of` . This works not only for arrays but also for strings and some other data structures. We will discuss how it works in Chapter 6.

## The final analysis

We need to compute a correlation for every type of event that occurs in the data set. To do that, we first need to find every type of event.

> function eventosDiario(diario) { let eventos = []; for (let entrada of diario) { for (let evento of entrada.eventos) { if (!eventos.includes(evento)) { eventos.push(evento); } } } return eventos; } console.log(eventosDiario(DIARIO)); // → ["zanahoria", "ejercicio", "fin de semana", "pan", …]

By going over all the events and adding those that are not already in there to the `eventos` array, the function collects every type of event.

Using that, we can see all the correlations.

> for (let evento of eventosDiario(DIARIO)) { console.log(evento + ":", phi(tablaPara(evento, DIARIO))); }
> // → zanahoria: 0.0140970969
> // → ejercicio: 0.0685994341
> // → fin de semana: 0.1371988681
> // → pan: -0.0757554019
> // → pudin: -0.0648203724
> // and so on...
La mayoría de las correlaciones parecen estar cercanas a cero. Come zanahorias, pan o pudín aparentemente no desencadena la licantropía de ardilla. Parece ocurrir un poco más a menudo los fines de semana. Filtremos los resultados para solo mostrar correlaciones mayores que 0.1 o menores que -0.1. > for (let evento of eventosDiario(DIARIO)) { let correlacion = phi(tablaPara(evento, DIARIO)); if (correlacion > 0.1 || correlacion < -0.1) { console.log(evento + ":", correlacion); } } // → fin de semana: 0.1371988681 // → me cepille los dientes: -0.3805211953 // → dulces: 0.1296407447 // → trabajo: -0.1371988681 // → spaghetti: 0.2425356250 // → leer: 0.1106828054 // → nueces: 0.5902679812 A-ha! Hay dos factores con una correlación que es claramente más fuerte que las otras. Comer nueces tiene un fuerte efecto positivo en la posibilidad de convertirse en una ardilla, mientras que cepillarse los dientes tiene un significativo efecto negativo. Interesante. Intentemos algo. > for (let entrada of DIARIO) { if (entrada.eventos.includes("nueces") && !entrada.eventos.includes("me cepille los dientes")) { entrada.eventos.push("dientes con nueces"); } } console.log(phi(tablaPara("dientes con nueces", DIARIO))); // → 1 Ese es un resultado fuerte. El fenómeno ocurre precisamente cuando Jacques come nueces y no se cepilla los dientes. Si tan solo él no hubiese sido tan flojo con su higiene dental, él nunca habría notado su aflicción. Sabiendo esto, Jacques deja de comer nueces y descubre que sus transformaciones no vuelven. Durante algunos años, las cosas van bien para Jacques. Pero en algún momento él pierde su trabajo. Porque vive en un país desagradable donde no tener trabajo significa que no tiene servicios médicos, se ve obligado a trabajar con a circo donde actua como El Increible Hombre-Ardilla, llenando su boca con mantequilla de maní antes de cada presentación. Un día, harto de esta existencia lamentable, Jacques no puede cambiar de vuelta a su forma humana, salta a través de una grieta en la carpa del circo, y se desvanece en el bosque. Nunca se le ve de nuevo. ## Arrayología avanzada Antes de terminar el capítulo, quiero presentarte algunos conceptos extras relacionados a los objetos. Comenzaré introduciendo algunos en métodos de arrays útiles generalmente. Vimos `push` y `pop` , que agregan y removen elementos en el final de un array, anteriormente en este capítulo. Los métodos correspondientes para agregar y remover cosas en el comienzo de un array se llaman `unshift` y `shift` . > let listaDeTareas = []; function recordar(tarea) { listaDeTareas.push(tarea); } function obtenerTarea() { return listaDeTareas.shift(); } function recordarUrgentemente(tarea) { listaDeTareas.unshift(tarea); } Ese programa administra una cola de tareas. Agregas tareas al final de la cola al llamar `recordar("verduras")` , y cuando estés listo para hacer algo, llamas a `obtenerTarea()` para obtener (y eliminar) el elemento frontal de la cola. La función `recordarUrgentemente` también agrega una tarea pero la agrega al frente en lugar de a la parte posterior de la cola. Para buscar un valor específico, los arrays proporcionan un método `indexOf` (“indice de”). Este busca a través del array desde el principio hasta el final y retorna el índice en el que se encontró el valor solicitado—o -1 si este no fue encontrado. Para buscar desde el final en lugar del inicio, hay un método similar llamado `lastIndexOf` (“ultimo indice de”). 
> console.log([1, 2, 3, 2, 1].indexOf(2)); // → 1 console.log([1, 2, 3, 2, 1].lastIndexOf(2)); // → 3 Tanto `indexOf` como `lastIndexOf` toman un segundo argumento opcional que indica dónde comenzar la búsqueda. Otro método fundamental de array es `slice` (“rebanar”), que toma índices de inicio y fin y retorna un array que solo tiene los elementos entre ellos. El índice de inicio es inclusivo, el índice final es exclusivo. > console.log([0, 1, 2, 3, 4].slice(2, 4)); // → [2, 3] console.log([0, 1, 2, 3, 4].slice(2)); // → [2, 3, 4] Cuando no se proporcione el índice final, `slice` tomará todos los elementos después del índice de inicio. También puedes omitir el índice de inicio para copiar todo el array. El método `concat` (“concatenar”) se puede usar para unir arrays y asi crear un nuevo array, similar a lo que hace el operador `+` para los strings. El siguiente ejemplo muestra tanto `concat` como `slice` en acción. Toma un array y un índice, y retorna un nuevo array que es una copia del array original pero eliminando al elemento en el índice dado: > function remover(array, indice) { return array.slice(0, indice) .concat(array.slice(indice + 1)); } console.log(remover(["a", "b", "c", "d", "e"], 2)); // → ["a", "b", "d", "e"] Si a `concat` le pasas un argumento que no es un array, ese valor sera agregado al nuevo array como si este fuera un array de un solo elemento. ## Strings y sus propiedades Podemos leer propiedades como `length` y `toUpperCase` de valores string. Pero si intentas agregar una nueva propiedad, esta no se mantiene. > let kim = "Kim"; kim.edad = 88; console.log(kim.edad); // → undefined Los valores de tipo string, número, y Booleano no son objetos, y aunque el lenguaje no se queja si intentas establecer nuevas propiedades en ellos, en realidad no almacena esas propiedades. Como se mencionó antes, tales valores son inmutables y no pueden ser cambiados. Pero estos tipos tienen propiedades integradas. Cada valor de string tiene un numero de metodos. Algunos muy útiles son `slice` e `indexOf` , que se parecen a los métodos de array de los mismos nombres. > console.log("panaderia".slice(0, 3)); // → pan console.log("panaderia".indexOf("a")); // → 1 Una diferencia es que el `indexOf` de un string puede buscar por un string que contenga más de un carácter, mientras que el método correspondiente al array solo busca por un elemento único. > console.log("uno dos tres".indexOf("tres")); // → 8 El método `trim` (“recortar”) elimina los espacios en blanco (espacios, saltos de linea, tabulaciones y caracteres similares) del inicio y final de un string. > console.log(" okey \n ".trim()); // → okey La función `alcocharConCeros` del capítulo anterior también existe como un método. Se llama `padStart` (“alcohar inicio”) y toma la longitud deseada y el carácter de relleno como argumentos. > console.log(String(6).padStart(3, "0")); // → 006 Puedes dividir un string en cada ocurrencia de otro string con el metodo `split` (“dividir”), y unirlo nuevamente con `join` (“unir”). > let oracion = "Los pajaros secretarios se especializan en pisotear"; let palabras = oracion.split(" "); console.log(palabras); // → ["Los", "pajaros", "secretarios", "se", "especializan", "en", "pisotear"] console.log(palabras.join(". ")); // → Los. pajaros. secretarios. se. especializan. en. pisotear Se puede repetir un string con el método `repeat` (“repetir”), el cual crea un nuevo string que contiene múltiples copias concatenadas del string original. 
> console.log("LA".repeat(3)); // → LALALA

Ya hemos visto la propiedad `length` en los valores de tipo string. Acceder a los caracteres individuales en un string es similar a acceder a los elementos de un array (con una diferencia que discutiremos en el Capítulo 6).

> let string = "abc"; console.log(string.length); // → 3 console.log(string[1]); // → b

## Parámetros restantes

Puede ser útil para una función aceptar cualquier cantidad de argumentos. Por ejemplo, `Math.max` calcula el máximo de todos los argumentos que le son dados. Para escribir tal función, pones tres puntos antes del último parámetro de la función, así:

> function maximo(...numeros) { let resultado = -Infinity; for (let numero of numeros) { if (numero > resultado) resultado = numero; } return resultado; } console.log(maximo(4, 1, 9, -2)); // → 9

Cuando se llame a una función como esa, el parámetro restante está vinculado a un array que contiene todos los argumentos adicionales. Si hay otros parámetros antes que él, sus valores no serán parte de ese array. Cuando, tal como en `maximo` , sea el único parámetro, contendrá todos los argumentos.

Puedes usar una notación de tres puntos similar para llamar una función con un array de argumentos.

> let numeros = [5, 1, 7]; console.log(maximo(...numeros)); // → 7

Esto “extiende” al array en la llamada de la función, pasando sus elementos como argumentos separados. Es posible incluir un array de esa manera junto con otros argumentos, como en `maximo(9, ...numeros, 2)` .

La notación de corchetes para crear arrays permite al operador de tres puntos extender otro array en el nuevo array:

> let palabras = ["nunca", "entenderas"]; console.log(["tu", ...palabras, "completamente"]); // → ["tu", "nunca", "entenderas", "completamente"]

## El objeto Math

Como hemos visto, `Math` es una bolsa de sorpresas de utilidades relacionadas a los números, como `Math.max` (máximo), `Math.min` (mínimo) y `Math.sqrt` (raíz cuadrada). El objeto `Math` es usado como un contenedor que agrupa funcionalidades relacionadas. Solo hay un objeto `Math` , y casi nunca es útil como un valor. Más bien, proporciona un espacio de nombres para que todas estas funciones y valores no tengan que ser vinculaciones globales.

Tener demasiadas vinculaciones globales “contamina” el espacio de nombres. Cuantos más nombres hayan sido tomados, más probable es que accidentalmente sobrescribas el valor de alguna vinculación existente. Por ejemplo, no es poco probable que quieras nombrar algo `max` en alguno de tus programas. Dado que la función `max` incorporada en JavaScript está escondida dentro del objeto `Math` , no tenemos que preocuparnos por sobrescribirla.

Muchos lenguajes te detendrán, o al menos te advertirán, cuando estés por definir una vinculación con un nombre que ya esté tomado. JavaScript hace esto para vinculaciones que hayas declarado con `let` o `const` pero, perversamente, no para vinculaciones estándar, ni para vinculaciones declaradas con `var` o `function` .

De vuelta al objeto `Math` . Si necesitas hacer trigonometría, `Math` te puede ayudar. Contiene `cos` (coseno), `sin` (seno) y `tan` (tangente), así como sus funciones inversas, `acos` , `asin` y `atan` , respectivamente. El número π (pi)—o al menos la aproximación más cercana que cabe en un número de JavaScript—está disponible como `Math.PI` .
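Un boceto adicional, con nombres hipotéticos que no forman parte del capítulo, puede aclarar dos puntos de la sección anterior: qué recibe el parámetro restante cuando hay parámetros fijos delante de él, y cómo los tres puntos extienden un array en una llamada a una función incorporada como `Math.max` :

> function multiplicarTodos(factor, ...numeros) {
>   // "factor" toma el primer argumento; "numeros" recibe el resto como array
>   let resultado = [];
>   for (let numero of numeros) {
>     resultado.push(numero * factor);
>   }
>   return resultado;
> }
> console.log(multiplicarTodos(2, 1, 2, 3));
> // → [2, 4, 6]
> let medidas = [5, 1, 7];
> console.log(Math.max(...medidas));
> // → 7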
Hay una vieja tradición en la programación de escribir los nombres de los valores constantes en mayúsculas. > function puntoAleatorioEnCirculo(radio) { let angulo = Math.random() * 2 * Math.PI; return {x: radio * Math.cos(angulo), y: radio * Math.sin(angulo)}; } console.log(puntoAleatorioEnCirculo(2)); // → {x: 0.3667, y: 1.966} Si los senos y los cosenos son algo con lo que no estas familiarizado, no te preocupes. Cuando se usen en este libro, en el Capítulo 14, te los explicaré. El ejemplo anterior usó `Math.random` . Esta es una función que retorna un nuevo número pseudoaleatorio entre cero (inclusivo) y uno (exclusivo) cada vez que la llamas. > console.log(Math.random()); // → 0.36993729369714856 console.log(Math.random()); // → 0.727367032552138 console.log(Math.random()); // → 0.40180766698904335 Aunque las computadoras son máquinas deterministas—siempre reaccionan de la misma manera manera dada la misma entrada—es posible hacer que produzcan números que parecen aleatorios. Para hacer eso, la máquina mantiene algun valor escondido, y cada vez que le pidas un nuevo número aleatorio, realiza calculos complicados en este valor oculto para crear un nuevo valor. Esta almacena un nuevo valor y retorna un número derivado de él. De esta manera, puede producir números nuevos y difíciles de predecir de una manera que parece aleatoria. Si queremos un número entero al azar en lugar de uno fraccionario, podemos usar `Math.floor` (que redondea hacia abajo al número entero más cercano) con el resultado de `Math.random` . > console.log(Math.floor(Math.random() * 10)); // → 2 Multiplicar el número aleatorio por 10 nos da un número mayor que o igual a cero e inferior a 10. Como `Math.floor` redondea hacia abajo, esta expresión producirá, con la misma probabilidad, cualquier número desde 0 hasta 9. También están las funciones `Math.ceil` (que redondea hacia arriba hasta llegar al número entero mas cercano), `Math.round` (al número entero más cercano), y `Math.abs` , que toma el valor absoluto de un número, lo que significa que niega los valores negativos pero deja los positivos tal y como estan. ## Desestructurar Volvamos a la función `phi` por un momento: > function phi(tabla) { return (tabla[3] * tabla[0] - tabla[2] * tabla[1]) / Math.sqrt((tabla[2] + tabla[3]) * (tabla[0] + tabla[1]) * (tabla[1] + tabla[3]) * (tabla[0] + tabla[2])); } Una de las razones por las que esta función es incómoda de leer es que tenemos una vinculación apuntando a nuestro array, pero preferiríamos tener vinculaciones para los elementos del array, es decir, `let n00 = tabla[0]` y así sucesivamente. Afortunadamente, hay una forma concisa de hacer esto en JavaScript. > function phi([n00, n01, n10, n11]) { return (n11 * n00 - n10 * n01) / Math.sqrt((n10 + n11) * (n00 + n01) * (n01 + n11) * (n00 + n10)); } Esto también funciona para vinculaciones creadas con `let` , `var` , o `const` . Si sabes que el valor que estas vinculando es un array, puedes usar corchetes para “mirar dentro” del valor, y asi vincular sus contenidos. Un truco similar funciona para objetos, utilizando llaves en lugar de corchetes. > let {nombre} = {nombre: "Faraji", edad: 23}; console.log(nombre); // → Faraji Ten en cuenta que si intentas desestructurar `null` o `undefined` , obtendrás un error, igual como te pasaria si intentaras acceder directamente a una propiedad de esos valores. 
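La desestructuración también se puede usar directamente en la variable de un ciclo `for` / `of` . Este es un boceto mínimo que asume entradas con la misma forma que las del diario del capítulo (la vinculación `registro` es solo un ejemplo ilustrativo):

> let registro = [
>   {eventos: ["trabajo", "pizza"], ardilla: false},
>   {eventos: ["nueces", "leer"], ardilla: true}
> ];
> // Cada entrada se desestructura en sus propiedades al iterar
> for (let {eventos, ardilla} of registro) {
>   console.log(`${eventos.length} eventos, ardilla: ${ardilla}`);
> }
> // → 2 eventos, ardilla: false
> // → 2 eventos, ardilla: true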
## JSON Ya que las propiedades solo agarran su valor, en lugar de contenerlo, los objetos y arrays se almacenan en la memoria de la computadora como secuencias de bits que contienen las direcciónes—el lugar en la memoria—de sus contenidos. Asi que un array con otro array dentro de el consiste en (al menos) una región de memoria para el array interno, y otra para el array externo, que contiene (entre otras cosas) un número binario que representa la posición del array interno. Si deseas guardar datos en un archivo para más tarde, o para enviarlo a otra computadora a través de la red, tienes que convertir de alguna manera estos enredos de direcciones de memoria a una descripción que se puede almacenar o enviar. Supongo, que podrías enviar toda la memoria de tu computadora junto con la dirección del valor que te interesa, pero ese no parece el mejor enfoque. Lo que podemos hacer es serializar los datos. Eso significa que son convertidos a una descripción plana. Un formato de serialización popular llamado JSON (pronunciado “Jason”), que significa JavaScript Object Notation (“Notación de Objetos JavaScript”). Es ampliamente utilizado como un formato de almacenamiento y comunicación de datos en la Web, incluso en otros lenguajes diferentes a JavaScript. JSON es similar a la forma en que JavaScript escribe arrays y objetos, con algunas restricciones. Todos los nombres de propiedad deben estar rodeados por comillas dobles, y solo se permiten expresiones de datos simples—sin llamadas a función, vinculaciones o cualquier otra cosa que involucre computaciones reales. Los comentarios no están permitidos en JSON. Una entrada de diario podria verse así cuando se representa como datos JSON: > { "ardilla": false, "eventos": ["trabajo", "toque un arbol", "pizza", "sali a correr"] } JavaScript nos da las funciones `JSON.stringify` y `JSON.parse` para convertir datos hacia y desde este formato. El primero toma un valor en JavaScript y retorna un string codificado en JSON. La segunda toma un string como ese y lo convierte al valor que este codifica. > let string = JSON.stringify({ardilla: false, eventos: ["fin de semana"]}); console.log(string); // → {"ardilla":false,"eventos":["fin de semana"]} console.log(JSON.parse(string).eventos); // → ["fin de semana"] Los objetos y arrays (que son un tipo específico de objeto) proporcionan formas de agrupar varios valores en un solo valor. Conceptualmente, esto nos permite poner un montón de cosas relacionadas en un bolso y correr alredor con el bolso, en lugar de envolver nuestros brazos alrededor de todas las cosas individuales, tratando de aferrarnos a ellas por separado. La mayoría de los valores en JavaScript tienen propiedades, las excepciones son `null` y `undefined` . Se accede a las propiedades usando `valor.propiedad` o `valor["propiedad"]` . Los objetos tienden a usar nombres para sus propiedades y almacenar más o menos un conjunto fijo de ellos. Los arrays, por el otro lado, generalmente contienen cantidades variables de valores conceptualmente idénticos y usa números (comenzando desde 0) como los nombres de sus propiedades. Hay algunas propiedades con nombre en los arrays, como `length` y un numero de metodos. Los métodos son funciones que viven en propiedades y (por lo general) actuan sobre el valor del que son una propiedad. Puedes iterar sobre los arrays utilizando un tipo especial de ciclo `for` — ``` for (let elemento of array) ``` . 
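Antes de pasar a los ejercicios, un apunte adicional (que no está en el texto original) sobre `JSON.stringify` : acepta un tercer argumento opcional que indica la sangría a usar, lo cual resulta útil para inspeccionar datos. El segundo argumento permite filtrar propiedades, pero aquí lo dejamos en `null` .

> let entrada = {ardilla: false, eventos: ["trabajo", "pizza"]};
> console.log(JSON.stringify(entrada, null, 2));
> // → {
> //     "ardilla": false,
> //     "eventos": [
> //       "trabajo",
> //       "pizza"
> //     ]
> //   }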
### La suma de un rango

La introducción de este libro aludía a lo siguiente como una buena forma de calcular la suma de un rango de números:

> console.log(suma(rango(1, 10)));

Escribe una función `rango` que tome dos argumentos, `inicio` y `final` , y retorne un array que contenga todos los números desde `inicio` hasta (e incluyendo) `final` . Luego, escribe una función `suma` que tome un array de números y retorne la suma de estos números. Ejecuta el programa de ejemplo y ve si realmente retorna 55.

Como una misión extra, modifica tu función `rango` para tomar un tercer argumento opcional que indique el valor de “paso” utilizado cuando construyas el array. Si no se da ningún paso, los elementos suben en incrementos de uno, correspondiendo al comportamiento anterior. La llamada a la función `rango(1, 10, 2)` debería retornar `[1, 3, 5, 7, 9]` . Asegúrate de que también funcione con valores de paso negativos para que `rango(5, 2, -1)` produzca `[5, 4, 3, 2]` .

> // Tu código aquí. console.log(rango(1, 10)); // → [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] console.log(rango(5, 2, -1)); // → [5, 4, 3, 2] console.log(suma(rango(1, 10))); // → 55

Construir un array se realiza más fácilmente al inicializar primero una vinculación a `[]` (un array nuevo y vacío) y llamar repetidamente a su método `push` para agregar un valor. No te olvides de retornar el array al final de la función.

Dado que el límite final es inclusivo, deberías usar el operador `<=` en lugar de `<` para verificar el final de tu ciclo.

El parámetro de paso puede ser un parámetro opcional que por defecto (usando el operador `=` ) tenga el valor 1.

Hacer que `rango` entienda valores de paso negativos es probablemente más fácil de lograr escribiendo dos ciclos por separado—uno para contar hacia arriba y otro para contar hacia abajo—ya que la comparación que verifica si el ciclo está terminado necesita ser `>=` en lugar de `<=` cuando se cuenta hacia abajo. También puede que valga la pena utilizar un paso predeterminado diferente, es decir, -1, cuando el final del rango sea menor que el inicio. De esa manera, `rango(5, 2)` retornaría algo significativo, en lugar de quedarse atascado en un ciclo infinito. Es posible referirse a parámetros anteriores en el valor predeterminado de un parámetro.

### Revirtiendo un array

Los arrays tienen un método `reverse` que cambia al array invirtiendo el orden en que aparecen sus elementos. Para este ejercicio, escribe dos funciones, `revertirArray` y `revertirArrayEnSuLugar` . La primera, `revertirArray` , toma un array como argumento y produce un nuevo array que tiene los mismos elementos pero en el orden inverso. La segunda, `revertirArrayEnSuLugar` , hace lo que hace el método `reverse` : modifica el array dado como argumento invirtiendo sus elementos. Ninguna de las dos puede usar el método `reverse` estándar.

Pensando en las notas acerca de los efectos secundarios y las funciones puras en el capítulo anterior, qué variante esperas que sea útil en más situaciones? Cuál corre más rápido?

> // Tu código aquí. console.log(revertirArray(["A", "B", "C"])); // → ["C", "B", "A"]; let valorArray = [1, 2, 3, 4, 5]; revertirArrayEnSuLugar(valorArray); console.log(valorArray); // → [5, 4, 3, 2, 1]

Hay dos maneras obvias de implementar `revertirArray` . La primera es simplemente pasar a través del array de entrada de adelante hacia atrás y usar el método `unshift` en el nuevo array para insertar cada elemento en su inicio. La segunda es hacer un ciclo sobre el array de entrada de atrás hacia adelante y usar el método `push` .
Iterar sobre un array al revés requiere de una especificación (algo incómoda) del ciclo `for` , como `for (let i = array.length - 1; i >= 0; i--)` .

Revertir el array en su lugar es más difícil. Tienes que tener cuidado de no sobrescribir elementos que necesitarás luego. Usar `revertirArray` o, de lo contrario, copiar todo el array ( `array.slice(0)` es una buena forma de copiar un array) funciona, pero estarías haciendo trampa.

El truco consiste en intercambiar el primer y el último elemento, luego el segundo y el penúltimo, y así sucesivamente. Puedes hacer esto haciendo un ciclo basándote en la mitad de la longitud del array (usa `Math.floor` para redondear—no necesitas tocar el elemento del medio en un array con un número impar de elementos) e intercambiando el elemento en la posición `i` con el de la posición `array.length - 1 - i` . Puedes usar una vinculación local para aferrarte brevemente a uno de los elementos, sobrescribirlo con su imagen espejo, y luego poner el valor de la vinculación local en el lugar donde solía estar la imagen espejo.

### Una lista

Los objetos, como conjuntos genéricos de valores, se pueden usar para construir todo tipo de estructuras de datos. Una estructura de datos común es la lista (no confundir con un array). Una lista es un conjunto anidado de objetos, con el primer objeto conteniendo una referencia al segundo, el segundo al tercero, y así sucesivamente.

> let lista = { valor: 1, resto: { valor: 2, resto: { valor: 3, resto: null } } };

Los objetos resultantes forman una cadena, en la que cada objeto apunta al siguiente. Algo bueno de las listas es que pueden compartir partes de su estructura. Por ejemplo, si creo dos nuevos valores ``` {valor: 0, resto: lista} ``` y ``` {valor: -1, resto: lista} ``` (con `lista` refiriéndose a la vinculación definida anteriormente), ambos son listas independientes, pero comparten la estructura que conforma sus últimos tres elementos. La lista original también sigue siendo una lista válida de tres elementos.

Escribe una función `arrayALista` que construya una estructura de lista como la que se muestra arriba cuando se le da `[1, 2, 3]` como argumento. También escribe una función `listaAArray` que produzca un array de una lista. Luego agrega una función de utilidad `preceder` , que tome un elemento y una lista y cree una nueva lista que agregue el elemento al frente de la lista de entrada, y `posicion` , que tome una lista y un número y retorne el elemento en la posición dada en la lista (con cero refiriéndose al primer elemento) o `undefined` cuando no exista tal elemento.

Si aún no lo has hecho, también escribe una versión recursiva de `posicion` .

> // Tu código aquí. console.log(arrayALista([10, 20])); // → {valor: 10, resto: {valor: 20, resto: null}} console.log(listaAArray(arrayALista([10, 20, 30]))); // → [10, 20, 30] console.log(preceder(10, preceder(20, null))); // → {valor: 10, resto: {valor: 20, resto: null}} console.log(posicion(arrayALista([10, 20, 30]), 1)); // → 20

Crear una lista es más fácil cuando se hace de atrás hacia adelante. Entonces `arrayALista` podría iterar sobre el array hacia atrás (ver ejercicio anterior) y, para cada elemento, agregar un objeto a la lista. Puedes usar una vinculación local para mantener la parte de la lista que se construyó hasta el momento y usar una asignación como ``` lista = {valor: X, resto: lista} ``` para agregar un elemento.

Para correr a través de una lista (en `listaAArray` y `posicion` ), una especificación del ciclo `for` como esta se puede utilizar:

> for (let nodo = lista; nodo; nodo = nodo.resto) {}

Puedes ver cómo eso funciona?
En cada iteración del ciclo, `nodo` apunta a la sublista actual, y el cuerpo puede leer su propiedad `valor` para obtener el elemento actual. Al final de una iteración, `nodo` se mueve a la siguiente sublista. Cuando eso es nulo, hemos llegado al final de la lista y el ciclo termina. La versión recursiva de `posición` , de manera similar, mirará a una parte más pequeña de la “cola” de la lista y, al mismo tiempo, contara atrás el índice hasta que llegue a cero, en cuyo punto puede retornar la propiedad `valor` del nodo que está mirando. Para obtener el elemento cero de una lista, simplemente toma la propiedad `valor` de su nodo frontal. Para obtener el elemento N + 1, toma el elemento N de la lista que este en la propiedad `resto` de esta lista. ### Comparación profunda El operador `==` compara objetos por identidad. Pero a veces preferirias comparar los valores de sus propiedades reales. Escribe una función `igualdadProfunda` que toma dos valores y retorne `true` solo si tienen el mismo valor o son objetos con las mismas propiedades, donde los valores de las propiedades sean iguales cuando comparadas con una llamada recursiva a `igualdadProfunda` . Para saber si los valores deben ser comparados directamente (usa el operador `==` para eso) o si deben tener sus propiedades comparadas, puedes usar el operador `typeof` . Si produce `"object"` para ambos valores, deberías hacer una comparación profunda. Pero tienes que tomar una excepción tonta en cuenta: debido a un accidente histórico, `typeof null` también produce `"object"` . La función `Object.keys` será útil para cuando necesites revisar las propiedades de los objetos para compararlos. > // Tu código aquí. let objeto = {aqui: {hay: "un"}, objeto: 2}; console.log(igualdadProfunda(objeto, objeto)); // → true console.log(igualdadProfunda(objeto, {aqui: 1, object: 2})); // → false console.log(igualdadProfunda(objeto, {aqui: {hay: "un"}, objeto: 2})); // → true Tu prueba de si estás tratando con un objeto real se verá algo así como ``` typeof x == "object" && x != null ``` . Ten cuidado de comparar propiedades solo cuando ambos argumentos sean objetos. En todo los otros casos, puede retornar inmediatamente el resultado de aplicar `===` . Usa `Object.keys` para revisar las propiedades. Necesitas probar si ambos objetos tienen el mismo conjunto de nombres de propiedad y si esos propiedades tienen valores idénticos. Una forma de hacerlo es garantizar que ambos objetos tengan el mismo número de propiedades (las longitudes de las listas de propiedades son las mismas). Y luego, al hacer un ciclo sobre una de las propiedades del objeto para compararlos, siempre asegúrate primero de que el otro realmente tenga una propiedad con ese mismo nombre. Si tienen el mismo número de propiedades, y todas las propiedades en uno también existen en el otro, tienen el mismo conjunto de nombres de propiedad. Retornar el valor correcto de la función se realiza mejor al inmediatamente retornar falso cuando se encuentre una discrepancia y retornar verdadero al final de la función. Tzu-li y Tzu-ssu estaban jactándose del tamaño de sus ultimos programas. ‘Doscientas mil líneas’, dijo Tzu-li, ‘sin contar los comentarios!’ Tzu-ssu respondió, ‘Pssh, el mío tiene casi un millón de líneas ya.’ El Maestro Yuan-Ma dijo, ‘Mi mejor programa tiene quinientas líneas.’ Al escuchar esto, Tzu-li y Tzu-ssu fueron iluminados. 
Hay dos formas de construir un diseño de software: Una forma es hacerlo tan simple de manera que no hayan deficiencias obvias, y la otra es hacerlo tan complicado de manera que obviamente no hayan deficiencias. Un programa grande es un programa costoso, y no solo por el tiempo que se necesita para construirlo. El tamaño casi siempre involucra complejidad, y la complejidad confunde a los programadores. A su vez, los programadores confundidos, introducen errores en los programas. Un programa grande entonces proporciona de mucho espacio para que estos bugs se oculten, haciéndolos difíciles de encontrar. Volvamos rapidamente a los dos últimos programas de ejemplo en la introducción. El primero es auto-contenido y solo tiene seis líneas de largo: > edit & run code by clicking itlet total = 0, cuenta = 1; while (cuenta <= 10) { total += cuenta; cuenta += 1; } console.log(total); El segundo depende de dos funciones externas y tiene una línea de longitud: > console.log(suma(rango(1, 10))); Cuál es más probable que contenga un bug? Si contamos el tamaño de las definiciones de `suma` y `rango` , el segundo programa también es grande—incluso puede que sea más grande que el primero. Pero aún así, argumentaria que es más probable que sea correcto. Es más probable que sea correcto porque la solución se expresa en un vocabulario que corresponde al problema que se está resolviendo. Sumar un rango de números no se trata acerca de ciclos y contadores. Se trata acerca de rangos y sumas. Las definiciones de este vocabulario (las funciones `suma` y `rango` ) seguirán involucrando ciclos, contadores y otros detalles incidentales. Pero ya que expresan conceptos más simples que el programa como un conjunto, son más fáciles de realizar correctamente. ## Abstracción En el contexto de la programación, estos tipos de vocabularios suelen ser llamados abstracciones. Las abstracciones esconden detalles y nos dan la capacidad de hablar acerca de los problemas a un nivel superior (o más abstracto). Como una analogía, compara estas dos recetas de sopa de guisantes: Coloque 1 taza de guisantes secos por persona en un recipiente. Agregue agua hasta que los guisantes esten bien cubiertos. Deje los guisantes en agua durante al menos 12 horas. Saque los guisantes del agua y pongalos en una cacerola para cocinar. Agregue 4 tazas de agua por persona. Cubra la sartén y mantenga los guisantes hirviendo a fuego lento durante dos horas. Tome media cebolla por persona. Cortela en piezas con un cuchillo. Agréguela a los guisantes. Tome un tallo de apio por persona. Cortelo en pedazos con un cuchillo. Agréguelo a los guisantes. Tome una zanahoria por persona. Cortela en pedazos. Con un cuchillo! Agregarla a los guisantes. Cocine por 10 minutos más. Y la segunda receta: Por persona: 1 taza de guisantes secos, media cebolla picada, un tallo de apio y una zanahoria. Remoje los guisantes durante 12 horas. Cocine a fuego lento durante 2 horas en 4 tazas de agua (por persona). Picar y agregar verduras. Cocine por 10 minutos más. La segunda es más corta y fácil de interpretar. Pero necesitas entender algunas palabras más relacionadas a la cocina—remojar, cocinar a fuego lento, picar, y, supongo, verduras. Cuando programamos, no podemos confiar en que todas las palabras que necesitaremos estaran esperando por nosotros en el diccionario. Por lo tanto, puedes caer en el patrón de la primera receta—resolviendo los pasos precisos que debe realizar la computadora, uno por uno, ciego a los conceptos de orden superior que estos expresan. 
En la programación, es una habilidad útil darse cuenta de cuándo estás trabajando en un nivel de abstracción demasiado bajo.

## Abstrayendo la repetición

Las funciones simples, como las hemos visto hasta ahora, son una buena forma de construir abstracciones. Pero a veces se quedan cortas.

Es común que un programa haga algo una determinada cantidad de veces. Puedes escribir un ciclo `for` para eso, de esta manera:

> for (let i = 0; i < 10; i++) { console.log(i); }

Podemos abstraer “hacer algo N veces” como una función? Bueno, es fácil escribir una función que llame a `console.log` N cantidad de veces.

> function repetirLog(n) { for (let i = 0; i < n; i++) { console.log(i); } }

Pero, y si queremos hacer algo más que loggear los números? Ya que “hacer algo” se puede representar como una función y que las funciones solo son valores, podemos pasar nuestra acción como un valor de función.

> function repetir(n, accion) { for (let i = 0; i < n; i++) { accion(i); } } repetir(3, console.log); // → 0 // → 1 // → 2

No es necesario que le pases una función predefinida a `repetir` . A menudo, desearás crear un valor de función en el momento, en su lugar.

> let etiquetas = []; repetir(5, i => { etiquetas.push(`Unidad ${i + 1}`); }); console.log(etiquetas); // → ["Unidad 1", "Unidad 2", "Unidad 3", "Unidad 4", "Unidad 5"]

Esto está estructurado un poco como un ciclo `for` —primero describe el tipo de ciclo, y luego provee un cuerpo. Sin embargo, el cuerpo ahora está escrito como un valor de función, que está envuelto en el paréntesis de la llamada a `repetir` . Por eso es que tiene que cerrarse con la llave de cierre y el paréntesis de cierre. En casos como este ejemplo, donde el cuerpo es una expresión pequeña y única, podrías también omitir las llaves y escribir el ciclo en una sola línea.

## Funciones de orden superior

Las funciones que operan en otras funciones, ya sea tomándolas como argumentos o retornándolas, se denominan funciones de orden superior. Como ya hemos visto que las funciones son valores regulares, no existe nada particularmente notable sobre el hecho de que tales funciones existen. El término proviene de las matemáticas, donde la distinción entre funciones y otros valores se toma más en serio.

Las funciones de orden superior nos permiten abstraer sobre acciones, no solo sobre valores. Estas vienen en varias formas. Por ejemplo, puedes tener funciones que crean nuevas funciones.

> function mayorQue(n) { return m => m > n; } let mayorQue10 = mayorQue(10); console.log(mayorQue10(11)); // → true

Y puedes tener funciones que cambien otras funciones.

> function ruidosa(funcion) { return (...argumentos) => { console.log("llamando con", argumentos); let resultado = funcion(...argumentos); console.log("llamada con", argumentos, ", retorno", resultado); return resultado; }; } ruidosa(Math.min)(3, 2, 1); // → llamando con [3, 2, 1] // → llamada con [3, 2, 1] , retorno 1

Incluso puedes escribir funciones que proporcionen nuevos tipos de flujo de control.

> function aMenosQue(prueba, entonces) { if (!prueba) entonces(); } repetir(3, n => { aMenosQue(n % 2 == 1, () => { console.log(n, "es par"); }); }); // → 0 es par // → 2 es par

Hay un método de array incorporado, `forEach` , que proporciona algo como un ciclo `for` / `of` como una función de orden superior.

> ["A", "B"].forEach(letra => console.log(letra)); // → A // → B

## Conjunto de datos de códigos

Un área donde brillan las funciones de orden superior es en el procesamiento de datos. Para procesar datos, necesitaremos algunos datos reales.
Este capítulo usara un conjunto de datos acerca de códigos—sistema de escrituras como Latin, Cirílico, o Arábico. Recuerdas Unicode del Capítulo 1, el sistema que asigna un número a cada carácter en el lenguaje escrito. La mayoría de estos carácteres están asociados a un código específico. El estandar contiene 140 codigos diferentes—81 de los cuales todavía están en uso hoy, y 59 que son históricos. Aunque solo puedo leer con fluidez los caracteres en Latin, aprecio el hecho de que las personas estan escribiendo textos en al menos 80 diferentes sistemas de escritura, muchos de los cuales ni siquiera reconocería. Por ejemplo, aquí está una muestra de escritura a mano en Tamil. El conjunto de datos de ejemplo contiene algunos piezas de información acerca de los 140 codigos definidos en Unicode. Este esta disponible en la caja de arena para este capítulo como la vinculación `SCRIPTS` . La vinculación contiene un array de objetos, cada uno de los cuales describe un codigo. > { name: "Coptic", ranges: [[994, 1008], [11392, 11508], [11513, 11520]], direction: "ltr", year: -200, living: false, link: "https://en.wikipedia.org/wiki/Coptic_alphabet" } Tal objeto te dice el nombre del codigo, los rangos de Unicode asignados a él, la dirección en la que está escrito, la tiempo de origen (aproximado), si todavía está en uso, y un enlace a más información. La dirección en la que esta escrito puede ser `"ltr"` (left-to-right) para izquierda a derecha, `"rtl"` (right-to-left) para derecha a izquierda (la forma en que se escriben los textos en árabe y en hebreo), o `"ttb"` (top-to-bottom) para de arriba a abajo (como con la escritura de Mongolia). La propiedad `ranges` contiene un array de rangos de caracteres Unicode, cada uno de los cuales es un array de dos elementos que contiene límites inferior y superior. Se asignan los códigos de caracteres dentro de estos rangos al codigo. El limite más bajo es inclusivo (el código 994 es un carácter Copto) y el límite superior es no-inclusivo (el código 1008 no lo es). ## Filtrando arrays Para encontrar los codigos en el conjunto de datos que todavía están en uso, la siguiente función podría ser útil. Filtra hacia afuera los elementos en un array que no pasen una prueba: > function filtrar(array, prueba) { let pasaron = []; for (let elemento of array) { if (prueba(elemento)) { pasaron.push(elemento); } } return pasaron; } console.log(filtrar(SCRIPTS, codigo => codigo.living)); // → [{name: "Adlam", …}, …] La función usa el argumento llamado `prueba` , un valor de función, para llenar una “brecha” en el cálculo—el proceso de decidir qué elementos recolectar. Observa cómo la función `filtrar` , en lugar de eliminar elementos del array existente, crea un nuevo array solo con los elementos que pasan la prueba. Esta función es pura. No modifica el array que se le es dado. Al igual que `forEach` , `filtrar` es un método de array estándar, este esta incorporado como `filter` . El ejemplo definió la función solo para mostrar lo que hace internamente. A partir de ahora, la usaremos así en su lugar: > console.log(SCRIPTS.filter(codigo => codigo.direction == "ttb")); // → [{name: "Mongolian", …}, …] ## Transformando con map Digamos que tenemos un array de objetos que representan codigos, producidos al filtrar el array `SCRIPTS` de alguna manera. Pero queremos un array de nombres, que es más fácil de inspeccionar El método `map` (“mapear”) transforma un array al aplicar una función a todos sus elementos y construir un nuevo array a partir de los valores retornados. 
El nuevo array tendrá la misma longitud que el array de entrada, pero su contenido ha sido mapeado a una nueva forma en base a la función. > function map(array, transformar) { let mapeados = []; for (let elemento of array) { mapeados.push(transformar(elemento)); } return mapeados; } let codigosDerechaAIzquierda = SCRIPTS.filter(codigo => codigo.direction == "rtl"); console.log(map(codigosDerechaAIzquierda, codigo => codigo.name)); // → ["Adlam", "Arabic", "Imperial Aramaic", …] Al igual que `forEach` y `filter` , `map` es un método de array estándar. ## Resumiendo con reduce Otra cosa común que hacer con arrays es calcular un valor único a partir de ellos. Nuestro ejemplo recurrente, sumar una colección de números, es una instancia de esto. Otro ejemplo sería encontrar el codigo con la mayor cantidad de caracteres. La operación de orden superior que representa este patrón se llama reduce (“reducir”)—a veces también llamada fold (“doblar”). Esta construye un valor al repetidamente tomar un solo elemento del array y combinándolo con el valor actual. Al sumar números, comenzarías con el número cero y, para cada elemento, agregas eso a la suma. Los parámetros para `reduce` son, además del array, una función de combinación y un valor de inicio. Esta función es un poco menos sencilla que `filter` y `map` , así que mira atentamente: > function reduce(array, combinar, inicio) { let actual = inicio; for (let elemento of array) { actual = combinar(actual, elemento); } return actual; } console.log(reduce([1, 2, 3, 4], (a, b) => a + b, 0)); // → 10 El método de array estándar `reduce` , que por supuesto corresponde a esta función tiene una mayor comodidad. Si tu array contiene al menos un elemento, tienes permitido omitir el argumento `inicio` . El método tomará el primer elemento del array como su valor de inicio y comienza a reducir a partir del segundo elemento. > console.log([1, 2, 3, 4].reduce((a, b) => a + b)); // → 10 Para usar `reduce` (dos veces) para encontrar el codigo con la mayor cantidad de caracteres, podemos escribir algo como esto: > function cuentaDeCaracteres(codigo) { return codigo.ranges.reduce((cuenta, [desde, hasta]) => { return cuenta + (hasta - desde); }, 0); } console.log(SCRIPTS.reduce((a, b) => { return cuentaDeCaracteres(a) < cuentaDeCaracteres(b) ? b : a; })); // → {name: "Han", …} La función `cuentaDeCaracteres` reduce los rangos asignados a un codigo sumando sus tamaños. Ten en cuenta el uso de la desestructuración en el parámetro lista de la función reductora. La segunda llamada a `reduce` luego usa esto para encontrar el codigo más grande al comparar repetidamente dos scripts y retornando el más grande. El codigo Han tiene más de 89,000 caracteres asignados en el Estándar Unicode, por lo que es, por mucho, el mayor sistema de escritura en el conjunto de datos. Han es un codigo (a veces) usado para texto chino, japonés y coreano. Esos idiomas comparten muchos caracteres, aunque tienden a escribirlos de manera diferente. El consorcio Unicode (con sede en EE.UU.) decidió tratarlos como un único sistema de escritura para ahorrar códigos de caracteres. Esto se llama unificación Han y aún enoja bastante a algunas personas. ## Composabilidad Considera cómo habríamos escrito el ejemplo anterior (encontrar el código más grande) sin funciones de orden superior. El código no es mucho peor. 
> let mayor = null; for (let codigo of SCRIPTS) { if (mayor == null || cuentaDeCaracteres(mayor) < cuentaDeCaracteres(codigo)) { mayor = codigo; } } console.log(mayor); // → {name: "Han", …} Hay algunos vinculaciones más, y el programa tiene cuatro líneas más. Pero todavía es bastante legible. Las funciones de orden superior comienzan a brillar cuando necesitas componer operaciones. Como ejemplo, vamos a escribir código que encuentre el año de origen promedio para los codigos vivos y muertos en el conjunto de datos. > function promedio(array) { return array.reduce((a, b) => a + b) / array.length; } console.log(Math.round(promedio( SCRIPTS.filter(codigo => codigo.living).map(codigo => codigo.year)))); // → 1185 console.log(Math.round(promedio( SCRIPTS.filter(codigo => !codigo.living).map(codigo => codigo.year)))); // → 209 Entonces, los codigos muertos en Unicode son, en promedio, más antiguos que los vivos. Esta no es una estadística terriblemente significativa o sorprendente. Pero espero que aceptes que el código utilizado para calcularlo no es difícil de leer. Puedes verlo como una tubería: comenzamos con todos los codigos, filtramos los vivos (o muertos), tomamos los años de aquellos, los promediamos, y redondeamos el resultado. Definitivamente también podrías haber escribir este codigo como un gran ciclo. > let total = 0, cuenta = 0; for (let codigo of SCRIPTS) { if (codigo.living) { total += codigo.year; cuenta += 1; } } console.log(Math.round(total / cuenta)); // → 1185 Pero es más difícil de ver qué se está calculando y cómo. Y ya que los resultados intermedios no se representan como valores coherentes, sería mucho más trabajo extraer algo así como `promedio` en una función aparte. En términos de lo que la computadora realmente está haciendo, estos dos enfoques también son bastante diferentes. El primero creará nuevos arrays al ejecutar `filter` y `map` , mientras que el segundo solo computa algunos números, haciendo menos trabajo. Por lo general, puedes permitirte el enfoque legible, pero si estás procesando arrays enormes, y haciendolo muchas veces, el estilo menos abstracto podría ser mejor debido a la velocidad extra. ## Strings y códigos de caracteres Un uso del conjunto de datos sería averiguar qué código esta usando una pieza de texto. Veamos un programa que hace esto. Recuerda que cada codigo tiene un array de rangos para los códigos de caracteres asociados a el. Entonces, dado un código de carácter, podríamos usar una función como esta para encontrar el codigo correspondiente (si lo hay): > function codigoCaracter(codigo_caracter) { for (let codigo of SCRIPTS) { if (codigo.ranges.some(([desde, hasta]) => { return codigo_caracter >= desde && codigo_caracter < hasta; })) { return codigo; } } return null; } console.log(codigoCaracter(121)); // → {name: "Latin", …} El método `some` (“alguno”) es otra función de orden superior. Toma una función de prueba y te dice si esa función retorna verdadero para cualquiera de los elementos en el array. Pero cómo obtenemos los códigos de los caracteres en un string? En el Capítulo 1 mencioné que los strings de JavaScript estan codificados como una secuencia de números de 16 bits. Estos se llaman unidades de código. Inicialmente se suponía que un código de carácter Unicode encajara dentro de esa unidad (lo que da un poco más de 65,000 caracteres). Cuando quedó claro que esto no seria suficiente, muchas las personas se resistieron a la necesidad de usar más memoria por carácter. 
Para apaciguar estas preocupaciones, UTF-16, el formato utilizado por los strings de JavaScript, fue inventado. Este describe la mayoría de los caracteres mas comunes usando una sola unidad de código de 16 bits, pero usa un par de dos de esas unidades para otros caracteres. Al dia de hoy UTF-16 generalmente se considera como una mala idea. Parece casi intencionalmente diseñado para invitar a errores. Es fácil escribir programas que pretenden que las unidades de código y caracteres son la misma cosa. Y si tu lenguaje no usa caracteres de dos unidades, esto parecerá funcionar simplemente bien. Pero tan pronto como alguien intente usar dicho programa con algunos menos comunes caracteres chinos, este se rompe. Afortunadamente, con la llegada del emoji, todo el mundo ha empezado a usar caracteres de dos unidades, y la carga de lidiar con tales problemas esta bastante mejor distribuida. Desafortunadamente, las operaciones obvias con strings de JavaScript, como obtener su longitud a través de la propiedad `length` y acceder a su contenido usando corchetes, trata solo con unidades de código. > // Dos caracteres emoji, caballo y zapato let caballoZapato = "🐴👟"; console.log(caballoZapato.length); // → 4 console.log(caballoZapato[0]); // → ((Medio-carácter inválido)) console.log(caballoZapato.charCodeAt(0)); // → 55357 (Código del medio-carácter) console.log(caballoZapato.codePointAt(0)); // → 128052 (Código real para emoji de caballo) El método `charCodeAt` de JavaScript te da una unidad de código, no un código de carácter completo. El método `codePointAt` , añadido despues, si da un carácter completo de Unicode. Entonces podríamos usarlo para obtener caracteres de un string. Pero el argumento pasado a `codePointAt` sigue siendo un índice en la secuencia de unidades de código. Entonces, para hacer un ciclo a traves de todos los caracteres en un string, todavía tendríamos que lidiar con la cuestión de si un carácter ocupa una o dos unidades de código. En el capítulo anterior, mencioné que el ciclo `for` / `of` también se puede usar en strings. Como `codePointAt` , este tipo de ciclo se introdujo en un momento en que las personas eran muy conscientes de los problemas con UTF-16. Cuando lo usas para hacer un ciclo a traves de un string, te da caracteres reales, no unidades de código. > let dragonRosa = "🐉🌹"; for (let caracter of dragonRosa) { console.log(caracter); } // → 🐉 // → 🌹 Si tienes un caracter (que será un string de unidades de uno o dos códigos), puedes usar `codePointAt(0)` para obtener su código. ## Reconociendo texto Tenemos una función `codigoCaracter` y una forma de correctamente hacer un ciclo a traves de caracteres. El siguiente paso sería contar los caracteres que pertenecen a cada codigo. La siguiente abstracción de conteo será útil para eso: > function contarPor(elementos, nombreGrupo) { let cuentas = []; for (let elemento of elementos) { let nombre = nombreGrupo(elemento); let conocido = cuentas.findIndex(c => c.nombre == nombre); if (conocido == -1) { cuentas.push({nombre, cuenta: 1}); } else { cuentas[conocido].cuenta++; } } return cuentas; } console.log(contarPor([1, 2, 3, 4, 5], n => n > 2)); // → [{nombre: false, cuenta: 2}, {nombre: true, cuenta: 3}] La función `contarPor` espera una colección (cualquier cosa con la que podamos hacer un ciclo `for` / `of` ) y una función que calcula un nombre de grupo para un elemento dado. Retorna un array de objetos, cada uno de los cuales nombra un grupo y te dice la cantidad de elementos que se encontraron en ese grupo. 
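Dado que `contarPor` acepta cualquier colección que se pueda recorrer con `for` / `of` , también funciona con strings. Un pequeño ejemplo ilustrativo (que no forma parte del capítulo) agrupando los caracteres de un string:

> console.log(contarPor("aAbBcC!", caracter => {
>   // Los caracteres sin mayúscula/minúscula (como "!") van al grupo "otro"
>   if (caracter.toUpperCase() == caracter.toLowerCase()) return "otro";
>   return caracter == caracter.toUpperCase() ? "mayúscula" : "minúscula";
> }));
> // → [{nombre: "minúscula", cuenta: 3},
> //    {nombre: "mayúscula", cuenta: 3},
> //    {nombre: "otro", cuenta: 1}]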
Utiliza otro método de array— `findIndex` (“encontrar índice”). Este método es algo así como `indexOf` , pero en lugar de buscar un valor específico, este encuentra el primer valor para el cual la función dada retorna verdadero. Como `indexOf` , retorna -1 cuando no se encuentra dicho elemento.

Usando `contarPor` , podemos escribir la función que nos dice qué codigos se usan en una pieza de texto.

> function codigosTexto(texto) { let codigos = contarPor(texto, caracter => { let codigo = codigoCaracter(caracter.codePointAt(0)); return codigo ? codigo.name : "ninguno"; }).filter(({nombre}) => nombre != "ninguno"); let total = codigos.reduce((n, {cuenta}) => n + cuenta, 0); if (total == 0) return "No se encontraron codigos"; return codigos.map(({nombre, cuenta}) => { return `${Math.round(cuenta * 100 / total)}% ${nombre}`; }).join(", "); } console.log(codigosTexto('英国的狗说"woof", 俄罗斯的狗说"тяв"')); // → 61% Han, 22% Latin, 17% Cyrillic

La función primero cuenta los caracteres por nombre, usando `codigoCaracter` para asignarles un nombre, y recurre al string `"ninguno"` para caracteres que no son parte de ningún codigo. La llamada a `filter` deja afuera la entrada para `"ninguno"` del array resultante, ya que no estamos interesados en esos caracteres.

Para poder calcular porcentajes, primero necesitamos la cantidad total de caracteres que pertenecen a un codigo, lo que podemos calcular con `reduce` . Si no se encuentran tales caracteres, la función retorna un string específico. De lo contrario, transforma las entradas de conteo en strings legibles con `map` y luego las combina con `join` .

Ser capaz de pasar valores de función a otras funciones es un aspecto profundamente útil de JavaScript. Nos permite escribir funciones que modelen cálculos con “brechas” en ellas. El código que llama a estas funciones puede llenar estas brechas al proporcionar valores de función.

Los arrays proporcionan varios métodos útiles de orden superior. Puedes usar `forEach` para recorrer los elementos en un array. El método `filter` retorna un nuevo array que contiene solo los elementos que pasan una función de predicado. Transformar un array al poner cada elemento a través de una función se hace con `map` . Puedes usar `reduce` para combinar todos los elementos de un array en un solo valor. El método `some` prueba si algún elemento coincide con una función de predicado determinada. Y `findIndex` encuentra la posición del primer elemento que coincide con un predicado.

### Aplanamiento

Usa el método `reduce` en combinación con el método `concat` para “aplanar” un array de arrays en un único array que tenga todos los elementos de los arrays originales.

> let arrays = [[1, 2, 3], [4, 5], [6]]; // Tu código aquí. // → [1, 2, 3, 4, 5, 6]

### Tu propio ciclo

Escribe una función de orden superior llamada `ciclo` que proporcione algo así como una declaración de ciclo `for` . Esta toma un valor, una función de prueba, una función de actualización y un cuerpo de función. En cada iteración, primero ejecuta la función de prueba en el valor actual del ciclo y se detiene si esta retorna falso. Luego llama al cuerpo de función, dándole el valor actual. Y finalmente, llama a la función de actualización para crear un nuevo valor y comienza desde el principio.

Cuando definas la función, puedes usar un ciclo regular para hacer los ciclos reales.

> // Tu código aquí. ciclo(3, n => n > 0, n => n - 1, console.log); // → 3 // → 2 // → 1

### Cada

De forma análoga al método `some` , los arrays también tienen un método `every` (“cada”).
Este retorna true cuando la función dada devuelve verdadero para cada elemento en el array. En cierto modo, `some` es una versión del operador `||` que actúa en arrays, y `every` es como el operador `&&` . Implementa `every` como una función que tome un array y una función predicado como parámetros. Escribe dos versiones, una usando un ciclo y una usando el método `some` . > function cada(array, test) { // Tu código aquí. } console.log(cada([1, 3, 5], n => n < 10)); // → true console.log(cada([2, 4, 16], n => n < 10)); // → false console.log(cada([], n => n < 10)); // → true Al igual que el operador `&&` , el método `every` puede dejar de evaluar más elementos tan pronto como haya encontrado uno que no coincida. Entonces la versión basada en un ciclo puede saltar fuera del ciclo—con `break` o `return` —tan pronto como se encuentre con un elemento para el cual la función predicado retorne falso. Si el ciclo corre hasta su final sin encontrar tal elemento, sabemos que todos los elementos coinciden y debemos retornar verdadero. Para construir `cada` usando `some` , podemos aplicar las leyes De Morgan, que establecen que `a && b` es igual a `!(!a ||! b)` . Esto puede ser generalizado a arrays, donde todos los elementos del array coinciden si no hay elemento en el array que no coincida. ### Dirección de Escritura Dominante Escriba una función que calcule la dirección de escritura dominante en un string de texto. Recuerde que cada objeto de codigo tiene una propiedad `direction` que puede ser `"ltr"` (de izquierda a derecha), `"rtl"` (de derecha a izquierda), o `"ttb"` (arriba a abajo). La dirección dominante es la dirección de la mayoría de los caracteres que tienen un código asociado a ellos. Las funciones `codigoCaracter` y `contarPor` definidas anteriormente en el capítulo probablemente seran útiles aquí. > function direccionDominante(texto) { // Tu código aquí. } console.log(direccionDominante("Hola!")); // → ltr console.log(direccionDominante("Hey, مساء الخير")); // → rtl Tu solución puede parecerse mucho a la primera mitad del ejemplo `codigosTexto` . De nuevo debes contar los caracteres por el criterio basado en `codigoCaracter` , y luego filtrar hacia afuera la parte del resultado que se refiere a caracteres sin interés (que no tengan codigos). Encontrar la dirección con la mayor cantidad de caracteres se puede hacer con `reduce` . Si no está claro cómo, refiérate al ejemplo anterior en el capítulo, donde se usa `reduce` para encontrar el código con la mayoría de los caracteres. Un tipo de datos abstracto se realiza al escribir un tipo especial de programa [...] que define el tipo en base a las operaciones que puedan ser realizadas en él. El Capítulo 4 introdujo los objetos en JavaScript. En la cultura de la programación, tenemos una cosa llamada programación orientada a objetos, la cual es un conjunto de técnicas que usan objetos (y conceptos relacionados) como el principio central de la organización del programa. Aunque nadie realmente está de acuerdo con su definición exacta, la programación orientada a objetos ha contribuido al diseño de muchos lenguajes de programación, incluyendo JavaScript. Este capítulo describirá la forma en la que estas ideas pueden ser aplicadas en JavaScript. ## Encapsulación La idea central en la programación orientada a objetos es dividir a los programas en piezas más pequeñas y hacer que cada pieza sea responsable de gestionar su propio estado. 
De esta forma, los conocimientos acerca de como funciona una parte del programa pueden mantenerse locales a esa pieza. Alguien trabajando en otra parte del programa no tiene que recordar o ni siquiera tener una idea de ese conocimiento. Cada vez que los detalles locales cambien, solo el código directamente a su alrededor debe ser actualizado. Las diferentes piezas de un programa como tal, interactúan entre sí a través de interfaces, las cuales son conjuntos limitados de funciones y vinculaciones que proporcionan funcionalidades útiles en un nivel más abstracto, ocultando asi su implementación interna. Tales piezas del programa se modelan usando objetos. Sus interfaces consisten en un conjunto específico de métodos y propiedades. Las propiedades que son parte de la interfaz se llaman publicas. Las otras, las cuales no deberian ser tocadas por el código externo , se les llama privadas. Muchos lenguajes proporcionan una forma de distinguir entre propiedades publicas y privadas, y ademas evitarán que el código externo pueda acceder a las privadas por completo. JavaScript, una vez más tomando el enfoque minimalista, no hace esto. Todavía no, al menos—hay trabajo en camino para agregar esto al lenguaje. Aunque el lenguaje no tenga esta distinción incorporada, los programadores de JavaScript estan usando esta idea con éxito .Típicamente, la interfaz disponible se describe en la documentación o en los comentarios. También es común poner un carácter de guión bajo ( `_` ) al comienzo de los nombres de las propiedades para indicar que estas propiedades son privadas. Separar la interfaz de la implementación es una gran idea. Esto usualmente es llamado encapsulación. Los métodos no son más que propiedades que tienen valores de función. Este es un método simple: > edit & run code by clicking itlet conejo = {}; conejo.hablar = function(linea) { console.log(`El conejo dice '${linea}'`); }; conejo.hablar("Estoy vivo."); // → El conejo dice 'Estoy vivo.' Por lo general, un método debe hacer algo en el objeto con que se llamó. Cuando una función es llamada como un método—buscada como una propiedad y llamada inmediatamente, como en `objeto.metodo()` —la vinculación llamada `this` (“este”) en su cuerpo apunta automáticamente al objeto en la cual fue llamada. > function hablar(linea) { console.log(`El conejo ${this.tipo} dice '${linea}'`); } let conejoBlanco = {tipo: "blanco", hablar}; let conejoHambriento = {tipo: "hambriento", hablar}; conejoBlanco.hablar("Oh mis orejas y bigotes, " + "que tarde se esta haciendo!"); // → El conejo blanco dice 'Oh mis orejas y bigotes, que // tarde se esta haciendo!' conejoHambriento.hablar("Podria comerme una zanahoria ahora mismo."); // → El conejo hambriento dice 'Podria comerme una zanahoria ahora mismo.' Puedes pensar en `this` como un parámetro extra que es pasado en una manera diferente. Si quieres pasarlo explícitamente, puedes usar el método `call` (“llamar”) de una función, que toma el valor de `this` como primer argumento y trata a los argumentos adicionales como parámetros normales. > hablar.call(conejoHambriento, "Burp!"); // → El conejo hambriento dice 'Burp!' Como cada función tiene su propia vinculación `this` , cuyo valor depende de la forma en como esta se llama, no puedes hacer referencia al `this` del alcance envolvente en una función regular definida con la palabra clave `function` . Las funciones de flecha son diferentes—no crean su propia vinculación `this` , pero pueden ver la vinculación `this` del alcance a su alrededor. 
Por lo tanto, puedes hacer algo como el siguiente código, que hace referencia a `this` desde adentro de una función local: > function normalizar() { console.log(this.coordinadas.map(n => n / this.length)); } normalizar.call({coordinadas: [0, 2, 3], length: 5}); // → [0, 0.4, 0.6] Si hubieras escrito el argumento para `map` usando la palabra clave `function`, el código no funcionaría. ## Prototipos Observa atentamente. > let vacio = {}; console.log(vacio.toString); // → function toString(){…} console.log(vacio.toString()); // → [object Object] Saqué una propiedad de un objeto vacío. ¡Magia! Bueno, en realidad no. Simplemente he estado ocultando información acerca de cómo funcionan los objetos en JavaScript. En adición a su conjunto de propiedades, la mayoría de los objetos también tienen un prototipo. Un prototipo es otro objeto que se utiliza como una reserva de propiedades alternativa. Cuando un objeto recibe una solicitud por una propiedad que este no tiene, se buscará en su prototipo la propiedad, luego en el prototipo del prototipo y así sucesivamente. Así que, ¿quién es el prototipo de ese objeto vacío? Es el gran prototipo ancestral, la entidad detrás de casi todos los objetos, `Object.prototype` (“Objeto.prototipo”). > console.log(Object.getPrototypeOf({}) == Object.prototype); // → true console.log(Object.getPrototypeOf(Object.prototype)); // → null Como puedes adivinar, `Object.getPrototypeOf` (“Objeto.obtenerPrototipoDe”) retorna el prototipo de un objeto. Las relaciones prototipo de los objetos en JavaScript forman una estructura en forma de árbol, y en la raíz de esta estructura se encuentra `Object.prototype`. Este proporciona algunos métodos que pueden ser accedidos por todos los objetos, como `toString`, que convierte un objeto en una representación de tipo string. Muchos objetos no tienen `Object.prototype` directamente como su prototipo, sino que en su lugar tienen otro objeto que proporciona un conjunto diferente de propiedades predeterminadas. Las funciones derivan de `Function.prototype`, y los arrays derivan de `Array.prototype`. > console.log(Object.getPrototypeOf(Math.max) == Function.prototype); // → true console.log(Object.getPrototypeOf([]) == Array.prototype); // → true Tal prototipo de objeto tendrá en sí mismo un prototipo, a menudo `Object.prototype`, por lo que aún proporciona indirectamente métodos como `toString`. Puedes usar `Object.create` para crear un objeto con un prototipo específico. > let conejoPrototipo = { hablar(linea) { console.log(`El conejo ${this.tipo} dice '${linea}'`); } }; let conejoAsesino = Object.create(conejoPrototipo); conejoAsesino.tipo = "asesino"; conejoAsesino.hablar("SKREEEE!"); // → El conejo asesino dice 'SKREEEE!' Una propiedad como `hablar(linea)` en una expresión de objeto es un atajo para definir un método. Esta crea una propiedad llamada `hablar` y le da una función como su valor. El conejo “prototipo” actúa como un contenedor para las propiedades que son compartidas por todos los conejos. Un objeto de conejo individual, como el conejo asesino, contiene propiedades que aplican solo a sí mismo—en este caso su tipo—y deriva propiedades compartidas desde su prototipo. ## Clases El sistema de prototipos en JavaScript se puede interpretar como un enfoque informal de un concepto orientado a objetos llamado clases. Una clase define la forma de un tipo de objeto—qué métodos y propiedades tiene este. Tal objeto es llamado una instancia de la clase.
Los prototipos son útiles para definir propiedades en las cuales todas las instancias de una clase compartan el mismo valor, como los métodos. Las propiedades que difieren por instancia, como la propiedad `tipo` en nuestros conejos, necesitan almacenarse directamente en los objetos mismos. Entonces, para crear una instancia de una clase dada, debes crear un objeto que derive del prototipo adecuado, pero también debes asegurarte de que, en sí mismo, este objeto tenga las propiedades que las instancias de esta clase se supone que tengan. Esto es lo que una función constructora hace. > function crearConejo(tipo) { let conejo = Object.create(conejoPrototipo); conejo.tipo = tipo; return conejo; } JavaScript proporciona una manera de hacer que la definición de este tipo de funciones sea más fácil. Si colocas la palabra clave `new` (“new”) delante de una llamada de función, la función será tratada como un constructor. Esto significa que un objeto con el prototipo adecuado es creado automáticamente, vinculado a `this` en la función, y retornado al final de la función. El objeto prototipo utilizado al construir objetos se encuentra al tomar la propiedad `prototype` de la función constructora. > function Conejo(tipo) { this.tipo = tipo; } Conejo.prototype.hablar = function(linea) { console.log(`El conejo ${this.tipo} dice '${linea}'`); }; let conejoRaro = new Conejo("raro"); Los constructores (todas las funciones, de hecho) automáticamente obtienen una propiedad llamada `prototype`, que por defecto contiene un objeto simple y vacío, que deriva de `Object.prototype`. Puedes sobrescribirlo con un nuevo objeto si así quieres. O puedes agregar propiedades al objeto ya existente, como lo hace el ejemplo. Por convención, los nombres de los constructores tienen la primera letra en mayúscula para que se puedan distinguir fácilmente de otras funciones. Es importante entender la distinción entre la forma en que un prototipo está asociado con un constructor (a través de su propiedad `prototype`) y la forma en que los objetos tienen un prototipo (que se puede encontrar con `Object.getPrototypeOf`). El prototipo real de un constructor es `Function.prototype`, ya que los constructores son funciones. Su propiedad `prototype` contiene el prototipo utilizado para las instancias creadas a través de él. > console.log(Object.getPrototypeOf(Conejo) == Function.prototype); // → true console.log(Object.getPrototypeOf(conejoRaro) == Conejo.prototype); // → true ## Notación de clase Entonces, las clases en JavaScript son funciones constructoras con una propiedad prototipo. Así es como funcionan, y hasta 2015, esa era la manera en que tenías que escribirlas. Estos días, tenemos una notación menos incómoda. > class Conejo { constructor(tipo) { this.tipo = tipo; } hablar(linea) { console.log(`El conejo ${this.tipo} dice '${linea}'`); } } let conejoAsesino = new Conejo("asesino"); let conejoNegro = new Conejo("negro"); La palabra clave `class` (“clase”) comienza una declaración de clase, que nos permite definir un constructor y un conjunto de métodos, todo en un solo lugar. Cualquier número de métodos se pueden escribir dentro de las llaves de la declaración. El método llamado `constructor` es tratado de una manera especial. Este proporciona la función constructora real, que estará vinculada al nombre `Conejo`. Los otros métodos estarán empacados en el prototipo de ese constructor. Por lo tanto, la declaración de clase anterior es equivalente a la definición de constructor en la sección anterior. Solo que se ve mejor.
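Si quieres comprobar esa equivalencia por tu cuenta, aquí hay un pequeño esquema ilustrativo (asumiendo la clase `Conejo` recién definida; la instancia de ejemplo "gris" es solo hipotética): una clase sigue siendo una función constructora y sus métodos viven en su propiedad `prototype`.
> console.log(typeof Conejo);
// → function
console.log(typeof Conejo.prototype.hablar);
// → function
console.log(Object.getPrototypeOf(new Conejo("gris")) == Conejo.prototype);
// → true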
Actualmente las declaraciones de clase solo permiten que los métodos—propiedades que contengan funciones—puedan ser agregados al prototipo. Esto puede ser algo inconveniente para cuando quieras guardar un valor no-funcional allí. La próxima versión del lenguaje probablemente mejore esto. Por ahora, tú puedes crear tales propiedades al manipular directamente el prototipo después de haber definido la clase. Al igual que `function`, `class` se puede usar tanto en posiciones de declaración como de expresión. Cuando se usa como una expresión, no define una vinculación, sino que solo produce el constructor como un valor. Tienes permitido omitir el nombre de clase en una expresión de clase. > let objeto = new class { obtenerPalabra() { return "hola"; } }; console.log(objeto.obtenerPalabra()); // → hola ## Sobreescribiendo propiedades derivadas Cuando le agregas una propiedad a un objeto, ya sea que esté presente en el prototipo o no, la propiedad es agregada al objeto en sí mismo. Si ya había una propiedad con el mismo nombre en el prototipo, esta propiedad ya no afectará al objeto, ya que ahora está oculta detrás de la propiedad del propio objeto. > Conejo.prototype.dientes = "pequeños"; console.log(conejoAsesino.dientes); // → pequeños conejoAsesino.dientes = "largos, filosos, y sangrientos"; console.log(conejoAsesino.dientes); // → largos, filosos, y sangrientos console.log(conejoNegro.dientes); // → pequeños console.log(Conejo.prototype.dientes); // → pequeños El siguiente diagrama esboza la situación después de que este código ha sido ejecutado. Los prototipos de `Conejo` y `Object` se encuentran detrás de `conejoAsesino` como una especie de telón de fondo, donde las propiedades que no se encuentren en el objeto en sí mismo puedan ser buscadas. Sobreescribir propiedades que existen en un prototipo puede ser algo útil que hacer. Como muestra el ejemplo de los dientes de conejo, esto se puede usar para expresar propiedades excepcionales en instancias de una clase más genérica de objetos, dejando que los objetos no-excepcionales tomen un valor estándar desde su prototipo. También puedes sobreescribir para darle a los prototipos estándar de función y array un método `toString` diferente al del objeto prototipo básico. > console.log(Array.prototype.toString == Object.prototype.toString); // → false console.log([1, 2].toString()); // → 1,2 Llamar a `toString` en un array da un resultado similar al de llamar a `.join(",")` en él—pone comas entre los valores del array. Llamar directamente a `Object.prototype.toString` con un array produce un string diferente. Esa función no sabe acerca de los arrays, por lo que simplemente pone la palabra object y el nombre del tipo entre corchetes. > console.log(Object.prototype.toString.call([1, 2])); // → [object Array] ## Mapas Vimos a la palabra map usada en el capítulo anterior para una operación que transforma una estructura de datos al aplicar una función en sus elementos. Un mapa (sustantivo) es una estructura de datos que asocia valores (las llaves) con otros valores. Por ejemplo, es posible que desees mapear nombres a edades. Es posible usar objetos para esto. > let edades = { Boris: 39, Liang: 22, Júlia: 62 }; console.log(`Júlia tiene ${edades["Júlia"]}`); // → Júlia tiene 62 console.log("Se conoce la edad de Jack?", "Jack" in edades); // → Se conoce la edad de Jack? false console.log("Se conoce la edad de toString?", "toString" in edades); // → Se conoce la edad de toString?
true Aquí, los nombres de las propiedades del objeto son los nombres de las personas, y los valores de las propiedades sus edades. Pero ciertamente no incluimos a nadie llamado toString en nuestro mapa. Sin embargo, debido a que los objetos simples se derivan de `Object.prototype` , parece que la propiedad está ahí. Como tal, usar objetos simples como mapas es peligroso. Hay varias formas posibles de evitar este problema. Primero, es posible crear objetos sin ningun prototipo. Si pasas `null` a `Object.create` , el objeto resultante no se derivará de `Object.prototype` y podra ser usado de forma segura como un mapa. > console.log("toString" in Object.create(null)); // → false Los nombres de las propiedades de los objetos deben ser strings. Si necesitas un mapa cuyas claves no puedan ser convertidas fácilmente a strings—como objetos—no puedes usar un objeto como tu mapa. Afortunadamente, JavaScript viene con una clase llamada `Map` que esta escrita para este propósito exacto. Esta almacena un mapeo y permite cualquier tipo de llaves. > let edades = new Map(); edades.set("Boris", 39); edades.set("Liang", 22); edades.set("Júlia", 62); console.log(`Júlia tiene ${edades.get("Júlia")}`); // → Júlia tiene 62 console.log("Se conoce la edad de Jack?", edades.has("Jack")); // → Se conoce la edad de Jack? false console.log(edades.has("toString")); // → false Los métodos `set` (“establecer”), `get` (“obtener”), y `has` (“tiene”) son parte de la interfaz del objeto `Map` . Escribir una estructura de datos que pueda actualizarse rápidamente y buscar en un gran conjunto de valores no es fácil, pero no tenemos que preocuparnos acerca de eso. Alguien más lo hizo por nosotros, y podemos utilizar esta simple interfaz para usar su trabajo. Si tienes un objeto simple que necesitas tratar como un mapa por alguna razón, es útil saber que `Object.keys` solo retorna las llaves propias del objeto, no las que estan en el prototipo. Como alternativa al operador `in` , puedes usar el método `hasOwnProperty` (“tienePropiaPropiedad”), el cual ignora el prototipo del objeto. > console.log({x: 1}.hasOwnProperty("x")); // → true console.log({x: 1}.hasOwnProperty("toString")); // → false ## Polimorfismo Cuando llamas a la función `String` (que convierte un valor a un string) en un objeto, llamará al método `toString` en ese objeto para tratar de crear un string significativo a partir de el. Mencioné que algunos de los prototipos estándar definen su propia versión de `toString` para que puedan crear un string que contenga información más útil que `"[object Object]"` . También puedes hacer eso tú mismo. > Conejo.prototype.toString = function() { return `un conejo ${this.tipo}`; }; console.log(String(conejoNegro)); // → un conejo negro Esta es una instancia simple de una idea poderosa. Cuando un pedazo de código es escrito para funcionar con objetos que tienen una cierta interfaz—en este caso, un método `toString` —cualquier tipo de objeto que soporte esta interfaz se puede conectar al código, y simplemente funcionará. Esta técnica se llama polimorfismo. El código polimórfico puede funcionar con valores de diferentes formas, siempre y cuando soporten la interfaz que este espera. Mencioné en el Capítulo 4 que un ciclo `for` / `of` puede recorrer varios tipos de estructuras de datos. Este es otro caso de polimorfismo—tales ciclos esperan que la estructura de datos exponga una interfaz específica, lo que hacen los arrays y strings. Y también puedes agregar esta interfaz a tus propios objetos! 
Pero antes de que podamos hacer eso, necesitamos saber qué son los símbolos. ## Símbolos Es posible que múltiples interfaces usen el mismo nombre de propiedad para diferentes cosas. Por ejemplo, podría definir una interfaz en la que se suponga que el método `toString` convierte el objeto a una pieza de hilo. No sería posible para un objeto ajustarse a esa interfaz y al uso estándar de `toString` . Esa sería una mala idea, y este problema no es muy común. La mayoria de los programadores de JavaScript simplemente no piensan en eso. Pero los diseñadores del lenguaje, cuyo trabajo es pensar acerca de estas cosas, nos han proporcionado una solución de todos modos. Cuando afirmé que los nombres de propiedad son strings, eso no fue del todo preciso. Usualmente lo son, pero también pueden ser símbolos. Los símbolos son valores creados con la función `Symbol` . A diferencia de los strings, los símbolos recién creados son únicos—no puedes crear el mismo símbolo dos veces. > let simbolo = Symbol("nombre"); console.log(simbolo == Symbol("nombre")); // → false Conejo.prototype[simbolo] = 55; console.log(conejoNegro[simbolo]); // → 55 El string que pases a `Symbol` es incluido cuando lo conviertas a string, y puede hacer que sea más fácil reconocer un símbolo cuando, por ejemplo, lo muestres en la consola. Pero no tiene sentido más allá de eso—múltiples símbolos pueden tener el mismo nombre. Al ser únicos y utilizables como nombres de propiedad, los símbolos son adecuados para definir interfaces que pueden vivir pacíficamente junto a otras propiedades, sin importar cuáles sean sus nombres. > const simboloToString = Symbol("toString"); Array.prototype[simboloToString] = function() { return `${this.length} cm de hilo azul`; }; console.log([1, 2].toString()); // → 1,2 console.log([1, 2][simboloToString]()); // → 2 cm de hilo azul Es posible incluir propiedades de símbolos en expresiones de objetos y clases usando corchetes alrededor del nombre de la propiedad. Eso hace que se evalúe el nombre de la propiedad, al igual que la notación de corchetes para acceder propiedades, lo cual nos permite hacer referencia a una vinculación que contiene el símbolo. > let objetoString = { [simboloToString]() { return "una cuerda de cañamo"; } }; console.log(objetoString[simboloToString]()); // → una cuerda de cañamo ## La interfaz de iterador Se espera que el objeto dado a un ciclo `for` / `of` sea iterable. Esto significa que tenga un método llamado con el símbolo `Symbol.iterator` (un valor de símbolo definido por el idioma, almacenado como una propiedad de la función `Symbol` ). Cuando sea llamado, ese método debe retornar un objeto que proporcione una segunda interfaz, iteradora. Esta es la cosa real que realiza la iteración. Tiene un método `next` (“siguiente”) que retorna el siguiente resultado. Ese resultado debería ser un objeto con una propiedad `value` (“valor”), que proporciona el siguiente valor, si hay uno, y una propiedad `done` (“listo”) que debería ser cierta cuando no haya más resultados y falso de lo contrario. Ten en cuenta que los nombres de las propiedades `next` , `value` y `done` son simples strings, no símbolos. Solo `Symbol.iterator` , que probablemente sea agregado a un monton de objetos diferentes, es un símbolo real. Podemos usar directamente esta interfaz nosotros mismos. 
> let iteradorOK = "OK"[Symbol.iterator](); console.log(iteradorOK.next()); // → {value: "O", done: false} console.log(iteradorOK.next()); // → {value: "K", done: false} console.log(iteradorOK.next()); // → {value: undefined, done: true} Implementemos una estructura de datos iterable. Construiremos una clase matriz, que actuara como un array bidimensional. > class Matriz { constructor(ancho, altura, elemento = (x, y) => undefined) { this.ancho = ancho; this.altura = altura; this.contenido = []; for (let y = 0; y < altura; y++) { for (let x = 0; x < ancho; x++) { this.contenido[y * ancho + x] = elemento(x, y); } } } obtener(x, y) { return this.contenido[y * this.ancho + x]; } establecer(x, y, valor) { this.contenido[y * this.ancho + x] = valor; } } La clase almacena su contenido en un único array de elementos altura × ancho. Los elementos se almacenan fila por fila, por lo que, por ejemplo, el tercer elemento en la quinta fila es (utilizando indexación basada en cero) almacenado en la posición 4 × ancho + 2. La función constructora toma un ancho, una altura y una función opcional de contenido que se usará para llenar los valores iniciales. Hay métodos `obtener` y `establecer` para recuperar y actualizar elementos en la matriz. Al hacer un ciclo sobre una matriz, generalmente estás interesado en la posición tanto de los elementos como de los elementos en sí mismos, así que haremos que nuestro iterador produzca objetos con propiedades `x` , `y` , y `value` (“valor”). > class IteradorMatriz { constructor(matriz) { this.x = 0; this.y = 0; this.matriz = matriz; } next() { if (this.y == this.matriz.altura) return {done: true}; let value = {x: this.x, y: this.y, value: this.matriz.obtener(this.x, this.y)}; this.x++; if (this.x == this.matriz.ancho) { this.x = 0; this.y++; } return {value, done: false}; } } La clase hace un seguimiento del progreso de iterar sobre una matriz en sus propiedades `x` y `y` . El método `next` (“siguiente”) comienza comprobando si la parte inferior de la matriz ha sido alcanzada. Si no es así, primero crea el objeto que contiene el valor actual y luego actualiza su posición, moviéndose a la siguiente fila si es necesario. Configuremos la clase `Matriz` para que sea iterable. A lo largo de este libro, Ocasionalmente usaré la manipulación del prototipo después de los hechos para agregar métodos a clases, para que las piezas individuales de código permanezcan pequeñas y autónomas. En un programa regular, donde no hay necesidad de dividir el código en pedazos pequeños, declararias estos métodos directamente en la clase. > Matriz.prototype[Symbol.iterator] = function() { return new IteradorMatriz(this); }; Ahora podemos recorrer una matriz con `for` / `of` . > let matriz = new Matriz(2, 2, (x, y) => `valor ${x},${y}`); for (let {x, y, value} of matriz) { console.log(x, y, value); } // → 0 0 valor 0,0 // → 1 0 valor 1,0 // → 0 1 valor 0,1 // → 1 1 valor 1,1 ## Getters, setters y estáticos A menudo, las interfaces consisten principalmente de métodos, pero también está bien incluir propiedades que contengan valores que no sean de función. Por ejemplo, los objetos `Map` tienen una propiedad `size` (“tamaño”) que te dice cuántas claves hay almacenanadas en ellos. Ni siquiera es necesario que dicho objeto calcule y almacene tales propiedades directamente en la instancia. Incluso las propiedades que pueden ser accedidas directamente pueden ocultar una llamada a un método. 
Tales métodos se llaman getters, y se definen escribiendo `get` (“obtener”) delante del nombre del método en una expresión de objeto o declaración de clase. > let tamañoCambiante = { get tamaño() { return Math.floor(Math.random() * 100); } }; console.log(tamañoCambiante.tamaño); // → 73 console.log(tamañoCambiante.tamaño); // → 49 Cuando alguien lee desde la propiedad `tamaño` de este objeto, el método asociado es llamado. Puedes hacer algo similar cuando se escribe en una propiedad, usando un setter. > class Temperatura { constructor(celsius) { this.celsius = celsius; } get fahrenheit() { return this.celsius * 1.8 + 32; } set fahrenheit(valor) { this.celsius = (valor - 32) / 1.8; } static desdeFahrenheit(valor) { return new Temperatura((valor - 32) / 1.8); } } let temp = new Temperatura(22); console.log(temp.fahrenheit); // → 71.6 temp.fahrenheit = 86; console.log(temp.celsius); // → 30 La clase `Temperatura` te permite leer y escribir la temperatura ya sea en grados Celsius o en grados Fahrenheit, pero internamente solo almacena Celsius y convierte automáticamente hacia y desde Celsius en el getter y el setter `fahrenheit`. Algunas veces quieres adjuntar algunas propiedades directamente a tu función constructora, en lugar de al prototipo. Tales métodos no tienen acceso a una instancia de clase, pero pueden, por ejemplo, ser utilizados para proporcionar formas adicionales de crear instancias. Dentro de una declaración de clase, los métodos que tienen `static` (“estático”) escrito antes de su nombre son almacenados en el constructor. Entonces, la clase `Temperatura` te permite escribir `Temperatura.desdeFahrenheit(100)` para crear una temperatura usando grados Fahrenheit. ## Herencia Algunas matrices son conocidas por ser simétricas. Si duplicas una matriz simétrica alrededor de su diagonal de arriba-izquierda a abajo-derecha, esta se mantiene igual. En otras palabras, el valor almacenado en x,y es siempre el mismo al de y,x. Imagina que necesitamos una estructura de datos como `Matriz` pero que haga cumplir el hecho de que la matriz es y siga siendo simétrica. Podríamos escribirla desde cero, pero eso implicaría repetir algo de código muy similar al que ya hemos escrito. El sistema de prototipos en JavaScript hace posible crear una nueva clase, parecida a la clase anterior, pero con nuevas definiciones para algunas de sus propiedades. El prototipo de la nueva clase deriva del antiguo prototipo, pero agrega una nueva definición para, por ejemplo, el método `establecer`. En términos de programación orientada a objetos, esto se llama herencia. La nueva clase hereda propiedades y comportamientos de la vieja clase. > class MatrizSimetrica extends Matriz { constructor(tamaño, elemento = (x, y) => undefined) { super(tamaño, tamaño, (x, y) => { if (x < y) return elemento(y, x); else return elemento(x, y); }); } establecer(x, y, valor) { super.establecer(x, y, valor); if (x != y) { super.establecer(y, x, valor); } } } let matriz = new MatrizSimetrica(5, (x, y) => `${x},${y}`); console.log(matriz.obtener(2, 3)); // → 3,2 El uso de la palabra `extends` indica que esta clase no debe estar basada directamente en el prototipo de `Object` predeterminado, sino en el de alguna otra clase. Esta se llama la superclase. La clase derivada es la subclase. Para inicializar una instancia de `MatrizSimetrica`, el constructor llama a su constructor de superclase a través de la palabra clave `super`. Esto es necesario porque si este nuevo objeto se comporta (más o menos) como una `Matriz`, va a necesitar las propiedades de instancia que tienen las matrices.
Para asegurar que la matriz sea simétrica, el constructor envuelve la función de contenido (`elemento`) para intercambiar las coordenadas de los valores por debajo de la diagonal. El método `establecer` nuevamente usa `super`, pero esta vez no para llamar al constructor, sino para llamar a un método específico del conjunto de métodos de la superclase. Estamos redefiniendo `establecer` pero queremos usar el comportamiento original. Ya que `this.establecer` se refiere al nuevo método `establecer`, llamarlo no funcionaría. Dentro de los métodos de clase, `super` proporciona una forma de llamar a los métodos tal y como se definieron en la superclase. La herencia nos permite construir tipos de datos ligeramente diferentes a partir de tipos de datos existentes con relativamente poco trabajo. Es una parte fundamental de la tradición orientada a objetos, junto con la encapsulación y el polimorfismo. Pero mientras que los últimos dos son considerados como ideas maravillosas en la actualidad, la herencia es más controversial. Mientras que la encapsulación y el polimorfismo se pueden usar para separar piezas de código entre sí, reduciendo el enredo del programa en general, la herencia fundamentalmente vincula las clases, creando más enredo. Al heredar de una clase, generalmente tienes que saber más sobre cómo funciona que cuando simplemente la usas. La herencia puede ser una herramienta útil, y la uso de vez en cuando en mis propios programas, pero no debería ser la primera herramienta que busques, y probablemente no deberías estar buscando oportunidades para construir jerarquías de clases (árboles genealógicos de clases) de manera activa. ## El operador instanceof Ocasionalmente es útil saber si un objeto fue derivado de una clase específica. Para esto, JavaScript proporciona un operador binario llamado `instanceof` (“instancia de”). > console.log( new MatrizSimetrica(2) instanceof MatrizSimetrica); // → true console.log(new MatrizSimetrica(2) instanceof Matriz); // → true console.log(new Matriz(2, 2) instanceof MatrizSimetrica); // → false console.log([1] instanceof Array); // → true El operador verá a través de los tipos heredados, por lo que una `MatrizSimetrica` es una instancia de `Matriz`. El operador también se puede aplicar a constructores estándar como `Array`. Casi todos los objetos son una instancia de `Object`. Entonces los objetos hacen más que solo tener sus propias propiedades. Ellos tienen prototipos, que son otros objetos. Estos actuarán como si tuvieran propiedades que no tienen mientras su prototipo tenga esa propiedad. Los objetos simples tienen `Object.prototype` como su prototipo. Los constructores, que son funciones cuyos nombres generalmente comienzan con una mayúscula, se pueden usar con el operador `new` para crear nuevos objetos. El prototipo del nuevo objeto será el objeto encontrado en la propiedad `prototype` del constructor. Puedes hacer un buen uso de esto al poner las propiedades que todos los valores de un tipo dado comparten en su prototipo. Hay una notación de `class` que proporciona una manera clara de definir un constructor y su prototipo. Puedes definir getters y setters para secretamente llamar a métodos cada vez que se acceda a la propiedad de un objeto. Los métodos estáticos son métodos almacenados en el constructor de clase, en lugar de en su prototipo. El operador `instanceof` puede, dado un objeto y un constructor, decir si ese objeto es una instancia de ese constructor.
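Como repaso rápido, un pequeño esquema ilustrativo que junta varios de estos conceptos, usando la clase `Conejo` y la instancia `conejoNegro` definidas antes en el capítulo:
> console.log(conejoNegro instanceof Conejo);
// → true
console.log(Object.getPrototypeOf(conejoNegro) == Conejo.prototype);
// → true
console.log(conejoNegro.hasOwnProperty("tipo"));
// → true
console.log(conejoNegro.hasOwnProperty("hablar"));
// → false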
Una cosa útil que hacer con los objetos es especificar una interfaz para ellos y decirle a todos que se supone que deben hablar con ese objeto solo a través de esa interfaz. El resto de los detalles que componen tu objeto ahora están encapsulados, escondidos detrás de la interfaz. Más de un tipo puede implementar la misma interfaz. El código escrito para utilizar una interfaz automáticamente sabe cómo trabajar con cualquier cantidad de objetos diferentes que proporcionen la interfaz. Esto se llama polimorfismo. Al implementar múltiples clases que difieran solo en algunos detalles, puede ser útil escribir las nuevas clases como subclases de una clase existente, heredando parte de su comportamiento. ### Un tipo vector Escribe una clase `Vec` que represente un vector en un espacio de dos dimensiones. Toma los parámetros (numéricos) `x` y `y`, que debería guardar como propiedades del mismo nombre. Dale al prototipo de `Vec` dos métodos, `mas` y `menos`, los cuales toman otro vector como parámetro y retornan un nuevo vector que tiene la suma o diferencia de los valores x y y de los dos vectores (`this` y el parámetro). Agrega una propiedad getter llamada `longitud` al prototipo que calcule la longitud del vector—es decir, la distancia del punto (x, y) desde el origen (0, 0). > // Tu código aquí. console.log(new Vec(1, 2).mas(new Vec(2, 3))); // → Vec{x: 3, y: 5} console.log(new Vec(1, 2).menos(new Vec(2, 3))); // → Vec{x: -1, y: -1} console.log(new Vec(3, 4).longitud); // → 5 Mira de nuevo al ejemplo de la clase `Conejo` si no recuerdas muy bien cómo se ven las declaraciones de clases. Agregar una propiedad getter al constructor se puede hacer al poner la palabra `get` antes del nombre del método. Para calcular la distancia desde (0, 0) a (x, y), puedes usar el teorema de Pitágoras, que dice que el cuadrado de la distancia que estamos buscando es igual al cuadrado de la coordenada x más el cuadrado de la coordenada y. Por lo tanto, √(x² + y²) es el número que quieres, y `Math.sqrt` es la forma en que calculas una raíz cuadrada en JavaScript. ### Conjuntos El entorno de JavaScript estándar proporciona otra estructura de datos llamada `Set` (“Conjunto”). Al igual que una instancia de `Map`, un conjunto contiene una colección de valores. Pero a diferencia de `Map`, este no asocia valores con otros—este solo rastrea qué valores son parte del conjunto. Un valor solo puede ser parte de un conjunto una vez—agregarlo de nuevo no tiene ningún efecto. Escribe una clase llamada `Conjunto`. Como `Set`, debe tener los métodos `añadir`, `eliminar` y `tiene` (análogos a `add`, `delete` y `has`). Su constructor crea un conjunto vacío, `añadir` agrega un valor al conjunto (pero solo si no es ya un miembro), `eliminar` elimina su argumento del conjunto (si era un miembro) y `tiene` retorna un valor booleano que indica si su argumento es un miembro del conjunto. Usa el operador `===`, o algo equivalente como `indexOf`, para determinar si dos valores son iguales. Proporciónale a la clase un método estático `desde` que tome un objeto iterable como argumento y cree un conjunto que contenga todos los valores producidos al iterar sobre él. > class Conjunto { // Tu código aquí.
} let conjunto = Conjunto.desde([10, 20]); console.log(conjunto.tiene(10)); // → true console.log(conjunto.tiene(30)); // → false conjunto.añadir(10); conjunto.eliminar(10); console.log(conjunto.tiene(10)); // → false La forma más fácil de hacer esto es almacenar un array con los miembros del conjunto en una propiedad de instancia. Los métodos `includes` o `indexOf` pueden ser usados para verificar si un valor dado está en el array. El constructor de clase puede establecer la colección de miembros como un array vacio. Cuando se llama a `añadir` , debes verificar si el valor dado esta en el conjunto y agregarlo, por ejemplo con `push` , de lo contrario. Eliminar un elemento de un array, en `eliminar` , es menos sencillo, pero puedes usar `filter` para crear un nuevo array sin el valor. No te olvides de sobrescribir la propiedad que sostiene los miembros del conjunto con la versión recién filtrada del array. El método `desde` puede usar un bucle `for` / `of` para obtener los valores de el objeto iterable y llamar a `añadir` para ponerlos en un conjunto recien creado. ### Conjuntos Iterables Haz iterable la clase `Conjunto` del ejercicio anterior. Puedes remitirte a la sección acerca de la interfaz del iterador anteriormente en el capítulo si ya no recuerdas muy bien la forma exacta de la interfaz. Si usaste un array para representar a los miembros del conjunto, no solo retornes el iterador creado llamando al método `Symbol.iterator` en el array. Eso funcionaría, pero frustra el propósito de este ejercicio. Está bien si tu iterador se comporta de manera extraña cuando el conjunto es modificado durante la iteración. > // Tu código aquí (y el codigo del ejercicio anterior) for (let valor of Conjunto.desde(["a", "b", "c"])) { console.log(valor); } // → a // → b // → c Probablemente valga la pena definir una nueva clase `IteradorConjunto` . Las instancias de Iterador deberian tener una propiedad que rastree la posición actual en el conjunto. Cada vez que se invoque a `next` , este comprueba si está hecho, y si no, se mueve más allá del valor actual y lo retorna. La clase `Conjunto` recibe un método llamado por `Symbol.iterator` que, cuando se llama, retorna una nueva instancia de la clase de iterador para ese grupo. ### Tomando un método prestado Anteriormente en el capítulo mencioné que el metodo `hasOwnProperty` de un objeto puede usarse como una alternativa más robusta al operador `in` cuando quieras ignorar las propiedades del prototipo. Pero, ¿y si tu mapa necesita incluir la palabra `"hasOwnProperty"` ? Ya no podrás llamar a ese método ya que la propiedad del objeto oculta el valor del método. ¿Puedes pensar en una forma de llamar `hasOwnProperty` en un objeto que tiene una propia propiedad con ese nombre? > let mapa = {uno: true, dos: true, hasOwnProperty: true}; // Arregla esta llamada console.log(mapa.hasOwnProperty("uno")); // → true [...] la pregunta de si las Maquinas Pueden Pensar [...] es tan relevante como la pregunta de si los Submarinos Pueden Nadar. En los capítulos de “proyectos”, dejaré de golpearte con teoría nueva por un breve momento y en su lugar vamos a trabajar juntos en un programa. La teoría es necesaria para aprender a programar, pero leer y entender programas reales es igual de importante. Nuestro proyecto en este capítulo es construir un autómata, un pequeño programa que realiza una tarea en un mundo virtual. Nuestro autómata será un robot de entregas por correo que recoge y deja paquetes. ## VillaPradera El pueblo de VillaPradera no es muy grande. 
Este consiste de 11 lugares con 14 caminos entre ellos. Puede ser descrito con este array de caminos: > const caminos = [ "Casa de Alicia-Casa de Bob", "Casa de Alicia-Cabaña", "Casa de Alicia-Oficina de Correos", "Casa de Bob-Ayuntamiento", "Casa de Daria-Casa de Ernie", "Casa de Daria-Ayuntamiento", "Casa de Ernie-Casa de Grete", "Casa de Grete-Granja", "Casa de Grete-Tienda", "Mercado-Granja", "Mercado-Oficina de Correos", "Mercado-Tienda", "Mercado-Ayuntamiento", "Tienda-Ayuntamiento" ]; La red de caminos en el pueblo forma un grafo. Un grafo es una colección de puntos (lugares en el pueblo) con líneas entre ellos (caminos). Este grafo será el mundo por el que nuestro robot se moverá. El array de strings no es muy fácil de manejar. En lo que estamos interesados es en los destinos a los que podemos llegar desde un lugar determinado. Vamos a convertir la lista de caminos en una estructura de datos que, para cada lugar, nos diga a dónde se puede llegar desde allí. > function construirGrafo(bordes) { let grafo = Object.create(null); function añadirBorde(desde, hasta) { if (grafo[desde] == null) { grafo[desde] = [hasta]; } else { grafo[desde].push(hasta); } } for (let [desde, hasta] of bordes.map(c => c.split("-"))) { añadirBorde(desde, hasta); añadirBorde(hasta, desde); } return grafo; } const grafoCamino = construirGrafo(caminos); Dado un conjunto de bordes, `construirGrafo` crea un objeto de mapa que, para cada nodo, almacena un array de nodos conectados. Utiliza el método `split` para ir de los strings de caminos, que tienen la forma `"Comienzo-Final"`, a arrays de dos elementos que contienen el inicio y el final como strings separados. ## La tarea Nuestro robot se moverá por el pueblo. Hay paquetes en varios lugares, cada uno dirigido a otro lugar. El robot tomará paquetes cuando los encuentre y los entregará cuando llegue a sus destinos. El autómata debe decidir, en cada punto, a dónde ir después. Ha finalizado su tarea cuando se han entregado todos los paquetes. Para poder simular este proceso, debemos definir un mundo virtual que pueda describirlo. Este modelo nos dice dónde está el robot y dónde están los paquetes. Cuando el robot ha decidido moverse a alguna parte, necesitamos actualizar el modelo para reflejar la nueva situación. Si estás pensando en términos de programación orientada a objetos, tu primer impulso podría ser comenzar a definir objetos para los diversos elementos en el mundo. Una clase para el robot, una para un paquete, tal vez una para los lugares. Estas podrían tener propiedades que describen su estado actual, como la pila de paquetes en un lugar, que podríamos cambiar al actualizar el mundo. Esto está mal. Al menos, usualmente lo está. El hecho de que algo suene como un objeto no significa automáticamente que deba ser un objeto en tu programa. Escribir por reflejo las clases para cada concepto en tu aplicación tiende a dejarte con una colección de objetos interconectados donde cada uno tiene su propio estado interno y cambiante. Tales programas a menudo son difíciles de entender y, por lo tanto, fáciles de romper. En lugar de eso, condensemos el estado del pueblo hasta el mínimo conjunto de valores que lo definan. Está la ubicación actual del robot y la colección de paquetes no entregados, cada uno de los cuales tiene una ubicación actual y una dirección de destino. Eso es todo.
Y mientras estamos en ello, hagámoslo de manera que no cambiemos este estado cuando se mueva el robot, sino que calculemos un nuevo estado para la situación después del movimiento. > class EstadoPueblo { constructor(lugar, paquetes) { this.lugar = lugar; this.paquetes = paquetes; } mover(destino) { if (!grafoCamino[this.lugar].includes(destino)) { return this; } else { let paquetes = this.paquetes.map(p => { if (p.lugar != this.lugar) return p; return {lugar: destino, direccion: p.direccion}; }).filter(p => p.lugar != p.direccion); return new EstadoPueblo(destino, paquetes); } } } En el método `mover` es donde ocurre la acción. Este primero verifica si hay un camino que va del lugar actual al destino, y si no, retorna el estado anterior, ya que este no es un movimiento válido. Luego crea un nuevo estado con el destino como el nuevo lugar del robot. Pero también necesita crear un nuevo conjunto de paquetes—los paquetes que el robot está llevando (que están en el lugar actual del robot) necesitan moverse también al nuevo lugar. Y los paquetes que están dirigidos al nuevo lugar deben ser entregados—es decir, deben eliminarse del conjunto de paquetes no entregados. La llamada a `map` se encarga de mover los paquetes, y la llamada a `filter` hace la entrega. Los objetos de paquete no se modifican cuando se mueven, sino que se vuelven a crear. El método `mover` nos da un nuevo estado de pueblo, pero deja el viejo completamente intacto. > let primero = new EstadoPueblo( "Oficina de Correos", [{lugar: "Oficina de Correos", direccion: "Casa de Alicia"}] ); let siguiente = primero.mover("Casa de Alicia"); console.log(siguiente.lugar); // → Casa de Alicia console.log(siguiente.paquetes); // → [] console.log(primero.lugar); // → Oficina de Correos Mover hace que se entregue el paquete, y esto se refleja en el próximo estado. Pero el estado inicial todavía describe la situación donde el robot está en la oficina de correos y el paquete aún no ha sido entregado. ## Datos persistentes Las estructuras de datos que no cambian se llaman inmutables o persistentes. Se comportan de manera muy similar a los strings y números en que son quienes son, y se quedan así, en lugar de contener diferentes cosas en diferentes momentos. En JavaScript, casi todo puede ser cambiado, así que trabajar con valores que se supone que sean persistentes requiere cierta restricción. Hay una función llamada `Object.freeze` (“Objeto.congelar”) que cambia un objeto de manera que escribir en sus propiedades sea ignorado. Podrías usar eso para asegurarte de que tus objetos no cambien, si quieres ser cuidadoso. La congelación requiere que la computadora haga un trabajo extra, e ignorar las actualizaciones es probable que confunda a alguien tanto como para que haga lo incorrecto. Por lo general, prefiero simplemente decirle a la gente que un determinado objeto no debe ser molestado, y espero que lo recuerden. > let objeto = Object.freeze({valor: 5}); objeto.valor = 10; console.log(objeto.valor); // → 5 ¿Por qué me salgo de mi camino para no cambiar objetos cuando el lenguaje obviamente está esperando que lo haga? Porque me ayuda a entender mis programas. Esto es acerca de manejar la complejidad nuevamente. Cuando los objetos en mi sistema son cosas fijas y estables, puedo considerar las operaciones en ellos de forma aislada—moverse a la casa de Alicia desde un estado de inicio siempre produce el mismo nuevo estado.
Cuando los objetos cambian con el tiempo, eso agrega una dimensión completamente nueva de complejidad a este tipo de razonamiento. Para un sistema pequeño como el que estamos construyendo en este capítulo, podriamos manejar ese poco de complejidad adicional. Pero el límite más importante sobre qué tipo de sistemas podemos construir es cuánto podemos entender. Cualquier cosa que haga que tu código sea más fácil de entender hace que sea posible construir un sistema más ambicioso. Lamentablemente, aunque entender un sistema basado en estructuras de datos persistentes es más fácil, diseñar uno, especialmente cuando tu lenguaje de programación no ayuda, puede ser un poco más difícil. Buscaremos oportunidades para usar estructuras de datos persistentes en este libro, pero también utilizaremos las modificables. ## Simulación Un robot de entregas mira al mundo y decide en qué dirección que quiere moverse. Como tal, podríamos decir que un robot es una función que toma un objeto `EstadoPueblo` y retorna el nombre de un lugar cercano. Ya que queremos que los robots sean capaces de recordar cosas, para que puedan hacer y ejecutar planes, también les pasamos su memoria y les permitimos retornar una nueva memoria. Por lo tanto, lo que retorna un robot es un objeto que contiene tanto la dirección en la que quiere moverse como un valor de memoria que se le sera regresado la próxima vez que se llame. > function correrRobot(estado, robot, memoria) { for (let turno = 0;; turno++) { if (estado.paquetes.length == 0) { console.log(`Listo en ${turno} turnos`); break; } let accion = robot(estado, memoria); estado = estado.mover(accion.direccion); memoria = accion.memoria; console.log(`Moverse a ${accion.direccion}`); } } Considera lo que un robot tiene que hacer para “resolver” un estado dado. Debe recoger todos los paquetes visitando cada ubicación que tenga un paquete, y entregarlos visitando cada lugar al que se dirige un paquete, pero solo después de recoger el paquete. Cuál es la estrategia más estúpida que podría funcionar? El robot podría simplemente caminar hacia una dirección aleatoria en cada vuelta. Eso significa, con gran probabilidad, que eventualmente se encontrara con todos los paquetes, y luego también en algún momento llegara a todos los lugares donde estos deben ser entregados. Aqui esta como se podria ver eso: > function eleccionAleatoria(array) { let eleccion = Math.floor(Math.random() * array.length); return array[eleccion]; } function robotAleatorio(estado) { return {direccion: eleccionAleatoria(grafoCamino[estado.lugar])}; } Recuerda que `Math.random ()` retorna un número entre cero y uno, pero siempre debajo de uno. Multiplicar dicho número por la longitud de un array y luego aplicarle `Math.floor` nos da un índice aleatorio para el array. Como este robot no necesita recordar nada, ignora su segundo argumento (recuerda que puedes llamar a las funciones en JavaScript con argumentos adicionales sin efectos negativos) y omite la propiedad `memoria` en su objeto retornado. Para poner en funcionamiento este sofisticado robot, primero necesitaremos una forma de crear un nuevo estado con algunos paquetes. Un método estático (escrito aquí al agregar directamente una propiedad al constructor) es un buen lugar para poner esa funcionalidad. 
> EstadoPueblo.aleatorio = function(numeroDePaquetes = 5) { let paquetes = []; for (let i = 0; i < numeroDePaquetes; i++) { let direccion = eleccionAleatoria(Object.keys(grafoCamino)); let lugar; do { lugar = eleccionAleatoria(Object.keys(grafoCamino)); } while (lugar == direccion); paquetes.push({lugar, direccion}); } return new EstadoPueblo("Oficina de Correos", paquetes); }; No queremos paquetes que sean enviados desde el mismo lugar al que están dirigidos. Por esta razón, el bucle `do` sigue seleccionando nuevos lugares cuando obtenga uno que sea igual a la dirección. Comencemos un mundo virtual. > correrRobot(EstadoPueblo.aleatorio(), robotAleatorio); // → Moverse a Mercado // → Moverse a Ayuntamiento // → … // → Listo en 63 turnos Le toma muchas vueltas al robot entregar los paquetes, porque este no está planeando muy bien. Nos ocuparemos de eso pronto. Para una perspectiva más agradable de la simulación, puedes usar la función `runRobotAnimation` que está disponible en el entorno de programación de este capítulo. Esta ejecuta la simulación, pero en lugar de mostrar texto, muestra al robot moviéndose por el mapa del pueblo. > runRobotAnimation(EstadoPueblo.aleatorio(), robotAleatorio); La forma en la que `runRobotAnimation` está implementada seguirá siendo un misterio por ahora, pero después de que hayas leído capítulos más avanzados de este libro, que discuten la integración de JavaScript en los navegadores web, podrás adivinar cómo funciona. ## La ruta del camión de correos Deberíamos poder hacer algo mucho mejor que el robot aleatorio. Una mejora fácil sería tomar una pista de la forma en que funciona la entrega de correos en el mundo real. Si encontramos una ruta que pasa por todos los lugares en el pueblo, el robot podría ejecutar esa ruta dos veces, y en ese punto está garantizado que ha entregado todos los paquetes. Aquí hay una de esas rutas (comenzando desde la Oficina de Correos). > const rutaCorreo = [ "Casa de Alicia", "Cabaña", "Casa de Alicia", "Casa de Bob", "Ayuntamiento", "Casa de Daria", "Casa de Ernie", "Casa de Grete", "Tienda", "Casa de Grete", "Granja", "Mercado", "Oficina de Correos" ]; Para implementar el robot que siga la ruta, necesitaremos hacer uso de la memoria del robot. El robot mantiene el resto de su ruta en su memoria y deja caer el primer elemento en cada vuelta. > function robotRuta(estado, memoria) { if (memoria.length == 0) { memoria = rutaCorreo; } return {direccion: memoria[0], memoria: memoria.slice(1)}; } Este robot ya es mucho más rápido. Tomará un máximo de 26 turnos (dos veces la ruta de 13 pasos), pero generalmente serán menos. > runRobotAnimation(EstadoPueblo.aleatorio(), robotRuta, []); ## Búsqueda de rutas Aún así, realmente no llamaría comportamiento inteligente a seguir ciegamente una ruta fija. El robot podría funcionar más eficientemente si ajustara su comportamiento al trabajo real que necesita hacerse. Para hacer eso, tiene que ser capaz de avanzar deliberadamente hacia un determinado paquete, o hacia la ubicación donde se debe entregar un paquete. Hacer eso, incluso cuando el objetivo esté a más de un movimiento de distancia, requerirá algún tipo de función de búsqueda de rutas. El problema de encontrar una ruta a través de un grafo es un típico problema de búsqueda. Podemos decir si una solución dada (una ruta) es una solución válida, pero no podemos calcular directamente la solución de la misma manera que podríamos para 2 + 2.
En cambio, tenemos que seguir creando soluciones potenciales hasta que encontremos una que funcione. El número de rutas posibles a través de un grafo es infinito. Pero cuando buscamos una ruta de A a B, solo estamos interesados en aquellas que comienzan en A. Tampoco nos importan las rutas que visitan el mismo lugar dos veces, definitivamente esa no es la ruta más eficiente en cualquier sitio. Entonces eso reduce la cantidad de rutas que el buscador de rutas tiene que considerar. De hecho, estamos más interesados en la ruta mas corta. Entonces queremos asegurarnos de mirar las rutas cortas antes de mirar las más largas. Un buen enfoque sería “crecer” las rutas desde el punto de partida, explorando cada lugar accesible que aún no ha sido visitado, hasta que una ruta alcanze la meta. De esa forma, solo exploraremos las rutas que son potencialmente interesantes, y encontremos la ruta más corta (o una de las rutas más cortas, si hay más de una) a la meta. Aquí hay una función que hace esto: > function encontrarRuta(grafo, desde, hasta) { let trabajo = [{donde: desde, ruta: []}]; for (let i = 0; i < trabajo.length; i++) { let {donde, ruta} = trabajo[i]; for (let lugar of grafo[donde]) { if (lugar == hasta) return ruta.concat(lugar); if (!trabajo.some(w => w.donde == lugar)) { trabajo.push({donde: lugar, ruta: ruta.concat(lugar)}); } } } } La exploración tiene que hacerse en el orden correcto—los lugares que fueron alcanzados primero deben ser explorados primero. No podemos explorar de inmediato un lugar apenas lo alcanzamos, porque eso significaría que los lugares alcanzados desde allí también se explorarían de inmediato, y así sucesivamente, incluso aunque puedan haber otros caminos más cortos que aún no han sido explorados. Por lo tanto, la función mantiene una lista de trabajo. Esta es un array de lugares que deberían explorarse a continuación, junto con la ruta que nos llevó ahí. Esta comienza solo con la posición de inicio y una ruta vacía. La búsqueda luego opera tomando el siguiente elemento en la lista y explorando eso, lo que significa que todos los caminos que van desde ese lugar son mirados. Si uno de ellos es el objetivo, una ruta final puede ser retornada. De lo contrario, si no hemos visto este lugar antes, un nuevo elemento se agrega a la lista. Si lo hemos visto antes, ya que estamos buscando primero rutas cortas, hemos encontrado una ruta más larga a ese lugar o una precisamente tan larga como la existente, y no necesitamos explorarla. Puedes imaginar esto visualmente como una red de rutas conocidas que se arrastran desde el lugar de inicio, creciendo uniformemente hacia todos los lados (pero nunca enredándose de vuelta a si misma). Tan pronto como el primer hilo llegue a la ubicación objetivo, ese hilo se remonta al comienzo, dándonos asi nuestra ruta. Nuestro código no maneja la situación donde no hay más elementos de trabajo en la lista de trabajo, porque sabemos que nuestro gráfico está conectado, lo que significa que se puede llegar a todos los lugares desde todos los otros lugares. Siempre podremos encontrar una ruta entre dos puntos, y la búsqueda no puede fallar. 
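Un pequeño ejemplo de uso ilustrativo, asumiendo el `grafoCamino` construido anteriormente en el capítulo: la ruta retornada contiene los lugares a visitar, sin incluir el punto de partida.
> console.log(encontrarRuta(grafoCamino, "Oficina de Correos", "Casa de Bob"));
// → ["Casa de Alicia", "Casa de Bob"]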
> function robotOrientadoAMetas({lugar, paquetes}, ruta) { if (ruta.length == 0) { let paquete = paquetes[0]; if (paquete.lugar != lugar) { ruta = encontrarRuta(grafoCamino, lugar, paquete.lugar); } else { ruta = encontrarRuta(grafoCamino, lugar, paquete.direccion); } } return {direccion: ruta[0], memoria: ruta.slice(1)}; } Este robot usa su valor de memoria como una lista de instrucciones para moverse, como el robot que sigue la ruta. Siempre que esa lista esté vacía, este tiene que descubrir qué hacer a continuación. Toma el primer paquete no entregado en el conjunto y, si ese paquete no se ha recogido aún, traza una ruta hacia el. Si el paquete ha sido recogido, todavía debe ser entregado, por lo que el robot crea una ruta hacia la dirección de entrega en su lugar. Veamos cómo le va. > runRobotAnimation(EstadoPueblo.aleatorio(), robotOrientadoAMetas, []); Este robot generalmente termina la tarea de entregar 5 paquetes en 16 turnos aproximadamente. Un poco mejor que `robotRuta` , pero definitivamente no es óptimo. ### Midiendo un robot Es difícil comparar objetivamente robots simplemente dejándolos resolver algunos escenarios. Tal vez un robot acaba de conseguir tareas más fáciles, o el tipo de tareas en las que es bueno, mientras que el otro no. Escribe una función `compararRobots` que toma dos robots (y su memoria de inicio). Debe generar 100 tareas y dejar que cada uno de los robots resuelvan cada una de estas tareas. Cuando terminen, debería generar el promedio de pasos que cada robot tomó por tarea. En favor de lo que es justo, asegúrate de la misma tarea a ambos robots, en lugar de generar diferentes tareas por robot. > function compararRobots(robot1, memoria1, robot2, memoria2) { // Tu código aqui } compararRobots(robotRuta, [], robotOrientadoAMetas, []); Tendrás que escribir una variante de la función `correrRobot` que, en lugar de registrar los eventos en la consola, retorne el número de pasos que le tomó al robot completar la tarea. Tu función de medición puede, en un ciclo, generar nuevos estados y contar los pasos que lleva cada uno de los robots. Cuando has generado suficientes mediciones, puedes usar `console.log` para mostrar el promedio de cada robot, que es la cantidad total de pasos tomados dividido por el número de mediciones ### Eficiencia del robot Puedes escribir un robot que termine la tarea de entrega más rápido que `robotOrientadoAMetas` ? Si observas el comportamiento de ese robot, qué obviamente cosas estúpidas este hace? Cómo podrían mejorarse? Si resolviste el ejercicio anterior, es posible que desees utilizar tu función `compararRobots` para verificar si has mejorado al robot. > // Tu código aqui runRobotAnimation(EstadoPueblo.aleatorio(), tuRobot, memoria); La principal limitación de `robotOrientadoAMetas` es que solo considera un paquete a la vez. A menudo caminará de ida y vuelta por el pueblo porque el paquete que resulta estar mirando sucede que esta en el otro lado del mapa, incluso si hay otros mucho más cerca. Una posible solución sería calcular rutas para todos los paquetes, y luego tomar la más corta. Se pueden obtener incluso mejores resultados, si hay múltiples rutas más cortas, al ir prefiriendo las que van a recoger un paquete en lugar de entregar un paquete. ### Conjunto persistente La mayoría de las estructuras de datos proporcionadas en un entorno de JavaScript estándar no son muy adecuadas para usos persistentes. Los arrays tienen los métodos `slice` y `concat` , que nos permiten fácilmente crear nuevos arrays sin dañar al anterior. 
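Por ejemplo, un esquema mínimo de esa idea: `concat` produce un array nuevo y deja el original sin cambios.
> let original = [1, 2, 3];
let extendido = original.concat([4]);
console.log(extendido);
// → [1, 2, 3, 4]
console.log(original);
// → [1, 2, 3]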
Pero `Set` , por ejemplo, no tiene métodos para crear un nuevo conjunto con un elemento agregado o eliminado. Escribe una nueva clase `ConjuntoP` , similar a la clase `Conjunto` del Capitulo 6, que almacena un conjunto de valores. Como `Grupo` , tiene métodos `añadir` , `eliminar` , y `tiene` . Su método `añadir` , sin embargo, debería retornar una nueva instancia de `ConjuntoP` con el miembro dado agregado, y dejar la instancia anterior sin cambios. Del mismo modo, `eliminar` crea una nueva instancia sin un miembro dado. La clase debería funcionar para valores de cualquier tipo, no solo strings. Esta no tiene que ser eficiente cuando se usa con grandes cantidades de valores. El constructor no deberia ser parte de la interfaz de la clase (aunque definitivamente querrás usarlo internamente). En cambio, allí hay una instancia vacía, `ConjuntoP.vacio` , que se puede usar como un valor de inicio. Por qué solo necesitas un valor `ConjuntoP.vacio` , en lugar de tener una función que crea un nuevo mapa vacío cada vez? > class ConjuntoP { // Tu código aqui } let a = ConjuntoP.vacio.añadir("a"); let ab = a.añadir("b"); let b = ab.eliminar("a"); console.log(b.tiene("b")); // → true console.log(a.tiene("b")); // → false console.log(b.tiene("a")); // → false La forma más conveniente de representar el conjunto de valores de miembro sigue siendo un array, ya que son fáciles de copiar. Cuando se agrega un valor al grupo, puedes crear un nuevo grupo con una copia del array original que tiene el valor agregado (por ejemplo, usando `concat` ). Cuando se borra un valor, lo filtra afuera del array. El constructor de la clase puede tomar un array como argumento, y almacenarlo como la (única) propiedad de la instancia. Este array nunca es actualizado. Para agregar una propiedad ( `vacio` ) a un constructor que no sea un método, tienes que agregarlo al constructor después de la definición de la clase, como una propiedad regular. Solo necesita una instancia `vacio` porque todos los conjuntos vacíos son iguales y las instancias de la clase no cambian. Puedes crear muchos conjuntos diferentes de ese único conjunto vacío sin afectarlo. Arreglar errores es dos veces mas difícil que escribir el código en primer lugar. Por lo tanto, si escribes código de la manera más inteligente posible, eres, por definición, no lo suficientemente inteligente como para depurarlo. Los defectos en los programas de computadora usualmente se llaman bugs (o “insectos”). Este nombre hace que los programadores se sientan bien al imaginarlos como pequeñas cosas que solo sucede se arrastran hacia nuestro trabajo. En la realidad, por supuesto, nosotros mismos los ponemos allí. Si un programa es un pensamiento cristalizado, puedes categorizar en grandes rasgos a los bugs en aquellos causados al confundir los pensamientos, y los causados por cometer errores al convertir un pensamiento en código. El primer tipo es generalmente más difícil de diagnosticar y corregir que el último. ## Lenguaje Muchos errores podrían ser señalados automáticamente por la computadora, si esta supiera lo suficiente sobre lo que estamos tratando de hacer. Pero aquí la soltura de JavaScript es un obstáculo. Su concepto de vinculaciones y propiedades es lo suficientemente vago que rara vez atrapará errores ortograficos antes de ejecutar el programa. E incluso entonces, te permite hacer algunas cosas claramente sin sentido, como calcular `true * "mono"` . Hay algunas cosas de las que JavaScript se queja. 
Escribir un programa que no siga la gramática del lenguaje inmediatamente hara que la computadora se queje. Otras cosas, como llamar a algo que no sea una función o buscar una propiedad en un valor indefinido, causará un error que sera reportado cuando el programa intente realizar la acción. Pero a menudo, tu cálculo sin sentido simplemente producirá `NaN` (no es un número) o un valor indefinido. Y el programa continuara felizmente, convencido de que está haciendo algo significativo. El error solo se manifestara más tarde, después de que el valor falso haya viajado a traves de varias funciones. Puede no desencadenar un error en absoluto, pero en silencio causara que la salida del programa sea incorrecta. Encontrar la fuente de tales problemas puede ser algo difícil. El proceso de encontrar errores—bugs—en los programas se llama depuración. ## Modo estricto JavaScript se puede hacer un poco más estricto al habilitar el modo estricto. Esto se hace al poner el string `"use strict"` (“usar estricto”) en la parte superior de un archivo o cuerpo de función. Aquí hay un ejemplo: > edit & run code by clicking itfunction puedesDetectarElProblema() { "use strict"; for (contador = 0; contador < 10; contador++) { console.log("Feliz feliz"); } } puedesDetectarElProblema(); // → ReferenceError: contador is not defined Normalmente, cuando te olvidas de poner `let` delante de tu vinculación, como con `contador` en el ejemplo, JavaScript silenciosamente crea una vinculación global y utiliza eso. En el modo estricto, se reportara un error en su lugar. Esto es muy útil. Sin embargo, debe tenerse en cuenta que esto no funciona cuando la vinculación en cuestión ya existe como una vinculación global. En ese caso, el ciclo aún sobrescribirá silenciosamente el valor de la vinculación. Otro cambio en el modo estricto es que la vinculación `this` contiene el valor `undefined` en funciones que no se llamen como métodos. Cuando se hace una llamada fuera del modo estricto, `this` se refiere al objeto del alcance global, que es un objeto cuyas propiedades son vinculaciones globales. Entonces, si llamas accidentalmente a un método o constructor incorrectamente en el modo estricto, JavaScript producirá un error tan pronto trate de leer algo de `this` , en lugar de escribirlo felizmente al alcance global. Por ejemplo, considera el siguiente código, que llama una función constructora sin la palabra clave `new` de modo que su `this` no hara referencia a un objeto recién construido: > function Persona(nombre) { this.nombre = nombre; } let ferdinand = Persona("Ferdinand"); // oops console.log(nombre); // → Ferdinand Así que la llamada fraudulenta a `Persona` tuvo éxito pero retorno un valor indefinido y creó la vinculación `nombre` global. En el modo estricto, el resultado es diferente. > "use strict"; function Persona(nombre) { this.nombre = nombre; } let ferdinand = Persona("Ferdinand"); // olvide new // → TypeError: Cannot set property 'nombre' of undefined Se nos dice inmediatamente que algo está mal. Esto es útil. Afortunadamente, los constructores creados con la notación `class` siempre se quejan si se llaman sin `new` , lo que hace que esto sea menos un problema incluso en el modo no-estricto. El modo estricto hace algunas cosas más. No permite darle a una función múltiples parámetros con el mismo nombre y elimina ciertas características problemáticas del lenguaje por completo (como la declaración `with` (“con”), la cual esta tan mal, que no se discute en este libro). 
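Por ejemplo, un boceto mínimo de ese último punto (el nombre `sumar` es solo un nombre inventado para ilustrar): declarar dos parámetros con el mismo nombre bajo `"use strict"` produce un error de sintaxis antes de que el programa siquiera se ejecute.
> "use strict";
// SyntaxError: nombre de parámetro duplicado no permitido en este contexto
function sumar(a, a) { return a + a; }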
En resumen, poner `"use strict"` en la parte superior de tu programa rara vez duele y puede ayudarte a detectar un problema. ## Tipos Algunos lenguajes quieren saber los tipos de todas tus vinculaciones y expresiones incluso antes de ejecutar un programa. Estos te dirán de una vez cuando uses un tipo de una manera inconsistente. JavaScript solo considera a los tipos cuando ejecuta el programa, e incluso a menudo intentara convertir implícitamente los valores al tipo que espera, por lo que no es de mucha ayuda Aún así, los tipos proporcionan un marco útil para hablar acerca de los programas. Muchos errores provienen de estar confundido acerca del tipo de valor que entra o sale de una función. Si tienes esa información anotada, es menos probable que te confundas. Podrías agregar un comentario como arriba de la función `robotOrientadoAMetas` del último capítulo, para describir su tipo. > // (EstadoMundo, Array) → {direccion: string, memoria: Array} function robotOrientadoAMetas(estado, memoria) { // ... } Hay varias convenciones diferentes para anotar programas de JavaScript con tipos. Una cosa acerca de los tipos es que necesitan introducir su propia complejidad para poder describir suficiente código como para poder ser útil. Cual crees que sería el tipo de la función `eleccionAleatoria` que retorna un elemento aleatorio de un array? Deberías introducir un tipo variable, T, que puede representar cualquier tipo, para que puedas darle a `eleccionAleatoria` un tipo como `([T]) → T` (función de un array de Ts a a T). Cuando se conocen los tipos de un programa, es posible que la computadora haga un chequeo por ti, señalando los errores antes de que el programa sea ejecutado. Hay varios dialectos de JavaScript que agregan tipos al lenguaje y y los verifica. El más popular se llama TypeScript. Si estás interesado en agregarle más rigor a tus programas, te recomiendo que lo pruebes. En este libro, continuaremos usando código en JavaScript crudo, peligroso y sin tipos. ## Probando Si el lenguaje no va a hacer mucho para ayudarnos a encontrar errores, tendremos que encontrarlos de la manera difícil: ejecutando el programa y viendo si hace lo correcto. Hacer esto a mano, una y otra vez, es una muy mala idea. No solo es es molesto, también tiende a ser ineficaz, ya que lleva demasiado tiempo probar exhaustivamente todo cada vez que haces un cambio en tu programa. Las computadoras son buenas para las tareas repetitivas, y las pruebas son las tareas repetitivas ideales. Las pruebas automatizadas es el proceso de escribir un programa que prueba otro programa. Escribir pruebas consiste en algo más de trabajo que probar manualmente, pero una vez que lo haz hecho, ganas un tipo de superpoder: solo te tomara unos segundos verificar que tu programa todavía se comporta correctamente en todas las situaciones para las que escribiste tu prueba. Cuando rompas algo, lo notarás inmediatamente, en lugar aleatoriomente encontrarte con el problema en algún momento posterior. Las pruebas usualmente toman la forma de pequeños programas etiquetados que verifican algún aspecto de tu código. 
Por ejemplo, un conjunto de pruebas para el método (estándar, probablemente ya probado por otra persona) `toUpperCase` podría verse así: > function probar(etiqueta, cuerpo) { if (!cuerpo()) console.log(`Fallo: ${etiqueta}`); } probar("convertir texto Latino a mayúscula", () => { return "hola".toUpperCase() == "HOLA"; }); probar("convertir texto Griego a mayúsculas", () => { return "Χαίρετε".toUpperCase() == "ΧΑΊΡΕΤΕ"; }); probar("no convierte caracteres sin mayúsculas", () => { return "مرحبا".toUpperCase() == "مرحبا"; }); Escribir pruebas de esta manera tiende a producir código bastante repetitivo e incómodo. Afortunadamente, existen piezas de software que te ayudan a construir y ejecutar colecciones de pruebas (suites de prueba) al proporcionar un lenguaje (en forma de funciones y métodos) adecuado para expresar pruebas y obtener información informativa cuando una prueba falla. Estos generalmente se llaman corredores de pruebas. Algunos programas son más fáciles de probar que otros programas. Por lo general, con cuantos más objetos externos interactúe el código, más difícil es establecer el contexto en el cual probarlo. El estilo de programación mostrado en el capítulo anterior, que usa valores persistentes auto-contenidos en lugar de cambiar objetos, tiende a ser fácil de probar. ## Depuración Una vez que notes que hay algo mal con tu programa porque se comporta mal o produce errores, el siguiente paso es descubir cual es el problema. A veces es obvio. El mensaje de error apuntará a una línea específica de tu programa, y si miras la descripción del error y esa línea de código, a menudo puedes ver el problema. Pero no siempre. A veces, la línea que provocó el problema es simplemente el primer lugar en donde un valor extraño producido en otro lugar es usado de una manera inválida. Si has estado resolviendo los ejercicios en capítulos anteriores, probablemente ya habrás experimentado tales situaciones. El siguiente programa de ejemplo intenta convertir un número entero a un string en una base dada (decimal, binario, etc.) al repetidamente seleccionar el último dígito y luego dividiendo el número para deshacerse de este dígito. Pero la extraña salida que produce sugiere que tiene un error. > function numeroAString(n, base = 10) { let resultado = "", signo = ""; if (n < 0) { signo = "-"; n = -n; } do { resultado = String(n % base) + resultado; n /= base; } while (n > 0); return signo + resultado; } console.log(numeroAString(13, 10)); // → 1.5e-3231.3e-3221.3e-3211.3e-3201.3e-3191.3e-3181.3… Incluso si ya ves el problema, finge por un momento que no lo has hecho. Sabemos que nuestro programa no funciona bien, y queremos encontrar por qué. Aquí es donde debes resistir el impulso de comenzar a hacer cambios aleatorios en el código para ver si eso lo mejora. En cambio, piensa. Analiza lo que está sucediendo y piensa en una teoría de por qué podría ser sucediendo. Luego, haz observaciones adicionales para probar esta teoría—o si aún no tienes una teoría, haz observaciones adicionales para ayudarte a que se te ocurra una. Poner algunas llamadas estratégicas a `console.log` en el programa es una buena forma de obtener información adicional sobre lo que está haciendo el programa. En en este caso, queremos que `n` tome los valores `13` , `1` y luego `0` . Vamos a escribir su valor al comienzo del ciclo. > 13 1.3 0.13 0.013 … 1.5e-323 Exacto. Dividir 13 entre 10 no produce un número entero. 
En lugar de `n /= base`, lo que realmente queremos es `n = Math.floor(n / base)` para que el número sea correctamente “desplazado” hacia la derecha. Una alternativa al uso de `console.log` para echarle un vistazo al comportamiento del programa es usar las capacidades del depurador de tu navegador. Los navegadores vienen con la capacidad de establecer un punto de interrupción en una línea específica de tu código. Cuando la ejecución del programa alcanza una línea con un punto de interrupción, este entra en pausa, y puedes inspeccionar los valores de las vinculaciones en ese punto. No entraré en detalles, ya que los depuradores difieren de un navegador a otro, pero mira las herramientas de desarrollador de tu navegador o busca en la Web para obtener más información. Otra forma de establecer un punto de interrupción es incluir una declaración `debugger` (que consiste simplemente de esa palabra clave) en tu programa. Si las herramientas de desarrollador de tu navegador están activas, el programa pausará cada vez que llegue a tal declaración.
## Propagación de errores
Desafortunadamente, no todos los problemas pueden ser prevenidos por el programador. Si tu programa se comunica con el mundo exterior de alguna manera, es posible recibir una entrada malformada, sobrecargarse de trabajo, o que la red falle. Si solo estás programando para ti mismo, puedes permitirte ignorar tales problemas hasta que ocurran. Pero si construyes algo que va a ser utilizado por cualquier otra persona, generalmente quieres que el programa haga algo mejor que simplemente estrellarse. A veces lo correcto es tomar la mala entrada con calma y continuar ejecutando. En otros casos, es mejor informar al usuario de lo que salió mal y luego darse por vencido. Pero en cualquier situación, el programa tiene que hacer algo activamente en respuesta al problema. Supongamos que tienes una función `pedirEntero` que le pide al usuario un número entero y lo retorna. Qué deberías retornar si la entrada por parte del usuario es “naranja”? Una opción es hacer que retorne un valor especial. Opciones comunes para tales valores son `null`, `undefined`, o -1.
> function pedirEntero(pregunta) { let resultado = Number(prompt(pregunta)); if (Number.isNaN(resultado)) return null; else return resultado; } console.log(pedirEntero("Cuantos arboles ves?"));
Ahora cualquier código que llame a `pedirEntero` debe verificar si un número real fue leído y, si eso falla, de alguna manera debe recuperarse—tal vez preguntando nuevamente o usando un valor predeterminado. O podría, de nuevo, retornar un valor especial a su llamador para indicar que no pudo hacer lo que se le pidió. En muchas situaciones, principalmente cuando los errores son comunes y la persona que llama debe tenerlos explícitamente en cuenta, retornar un valor especial es una buena forma de indicar un error. Sin embargo, esto tiene sus desventajas. Primero, qué pasa si la función puede retornar cualquier tipo de valor posible? En tal función, tendrás que hacer algo como envolver el resultado en un objeto para poder distinguir el éxito del fracaso.
> function ultimoElemento(array) { if (array.length == 0) { return {fallo: true}; } else { return {elemento: array[array.length - 1]}; } }
El segundo problema con retornar valores especiales es que puede conducir a código muy incómodo. Si un fragmento de código llama a `pedirEntero` 10 veces, tiene que comprobar 10 veces si `null` fue retornado.
Y si su respuesta a encontrar `null` es simplemente retornar `null` en sí mismo, los llamadores de esa función a su vez tendrán que verificarlo, y así sucesivamente. ## Excepciones Cuando una función no puede continuar normalmente, lo que nos gustaría hacer es simplemente detener lo que estamos haciendo e inmediatamente saltar a un lugar que sepa cómo manejar el problema. Esto es lo que el manejo de excepciones hace. Las excepciones son un mecanismo que hace posible que el código que se encuentre con un problema produzca (o lance) una excepción. Una excepción puede ser cualquier valor. Producir una se asemeja a un retorno súper-cargado de una función: salta no solo de la función actual sino también fuera de sus llamadores, todo el camino hasta la primera llamada que comenzó la ejecución actual. Esto se llama desenrollando la pila. Puede que recuerdes que la pila de llamadas de función fue mencionada en el Capítulo 3. Una excepción se aleja de esta pila, descartando todos los contextos de llamadas que encuentra. Si las excepciones siempre se acercaran al final de la pila, estas no serían de mucha utilidad. Simplemente proporcionarían una nueva forma de explotar tu programa. Su poder reside en el hecho de que puedes establecer “obstáculos” a lo largo de la pila para capturar la excepción, cuando estaesta se dirige hacia abajo. Una vez que hayas capturado una excepción, puedes hacer algo con ella para abordar el problema y luego continuar ejecutando el programa. Aquí hay un ejemplo: > function pedirDireccion(pregunta) { let resultado = prompt(pregunta); if (resultado.toLowerCase() == "izquierda") return "I"; if (resultado.toLowerCase() == "derecha") return "D"; throw new Error("Dirección invalida: " + resultado); } function mirar() { if (pedirDireccion("Hacia que dirección quieres ir?") == "I") { return "una casa"; } else { return "dos osos furiosos"; } } try { console.log("Tu ves", mirar()); } catch (error) { console.log("Algo incorrecto sucedio: " + error); } La palabra clave `throw` (“producir”) se usa para generar una excepción. La captura de una se hace al envolver un fragmento de código en un bloque `try` (“intentar”), seguido de la palabra clave `catch` (“atrapar”). Cuando el código en el bloque `try` cause una excepción para ser producida, se evalúa el bloque `catch` , con el nombre en paréntesis vinculado al valor de la excepción. Después de que el bloque `catch` finaliza, o si el bloque `try` finaliza sin problemas, el programa procede debajo de toda la declaración `try/catch` . En este caso, usamos el constructor `Error` para crear nuestro valor de excepción. Este es un constructor (estándar) de JavaScript que crea un objeto con una propiedad `message` (“mensaje”). En la mayoría de los entornos de JavaScript, las instancias de este constructor también recopilan información sobre la pila de llamadas que existía cuando se creó la excepción, algo llamado seguimiento de la pila. Esta información se almacena en la propiedad `stack` (“pila”) y puede ser útil al intentar depurar un problema: esta nos dice la función donde ocurrió el problema y qué funciones realizaron la llamada fallida. Ten en cuenta que la función `mirar` ignora por completo la posibilidad de que `pedirDireccion` podría salir mal. Esta es la gran ventaja de las excepciones: el código de manejo de errores es necesario solamente en el punto donde el error ocurre y en el punto donde se maneja. Las funciones en el medio puede olvidarse de todo. Bueno, casi... 
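Como ilustración de esas dos propiedades, aquí hay un pequeño boceto (usando el mismo `pedirDireccion` de arriba; el contenido exacto de `stack` varía según el entorno). Cuando la respuesta no es válida, el bloque `catch` puede examinar ambas propiedades:
> try {
  pedirDireccion("Hacia que dirección quieres ir?");
} catch (error) {
  console.log(error.message); // el texto que se le pasó al constructor Error
  console.log(error.stack);   // la pila de llamadas al momento de crear la excepción
}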
## Limpiando después de excepciones El efecto de una excepción es otro tipo de flujo de control. Cada acción que podría causar una excepción, que es prácticamente cualquier llamada de función y acceso a propiedades, puede causar al control dejar tu codigo repentinamente. Eso significa que cuando el código tiene varios efectos secundarios, incluso si parece que el flujo de control “regular” siempre sucederá, una excepción puede evitar que algunos de ellos sucedan. Aquí hay un código bancario realmente malo. > const cuentas = { a: 100, b: 0, c: 20 }; function obtenerCuenta() { let nombreCuenta = prompt("Ingrese el nombre de la cuenta"); if (!cuentas.hasOwnProperty(nombreCuenta)) { throw new Error(`La cuenta "${nombreCuenta}" no existe`); } return nombreCuenta; } function transferir(desde, cantidad) { if (cuentas[desde] < cantidad) return; cuentas[desde] -= cantidad; cuentas[obtenerCuenta()] += cantidad; } La función `transferir` transfiere una suma de dinero desde una determinada cuenta a otra, pidiendo el nombre de la otra cuenta en el proceso. Si se le da un nombre de cuenta no válido, `obtenerCuenta` arroja una excepción. Pero `transferir` primero remueve el dinero de la cuenta, y luego llama a `obtenerCuenta` antes de añadirlo a la otra cuenta. Si esto es interrumpido por una excepción en ese momento, solo hará que el dinero desaparezca. Ese código podría haber sido escrito de una manera un poco más inteligente, por ejemplo al llamar `obtenerCuenta` antes de que se comience a mover el dinero. Pero a menudo problemas como este ocurren de maneras más sutiles. Incluso funciones que no parece que lanzarán una excepción podría hacerlo en circunstancias excepcionales o cuando contienen un error de programador. Una forma de abordar esto es usar menos efectos secundarios. De nuevo, un estilo de programación que calcula nuevos valores en lugar de cambiar los datos existentes ayuda. Si un fragmento de código deja de ejecutarse en el medio de crear un nuevo valor, nadie ve el valor a medio terminar, y no hay ningún problema. Pero eso no siempre es práctico. Entonces, hay otra característica que las declaraciones `try` tienen. Estas pueden ser seguidas por un bloque `finally` (“finalmente”) en lugar de o además de un bloque `catch` . Un bloque `finally` dice “no importa lo que pase, ejecuta este código después de intentar ejecutar el código en el bloque `try` .” > function transferir(desde, cantidad) { if (cuentas[desde] < cantidad) return; let progreso = 0; try { cuentas[desde] -= cantidad; progreso = 1; cuentas[obtenerCuenta()] += cantidad; progreso = 2; } finally { if (progreso == 1) { cuentas[desde] += cantidad; } } } Esta versión de la función rastrea su progreso, y si, cuando este terminando, se da cuenta de que fue abortada en un punto donde habia creado un estado de programa inconsistente, repara el daño que hizo. Ten en cuenta que, aunque el código `finally` se ejecuta cuando una excepción deja el bloque `try` , no interfiere con la excepción. Después de que se ejecuta el bloque `finally` , la pila continúa desenrollandose. Escribir programas que funcionan de manera confiable incluso cuando aparecen excepciones en lugares inesperados es muy difícil. Muchas personas simplemente no se molestan, y porque las excepciones suelen reservarse para circunstancias excepcionales, el problema puede ocurrir tan raramente que nunca siquiera es notado. Si eso es algo bueno o algo realmente malo depende de cuánto daño hará el software cuando falle. 
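Para ilustrar ese último punto con un boceto mínimo (el nombre `tarea` y el mensaje son inventados para el ejemplo): el bloque `finally` se ejecuta primero, y después la excepción continúa desenrollando la pila hasta el `catch` exterior.
> function tarea() {
  try {
    throw new Error("falló a mitad de camino");
  } finally {
    console.log("limpiando");
  }
}
try {
  tarea();
} catch (e) {
  console.log("atrapado:", e.message);
}
// → limpiando
// → atrapado: falló a mitad de camino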
## Captura selectiva Cuando una excepción llega hasta el final de la pila sin ser capturada, esta es manejada por el entorno. Lo que esto significa difiere entre los entornos. En los navegadores, una descripción del error generalmente sera escrita en la consola de JavaScript (accesible a través de las herramientas de desarrollador del navegador). Node.js, el entorno de JavaScript sin navegador que discutiremos en el Capítulo 20, es más cuidadoso con la corrupción de datos. Aborta todo el proceso cuando ocurre una excepción no manejada. Para los errores de programador, solo dejar pasar al error es a menudo lo mejor que puedes hacer. Una excepción no manejada es una forma razonable de señalizar un programa roto, y la consola de JavaScript, en los navegadores moderno, te proporcionan cierta información acerca de qué llamdas de función estaban en la pila cuando ocurrió el problema. Para problemas que se espera que sucedan durante el uso rutinario, estrellarse con una excepción no manejada es una estrategia terrible. Usos inválidos del lenguaje, como hacer referencia a vinculaciones inexistentes, buscar una propiedad en `null` , o llamar a algo que no sea una función, también dará como resultado que se levanten excepciones. Tales excepciones también pueden ser atrapadas. Cuando se ingresa en un cuerpo `catch` , todo lo que sabemos es que algo en nuestro cuerpo `try` provocó una excepción. Pero no sabemos que, o cual excepción este causó. JavaScript (en una omisión bastante evidente) no proporciona soporte directo para la captura selectiva de excepciones: o las atrapas todas o no atrapas nada. Esto hace que sea tentador asumir que la excepción que obtienes es en la que estabas pensando cuando escribiste el bloque `catch` . Pero puede que no. Alguna otra suposición podría ser violada, o es posible que hayas introducido un error que está causando una excepción. Aquí está un ejemplo que intenta seguir llamando `pedirDireccion` hasta que obtenga una respuesta válida: > for (;;) { try { let direccion = peirDirrecion("Donde?"); // ← error tipografico! console.log("Tu elegiste ", direccion); break; } catch (e) { console.log ("No es una dirección válida. Inténtalo de nuevo"); } } El constructo `for (;;)` es una forma de crear intencionalmente un ciclo que no termine por si mismo. Salimos del ciclo solamente una cuando dirección válida sea dada. Pero escribimos mal `pedirDireccion` , lo que dará como resultado un error de “variable indefinida”. Ya que el bloque `catch` ignora por completo su valor de excepción ( `e` ), suponiendo que sabe cuál es el problema, trata erróneamente al error de vinculación como indicador de una mala entrada. Esto no solo causa un ciclo infinito, también “entierra” el útil mensaje de error acerca de la vinculación mal escrita. Como regla general, no incluyas excepciones a menos de que sean con el propósito de “enrutarlas” hacia alguna parte—por ejemplo, a través de la red para decirle a otro sistema que nuestro programa se bloqueó. E incluso entonces, piensa cuidadosamente sobre cómo podrias estar ocultando información. Por lo tanto, queremos detectar un tipo de excepción específico. Podemos hacer esto al revisar el bloque `catch` si la excepción que tenemos es en la que estamos interesados ​​y relanzar de otra manera. Pero como hacemos para reconocer una excepción? Podríamos comparar su propiedad `message` con el mensaje de error que sucede estamos esperando. 
Pero esa es una forma inestable de escribir código—estaríamos utilizando información destinada al consumo humano (el mensaje) para tomar una decisión programática. Tan pronto como alguien cambie (o traduzca) el mensaje, el código dejaría de funcionar. En vez de esto, definamos un nuevo tipo de error y usemos `instanceof` para identificarlo.
> class ErrorDeEntrada extends Error {} function pedirDireccion(pregunta) { let resultado = prompt(pregunta); if (resultado.toLowerCase() == "izquierda") return "I"; if (resultado.toLowerCase() == "derecha") return "D"; throw new ErrorDeEntrada("Direccion invalida: " + resultado); }
La nueva clase de error extiende `Error`. No define su propio constructor, lo que significa que hereda el constructor de `Error`, que espera un mensaje de string como argumento. De hecho, no define nada—la clase está vacía. Los objetos `ErrorDeEntrada` se comportan como objetos `Error`, excepto que tienen una clase diferente por la cual podemos reconocerlos. Ahora el ciclo puede atraparlos con más cuidado.
> for (;;) { try { let direccion = pedirDireccion("Donde?"); console.log("Tu eliges ", direccion); break; } catch (e) { if (e instanceof ErrorDeEntrada) { console.log("No es una dirección válida. Inténtalo de nuevo"); } else { throw e; } } }
Esto capturará solo las instancias de `ErrorDeEntrada` y dejará pasar las excepciones no relacionadas. Si reintroduces el error tipográfico, el error de la vinculación indefinida será reportado correctamente.
## Afirmaciones
Las afirmaciones son comprobaciones dentro de un programa que verifican que algo esté en la forma en la que se supone que debe estar. No se usan para manejar situaciones que puedan aparecer durante el funcionamiento normal, sino para encontrar errores hechos por el programador. Si, por ejemplo, `primerElemento` se describe como una función que nunca se debería invocar con arrays vacíos, podríamos escribirla así:
> function primerElemento(array) { if (array.length == 0) { throw new Error("primerElemento llamado con []"); } return array[0]; }
Ahora, en lugar de retornar silenciosamente `undefined` (que es lo que obtienes cuando lees una propiedad de array que no existe), esto hará explotar ruidosamente tu programa tan pronto como lo uses mal. Esto hace que sea menos probable que tales errores pasen desapercibidos, y más fácil encontrar su causa cuando ocurran. No recomiendo tratar de escribir afirmaciones para todos los tipos posibles de entradas erróneas. Eso sería mucho trabajo y llevaría a código muy ruidoso. Querrás reservarlas para errores que son fáciles de cometer (o que te encuentras cometiendo constantemente). Los errores y las malas entradas son hechos de la vida. Una parte importante de la programación es encontrar, diagnosticar y corregir errores. Los problemas pueden ser más fáciles de notar si tienes un conjunto de pruebas automatizadas o si agregas afirmaciones a tus programas. Por lo general, los problemas causados por factores fuera del control del programa deberían ser manejados con gracia. A veces, cuando el problema puede ser manejado localmente, los valores de retorno especiales son una buena forma de rastrearlo. De lo contrario, las excepciones pueden ser preferibles. Al lanzar una excepción, se desenrolla la pila de llamadas hasta el próximo bloque `try/catch` o hasta el final de la pila. Se le dará el valor de la excepción al bloque `catch` que lo atrape, el cual debería verificar que en realidad es el tipo esperado de excepción y luego hacer algo con eso.
Para ayudar a controlar el impredecible flujo de control causado por las excepciones, los bloques `finally` se pueden usar para asegurarte de que una parte del código siempre se ejecute cuando un bloque termina.
### Reintentar
Digamos que tienes una función `multiplicacionPrimitiva` que, en el 20 por ciento de los casos, multiplica dos números, y en el otro 80 por ciento, genera una excepción del tipo `FalloUnidadMultiplicadora`. Escribe una función que envuelva esta torpe función y solo siga intentando hasta que una llamada tenga éxito, después de lo cual retorna el resultado. Asegúrate de solo manejar las excepciones que estás tratando de manejar.
> class FalloUnidadMultiplicadora extends Error {} function multiplicacionPrimitiva(a, b) { if (Math.random() < 0.2) { return a * b; } else { throw new FalloUnidadMultiplicadora("Klunk"); } } function multiplicacionConfiable(a, b) { // Tu código aqui. } console.log(multiplicacionConfiable(8, 8)); // → 64
La llamada a `multiplicacionPrimitiva` definitivamente debería suceder en un bloque `try`. El bloque `catch` correspondiente debe volver a lanzar la excepción cuando esta no sea una instancia de `FalloUnidadMultiplicadora` y asegurar que la llamada sea reintentada cuando sí lo sea. Para reintentar, puedes usar un ciclo que solo se rompa cuando la llamada tenga éxito, como en el ejemplo de `mirar` anteriormente en este capítulo—o usar recursión y esperar que no obtengas una cadena de fallas tan larga que desborde la pila (lo cual es una apuesta bastante segura).
### La caja bloqueada
Considera el siguiente objeto (bastante artificial):
> const caja = { bloqueada: true, desbloquear() { this.bloqueada = false; }, bloquear() { this.bloqueada = true; }, _contenido: [], get contenido() { if (this.bloqueada) throw new Error("Bloqueada!"); return this._contenido; } };
Es solo una caja con una cerradura. Hay un array en la caja, pero solo puedes accederlo cuando la caja esté desbloqueada. Acceder directamente a la propiedad privada `_contenido` está prohibido. Escribe una función llamada `conCajaDesbloqueada` que toma un valor de función como su argumento, desbloquea la caja, ejecuta la función y luego se asegura de que la caja se bloquee nuevamente antes de retornar, independientemente de si la función argumento retornó normalmente o lanzó una excepción.
> const caja = { bloqueada: true, desbloquear() { this.bloqueada = false; }, bloquear() { this.bloqueada = true; }, _contenido: [], get contenido() { if (this.bloqueada) throw new Error("Bloqueada!"); return this._contenido; } }; function conCajaDesbloqueada(cuerpo) { // Tu código aqui. } conCajaDesbloqueada(function() { caja.contenido.push("moneda de oro"); }); try { conCajaDesbloqueada(function() { throw new Error("Piratas en el horizonte! Abortar!"); }); } catch (e) { console.log("Error encontrado:", e); } console.log(caja.bloqueada); // → true
Por puntos extras, asegúrate de que si llamas a `conCajaDesbloqueada` cuando la caja ya está desbloqueada, la caja permanezca desbloqueada. Este ejercicio requiere de un bloque `finally`. Tu función debería primero desbloquear la caja y luego llamar a la función argumento desde dentro de un cuerpo `try`. El bloque `finally` después de él debería bloquear la caja nuevamente. Para asegurarte de que no bloqueemos la caja cuando no estaba ya bloqueada, comprueba su estado de bloqueo al comienzo de la función y desbloquea y bloquea solo cuando la caja comenzó bloqueada.
Algunas personas, cuando confrontadas con un problema, piensan ‘Ya sé, usaré expresiones regulares.’ Ahora tienen dos problemas.
Yuan-Ma dijo: ‘Cuando cortas contra el grano de la madera, mucha fuerza se necesita. Cuando programas contra el grano del problema, mucho código se necesita. Las herramientas y técnicas de la programación sobreviven y se propagan de una forma caótica y evolutiva. No siempre son los bonitas o las brillantes las que ganan, sino más bien las que funcionan lo suficientemente bien dentro del nicho correcto o que sucede se integran con otra pieza exitosa de tecnología. En este capítulo, discutiré una de esas herramientas, expresiones regulares. Las expresiones regulares son una forma de describir patrones en datos de tipo string. Estas forman un lenguaje pequeño e independiente que es parte de JavaScript y de muchos otros lenguajes y sistemas. Las expresiones regulares son terriblemente incómodas y extremadamente útiles. Su sintaxis es críptica, y la interfaz de programación que JavaScript proporciona para ellas es torpe. Pero son una poderosa herramienta para inspeccionar y procesar cadenas. Entender apropiadamente a las expresiones regulares te hará un programador más efectivo. ## Creando una expresión regular Una expresión regular es un tipo de objeto. Puede ser construido con el constructor `RegExp` o escrito como un valor literal al envolver un patrón en caracteres de barras diagonales ( `/` ). > edit & run code by clicking itlet re1 = new RegExp("abc"); let re2 = /abc/; Ambos objetos de expresión regular representan el mismo patrón: un carácter a seguido por una b seguida de una c. Cuando se usa el constructor `RegExp` , el patrón se escribe como un string normal, por lo que las reglas habituales se aplican a las barras invertidas. La segunda notación, donde el patrón aparece entre caracteres de barras diagonales, trata a las barras invertidas de una forma diferente. Primero, dado que una barra diagonal termina el patrón, tenemos que poner una barra invertida antes de cualquier barra diagonal que queremos sea parte del patrón. En adición, las barras invertidas que no sean parte de códigos especiales de caracteres (como `\n` ) seran preservadas, en lugar de ignoradas, ya que están en strings, y cambian el significado del patrón. Algunos caracteres, como los signos de interrogación pregunta y los signos de adición, tienen significados especiales en las expresiones regulares y deben ir precedidos por una barra inversa si se pretende que representen al caracter en sí mismo. > let dieciochoMas = /dieciocho\+/; ## Probando por coincidencias Los objetos de expresión regular tienen varios métodos. El más simple es `test` (“probar”). Si le pasas un string, retornar un Booleano diciéndote si el string contiene una coincidencia del patrón en la expresión. > console.log(/abc/.test("abcde")); // → true console.log(/abc/.test("abxde")); // → false Una expresión regular que consista solamente de caracteres no especiales simplemente representara esa secuencia de caracteres. Si abc ocurre en cualquier parte del string con la que estamos probando (no solo al comienzo), `test` retornara `true` . ## Conjuntos de caracteres Averiguar si un string contiene abc bien podría hacerse con una llamada a `indexOf` . Las expresiones regulares nos permiten expresar patrones más complicados. Digamos que queremos encontrar cualquier número. En una expresión regular, poner un conjunto de caracteres entre corchetes hace que esa parte de la expresión coincida con cualquiera de los caracteres entre los corchetes. 
Ambas expresiones coincidiran con todas los strings que contengan un dígito: > console.log(/[0123456789]/.test("en 1992")); // → true console.log(/[0-9]/.test("en 1992")); // → true Dentro de los corchetes, un guion ( `-` ) entre dos caracteres puede ser utilizado para indicar un rango de caracteres, donde el orden es determinado por el número Unicode del carácter. Los caracteres 0 a 9 estan uno al lado del otro en este orden (códigos 48 a 57), por lo que `[0-9]` los cubre a todos y coincide con cualquier dígito. Un numero de caracteres comunes tienen sus propios atajos incorporados. Los dígitos son uno de ellos: `\d` significa lo mismo que `[0-9]` . | Cualquier caracter dígito | | --- | --- | | Un caracter alfanumérico | | Cualquier carácter de espacio en blanco (espacio, tabulación, nueva línea y similar) | | Un caracter que no es un dígito | | Un caracter no alfanumérico | | Un caracter que no es un espacio en blanco | | Cualquier caracter a excepción de una nueva línea | Por lo que podrías coincidir con un formato de fecha y hora como 30-01-2003 15:20 con la siguiente expresión: > let fechaHora = /\d\d-\d\d-\d\d\d\d \d\d:\d\d/; console.log(fechaHora.test("30-01-2003 15:20")); // → true console.log(fechaHora.test("30-jan-2003 15:20")); // → false Eso se ve completamente horrible, no? La mitad de la expresión son barras invertidas, produciendo un ruido de fondo que hace que sea difícil detectar el patrón real que queremos expresar. Veremos una versión ligeramente mejorada de esta expresión más tarde. Estos códigos de barra invertida también pueden usarse dentro de corchetes. Por ejemplo, `[\d.]` representa cualquier dígito o un carácter de punto. Pero el punto en sí mismo, entre corchetes, pierde su significado especial. Lo mismo va para otros caracteres especiales, como `+` . Para invertir un conjunto de caracteres, es decir, para expresar que deseas coincidir con cualquier carácter excepto con los que están en el conjunto—puedes escribir un carácter de intercalación ( `^` ) después del corchete de apertura. > let noBinario = /[^01]/; console.log(noBinario.test("1100100010100110")); // → false console.log(noBinario.test("1100100010200110")); // → true ## Repitiendo partes de un patrón Ya sabemos cómo hacer coincidir un solo dígito. Qué pasa si queremos hacer coincidir un número completo—una secuencia de uno o más dígitos? Cuando pones un signo más ( `+` ) después de algo en una expresión regular, este indica que el elemento puede repetirse más de una vez. Por lo tanto, `/\d+/` coincide con uno o más caracteres de dígitos. > console.log(/'\d+'/.test("'123'")); // → true console.log(/'\d+'/.test("''")); // → false console.log(/'\d*'/.test("'123'")); // → true console.log(/'\d*'/.test("''")); // → true La estrella ( `*` ) tiene un significado similar pero también permite que el patrón coincida cero veces. Algo con una estrella después de el nunca evitara un patrón de coincidirlo—este solo coincidirá con cero instancias si no puede encontrar ningun texto adecuado para coincidir. Un signo de interrogación hace que alguna parte de un patrón sea opcional, lo que significa que puede ocurrir cero o mas veces. En el siguiente ejemplo, el carácter h está permitido, pero el patrón también retorna verdadero cuando esta letra no esta. > let reusar = /reh?usar/; console.log(reusar.test("rehusar")); // → true console.log(reusar.test("reusar")); // → true Para indicar que un patrón deberia ocurrir un número preciso de veces, usa llaves. 
Por ejemplo, al poner `{4}` después de un elemento, hace que requiera que este ocurra exactamente cuatro veces. También es posible especificar un rango de esta manera: `{2,4}` significa que el elemento debe ocurrir al menos dos veces y como máximo cuatro veces. Aquí hay otra versión del patrón fecha y hora que permite días tanto en dígitos individuales como dobles, meses y horas. Es también un poco más fácil de descifrar. > let fechaHora = /\d{1,2}-\d{1,2}-\d{4} \d{1,2}:\d{2}/; console.log(fechaHora.test("30-1-2003 8:45")); // → true También puedes especificar rangos de final abierto al usar llaves omitiendo el número después de la coma. Entonces, `{5,}` significa cinco o más veces. ## Agrupando subexpresiones Para usar un operador como `*` o `+` en más de un elemento a la vez, tienes que usar paréntesis. Una parte de una expresión regular que se encierre entre paréntesis cuenta como un elemento único en cuanto a los operadores que la siguen están preocupados. > let caricaturaLlorando = /boo+(hoo+)+/i; console.log(caricaturaLlorando.test("Boohoooohoohooo")); // → true El primer y segundo caracter `+` aplican solo a la segunda o en boo y hoo, respectivamente. El tercer `+` se aplica a la totalidad del grupo `(hoo+)` , haciendo coincidir una o más secuencias como esa. La `i` al final de la expresión en el ejemplo hace que esta expresión regular sea insensible a mayúsculas y minúsculas, lo que permite que coincida con la letra mayúscula B en el string que se le da de entrada, asi el patrón en sí mismo este en minúsculas. ## Coincidencias y grupos El método `test` es la forma más simple de hacer coincidir una expresión. Solo te dice si coincide y nada más. Las expresiones regulares también tienen un método `exec` (“ejecutar”) que retorna `null` si no se encontró una coincidencia y retorna un objeto con información sobre la coincidencia de lo contrario. > let coincidencia = /\d+/.exec("uno dos 100"); console.log(coincidencia); // → ["100"] console.log(coincidencia.index); // → 8 Un objeto retornado por `exec` tiene una propiedad `index` (“indice”) que nos dice donde en el string comienza la coincidencia exitosa. Aparte de eso, el objeto parece (y de hecho es) un array de strings, cuyo primer elemento es el string que coincidio—en el ejemplo anterior, esta es la secuencia de dígitos que estábamos buscando. Los valores de tipo string tienen un método `match` que se comporta de manera similar. > console.log("uno dos 100".match(/\d+/)); // → ["100"] Cuando la expresión regular contenga subexpresiones agrupadas con paréntesis, el texto que coincida con esos grupos también aparecerá en el array. La coincidencia completa es siempre el primer elemento. El siguiente elemento es la parte que coincidio con el primer grupo (el que abre paréntesis primero en la expresión), luego el segundo grupo, y asi sucesivamente. > let textoCitado = /'([^']*)'/; console.log(textoCitado.exec("ella dijo 'hola'")); // → ["'hola'", "hola"] Cuando un grupo no termina siendo emparejado en absoluto (por ejemplo, cuando es seguido de un signo de interrogación), su posición en el array de salida sera `undefined` . Del mismo modo, cuando un grupo coincida multiples veces, solo la ultima coincidencia termina en el array. > console.log(/mal(isimo)?/.exec("mal")); // → ["mal", undefined] console.log(/(\d)+/.exec("123")); // → ["123", "3"] Los grupos pueden ser útiles para extraer partes de un string. 
Si no solo queremos verificar si un string contiene una fecha pero también extraerla y construir un objeto que la represente, podemos envolver paréntesis alrededor de los patrones de dígitos y tomar directamente la fecha del resultado de `exec` . Pero primero, un breve desvío, en el que discutiremos la forma incorporada de representar valores de fecha y hora en JavaScript. ## La clase Date (“Fecha”) JavaScript tiene una clase estándar para representar fechas—o mejor dicho, puntos en el tiempo. Se llama `Date` . Si simplemente creas un objeto fecha usando `new` , obtienes la fecha y hora actual. > console.log(new Date()); // → Mon Nov 13 2017 16:19:11 GMT+0100 (CET) También puedes crear un objeto para un tiempo específico. > console.log(new Date(2009, 11, 9)); // → Wed Dec 09 2009 00:00:00 GMT+0100 (CET) console.log(new Date(2009, 11, 9, 12, 59, 59, 999)); // → Wed Dec 09 2009 12:59:59 GMT+0100 (CET) JavaScript usa una convención en donde los números de los meses comienzan en cero (por lo que Diciembre es 11), sin embargo, los números de los días comienzan en uno. Esto es confuso y tonto. Ten cuidado. Los últimos cuatro argumentos (horas, minutos, segundos y milisegundos) son opcionales y se toman como cero cuando no se dan. Las marcas de tiempo se almacenan como la cantidad de milisegundos desde el inicio de 1970, en la zona horaria UTC. Esto sigue una convención establecida por el “Tiempo Unix”, el cual se inventó en ese momento. Puedes usar números negativos para los tiempos anteriores a 1970. Usar el método `getTime` (“obtenerTiempo”) en un objeto fecha retorna este número. Es bastante grande, como te puedes imaginar. > console.log(new Date(2013, 11, 19).getTime()); // → 1387407600000 console.log(new Date(1387407600000)); // → Thu Dec 19 2013 00:00:00 GMT+0100 (CET) Si le das al constructor `Date` un único argumento, ese argumento sera tratado como un conteo de milisegundos. Puedes obtener el recuento de milisegundos actual creando un nuevo objeto `Date` y llamando `getTime` en él o llamando a la función `Date.now` . Los objetos de fecha proporcionan métodos como `getFullYear` (“obtenerAñoCompleto”), `getMonth` (“obtenerMes”), `getDate` (“obtenerFecha”), `getHours` (“obtenerHoras”), `getMinutes` (“obtenerMinutos”), y `getSeconds` (“obtenerSegundos”) para extraer sus componentes. Además de `getFullYear` , también existe `getYear` (“obtenerAño”), que te da como resultado un valor de año de dos dígitos bastante inútil (como `93` o `14` ). Al poner paréntesis alrededor de las partes de la expresión en las que estamos interesados, ahora podemos crear un objeto de fecha a partir de un string. > function obtenerFecha(string) { let [_, dia, mes, año] = /(\d{1,2})-(\d{1,2})-(\d{4})/.exec(string); return new Date(año, mes - 1, dia); } console.log(obtenerFecha("30-1-2003")); // → Thu Jan 30 2003 00:00:00 GMT+0100 (CET) La vinculación `_` (guion bajo) es ignorada, y solo se usa para omitir el elemento de coincidencia completa en el array retornado por `exec` . ## Palabra y límites de string Desafortunadamente, `obtenerFecha` felizmente también extraerá la absurda fecha 00-1-3000 del string `"100-1-30000"` . Una coincidencia puede suceder en cualquier lugar del string, por lo que en este caso, esta simplemente comenzará en el segundo carácter y terminara en el penúltimo carácter. Si queremos hacer cumplir que la coincidencia deba abarcar el string completamente, puedes agregar los marcadores `^` y `$` . 
El signo de intercalación ("^") coincide con el inicio del string de entrada, mientras que el signo de dólar coincide con el final. Entonces, `/^\d+$/` coincide con un string compuesto por uno o más dígitos, `/^!/` coincide con cualquier string que comience con un signo de exclamación, y `/x^/` no coincide con ningun string (no puede haber una x antes del inicio del string). Si, por el otro lado, solo queremos asegurarnos de que la fecha comience y termina en un límite de palabras, podemos usar el marcador `\b` . Un límite de palabra puede ser el inicio o el final del string o cualquier punto en el string que tenga un carácter de palabra (como en `\w` ) en un lado y un carácter de no-palabra en el otro. > console.log(/cat/.test("concatenar")); // → true console.log(/\bcat\b/.test("concatenar")); // → false Ten en cuenta que un marcador de límite no coincide con un carácter real. Solo hace cumplir que la expresión regular coincida solo cuando una cierta condición se mantenga en el lugar donde aparece en el patrón. ## Patrones de elección Digamos que queremos saber si una parte del texto contiene no solo un número pero un número seguido de una de las palabras cerdo, vaca, o pollo, o cualquiera de sus formas plurales. Podríamos escribir tres expresiones regulares y probarlas a su vez, pero hay una manera más agradable. El carácter de tubería ( `|` ) denota una elección entre el patrón a su izquierda y el patrón a su derecha. Entonces puedo decir esto: > let conteoAnimales = /\b\d+ (cerdo|vaca|pollo)s?\b/; console.log(conteoAnimales.test("15 cerdo")); // → true console.log(conteoAnimales.test("15 cerdopollos")); // → false Los paréntesis se pueden usar para limitar la parte del patrón a la que aplica el operador de tuberia, y puedes poner varios de estos operadores unos a los lados de los otros para expresar una elección entre más de dos alternativas. ## Las mecánicas del emparejamiento Conceptualmente, cuando usas `exec` o `test` el motor de la expresión regular busca una coincidencia en tu string al tratar de hacer coincidir la expresión primero desde el comienzo del string, luego desde el segundo caracter, y así sucesivamente hasta que encuentra una coincidencia o llega al final del string. Retornara la primera coincidencia que se puede encontrar o fallara en encontrar cualquier coincidencia. Para realmente hacer la coincidencia, el motor trata una expresión regular algo así como un diagrama de flujo. Este es el diagrama para la expresión de ganado en el ejemplo anterior: Nuestra expresión coincide si podemos encontrar un camino desde el lado izquierdo del diagrama al lado derecho. Mantenemos una posición actual en el string, y cada vez que nos movemos a través de una caja, verificaremos que la parte del string después de nuestra posición actual coincida con esa caja. Entonces, si tratamos de coincidir `"los 3 cerdos"` desde la posición 4, nuestro progreso a través del diagrama de flujo se vería así: En la posición 4, hay un límite de palabra, por lo que podemos pasar la primera caja. * Aún en la posición 4, encontramos un dígito, por lo que también podemos pasar la segunda caja. * En la posición 5, una ruta regresa a antes de la segunda caja (dígito), mientras que la otro se mueve hacia adelante a través de la caja que contiene un caracter de espacio simple. Hay un espacio aquí, no un dígito, asi que debemos tomar el segundo camino. * Ahora estamos en la posición 6 (el comienzo de “cerdos”) y en el camino de tres vías en el diagrama. 
No vemos “vaca” o “pollo” aquí, pero vemos “cerdo”, entonces tomamos esa rama. * En la posición 9, después de la rama de tres vías, un camino se salta la caja s y va directamente al límite de palabra final, mientras que la otra ruta coincide con una s. Aquí hay un carácter s, no un límite de palabra, por lo que pasamos por la caja s. * Estamos en la posición 10 (al final del string) y solo podemos hacer coincidir un límite de palabra. El final de un string cuenta como un límite de palabra, así que pasamos por la última caja y hemos emparejado con éxito este string.
## Retrocediendo
La expresión regular `/\b([01]+b|[\da-f]+h|\d+)\b/` coincide con un número binario seguido de una b, un número hexadecimal (es decir, en base 16, con las letras a a f representando los dígitos 10 a 15) seguido de una h, o un número decimal regular sin caracter de sufijo. Este es el diagrama correspondiente: Al hacer coincidir esta expresión, a menudo sucederá que se ingrese a la rama superior (binaria) aunque la entrada en realidad no contenga un número binario. Al hacer coincidir el string `"103"`, por ejemplo, queda claro solo al llegar al 3 que estamos en la rama equivocada. El string sí coincide con la expresión, pero no con la rama en la que nos encontramos actualmente. Entonces el “emparejador” retrocede. Al ingresar a una rama, este recuerda su posición actual (en este caso, al comienzo del string, justo después de la primera caja de límite en el diagrama) para que pueda retroceder e intentar otra rama si la actual no funciona. Para el string `"103"`, después de encontrar el carácter 3, comenzará a probar la rama para números hexadecimales, que falla nuevamente porque no hay una h después del número. Por lo tanto, intenta con la rama de número decimal. Esta encaja, y se informa de una coincidencia después de todo. El emparejador se detiene tan pronto como encuentra una coincidencia completa. Esto significa que si múltiples ramas podrían coincidir con un string, solo la primera (ordenadas según dónde aparecen las ramas en la expresión regular) es usada. El retroceso también ocurre para operadores de repetición como `+` y `*`. Si haces coincidir `/^.*x/` contra `"abcxe"`, la parte `.*` intentará primero consumir todo el string. El motor entonces se dará cuenta de que necesita una x para que coincida con el patrón. Como no hay ninguna x más allá del final del string, el operador de estrella intenta hacer coincidir un caracter menos. Pero el emparejador tampoco encuentra una x después de `abcx`, por lo que retrocede nuevamente, haciendo coincidir el operador de estrella solo con `abc`. Ahora encuentra una x donde la necesita e informa de una coincidencia exitosa de las posiciones 0 a 4. Es posible escribir expresiones regulares que harán un montón de retrocesos. Este problema ocurre cuando un patrón puede coincidir con una pieza de entrada de muchas maneras diferentes. Por ejemplo, si nos confundimos mientras escribimos una expresión regular de números binarios, podríamos accidentalmente escribir algo como `/([01]+)+b/`. Si intentas hacer coincidir eso con alguna serie larga de ceros y unos sin un caracter b al final, el emparejador primero pasará por el ciclo interior hasta que se quede sin dígitos. Entonces nota que no hay b, así que retrocede una posición, atraviesa el ciclo externo una vez, y se da por vencido otra vez, tratando de retroceder fuera del ciclo interno una vez más. Continuará probando todas las rutas posibles a través de estos dos ciclos. Esto significa que la cantidad de trabajo se duplica con cada caracter.
Incluso para unas pocas docenas de caracters, la coincidencia resultante tomará prácticamente para siempre. ## El método replace Los valores de string tienen un método `replace` (“reemplazar”) que se puede usar para reemplazar parte del string con otro string. > console.log("papa".replace("p", "m")); // → mapa El primer argumento también puede ser una expresión regular, en cuyo caso ña primera coincidencia de la expresión regular es reemplazada. Cuando una opción `g` (para global) se agrega a la expresión regular, todas las coincidencias en el string será reemplazadas, no solo la primera. > console.log("Borobudur".replace(/[ou]/, "a")); // → Barobudur console.log("Borobudur".replace(/[ou]/g, "a")); // → Barabadar Hubiera sido sensato si la elección entre reemplazar una coincidencia o todas las coincidencias se hiciera a través de un argumento adicional en `replace` o proporcionando un método diferente, `replaceAll` (“reemplazarTodas”). Pero por alguna desafortunada razón, la elección se basa en una propiedad de los expresiones regulares en su lugar. El verdadero poder de usar expresiones regulares con `replace` viene del hecho de que podemos referirnos a grupos coincidentes en la string de reemplazo. Por ejemplo, supongamos que tenemos una gran string que contenga los nombres de personas, un nombre por línea, en el formato `Apellido, Nombre` . Si deseamos intercambiar estos nombres y eliminar la coma para obtener un formato `Nombre Apellido` , podemos usar el siguiente código: > console.log( "Liskov, Barbara\nMcCarthy, John\nWadler, Philip" .replace(/(\w+), (\w+)/g, "$2 $1")); // → <NAME> // <NAME> // <NAME> Los `$1` y `$2` en el string de reemplazo se refieren a los grupos entre paréntesis del patrón. `$1` se reemplaza por el texto que coincide con el primer grupo, `$2` por el segundo, y así sucesivamente, hasta `$9` . Puedes hacer referencia a la coincidencia completa con `$&` . Es posible pasar una función, en lugar de un string, como segundo argumento para `replace` . Para cada reemplazo, la función será llamada con los grupos coincidentes (así como con la coincidencia completa) como argumentos, y su valor de retorno se insertará en el nuevo string. Aquí hay un pequeño ejemplo: > let s = "la cia y el fbi"; console.log(s.replace(/\b(fbi|cia)\b/g, str => str.toUpperCase())); // → la CIA y el FBI Y aquí hay uno más interesante: > let almacen = "1 limon, 2 lechugas, y 101 huevos"; function menosUno(coincidencia, cantidad, unidad) { cantidad = Number(cantidad) - 1; if (cantidad == 1) { // solo queda uno, remover la 's' unidad = unidad.slice(0, unidad.length - 1); } else if (cantidad == 0) { cantidad = "sin"; } return cantidad + " " + unidad; } console.log(almacen.replace(/(\d+) (\w+)/g, menosUno)); // → sin limon, 1 lechuga, y 100 huevos Esta función toma un string, encuentra todas las ocurrencias de un número seguido de una palabra alfanumérica, y retorna un string en la que cada ocurrencia es decrementada por uno. El grupo `(\d+)` termina como el argumento `cantidad` para la función, y el grupo `(\w+)` se vincula a `unidad` . La función convierte `cantidad` a un número—lo que siempre funciona, ya que coincidio con `\d+` —y realiza algunos ajustes en caso de que solo quede uno o cero. ## Codicia Es posible usar `replace` para escribir una función que elimine todo los comentarios de un fragmento de código JavaScript. 
Aquí hay un primer intento:
> function removerComentarios(codigo) { return codigo.replace(/\/\/.*|\/\*[^]*\*\//g, ""); } console.log(removerComentarios("1 + /* 2 */3")); // → 1 + 3 console.log(removerComentarios("x = 10;// ten!")); // → x = 10; console.log(removerComentarios("1 /* a */+/* b */ 1")); // → 1 1
La parte antes del operador or (`|`) coincide con dos caracteres de barra inclinada seguidos de cualquier número de caracteres que no sean nuevas líneas. La parte para los comentarios de líneas múltiples es más complicada. Usamos `[^]` (cualquier caracter que no esté en el conjunto vacío de caracteres) como una forma de coincidir con cualquier caracter. No podemos simplemente usar un punto aquí porque los comentarios de bloque pueden continuar en una nueva línea, y el carácter de punto no coincide con caracteres de nueva línea. Pero la salida de la última línea parece haber salido mal. Por qué? La parte `[^]*` de la expresión, como describí en la sección sobre el retroceso, primero coincidirá con tanto como sea posible. Si eso causa un fallo en la siguiente parte del patrón, el emparejador retrocede un caracter e intenta nuevamente desde allí. En el ejemplo, el emparejador primero intenta emparejar el resto del string y luego se mueve hacia atrás desde allí. Este encontrará una ocurrencia de `*/` después de retroceder cuatro caracteres y emparejará eso. Esto no es lo que queríamos—la intención era hacer coincidir un solo comentario, no ir hasta el final del código y encontrar el final del último comentario de bloque. Debido a este comportamiento, decimos que los operadores de repetición ( `+`, `*`, `?` y `{}` ) son codiciosos, lo que significa que coinciden con tanto como pueden y retroceden desde allí. Si colocas un signo de interrogación después de ellos ( `+?`, `*?`, `??`, `{}?` ), se vuelven no-codiciosos y comienzan haciendo coincidir lo menos posible, haciendo coincidir más solo cuando el patrón restante no se ajuste a la coincidencia más pequeña. Y eso es exactamente lo que queremos en este caso. Al hacer que la estrella coincida con el tramo más pequeño de caracteres que nos lleve a un `*/`, consumimos un comentario de bloque y nada más.
> function removerComentarios(codigo) { return codigo.replace(/\/\/.*|\/\*[^]*?\*\//g, ""); } console.log(removerComentarios("1 /* a */+/* b */ 1")); // → 1 + 1
Una gran cantidad de errores en los programas con expresiones regulares se pueden rastrear hasta el uso no intencional de un operador codicioso donde uno no-codicioso funcionaría mejor. Al usar un operador de repetición, considera primero la variante no-codiciosa.
## Creando objetos RegExp dinámicamente
Hay casos en los que quizás no sepas el patrón exacto con el que necesitas coincidir cuando estés escribiendo tu código. Imagina que quieres buscar el nombre del usuario en un texto y encerrarlo en caracteres de subrayado para que se destaque. Como solo sabrás el nombre una vez que el programa se esté ejecutando realmente, no puedes usar la notación basada en barras. Pero puedes construir un string y usar el constructor `RegExp` en él. Aquí hay un ejemplo:
> let nombre = "harry"; let texto = "Harry es un personaje sospechoso."; let regexp = new RegExp("\\b(" + nombre + ")\\b", "gi"); console.log(texto.replace(regexp, "_$1_")); // → _Harry_ es un personaje sospechoso.
Al crear los marcadores de límite `\b`, tenemos que usar dos barras invertidas porque las estamos escribiendo en un string normal, no en una expresión regular contenida en barras.
El segundo argumento para el constructor `RegExp` contiene las opciones para la expresión regular—en este caso, `"gi"` para global e insensible a mayúsculas y minúsculas. Pero, y si el nombre es `"dea+hl[]rd"` porque nuestro usuario es un nerd adolescente? Eso daría como resultado una expresión regular sin sentido que en realidad no coincidirá con el nombre del usuario. Para solucionar esto, podemos agregar barras diagonales inversas antes de cualquier caracter que tenga un significado especial. > let nombre = "dea+hl[]rd"; let texto = "Este sujeto dea+hl[]rd es super fastidioso."; let escapados = nombre.replace(/[\\[.+*?(){|^$]/g, "\\$&"); let regexp = new RegExp("\\b" + escapados + "\\b", "gi"); console.log(texto.replace(regexp, "_$&_")); // → Este sujeto _dea+hl[]rd_ es super fastidioso. ## El método search El método `indexOf` en strings no puede invocarse con una expresión regular. Pero hay otro método, `search` (“buscar”), que espera una expresión regular. Al igual que `indexOf` , retorna el primer índice en que se encontró la expresión, o -1 cuando no se encontró. > console.log(" palabra".search(/\S/)); // → 2 console.log(" ".search(/\S/)); // → -1 Desafortunadamente, no hay forma de indicar que la coincidencia debería comenzar a partir de un desplazamiento dado (como podemos con el segundo argumento para `indexOf` ), que a menudo sería útil. ## La propiedad lastIndex De manera similar el método `exec` no proporciona una manera conveniente de comenzar buscando desde una posición dada en el string. Pero proporciona una manera inconveniente. Los objetos de expresión regular tienen propiedades. Una de esas propiedades es `source` (“fuente”), que contiene el string de donde se creó la expresión. Otra propiedad es `lastIndex` (“ultimoIndice”), que controla, en algunas circunstancias limitadas, donde comenzará la siguiente coincidencia. Esas circunstancias son que la expresión regular debe tener la opción global ( `g` ) o adhesiva ( `y` ) habilitada, y la coincidencia debe suceder a través del método `exec` . De nuevo, una solución menos confusa hubiese sido permitir que un argumento adicional fuera pasado a `exec` , pero la confusión es una característica esencial de la interfaz de las expresiones regulares de JavaScript. > let patron = /y/g; patron.lastIndex = 3; let coincidencia = patron.exec("xyzzy"); console.log(coincidencia.index); // → 4 console.log(patron.lastIndex); // → 5 Si la coincidencia fue exitosa, la llamada a `exec` actualiza automáticamente a la propiedad `lastIndex` para que apunte después de la coincidencia. Si no se encontraron coincidencias, `lastIndex` vuelve a cero, que es también el valor que tiene un objeto de expresión regular recién construido. La diferencia entre las opciones globales y las adhesivas es que, cuandoa adhesivo está habilitado, la coincidencia solo tendrá éxito si comienza directamente en `lastIndex` , mientras que con global, buscará una posición donde pueda comenzar una coincidencia. > let global = /abc/g; console.log(global.exec("xyz abc")); // → ["abc"] let adhesivo = /abc/y; console.log(adhesivo.exec("xyz abc")); // → null Cuando se usa un valor de expresión regular compartido para múltiples llamadas a `exec` , estas actualizaciones automáticas a la propiedad `lastIndex` pueden causar problemas. Tu expresión regular podría estar accidentalmente comenzando en un índice que quedó de una llamada anterior. 
> let digito = /\d/g; console.log(digito.exec("aqui esta: 1")); // → ["1"] console.log(digito.exec("y ahora: 1")); // → null Otro efecto interesante de la opción global es que cambia la forma en que funciona el método `match` en los strings. Cuando se llama con una expresión global, en lugar de retornar un array similar al retornado por `exec`, `match` encontrará todas las coincidencias del patrón en el string y retornará un array que contiene los strings coincidentes. > console.log("Banana".match(/an/g)); // → ["an", "an"] Por lo tanto, ten cuidado con las expresiones regulares globales. Los casos donde son necesarias—las llamadas a `replace` y los lugares donde deseas usar explícitamente `lastIndex`—son generalmente los únicos lugares donde querrás usarlas. ### Ciclos sobre coincidencias Una cosa común que hacer es escanear todas las ocurrencias de un patrón en un string, de una manera que nos dé acceso al objeto de coincidencia en el cuerpo del ciclo. Podemos hacer esto usando `lastIndex` y `exec`. > let entrada = "Un string con 3 numeros en el... 42 y 88."; let numero = /\b\d+\b/g; let coincidencia; while (coincidencia = numero.exec(entrada)) { console.log("Se encontro", coincidencia[0], "en", coincidencia.index); } // → Se encontro 3 en 14 // Se encontro 42 en 33 // Se encontro 88 en 38 Esto hace uso del hecho de que el valor de una expresión de asignación ( `=` ) es el valor asignado. Entonces, al usar `coincidencia = numero.exec(entrada)` como la condición en la declaración `while`, realizamos la coincidencia al inicio de cada iteración, guardamos su resultado en una vinculación, y terminamos de repetir cuando no se encuentran más coincidencias.
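En JavaScript moderno, los strings también tienen un método `matchAll`, que devuelve un iterador de objetos de coincidencia y evita manipular `lastIndex` a mano. Un bosquejo mínimo, asumiendo soporte para ES2020:

> // Bosquejo con String.prototype.matchAll; requiere una expresión con la opción g
> let texto = "Un string con 3 numeros en el... 42 y 88.";
> for (let coincidencia of texto.matchAll(/\b\d+\b/g)) {
>   console.log("Se encontro", coincidencia[0], "en", coincidencia.index);
> }
> // → Se encontro 3 en 14
> //   Se encontro 42 en 33
> //   Se encontro 88 en 38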
## Análisis de un archivo INI Para concluir el capítulo, veremos un problema que requiere de expresiones regulares. Imagina que estamos escribiendo un programa para recolectar automáticamente información sobre nuestros enemigos del Internet. (No escribiremos ese programa aquí, solo la parte que lee el archivo de configuración. Lo siento.) El archivo de configuración se ve así: > motordebusqueda=https://duckduckgo.com/?q=$1 malevolencia=9.7 ; los comentarios estan precedidos por un punto y coma... ; cada seccion contiene un enemigo individual [larry] nombrecompleto=<NAME> tipo=bravucon del preescolar sitioweb=http://www.geocities.com/CapeCanaveral/11451 [davaeorn] nombrecompleto=Davaeorn tipo=hechicero malvado directoriosalida=/home/marijn/enemies/davaeorn Las reglas exactas para este formato (que es un formato ampliamente utilizado, usualmente llamado un archivo INI) son las siguientes:

* Las líneas en blanco y las líneas que comienzan con punto y coma se ignoran.
* Las líneas envueltas en `[` y `]` comienzan una nueva sección.
* Las líneas que contienen un identificador alfanumérico seguido de un carácter `=` agregan una configuración a la sección actual.

Nuestra tarea es convertir un string como este en un objeto cuyas propiedades contengan strings para las configuraciones sin sección y sub-objetos para las secciones, con esos sub-objetos conteniendo las configuraciones de la sección. Dado que el formato debe procesarse línea por línea, dividir el archivo en líneas separadas es un buen comienzo. Usamos `string.split` para hacer esto en el Capítulo 4. Algunos sistemas operativos, sin embargo, usan no solo un carácter de nueva línea para separar líneas sino un carácter de retorno de carro seguido de una nueva línea ( `"\r\n"` ). Dado que el método `split` también permite una expresión regular como su argumento, podemos usar una expresión regular como `/\r?\n/` para dividir el string de una manera que permita tanto `"\n"` como `"\r\n"` entre líneas. > function analizarINI(string) { // Comenzar con un objeto para mantener los campos de nivel superior let resultado = {}; let seccion = resultado; string.split(/\r?\n/).forEach(linea => { let coincidencia; if (coincidencia = linea.match(/^(\w+)=(.*)$/)) { seccion[coincidencia[1]] = coincidencia[2]; } else if (coincidencia = linea.match(/^\[(.*)\]$/)) { seccion = resultado[coincidencia[1]] = {}; } else if (!/^\s*(;.*)?$/.test(linea)) { throw new Error("Linea '" + linea + "' no es valida."); } }); return resultado; } console.log(analizarINI(` nombre=Vasilis [direccion] ciudad=Tessaloniki`)); // → {nombre: "Vasilis", direccion: {ciudad: "Tessaloniki"}} El código pasa por las líneas del archivo y crea un objeto. Las propiedades en la parte superior se almacenan directamente en ese objeto, mientras que las propiedades que se encuentran en las secciones se almacenan en un objeto de sección separado. La vinculación `seccion` apunta al objeto de la sección actual. Hay dos tipos de líneas significativas—los encabezados de sección y las líneas de propiedades. Cuando una línea es una propiedad regular, se almacena en la sección actual. Cuando se trata de un encabezado de sección, se crea un nuevo objeto de sección, y `seccion` se configura para apuntar a él. Nota el uso recurrente de `^` y `$` para asegurarse de que la expresión coincida con toda la línea, no solo con parte de ella. Dejarlos afuera resulta en código que funciona en su mayoría, pero que se comporta de forma extraña con algunas entradas, lo que puede ser un error difícil de rastrear. El patrón `if (coincidencia = string.match(...))` es similar al truco de usar una asignación como condición para `while`. A menudo no estás seguro de que tu llamada a `match` tendrá éxito, así que solo puedes acceder al objeto resultante dentro de una declaración `if` que pruebe esto. Para no romper la agradable cadena de las formas `else if`, asignamos el resultado de la coincidencia a una vinculación e inmediatamente usamos esa asignación como la prueba para la declaración `if`. Si una línea no es un encabezado de sección ni una propiedad, la función verifica si es un comentario o una línea vacía usando la expresión `/^\s*(;.*)?$/`. Ves cómo funciona? La parte entre los paréntesis coincidirá con los comentarios, y el `?` asegura que también coincida con líneas que contengan solo espacios en blanco. Cuando una línea no coincide con ninguna de las formas esperadas, la función arroja una excepción. ## Caracteres internacionales Debido a la simplista implementación inicial de JavaScript y al hecho de que este enfoque simplista fue luego establecido en piedra como comportamiento estándar, las expresiones regulares de JavaScript son bastante tontas acerca de los caracteres que no aparecen en el idioma inglés. Por ejemplo, en cuanto a las expresiones regulares de JavaScript, un “carácter de palabra” es solo uno de los 26 caracteres del alfabeto latino (mayúsculas o minúsculas), los dígitos decimales y, por alguna razón, el carácter de guion bajo. Cosas como é o β, que definitivamente son caracteres de palabras, no coincidirán con `\w` (y sí coincidirán con `\W` mayúscula, la categoría no-palabra).
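Para verlo concretamente, un pequeño bosquejo (la notación `\p` y la opción `u` que aparecen aquí se explican un poco más adelante en esta misma sección):

> console.log(/\w/.test("é"));
> // → false
> console.log(/\w/.test("β"));
> // → false
> // Con la opción u, la notación \p{L} (letra) sí reconoce estos caracteres
> console.log(/\p{L}/u.test("é"));
> // → true
> console.log(/\p{L}/u.test("β"));
> // → true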
Por un extraño accidente histórico, `\s` (espacio en blanco) no tiene este problema y coincide con todos los caracteres que el estándar Unicode considera espacios en blanco, incluyendo cosas como el espacio de no separación y el separador de vocales mongol. Otro problema es que, de forma predeterminada, las expresiones regulares funcionan en unidades de código, como se discute en el Capítulo 5, no en caracteres reales. Esto significa que los caracteres que están compuestos de dos unidades de código se comportan de manera extraña. > console.log(/🍎{3}/.test("🍎🍎🍎")); // → false console.log(/<.>/.test("<🌹>")); // → false console.log(/<.>/u.test("<🌹>")); // → true El problema es que el 🍎 en la primera línea se trata como dos unidades de código, y la parte `{3}` se aplica solo a la segunda. Del mismo modo, el punto coincide con una sola unidad de código, no con las dos que componen al emoji de rosa. Debes agregar una opción `u` (para Unicode) a tu expresión regular para hacer que trate a tales caracteres apropiadamente. El comportamiento incorrecto sigue siendo el predeterminado, desafortunadamente, porque cambiarlo podría causar problemas en código ya existente que depende de él. Aunque esto apenas se acaba de estandarizar y, al momento de escribir este libro, aún no es ampliamente compatible con muchos navegadores, es posible usar `\p` en una expresión regular (que debe tener habilitada la opción Unicode) para hacer coincidir todos los caracteres a los que el estándar Unicode les asigna una propiedad determinada. > console.log(/\p{Script=Greek}/u.test("α")); // → true console.log(/\p{Script=Arabic}/u.test("α")); // → false console.log(/\p{Alphabetic}/u.test("α")); // → true console.log(/\p{Alphabetic}/u.test("!")); // → false Unicode define una cantidad de propiedades útiles, aunque encontrar la que necesitas puede no ser siempre trivial. Puedes usar la notación `\p{Propiedad=Valor}` para hacer coincidir cualquier carácter que tenga el valor dado para esa propiedad. Si el nombre de la propiedad se deja afuera, como en `\p{Nombre}`, se asume que el nombre es una propiedad binaria como `Alphabetic` o una categoría como `Number`. ## Resumen Las expresiones regulares son objetos que representan patrones en strings. Ellas usan su propio lenguaje para expresar estos patrones.

| Patrón | Significado |
| --- | --- |
| `/abc/` | Una secuencia de caracteres |
| `/[abc]/` | Cualquier caracter de un conjunto de caracteres |
| `/[^abc]/` | Cualquier carácter que no esté en un conjunto de caracteres |
| `/[0-9]/` | Cualquier caracter en un rango de caracteres |
| `/x+/` | Una o más ocurrencias del patrón |
| `/x+?/` | Una o más ocurrencias, no codiciosas |
| `/x*/` | Cero o más ocurrencias |
| `/x?/` | Cero o una ocurrencia |
| `/x{2,4}/` | De dos a cuatro ocurrencias |
| `/(abc)/` | Un grupo |
| `/a\|b\|c/` | Cualquiera de varios patrones |
| `/\d/` | Cualquier caracter de dígito |
| `/\w/` | Un caracter alfanumérico (“carácter de palabra”) |
| `/\s/` | Cualquier caracter de espacio en blanco |
| `/./` | Cualquier caracter excepto líneas nuevas |
| `/\b/` | Un límite de palabra |
| `/^/` | Inicio de la entrada |
| `/$/` | Fin de la entrada |

Una expresión regular tiene un método `test` para probar si un determinado string coincide con ella. También tiene un método `exec` que, cuando se encuentra una coincidencia, retorna un array que contiene todos los grupos que coincidieron. Tal array tiene una propiedad `index` que indica en dónde comenzó la coincidencia. Los strings tienen un método `match` para coincidirlos con una expresión regular y un método `search` para buscar por una, retornando solo la posición inicial de la coincidencia.
Su método `replace` puede reemplazar las coincidencias de un patrón con un string o función de reemplazo. Las expresiones regulares pueden tener opciones, que se escriben después de la barra que cierra la expresión. La opción `i` hace que la coincidencia no distinga entre mayúsculas y minúsculas. La opción `g` hace que la expresión sea global, lo que, entre otras cosas, hace que el método `replace` reemplace todas las instancias en lugar de solo la primera. La opción `y` la hace adhesiva, lo que significa que no buscará más adelante en el string ni se saltará parte de él al buscar una coincidencia. La opción `u` activa el modo Unicode, lo que soluciona varios problemas alrededor del manejo de caracteres que ocupan dos unidades de código. Las expresiones regulares son herramientas afiladas con un mango incómodo. Simplifican algunas tareas enormemente, pero pueden volverse inmanejables rápidamente cuando se aplican a problemas complejos. Parte de saber cómo usarlas es resistir el impulso de tratar de encajar en ellas cosas que no pueden ser expresadas limpiamente. Es casi inevitable que, en el curso de trabajar en estos ejercicios, te sientas confundido y frustrado por el comportamiento inexplicable de alguna expresión regular. A veces ayuda ingresar tu expresión en una herramienta en línea como debuggex.com para ver si su visualización corresponde a lo que pretendías y experimentar con la forma en que responde a varios strings de entrada. ### Golf Regexp El Golf de Código es un término usado para el juego de intentar expresar un programa particular con el menor número de caracteres posible. Similarmente, el Golf de Regexp es la práctica de escribir una expresión regular tan pequeña como sea posible para que coincida con un patrón dado, y solo con ese patrón. Para cada uno de los siguientes elementos, escribe una expresión regular para probar si alguna de las substrings dadas ocurre en un string. La expresión regular debe coincidir solo con strings que contengan una de las substrings descritas. No te preocupes por los límites de palabras a menos que sean explícitamente mencionados. Cuando tu expresión funcione, ve si puedes hacerla más pequeña.

* car y cat
* pop y prop
* ferret, ferry, y ferrari
* Cualquier palabra que termine en ious
* Un carácter de espacio en blanco seguido de un punto, coma, dos puntos o punto y coma
* Una palabra con más de seis letras
* Una palabra sin la letra e (o E)

Consulta la tabla en el resumen del capítulo para ayudarte. Prueba cada solución con algunos strings de prueba.
> // Llena con las expresiones regulares verificar(/.../, ["my car", "bad cats"], ["camper", "high art"]); verificar(/.../, ["pop culture", "mad props"], ["plop", "prrrop"]); verificar(/.../, ["ferret", "ferry", "ferrari"], ["ferrum", "transfer A"]); verificar(/.../, ["how delicious", "spacious room"], ["ruinous", "consciousness"]); verificar(/.../, ["bad punctuation ."], ["escape the period"]); verificar(/.../, ["hottentottententen"], ["no", "hotten totten tenten"]); verificar(/.../, ["red platypus", "wobbling nest"], ["earth bed", "learning ape", "BEET"]); function verificar(regexp, si, no) { // Ignora ejercicios sin terminar if (regexp.source == "...") return; for (let str of si) if (!regexp.test(str)) { console.log(`Fallo al coincidir '${str}'`); } for (let str of no) if (regexp.test(str)) { console.log(`Coincidencia inesperada para '${str}'`); } } ### Estilo entre comillas Imagina que has escrito una historia y has utilizado comillas simples en todas partes para marcar las piezas de diálogo. Ahora quieres reemplazar todas las comillas de diálogo con comillas dobles, manteniendo las comillas simples usadas en contracciones como aren’t. Piensa en un patrón que distinga estos dos tipos de uso de comillas y crea una llamada al método `replace` que haga el reemplazo apropiado. > let texto = "'I'm the cook,' he said, 'it's my job.'"; // Cambia esta llamada console.log(texto.replace(/A/g, "B")); // → "I'm the cook," he said, "it's my job." La solución más obvia es reemplazar solo las comillas que tengan un carácter que no sea de palabra en al menos uno de sus lados. Algo como `/\W'|'\W/`. Pero también debes tener en cuenta el inicio y el final de la línea. Además, debes asegurarte de que el reemplazo también incluya los caracteres que coincidieron con el patrón `\W` para que estos no se pierdan. Esto se puede hacer envolviéndolos en paréntesis e incluyendo sus grupos en el string de reemplazo ( `$1`, `$2` ). Los grupos que no sean emparejados serán reemplazados por nada. ### Números otra vez Escribe una expresión que coincida solo con números al estilo de JavaScript. Esta debe admitir un signo opcional menos o más delante del número, el punto decimal, y la notación de exponente—`5e-3` o `1E10`—nuevamente con un signo opcional delante del exponente. También ten en cuenta que no es necesario que haya dígitos delante o detrás del punto, pero el número no puede ser solo un punto. Es decir, `.5` y `5.` son números válidos de JavaScript, pero un punto solo no lo es. > // Completa esta expresión regular. let numero = /^...$/; // Pruebas: for (let str of ["1", "-1", "+15", "1.55", ".5", "5.", "1.3e2", "1E-4", "1e+12"]) { if (!numero.test(str)) { console.log(`Fallo al coincidir '${str}'`); } } for (let str of ["1a", "+-1", "1.2.3", "1+1", "1e4.5", ".5.", "1f5", "."]) { if (numero.test(str)) { console.log(`Incorrectamente acepto '${str}'`); } } Primero, no te olvides de la barra invertida delante del punto. Hacer coincidir el signo opcional delante del número, así como delante del exponente, se puede hacer con `[+\-]?` o `(\+|-|)` (más, menos o nada). La parte más complicada del ejercicio es el problema de hacer coincidir tanto `"5."` como `".5"` sin coincidir también con `"."`. Para esto, una buena solución es usar el operador `|` para separar los dos casos—ya sea uno o más dígitos seguidos opcionalmente por un punto y cero o más dígitos, o un punto seguido de uno o más dígitos. Finalmente, para hacer que la e pueda ser mayúscula o minúscula, agrega una opción `i` a la expresión regular o usa `[eE]`.
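A modo de ilustración, este es un bosquejo de una posible solución (no la única) que sigue exactamente la pista anterior:

> // Una posible solución: signo opcional, los dos casos para el punto decimal
> // separados por |, y un exponente opcional con su propio signo opcional
> let numero = /^[+\-]?(\d+(\.\d*)?|\.\d+)([eE][+\-]?\d+)?$/;
> console.log(numero.test("1.3e2")); // → true
> console.log(numero.test(".5"));    // → true
> console.log(numero.test("5."));    // → true
> console.log(numero.test("."));     // → false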
Escriba código que sea fácil de borrar, no fácil de extender. El programa ideal tiene una estructura cristalina. La forma en que funciona es fácil de explicar, y cada parte juega un papel bien definido. Un típico programa real crece orgánicamente. Nuevas piezas de funcionalidad se agregan a medida que surgen nuevas necesidades. Estructurar—y preservar la estructura—es trabajo adicional, trabajo que solo valdrá la pena en el futuro, la siguiente vez que alguien trabaje en el programa. Así que es tentador descuidarlo y permitir que las partes del programa se vuelvan profundamente enredadas. Esto causa dos problemas prácticos. En primer lugar, entender tal sistema es difícil. Si todo puede tocar todo lo demás, es difícil ver cualquier pieza dada de forma aislada. Estás obligado a construir un entendimiento holístico de todo el asunto. En segundo lugar, si quieres usar cualquiera de las funcionalidades de dicho programa en otra situación, reescribirla podría resultar más fácil que tratar de desenredarla de su contexto. El término “gran bola de barro” se usa a menudo para tales programas grandes y sin estructura. Todo se mantiene pegado, y cuando intentas sacar una pieza, todo se desarma y tus manos se ensucian. ## Módulos Los módulos son un intento de evitar estos problemas. Un módulo es una pieza del programa que especifica de qué otras piezas depende (sus dependencias) y qué funcionalidad proporciona para que otros módulos usen (su interfaz). Las interfaces de los módulos tienen mucho en común con las interfaces de objetos, como las vimos en el Capítulo 6. Estas hacen que parte del módulo esté disponible para el mundo exterior y mantienen el resto privado. Al restringir las formas en que los módulos interactúan entre sí, el sistema se parece más a un juego de LEGOS, donde las piezas interactúan a través de conectores bien definidos, y menos al barro, donde todo se mezcla con todo. Las relaciones entre los módulos se llaman dependencias. Cuando un módulo necesita una pieza de otro módulo, se dice que depende de ese módulo. Cuando este hecho está claramente especificado en el módulo mismo, puede usarse para descubrir qué otros módulos deben estar presentes para poder usar un módulo dado y para cargar las dependencias automáticamente. Para separar módulos de esa manera, cada uno necesita su propio alcance privado. Simplemente poner todo tu código JavaScript en diferentes archivos no satisface estos requisitos. Los archivos aún comparten el mismo espacio de nombres global. Pueden, intencional o accidentalmente, interferir con las vinculaciones de los demás. Y la estructura de dependencias sigue sin estar clara. Podemos hacerlo mejor, como veremos más adelante en el capítulo. Diseñar una estructura de módulos adecuada para un programa puede ser difícil. En la fase en la que todavía estás explorando el problema, intentando cosas diferentes para ver qué funciona, es posible que no quieras preocuparte demasiado por eso, ya que puede ser una gran distracción. Una vez que tengas algo que se sienta sólido, es un buen momento para dar un paso atrás y organizarlo. ## Paquetes Una de las ventajas de construir un programa a partir de piezas separadas, y de ser capaces de ejecutar esas piezas por sí mismas, es que podrías ser capaz de aplicar la misma pieza en diferentes programas. Pero cómo se configura esto? Digamos que quiero usar la función `analizarINI` del Capítulo 9 en otro programa.
Si está claro de qué depende la función (en este caso, nada), puedo copiar todo el código necesario en mi nuevo proyecto y usarlo. Pero luego, si encuentro un error en ese código, probablemente lo solucione en el programa en el que estoy trabajando en ese momento y me olvido de arreglarlo en el otro programa. Una vez que comience a duplicar código, rápidamente te encontraras perdiendo tiempo y energía moviendo las copias alrededor y manteniéndolas actualizadas. Ahí es donde los paquetes entran. Un paquete es un pedazo de código que puede ser distribuido (copiado e instalado). Puede contener uno o más módulos, y tiene información acerca de qué otros paquetes depende. Un paquete también suele venir con documentación que explica qué es lo que hace, para que las personas que no lo escribieron todavía puedan hacer uso de el. Cuando se encuentra un problema en un paquete, o se agrega una nueva característica, el el paquete es actualizado. Ahora los programas que dependen de él (que también pueden ser otros paquetes) pueden actualizar a la nueva versión. Trabajar de esta manera requiere infraestructura. Necesitamos un lugar para almacenar y encontrar paquetes, y una forma conveniente de instalar y actualizarlos. En el mundo de JavaScript, esta infraestructura es provista por NPM (npmjs.org). NPM es dos cosas: un servicio en línea donde uno puede descargar (y subir) paquetes, y un programa (incluido con Node.js) que te ayuda a instalar y administrarlos. Al momento de escribir esto, hay más de medio millón de paquetes diferentes disponibles en NPM. Una gran parte de ellos son basura, debería mencionar, pero casi todos los paquetes útiles, disponibles públicamente, se puede encontrar allí. Por ejemplo, un analizador de archivos INI, similar al uno que construimos en el Capítulo 9, está disponible bajo el nombre de paquete `ini` . En el Capítulo 20 veremos cómo instalar dichos paquetes de forma local utilizando el programa de línea de comandos `npm` . Tener paquetes de calidad disponibles para descargar es extremadamente valioso. Significa que a menudo podemos evitar tener que reinventar un programa que cien personas han escrito antes, y obtener una implementación sólida y bien probado con solo presionar algunas teclas. El software es barato de copiar, por lo que una vez lo haya escrito alguien, distribuirlo a otras personas es un proceso eficiente. Pero escribirlo en el primer lugar, es trabajo y responder a las personas que han encontrado problemas en el código, o que quieren proponer nuevas características, es aún más trabajo. Por defecto, tu posees el copyright del código que escribes, y otras personas solo pueden usarlo con tu permiso. Pero ya que algunas personas son simplemente agradables, y porque la publicación de un buen software puede ayudarte a hacerte un poco famoso entre los programadores, se publican muchos paquetes bajo una licencia que explícitamente permite a otras personas usarlos. La mayoría del código en NPM esta licenciado de esta manera. Algunas licencias requieren que tu publiques también el código bajo la misma licencia del paquete que estas usando. Otras son menos exigentes, solo requieren que guardes la licencia con el código cuando lo distribuyas. La comunidad de JavaScript principalmente usa ese último tipo de licencia. Al usar paquetes de otras personas, asegúrete de conocer su licencia. ## Módulos improvisados Hasta 2015, el lenguaje JavaScript no tenía un sistema de módulos incorporado. 
Sin embargo, la gente había estado construyendo sistemas grandes en JavaScript durante más de una década y necesitaba módulos. Así que diseñaron sus propios sistemas de módulos encima del lenguaje. Puedes usar funciones de JavaScript para crear alcances locales, y objetos para representar las interfaces de los módulos. Este es un módulo para convertir entre los nombres de los días y sus números (como son retornados por el método `getDay` de `Date` ). Su interfaz consiste en `diaDeLaSemana.nombre` y `diaDeLaSemana.numero`, y oculta su vinculación local `nombres` dentro del alcance de una expresión de función que se invoca inmediatamente. > const diaDeLaSemana = function() { const nombres = ["Domingo", "Lunes", "Martes", "Miercoles", "Jueves", "Viernes", "Sabado"]; return { nombre(numero) { return nombres[numero]; }, numero(nombre) { return nombres.indexOf(nombre); } }; }(); console.log(diaDeLaSemana.nombre(diaDeLaSemana.numero("Domingo"))); // → Domingo Este estilo de módulos proporciona aislamiento, hasta cierto punto, pero no declara dependencias. En cambio, simplemente pone su interfaz en el alcance global y espera que sus dependencias, si hay alguna, hagan lo mismo. Durante mucho tiempo este fue el enfoque principal utilizado en la programación web, pero ahora está mayormente obsoleto. Si queremos hacer que las relaciones de dependencia sean parte del código, tendremos que tomar el control de la carga de las dependencias. Hacer eso requiere que seamos capaces de ejecutar strings como código. JavaScript puede hacer esto. ## Evaluando datos como código Hay varias maneras de tomar datos (un string de código) y ejecutarlos como parte del programa actual. La forma más obvia es usar el operador especial `eval`, que ejecuta un string en el alcance actual. Esto usualmente es una mala idea porque rompe algunas de las propiedades que normalmente tienen los alcances, tal como poder predecir fácilmente a qué vinculación se refiere un nombre dado. > const x = 1; function evaluarYRetornarX(codigo) { eval(codigo); return x; } console.log(evaluarYRetornarX("var x = 2")); // → 2 Una forma menos aterradora de interpretar datos como código es usar el constructor `Function`. Este toma dos argumentos: un string que contiene una lista de nombres de argumentos separados por comas y un string que contiene el cuerpo de la función. > let masUno = Function("n", "return n + 1;"); console.log(masUno(4)); // → 5 Esto es precisamente lo que necesitamos para un sistema de módulos. Podemos envolver el código del módulo en una función y usar el alcance de esa función como el alcance del módulo. ## CommonJS El enfoque más utilizado para incluir módulos en JavaScript es el llamado módulos CommonJS. Node.js lo usa, y es el sistema utilizado por la mayoría de los paquetes en NPM. El concepto principal en los módulos CommonJS es una función llamada `require` (“requerir”). Cuando la llamas con el nombre de módulo de una dependencia, esta se asegura de que el módulo sea cargado y retorna su interfaz. Debido a que el cargador envuelve el código del módulo en una función, los módulos obtienen automáticamente su propio alcance local. Todo lo que tienen que hacer es llamar a `require` para acceder a sus dependencias y poner su interfaz en el objeto vinculado a `exports` (“exportaciones”). Este módulo de ejemplo proporciona una función de formateo de fecha.
Utiliza dos paquetes de NPM—`ordinal` para convertir números a strings como `"1st"` y `"2nd"`, y `date-names` para obtener los nombres en inglés de los días de la semana y de los meses. Este exporta una sola función, `formatDate`, que toma un objeto `Date` y un string de plantilla. El string de plantilla puede contener códigos que dirigen el formato, como `YYYY` para el año completo y `Do` para el día ordinal del mes. Podrías darle un string como `"MMMM Do YYYY"` para obtener resultados como “November 22nd 2017”. > const ordinal = require("ordinal"); const {days, months} = require("date-names"); exports.formatDate = function(date, format) { return format.replace(/YYYY|M(MMM)?|Do?|dddd/g, tag => { if (tag == "YYYY") return date.getFullYear(); if (tag == "M") return date.getMonth(); if (tag == "MMMM") return months[date.getMonth()]; if (tag == "D") return date.getDate(); if (tag == "Do") return ordinal(date.getDate()); if (tag == "dddd") return days[date.getDay()]; }); }; La interfaz de `ordinal` es una función única, mientras que `date-names` exporta un objeto que contiene varias cosas—los dos valores que usamos son arrays de nombres. La desestructuración es muy conveniente cuando creamos vinculaciones para interfaces importadas. El módulo agrega su función de interfaz a `exports`, de modo que los módulos que dependen de él tengan acceso a ella. Podríamos usar el módulo de esta manera: > const {formatDate} = require("./format-date"); console.log(formatDate(new Date(2017, 9, 13), "dddd the Do")); // → Friday the 13th Podemos definir `require`, en su forma más mínima, así: > require.cache = Object.create(null); function require(nombre) { if (!(nombre in require.cache)) { let codigo = leerArchivo(nombre); let modulo = {exports: {}}; require.cache[nombre] = modulo; let envolvedor = Function("require, exports, module", codigo); envolvedor(require, modulo.exports, modulo); } return require.cache[nombre].exports; } En este código, `leerArchivo` es una función inventada que lee un archivo y retorna su contenido como un string. El estándar de JavaScript no ofrece tal funcionalidad—pero diferentes entornos de JavaScript, como el navegador y Node.js, proporcionan sus propias formas de acceder a archivos. El ejemplo solo pretende que `leerArchivo` existe. Para evitar cargar el mismo módulo varias veces, `require` mantiene un almacén (caché) de módulos que ya han sido cargados. Cuando se llama, primero verifica si el módulo solicitado ya ha sido cargado y, si no, lo carga. Esto implica leer el código del módulo, envolverlo en una función y llamarla. La interfaz del paquete `ordinal` que vimos antes no es un objeto, sino una función. Una peculiaridad de los módulos CommonJS es que, aunque el sistema de módulos creará un objeto de interfaz vacío para ti (vinculado a `exports` ), puedes reemplazarlo con cualquier valor al sobrescribir `module.exports`. Esto lo hacen muchos módulos para exportar un valor único en lugar de un objeto de interfaz. Al definir `require`, `exports` y `module` como parámetros para la función de envoltura generada (y al pasar los valores apropiados al llamarla), el cargador se asegura de que estas vinculaciones estén disponibles en el alcance del módulo.
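Como ilustración de la peculiaridad mencionada arriba, este es un bosquejo (con un módulo hipotético) de cómo un módulo puede exportar una función única sobrescribiendo `module.exports`:

> // Contenido hipotético de un archivo "ordinal-simple.js":
> // en lugar de agregar propiedades a exports, se reemplaza la interfaz completa.
> module.exports = function(numero) {
>   if (numero % 100 >= 11 && numero % 100 <= 13) return numero + "th";
>   let sufijos = {1: "st", 2: "nd", 3: "rd"};
>   return numero + (sufijos[numero % 10] || "th");
> };
>
> // Quien lo requiere recibe directamente la función:
> // const ordinal = require("./ordinal-simple");
> // console.log(ordinal(22)); // → "22nd"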
La forma en que el string dado a `require` se traduce a un nombre de archivo real o dirección web difiere entre sistemas. Cuando comienza con `"./"` o `"../"`, generalmente se interpreta como relativo al nombre del archivo actual. Entonces, `"./format-date"` sería el archivo llamado `format-date.js` en el mismo directorio. Cuando el nombre no es relativo, Node.js buscará un paquete instalado con ese nombre. En el código de ejemplo de este capítulo, interpretaremos esos nombres como referencias a paquetes de NPM. Entraremos en más detalles sobre cómo instalar y usar los módulos de NPM en el Capítulo 20. Ahora, en lugar de escribir nuestro propio analizador de archivos INI, podemos usar uno de NPM: > const {parse} = require("ini"); console.log(parse("x = 10\ny = 20")); // → {x: "10", y: "20"} ## Módulos ECMAScript Los módulos CommonJS funcionan bastante bien y, en combinación con NPM, han permitido que la comunidad de JavaScript comience a compartir código a gran escala. Pero siguen siendo un poco un truco con cinta adhesiva. La notación es ligeramente incómoda—las cosas que agregas a `exports` no están disponibles en el alcance local, por ejemplo. Y ya que `require` es una llamada de función normal que toma cualquier tipo de argumento, no solo un string literal, puede ser difícil determinar las dependencias de un módulo sin correr su código primero. Esta es la razón por la cual el estándar de JavaScript introdujo, a partir de 2015, su propio sistema de módulos, diferente. Por lo general es llamado módulos ES, donde ES significa ECMAScript. Los principales conceptos de dependencias e interfaces siguen siendo los mismos, pero los detalles difieren. Por un lado, la notación está ahora integrada en el lenguaje. En lugar de llamar a una función para acceder a una dependencia, utilizas una palabra clave especial, `import` (“importar”). > import ordinal from "ordinal"; import {days, months} from "date-names"; export function formatDate(date, format) { /* ... */ } Similarmente, la palabra clave `export` se usa para exportar cosas. Puede aparecer delante de una función, clase o definición de vinculación ( `let`, `const` o `var` ). La interfaz de un módulo ES no es un valor único, sino un conjunto de vinculaciones con nombre. El módulo anterior vincula `formatDate` a una función. Cuando importas desde otro módulo, importas la vinculación, no el valor, lo que significa que el módulo que exporta puede cambiar el valor de la vinculación en cualquier momento, y los módulos que la importan verán su nuevo valor. Cuando hay una vinculación llamada `default`, esta se trata como el valor principal exportado del módulo. Si importas un módulo como `ordinal` en el ejemplo, sin llaves alrededor del nombre de la vinculación, obtienes su vinculación `default`. Dichos módulos aún pueden exportar otras vinculaciones bajo diferentes nombres además de su exportación `default`. Para crear una exportación por default, escribe `export default` antes de una expresión, una declaración de función o una declaración de clase. > export default ["Invierno", "Primavera", "Verano", "Otoño"]; Es posible renombrar la vinculación importada usando la palabra `as` (“como”). > import {days as nombresDias} from "date-names"; console.log(nombresDias.length); // → 7 Al momento de escribir esto, la comunidad de JavaScript está en proceso de adoptar este estilo de módulos. Pero ha sido un proceso lento. Tomó algunos años, después de que se especificara el formato, para que los navegadores y Node.js comenzaran a soportarlo. Y a pesar de que ahora lo soportan en su mayoría, este soporte todavía tiene problemas, y la discusión sobre cómo dichos módulos deberían distribuirse a través de NPM todavía está en curso.
Muchos proyectos se escriben usando módulos ES y luego se convierten automáticamente a algún otro formato cuando son publicados. Estamos en período de transición en el que se utilizan dos sistemas de módulos diferentes uno al lado del otro, y es útil poder leer y escribir código en cualquiera de ellos. ## Construyendo y empaquetando De hecho, muchos proyectos de JavaScript ni siquiera están, técnicamente, escritos en JavaScript. Hay extensiones, como el dialecto de comprobación de tipos mencionado en el Capítulo 7, que son ampliamente usados. Las personas también suelen comenzar a usar extensiones planificadas para el lenguaje mucho antes de que estas hayan sido agregadas a las plataformas que realmente corren JavaScript. Para que esto sea posible, ellos compilan su código, traduciéndolo del dialecto de JavaScript que eligieron a JavaScript simple y antiguo—o incluso a una versión anterior de JavaScript, para que navegadores antiguos puedan ejecutarlo. Incluir un programa modular que consiste de 200 archivos diferentes en una página web produce sus propios problemas. Si buscar un solo archivo sobre la red tarda 50 milisegundos, cargar todo el programa tardaria 10 segundos, o tal vez la mitad si puedes cargar varios archivos simultáneamente. Eso es mucho tiempo perdido. Ya que buscar un solo archivo grande tiende a ser más rápido que buscar muchos archivos pequeños, los programadores web han comenzado a usar herramientas que convierten sus programas (los cuales cuidadosamente estan dividos en módulos) de nuevo en un único archivo grande antes de publicarlo en la Web. Tales herramientas son llamado empaquetadores. Y podemos ir más allá. Además de la cantidad de archivos, el tamaño de los archivos también determina qué tan rápido se pueden transferir a través de la red. Por lo tanto, la comunidad de JavaScript ha inventado minificadores. Estas son herramientas que toman un programa de JavaScript y lo hacen más pequeño al eliminar automáticamente los comentarios y espacios en blanco, cambia el nombre de las vinculaciones, y reemplaza piezas de código con código equivalente que ocupa menos espacio. Por lo tanto, no es raro que el código que encuentres en un paquete de NPM o que se ejecute en una página web haya pasado por multiples etapas de transformación: conversión de JavaScript moderno a JavaScript histórico, del formato de módulos ES a CommonJS, empaquetado y minificado. No vamos a entrar en los detalles de estas herramientas en este libro, ya que tienden a ser aburridos y cambian rápidamente. Solo ten en cuenta que el código JavaScript que ejecutas a menudo no es el código tal y como fue escrito. ## Diseño de módulos La estructuración de programas es uno de los aspectos más sutiles de la programación. Cualquier pieza de funcionalidad no trivial se puede modelar de varias maneras. Un buen diseño de programa es subjetivo—hay ventajas/desventajas involucradas, y cuestiones de gusto. La mejor manera de aprender el valor de una buena estructura de diseño es leer o trabajar en muchos programas y notar lo que funciona y lo qué no. No asumas que un desastroso doloroso es “solo la forma en que las cosas son ". Puedes mejorar la estructura de casi todo al ponerle mas pensamiento. Un aspecto del diseño de módulos es la facilidad de uso. Si estás diseñando algo que está destinado a ser utilizado por varias personas—o incluso por ti mismo, en tres meses cuando ya no recuerdes los detalles de lo que hiciste—es útil si tu interfaz es simple y predicible. 
Eso puede significar seguir convenciones existentes. Un buen ejemplo es el paquete `ini` . Este módulo imita el objeto estándar `JSON` al proporcionar las funciones `parse` y `stringify` (para escribir un archivo INI), y, como `JSON` , convierte entre strings y objetos simples. Entonces la interfaz es pequeña y familiar, y después de haber trabajado con ella una vez, es probable que recuerdes cómo usarla. Incluso si no hay una función estándar o un paquete ampliamente utilizado para imitar, puedes mantener tus módulos predecibles mediante el uso de estructuras de datos simples y haciendo una cosa única y enfocada. Muchos de los módulos de análisis de archivos INI en NPM proporcionan una función que lee directamente tal archivo del disco duro y lo analiza, por ejemplo. Esto hace que sea imposible de usar tales módulos en el navegador, donde no tenemos acceso directo al sistema de archivos, y agrega una complejidad que habría sido mejor abordada al componer el módulo con alguna función de lectura de archivos. Lo que apunta a otro aspecto útil del diseño de módulos—la facilidad con la qué algo se puede componer con otro código. Módulos enfocados que que computan valores son aplicables en una gama más amplia de programas que módulos mas grandes que realizan acciones complicadas con efectos secundarios. Un lector de archivos INI que insista en leer el archivo desde el disco es inútil en un escenario donde el contenido del archivo provenga de alguna otra fuente. Relacionadamente, los objetos con estado son a veces útiles e incluso necesarios, pero si se puede hacer algo con una función, usa una función. Varios de los lectores de archivos INI en NPM proporcionan un estilo de interfaz que requiere que primero debes crear un objeto, luego cargar el archivo en tu objeto, y finalmente usar métodos especializados para obtener los resultados. Este tipo de cosas es común en la tradición orientada a objetos, y es terrible. En lugar de hacer una sola llamada de función y seguir adelante, tienes que realizar el ritual de mover tu objeto a través de diversos estados. Y ya que los datos ahora están envueltos en un objeto de tipo especializado, todo el código que interactúa con él tiene que saber sobre ese tipo, creando interdependencias innecesarias. A menudo no se puede evitar la definición de nuevas estructuras de datos—solo unas pocas básicas son provistos por el estándar de lenguaje, y muchos tipos de datos tienen que ser más complejos que un array o un mapa. Pero cuando el array es suficiente, usa un array. Un ejemplo de una estructura de datos un poco más compleja es el grafo de el Capítulo 7. No hay una sola manera obvia de representar un grafo en JavaScript. En ese capítulo, usamos un objeto cuya propiedades contenian arrays de strings—los otros nodos accesibles desde ese nodo. Hay varios paquetes de busqueda de rutas diferentes en NPM, pero ninguno de ellos usa este formato de grafo. Por lo general, estos permiten que los bordes del grafo tengan un peso, el costo o la distancia asociada a ellos, lo que no es posible en nuestra representación. Por ejemplo, está el paquete `dijkstrajs` . Un enfoque bien conocido par la busqueda de rutas, bastante similar a nuestra función `encontrarRuta` , se llama el algoritmo de Dijkstra, después de Edsger Dijkstra, quien fue el primero que lo escribió. El sufijo `js` a menudo se agrega a los nombres de los paquetes para indicar el hecho de que están escritos en JavaScript. 
Este paquete `dijkstrajs` utiliza un formato de grafo similar al nuestro, pero en lugar de arrays, utiliza objetos cuyos valores de propiedad son números—los pesos de los bordes. Si quisiéramos usar ese paquete, tendríamos que asegurarnos de que nuestro grafo esté almacenado en el formato que este espera. > const {find_path} = require("dijkstrajs"); let grafo = {}; for (let node of Object.keys(grafoCamino)) { let edges = grafo[node] = {}; for (let dest of grafoCamino[node]) { edges[dest] = 1; } } console.log(find_path(grafo, "Oficina de Correos", "Cabaña")); // → ["Oficina de Correos", "Casa de Alicia", "Cabaña"] Esto puede ser una barrera para la composición—cuando varios paquetes usan diferentes estructuras de datos para describir cosas similares, combinarlos es difícil. Por lo tanto, si deseas diseñar para la componibilidad, averigua qué estructuras de datos están usando otras personas y, cuando sea posible, sigue su ejemplo. Los módulos proporcionan estructura a programas más grandes al separar el código en piezas con interfaces y dependencias claras. La interfaz es la parte del módulo que es visible desde otros módulos, y las dependencias son los otros módulos que este utiliza. Debido a que históricamente JavaScript no proporcionó un sistema de módulos, el sistema CommonJS fue construido encima de él. Entonces, en algún momento, consiguió un sistema incorporado, que ahora coexiste incómodamente con el sistema CommonJS. Un paquete es una porción de código que se puede distribuir por sí misma. NPM es un repositorio de paquetes de JavaScript. Puedes descargar todo tipo de paquetes útiles (e inútiles) de él. ### Un robot modular Estas son las vinculaciones que el proyecto del Capítulo 7 crea: > caminos construirGrafo grafoCamino EstadoPueblo correrRobot eleccionAleatoria robotAleatorio rutaCorreo robotRuta encontrarRuta robotOrientadoAMetas Si tuvieras que escribir ese proyecto como un programa modular, qué módulos crearías? Qué módulo dependería de qué otro módulo, y cómo se verían sus interfaces? Qué piezas es probable que estén disponibles pre-escritas en NPM? Preferirías usar un paquete de NPM o escribirlas tú mismo? Aquí está lo que yo habría hecho (pero, una vez más, no hay una única forma correcta de diseñar un módulo dado): El código usado para construir el grafo de caminos vive en el módulo `grafo`. Ya que prefiero usar `dijkstrajs` de NPM en lugar de nuestro propio código de búsqueda de rutas, haremos que este construya el tipo de dato de grafo que `dijkstrajs` espera. Este módulo exporta una sola función, `construirGrafo`. Haría que `construirGrafo` acepte un array de arrays de dos elementos, en lugar de strings que contengan guiones, para hacer que el módulo sea menos dependiente del formato de entrada. El módulo `caminos` contiene los datos en bruto de los caminos (el array `caminos` ) y la vinculación `grafoCamino`. Este módulo depende de `./grafo` y exporta el grafo de caminos. La clase `EstadoPueblo` vive en el módulo `estado`. Depende del módulo `./caminos`, porque necesita poder verificar que un camino dado existe. También necesita `eleccionAleatoria`. Dado que esa es una función de tres líneas, podríamos simplemente ponerla en el módulo `estado` como una función auxiliar interna. Pero `robotAleatorio` también la necesita. Entonces tendríamos que duplicarla o ponerla en su propio módulo. Dado que esta función existe en NPM en el paquete `random-item`, una buena solución es hacer que ambos módulos dependan de él.
Podemos agregar la función `correrRobot` a este módulo también, ya que es pequeña y está estrechamente relacionada con la gestión del estado. El módulo exporta tanto la clase `EstadoPueblo` como la función `correrRobot`. Finalmente, los robots, junto con los valores de los que dependen, como `rutaCorreo`, podrían ir en un módulo `robots-ejemplo`, que depende de `./caminos` y exporta las funciones de robot. Para que sea posible que el `robotOrientadoAMetas` haga búsqueda de rutas, este módulo también depende de `dijkstrajs`. Al descargar algo de trabajo a los módulos de NPM, el código se volvió un poco más pequeño. Cada módulo individual hace algo bastante simple y puede leerse por sí mismo. La división del código en módulos también sugiere a menudo otras mejoras para el diseño del programa. En este caso, parece un poco extraño que `EstadoPueblo` y los robots dependan de un grafo de caminos. Podría ser una mejor idea hacer del grafo un argumento para el constructor del estado y hacer que los robots lo lean del objeto de estado—esto reduce las dependencias (lo que siempre es bueno) y hace posible ejecutar simulaciones en diferentes mapas (lo cual es aún mejor). Es una buena idea usar módulos de NPM para cosas que podríamos haber escrito nosotros mismos? En principio, sí—para cosas no triviales, como la función de búsqueda de rutas, es probable que cometas errores y pierdas el tiempo escribiéndola tú mismo. Para funciones pequeñas como `eleccionAleatoria`, escribirlas tú mismo es lo suficientemente fácil. Pero agregarlas donde las necesites tiende a desordenar tus módulos. Sin embargo, tampoco debes subestimar el trabajo involucrado en encontrar un paquete apropiado de NPM. E incluso si encuentras uno, este podría no funcionar bien o faltarle alguna característica que necesitas. Además de eso, depender de los paquetes de NPM significa que debes asegurarte de que estén instalados, tienes que distribuirlos con tu programa y podrías tener que actualizarlos periódicamente. Entonces, de nuevo, esta es una solución con compromisos, y puedes decidir de una u otra manera dependiendo de cuánto te ayuden los paquetes. ### Módulo de Caminos Escribe un módulo CommonJS, basado en el ejemplo del Capítulo 7, que contenga el array de caminos y exporte la estructura de datos de grafo que los representa como `grafoCamino`. Debería depender de un módulo `./grafo`, que exporta una función `construirGrafo` que se usa para construir el grafo. Esta función espera un array de arrays de dos elementos (los puntos de inicio y final de los caminos). > // Añadir dependencias y exportaciones const caminos = [ "Casa de Alicia-Casa de Bob", "Casa de Alicia-Cabaña", "Casa de Alicia-Oficina de Correos", "Casa de Bob-Ayuntamiento", "Casa de Daria-Casa de Ernie", "Casa de Daria-Ayuntamiento", "Casa de Ernie-Casa de Grete", "Casa de Grete-Granja", "Casa de Grete-Tienda", "Mercado-Granja", "Mercado-Oficina de Correos", "Mercado-Tienda", "Mercado-Ayuntamiento", "Tienda-Ayuntamiento" ]; Como este es un módulo CommonJS, debes usar `require` para importar el módulo grafo. Este fue descrito como un módulo que exporta una función `construirGrafo`, que puedes sacar de su objeto de interfaz con una declaración `const` de desestructuración. Para exportar `grafoCamino`, agrega una propiedad al objeto `exports`. Ya que `construirGrafo` toma una estructura de datos que no coincide exactamente con `caminos`, la división de los strings de los caminos debe ocurrir en tu módulo.
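Un bosquejo de cómo podría quedar ese módulo, siguiendo la pista anterior (asumiendo que `./grafo` exporta `construirGrafo` tal como se describió):

> // caminos.js — bosquejo de una posible solución
> const {construirGrafo} = require("./grafo");
>
> const caminos = [
>   "Casa de Alicia-Casa de Bob", "Casa de Alicia-Cabaña",
>   /* ...el resto de los caminos listados arriba... */
> ];
>
> exports.grafoCamino = construirGrafo(
>   // Convertir "A-B" en el par ["A", "B"] que espera construirGrafo
>   caminos.map(camino => camino.split("-"))
> );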
### Dependencias circulares Una dependencia circular es una situación en donde el módulo A depende de B, y B también, directa o indirectamente, depende de A. Muchos sistemas de módulos simplemente prohíben esto porque, cualquiera que sea el orden que elijas para cargar tales módulos, no puedes asegurarte de que las dependencias de cada módulo hayan sido cargadas antes de que se ejecute. Los módulos CommonJS permiten una forma limitada de dependencias cíclicas. Siempre que los módulos no reemplacen su objeto `exports` predeterminado y no accedan a la interfaz de los demás hasta que terminen de cargarse, las dependencias cíclicas están bien. La función `require` dada anteriormente en este capítulo es compatible con este tipo de ciclos de dependencias. Puedes ver cómo maneja los ciclos? Qué iría mal cuando un módulo en un ciclo reemplace su objeto `exports` por defecto? El truco es que `require` agrega los módulos a su caché antes de comenzar a cargar el módulo. De esa forma, si se realiza una llamada a `require` mientras el módulo todavía se está cargando, este ya es conocido y se retornará la interfaz actual, en lugar de comenzar a cargar el módulo una vez más (lo que eventualmente desbordaría la pila). Si un módulo sobrescribe su valor `module.exports`, cualquier otro módulo que haya recibido su valor de interfaz antes de que termine de cargarse habrá conseguido el objeto de interfaz predeterminado (que probablemente esté vacío), en lugar del valor de interfaz previsto. Quién puede esperar tranquilamente mientras el barro se asienta? Quién puede permanecer en calma hasta el momento de actuar? La parte central de una computadora, la parte que lleva a cabo los pasos individuales que componen nuestros programas, es llamada procesador. Los programas que hemos visto hasta ahora son cosas que mantienen al procesador ocupado hasta que hayan terminado su trabajo. La velocidad a la que algo como un ciclo que manipule números pueda ser ejecutado depende casi completamente de la velocidad del procesador. Pero muchos programas interactúan con cosas fuera del procesador. Por ejemplo, podrían comunicarse a través de una red de computadoras o solicitar datos del disco duro—lo que es mucho más lento que obtenerlos desde la memoria. Cuando algo así está sucediendo, sería una pena dejar que el procesador se mantenga inactivo—podría haber algún otro trabajo que este pueda hacer mientras tanto. En parte, esto es manejado por tu sistema operativo, que cambiará el procesador entre múltiples programas en ejecución. Pero eso no ayuda cuando queremos que un único programa pueda hacer progreso mientras espera una solicitud de red. ## Asincronicidad En un modelo de programación sincrónico, las cosas suceden una a la vez. Cuando llamas a una función que realiza una acción de larga duración, esta solo retorna cuando la acción ha terminado y puede retornar el resultado. Esto detiene tu programa durante el tiempo que tome la acción. Un modelo asincrónico permite que ocurran varias cosas al mismo tiempo. Cuando comienzas una acción, tu programa continúa ejecutándose. Cuando la acción termina, el programa es informado y obtiene acceso al resultado (por ejemplo, los datos leídos del disco). Podemos comparar la programación síncrona y la asincrónica usando un pequeño ejemplo: un programa que obtiene dos recursos de la red y luego combina los resultados.
En un entorno síncrono, donde la función de solicitud solo retorna una vez que ha hecho su trabajo, la forma más fácil de realizar esta tarea es hacer las solicitudes una después de la otra. Esto tiene el inconveniente de que la segunda solicitud se iniciará solo cuando la primera haya finalizado. El tiempo total de ejecución será como mínimo la suma de los dos tiempos de respuesta. La solución a este problema, en un sistema síncrono, es iniciar hilos adicionales de control. Un hilo es otro programa activo cuya ejecución puede ser intercalada con otros programas por el sistema operativo—ya que la mayoría de las computadoras modernas contienen múltiples procesadores, múltiples hilos pueden incluso ejecutarse al mismo tiempo, en diferentes procesadores. Un segundo hilo podría iniciar la segunda solicitud, y luego ambos hilos esperan a que los resultados vuelvan, después de lo cual se resincronizan para combinar sus resultados. En el siguiente diagrama, las líneas gruesas representan el tiempo que el programa pasa corriendo normalmente, y las líneas finas representan el tiempo pasado esperando la red. En el modelo síncrono, el tiempo empleado por la red es parte de la línea de tiempo de un hilo de control dado. En el modelo asincrónico, comenzar una acción de red conceptualmente causa una división en la línea de tiempo. El programa que inició la acción continúa ejecutándose, y la acción ocurre junto a él, notificando al programa cuando termina. Otra forma de describir la diferencia es que esperar a que las acciones terminen es implícito en el modelo síncrono, mientras que es explícito, bajo nuestro control, en el asincrónico. La asincronicidad corta en ambos sentidos. Hace que sea más fácil expresar programas que no se ajustan al modelo de control en línea recta, pero también puede hacer que sea más incómodo expresar programas que sí siguen una línea recta. Veremos algunas formas de abordar esta incomodidad más adelante en el capítulo. Ambas plataformas importantes de programación JavaScript—los navegadores y Node.js—realizan las operaciones que pueden tomar un tiempo de forma asincrónica, en lugar de confiar en hilos. Dado que la programación con hilos es notoriamente difícil (entender lo que hace un programa es mucho más difícil cuando está haciendo varias cosas a la vez), esto generalmente se considera algo bueno. ## Tecnología cuervo La mayoría de las personas son conscientes del hecho de que los cuervos son pájaros muy inteligentes. Pueden usar herramientas, planear con anticipación, recordar cosas e incluso comunicarse estas cosas entre ellos. Lo que la mayoría de la gente no sabe es que son capaces de hacer muchas cosas que mantienen bien escondidas de nosotros. Personas de buena reputación (y un tanto excéntricas), expertas en córvidos, me han dicho que la tecnología cuervo no está muy por detrás de la tecnología humana, y que nos están alcanzando. Por ejemplo, muchas culturas cuervo tienen la capacidad de construir dispositivos informáticos. Estos no son electrónicos, como lo son los dispositivos informáticos humanos, sino que operan a través de las acciones de pequeños insectos, una especie estrechamente relacionada con las termitas, que ha desarrollado una relación simbiótica con los cuervos. Los pájaros les proporcionan comida, y a cambio los insectos construyen y operan sus complejas colonias que, con la ayuda de las criaturas vivientes dentro de ellas, realizan computaciones. Tales colonias generalmente se encuentran en nidos grandes y de larga vida.
The birds and insects work together to build a network of bulbous clay structures, hidden between the twigs of the nest, in which the insects live and work.

To communicate with other devices, these machines use light signals. The crows embed pieces of reflective material in special communication stalks, and the insects aim these to reflect light at another nest, encoding data as a sequence of quick flashes. This means that only nests that have an unbroken visual connection can communicate with each other.

Our friend the corvid expert has mapped the network of crow nests in the village of Hières-sur-Amby, on the banks of the river Rhône. This map shows the nests and their connections.

In an astounding example of convergent evolution, crow computers run JavaScript. In this chapter we'll write some basic networking functions for them.

## Callbacks

One approach to asynchronous programming is to make functions that perform a slow action take an extra argument, a callback function. The action is started, and when it finishes, the callback function is called with the result.

As an example, the `setTimeout` function, available both in Node.js and in browsers, waits a given number of milliseconds (a second is a thousand milliseconds) and then calls a function.

> setTimeout(() => console.log("Tick"), 500);

Waiting is not generally a very important type of work, but it can be useful when doing something like updating an animation or checking whether something is taking longer than a given amount of time.

Performing multiple asynchronous actions in a row using callbacks means that you have to keep passing new functions to handle the continuation of the computation after each action.

Most crow nest computers have a long-term data storage bulb, where pieces of information are etched into twigs so that they can be retrieved later. Etching or finding a piece of data takes a moment, so the interface to long-term storage is asynchronous and uses callback functions.

Storage bulbs store pieces of JSON-encodable data under names. A crow might store information about the places where it has hidden food under the name `"caches de alimentos"`, which could hold an array of names that point at other pieces of data describing the actual cache. To look up a food cache in the storage bulbs of the Gran Roble nest, a crow could run code like this:

> import {granRoble} from "./tecnologia-cuervo"; granRoble.leerAlmacenamiento("caches de alimentos", caches => { let primerCache = caches[0]; granRoble.leerAlmacenamiento(primerCache, informacion => { console.log(informacion); }); });

(All binding names and strings have been translated from crow language.)

This style of programming is workable, but the indentation level increases with each asynchronous action, because you end up in another function. Doing more complicated things, such as running multiple actions at the same time, can get a little awkward.

Crow nest computers are built to communicate using request-response pairs.
That means one nest sends a message to another nest, which then immediately sends a message back, confirming receipt and possibly including a reply to a question asked in the message.

Each message is tagged with a type, which determines how it is handled. Our code can define handlers for specific request types, and when a request of that type comes in, the handler is called to produce a response.

The interface exported by the `"./tecnologia-cuervo"` module provides callback-based functions for communication. Nests have a `send` method that sends off a request. It expects the name of the target nest, the type of the request, and the content of the request as its first three arguments, and a function to call when a response comes in as its fourth and last argument.

> granRoble.send("Pastura de Vacas", "nota", "Vamos a graznar fuerte a las 7PM", () => console.log("Nota entregada."));

But to make nests capable of receiving that request, we first have to define a request type named `"nota"`. The code that handles the requests has to run not just on this nest-computer but on all nests that can receive messages of this type. We'll just assume that a crow flies over and installs our handler code on all the nests.

> import {definirTipoSolicitud} from "./tecnologia-cuervo"; definirTipoSolicitud("nota", (nido, contenido, fuente, listo) => { console.log(`${nido.nombre} recibio nota: ${contenido}`); listo(); });

The `definirTipoSolicitud` function defines a new type of request. The example adds support for `"nota"` requests, which just sends a note to a given nest. Our implementation calls `console.log` so that we can verify that the request arrived. Nests have a `nombre` property holding their name.

The fourth argument given to the handler, `listo`, is a callback function that it must call when it is done with the request. If we had used the handler's return value as the response value, that would mean a request handler could not itself perform asynchronous actions. A function doing asynchronous work typically returns before the work is done, having arranged for a callback to be called when it completes. So we need some asynchronous mechanism, in this case another callback function, to signal when a response is available.

In a way, asynchronicity is contagious. Any function that calls a function that works asynchronously must itself be asynchronous, using a callback or a similar mechanism to deliver its result. Calling a callback is somewhat more involved and error-prone than simply returning a value, so needing to structure large parts of your program that way is not great.

## Promises

Working with abstract concepts is often easier when those concepts can be represented by values. In the case of asynchronous actions, you could, instead of arranging for a function to be called at some point in the future, return an object that represents this future event.

This is what the standard class `Promise` is for. A promise is an asynchronous action that may complete at some point and produce a value.
It is able to notify anyone who is interested when its value becomes available.

The easiest way to create a promise is by calling `Promise.resolve`. This function makes sure that the value you give it is wrapped in a promise. If it is already a promise, it is simply returned; otherwise, you get a new promise that immediately finishes with your value as its result.

> let quince = Promise.resolve(15); quince.then(valor => console.log(`Obtuve ${valor}`)); // → Obtuve 15

To get the result of a promise, you can use its `then` method. This registers a callback function to be called when the promise resolves and produces a value. You can add multiple callbacks to a single promise, and they will be called even if you add them after the promise has already resolved (finished).

But that's not all the `then` method does. It returns another promise, which resolves to the value that the handler function returns or, if that returns a promise, waits for that promise and then resolves to its result.

It is useful to think of promises as a device for moving values into an asynchronous reality. A normal value is simply there. A promised value is a value that might already be there or might appear at some point in the future. Computations defined in terms of promises act on such wrapped values and are executed asynchronously as the values become available.

To create a promise, you can use `Promise` as a constructor. It has a somewhat odd interface: the constructor expects a function as its argument, which it immediately calls, passing it a function that can be used to resolve the promise. It works this way, instead of with, say, a `resolve` method, so that only the code that created the promise can resolve it.

This is how you would create a promise-based interface for the `leerAlmacenamiento` function:

> function almacenamiento(nido, nombre) { return new Promise(resolve => { nido.leerAlmacenamiento(nombre, resultado => resolve(resultado)); }); } almacenamiento(granRoble, "enemigos") .then(valor => console.log("Obtuve", valor));

This asynchronous function returns a meaningful value. This is the main advantage of promises: they simplify the use of asynchronous functions. Instead of having to pass around callbacks, promise-based functions look similar to regular ones. They take input as arguments and return their result; the only difference is that the output may not be available yet.

## Failure

Regular JavaScript computations can fail by throwing an exception. Asynchronous computations often need something like that. A network request may fail, or some code that is part of the asynchronous computation may throw an exception.

One of the most pressing problems with the callback style of asynchronous programming is that it makes it extremely difficult to ensure that failures are properly reported to the callbacks. A widely used convention is that the first argument to the callback is used to indicate that the action failed, and the second contains the value produced by the action when it was successful.
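As a minimal sketch of that convention (the function name and its behaviour here are made up for illustration and are not part of the crow module), an error-first callback function might look like this:

> // Hypothetical callback-style function: the first callback argument is the
// failure (or null), the second is the result produced on success.
function buscarComida(lugar, devolucionDeLlamada) {
  setTimeout(() => {
    if (lugar == "") devolucionDeLlamada(new Error("Lugar desconocido"));
    else devolucionDeLlamada(null, `semillas en ${lugar}`);
  }, 100);
}
buscarComida("el granero", (fallo, comida) => {
  if (fallo) console.log("Fallo:", fallo.message);
  else console.log("Encontrado:", comida);
});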
Such callbacks must always check whether they received an exception and make sure that any problems they cause, including exceptions thrown by the functions they call, are caught and given to the right function.

Promises make this easier. They can be either resolved (the action finished successfully) or rejected (it failed). Resolve handlers (registered with `then`) are called only when the action is successful, and rejections are automatically propagated to the new promise returned by `then`. And when a handler throws an exception, this automatically causes the promise produced by its `then` call to be rejected. So if any element in a chain of asynchronous actions fails, the outcome of the whole chain is marked as rejected, and no success handlers are called beyond the point where it failed.

Much like resolving a promise provides a value, rejecting one also provides one, usually called the reason of the rejection. When an exception in a handler function causes the rejection, the exception value is used as the reason. Similarly, when a handler returns a promise that is rejected, that rejection flows into the next promise. There is a `Promise.reject` function that creates a new, immediately rejected promise.

To explicitly handle such rejections, promises have a `catch` method that registers a handler to be called when the promise is rejected, similar to how `then` handlers handle normal resolution. It is also very much like `then` in that it returns a new promise, which resolves to the original promise's value if that resolves normally and to the result of the `catch` handler otherwise. If a `catch` handler throws an error, the new promise is also rejected.

As a shorthand, `then` also accepts a rejection handler as a second argument, so you can install both types of handlers in a single method call.

A function passed to the `Promise` constructor receives a second argument, alongside the resolve function, which it can use to reject the new promise.

The chains of promise values created by calls to `then` and `catch` can be seen as a pipeline through which asynchronous values or failures move. Since such chains are created by registering handlers, each link has a success handler or a rejection handler (or both) associated with it. Handlers that do not match the type of outcome (success or failure) are ignored. But those that do match are called, and their outcome determines what kind of value comes next: success when they return a non-promise value, rejection when they throw an exception, and the outcome of a promise when they return one of those.

Much like an uncaught exception is handled by the environment, JavaScript environments can detect when a promise rejection isn't handled, and they will report this as an error.

## Networks are hard

Occasionally, there isn't enough light for the crows' mirror systems to transmit a signal, or something is blocking the path of the signal. It is possible for a signal to be sent but never received. As it is, that will just cause the callback given to `send` to never be called, which will probably make the program stop without ever noticing there is a problem.
It would be nice if, after a given period of not getting a response, a request would time out and report failure.

Often, transmission failures are random accidents, like a car's headlight interfering with the light signals, and simply retrying the request may cause it to succeed. So, while we're at it, let's make our request function automatically retry the sending of the request a few times before it gives up.

And, since we've established that promises are a good thing, we'll also make our request function return a promise. In terms of what they can express, callbacks and promises are equivalent: callback-based functions can be wrapped to expose a promise-based interface, and vice versa.

Even when a request and its response are successfully delivered, the response may indicate failure, for example if the request tries to use a request type that hasn't been defined or the handler throws an error. To support this, `send` and `definirTipoSolicitud` follow the convention mentioned before, where the first argument passed to callbacks is the failure reason, if any, and the second is the actual result. These can be translated to promise resolution and rejection by our wrapper.

> class TiempoDeEspera extends Error {} function request(nido, objetivo, tipo, contenido) { return new Promise((resolve, reject) => { let listo = false; function intentar(n) { nido.send(objetivo, tipo, contenido, (fallo, value) => { listo = true; if (fallo) reject(fallo); else resolve(value); }); setTimeout(() => { if (listo) return; else if (n < 3) intentar(n + 1); else reject(new TiempoDeEspera("Tiempo de espera agotado")); }, 250); } intentar(1); }); }

Because promises can be resolved (or rejected) only once, this will work. The first time `resolve` or `reject` is called determines the outcome of the promise, and any further calls, such as the timeout arriving after the request finishes or a request coming back after another request already finished, are ignored.

To build an asynchronous loop, for the retries, we need to use a recursive function; a regular loop doesn't allow us to stop and wait for an asynchronous action. The `intentar` function makes a single attempt to send a request. It also sets a timeout that, if no response has come back after 250 milliseconds, either starts the next attempt or, if this was the third attempt, rejects the promise with an instance of `TiempoDeEspera` as the reason.

Retrying every quarter-second and giving up when no response has come in after three attempts is definitely somewhat arbitrary. It is even possible, when the request did arrive but the handler is just taking a bit longer, for requests to be delivered multiple times. We'll write our handlers with that problem in mind; duplicate messages should be harmless.

In general, we will not be building a world-class, robust network today. But that's okay; crows don't have very high expectations yet when it comes to computing.
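As a usage sketch (assuming the `granRoble` nest and the `"nota"` handler defined earlier in this chapter), the promise returned by `request` can be consumed like any other, with a timeout surfacing as a rejection:

> request(granRoble, "Pastura de Vacas", "nota", "Reunión al atardecer")
  .then(() => console.log("Nota entregada"))
  .catch(razon => {
    if (razon instanceof TiempoDeEspera) console.log("Sin respuesta del nido");
    else throw razon;  // re-throw unexpected failures so they are not silently swallowed
  });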
To fully isolate ourselves from callbacks, we'll go ahead and also define a wrapper for `definirTipoSolicitud` that allows the handler function to return a promise or a plain value, and wires that up to the callback for us.

> function tipoSolicitud(nombre, manejador) { definirTipoSolicitud(nombre, (nido, contenido, fuente, devolucionDeLlamada) => { try { Promise.resolve(manejador(nido, contenido, fuente)) .then(response => devolucionDeLlamada(null, response), failure => devolucionDeLlamada(failure)); } catch (exception) { devolucionDeLlamada(exception); } }); }

`Promise.resolve` is used to convert the value returned by `manejador` to a promise if it isn't one already.

Note that the call to `manejador` had to be wrapped in a `try` block to make sure any exception it raises directly is given to the callback. This nicely illustrates the difficulty of properly handling errors with raw callbacks: it is very easy to forget to route exceptions like that, and if you don't, failures won't get reported to the right callback. Promises make this mostly automatic and thus less error-prone.

## Collections of promises

Each nest computer keeps an array of other nests within transmission distance in its `vecinos` property. To check which of those are currently reachable, you could write a function that tries to send a `"ping"` request (a request that simply asks for a response) to each of them and see which ones come back.

When working with collections of promises running at the same time, the `Promise.all` function can be useful. It returns a promise that waits for all of the promises in the array to resolve and then resolves to an array of the values that those promises produced (in the same order as the original array). If any promise is rejected, the result of `Promise.all` is itself rejected.

> tipoSolicitud("ping", () => "pong"); function vecinosDisponibles(nido) { let solicitudes = nido.vecinos.map(vecino => { return request(nido, vecino, "ping") .then(() => true, () => false); }); return Promise.all(solicitudes).then(resultado => { return nido.vecinos.filter((_, i) => resultado[i]); }); }

When a neighbor isn't available, we don't want the entire combined promise to fail, since then we still wouldn't know anything. So the function that is mapped over the set of neighbors to turn them into request promises attaches handlers that make successful requests produce `true` and rejected ones produce `false`.

In the handler for the combined promise, `filter` is used to remove those elements from the `vecinos` array whose corresponding value is false. This makes use of the fact that `filter` passes the array index of the current element as a second argument to its filtering function (`map`, `some`, and similar higher-order array methods do the same).

## Network flooding

The fact that nests can talk only to their neighbors greatly inhibits the usefulness of this network. To broadcast information to the whole network, one solution is to set up a type of request that is automatically forwarded to neighbors. These neighbors then in turn forward it to their neighbors, until the whole network has received the message.
> import {todosLados} from "./tecnologia-cuervo"; todosLados(nido => { nido.estado.chismorreo = []; }); function enviarChismorreo(nido, mensaje, exceptoPor = null) { nido.estado.chismorreo.push(mensaje); for (let vecino of nido.vecinos) { if (vecino == exceptoPor) continue; request(nido, vecino, "chismorreo", mensaje); } } tipoSolicitud("chismorreo", (nido, mensaje, fuente) => { if (nido.estado.chismorreo.includes(mensaje)) return; console.log(`${nido.nombre} recibio chismorreo '${ mensaje}' de ${fuente}`); enviarChismorreo(nido, mensaje, fuente); });

To avoid sending the same message around the network forever, each nest keeps an array of gossip strings it has already seen. To define this array, we use the `todosLados` function, which runs code on every nest, to add a property to the nest's `estado` object, which is where we'll keep nest-local state.

When a nest receives a duplicate gossip message, which is very likely to happen with everybody blindly resending them, it ignores it. But when it receives a new message, it excitedly tells all its neighbors except the one that sent it the message. This will cause a new piece of gossip to spread through the network like an ink stain in water. Even when some connections aren't currently working, if there is an alternative route to a given nest, the gossip will reach it through there.

This style of network communication is called flooding: it floods the network with a piece of information until all nodes have it. We can call `enviarChismorreo` to see a message flow through the village.

> enviarChismorreo(granRoble, "Niños con una pistola de aire en el parque");

## Message routing

If a given node wants to talk only to one other node, flooding is not a very efficient approach. Especially when the network is big, it would lead to a lot of useless data transfers. An alternative approach is to set up a way for messages to hop from node to node until they reach their destination.

The difficulty with that is that it requires knowledge about the layout of the network. To send a request in the direction of a faraway nest, it is necessary to know which neighboring nest gets it closer to its destination. Sending it in the wrong direction will not do much good. Since each nest knows only about its direct neighbors, it doesn't have the information it needs to compute a route. We must somehow spread the information about these connections to all nests, preferably in a way that allows it to change over time, when nests are abandoned or new nests are built.

We can use flooding again, but instead of checking whether a given message has already been received, we now check whether a given nest's new set of neighbors matches the current set we have for it.
> tipoSolicitud("conexiones", (nido, {nombre, vecinos}, fuente) => { let conexiones = nido.estado.conexiones; if (JSON.stringify(conexiones.get(nombre)) == JSON.stringify(vecinos)) return; conexiones.set(nombre, vecinos); difundirConexiones(nido, nombre, fuente); }); function difundirConexiones(nido, nombre, exceptoPor = null) { for (let vecino of nido.vecinos) { if (vecino == exceptoPor) continue; solicitud(nido, vecino, "conexiones", { nombre, vecinos: nido.estado.conexiones.get(nombre) }); } } todosLados(nido => { nido.estado.conexiones = new Map; nido.estado.conexiones.set(nido.nombre, nido.vecinos); difundirConexiones(nido, nido.nombre); }); La comparación usa `JSON.stringify` porque `==` , en objetos o arrays, solo retornara true cuando los dos tengan exactamente el mismo valor, lo cual no es lo que necesitamos aquí. Comparar los strings JSON es una cruda pero efectiva manera de comparar su contenido. Los nodos comienzan inmediatamente a transmitir sus conexiones, lo que debería, a menos que algunos nidos sean completamente inalcanzables, dar rápidamente cada nido un mapa del grafo de la red actual. Una cosa que puedes hacer con grafos es encontrar rutas en ellos, como vimos en el Capítulo 7. Si tenemos una ruta hacia el destino de un mensaje, sabemos en qué dirección enviarlo. Esta función `encontrarRuta` , que se parece mucho a `encontrarRuta` del Capítulo 7, busca por una forma de llegar a un determinado nodo en la red. Pero en lugar de devolver toda la ruta, simplemente retorna el siguiente paso. Ese próximo nido en si mismo, usando su información actual sobre la red, decididira hacia dónde enviar el mensaje. > function encontrarRuta(desde, hasta, conexiones) { let trabajo = [{donde: desde, via: null}]; for (let i = 0; i < trabajo.length; i++) { let {donde, via} = trabajo[i]; for (let siguiente of conexiones.get(donde) || []) { if (siguiente == hasta) return via; if (!trabajo.some(w => w.donde == siguiente)) { trabajo.push({donde: siguiente, via: via || siguiente}); } } } return null; } Ahora podemos construir una función que pueda enviar mensajes de larga distancia. Si el mensaje está dirigido a un vecino directo, se entrega normalmente. Si no, se empaqueta en un objeto y se envía a un vecino que este más cerca del objetivo, usando el tipo de solicitud `"ruta"` , que hace que ese vecino repita el mismo comportamiento. > function solicitudRuta(nido, objetivo, tipo, contenido) { if (nido.vecinos.includes(objetivo)) { return solicitud(nido, objetivo, tipo, contenido); } else { let via = encontrarRuta(nido.nombre, objetivo, nido.estado.conexiones); if (!via) throw new Error(`No hay rutas disponibles hacia ${objetivo}`); return solicitud(nido, via, "ruta", {objetivo, tipo, contenido}); } } tipoSolicitud("ruta", (nido, {objetivo, tipo, contenido}) => { return solicitudRuta(nido, objetivo, tipo, contenido); }); Ahora podemos enviar un mensaje al nido en la torre de la iglesia, que esta a cuatro saltos de red de distancia. > solicitudRuta(granRoble, "Torre de la Iglesia", "nota", "Cuidado con las Palomas!"); Hemos construido varias capas de funcionalidad sobre un sistema de comunicación primitivo para que sea conveniente de usarlo. Este es un buen (aunque simplificado) modelo de cómo las redes de computadoras reales trabajan. Una propiedad distintiva de las redes de computadoras es que no son confiables—las abstracciones construidas encima de ellas pueden ayudar, pero no se puede abstraer la falla de una falla de red. 
So network programming is typically very much about anticipating and dealing with failures.

## Async functions

To store important information, crows are known to duplicate it across nests. That way, when a hawk destroys a nest, the information isn't lost. To retrieve a given piece of information that it does not have in its own storage bulb, a nest computer might consult random other nests in the network until it finds one that has it.

> tipoSolicitud("almacenamiento", (nido, nombre) => almacenamiento(nido, nombre)); function encontrarEnAlmacenamiento(nido, nombre) { return almacenamiento(nido, nombre).then(encontrado => { if (encontrado != null) return encontrado; else return encontrarEnAlmacenamientoRemoto(nido, nombre); }); } function red(nido) { return Array.from(nido.estado.conexiones.keys()); } function encontrarEnAlmacenamientoRemoto(nido, nombre) { let fuentes = red(nido).filter(n => n != nido.nombre); function siguiente() { if (fuentes.length == 0) { return Promise.reject(new Error("No encontrado")); } else { let fuente = fuentes[Math.floor(Math.random() * fuentes.length)]; fuentes = fuentes.filter(n => n != fuente); return solicitudRuta(nido, fuente, "almacenamiento", nombre) .then(valor => valor != null ? valor : siguiente(), siguiente); } } return siguiente(); }

Because `conexiones` is a `Map`, `Object.keys` does not work on it. It has a `keys` method, but that returns an iterator rather than an array. An iterator (or iterable value) can be converted to an array with the `Array.from` function.

Even with promises, this is rather awkward code. Multiple asynchronous actions are chained together in non-obvious ways. We again need a recursive function (`siguiente`) to model looping through the nests. And the thing the code actually does is completely linear; it always waits for the previous action to complete before starting the next one. In a synchronous programming model, it would be simpler to express.

The good news is that JavaScript allows you to write pseudo-synchronous code. An `async` function is a function that implicitly returns a promise and that can, in its body, `await` other promises in a way that looks synchronous.

We can rewrite `encontrarEnAlmacenamiento` like this:

> async function encontrarEnAlmacenamiento(nido, nombre) { let local = await almacenamiento(nido, nombre); if (local != null) return local; let fuentes = red(nido).filter(n => n != nido.nombre); while (fuentes.length > 0) { let fuente = fuentes[Math.floor(Math.random() * fuentes.length)]; fuentes = fuentes.filter(n => n != fuente); try { let encontrado = await solicitudRuta(nido, fuente, "almacenamiento", nombre); if (encontrado != null) return encontrado; } catch (_) {} } throw new Error("No encontrado"); }

An `async` function is marked by the word `async` before the `function` keyword. Methods can also be made `async` by writing `async` before their name. When such a function or method is called, it returns a promise. As soon as the body returns something, that promise is resolved. If it throws an exception, the promise is rejected.

> encontrarEnAlmacenamiento(granRoble, "eventos del 2017-12-21") .then(console.log);

Inside an `async` function, the word `await` can be put in front of an expression to wait for a promise to resolve, and only then continue the execution of the function.
Such a function no longer runs, like a regular JavaScript function, from start to finish in one go. Instead, it can be frozen at any point that has an `await` and can be resumed at a later time.

For non-trivial asynchronous code, this notation is usually more convenient than using promises directly. Even if you need to do something that does not fit the synchronous model, such as performing multiple actions at the same time, it is easy to combine `await` with the direct use of promises.

## Generators

This ability of functions to be paused and then resumed again is not exclusive to `async` functions. JavaScript also has a feature called generator functions. These are similar, but without the promises.

When you define a function with `function*` (placing an asterisk after the word `function`), it becomes a generator. When you call a generator, it returns an iterator, which we already saw in Chapter 6.

> function* potenciacion(n) { for (let actual = n;; actual *= n) { yield actual; } } for (let potencia of potenciacion(3)) { if (potencia > 50) break; console.log(potencia); } // → 3 // → 9 // → 27

Initially, when you call `potenciacion`, the function is frozen at its start. Every time you call `next` on the iterator, the function runs until it hits a `yield` expression, which pauses it and causes the yielded value to become the next value produced by the iterator. When the function returns (the one in the example never does), the iterator is done.

Writing iterators is often much easier when you use generator functions. The iterator for the group class (`Conjunto`, from the exercise in Chapter 6) can be written with this generator:

> Conjunto.prototype[Symbol.iterator] = function*() { for (let i = 0; i < this.miembros.length; i++) { yield this.miembros[i]; } };

There is no longer a need to create an object to hold the iteration state; generators automatically save their local state every time they yield.

Such `yield` expressions may occur only directly in the generator function itself, not in an inner function you define inside of it. The state a generator saves, when yielding, is only its local environment and the position where it yielded.

An `async` function is a special type of generator. It produces a promise when called, which is resolved when it returns (finishes) and rejected when it throws an exception. Whenever it yields (awaits) a promise, the result of that promise (value or thrown exception) is the result of the `await` expression.

## The event loop

Asynchronous programs are executed piece by piece. Each piece may start some actions and schedule code to be executed when the action finishes or fails. In between these pieces, the program sits idle, waiting for the next action.

So callbacks are not directly called by the code that scheduled them. If I call `setTimeout` from within a function, that function will have returned by the time the callback function is called. And when the callback returns, control does not go back to the function that scheduled it. Asynchronous behavior happens on its own empty function call stack. This is one of the reasons that, without promises, managing exceptions in asynchronous code is hard.
Since each callback starts with a mostly empty stack, your `catch` handlers won't be on the stack when they throw an exception.

> try { setTimeout(() => { throw new Error("Woosh"); }, 20); } catch (_) { // This will not run console.log("Atrapado!"); }

No matter how closely together events, such as timeouts or incoming requests, happen, a JavaScript environment will run only one program at a time. You can think of this as it running a big loop around your program, called the event loop. When there's nothing to be done, that loop is stopped. But as events come in, they are added to a queue, and their code is executed one after the other. Because no two things run at the same time, slow-running code can delay the handling of other events.

This example sets a timeout but then dallies until after the timeout's intended point of time, causing the timeout to be late.

> let comienzo = Date.now(); setTimeout(() => { console.log("Tiempo de espera corrio al ", Date.now() - comienzo); }, 20); while (Date.now() < comienzo + 50) {} console.log("Se desperdicio tiempo hasta el ", Date.now() - comienzo); // → Se desperdicio tiempo hasta el 50 // → Tiempo de espera corrio al 55

Promises always resolve or reject as a new event. Even if a promise is already resolved, waiting for it will cause your callback to run after the current script finishes, rather than right away.

> Promise.resolve("Listo").then(console.log); console.log("Yo primero!"); // → Yo primero! // → Listo

In later chapters we'll see various other types of events that run on the event loop.

## Asynchronous bugs

When your program runs synchronously, in a single go, there are no state changes happening except those that the program itself makes. For asynchronous programs this is different: they may have gaps in their execution during which other code can run.

Let's look at an example. One of the hobbies of our crows is to count the number of chicks that hatch throughout the village every year. Nests store this count in their storage bulbs. The following code tries to enumerate the counts from all the nests for a given year:

> function cualquierAlmacenamiento(nido, fuente, nombre) { if (fuente == nido.nombre) return almacenamiento(nido, nombre); else return solicitudRuta(nido, fuente, "almacenamiento", nombre); } async function polluelos(nido, años) { let lista = ""; await Promise.all(red(nido).map(async nombre => { lista += `${nombre}: ${ await cualquierAlmacenamiento(nido, nombre, `polluelos en ${años}`) }\n`; })); return lista; }

The `async nombre =>` part shows that arrow functions can also be made `async` by putting the word `async` in front of them.

The code doesn't immediately look suspicious: it maps the `async` arrow function over the set of nests, creating an array of promises, and then uses `Promise.all` to wait for all of these before returning the list they build up. But it is seriously broken. It always returns only a single line of output, listing the nest that was slowest to respond.

> polluelos(granRoble, 2017).then(console.log);

Can you work out why?
The problem lies in the `+=` operator, which takes the current value of `lista` at the time the statement starts executing and then, when the `await` finishes, sets the `lista` binding to be that value plus the added string.

But between the time the statement starts executing and the time it finishes there is an asynchronous gap. The `map` expression runs before anything has been added to the list, so each of the `+=` operators starts from an empty string and ends up, when its storage retrieval finishes, setting `lista` to a single-line list: the result of adding its line to the empty string.

This could easily have been avoided by returning the lines from the mapped promises and calling `join` on the result of `Promise.all`, instead of building up the list by changing a binding. As usual, computing new values is less error-prone than changing existing values.

> async function polluelos(nido, año) { let lineas = red(nido).map(async nombre => { return nombre + ": " + await cualquierAlmacenamiento(nido, nombre, `polluelos en ${año}`); }); return (await Promise.all(lineas)).join("\n"); }

Mistakes like this are easy to make, especially when using `await`, and you should be aware of where the gaps in your code occur. An advantage of JavaScript's explicit asynchronicity (whether through callbacks, promises, or `await`) is that spotting these gaps is relatively easy.

## Summary

Asynchronous programming makes it possible to express waiting for long-running actions without freezing the program during these actions. JavaScript environments typically implement this style of programming using callbacks, functions that are called when the actions complete. An event loop schedules such callbacks to be called when appropriate, one after the other, so that their execution does not overlap.

Programming asynchronously is made easier by promises, objects that represent actions that might complete in the future, and `async` functions, which allow you to write an asynchronous program as if it were synchronous.

## Exercises

### Following the scalpel

The village crows own an old scalpel that they occasionally use on special missions, say, to cut through screen doors or packaging. To be able to quickly track it down, every time the scalpel is moved to another nest, an entry is added to the storage of both the nest that had it and the nest that took it, under the name `"bisturí"`, with its new location as the value.

This means that finding the scalpel is a matter of following the breadcrumb trail of storage entries until you find a nest whose entry points at the nest itself.

Write an `async` function `localizarBisturi` that does this, starting at the nest on which it runs. You can use the `cualquierAlmacenamiento` function defined earlier to access storage in arbitrary nests. The scalpel has been going around long enough that you may assume that every nest has a `bisturí` entry in its data storage.

Next, write the same function again without using `async` and `await`. Do request failures properly show up as rejections of the returned promise in both versions? How?

> async function localizarBisturi(nido) { // Your code here. } function localizarBisturi2(nido) { // Your code here.
} localizarBisturi(granRoble).then(console.log); // → Tienda del Carnicero

This can be done with a single loop that searches through the nests, moving forward to the next one when it finds a value that doesn't match the current nest's name and returning the name when it finds a value that does match. In the `async` function, a regular `for` or `while` loop can be used.

To do the same in a plain function, you will have to build your loop using a recursive function. The easiest way to do this is to have that function return a promise by calling `then` on the promise that retrieves the storage value. Depending on whether that value matches the name of the current nest, the handler returns that value or a further promise created by calling the loop function again. Don't forget to start the loop by calling the recursive function once from the main function.

In the `async` function, rejected promises are converted to exceptions by `await`. When an `async` function throws an exception, its promise is rejected. So that works.

If you implemented the non-`async` function as outlined earlier, the way `then` works also automatically causes a failure to end up in the returned promise. If a request fails, the handler passed to `then` isn't called, and the promise it returns is rejected with the same reason.

### Building Promise.all

Given an array of promises, `Promise.all` returns a promise that waits for all of the promises in the array to finish. It then succeeds, yielding an array of result values. If a promise in the array fails, the promise returned by `all` fails too, with the failure reason from the failing promise.

Implement something like this yourself as a regular function called `Promise_all`.

Remember that after a promise has succeeded or failed, it can't succeed or fail again, and further calls to the functions that resolve it are ignored. This can simplify the way you handle failure of your promise.

> function Promise_all(promesa) { return new Promise((resolve, reject) => { // Your code here. }); } // Test code. Promise_all([]).then(array => { console.log("This should be []:", array); }); function soon(val) { return new Promise(resolve => { setTimeout(() => resolve(val), Math.random() * 500); }); } Promise_all([soon(1), soon(2), soon(3)]).then(array => { console.log("This should be [1, 2, 3]:", array); }); Promise_all([soon(1), Promise.reject("X"), soon(3)]) .then(array => { console.log("We should not get here"); }) .catch(error => { if (error != "X") { console.log("Unexpected failure:", error); } });

The function passed to the `Promise` constructor will have to call `then` on each of the promises in the given array. When one of them succeeds, two things need to happen: the resulting value needs to be stored in the correct position of a result array, and we must check whether this was the last pending promise and finish our own promise if it was.

The latter can be done with a counter that is initialized to the length of the input array and from which we subtract 1 every time a promise succeeds. When it reaches 0, we are done. Make sure you take into account the situation where the input array is empty (and thus no promise will ever resolve).

Handling failure requires some thought but turns out to be extremely simple.
Just pass the `reject` function of the wrapping promise to each of the promises in the array as a `catch` handler, or as a second argument to `then`, so that a failure in one of them triggers the rejection of the whole wrapping promise.

The evaluator, which determines the meaning of expressions in a programming language, is just another program.

Building your own programming language is surprisingly easy (as long as you do not aim too high) and very enlightening. The main thing I want to show in this chapter is that there is no magic involved in building your own language. I've often felt that some human inventions were so immensely clever and complicated that I'd never be able to understand them. But with a little reading and experimenting, they often turn out to be quite mundane.

We will build a programming language called Egg. It will be a tiny, simple language, but one that is powerful enough to express any computation you can think of. It will allow simple abstraction based on functions.

## Parsing

The most immediately visible part of a programming language is its syntax, or notation. A parser is a program that reads a piece of text and produces a data structure that reflects the structure of the program contained in that text. If the text does not form a valid program, the parser should point out the error.

Our language will have a simple and uniform syntax. Everything in Egg is an expression. An expression can be the name of a binding, a number, a string, or an application. Applications are used for function calls but also for constructs such as `if` or `while`.

To keep the parser simple, strings in Egg do not support anything like backslash escapes. A string is simply a sequence of characters that are not double quotes, wrapped in double quotes. A number is a sequence of digits. Binding names can consist of any character that is not whitespace and that does not have a special meaning in the syntax.

Applications are written the way they are in JavaScript, by putting parentheses after an expression and having any number of arguments between those parentheses, separated by commas.

> hacer(definir(x, 10), si(>(x, 5), imprimir("grande"), imprimir("pequeño")))

The uniformity of the Egg language means that things that are operators in JavaScript (such as `>`) are normal bindings in this language, applied just like other functions. And since the syntax has no concept of a block, we need a `hacer` construct to represent doing multiple things in sequence.

The data structure the parser will use to describe a program consists of expression objects, each of which has a `type` property indicating the kind of expression it is and other properties to describe its content.

Expressions of type `"value"` represent literal strings or numbers. Their `value` property contains the string or number value that they represent. Expressions of type `"word"` are used for identifiers (names). Such objects have a `name` property that holds the identifier's name as a string. Finally, `"apply"` expressions represent applications.
They have an `operator` property that refers to the expression being applied, as well as an `args` property that holds an array of argument expressions.

The `>(x, 5)` part of the previous program would be represented like this:

> { type: "apply", operator: {type: "word", name: ">"}, args: [ {type: "word", name: "x"}, {type: "value", value: 5} ] }

Such a data structure is called a syntax tree. If you imagine the objects as dots and the links between them as lines between those dots, it has a treelike shape. The fact that expressions contain other expressions, which in turn might contain more expressions, is similar to the way tree branches split and split again.

Compare this to the parser we wrote for the configuration file format in Chapter 9, which had a simple structure: it split the input into lines and handled those lines one at a time. There were only a few simple forms that a line was allowed to have. Here we must find a different approach. Expressions are not separated into lines, and they have a recursive structure: application expressions contain other expressions.

Fortunately, this problem can be solved very well by writing a parser function that is recursive in a way that reflects the recursive nature of the language.

We define a function `analizarExpresion`, which takes a string as input and returns an object containing the data structure for the expression at the start of the string, along with the part of the string left after parsing this expression. When parsing subexpressions (the argument to an application, for example), this function can be called again, yielding the argument expression as well as the text that remains. That text may in turn contain more arguments or may be the closing parenthesis that ends the list of arguments.

This is the first part of the parser:

> function analizarExpresion(programa) { programa = saltarEspacio(programa); let emparejamiento, expresion; if (emparejamiento = /^"([^"]*)"/.exec(programa)) { expresion = {type: "value", value: emparejamiento[1]}; } else if (emparejamiento = /^\d+\b/.exec(programa)) { expresion = {type: "value", value: Number(emparejamiento[0])}; } else if (emparejamiento = /^[^\s(),"]+/.exec(programa)) { expresion = {type: "word", name: emparejamiento[0]}; } else { throw new SyntaxError("Sintaxis inesperada: " + programa); } return aplicarAnalisis(expresion, programa.slice(emparejamiento[0].length)); } function saltarEspacio(string) { let primero = string.search(/\S/); if (primero == -1) return ""; return string.slice(primero); }

Because Egg, like JavaScript, allows any amount of whitespace between its elements, we have to repeatedly cut the whitespace off the start of the program string. The `saltarEspacio` function helps with this.

After skipping any leading whitespace, `analizarExpresion` uses three regular expressions to spot the three atomic elements that Egg supports: strings, numbers, and words. The parser constructs a different kind of data structure depending on which one matches. If the input does not match one of these three forms, the expression is not valid, and the parser throws an error.
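As a quick sketch of what to expect (this example is not from the original text, and it works only once `aplicarAnalisis`, defined next, is in place), `analizarExpresion` hands back both the parsed expression and the unparsed remainder of the string:

> console.log(analizarExpresion("hola, 10)"));
// → {expresion: {type: "word", name: "hola"},
//    resto: ", 10)"}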
We use `SyntaxError` instead of `Error` as the exception constructor. It is another standard error type, and a bit more specific: it is also the error type thrown when an attempt is made to run an invalid JavaScript program.

We then cut off the part that was matched from the program string and pass that, along with the object for the expression, to `aplicarAnalisis`, which checks whether the expression is an application. If so, it parses a parenthesized list of arguments.

> function aplicarAnalisis(expresion, programa) { programa = saltarEspacio(programa); if (programa[0] != "(") { return {expresion: expresion, resto: programa}; } programa = saltarEspacio(programa.slice(1)); expresion = {type: "apply", operator: expresion, args: []}; while (programa[0] != ")") { let argumento = analizarExpresion(programa); expresion.args.push(argumento.expresion); programa = saltarEspacio(argumento.resto); if (programa[0] == ",") { programa = saltarEspacio(programa.slice(1)); } else if (programa[0] != ")") { throw new SyntaxError("Esperaba ',' o ')'"); } } return aplicarAnalisis(expresion, programa.slice(1)); }

If the next character in the program is not an opening parenthesis, this is not an application, and `aplicarAnalisis` returns the expression it was given. Otherwise, it skips the opening parenthesis and creates the syntax tree object for this application expression. It then recursively calls `analizarExpresion` to parse each argument until a closing parenthesis is found. The recursion is indirect, through `aplicarAnalisis` and `analizarExpresion` calling each other.

Because an application expression can itself be applied (as in `multiplicador(2)(1)`), `aplicarAnalisis` must, after it has parsed an application, call itself again to check whether another pair of parentheses follows.

This is all we need to parse Egg. We wrap it in a convenient `parse` function that verifies that it has reached the end of the input string after parsing the expression (an Egg program is a single expression), and that gives us the program's data structure.

> function parse(programa) { let {expresion, resto} = analizarExpresion(programa); if (saltarEspacio(resto).length > 0) { throw new SyntaxError("Texto inesperado despues de programa"); } return expresion; } console.log(parse("+(a, 10)")); // → {type: "apply", // operator: {type: "word", name: "+"}, // args: [{type: "word", name: "a"}, // {type: "value", value: 10}]}

It works! It doesn't give us very helpful information when it fails and doesn't store the line and column on which each expression starts, which might be helpful when reporting errors later, but it's good enough for our purposes.

## The evaluator

What can we do with the syntax tree for a program? Run it, of course! And that is what the evaluator does. You give it a syntax tree and a scope object that associates names with values, and it will evaluate the expression that the tree represents and return the value that this produces.
> const specialForms = Object.create(null); function evaluate(expresion, scope) { if (expresion.type == "value") { return expresion.value; } else if (expresion.type == "word") { if (expresion.name in scope) { return scope[expresion.name]; } else { throw new ReferenceError( `Undefined binding: ${expresion.name}`); } } else if (expresion.type == "apply") { let {operator, args} = expresion; if (operator.type == "word" && operator.name in specialForms) { return specialForms[operator.name](expresion.args, scope); } else { let op = evaluate(operator, scope); if (typeof op == "function") { return op(args.map(arg => evaluate(arg, scope))); } else { throw new TypeError("Applying a non-function."); } } } } The evaluator has code for each of the expression types. A literal value expression produces its value. (For example, the expression `100` just evaluates to the number 100.) For a binding, we must check whether it is actually defined in the scope and, if it is, fetch the binding’s value. Applications are more involved. If they are a special form, like `if` , we do not evaluate anything and pass the argument expressions, along with the scope, to the function that handles this form. If it is a normal call, we evaluate the operator, verify that it is a function, and call it with the evaluated arguments. We use plain JavaScript function values to represent Egg’s function values. We will come back to this later, when the special form called `fun` is defined. The recursive structure of `evaluate` resembles the similar structure of the parser, and both mirror the structure of the language itself. It would also be possible to integrate the parser with the evaluator and evaluate during parsing, but splitting them up this way makes the program clearer. This is really all that is needed to interpret Egg. It is that simple. But without defining a few special forms and adding some useful values to the environment, you can’t do much with this language yet. ## Special forms The `specialForms` object is used to define special syntax in Egg. It associates words with functions that evaluate such forms. It is currently empty. Let’s add `if` . > specialForms.si = (args, scope) => { if (args.length != 3) { throw new SyntaxError("Wrong number of args to si"); } else if (evaluate(args[0], scope) !== false) { return evaluate(args[1], scope); } else { return evaluate(args[2], scope); } }; Egg’s `si` construct expects exactly three arguments. It will evaluate the first, and if the result isn’t the value `false` , it will evaluate the second. Otherwise, the third gets evaluated. This `if` form is more similar to JavaScript’s ternary `?:` operator than to JavaScript’s `if` . It is an expression, not a statement, and it produces a value, namely the result of the second or third argument. Egg also differs from JavaScript in how it handles the condition value to `if` . It will not treat things like zero or the empty string as false, only the precise value `false` . The reason we need to represent `if` as a special form, rather than a regular function, is that all arguments to functions are evaluated before the function is called, whereas `if` should evaluate only either its second or its third argument, depending on the value of the first. The `while` form is similar. 
> specialForms.while = (args, scope) => { if (args.length != 2) { throw new SyntaxError("Wrong number of args to while"); } while (evaluate(args[0], scope) !== false) { evaluate(args[1], scope); } // Since undefined does not exist in Egg, we return false, // for lack of a meaningful result. return false; }; Another basic building block is `hacer` , which executes all its arguments from top to bottom. Its value is the value produced by the last argument. > specialForms.hacer = (args, scope) => { let value = false; for (let arg of args) { value = evaluate(arg, scope); } return value; }; To be able to create bindings and give them new values, we also create a form called `definir` . It expects a word as its first argument and an expression producing the value to assign to that word as its second argument. Since `definir` , like everything, is an expression, it must return a value. We’ll make it return the value that was assigned (just like JavaScript’s `=` operator). > specialForms.definir = (args, scope) => { if (args.length != 2 || args[0].type != "word") { throw new SyntaxError("Incorrect use of definir"); } let value = evaluate(args[1], scope); scope[args[0].name] = value; return value; }; ## The environment The scope accepted by `evaluate` is an object with properties whose names correspond to binding names and whose values correspond to the values those bindings are bound to. Let’s define an object to represent the global scope. To be able to use the `if` construct we just defined, we must have access to Boolean values. Since there are only two Boolean values, we do not need special syntax for them. We simply bind two names to the values `true` and `false` and use those. > const topScope = Object.create(null); topScope.true = true; topScope.false = false; We can now evaluate a simple expression that negates a Boolean value. > let prog = parse(`si(true, false, true)`); console.log(evaluate(prog, topScope)); // → false To supply basic arithmetic and comparison operators, we will also add some function values to the scope. In the interest of keeping the code short, we’ll use `Function` to synthesize a bunch of operator functions in a loop, instead of defining them individually. > for (let op of ["+", "-", "*", "/", "==", "<", ">"]) { topScope[op] = Function("a, b", `return a ${op} b;`); } A way to output values is also very useful, so we’ll wrap `console.log` in a function and call it `imprimir` . > topScope.imprimir = value => { console.log(value); return value; }; That gives us enough elementary tools to write simple programs. The following function provides a convenient way to parse a program and run it in a fresh scope. > function run(programa) { return evaluate(parse(programa), Object.create(topScope)); } We’ll use object prototype chains to represent nested scopes, so that the program can add bindings to its local scope without changing the top-level scope. > run(` hacer(definir(total, 0), definir(count, 1), while(<(count, 11), hacer(definir(total, +(total, count)), definir(count, +(count, 1)))), imprimir(total)) `); // → 55 This is the program we’ve seen several times before, which computes the sum of the numbers 1 to 10, expressed in Egg. It is clearly uglier than the equivalent JavaScript program—but not bad for a language implemented in less than 150 lines of code. ## Functions A programming language without functions is a poor programming language indeed. 
Fortunately, it isn’t hard to add a `fun` construct, which treats its last argument as the function’s body and uses all arguments before that as the names of the function’s parameters. > specialForms.fun = (args, scope) => { if (!args.length) { throw new SyntaxError("Functions need a body"); } let body = args[args.length - 1]; let params = args.slice(0, args.length - 1).map(expr => { if (expr.type != "word") { throw new SyntaxError("Parameter names must be words"); } return expr.name; }); return function() { if (arguments.length != params.length) { throw new TypeError("Wrong number of arguments"); } let localScope = Object.create(scope); for (let i = 0; i < arguments.length; i++) { localScope[params[i]] = arguments[i]; } return evaluate(body, localScope); }; }; Functions in Egg get their own local scope. The function produced by the `fun` form creates this local scope and adds the argument bindings to it. It then evaluates the function body in this scope and returns the result. > run(` hacer(definir(plusOne, fun(a, +(a, 1))), imprimir(plusOne(10))) `); // → 11 run(` hacer(definir(pow, fun(base, exp, si(==(exp, 0), 1, *(base, pow(base, -(exp, 1)))))), imprimir(pow(2, 10))) `); // → 1024 ## Compilation What we have built is an interpreter. During evaluation, it acts directly on the representation of the program produced by the parser. Compilation is the process of adding another step between the parsing and the running of a program, which transforms the program into something that can be evaluated more efficiently by doing as much work as possible in advance. For example, in well-designed languages it is obvious, for each use of a binding, which binding is being referred to, without actually running the program. This can be used to avoid looking up the binding by name every time it is accessed, instead directly fetching it from some predetermined memory location. Traditionally, compilation involves converting the program to machine code, the raw format that a computer’s processor can execute. But any process that converts a program to a different representation can be thought of as compilation. It would be possible to write an alternative evaluation strategy for Egg, one that first converts the program to a JavaScript program, uses `Function` to invoke the JavaScript compiler on it, and then runs the result. When done right, this would make Egg run very fast while still being quite simple to implement. If you are interested in this topic and willing to spend some time on it, I encourage you to try to implement such a compiler as an exercise. ## Cheating When we defined `if` and `while` , you probably noticed that they were more or less trivial wrappers around JavaScript’s own `if` and `while` . Similarly, the values in Egg are just regular old JavaScript values. If you compare the implementation of Egg, built on top of JavaScript, with the amount of work and complexity required to build a programming language directly on the raw functionality provided by a machine, the difference is huge. Regardless, this example hopefully gave you an impression of the way programming languages work. And when it comes to getting something done, cheating is more effective than doing everything yourself. Though the toy language in this chapter doesn’t do anything that couldn’t be done better in JavaScript, there are situations where writing small languages helps get real work done. Such a language does not have to resemble a typical programming language. 
If JavaScript didn’t come equipped with regular expressions, for example, you could write your own parser and evaluator for regular expressions. Or imagine you are building a giant robotic dinosaur and need to program its behavior. JavaScript might not be the most effective way to do this. You might instead opt for a language that looks like this: > behavior walk perform when destination ahead actions move left-foot move right-foot behavior attack perform when Godzilla in-view actions fire laser-eyes launch arm-rockets This is what is usually called a domain-specific language, a language tailored to express a narrow domain of knowledge. Such a language can be more expressive than a general-purpose language because it is designed to describe exactly the things that need to be described in its domain, and nothing else. ### Arrays Add support for arrays to Egg by adding the following three functions to the top scope: `array(...values)` to construct an array containing the argument values, `length(array)` to get an array’s length, and `element(array, n)` to fetch the nth element from an array. > // Modify these definitions... topScope.array = "..."; topScope.length = "..."; topScope.element = "..."; run(` hacer(definir(sum, fun(array, hacer(definir(i, 0), definir(sum, 0), while(<(i, length(array)), hacer(definir(sum, +(sum, element(array, i))), definir(i, +(i, 1)))), sum))), imprimir(sum(array(1, 2, 3)))) `); // → 6 ### Closure The way we have defined `fun` allows functions in Egg to reference the surrounding scope, allowing the function’s body to use local values that were visible at the time the function was defined, just like JavaScript functions do. The following program illustrates this: function `f` returns a function that adds its argument to `f` ’s argument, meaning that it needs access to the local scope inside `f` to be able to use binding `a` . > run(` hacer(definir(f, fun(a, fun(b, +(a, b)))), imprimir(f(4)(5))) `); // → 9 Go back to the definition of the `fun` form and explain which mechanism causes this to work. Again, we are riding along on a JavaScript mechanism to get the equivalent feature in Egg. Special forms are passed the local scope in which they are evaluated so that they can evaluate their subforms in that scope. The function returned by `fun` has access to the `scope` argument given to its enclosing function and uses that to create the function’s local scope when it is called. This means that the prototype of the local scope will be the scope in which the function was created, which makes it possible to access bindings in that scope from the function. This is all there is to implementing closure (though to compile it in a way that is actually efficient, you’d need to do some more work). ### Comments It would be nice if we could write comments in Egg. For example, whenever we find a hash sign ( `#` ), we could treat the rest of the line as a comment and ignore it, similar to `//` in JavaScript. We do not have to make any big changes to the parser to support this. We can simply change `skipSpace` to skip comments as if they are whitespace so that all the points where `skipSpace` is called will now also skip comments. Make this change. > // This is the old skipSpace. Modify it... 
function skipSpace(string) { let first = string.search(/\S/); if (first == -1) return ""; return string.slice(first); } console.log(parse("# hello\nx")); // → {type: "word", name: "x"} console.log(parse("a # one\n # two\n()")); // → {type: "apply", // operator: {type: "word", name: "a"}, // args: []} Make sure your solution handles multiple comments in a row, with potentially whitespace between or after them. A regular expression is probably the easiest way to solve this. Write something that matches “whitespace or a comment, zero or more times”. Use the `exec` or `match` method and look at the length of the first element in the returned array (the whole match) to find out how many characters to slice off. ### Fixing scope Currently, the only way to assign a binding a value is `definir` . This construct acts as a way both to define new bindings and to give existing ones a new value. This ambiguity causes a problem. When you try to give a nonlocal binding a new value, you will end up defining a local one with the same name instead. Some languages work like this by design, but I’ve always found it an awkward way to handle scope. Add a special form `set` , similar to `definir` , which gives a binding a new value, updating the binding in an outer scope if it doesn’t already exist in the inner scope. If the binding is not defined at all, throw a `ReferenceError` (another standard error type). The technique of representing scopes as simple objects, which has made things convenient so far, will get in your way a little at this point. You might want to use the `Object.getPrototypeOf` function, which returns the prototype of an object. Also remember that scopes do not derive from `Object.prototype` , so if you want to call `hasOwnProperty` on them, you have to use this clumsy expression: > Object.prototype.hasOwnProperty.call(scope, name); > specialForms.set = (args, scope) => { // Your code here. }; run(` hacer(definir(x, 4), definir(setx, fun(val, set(x, val))), setx(50), imprimir(x)) `); // → 50 run(`set(quux, true)`); // → Some kind of ReferenceError You will have to loop through one scope at a time, using `Object.getPrototypeOf` to go to the next outer scope. For each scope, use `hasOwnProperty` to find out whether the binding, indicated by the `name` property of the first argument to `set` , exists in that scope. If it does, set it to the result of evaluating the second argument to `set` and then return that value. If the outermost scope is reached ( `Object.getPrototypeOf` returns null) and we haven’t found the binding yet, it doesn’t exist, and an error should be thrown. Too bad! Same old story! Once you’ve finished building your house you notice you’ve accidentally learned something that you really should have known—before you started. When you open a web page in your browser, the browser retrieves the page’s HTML text and parses it, much like the way our parser from Chapter 12 parsed programs. The browser builds up a model of the document’s structure and uses this model to draw the page on the screen. This representation of the document is one of the toys that a JavaScript program has available in its sandbox. It is a data structure that you can read or modify. It acts as a live data structure: when it’s modified, the page on the screen is updated to reflect the changes. ## Document structure You can imagine an HTML document as a nested set of boxes. Tags such as `<body>` and `</body>` enclose other tags, which in turn contain other tags or text. 
Here’s the example document from the previous chapter: > <html> <head> <title>My home page</title> </head> <body> <h1>My home page</h1> <p>Hello, I am Marijn and this is my home page.</p> <p>I also wrote a book! Read it <a href="http://eloquentjavascript.net">here</a>.</p> </body> </html> The data structure the browser uses to represent the document follows this nested shape. For each box, there is an object, which we can interact with to find out things such as what HTML tag it represents and which boxes and text it contains. This representation is called the Document Object Model, or DOM for short. The global binding `document` gives us access to these objects. Its `documentElement` property refers to the object representing the `<html>` tag. Since every HTML document has a head and a body, it also has `head` and `body` properties, pointing at those elements. ## Trees Think back to the syntax trees from Chapter 12 for a moment. Their structures are strikingly similar to the structure of a browser’s document. Each node may refer to other nodes, children, which in turn may have their own children. This shape is typical of nested structures where elements can contain subelements that are similar to themselves. We call a data structure a tree when it has a branching structure, has no cycles (a node may not contain itself, directly or indirectly), and has a single, well-defined root. In the case of the DOM, `document.documentElement` serves as the root. Trees come up a lot in computer science. In addition to representing recursive structures such as HTML documents or programs, they are often used to maintain sorted sets of data because elements can usually be found or inserted more efficiently in a tree than in a flat array. A typical tree has different kinds of nodes. The syntax tree for the Egg language had identifiers, values, and application nodes. Application nodes may have children, whereas identifiers and values are leaves, or nodes without children. The same goes for the DOM. Nodes for elements, which represent HTML tags, determine the structure of the document. These can have child nodes. An example of such a node is `document.body` . Some of these children can be leaf nodes, such as pieces of text or comment nodes. Each DOM node object has a `nodeType` property, which contains a code (number) that identifies the type of node. Elements have code 1, which is also defined as the constant property `Node.ELEMENT_NODE` . Text nodes, representing a section of text in the document, get code 3 ( `Node.TEXT_NODE` ). Comments have code 8 ( `Node.COMMENT_NODE` ). Another way to visualize our document tree is as a diagram in which the leaves are text nodes and the arrows indicate parent-child relationships between nodes. ## The standard Using cryptic numeric codes to represent node types is not a very JavaScript-like thing to do. Later in this chapter, we’ll see that other parts of the DOM interface also feel cumbersome and alien. The reason for this is that the DOM wasn’t designed for just JavaScript. Rather, it tries to be a language-neutral interface that can be used in other systems as well—not just for HTML but also for XML, which is a generic data format with an HTML-like syntax. This is unfortunate. Standards are often useful. But in this case, the advantage (cross-language consistency) isn’t all that compelling. Having an interface that is properly integrated with the language you are using will save you more time than having a familiar interface across languages. 
As an example of this poor integration, consider the `childNodes` property that element nodes in the DOM have. This property holds an array-like object, with a `length` property and properties labeled by numbers to access the child nodes. But it is an instance of the `NodeList` type, not a real array, so it does not have methods such as `slice` and `map` . Then there are issues that are simply poor design. For example, there is no way to create a new node and immediately add children or attributes to it. Instead, you have to first create it and then add the children and attributes one by one, using side effects. Code that interacts heavily with the DOM tends to get long, repetitive, and ugly. But these flaws aren’t fatal. Since JavaScript allows us to create our own abstractions, it is possible to design improved ways to express the operations you are performing. Many libraries intended for browser programming come with such tools. ## Moving through the tree DOM nodes contain a wealth of links to other nearby nodes. Every node has a `parentNode` property that points to the node it is part of, if any. Likewise, every element node (node type 1) has a `childNodes` property that points to an array-like object holding its children. In theory, you could move anywhere in the tree using just these parent and child links. But JavaScript also gives you access to a number of additional convenience links. The `firstChild` and `lastChild` properties point to the first and last child elements or have the value `null` for nodes without children. Similarly, `previousSibling` and `nextSibling` point to adjacent nodes, which are nodes with the same parent that appear immediately before or after the node itself. For a first child, `previousSibling` will be null, and for a last child, `nextSibling` will be null. There’s also the `children` property, which is like `childNodes` but contains only element (type 1) children, not other types of child nodes. This can be useful when you aren’t interested in text nodes. When dealing with a nested data structure like this one, recursive functions are often useful. The following function scans a document for text nodes containing a given string and returns `true` when it has found one: > function talksAbout(node, string) { if (node.nodeType == Node.ELEMENT_NODE) { for (let i = 0; i < node.childNodes.length; i++) { if (talksAbout(node.childNodes[i], string)) { return true; } } return false; } else if (node.nodeType == Node.TEXT_NODE) { return node.nodeValue.indexOf(string) > -1; } } console.log(talksAbout(document.body, "book")); // → true Because `childNodes` is not a real array, we cannot loop over it with `for` / `of` and have to run over the index range using a regular `for` loop or use `Array.from` . The `nodeValue` property of a text node holds the string of text that it represents. ## Finding elements Navigating these links among parents, children, and siblings is often useful. But if we want to find a specific node in the document, reaching it by starting at `document.body` and following a fixed path of properties is a bad idea. Doing so bakes assumptions into our program about the precise structure of the document—a structure you might want to change later. Another complicating factor is that text nodes are created even for the whitespace between nodes. 
The example document’s `<body>` tag does not have just three children ( `<h1>` and two `<p>` elements) but actually has seven: those three, plus the spaces before, after, and between them. So if we want to get the `href` attribute of the link in that document, we don’t want to say something like “Get the second child of the sixth child of the document body”. It’d be better if we could say “Get the first link in the document”. And we can. > let link = document.body.getElementsByTagName("a")[0]; console.log(link.href); All element nodes have a `getElementsByTagName` method, which collects all elements with the given tag name that are descendants (direct or indirect children) of that node and returns them as an array-like object. To find a specific single node, you can give it an `id` attribute and use `document.getElementById` instead. > <p>My ostrich Gertrude:</p> <p><img id="gertrude" src="img/ostrich.png"></p> <script> let ostrich = document.getElementById("gertrude"); console.log(ostrich.src); </script> A third, similar method is `getElementsByClassName` , which, like `getElementsByTagName` , searches through the contents of an element node and retrieves all elements that have the given string in their `class` attribute. ## Changing the document Almost everything about the DOM data structure can be changed. The shape of the document tree can be modified by changing parent-child relationships. Nodes have a `remove` method to remove them from their current parent node. To add a child node to an element node, we can use `appendChild` , which puts it at the end of the list of children, or `insertBefore` , which inserts the node given as the first argument before the node given as the second argument. > <p>One</p> <p>Two</p> <p>Three</p> <script> let paragraphs = document.body.getElementsByTagName("p"); document.body.insertBefore(paragraphs[2], paragraphs[0]); </script> A node can exist in the document in only one place. Thus, inserting paragraph Three in front of paragraph One will first remove it from the end of the document and then insert it at the front, resulting in Three/One/Two. All operations that insert a node somewhere will, as a side effect, cause it to be removed from its current position (if it has one). The `replaceChild` method is used to replace a child node with another one. It takes as arguments two nodes: a new node and the node to be replaced. The replaced node must be a child of the element the method is called on. Note that both `replaceChild` and `insertBefore` expect the new node as their first argument. ## Creating nodes Say we want to write a script that replaces all images ( `<img>` tags) in the document with the text held in their `alt` attributes, which specifies an alternative textual representation of the image. This involves not only removing the images but adding a new text node to replace them. Text nodes are created with the `document.createTextNode` method. > <p>The <img src="img/cat.png" alt="Cat"> in the <img src="img/hat.png" alt="Hat">.</p> <p><button onclick="replaceImages()">Replace</button></p> <script> function replaceImages() { let images = document.body.getElementsByTagName("img"); for (let i = images.length - 1; i >= 0; i--) { let image = images[i]; if (image.alt) { let text = document.createTextNode(image.alt); image.parentNode.replaceChild(text, image); } } } </script> Given a string, `createTextNode` gives us a text node that we can insert into the document to make it show up on the screen. The loop that goes over the images starts at the end of the list. 
This is necessary because the node list returned by a method like `getElementsByTagName` (or a property like `childNodes` ) is live. That is, it is updated as the document changes. If we started from the front, removing the first image would cause the list to lose its first element so that the second time the loop repeats, where `i` is 1, it would stop because the length of the collection is now also 1. If you want a solid collection of nodes, as opposed to a live one, you can convert the collection to a real array by calling `Array.from` . > let arrayish = {0: "one", 1: "two", length: 2}; let array = Array.from(arrayish); console.log(array.map(s => s.toUpperCase())); // → ["ONE", "TWO"] To create element nodes, you can use the `document.createElement` method. This method takes a tag name and returns a new empty node of the given type. The following example defines a utility `elt` , which creates an element node and treats the rest of its arguments as children to that node. This function is then used to add an attribution to a quote. > <blockquote id="quote"> No book can ever be finished. While working on it we learn just enough to find it immature the moment we turn away from it. </blockquote> <script> function elt(type, ...children) { let node = document.createElement(type); for (let child of children) { if (typeof child != "string") node.appendChild(child); else node.appendChild(document.createTextNode(child)); } return node; } document.getElementById("quote").appendChild( elt("footer", "—", elt("strong", "<NAME>"), ", preface to the second edition of ", elt("em", "The Open Society and Its Enemies"), ", 1950")); </script> ## Attributes Some element attributes, such as `href` for links, can be accessed through a property of the same name on the element’s DOM object. This is the case for most commonly used standard attributes. But HTML allows you to set any attribute you want on nodes. This can be useful because it allows you to store extra information in a document. If you make up your own attribute names, though, such attributes will not be present as properties on the element’s node. Instead, you have to use the `getAttribute` and `setAttribute` methods to work with them. > <p data-classified="secret">The launch code is 00000000.</p> <p data-classified="unclassified">I have two feet.</p> <script> let paras = document.body.getElementsByTagName("p"); for (let para of Array.from(paras)) { if (para.getAttribute("data-classified") == "secret") { para.remove(); } } </script> It is recommended to prefix the names of such made-up attributes with `data-` to ensure they do not conflict with any other attributes. There is a commonly used attribute, `class` , which is a keyword in the JavaScript language. For historical reasons—some old JavaScript implementations could not handle property names that matched keywords—the property used to access this attribute is called `className` . You can also access it under its real name, `"class"` , by using the `getAttribute` and `setAttribute` methods. ## Layout You may have noticed that different types of elements are laid out differently. Some, such as paragraphs ( `<p>` ) or headings ( `<h1>` ), take up the whole width of the document and are rendered on separate lines. These are called block elements. Others, such as links ( `<a>` ) or the `<strong>` element, are rendered on the same line with their surrounding text. Such elements are called inline elements. 
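Whether a given element is being rendered as a block or as an inline element can also be checked from JavaScript through its computed style. The following is only a minimal sketch (it assumes the page contains at least one `<p>` and one `<a>` element) that reads the computed `display` value of each with the standard `getComputedStyle` function: > let paragraph = document.querySelector("p"); let link = document.querySelector("a"); console.log(getComputedStyle(paragraph).display); /* → block */ console.log(getComputedStyle(link).display); /* → inline */ The same technique works for other style properties, which can be handy when you need to know how an element is actually laid out rather than what its `style` attribute says.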
For any given document, browsers are able to compute a layout, which gives each element a size and position based on its type and content. This layout is then used to actually draw the document. The size and position of an element can be accessed from JavaScript. The `offsetWidth` and `offsetHeight` properties give you the space the element takes up in pixels. A pixel is the basic unit of measurement in the browser. It traditionally corresponds to the smallest dot that the screen can draw, but on modern displays, which can draw very small dots, that may no longer be the case, and a browser pixel may span multiple display dots. Similarly, `clientWidth` and `clientHeight` give you the size of the space inside the element, ignoring border width. > <p style="border: 3px solid red"> I'm boxed in </p> <script> let para = document.body.getElementsByTagName("p")[0]; console.log("clientHeight:", para.clientHeight); console.log("offsetHeight:", para.offsetHeight); </script> The most effective way to find the precise position of an element on the screen is the `getBoundingClientRect` method. It returns an object with `top` , `bottom` , `left` , and `right` properties, indicating the pixel positions of the sides of the element relative to the top left of the screen. If you want them relative to the whole document, you must add the current scroll position, which you can find in the `pageXOffset` and `pageYOffset` bindings. Laying out a document can be quite a lot of work. In the interest of speed, browser engines do not immediately re-layout a document every time you change it but wait as long as they can. When a JavaScript program that changed the document finishes running, the browser will have to compute a new layout to draw the changed document to the screen. When a program asks for the position or size of something by reading properties such as `offsetHeight` or calling `getBoundingClientRect` , providing correct information also requires computing a layout. A program that repeatedly alternates between reading DOM layout information and changing the DOM forces a lot of layout computations to happen and will consequently run very slowly. The following code is an example of this. It contains two different programs that build up a line of X characters 2,000 pixels wide and measures the time each one takes. > <p><span id="one"></span></p> <p><span id="two"></span></p> <script> function time(name, action) { let start = Date.now(); // Current time in milliseconds action(); console.log(name, "took", Date.now() - start, "ms"); } time("naive", () => { let target = document.getElementById("one"); while (target.offsetWidth < 2000) { target.appendChild(document.createTextNode("X")); } }); // → naive took 32 ms time("clever", function() { let target = document.getElementById("two"); target.appendChild(document.createTextNode("XXXXX")); let total = Math.ceil(2000 / (target.offsetWidth / 5)); target.firstChild.nodeValue = "X".repeat(total); }); // → clever took 1 ms </script> ## Styling We have seen that different HTML elements are drawn differently. Some are displayed as blocks, others inline. Some add styling— `<strong>` makes its content bold, and `<a>` makes it blue and underlines it. The way an `<img>` tag shows an image or an `<a>` tag causes a link to be followed when it is clicked is strongly tied to the element type. But we can change the styling associated with an element, such as the text color or underline. Here is an example that uses the `style` property: > <p><a href=".">Normal link</a></p> <p><a href="." 
style="color: green">Green link</a></p> A style attribute may contain one or more declarations, which are a property (such as `color` ) followed by a colon and a value (such as `green` ). When there is more than one declaration, they must be separated by semicolons, as in ``` "color: red; border: none" ``` . A lot of aspects of the document can be influenced by styling. For example, the `display` property controls whether an element is displayed as a block or an inline element. > This text is displayed <strong>inline</strong>, <strong style="display: block">as a block</strong>, and <strong style="display: none">not at all</strong>. The `block` tag will end up on its own line since block elements are not displayed inline with the text around them. The last tag is not displayed at all— `display: none` prevents an element from showing up on the screen. This is a way to hide elements. It is often preferable to removing them from the document entirely because it makes it easy to reveal them again later. JavaScript code can directly manipulate the style of an element through the element’s `style` property. This property holds an object that has properties for all possible style properties. The values of these properties are strings, which we can write to in order to change a particular aspect of the element’s style. > <p id="para" style="color: purple"> Nice text </p> <script> let para = document.getElementById("para"); console.log(para.style.color); para.style.color = "magenta"; </script> Some style property names contain hyphens, such as `font-family` . Because such property names are awkward to work with in JavaScript (you’d have to say `style["font-family"]` ), the property names in the `style` object for such properties have their hyphens removed and the letters after them capitalized ( `style.fontFamily` ). ## Cascading styles The styling system for HTML is called CSS, for Cascading Style Sheets. A style sheet is a set of rules for how to style elements in a document. It can be given inside a `<style>` tag. > <style> strong { font-style: italic; color: gray; } </style> <p>Now <strong>strong text</strong> is italic and gray.</p> The cascading in the name refers to the fact that multiple such rules are combined to produce the final style for an element. In the example, the default styling for `<strong>` tags, which gives them `font-weight: bold` , is overlaid by the rule in the `<style>` tag, which adds `font-style` and `color` . When multiple rules define a value for the same property, the most recently read rule gets a higher precedence and wins. So if the rule in the `<style>` tag included `font-weight: normal` , contradicting the default `font-weight` rule, the text would be normal, not bold. Styles in a `style` attribute applied directly to the node have the highest precedence and always win. It is possible to target things other than tag names in CSS rules. A rule for `.abc` applies to all elements with `"abc"` in their `class` attribute. A rule for `#xyz` applies to the element with an `id` attribute of `"xyz"` (which should be unique within the document). > .subtle { color: gray; font-size: 80%; } #header { background: blue; color: white; } /* p elements with id main and with classes a and b */ p#main.a.b { margin-bottom: 20px; } The precedence rule favoring the most recently defined rule applies only when the rules have the same specificity. 
A rule’s specificity is a measure of how precisely it describes matching elements, determined by the number and kind (tag, class, or ID) of element aspects it requires. For example, a rule that targets `p.a` is more specific than rules that target `p` or just `.a` and would thus take precedence over them. The notation `p > a {…}` applies the given styles to all `<a>` tags that are direct children of `<p>` tags. Similarly, `p a {…}` applies to all `<a>` tags inside `<p>` tags, whether they are direct or indirect children. ## Query selectors We won’t be using style sheets all that much in this book. Understanding them is helpful when programming in the browser, but they are complicated enough to warrant a separate book. The main reason I introduced selector syntax—the notation used in style sheets to determine which elements a set of styles apply to—is that we can use this same mini-language as an effective way to find DOM elements. The `querySelectorAll` method, which is defined both on the `document` object and on element nodes, takes a selector string and returns a `NodeList` containing all the elements that it matches. > <p>And if you go chasing <span class="animal">rabbits</span></p> <p>And you know you're going to fall</p> <p>Tell 'em a <span class="character">hookah smoking <span class="animal">caterpillar</span></span></p> <p>Has given you the call</p> <script> function count(selector) { return document.querySelectorAll(selector).length; } console.log(count("p")); // All <p> elements // → 4 console.log(count(".animal")); // Class animal // → 2 console.log(count("p .animal")); // Animal inside of <p> // → 2 console.log(count("p > .animal")); // Direct child of <p> // → 1 </script> Unlike methods such as `getElementsByTagName` , the object returned by `querySelectorAll` is not live. It won’t change when you change the document. It is still not a real array, though, so you still need to call `Array.from` if you want to treat it like one. The `querySelector` method (without the `All` part) works in a similar way. This one is useful if you want a specific, single element. It will return only the first matching element or null when no element matches. ## Positioning and animating The `position` style property influences layout in a powerful way. By default it has a value of `static` , meaning the element sits in its normal place in the document. When it is set to `relative` , the element still takes up space in the document, but now the `top` and `left` style properties can be used to move it relative to that normal place. When `position` is set to `absolute` , the element is removed from the normal document flow—that is, it no longer takes up space and may overlap with other elements. Also, its `top` and `left` properties can be used to absolutely position it relative to the top-left corner of the nearest enclosing element whose `position` property isn’t `static` , or relative to the document if no such enclosing element exists. We can use this to create an animation. 
The following document displays a picture of a cat that moves around in an ellipse: > <p style="text-align: center"> <img src="img/cat.png" style="position: relative"> </p> <script> let cat = document.querySelector("img"); let angle = Math.PI / 2; function animate(time, lastTime) { if (lastTime != null) { angle += (time - lastTime) * 0.001; } cat.style.top = (Math.sin(angle) * 20) + "px"; cat.style.left = (Math.cos(angle) * 200) + "px"; requestAnimationFrame(newTime => animate(newTime, time)); } requestAnimationFrame(animate); </script> Our picture is centered on the page and given a `position` of `relative` . We’ll repeatedly update that picture’s `top` and `left` styles to move it. The script uses `requestAnimationFrame` to schedule the `animate` function to run whenever the browser is ready to repaint the screen. The `animate` function itself again calls `requestAnimationFrame` to schedule the next update. When the browser window (or tab) is active, this will cause updates to happen at a rate of about 60 per second, which tends to produce a good-looking animation. If we just updated the DOM in a loop, the page would freeze, and nothing would show up on the screen. Browsers do not update their display while a JavaScript program is running, nor do they allow any interaction with the page. This is why we need `requestAnimationFrame` —it lets the browser know that we are done for now, and it can go ahead and do the things that browsers do, such as updating the screen and responding to user actions. The animation function is passed the current time as an argument. To ensure that the motion of the cat per millisecond is stable, it bases the speed at which the angle changes on the difference between the current time and the last time the function ran. If it just moved the angle by a fixed amount per step, the motion would stutter if, for example, another heavy task running on the same computer were to prevent the function from running for a fraction of a second. Moving in circles is done using the trigonometry functions `Math.cos` and `Math.sin` . For those who aren’t familiar with these, I’ll briefly introduce them since we will occasionally use them in this book. `Math.cos` and `Math.sin` are useful for finding points that lie on a circle around point (0,0) with a radius of one. Both functions interpret their argument as the position on this circle, with zero denoting the point on the far right of the circle, going clockwise until 2π (about 6.28) has taken us around the whole circle. `Math.cos` tells you the x-coordinate of the point that corresponds to the given position, and `Math.sin` yields the y-coordinate. Positions (or angles) greater than 2π or less than 0 are valid—the rotation repeats so that a+2π refers to the same angle as a. This unit for measuring angles is called radians—a full circle is 2π radians, similar to how it is 360 degrees when measuring in degrees. The constant π is available as `Math.PI` in JavaScript. The cat animation code keeps a counter, `angle` , for the current angle of the animation and increments it every time the `animate` function is called. It can then use this angle to compute the current position of the image element. The `top` style is computed with `Math.sin` and multiplied by 20, which is the vertical radius of our ellipse. The `left` style is based on `Math.cos` and multiplied by 200 so that the ellipse is much wider than it is high. Note that styles usually need units. 
In this case, we have to append `"px"` to the number to tell the browser that we are counting in pixels (as opposed to centimeters, “ems”, or other units). This is easy to forget. Using numbers without units will result in your style being ignored—unless the number is 0, which always means the same thing, regardless of its unit. JavaScript programs may inspect and interfere with the document that the browser is displaying through a data structure called the DOM. This data structure represents the browser’s model of the document, and a JavaScript program can modify it to change the visible document. The DOM is organized like a tree, in which elements are arranged hierarchically according to the structure of the document. The objects representing elements have properties such as `parentNode` and `childNodes` , which can be used to navigate through this tree. The way a document is displayed can be influenced by styling, both by attaching styles to nodes directly and by defining rules that match certain nodes. There are many different style properties, such as `color` or `display` . JavaScript code can manipulate an element’s style directly through its `style` property. ### Build a table An HTML table is built with the following tag structure: > <table> <tr> <th>name</th> <th>height</th> <th>place</th> </tr> <tr> <td>Kilimanjaro</td> <td>5895</td> <td>Tanzania</td> </tr> </table> For each row, the `<table>` tag contains a `<tr>` tag. Inside of these `<tr>` tags, we can put cell elements: either heading cells ( `<th>` ) or regular cells ( `<td>` ). Given a data set of mountains, an array of objects with `name` , `height` , and `place` properties, generate the DOM structure for a table that enumerates the objects. It should have one column per key and one row per object, plus a header row with `<th>` elements at the top, listing the column names. Write this so that the columns are automatically derived from the objects, by taking the property names of the first object in the data. Add the resulting table to the element with an `id` attribute of `"mountains"` so that it becomes visible in the document. Once you have this working, right-align cells that contain number values by setting their `style.textAlign` property to `"right"` . > <h1>Mountains</h1> <div id="mountains"></div> <script> const MOUNTAINS = [ {name: "Kilimanjaro", height: 5895, place: "Tanzania"}, {name: "Everest", height: 8848, place: "Nepal"}, {name: "<NAME>", height: 3776, place: "Japan"}, {name: "Vaalserberg", height: 323, place: "Netherlands"}, {name: "Denali", height: 6168, place: "United States"}, {name: "Popocatepetl", height: 5465, place: "Mexico"}, {name: "<NAME>", height: 4808, place: "Italy/France"} ]; // Your code here </script> You can use `document.createElement` to create new element nodes, `document.createTextNode` to create text nodes, and the `appendChild` method to put nodes into other nodes. You’ll want to loop over the key names once to fill in the top row and then again for each object in the array to construct the data rows. To get an array of key names from the first object, `Object.keys` will be useful. To add the table to the correct parent node, you can use `document.getElementById` or `document.querySelector` to find the node with the proper `id` attribute. ### Elements by tag name The `document.getElementsByTagName` method returns all child elements with a given tag name. Implement your own version of this as a function that takes a node and a string (the tag name) as arguments and returns an array containing all descendant element nodes with the given tag name. 
To find the tag name of an element, use its `nodeName` property. But note that this will return the tag name in all uppercase. Use the `toLowerCase` or `toUpperCase` string methods to compensate for this. > <h1>Heading with a <span>span</span> element.</h1> <p>A paragraph with <span>one</span>, <span>two</span> spans.</p> <script> function byTagName(node, tagName) { // Your code here. } console.log(byTagName(document.body, "h1").length); // → 1 console.log(byTagName(document.body, "span").length); // → 3 let para = document.querySelector("p"); console.log(byTagName(para, "span").length); // → 2 </script> The solution is most easily expressed with a recursive function, similar to the `talksAbout` function defined earlier in this chapter. You could call `byTagName` itself recursively, concatenating the resulting arrays to produce the output. Or you could create an inner function that calls itself recursively and that has access to an array binding defined in the outer function, to which it can add the matching elements it finds. Don’t forget to call the inner function once from the outer function to start the process. The recursive function must check the node type. Here we are interested only in node type 1 ( `Node.ELEMENT_NODE` ). For such nodes, we must loop over their children and, for each child, see whether the child matches the query while also doing a recursive call on it to inspect its own children. ### The cat’s hat Extend the cat animation defined earlier so that both the cat and his hat ( `<img src="img/hat.png">` ) orbit at opposite sides of the ellipse. Or make the hat circle around the cat. Or alter the animation in some other interesting way. To make positioning multiple objects easier, it is probably a good idea to switch to absolute positioning. This means that `top` and `left` are counted relative to the top left of the document. To avoid using negative coordinates, which would cause the image to move outside of the visible page, you can add a fixed number of pixels to the position values. > <style>body { min-height: 200px }</style> <img src="img/cat.png" id="cat" style="position: absolute"> <img src="img/hat.png" id="hat" style="position: absolute"> <script> let cat = document.querySelector("#cat"); let hat = document.querySelector("#hat"); let angle = 0; let lastTime = null; function animate(time) { if (lastTime != null) angle += (time - lastTime) * 0.001; lastTime = time; cat.style.top = (Math.sin(angle) * 40 + 40) + "px"; cat.style.left = (Math.cos(angle) * 200 + 230) + "px"; // Your extensions here. requestAnimationFrame(animate); } requestAnimationFrame(animate); </script> You have power over your mind—not outside events. Realize this, and you will find strength. Some programs work with direct user input, such as mouse and keyboard actions. That kind of input isn’t available as a well-organized data structure—it comes in piece by piece, in real time, and the program is expected to respond to it as it happens. ## Event handlers Imagine an interface where the only way to find out whether a key on the keyboard is being pressed is to read the current state of that key. To be able to react to keypresses, you would have to constantly read the key’s state so that you’d catch it before it’s released again. It would be dangerous to perform other time-intensive computations since you might miss a keypress. Some primitive machines do handle input like that. A step up from this would be for the hardware or operating system to notice the keypress and put it in a queue. 
A program can then periodically check the queue for new events and react to what it finds there. Of course, it has to remember to look at the queue, and to do it often, because any time between the key being pressed and the program noticing the event will cause the software to feel unresponsive. This approach is called polling. Most programmers prefer to avoid it. A better mechanism is for the system to actively notify our code when an event occurs. Browsers do this by allowing us to register functions as handlers for specific events. > <p>Click this document to activate the handler.</p> <script> window.addEventListener("click", () => { console.log("You knocked?"); }); </script> The `window` binding refers to a built-in object provided by the browser. It represents the browser window that contains the document. Calling its `addEventListener` method registers the second argument to be called whenever the event described by its first argument occurs. ## Events and DOM nodes Each browser event handler is registered in a context. In the previous example we called `addEventListener` on the `window` object to register a handler for the whole window. Such a method can also be found on DOM elements and some other types of objects. Event listeners are called only when the event happens in the context of the object they are registered on. > <button>Click me</button> <p>No handler here.</p> <script> let button = document.querySelector("button"); button.addEventListener("click", () => { console.log("Button clicked."); }); </script> That example attaches a handler to the button node. Clicks on the button cause that handler to run, but clicks on the rest of the document do not. Giving a node an `onclick` attribute has a similar effect. This works for most types of events—you can attach a handler through the attribute whose name is the event name with `on` in front of it. But a node can have only one `onclick` attribute, so you can register only one handler per node that way. The `addEventListener` method allows you to add any number of handlers so that it is safe to add handlers even if there is already another handler on the element. The `removeEventListener` method, called with arguments similar to `addEventListener` , removes a handler. > <button>Act-once button</button> <script> let button = document.querySelector("button"); function once() { console.log("Done."); button.removeEventListener("click", once); } button.addEventListener("click", once); </script> The function given to `removeEventListener` has to be the same function value that was given to `addEventListener` . So, to unregister a handler, you’ll want to give the function a name ( `once` , in the example) to be able to pass the same function value to both methods. ## Event objects Though we have ignored it so far, event handler functions are passed an argument: the event object. This object holds additional information about the event. For example, if we want to know which mouse button was pressed, we can look at the event object’s `button` property. > <button>Click me any way you want</button> <script> let button = document.querySelector("button"); button.addEventListener("mousedown", event => { if (event.button == 0) { console.log("Left button"); } else if (event.button == 1) { console.log("Middle button"); } else if (event.button == 2) { console.log("Right button"); } }); </script> The information stored in an event object differs per type of event. We’ll discuss different types later in the chapter. 
The object’s `type` property always holds a string identifying the event (such as `"click"` or `"mousedown"` ). ## Propagation For most event types, handlers registered on nodes with children will also receive events that happen in the children. If a button inside a paragraph is clicked, event handlers on the paragraph will also see the click event. But if both the paragraph and the button have a handler, the more specific handler—the one on the button—gets to go first. The event is said to propagate outward, from the node where it happened to that node’s parent node and on to the root of the document. Finally, after all handlers registered on a specific node have had their turn, handlers registered on the whole window get a chance to respond to the event. At any point, an event handler can call the `stopPropagation` method on the event object to prevent handlers further up from receiving the event. This can be useful when, for example, you have a button inside another clickable element and you don’t want clicks on the button to activate the outer element’s click behavior. The following example registers `"mousedown"` handlers on both a button and the paragraph around it. When clicked with the right mouse button, the handler for the button calls `stopPropagation` , which will prevent the handler on the paragraph from running. When the button is clicked with another mouse button, both handlers will run. > <p>A paragraph with a <button>button</button>.</p> <script> let para = document.querySelector("p"); let button = document.querySelector("button"); para.addEventListener("mousedown", () => { console.log("Handler for paragraph."); }); button.addEventListener("mousedown", event => { console.log("Handler for button."); if (event.button == 2) event.stopPropagation(); }); </script> Most event objects have a `target` property that refers to the node where they originated. You can use this property to ensure that you’re not accidentally handling something that propagated up from a node you do not want to handle. It is also possible to use the `target` property to cast a wide net for a specific type of event. For example, if you have a node containing a long list of buttons, it may be more convenient to register a single click handler on the outer node and have it use the `target` property to figure out whether a button was clicked, rather than register individual handlers on all of the buttons. > <button>A</button> <button>B</button> <button>C</button> <script> document.body.addEventListener("click", event => { if (event.target.nodeName == "BUTTON") { console.log("Clicked", event.target.textContent); } }); </script> ## Default actions Many events have a default action associated with them. If you click a link, you will be taken to the link’s target. If you press the down arrow, the browser will scroll the page down. If you right-click, you’ll get a context menu. And so on. For most types of events, the JavaScript event handlers are called before the default behavior takes place. If the handler doesn’t want this normal behavior to happen, typically because it has already taken care of handling the event, it can call the `preventDefault` method on the event object. This can be used to implement your own keyboard shortcuts or context menu. It can also be used to obnoxiously interfere with the behavior that users expect. 
For example, here is a link that cannot be followed: > <a href="https://developer.mozilla.org/">MDN</a> <script> let link = document.querySelector("a"); link.addEventListener("click", event => { console.log("Nope."); event.preventDefault(); }); </script> Try not to do such things unless you have a really good reason to. It’ll be unpleasant for people who use your page when expected behavior is broken. Depending on the browser, some events can’t be intercepted at all. On Chrome, for example, the keyboard shortcut to close the current tab (control-W or command-W) cannot be handled by JavaScript. ## Key events When a key on the keyboard is pressed, your browser fires a `"keydown"` event. When it is released, you get a `"keyup"` event. > <p>This page turns violet when you hold the V key.</p> <script> window.addEventListener("keydown", event => { if (event.key == "v") { document.body.style.background = "violet"; } }); window.addEventListener("keyup", event => { if (event.key == "v") { document.body.style.background = ""; } }); </script> Despite its name, `"keydown"` fires not only when the key is physically pushed down. When a key is pressed and held, the event fires again every time the key repeats. Sometimes you have to be careful about this. For example, if you add a button to the DOM when a key is pressed and remove it again when the key is released, you might accidentally add hundreds of buttons when the key is held down longer. The example looked at the `key` property of the event object to see which key the event is about. This property holds a string that, for most keys, corresponds to the thing that pressing that key would type. For special keys such as enter, it holds a string that names the key ( `"Enter"` , in this case). If you hold shift while pressing a key, that might also influence the name of the key— `"v"` becomes `"V"` , and `"1"` may become `"!"` , if that is what pressing shift-1 produces on your keyboard. Modifier keys such as shift, control, alt, and meta (command on Mac) generate key events just like normal keys. But when looking for key combinations, you can also find out whether these keys are held down by looking at the `shiftKey` , `ctrlKey` , `altKey` , and `metaKey` properties of keyboard and mouse events. > <p>Press Control-Space to continue.</p> <script> window.addEventListener("keydown", event => { if (event.key == " " && event.ctrlKey) { console.log("Continuing!"); } }); </script> The DOM node where a key event originates depends on the element that has focus when the key is pressed. Most nodes cannot have focus unless you give them a `tabindex` attribute, but things like links, buttons, and form fields can. We’ll come back to form fields in Chapter 18. When nothing in particular has focus, `document.body` acts as the target node of key events. When the user is typing text, using key events to figure out what is being typed is problematic. Some platforms, most notably the virtual keyboard on Android phones, don’t fire key events. But even when you have an old-fashioned keyboard, some types of text input don’t match key presses in a straightforward way, such as input method editor (IME) software used by people whose scripts don’t fit on a keyboard, where multiple key strokes are combined to create characters. To notice when something was typed, elements that you can type into, such as the `<input>` and `<textarea>` tags, fire `"input"` events whenever the user changes their content. 
To get the actual content that was typed, it is best to directly read it from the focused field. Chapter 18 will show how.

## Pointer events

There are currently two widely used ways to point at things on a screen: mice (including devices that act like mice, such as touchpads and trackballs) and touchscreens. These produce different kinds of events.

### Mouse clicks

Pressing a mouse button causes a number of events to fire. The `"mousedown"` and `"mouseup"` events are similar to `"keydown"` and `"keyup"` and fire when the button is pressed and released. These happen on the DOM nodes that are immediately below the mouse pointer when the event occurs. After the `"mouseup"` event, a `"click"` event fires on the most specific node that contained both the press and the release of the button. For example, if I press down the mouse button on one paragraph and then move the pointer to another paragraph and release the button, the `"click"` event will happen on the element that contains both those paragraphs. If two clicks happen close together, a `"dblclick"` (double-click) event also fires, after the second click event. To get precise information about the place where a mouse event happened, you can look at its `clientX` and `clientY` properties, which contain the event’s coordinates (in pixels) relative to the top-left corner of the window, or `pageX` and `pageY` , which are relative to the top-left corner of the whole document (which may be different when the window has been scrolled). The following implements a primitive drawing program. Every time you click the document, it adds a dot under your mouse pointer. See Chapter 19 for a less primitive drawing program.

> <style> body { height: 200px; background: beige; } .dot { height: 8px; width: 8px; border-radius: 4px; /* rounds corners */ background: blue; position: absolute; } </style> <script> window.addEventListener("click", event => { let dot = document.createElement("div"); dot.className = "dot"; dot.style.left = (event.pageX - 4) + "px"; dot.style.top = (event.pageY - 4) + "px"; document.body.appendChild(dot); }); </script>

### Mouse motion

Every time the mouse pointer moves, a `"mousemove"` event is fired. This event can be used to track the position of the mouse. A common situation in which this is useful is when implementing some form of mouse-dragging functionality. As an example, the following program displays a bar and sets up event handlers so that dragging to the left or right on this bar makes it narrower or wider:

> <p>Drag the bar to change its width:</p> <div style="background: orange; width: 60px; height: 20px"> </div> <script> let lastX; // Tracks the last observed mouse X position
let bar = document.querySelector("div"); bar.addEventListener("mousedown", event => { if (event.button == 0) { lastX = event.clientX; window.addEventListener("mousemove", moved); event.preventDefault(); // Prevent selection
} }); function moved(event) { if (event.buttons == 0) { window.removeEventListener("mousemove", moved); } else { let dist = event.clientX - lastX; let newWidth = Math.max(10, bar.offsetWidth + dist); bar.style.width = newWidth + "px"; lastX = event.clientX; } } </script>

Note that the `"mousemove"` handler is registered on the whole window. Even if the mouse goes outside of the bar during resizing, as long as the button is held we still want to update its size. We must stop resizing the bar when the mouse button is released.
For that, we can use the `buttons` property (note the plural), which tells us about the buttons that are currently held down. When this is zero, no buttons are down. When buttons are held, its value is the sum of the codes for those buttons—the left button has code 1, the right button 2, and the middle one 4. That way, you can check whether a given button is pressed by taking the remainder of the value of `buttons` and its code. Note that the order of these codes is different from the one used by `button` , where the middle button came before the right one. As mentioned, consistency isn’t really a strong point of the browser’s programming interface. ### Touch events The style of graphical browser that we use was designed with mouse interfaces in mind, at a time where touchscreens were rare. To make the Web “work” on early touchscreen phones, browsers for those devices pretended, to a certain extent, that touch events were mouse events. If you tap your screen, you’ll get `"mousedown"` , `"mouseup"` , and `"click"` events. But this illusion isn’t very robust. A touchscreen works differently from a mouse: it doesn’t have multiple buttons, you can’t track the finger when it isn’t on the screen (to simulate `"mousemove"` ), and it allows multiple fingers to be on the screen at the same time. Mouse events cover touch interaction only in straightforward cases—if you add a `"click"` handler to a button, touch users will still be able to use it. But something like the resizeable bar in the previous example does not work on a touchscreen. There are specific event types fired by touch interaction. When a finger starts touching the screen, you get a `"touchstart"` event. When it is moved while touching, `"touchmove"` events fire. Finally, when it stops touching the screen, you’ll see a `"touchend"` event. Because many touchscreens can detect multiple fingers at the same time, these events don’t have a single set of coordinates associated with them. Rather, their event objects have a `touches` property, which holds an array-like object of points, each of which has its own `clientX` , `clientY` , `pageX` , and `pageY` properties. You could do something like this to show red circles around every touching finger: > <style> dot { position: absolute; display: block; border: 2px solid red; border-radius: 50px; height: 100px; width: 100px; } </style> <p>Touch this page</p> <script> function update(event) { for (let dot; dot = document.querySelector("dot");) { dot.remove(); } for (let i = 0; i < event.touches.length; i++) { let {pageX, pageY} = event.touches[i]; let dot = document.createElement("dot"); dot.style.left = (pageX - 50) + "px"; dot.style.top = (pageY - 50) + "px"; document.body.appendChild(dot); } } window.addEventListener("touchstart", update); window.addEventListener("touchmove", update); window.addEventListener("touchend", update); </script> You’ll often want to call `preventDefault` in touch event handlers to override the browser’s default behavior (which may include scrolling the page on swiping) and to prevent the mouse events from being fired, for which you may also have a handler. ## Scroll events Whenever an element is scrolled, a `"scroll"` event is fired on it. This has various uses, such as knowing what the user is currently looking at (for disabling off-screen animations or sending spy reports to your evil headquarters) or showing some indication of progress (by highlighting part of a table of contents or showing a page number). 
The following example draws a progress bar above the document and updates it to fill up as you scroll down: > <style> #progress { border-bottom: 2px solid blue; width: 0; position: fixed; top: 0; left: 0; } </style> <div id="progress"></div> <script> // Create some content document.body.appendChild(document.createTextNode( "supercalifragilisticexpialidocious ".repeat(1000))); let bar = document.querySelector("#progress"); window.addEventListener("scroll", () => { let max = document.body.scrollHeight - innerHeight; bar.style.width = `${(pageYOffset / max) * 100}%`; }); </script> Giving an element a `position` of `fixed` acts much like an `absolute` position but also prevents it from scrolling along with the rest of the document. The effect is to make our progress bar stay at the top. Its width is changed to indicate the current progress. We use `%` , rather than `px` , as a unit when setting the width so that the element is sized relative to the page width. The global `innerHeight` binding gives us the height of the window, which we have to subtract from the total scrollable height—you can’t keep scrolling when you hit the bottom of the document. There’s also an `innerWidth` for the window width. By dividing `pageYOffset` , the current scroll position, by the maximum scroll position and multiplying by 100, we get the percentage for the progress bar. Calling `preventDefault` on a scroll event does not prevent the scrolling from happening. In fact, the event handler is called only after the scrolling takes place. ## Focus events When an element gains focus, the browser fires a `"focus"` event on it. When it loses focus, the element gets a `"blur"` event. Unlike the events discussed earlier, these two events do not propagate. A handler on a parent element is not notified when a child element gains or loses focus. The following example displays help text for the text field that currently has focus: > <p>Name: <input type="text" data-help="Your full name"></p> <p>Age: <input type="text" data-help="Your age in years"></p> <p id="help"></p> <script> let help = document.querySelector("#help"); let fields = document.querySelectorAll("input"); for (let field of Array.from(fields)) { field.addEventListener("focus", event => { let text = event.target.getAttribute("data-help"); help.textContent = text; }); field.addEventListener("blur", event => { help.textContent = ""; }); } </script> The window object will receive `"focus"` and `"blur"` events when the user moves from or to the browser tab or window in which the document is shown. ## Load event When a page finishes loading, the `"load"` event fires on the window and the document body objects. This is often used to schedule initialization actions that require the whole document to have been built. Remember that the content of `<script>` tags is run immediately when the tag is encountered. This may be too soon, for example when the script needs to do something with parts of the document that appear after the `<script>` tag. Elements such as images and script tags that load an external file also have a `"load"` event that indicates the files they reference were loaded. Like the focus-related events, loading events do not propagate. When a page is closed or navigated away from (for example, by following a link), a `"beforeunload"` event fires. The main use of this event is to prevent the user from accidentally losing work by closing a document. 
If you prevent the default behavior on this event and set the `returnValue` property on the event object to a string, the browser will show the user a dialog asking if they really want to leave the page. That dialog might include your string, but because some malicious sites try to use these dialogs to confuse people into staying on their page to look at dodgy weight loss ads, most browsers no longer display them.

## Events and the event loop

In the context of the event loop, as discussed in Chapter 11, browser event handlers behave like other asynchronous notifications. They are scheduled when the event occurs but must wait for other scripts that are running to finish before they get a chance to run. The fact that events can be processed only when nothing else is running means that, if the event loop is tied up with other work, any interaction with the page (which happens through events) will be delayed until there’s time to process it. So if you schedule too much work, either with long-running event handlers or with lots of short-running ones, the page will become slow and cumbersome to use. For cases where you really do want to do some time-consuming thing in the background without freezing the page, browsers provide something called web workers. A worker is a JavaScript process that runs alongside the main script, on its own timeline. Imagine that squaring a number is a heavy, long-running computation that we want to perform in a separate thread. We could write a file called `code/squareworker.js` that responds to messages by computing a square and sending a message back.

> addEventListener("message", event => { postMessage(event.data * event.data); });

To avoid the problems of having multiple threads touching the same data, workers do not share their global scope or any other data with the main script’s environment. Instead, you have to communicate with them by sending messages back and forth. This code spawns a worker running that script, sends it a few messages, and outputs the responses.

> let squareWorker = new Worker("code/squareworker.js"); squareWorker.addEventListener("message", event => { console.log("The worker responded:", event.data); }); squareWorker.postMessage(10); squareWorker.postMessage(24);

The `postMessage` function sends a message, which will cause a `"message"` event to fire in the receiver. The script that created the worker sends and receives messages through the `Worker` object, whereas the worker talks to the script that created it by sending and listening directly on its global scope. Only values that can be represented as JSON can be sent as messages—the other side will receive a copy of them, rather than the value itself.

## Timers

We saw the `setTimeout` function in Chapter 11. It schedules another function to be called later, after a given number of milliseconds. Sometimes you need to cancel a function you have scheduled. This is done by storing the value returned by `setTimeout` and calling `clearTimeout` on it.

> let bombTimer = setTimeout(() => { console.log("BOOM!"); }, 500); if (Math.random() < 0.5) { // 50% chance
console.log("Defused."); clearTimeout(bombTimer); }

The `cancelAnimationFrame` function works in the same way as `clearTimeout` —calling it on a value returned by `requestAnimationFrame` will cancel that frame (assuming it hasn’t already been called). A similar set of functions, `setInterval` and `clearInterval` , are used to set timers that should repeat every X milliseconds.
> let ticks = 0; let clock = setInterval(() => { console.log("tick", ticks++); if (ticks == 10) { clearInterval(clock); console.log("stop."); } }, 200);

## Debouncing

Some types of events have the potential to fire rapidly, many times in a row (the `"mousemove"` and `"scroll"` events, for example). When handling such events, you must be careful not to do anything too time-consuming or your handler will take up so much time that interaction with the document starts to feel slow. If you do need to do something nontrivial in such a handler, you can use `setTimeout` to make sure you are not doing it too often. This is usually called debouncing the event. There are several slightly different approaches to this. In the first example, we want to react when the user has typed something, but we don’t want to do it immediately for every input event. When they are typing quickly, we just want to wait until a pause occurs. Instead of immediately performing an action in the event handler, we set a timeout. We also clear the previous timeout (if any) so that when events occur close together (closer than our timeout delay), the timeout from the previous event will be canceled.

> <textarea>Type something here...</textarea> <script> let textarea = document.querySelector("textarea"); let timeout; textarea.addEventListener("input", () => { clearTimeout(timeout); timeout = setTimeout(() => console.log("Typed!"), 500); }); </script>

Giving an undefined value to `clearTimeout` or calling it on a timeout that has already fired has no effect. Thus, we don’t have to be careful about when to call it, and we simply do so for every event. We can use a slightly different pattern if we want to space responses so that they’re separated by at least a certain length of time but want to fire them during a series of events, not just afterward. For example, we might want to respond to `"mousemove"` events by showing the current coordinates of the mouse but only every 250 milliseconds.

> <script> let scheduled = null; window.addEventListener("mousemove", event => { if (!scheduled) { setTimeout(() => { document.body.textContent = `Mouse at ${scheduled.pageX}, ${scheduled.pageY}`; scheduled = null; }, 250); } scheduled = event; }); </script>

Event handlers make it possible to detect and react to events happening in our web page. The `addEventListener` method is used to register such a handler. Each event has a type ( `"keydown"` , `"focus"` , and so on) that identifies it. Most events are called on a specific DOM element and then propagate to that element’s ancestors, allowing handlers associated with those elements to handle them. When an event handler is called, it is passed an event object with additional information about the event. This object also has methods that allow us to stop further propagation ( `stopPropagation` ) and prevent the browser’s default handling of the event ( `preventDefault` ). Pressing a key fires `"keydown"` and `"keyup"` events. Pressing a mouse button fires `"mousedown"` , `"mouseup"` , and `"click"` events. Moving the mouse fires `"mousemove"` events. Touchscreen interaction will result in `"touchstart"` , `"touchmove"` , and `"touchend"` events. Scrolling can be detected with the `"scroll"` event, and focus changes can be detected with the `"focus"` and `"blur"` events. When the document finishes loading, a `"load"` event fires on the window.

### Balloon

Write a page that displays a balloon (using the balloon emoji, 🎈).
When you press the up arrow, it should inflate (grow) 10 percent, and when you press the down arrow, it should deflate (shrink) 10 percent. You can control the size of text (emoji are text) by setting the `font-size` CSS property ( `style.fontSize` ) on its parent element. Remember to include a unit in the value—for example, pixels ( `10px` ). The key names of the arrow keys are `"ArrowUp"` and `"ArrowDown"` . Make sure the keys change only the balloon, without scrolling the page. When that works, add a feature where, if you blow up the balloon past a certain size, it explodes. In this case, exploding means that it is replaced with an 💥 emoji, and the event handler is removed (so that you can’t inflate or deflate the explosion).

> <p>🎈</p> <script>
// Your code here
</script>

You’ll want to register a handler for the `"keydown"` event and look at `event.key` to figure out whether the up or down arrow key was pressed. The current size can be kept in a binding so that you can base the new size on it. It’ll be helpful to define a function that updates the size—both the binding and the style of the balloon in the DOM—so that you can call it from your event handler, and possibly also once when starting, to set the initial size. You can change the balloon to an explosion by replacing the text node with another one (using `replaceChild` ) or by setting the `textContent` property of its parent node to a new string.

### Mouse trail

In JavaScript’s early days, which was the high time of gaudy home pages with lots of animated images, people came up with some truly inspiring ways to use the language. One of these was the mouse trail—a series of elements that would follow the mouse pointer as you moved it across the page. In this exercise, I want you to implement a mouse trail. Use absolutely positioned `<div>` elements with a fixed size and background color (refer to the code in the “Mouse Clicks” section for an example). Create a bunch of such elements and, when the mouse moves, display them in the wake of the mouse pointer. There are various possible approaches here. You can make your solution as simple or as complex as you want. A simple solution to start with is to keep a fixed number of trail elements and cycle through them, moving the next one to the mouse’s current position every time a `"mousemove"` event occurs.

> <style> .trail { /* className for the trail elements */ position: absolute; height: 6px; width: 6px; border-radius: 3px; background: teal; } body { height: 300px; } </style> <script>
// Your code here.
</script>

Creating the elements is best done with a loop. Append them to the document to make them show up. To be able to access them later to change their position, you’ll want to store the elements in an array. Cycling through them can be done by keeping a counter variable and adding 1 to it every time the `"mousemove"` event fires. The remainder operator ( `% elements.length` ) can then be used to get a valid array index to pick the element you want to position during a given event. Another interesting effect can be achieved by modeling a simple physics system. Use the `"mousemove"` event only to update a pair of bindings that track the mouse position. Then use `requestAnimationFrame` to simulate the trailing elements being attracted to the position of the mouse pointer. At every animation step, update their position based on their position relative to the pointer (and, optionally, a speed that is stored for each element). Figuring out a good way to do this is up to you.
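To make the cycling idea more tangible, here is one rough sketch of that simple approach (only one of many possible solutions, and it assumes the `.trail` style from the scaffold above):

> <script>
// Sketch of the cycling approach: a fixed pool of dots reused in order.
let dots = [];
for (let i = 0; i < 12; i++) {
  let node = document.createElement("div");
  node.className = "trail";
  document.body.appendChild(node);
  dots.push(node);
}
let nextDot = 0;
window.addEventListener("mousemove", event => {
  let dot = dots[nextDot % dots.length];
  dot.style.left = (event.pageX - 3) + "px";
  dot.style.top = (event.pageY - 3) + "px";
  nextDot++;
});
</script>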
### Tabs Tabbed panels are widely used in user interfaces. They allow you to select an interface panel by choosing from a number of tabs “sticking out” above an element. In this exercise you must implement a simple tabbed interface. Write a function, `asTabs` , that takes a DOM node and creates a tabbed interface showing the child elements of that node. It should insert a list of `<button>` elements at the top of the node, one for each child element, containing text retrieved from the `data-tabname` attribute of the child. All but one of the original children should be hidden (given a `display` style of `none` ). The currently visible node can be selected by clicking the buttons. When that works, extend it to style the button for the currently selected tab differently so that it is obvious which tab is selected. > <tab-panel> <div data-tabname="one">Tab one</div> <div data-tabname="two">Tab two</div> <div data-tabname="three">Tab three</div> </tab-panel> <script> function asTabs(node) { // Your code here. } asTabs(document.querySelector("tab-panel")); </script> One pitfall you might run into is that you can’t directly use the node’s `childNodes` property as a collection of tab nodes. For one thing, when you add the buttons, they will also become child nodes and end up in this object because it is a live data structure. For another, the text nodes created for the whitespace between the nodes are also in `childNodes` but should not get their own tabs. You can use `children` instead of `childNodes` to ignore text nodes. You could start by building up an array of tabs so that you have easy access to them. To implement the styling of the buttons, you could store objects that contain both the tab panel and its button. I recommend writing a separate function for changing tabs. You can either store the previously selected tab and change only the styles needed to hide that and show the new one, or you can just update the style of all tabs every time a new tab is selected. You might want to call this function immediately to make the interface start with the first tab visible. All reality is a game. Much of my initial fascination with computers, like that of many nerdy kids, had to do with computer games. I was drawn into the tiny simulated worlds that I could manipulate and in which stories (sort of) unfolded—more, I suppose, because of the way I projected my imagination into them than because of the possibilities they actually offered. I don’t wish a career in game programming on anyone. Much like the music industry, the discrepancy between the number of eager young people wanting to work in it and the actual demand for such people creates a rather unhealthy environment. But writing games for fun is amusing. This chapter will walk through the implementation of a small platform game. Platform games (or “jump and run” games) are games that expect the player to move a figure through a world, which is usually two-dimensional and viewed from the side, while jumping over and onto things. ## The game Our game will be roughly based on Dark Blue by <NAME>. I chose that game because it is both entertaining and minimalist and because it can be built without too much code. It looks like this: The dark box represents the player, whose task is to collect the yellow boxes (coins) while avoiding the red stuff (lava). A level is completed when all coins have been collected. The player can walk around with the left and right arrow keys and can jump with the up arrow. Jumping is a specialty of this game character. 
It can reach several times its own height and can change direction in midair. This may not be entirely realistic, but it helps give the player the feeling of being in direct control of the on-screen avatar. The game consists of a static background, laid out like a grid, with the moving elements overlaid on that background. Each field on the grid is either empty, solid, or lava. The moving elements are the player, coins, and certain pieces of lava. The positions of these elements are not constrained to the grid—their coordinates may be fractional, allowing smooth motion.

## The technology

We will use the browser DOM to display the game, and we’ll read user input by handling key events. The screen- and keyboard-related code is only a small part of the work we need to do to build this game. Since everything looks like colored boxes, drawing is uncomplicated: we create DOM elements and use styling to give them a background color, size, and position. We can represent the background as a table since it is an unchanging grid of squares. The free-moving elements can be overlaid using absolutely positioned elements. In games and other programs that should animate graphics and respond to user input without noticeable delay, efficiency is important. Although the DOM was not originally designed for high-performance graphics, it is actually better at this than you would expect. You saw some animations in Chapter 14. On a modern machine, a simple game like this performs well, even if we don’t worry about optimization very much. In the next chapter, we will explore another browser technology, the `<canvas>` tag, which provides a more traditional way to draw graphics, working in terms of shapes and pixels rather than DOM elements.

## Levels

We’ll want a human-readable, human-editable way to specify levels. Since it is okay for everything to start out on a grid, we could use big strings in which each character represents an element—either a part of the background grid or a moving element. The plan for a small level might look like this:

> let simpleLevelPlan = `
......................
..#................#..
..#..............=.#..
..#.........o.o....#..
..#.@......#####...#..
..#####............#..
......#++++++++++++#..
......##############..
......................`;

Periods are empty space, hash ( `#` ) characters are walls, and plus signs are lava. The player’s starting position is the at sign ( `@` ). Every O character is a coin, and the equal sign ( `=` ) at the top is a block of lava that moves back and forth horizontally. We’ll support two additional kinds of moving lava: the pipe character ( `|` ) creates vertically moving blobs, and `v` indicates dripping lava—vertically moving lava that doesn’t bounce back and forth but only moves down, jumping back to its start position when it hits the floor. A whole game consists of multiple levels that the player must complete. A level is completed when all coins have been collected. If the player touches lava, the current level is restored to its starting position, and the player may try again.

## Reading a level

The following class stores a level object. Its argument should be the string that defines the level.
> class Level { constructor(plan) { let rows = plan.trim().split("\n").map(l => [...l]); this.height = rows.length; this.width = rows[0].length; this.startActors = []; this.rows = rows.map((row, y) => { return row.map((ch, x) => { let type = levelChars[ch]; if (typeof type == "string") return type; this.startActors.push( type.create(new Vec(x, y), ch)); return "empty"; }); }); } }

The `trim` method is used to remove whitespace at the start and end of the plan string. This allows our example plan to start with a newline so that all the lines are directly below each other. The remaining string is split on newline characters, and each line is spread into an array, producing arrays of characters. So `rows` holds an array of arrays of characters, the rows of the plan. We can derive the level’s width and height from these. But we must still separate the moving elements from the background grid. We’ll call moving elements actors. They’ll be stored in an array of objects. The background will be an array of arrays of strings, holding field types such as `"empty"` , `"wall"` , or `"lava"` . To create these arrays, we map over the rows and then over their content. Remember that `map` passes the array index as a second argument to the mapping function, which tells us the x- and y-coordinates of a given character. Positions in the game will be stored as pairs of coordinates, with the top left being 0,0 and each background square being 1 unit high and wide. To interpret the characters in the plan, the `Level` constructor uses the `levelChars` object, which maps background elements to strings and actor characters to classes. When `type` is an actor class, its static `create` method is used to create an object, which is added to `startActors` , and the mapping function returns `"empty"` for this background square. The position of the actor is stored as a `Vec` object. This is a two-dimensional vector, an object with `x` and `y` properties, as seen in the exercises of Chapter 6. As the game runs, actors will end up in different places or even disappear entirely (as coins do when collected). We’ll use a `State` class to track the state of a running game.

> class State { constructor(level, actors, status) { this.level = level; this.actors = actors; this.status = status; } static start(level) { return new State(level, level.startActors, "playing"); } get player() { return this.actors.find(a => a.type == "player"); } }

The `status` property will switch to `"lost"` or `"won"` when the game has ended. This is again a persistent data structure—updating the game state creates a new state and leaves the old one intact.

## Actors

Actor objects represent the current position and state of a given moving element in our game. All actor objects conform to the same interface. Their `pos` property holds the coordinates of the element’s top-left corner, and their `size` property holds its size. Then they have an `update` method, which is used to compute their new state and position after a given time step. It simulates the thing the actor does—moving in response to the arrow keys for the player and bouncing back and forth for the lava—and returns a new, updated actor object. A `type` property contains a string that identifies the type of the actor— `"player"` , `"coin"` , or `"lava"` . This is useful when drawing the game—the look of the rectangle drawn for an actor is based on its type. Actor classes have a static `create` method that is used by the `Level` constructor to create an actor from a character in the level plan.
It is given the coordinates of the character and the character itself, which is needed because the `Lava` class handles several different characters. This is the `Vec` class that we’ll use for our two-dimensional values, such as the position and size of actors. > class Vec { constructor(x, y) { this.x = x; this.y = y; } plus(other) { return new Vec(this.x + other.x, this.y + other.y); } times(factor) { return new Vec(this.x * factor, this.y * factor); } } The `times` method scales a vector by a given number. It will be useful when we need to multiply a speed vector by a time interval to get the distance traveled during that time. The different types of actors get their own classes since their behavior is very different. Let’s define these classes. We’ll get to their `update` methods later. The player class has a property `speed` that stores its current speed to simulate momentum and gravity. > class Player { constructor(pos, speed) { this.pos = pos; this.speed = speed; } get type() { return "player"; } static create(pos) { return new Player(pos.plus(new Vec(0, -0.5)), new Vec(0, 0)); } } Player.prototype.size = new Vec(0.8, 1.5); Because a player is one-and-a-half squares high, its initial position is set to be half a square above the position where the `@` character appeared. This way, its bottom aligns with the bottom of the square it appeared in. The `size` property is the same for all instances of `Player` , so we store it on the prototype rather than on the instances themselves. We could have used a getter like `type` , but that would create and return a new `Vec` object every time the property is read, which would be wasteful. (Strings, being immutable, don’t have to be re-created every time they are evaluated.) When constructing a `Lava` actor, we need to initialize the object differently depending on the character it is based on. Dynamic lava moves along at its current speed until it hits an obstacle. At that point, if it has a `reset` property, it will jump back to its start position (dripping). If it does not, it will invert its speed and continue in the other direction (bouncing). The `create` method looks at the character that the `Level` constructor passes and creates the appropriate lava actor. > class Lava { constructor(pos, speed, reset) { this.pos = pos; this.speed = speed; this.reset = reset; } get type() { return "lava"; } static create(pos, ch) { if (ch == "=") { return new Lava(pos, new Vec(2, 0)); } else if (ch == "|") { return new Lava(pos, new Vec(0, 2)); } else if (ch == "v") { return new Lava(pos, new Vec(0, 3), pos); } } } Lava.prototype.size = new Vec(1, 1); `Coin` actors are relatively simple. They mostly just sit in their place. But to liven up the game a little, they are given a “wobble”, a slight vertical back-and-forth motion. To track this, a coin object stores a base position as well as a `wobble` property that tracks the phase of the bouncing motion. Together, these determine the coin’s actual position (stored in the `pos` property). > class Coin { constructor(pos, basePos, wobble) { this.pos = pos; this.basePos = basePos; this.wobble = wobble; } get type() { return "coin"; } static create(pos) { let basePos = pos.plus(new Vec(0.2, 0.1)); return new Coin(basePos, basePos, Math.random() * Math.PI * 2); } } Coin.prototype.size = new Vec(0.6, 0.6); In Chapter 14, we saw that `Math.sin` gives us the y-coordinate of a point on a circle. 
That coordinate goes back and forth in a smooth waveform as we move along the circle, which makes the sine function useful for modeling a wavy motion. To avoid a situation where all coins move up and down synchronously, the starting phase of each coin is randomized. The phase of `Math.sin` ’s wave, the width of a wave it produces, is 2π. We multiply the value returned by `Math.random` by that number to give the coin a random starting position on the wave. We can now define the `levelChars` object that maps plan characters to either background grid types or actor classes.

> const levelChars = { ".": "empty", "#": "wall", "+": "lava", "@": Player, "o": Coin, "=": Lava, "|": Lava, "v": Lava };

That gives us all the parts needed to create a `Level` instance.

> let simpleLevel = new Level(simpleLevelPlan); console.log(`${simpleLevel.width} by ${simpleLevel.height}`); // → 22 by 9

The task ahead is to display such levels on the screen and to model time and motion inside them.

## Encapsulation as a burden

Most of the code in this chapter does not worry about encapsulation very much for two reasons. First, encapsulation takes extra effort. It makes programs bigger and requires additional concepts and interfaces to be introduced. Since there is only so much code you can throw at a reader before their eyes glaze over, I’ve made an effort to keep the program small. Second, the various elements in this game are so closely tied together that if the behavior of one of them changed, it is unlikely that any of the others would be able to stay the same. Interfaces between the elements would end up encoding a lot of assumptions about the way the game works. This makes them a lot less effective—whenever you change one part of the system, you still have to worry about the way it impacts the other parts because their interfaces wouldn’t cover the new situation. Some cutting points in a system lend themselves well to separation through rigorous interfaces, but others don’t. Trying to encapsulate something that isn’t a suitable boundary is a sure way to waste a lot of energy. When you are making this mistake, you’ll usually notice that your interfaces are getting awkwardly large and detailed and that they need to be changed often, as the program evolves. There is one thing that we will encapsulate, and that is the drawing subsystem. The reason for this is that we’ll display the same game in a different way in the next chapter. By putting the drawing behind an interface, we can load the same game program there and plug in a new display module. The encapsulation of the drawing code is done by defining a display object, which displays a given level and state. The display type we define in this chapter is called `DOMDisplay` because it uses DOM elements to show the level. We’ll be using a style sheet to set the actual colors and other fixed properties of the elements that make up the game. It would also be possible to directly assign to the elements’ `style` property when we create them, but that would produce more verbose programs. The following helper function provides a succinct way to create an element and give it some attributes and child nodes:

> function elt(name, attrs, ...children) { let dom = document.createElement(name); for (let attr of Object.keys(attrs)) { dom.setAttribute(attr, attrs[attr]); } for (let child of children) { dom.appendChild(child); } return dom; }

A display is created by giving it a parent element to which it should append itself and a level object.
> class DOMDisplay { constructor(parent, level) { this.dom = elt("div", {class: "game"}, drawGrid(level)); this.actorLayer = null; parent.appendChild(this.dom); } clear() { this.dom.remove(); } }

The level’s background grid, which never changes, is drawn once. Actors are redrawn every time the display is updated with a given state. The `actorLayer` property will be used to track the element that holds the actors so that they can be easily removed and replaced. Our coordinates and sizes are tracked in grid units, where a size or distance of 1 means one grid block. When setting pixel sizes, we will have to scale these coordinates up—everything in the game would be ridiculously small at a single pixel per square. The `scale` constant gives the number of pixels that a single unit takes up on the screen.

> const scale = 20; function drawGrid(level) { return elt("table", { class: "background", style: `width: ${level.width * scale}px` }, ...level.rows.map(row => elt("tr", {style: `height: ${scale}px`}, ...row.map(type => elt("td", {class: type}))) )); }

As mentioned, the background is drawn as a `<table>` element. This nicely corresponds to the structure of the `rows` property of the level—each row of the grid is turned into a table row ( `<tr>` element). The strings in the grid are used as class names for the table cell ( `<td>` ) elements. The spread (triple dot) operator is used to pass arrays of child nodes to `elt` as separate arguments. The following CSS makes the table look like the background we want:

> .background { background: rgb(52, 166, 251); table-layout: fixed; border-spacing: 0; } .background td { padding: 0; } .lava { background: rgb(255, 100, 100); } .wall { background: white; }

Some of these ( `table-layout` , `border-spacing` , and `padding` ) are used to suppress unwanted default behavior. We don’t want the layout of the table to depend upon the contents of its cells, and we don’t want space between the table cells or padding inside them. The `background` rule sets the background color. CSS allows colors to be specified both as words ( `white` ) or with a format such as `rgb(R, G, B)` , where the red, green, and blue components of the color are separated into three numbers from 0 to 255. So, in `rgb(52, 166, 251)` , the red component is 52, green is 166, and blue is 251. Since the blue component is the largest, the resulting color will be bluish. You can see that in the `.lava` rule, the first number (red) is the largest. We draw each actor by creating a DOM element for it and setting that element’s position and size based on the actor’s properties. The values have to be multiplied by `scale` to go from game units to pixels.

> function drawActors(actors) { return elt("div", {}, ...actors.map(actor => { let rect = elt("div", {class: `actor ${actor.type}`}); rect.style.width = `${actor.size.x * scale}px`; rect.style.height = `${actor.size.y * scale}px`; rect.style.left = `${actor.pos.x * scale}px`; rect.style.top = `${actor.pos.y * scale}px`; return rect; })); }

To give an element more than one class, we separate the class names by spaces. In the CSS code shown next, the `actor` class gives the actors their absolute position. Their type name is used as an extra class to give them a color. We don’t have to define the `lava` class again because we’re reusing the class for the lava grid squares we defined earlier.
> .actor { position: absolute; } .coin { background: rgb(241, 229, 89); } .player { background: rgb(64, 64, 64); } The `syncState` method is used to make the display show a given state. It first removes the old actor graphics, if any, and then redraws the actors in their new positions. It may be tempting to try to reuse the DOM elements for actors, but to make that work, we would need a lot of additional bookkeeping to associate actors with DOM elements and to make sure we remove elements when their actors vanish. Since there will typically be only a handful of actors in the game, redrawing all of them is not expensive. > DOMDisplay.prototype.syncState = function(state) { if (this.actorLayer) this.actorLayer.remove(); this.actorLayer = drawActors(state.actors); this.dom.appendChild(this.actorLayer); this.dom.className = `game ${state.status}`; this.scrollPlayerIntoView(state); }; By adding the level’s current status as a class name to the wrapper, we can style the player actor slightly differently when the game is won or lost by adding a CSS rule that takes effect only when the player has an ancestor element with a given class. > .lost .player { background: rgb(160, 64, 64); } .won .player { box-shadow: -4px -7px 8px white, 4px -7px 8px white; } After touching lava, the player’s color turns dark red, suggesting scorching. When the last coin has been collected, we add two blurred white shadows—one to the top left and one to the top right—to create a white halo effect. We can’t assume that the level always fits in the viewport—the element into which we draw the game. That is why the `scrollPlayerIntoView` call is needed. It ensures that if the level is protruding outside the viewport, we scroll that viewport to make sure the player is near its center. The following CSS gives the game’s wrapping DOM element a maximum size and ensures that anything that sticks out of the element’s box is not visible. We also give it a relative position so that the actors inside it are positioned relative to the level’s top-left corner. > .game { overflow: hidden; max-width: 600px; max-height: 450px; position: relative; } In the `scrollPlayerIntoView` method, we find the player’s position and update the wrapping element’s scroll position. We change the scroll position by manipulating that element’s `scrollLeft` and `scrollTop` properties when the player is too close to the edge. > DOMDisplay.prototype.scrollPlayerIntoView = function(state) { let width = this.dom.clientWidth; let height = this.dom.clientHeight; let margin = width / 3; // The viewport let left = this.dom.scrollLeft, right = left + width; let top = this.dom.scrollTop, bottom = top + height; let player = state.player; let center = player.pos.plus(player.size.times(0.5)) .times(scale); if (center.x < left + margin) { this.dom.scrollLeft = center.x - margin; } else if (center.x > right - margin) { this.dom.scrollLeft = center.x + margin - width; } if (center.y < top + margin) { this.dom.scrollTop = center.y - margin; } else if (center.y > bottom - margin) { this.dom.scrollTop = center.y + margin - height; } }; The way the player’s center is found shows how the methods on our `Vec` type allow computations with objects to be written in a relatively readable way. To find the actor’s center, we add its position (its top-left corner) and half its size. That is the center in level coordinates, but we need it in pixel coordinates, so we then multiply the resulting vector by our display scale. 
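As a small worked illustration of that arithmetic (made-up numbers, not from the book): a player at position (10, 5) with the player size of (0.8, 1.5) has its center at (10.4, 5.75) in grid units, which a scale of 20 maps to (208, 115) pixels:

> let pos = new Vec(10, 5);
let size = new Vec(0.8, 1.5);
// Half the size added to the top-left corner gives the center,
// and multiplying by scale converts grid units to pixels.
let center = pos.plus(size.times(0.5)).times(scale);
console.log(center.x, center.y);
// → 208 115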
Next, a series of checks verifies that the player position isn’t outside of the allowed range. Note that sometimes this will set nonsense scroll coordinates that are below zero or beyond the element’s scrollable area. This is okay—the DOM will constrain them to acceptable values. Setting `scrollLeft` to -10 will cause it to become 0. It would have been slightly simpler to always try to scroll the player to the center of the viewport. But this creates a rather jarring effect. As you are jumping, the view will constantly shift up and down. It is more pleasant to have a “neutral” area in the middle of the screen where you can move around without causing any scrolling. We are now able to display our tiny level. > <link rel="stylesheet" href="css/game.css"> <script> let simpleLevel = new Level(simpleLevelPlan); let display = new DOMDisplay(document.body, simpleLevel); display.syncState(State.start(simpleLevel)); </script> The `<link>` tag, when used with `rel="stylesheet"` , is a way to load a CSS file into a page. The file `game.css` contains the styles necessary for our game. ## Motion and collision Now we’re at the point where we can start adding motion—the most interesting aspect of the game. The basic approach, taken by most games like this, is to split time into small steps and, for each step, move the actors by a distance corresponding to their speed multiplied by the size of the time step. We’ll measure time in seconds, so speeds are expressed in units per second. Moving things is easy. The difficult part is dealing with the interactions between the elements. When the player hits a wall or floor, they should not simply move through it. The game must notice when a given motion causes an object to hit another object and respond accordingly. For walls, the motion must be stopped. When hitting a coin, it must be collected. When touching lava, the game should be lost. Solving this for the general case is a big task. You can find libraries, usually called physics engines, that simulate interaction between physical objects in two or three dimensions. We’ll take a more modest approach in this chapter, handling only collisions between rectangular objects and handling them in a rather simplistic way. Before moving the player or a block of lava, we test whether the motion would take it inside of a wall. If it does, we simply cancel the motion altogether. The response to such a collision depends on the type of actor—the player will stop, whereas a lava block will bounce back. This approach requires our time steps to be rather small since it will cause motion to stop before the objects actually touch. If the time steps (and thus the motion steps) are too big, the player would end up hovering a noticeable distance above the ground. Another approach, arguably better but more complicated, would be to find the exact collision spot and move there. We will take the simple approach and hide its problems by ensuring the animation proceeds in small steps. This method tells us whether a rectangle (specified by a position and a size) touches a grid element of the given type. > Level.prototype.touches = function(pos, size, type) { var xStart = Math.floor(pos.x); var xEnd = Math.ceil(pos.x + size.x); var yStart = Math.floor(pos.y); var yEnd = Math.ceil(pos.y + size.y); for (var y = yStart; y < yEnd; y++) { for (var x = xStart; x < xEnd; x++) { let isOutside = x < 0 || x >= this.width || y < 0 || y >= this.height; let here = isOutside ? 
"wall" : this.rows[y][x]; if (here == type) return true; } } return false; }; The method computes the set of grid squares that the body overlaps with by using `Math.floor` and `Math.ceil` on its coordinates. Remember that grid squares are 1 by 1 units in size. By rounding the sides of a box up and down, we get the range of background squares that the box touches. We loop over the block of grid squares found by rounding the coordinates and return `true` when a matching square is found. Squares outside of the level are always treated as `"wall"` to ensure that the player can’t leave the world and that we won’t accidentally try to read outside of the bounds of our `rows` array. The state `update` method uses `touches` to figure out whether the player is touching lava. > State.prototype.update = function(time, keys) { let actors = this.actors .map(actor => actor.update(time, this, keys)); let newState = new State(this.level, actors, this.status); if (newState.status != "playing") return newState; let player = newState.player; if (this.level.touches(player.pos, player.size, "lava")) { return new State(this.level, actors, "lost"); } for (let actor of actors) { if (actor != player && overlap(actor, player)) { newState = actor.collide(newState); } } return newState; }; The method is passed a time step and a data structure that tells it which keys are being held down. The first thing it does is call the `update` method on all actors, producing an array of updated actors. The actors also get the time step, the keys, and the state, so that they can base their update on those. Only the player will actually read keys, since that’s the only actor that’s controlled by the keyboard. If the game is already over, no further processing has to be done (the game can’t be won after being lost, or vice versa). Otherwise, the method tests whether the player is touching background lava. If so, the game is lost, and we’re done. Finally, if the game really is still going on, it sees whether any other actors overlap the player. Overlap between actors is detected with the `overlap` function. It takes two actor objects and returns true when they touch—which is the case when they overlap both along the x-axis and along the y-axis. > function overlap(actor1, actor2) { return actor1.pos.x + actor1.size.x > actor2.pos.x && actor1.pos.x < actor2.pos.x + actor2.size.x && actor1.pos.y + actor1.size.y > actor2.pos.y && actor1.pos.y < actor2.pos.y + actor2.size.y; } If any actor does overlap, its `collide` method gets a chance to update the state. Touching a lava actor sets the game status to `"lost"` . Coins vanish when you touch them and set the status to `"won"` when they are the last coin of the level. > Lava.prototype.collide = function(state) { return new State(state.level, state.actors, "lost"); }; Coin.prototype.collide = function(state) { let filtered = state.actors.filter(a => a != this); let status = state.status; if (!filtered.some(a => a.type == "coin")) status = "won"; return new State(state.level, filtered, status); }; ## Actor updates Actor objects’ `update` methods take as arguments the time step, the state object, and a `keys` object. The one for the `Lava` actor type ignores the `keys` object. 
> Lava.prototype.update = function(time, state) { let newPos = this.pos.plus(this.speed.times(time)); if (!state.level.touches(newPos, this.size, "wall")) { return new Lava(newPos, this.speed, this.reset); } else if (this.reset) { return new Lava(this.reset, this.speed, this.reset); } else { return new Lava(this.pos, this.speed.times(-1)); } }; This `update` method computes a new position by adding the product of the time step and the current speed to its old position. If no obstacle blocks that new position, it moves there. If there is an obstacle, the behavior depends on the type of the lava block—dripping lava has a `reset` position, to which it jumps back when it hits something. Bouncing lava inverts its speed by multiplying it by -1 so that it starts moving in the opposite direction. Coins use their `update` method to wobble. They ignore collisions with the grid since they are simply wobbling around inside of their own square. > const wobbleSpeed = 8, wobbleDist = 0.07; Coin.prototype.update = function(time) { let wobble = this.wobble + time * wobbleSpeed; let wobblePos = Math.sin(wobble) * wobbleDist; return new Coin(this.basePos.plus(new Vec(0, wobblePos)), this.basePos, wobble); }; The `wobble` property is incremented to track time and then used as an argument to `Math.sin` to find the new position on the wave. The coin’s current position is then computed from its base position and an offset based on this wave. That leaves the player itself. Player motion is handled separately per axis because hitting the floor should not prevent horizontal motion, and hitting a wall should not stop falling or jumping motion. > const playerXSpeed = 7; const gravity = 30; const jumpSpeed = 17; Player.prototype.update = function(time, state, keys) { let xSpeed = 0; if (keys.ArrowLeft) xSpeed -= playerXSpeed; if (keys.ArrowRight) xSpeed += playerXSpeed; let pos = this.pos; let movedX = pos.plus(new Vec(xSpeed * time, 0)); if (!state.level.touches(movedX, this.size, "wall")) { pos = movedX; } let ySpeed = this.speed.y + time * gravity; let movedY = pos.plus(new Vec(0, ySpeed * time)); if (!state.level.touches(movedY, this.size, "wall")) { pos = movedY; } else if (keys.ArrowUp && ySpeed > 0) { ySpeed = -jumpSpeed; } else { ySpeed = 0; } return new Player(pos, new Vec(xSpeed, ySpeed)); }; The horizontal motion is computed based on the state of the left and right arrow keys. When there’s no wall blocking the new position created by this motion, it is used. Otherwise, the old position is kept. Vertical motion works in a similar way but has to simulate jumping and gravity. The player’s vertical speed ( `ySpeed` ) is first accelerated to account for gravity. We check for walls again. If we don’t hit any, the new position is used. If there is a wall, there are two possible outcomes. When the up arrow is pressed and we are moving down (meaning the thing we hit is below us), the speed is set to a relatively large, negative value. This causes the player to jump. If that is not the case, the player simply bumped into something, and the speed is set to zero. The gravity strength, jumping speed, and pretty much all other constants in this game have been set by trial and error. I tested values until I found a combination I liked. ## Tracking keys For a game like this, we do not want keys to take effect once per keypress. Rather, we want their effect (moving the player figure) to stay active as long as they are held. We need to set up a key handler that stores the current state of the left, right, and up arrow keys. 
We will also want to call `preventDefault` for those keys so that they don’t end up scrolling the page. The following function, when given an array of key names, will return an object that tracks the current position of those keys. It registers event handlers for `"keydown"` and `"keyup"` events and, when the key code in the event is present in the set of codes that it is tracking, updates the object.

> function trackKeys(keys) { let down = Object.create(null); function track(event) { if (keys.includes(event.key)) { down[event.key] = event.type == "keydown"; event.preventDefault(); } } window.addEventListener("keydown", track); window.addEventListener("keyup", track); return down; } const arrowKeys = trackKeys(["ArrowLeft", "ArrowRight", "ArrowUp"]);

The same handler function is used for both event types. It looks at the event object’s `type` property to determine whether the key state should be updated to true ( `"keydown"` ) or false ( `"keyup"` ).

## Running the game

The `requestAnimationFrame` function, which we saw in Chapter 14, provides a good way to animate a game. But its interface is quite primitive—using it requires us to track the time at which our function was called the last time around and call `requestAnimationFrame` again after every frame. Let’s define a helper function that wraps those boring parts in a convenient interface and allows us to simply call `runAnimation` , giving it a function that expects a time difference as an argument and draws a single frame. When the frame function returns the value `false` , the animation stops.

> function runAnimation(frameFunc) { let lastTime = null; function frame(time) { if (lastTime != null) { let timeStep = Math.min(time - lastTime, 100) / 1000; if (frameFunc(timeStep) === false) return; } lastTime = time; requestAnimationFrame(frame); } requestAnimationFrame(frame); }

I have set a maximum frame step of 100 milliseconds (one-tenth of a second). When the browser tab or window with our page is hidden, `requestAnimationFrame` calls will be suspended until the tab or window is shown again. In this case, the difference between `lastTime` and `time` will be the entire time in which the page was hidden. Advancing the game by that much in a single step would look silly and might cause weird side effects, such as the player falling through the floor. The function also converts the time steps to seconds, which are an easier quantity to think about than milliseconds. The `runLevel` function takes a `Level` object and a display constructor and returns a promise. It displays the level (in `document.body` ) and lets the user play through it. When the level is finished (lost or won), `runLevel` waits one more second (to let the user see what happens) and then clears the display, stops the animation, and resolves the promise to the game’s end status.

> function runLevel(level, Display) { let display = new Display(document.body, level); let state = State.start(level); let ending = 1; return new Promise(resolve => { runAnimation(time => { state = state.update(time, arrowKeys); display.syncState(state); if (state.status == "playing") { return true; } else if (ending > 0) { ending -= time; return true; } else { display.clear(); resolve(state.status); return false; } }); }); }

A game is a sequence of levels. Whenever the player dies, the current level is restarted. When a level is completed, we move on to the next level.
This can be expressed by the following function, which takes an array of level plans (strings) and a display constructor: > async function runGame(plans, Display) { for (let level = 0; level < plans.length;) { let status = await runLevel(new Level(plans[level]), Display); if (status == "won") level++; } console.log("You've won!"); } Because we made `runLevel` return a promise, `runGame` can be written using an `async` function, as shown in Chapter 11. It returns another promise, which resolves when the player finishes the game. There is a set of level plans available in the `GAME_LEVELS` binding in this chapter’s sandbox. This page feeds them to `runGame` , starting an actual game. > <link rel="stylesheet" href="css/game.css"> <body> <script> runGame(GAME_LEVELS, DOMDisplay); </script> </body> See if you can beat those. I had quite a lot of fun building them. ### Game over It’s traditional for platform games to have the player start with a limited number of lives and subtract one life each time they die. When the player is out of lives, the game restarts from the beginning. Adjust `runGame` to implement lives. Have the player start with three. Output the current number of lives (using `console.log` ) every time a level starts. > <link rel="stylesheet" href="css/game.css"> <body> <script> // The old runGame function. Modify it... async function runGame(plans, Display) { for (let level = 0; level < plans.length;) { let status = await runLevel(new Level(plans[level]), Display); if (status == "won") level++; } console.log("You've won!"); } runGame(GAME_LEVELS, DOMDisplay); </script> </body> ### Pausing the game Make it possible to pause (suspend) and unpause the game by pressing the Esc key. This can be done by changing the `runLevel` function to use another keyboard event handler and interrupting or resuming the animation whenever the Esc key is hit. The `runAnimation` interface may not look like it is suitable for this at first glance, but it is if you rearrange the way `runLevel` calls it. When you have that working, there is something else you could try. The way we have been registering keyboard event handlers is somewhat problematic. The `arrowKeys` object is currently a global binding, and its event handlers are kept around even when no game is running. You could say they leak out of our system. Extend `trackKeys` to provide a way to unregister its handlers and then change `runLevel` to register its handlers when it starts and unregister them again when it is finished. > <link rel="stylesheet" href="css/game.css"> <body> <script> // The old runLevel function. Modify this... function runLevel(level, Display) { let display = new Display(document.body, level); let state = State.start(level); let ending = 1; return new Promise(resolve => { runAnimation(time => { state = state.update(time, arrowKeys); display.syncState(state); if (state.status == "playing") { return true; } else if (ending > 0) { ending -= time; return true; } else { display.clear(); resolve(state.status); return false; } }); }); } runGame(GAME_LEVELS, DOMDisplay); </script> </body> An animation can be interrupted by returning `false` from the function given to `runAnimation` . It can be continued by calling `runAnimation` again. So we need to communicate the fact that we are pausing the game to the function given to `runAnimation` . For that, you can use a binding that both the event handler and that function have access to. 
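To make that hint concrete, here is a minimal sketch of the pause part (one possible shape, not the only one): a `running` flag is shared between an Esc handler and the frame function, and `runAnimation` is simply called again to resume. The name `escHandler` and the three flag values are illustrative choices, not something the chapter prescribes.

> function runLevel(level, Display) {
    let display = new Display(document.body, level);
    let state = State.start(level);
    let ending = 1;
    let running = "yes";

    return new Promise(resolve => {
      function escHandler(event) {
        if (event.key != "Escape") return;
        event.preventDefault();
        if (running == "no") {
          running = "yes";
          runAnimation(frame);   // resume the stopped animation
        } else if (running == "yes") {
          running = "pausing";   // ask the running animation to stop
        } else {
          running = "yes";       // cancel a pending pause
        }
      }
      window.addEventListener("keydown", escHandler);

      function frame(time) {
        if (running == "pausing") {
          running = "no";
          return false;          // stop animating until Esc is pressed again
        }
        state = state.update(time, arrowKeys);
        display.syncState(state);
        if (state.status == "playing") {
          return true;
        } else if (ending > 0) {
          ending -= time;
          return true;
        } else {
          display.clear();
          window.removeEventListener("keydown", escHandler);
          resolve(state.status);
          return false;
        }
      }
      runAnimation(frame);
    });
  }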
When finding a way to unregister the handlers registered by `trackKeys` , remember that the exact same function value that was passed to `addEventListener` must be passed to `removeEventListener` to successfully remove a handler. Thus, the `handler` function value created in `trackKeys` must be available to the code that unregisters the handlers. You can add a property to the object returned by `trackKeys` , containing either that function value or a method that handles the unregistering directly. ### A monster It is traditional for platform games to have enemies that you can jump on top of to defeat. This exercise asks you to add such an actor type to the game. We’ll call it a monster. Monsters move only horizontally. You can make them move in the direction of the player, bounce back and forth like horizontal lava, or have any movement pattern you want. The class doesn’t have to handle falling, but it should make sure the monster doesn’t walk through walls. When a monster touches the player, the effect depends on whether the player is jumping on top of them or not. You can approximate this by checking whether the player’s bottom is near the monster’s top. If this is the case, the monster disappears. If not, the game is lost. > <link rel="stylesheet" href="css/game.css"> <style>.monster { background: purple }</style> <body> <script> // Complete the constructor, update, and collide methods class Monster { constructor(pos, /* ... */) {} get type() { return "monster"; } static create(pos) { return new Monster(pos.plus(new Vec(0, -1))); } update(time, state) {} collide(state) {} } Monster.prototype.size = new Vec(1.2, 2); levelChars["M"] = Monster; runLevel(new Level(` .................................. .################################. .#..............................#. .#..............................#. .#..............................#. .#...........................o..#. .#..@...........................#. .##########..............########. ..........#..o..o..o..o..#........ ..........#...........M..#........ ..........################........ .................................. `), DOMDisplay); </script> </body> If you want to implement a type of motion that is stateful, such as bouncing, make sure you store the necessary state in the actor object—include it as a constructor argument and add it as a property. Remember that `update` returns a new object, rather than changing the old one. When handling collision, find the player in `state.actors` and compare its position to the monster’s position. To get the bottom of the player, you have to add its vertical size to its vertical position. The creation of an updated state will resemble either `Coin` ’s `collide` method (removing the actor) or `Lava` ’s (changing the status to `"lost"` ), depending on the player position. A student asked, ‘The programmers of old used only simple machines and no programming languages, yet they made beautiful programs. Why do we use complicated machines and programming languages?’. Fu-Tzu replied, ‘The builders of old used only sticks and clay, yet they made beautiful huts.’ So far, we have used the JavaScript language in a single environment: the browser. This chapter and the next one will briefly introduce Node.js, a program that allows you to apply your JavaScript skills outside of the browser. With it, you can build anything from small command line tools to HTTP servers that power dynamic websites. 
These chapters aim to teach you the main concepts that Node.js uses and to give you enough information to write useful programs for it. They do not try to be a complete, or even a thorough, treatment of the platform. Whereas you could run the code in previous chapters directly on these pages, because it was either raw JavaScript or written for the browser, the code samples in this chapter are written for Node and often won’t run in the browser. If you want to follow along and run the code in this chapter, you’ll need to install Node.js version 10.1 or higher. To do so, go to https://nodejs.org and follow the installation instructions for your operating system. You can also find further documentation for Node.js there. ## Background One of the more difficult problems with writing systems that communicate over the network is managing input and output—that is, the reading and writing of data to and from the network and hard drive. Moving data around takes time, and scheduling it cleverly can make a big difference in how quickly a system responds to the user or to network requests. In such programs, asynchronous programming is often helpful. It allows the program to send and receive data from and to multiple devices at the same time without complicated thread management and synchronization. Node was initially conceived for the purpose of making asynchronous programming easy and convenient. JavaScript lends itself well to a system like Node. It is one of the few programming languages that does not have a built-in way to do in- and output. Thus, JavaScript could be fit onto Node’s rather eccentric approach to in- and output without ending up with two inconsistent interfaces. In 2009, when Node was being designed, people were already doing callback-based programming in the browser, so the community around the language was used to an asynchronous programming style. ## The node command When Node.js is installed on a system, it provides a program called `node` , which is used to run JavaScript files. Say you have a file `hello.js` , containing this code: > let message = "Hello world"; console.log(message); You can then run `node` from the command line like this to execute the program: > $ node hello.js Hello world The `console.log` method in Node does something similar to what it does in the browser. It prints out a piece of text. But in Node, the text will go to the process’s standard output stream, rather than to a browser’s JavaScript console. When running `node` from the command line, that means you see the logged values in your terminal. If you run `node` without giving it a file, it provides you with a prompt at which you can type JavaScript code and immediately see the result. > $ node > 1 + 1 2 > [-1, -2, -3].map(Math.abs) [1, 2, 3] > process.exit(0) $ The `process` binding, just like the `console` binding, is available globally in Node. It provides various ways to inspect and manipulate the current program. The `exit` method ends the process and can be given an exit status code, which tells the program that started `node` (in this case, the command line shell) whether the program completed successfully (code zero) or encountered an error (any other code). To find the command line arguments given to your script, you can read `process.argv` , which is an array of strings. Note that it also includes the name of the `node` command and your script name, so the actual arguments start at index 2. 
If `showargv.js` contains the statement `console.log(process.argv)` , you could run it like this: > $ node showargv.js one --and two ["node", "/tmp/showargv.js", "one", "--and", "two"] All the standard JavaScript global bindings, such as `Array` , `Math` , and `JSON` , are also present in Node’s environment. Browser-related functionality, such as `document` or `prompt` , is not. ## Modules Beyond the bindings I mentioned, such as `console` and `process` , Node puts few additional bindings in the global scope. If you want to access built-in functionality, you have to ask the module system for it. The CommonJS module system, based on the `require` function, was described in Chapter 10. This system is built into Node and is used to load anything from built-in modules to downloaded packages to files that are part of your own program. When `require` is called, Node has to resolve the given string to an actual file that it can load. Pathnames that start with `/` , `./` , or `../` are resolved relative to the current module’s path, where `.` stands for the current directory, `../` for one directory up, and `/` for the root of the file system. So if a module asks for `"./reverse"` , Node will try to load the file `reverse.js` from the directory that the requiring module lives in. The `.js` extension may be omitted, and Node will add it if such a file exists. If the required path refers to a directory, Node will try to load the file named `index.js` in that directory. When a string that does not look like a relative or absolute path is given to `require` , it is assumed to refer to either a built-in module or a module installed in a `node_modules` directory. For example, `require("fs")` will give you Node’s built-in file system module. And `require("robot")` might try to load the library found in `node_modules/robot/` . A common way to install such libraries is by using NPM, which we’ll come back to in a moment. Let’s set up a small project consisting of two files. The first one, called `main.js` , defines a script that can be called from the command line to reverse a string. > const {reverse} = require("./reverse"); // Index 2 holds the first actual command line argument let argument = process.argv[2]; console.log(reverse(argument)); The file `reverse.js` defines a library for reversing strings, which can be used both by this command line tool and by other scripts that need direct access to a string-reversing function. > exports.reverse = function(string) { return Array.from(string).reverse().join(""); }; Remember that adding properties to `exports` adds them to the interface of the module. Since Node.js treats files as CommonJS modules, `main.js` can take the exported `reverse` function from `reverse.js` . We can now call our tool like this: > $ node main.js JavaScript tpircSavaJ ## Installing with NPM NPM, which was introduced in Chapter 10, is an online repository of JavaScript modules, many of which are specifically written for Node. When you install Node on your computer, you also get the `npm` command, which you can use to interact with this repository. NPM’s main use is downloading packages. We saw the `ini` package in Chapter 10. We can use NPM to fetch and install that package on our computer. > $ npm install ini npm WARN enoent ENOENT: no such file or directory, open '/tmp/package.json' + ini@1.3.5 added 1 package in 0.552s $ node > const {parse} = require("ini"); > parse("x = 1\ny = 2"); { x: '1', y: '2' } After running `npm install` , NPM will have created a directory called `node_modules` . 
Inside that directory will be an `ini` directory that contains the library. You can open it and look at the code. When we call `require("ini")` , this library is loaded, and we can call its `parse` property to parse a configuration file. By default NPM installs packages under the current directory, rather than in a central place. If you are used to other package managers, this may seem unusual, but it has advantages—it puts each application in full control of the packages it installs and makes it easier to manage versions and clean up when removing an application. ### Package files In the `npm install` example, you could see a warning about the fact that the `package.json` file did not exist. It is recommended to create such a file for each project, either manually or by running `npm init` . It contains some information about the project, such as its name and version, and lists its dependencies. The robot simulation from Chapter 7, as modularized in the exercise in Chapter 10, might have a `package.json` file like this: > { "author": "<NAME>", "name": "eloquent-javascript-robot", "description": "Simulation of a package-delivery robot", "version": "1.0.0", "main": "run.js", "dependencies": { "dijkstrajs": "^1.0.1", "random-item": "^1.0.0" }, "license": "ISC" } When you run `npm install` without naming a package to install, NPM will install the dependencies listed in `package.json` . When you install a specific package that is not already listed as a dependency, NPM will add it to `package.json` . ### Versions A `package.json` file lists both the program’s own version and versions for its dependencies. Versions are a way to deal with the fact that packages evolve separately, and code written to work with a package as it existed at one point may not work with a later, modified version of the package. NPM demands that its packages follow a schema called semantic versioning, which encodes some information about which versions are compatible (don’t break the old interface) in the version number. A semantic version consists of three numbers, separated by periods, such as `2.3.0` . Every time new functionality is added, the middle number has to be incremented. Every time compatibility is broken, so that existing code that uses the package might not work with the new version, the first number has to be incremented. A caret character ( `^` ) in front of the version number for a dependency in `package.json` indicates that any version compatible with the given number may be installed. So, for example, `"^2.3.0"` would mean that any version greater than or equal to 2.3.0 and less than 3.0.0 is allowed. The `npm` command is also used to publish new packages or new versions of packages. If you run `npm publish` in a directory that has a `package.json` file, it will publish a package with the name and version listed in the JSON file to the registry. Anyone can publish packages to NPM—though only under a package name that isn’t in use yet since it would be somewhat scary if random people could update existing packages. Since the `npm` program is a piece of software that talks to an open system—the package registry—there is nothing unique about what it does. Another program, `yarn` , which can be installed from the NPM registry, fills the same role as `npm` using a somewhat different interface and installation strategy. This book won’t delve further into the details of NPM usage. Refer to https://npmjs.org for further documentation and a way to search for packages. 
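As a small aside to the version rules above: the range notation can also be checked programmatically. The `semver` package from NPM (an assumption for this example; it is not used elsewhere in this chapter) implements the same matching rules that NPM applies to `package.json` dependencies.

> // Illustration only: run `npm install semver` first.
  const semver = require("semver");

  console.log(semver.satisfies("2.4.1", "^2.3.0"));
  // → true   (new functionality only, still compatible)
  console.log(semver.satisfies("3.0.0", "^2.3.0"));
  // → false  (major version changed, may break the old interface)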
## The file system module One of the most commonly used built-in modules in Node is the `fs` module, which stands for file system. It exports functions for working with files and directories. For example, the function called `readFile` reads a file and then calls a callback with the file’s contents. > let {readFile} = require("fs"); readFile("file.txt", "utf8", (error, text) => { if (error) throw error; console.log("The file contains:", text); }); The second argument to `readFile` indicates the character encoding used to decode the file into a string. There are several ways in which text can be encoded to binary data, but most modern systems use UTF-8. So unless you have reasons to believe another encoding is used, pass `"utf8"` when reading a text file. If you do not pass an encoding, Node will assume you are interested in the binary data and will give you a `Buffer` object instead of a string. This is an array-like object that contains numbers representing the bytes (8-bit chunks of data) in the files. > const {readFile} = require("fs"); readFile("file.txt", (error, buffer) => { if (error) throw error; console.log("The file contained", buffer.length, "bytes.", "The first byte is:", buffer[0]); }); A similar function, `writeFile` , is used to write a file to disk. > const {writeFile} = require("fs"); writeFile("graffiti.txt", "Node was here", err => { if (err) console.log(`Failed to write file: ${err}`); else console.log("File written."); }); Here it was not necessary to specify the encoding— `writeFile` will assume that when it is given a string to write, rather than a `Buffer` object, it should write it out as text using its default character encoding, which is UTF-8. The `fs` module contains many other useful functions: `readdir` will return the files in a directory as an array of strings, `stat` will retrieve information about a file, `rename` will rename a file, `unlink` will remove one, and so on. See the documentation at https://nodejs.org for specifics. Most of these take a callback function as the last parameter, which they call either with an error (the first argument) or with a successful result (the second). As we saw in Chapter 11, there are downsides to this style of programming—the biggest one being that error handling becomes verbose and error-prone. Though promises have been part of JavaScript for a while, at the time of writing their integration into Node.js is still a work in progress. There is an object `promises` exported from the `fs` package since version 10.1 that contains most of the same functions as `fs` but uses promises rather than callback functions. > const {readFile} = require("fs").promises; readFile("file.txt", "utf8") .then(text => console.log("The file contains:", text)); Sometimes you don’t need asynchronicity, and it just gets in the way. Many of the functions in `fs` also have a synchronous variant, which has the same name with `Sync` added to the end. For example, the synchronous version of `readFile` is called `readFileSync` . > const {readFileSync} = require("fs"); console.log("The file contains:", readFileSync("file.txt", "utf8")); Do note that while such a synchronous operation is being performed, your program is stopped entirely. If it should be responding to the user or to other machines on the network, being stuck on a synchronous action might produce annoying delays. ## The HTTP module Another central module is called `http` . It provides functionality for running HTTP servers and making HTTP requests. 
This is all it takes to start an HTTP server: > const {createServer} = require("http"); let server = createServer((request, response) => { response.writeHead(200, {"Content-Type": "text/html"}); response.write(` <h1>Hello!</h1> <p>You asked for <code>${request.url}</code></p>`); response.end(); }); server.listen(8000); console.log("Listening! (port 8000)"); If you run this script on your own machine, you can point your web browser at http://localhost:8000/hello to make a request to your server. It will respond with a small HTML page. The function passed as argument to `createServer` is called every time a client connects to the server. The `request` and `response` bindings are objects representing the incoming and outgoing data. The first contains information about the request, such as its `url` property, which tells us to what URL the request was made. So, when you open that page in your browser, it sends a request to your own computer. This causes the server function to run and send back a response, which you can then see in the browser. To send something back, you call methods on the `response` object. The first, `writeHead` , will write out the response headers (see Chapter 18). You give it the status code (200 for “OK” in this case) and an object that contains header values. The example sets the `Content-Type` header to inform the client that we’ll be sending back an HTML document. Next, the actual response body (the document itself) is sent with `response.write` . You are allowed to call this method multiple times if you want to send the response piece by piece, for example to stream data to the client as it becomes available. Finally, `response.end` signals the end of the response. The call to `server.listen` causes the server to start waiting for connections on port 8000. This is why you have to connect to localhost:8000 to speak to this server, rather than just localhost, which would use the default port 80. When you run this script, the process just sits there and waits. When a script is listening for events—in this case, network connections— `node` will not automatically exit when it reaches the end of the script. To close it, press control-C. A real web server usually does more than the one in the example—it looks at the request’s method (the `method` property) to see what action the client is trying to perform and looks at the request’s URL to find out which resource this action is being performed on. We’ll see a more advanced server later in this chapter. To act as an HTTP client, we can use the `request` function in the `http` module. > const {request} = require("http"); let requestStream = request({ hostname: "eloquentjavascript.net", path: "/20_node.html", method: "GET", headers: {Accept: "text/html"} }, response => { console.log("Server responded with status code", response.statusCode); }); requestStream.end(); The first argument to `request` configures the request, telling Node what server to talk to, what path to request from that server, which method to use, and so on. The second argument is the function that should be called when a response comes in. It is given an object that allows us to inspect the response, for example to find out its status code. Just like the `response` object we saw in the server, the object returned by `request` allows us to stream data into the request with the `write` method and finish the request with the `end` method. The example does not use `write` because `GET` requests should not contain data in their request body. 
There’s a similar `request` function in the `https` module that can be used to make requests to `https:` URLs. Making requests with Node’s raw functionality is rather verbose. There are much more convenient wrapper packages available on NPM. For example, `node-fetch` provides the promise-based `fetch` interface that we know from the browser. ## Streams We have seen two instances of writable streams in the HTTP examples—namely, the response object that the server could write to and the request object that was returned from `request` . Writable streams are a widely used concept in Node. Such objects have a `write` method that can be passed a string or a `Buffer` object to write something to the stream. Their `end` method closes the stream and optionally takes a value to write to the stream before closing. Both of these methods can also be given a callback as an additional argument, which they will call when the writing or closing has finished. It is possible to create a writable stream that points at a file with the `createWriteStream` function from the `fs` module. Then you can use the `write` method on the resulting object to write the file one piece at a time, rather than in one shot as with `writeFile` . Readable streams are a little more involved. Both the `request` binding that was passed to the HTTP server’s callback and the `response` binding passed to the HTTP client’s callback are readable streams—a server reads requests and then writes responses, whereas a client first writes a request and then reads a response. Reading from a stream is done using event handlers, rather than methods. Objects that emit events in Node have a method called `on` that is similar to the `addEventListener` method in the browser. You give it an event name and then a function, and it will register that function to be called whenever the given event occurs. Readable streams have `"data"` and `"end"` events. The first is fired every time data comes in, and the second is called whenever the stream is at its end. This model is most suited for streaming data that can be immediately processed, even when the whole document isn’t available yet. A file can be read as a readable stream by using the `createReadStream` function from `fs` . This code creates a server that reads request bodies and streams them back to the client as all-uppercase text: > const {createServer} = require("http"); createServer((request, response) => { response.writeHead(200, {"Content-Type": "text/plain"}); request.on("data", chunk => response.write(chunk.toString().toUpperCase())); request.on("end", () => response.end()); }).listen(8000); The `chunk` value passed to the data handler will be a binary `Buffer` . We can convert this to a string by decoding it as UTF-8 encoded characters with its `toString` method. The following piece of code, when run with the uppercasing server active, will send a request to that server and write out the response it gets: > const {request} = require("http"); request({ hostname: "localhost", port: 8000, method: "POST" }, response => { response.on("data", chunk => process.stdout.write(chunk.toString())); }).end("Hello server"); // → HELLO SERVER The example writes to `process.stdout` (the process’s standard output, which is a writable stream) instead of using `console.log` . We can’t use `console.log` because it adds an extra newline character after each piece of text that it writes, which isn’t appropriate here since the response may come in as multiple chunks. 
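The `createReadStream` and `createWriteStream` functions mentioned above were not shown in isolation, so here is a minimal sketch that copies a file chunk by chunk (the file names are made up for the example). Piping the input into the output with `pipe` , as the file server below does, would achieve the same thing.

> const {createReadStream, createWriteStream} = require("fs");

  // Copy input.txt to copy.txt one chunk at a time, without reading
  // the whole file into memory first.
  let input = createReadStream("input.txt");
  let output = createWriteStream("copy.txt");
  input.on("data", chunk => output.write(chunk));
  input.on("end", () => output.end());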
## A file server Let’s combine our newfound knowledge about HTTP servers and working with the file system to create a bridge between the two: an HTTP server that allows remote access to a file system. Such a server has all kinds of uses—it allows web applications to store and share data, or it can give a group of people shared access to a bunch of files. When we treat files as HTTP resources, the HTTP methods `GET` , `PUT` , and `DELETE` can be used to read, write, and delete the files, respectively. We will interpret the path in the request as the path of the file that the request refers to. We probably don’t want to share our whole file system, so we’ll interpret these paths as starting in the server’s working directory, which is the directory in which it was started. If I ran the server from `/tmp/public/` (or `C:\tmp\public\` on Windows), then a request for `/file.txt` should refer to `/tmp/public/file.txt` (or `C:\tmp\public\file.txt` ). We’ll build the program piece by piece, using an object called `methods` to store the functions that handle the various HTTP methods. Method handlers are `async` functions that get the request object as argument and return a promise that resolves to an object that describes the response. > const {createServer} = require("http"); const methods = Object.create(null); createServer((request, response) => { let handler = methods[request.method] || notAllowed; handler(request) .catch(error => { if (error.status != null) return error; return {body: String(error), status: 500}; }) .then(({body, status = 200, type = "text/plain"}) => { response.writeHead(status, {"Content-Type": type}); if (body && body.pipe) body.pipe(response); else response.end(body); }); }).listen(8000); async function notAllowed(request) { return { status: 405, body: `Method ${request.method} not allowed.` }; } This starts a server that just returns 405 error responses, which is the code used to indicate that the server refuses to handle a given method. When a request handler’s promise is rejected, the `catch` call translates the error into a response object, if it isn’t one already, so that the server can send back an error response to inform the client that it failed to handle the request. The `status` field of the response description may be omitted, in which case it defaults to 200 (OK). The content type, in the `type` property, can also be left off, in which case the response is assumed to be plain text. When the value of `body` is a readable stream, it will have a `pipe` method that is used to forward all content from a readable stream to a writable stream. If not, it is assumed to be either `null` (no body), a string, or a buffer, and it is passed directly to the response’s `end` method. To figure out which file path corresponds to a request URL, the `urlPath` function uses Node’s built-in `url` module to parse the URL. It takes its pathname, which will be something like `"/file.txt"` , decodes that to get rid of the `%20` -style escape codes, and resolves it relative to the program’s working directory. > const {parse} = require("url"); const {resolve, sep} = require("path"); const baseDirectory = process.cwd(); function urlPath(url) { let {pathname} = parse(url); let path = resolve(decodeURIComponent(pathname).slice(1)); if (path != baseDirectory && !path.startsWith(baseDirectory + sep)) { throw {status: 403, body: "Forbidden"}; } return path; } As soon as you set up a program to accept network requests, you have to start worrying about security. 
In this case, if we aren’t careful, it is likely that we’ll accidentally expose our whole file system to the network. File paths are strings in Node. To map such a string to an actual file, there is a nontrivial amount of interpretation going on. Paths may, for example, include `../` to refer to a parent directory. So one obvious source of problems would be requests for paths like `/../secret_file` . To avoid such problems, `urlPath` uses the `resolve` function from the `path` module, which resolves relative paths. It then verifies that the result is below the working directory. The `process.cwd` function (where `cwd` stands for “current working directory”) can be used to find this working directory. The `sep` binding from the `path` package is the system’s path separator—a backslash on Windows and a forward slash on most other systems. When the path doesn’t start with the base directory, the function throws an error response object, using the HTTP status code indicating that access to the resource is forbidden. We’ll set up the `GET` method to return a list of files when reading a directory and to return the file’s content when reading a regular file. One tricky question is what kind of `Content-Type` header we should set when returning a file’s content. Since these files could be anything, our server can’t simply return the same content type for all of them. NPM can help us again here. The `mime` package (content type indicators like `text/plain` are also called MIME types) knows the correct type for a large number of file extensions. The following `npm` command, in the directory where the server script lives, installs a specific version of `mime` : > $ npm install mime@2.2.0 When a requested file does not exist, the correct HTTP status code to return is 404. We’ll use the `stat` function, which looks up information about a file, to find out both whether the file exists and whether it is a directory. > const {createReadStream} = require("fs"); const {stat, readdir} = require("fs").promises; const mime = require("mime"); methods.GET = async function(request) { let path = urlPath(request.url); let stats; try { stats = await stat(path); } catch (error) { if (error.code != "ENOENT") throw error; else return {status: 404, body: "File not found"}; } if (stats.isDirectory()) { return {body: (await readdir(path)).join("\n")}; } else { return {body: createReadStream(path), type: mime.getType(path)}; } }; Because it has to touch the disk and thus might take a while, `stat` is asynchronous. Since we’re using promises rather than callback style, it has to be imported from `promises` instead of directly from `fs` . When the file does not exist, `stat` will throw an error object with a `code` property of `"ENOENT"` . These somewhat obscure, Unix-inspired codes are how you recognize error types in Node. The `stats` object returned by `stat` tells us a number of things about a file, such as its size ( `size` property) and its modification date ( `mtime` property). Here we are interested in the question of whether it is a directory or a regular file, which the `isDirectory` method tells us. We use `readdir` to read the array of files in a directory and return it to the client. For normal files, we create a readable stream with `createReadStream` and return that as the body, along with the content type that the `mime` package gives us for the file’s name. The code to handle `DELETE` requests is slightly simpler. 
> const {rmdir, unlink} = require("fs").promises; methods.DELETE = async function(request) { let path = urlPath(request.url); let stats; try { stats = await stat(path); } catch (error) { if (error.code != "ENOENT") throw error; else return {status: 204}; } if (stats.isDirectory()) await rmdir(path); else await unlink(path); return {status: 204}; }; When an HTTP response does not contain any data, the status code 204 (“no content”) can be used to indicate this. Since the response to deletion doesn’t need to transmit any information beyond whether the operation succeeded, that is a sensible thing to return here. You may be wondering why trying to delete a nonexistent file returns a success status code, rather than an error. When the file that is being deleted is not there, you could say that the request’s objective is already fulfilled. The HTTP standard encourages us to make requests idempotent, which means that making the same request multiple times produces the same result as making it once. In a way, if you try to delete something that’s already gone, the effect you were trying to do has been achieved—the thing is no longer there. This is the handler for `PUT` requests: > const {createWriteStream} = require("fs"); function pipeStream(from, to) { return new Promise((resolve, reject) => { from.on("error", reject); to.on("error", reject); to.on("finish", resolve); from.pipe(to); }); } methods.PUT = async function(request) { let path = urlPath(request.url); await pipeStream(request, createWriteStream(path)); return {status: 204}; }; We don’t need to check whether the file exists this time—if it does, we’ll just overwrite it. We again use `pipe` to move data from a readable stream to a writable one, in this case from the request to the file. But since `pipe` isn’t written to return a promise, we have to write a wrapper, `pipeStream` , that creates a promise around the outcome of calling `pipe` . When something goes wrong when opening the file, `createWriteStream` will still return a stream, but that stream will fire an `"error"` event. The output stream to the request may also fail, for example if the network goes down. So we wire up both streams’ `"error"` events to reject the promise. When `pipe` is done, it will close the output stream, which causes it to fire a `"finish"` event. That’s the point where we can successfully resolve the promise (returning nothing). The full script for the server is available at https://eloquentjavascript.net/code/file_server.js. You can download that and, after installing its dependencies, run it with Node to start your own file server. And, of course, you can modify and extend it to solve this chapter’s exercises or to experiment. The command line tool `curl` , widely available on Unix-like systems (such as macOS and Linux), can be used to make HTTP requests. The following session briefly tests our server. The `-X` option is used to set the request’s method, and `-d` is used to include a request body. > $ curl http://localhost:8000/file.txt File not found $ curl -X PUT -d hello http://localhost:8000/file.txt $ curl http://localhost:8000/file.txt hello $ curl -X DELETE http://localhost:8000/file.txt $ curl http://localhost:8000/file.txt File not found The first request for `file.txt` fails since the file does not exist yet. The `PUT` request creates the file, and behold, the next request successfully retrieves it. After deleting it with a `DELETE` request, the file is again missing. 
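If `curl` is not available, roughly the same session can be reproduced with the `node-fetch` package mentioned earlier. This is a sketch that assumes a callable `fetch` export (as in node-fetch version 2) and that the file server from this chapter is running on port 8000.

> const fetch = require("node-fetch");

  async function testServer() {
    let url = "http://localhost:8000/file.txt";
    // PUT creates the file, GET reads it back, DELETE removes it again.
    await fetch(url, {method: "PUT", body: "hello"});
    console.log(await (await fetch(url)).text());   // → hello
    await fetch(url, {method: "DELETE"});
    console.log(await (await fetch(url)).text());   // → File not found
  }
  testServer();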
Node is a nice, small system that lets us run JavaScript in a nonbrowser context. It was originally designed for network tasks to play the role of a node in a network. But it lends itself to all kinds of scripting tasks, and if writing JavaScript is something you enjoy, automating tasks with Node works well. NPM provides packages for everything you can think of (and quite a few things you’d probably never think of), and it allows you to fetch and install those packages with the `npm` program. Node comes with a number of built-in modules, including the `fs` module for working with the file system and the `http` module for running HTTP servers and making HTTP requests. All input and output in Node is done asynchronously, unless you explicitly use a synchronous variant of a function, such as `readFileSync` . When calling such asynchronous functions, you provide callback functions, and Node will call them with an error value and (if available) a result when it is ready. ### Search tool On Unix systems, there is a command line tool called `grep` that can be used to quickly search files for a regular expression. Write a Node script that can be run from the command line and acts somewhat like `grep` . It treats its first command line argument as a regular expression and treats any further arguments as files to search. It should output the names of any file whose content matches the regular expression. When that works, extend it so that when one of the arguments is a directory, it searches through all files in that directory and its subdirectories. Use asynchronous or synchronous file system functions as you see fit. Setting things up so that multiple asynchronous actions are requested at the same time might speed things up a little, but not a huge amount, since most file systems can read only one thing at a time. Your first command line argument, the regular expression, can be found in `process.argv[2]` . The input files come after that. You can use the `RegExp` constructor to go from a string to a regular expression object. Doing this synchronously, with `readFileSync` , is more straightforward, but if you use `fs.promises` again to get promise-returning functions and write an `async` function, the code looks similar. To figure out whether something is a directory, you can again use `stat` (or `statSync` ) and the stats object’s `isDirectory` method. Exploring a directory is a branching process. You can do it either by using a recursive function or by keeping an array of work (files that still need to be explored). To find the files in a directory, you can call `readdir` or `readdirSync` . The strange capitalization—Node’s file system function naming is loosely based on standard Unix functions, such as `readdir` , that are all lowercase, but then it adds `Sync` with a capital letter. To go from a filename read with `readdir` to a full path name, you have to combine it with the name of the directory, putting a slash character ( `/` ) between them. ### Directory creation Though the `DELETE` method in our file server is able to delete directories (using `rmdir` ), the server currently does not provide any way to create a directory. Add support for the `MKCOL` method (“make collection”), which should create a directory by calling `mkdir` from the `fs` module. `MKCOL` is not a widely used HTTP method, but it does exist for this same purpose in the WebDAV standard, which specifies a set of conventions on top of HTTP that make it suitable for creating documents. 
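For this directory creation exercise, a minimal sketch of such a handler could look like the following. It is one possible solution, not the only one; it reuses the `stat` and `urlPath` helpers from the server code above, and the hint below describes the same approach.

> const {mkdir} = require("fs").promises;

  methods.MKCOL = async function(request) {
    let path = urlPath(request.url);
    let stats;
    try {
      stats = await stat(path);
    } catch (error) {
      if (error.code != "ENOENT") throw error;
      await mkdir(path);                        // nothing there yet: create it
      return {status: 204};
    }
    if (stats.isDirectory()) return {status: 204};        // already exists
    else return {status: 400, body: "Not a directory"};
  };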
You can use the function that implements the `DELETE` method as a blueprint for the `MKCOL` method. When no file is found, try to create a directory with `mkdir` . When a directory exists at that path, you can return a 204 response so that directory creation requests are idempotent. If a nondirectory file exists here, return an error code. Code 400 (“bad request”) would be appropriate. ### A public space on the web Since the file server serves up any kind of file and even includes the right `Content-Type` header, you can use it to serve a website. Since it allows everybody to delete and replace files, it would be an interesting kind of website: one that can be modified, improved, and vandalized by everybody who takes the time to create the right HTTP request. Write a basic HTML page that includes a simple JavaScript file. Put the files in a directory served by the file server and open them in your browser. Next, as an advanced exercise or even a weekend project, combine all the knowledge you gained from this book to build a more user-friendly interface for modifying the website—from inside the website. Use an HTML form to edit the content of the files that make up the website, allowing the user to update them on the server by using HTTP requests, as described in Chapter 18. Start by making only a single file editable. Then make it so that the user can select which file to edit. Use the fact that our file server returns lists of files when reading a directory. Don’t work directly in the code exposed by the file server since if you make a mistake, you are likely to damage the files there. Instead, keep your work outside of the publicly accessible directory and copy it there when testing. You can create a `<textarea>` element to hold the content of the file that is being edited. A `GET` request, using `fetch` , can retrieve the current content of the file. You can use relative URLs like index.html, instead of http://localhost:8000/index.html, to refer to files on the same server as the running script. Then, when the user clicks a button (you can use a `<form>` element and `"submit"` event), make a `PUT` request to the same URL, with the content of the `<textarea>` as request body, to save the file. You can then add a `<select>` element that contains all the files in the server’s top directory by adding `<option>` elements containing the lines returned by a `GET` request to the URL `/` . When the user selects another file (a `"change"` event on the field), the script must fetch and display that file. When saving a file, use the currently selected filename. These are the known mistakes in the third edition of the book. For errata in the first edition, see this page. For the second edition, see this page. To report a problem that is not listed here, send me an email. 
Issues whose page number is followed by an ordinal number are only present up to the print denoted by that number. That is, those followed by “1st” were fixed in the second print. ## Chapter 2 Page 34 (1st) Updating Bindings Succinctly: Where it says `counter-` it should be `counter--` . ## Chapter 5 Page 91 Composability: Due to an initial mistake in the script data set, the results of the computations on this page differ from the ones with the current, corrected data. The average year for living scripts should be 1165, the average for non-living scripts should be 204. ## Chapter 6 Page 111 (2nd) Inheritance: In the second paragraph below the example code, instead of “ `content` method”, the text should say “ `element` function”. ## Chapter 8 Page 134 (2nd) Error Propagation: In the third paragraph of the section, a function `promptInteger` is referred to. The function is actually called `promptNumber` , and the word “whole” should be dropped from the sentence (it accepts non-whole numbers, too). ## Chapter 10 Page 168 (1st) Modules as Building Blocks: In “each needs it own private scope“, it should say “its own private scope“. ## Chapter 14 Page 234 (2nd) Creating Nodes: In the code, “edition” is misspelled as “editon”. ## Chapter 15 Page 258 Load Event: The description of the `beforeunload` event claims that you just need to return a string from your event handler. For handlers registered with `addEventListener` you, in fact, need to call `preventDefault` and set a `returnValue` property to get the warn-on-leave behavior. ## Chapter 16 Page 285 (2nd) Pausing the Game: The text refers to the `arrow` binding, where it should say `arrowKeys` . ## Chapter 20 Page 369 (1st) Directory Creation: `MKCOL` stands for “make collection”, not “make column” as the book claims. ## Chapter 21 Page 373 HTTP Interface: There is a superfluous closing brace at the end of the example JSON snippet. ## Exercise Hints Page 414 A Modular Robot: The `dijkstrajs` package name is misspelled as `dijkstajs` . The dream behind the Web is of a common information space in which we communicate by sharing information. Its universality is essential: the fact that a hypertext link can point to anything, be it personal, local or global, be it draft or highly polished. The next chapters of this book will talk about web browsers. Without web browsers, there would be no JavaScript. Or even if there were, no one would ever have paid any attention to it. Web technology has been decentralized from the start, not just technically but also in the way it evolved. Various browser vendors have added new functionality in ad hoc and sometimes poorly thought-out ways, which then, sometimes, ended up being adopted by others—and finally set down in standards. This is both a blessing and a curse. On the one hand, it is empowering to not have a central party control a system but have it be improved by various parties working in loose collaboration (or occasionally open hostility). On the other hand, the haphazard way in which the Web was developed means that the resulting system is not exactly a shining example of internal consistency. Some parts of it are downright confusing and poorly conceived. ## Networks and the Internet Computer networks have been around since the 1950s. If you put cables between two or more computers and allow them to send data back and forth through these cables, you can do all kinds of wonderful things. 
And if connecting two machines in the same building allows us to do wonderful things, connecting machines all over the planet should be even better. The technology to start implementing this vision was developed in the 1980s, and the resulting network is called the Internet. It has lived up to its promise. A computer can use this network to shoot bits at another computer. For any effective communication to arise out of this bit-shooting, the computers on both ends must know what the bits are supposed to represent. The meaning of any given sequence of bits depends entirely on the kind of thing that it is trying to express and on the encoding mechanism used. A network protocol describes a style of communication over a network. There are protocols for sending email, for fetching email, for sharing files, and even for controlling computers that happen to be infected by malicious software. For example, the Hypertext Transfer Protocol (HTTP) is a protocol for retrieving named resources (chunks of information, such as web pages or pictures). It specifies that the side making the request should start with a line like this, naming the resource and the version of the protocol that it is trying to use: > GET /index.html HTTP/1.1 There are a lot more rules about the way the requester can include more information in the request and the way the other side, which returns the resource, packages up its content. We’ll look at HTTP in a little more detail in Chapter 18. Most protocols are built on top of other protocols. HTTP treats the network as a streamlike device into which you can put bits and have them arrive at the correct destination in the correct order. As we saw in Chapter 11, ensuring those things is already a rather difficult problem. The Transmission Control Protocol (TCP) is a protocol that addresses this problem. All Internet-connected devices “speak” it, and most communication on the Internet is built on top of it. A TCP connection works as follows: one computer must be waiting, or listening, for other computers to start talking to it. To be able to listen for different kinds of communication at the same time on a single machine, each listener has a number (called a port) associated with it. Most protocols specify which port should be used by default. For example, when we want to send an email using the SMTP protocol, the machine through which we send it is expected to be listening on port 25. Another computer can then establish a connection by connecting to the target machine using the correct port number. If the target machine can be reached and is listening on that port, the connection is successfully created. The listening computer is called the server, and the connecting computer is called the client. Such a connection acts as a two-way pipe through which bits can flow—the machines on both ends can put data into it. Once the bits are successfully transmitted, they can be read out again by the machine on the other side. This is a convenient model. You could say that TCP provides an abstraction of the network. ## The Web The World Wide Web (not to be confused with the Internet as a whole) is a set of protocols and formats that allow us to visit web pages in a browser. The “Web” part in the name refers to the fact that such pages can easily link to each other, thus connecting into a huge mesh that users can move through. 
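Stepping back to the TCP discussion above for a moment: to make the server, client, and port vocabulary concrete, here is a small sketch using Node's built-in `net` module, which exposes raw TCP connections (the port number 1337 is an arbitrary choice for the example; Node itself is introduced in this book's Node.js chapter).

> const net = require("net");

  // Server: listen on port 1337 and greet whoever connects.
  net.createServer(socket => {
    socket.end("Hello from the server\n");
  }).listen(1337);

  // Client: open a connection to that port and print whatever arrives.
  let socket = net.connect({port: 1337});
  socket.on("data", data => console.log(data.toString()));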
To become part of the Web, all you need to do is connect a machine to the Internet and have it listen on port 80 with the HTTP protocol so that other computers can ask it for documents. Each document on the Web is named by a Uniform Resource Locator (URL), which looks something like this:

> http://eloquentjavascript.net/13_browser.html
  |      |                     |               |
  protocol        server               path

The first part tells us that this URL uses the HTTP protocol (as opposed to, for example, encrypted HTTP, which would be https://). Then comes the part that identifies which server we are requesting the document from. Last is a path string that identifies the specific document (or resource) we are interested in. Machines connected to the Internet get an IP address, which is a number that can be used to send messages to that machine, and looks something like `149.210.142.219` or `2001:4860:4860::8888` . But lists of more or less random numbers are hard to remember and awkward to type, so you can instead register a domain name for a specific address or set of addresses. I registered eloquentjavascript.net to point at the IP address of a machine I control and can thus use that domain name to serve web pages. If you type this URL into your browser’s address bar, the browser will try to retrieve and display the document at that URL. First, your browser has to find out what address eloquentjavascript.net refers to. Then, using the HTTP protocol, it will make a connection to the server at that address and ask for the resource /13_browser.html. If all goes well, the server sends back a document, which your browser then displays on your screen. ## HTML HTML, which stands for Hypertext Markup Language, is the document format used for web pages. An HTML document contains text, as well as tags that give structure to the text, describing things such as links, paragraphs, and headings. A short HTML document might look like this: > <!doctype html> <html> <head> <meta charset="utf-8"> <title>My home page</title> </head> <body> <h1>My home page</h1> <p>Hello, I am Marijn and this is my home page.</p> <p>I also wrote a book! Read it <a href="http://eloquentjavascript.net">here</a>.</p> </body> </html> The tags, wrapped in angle brackets ( `<` and `>` , the symbols for less than and greater than), provide information about the structure of the document. The other text is just plain text. The document starts with `<!doctype html>` , which tells the browser to interpret the page as modern HTML, as opposed to various dialects that were in use in the past. HTML documents have a head and a body. The head contains information about the document, and the body contains the document itself. In this case, the head declares that the title of this document is “My home page” and that it uses the UTF-8 encoding, which is a way to encode Unicode text as binary data. The document’s body contains a heading ( `<h1>` , meaning “heading 1”— `<h2>` to `<h6>` produce subheadings) and two paragraphs ( `<p>` ). Tags come in several forms. An element, such as the body, a paragraph, or a link, is started by an opening tag like `<p>` and ended by a closing tag like `</p>` . Some opening tags, such as the one for the link ( `<a>` ), contain extra information in the form of `name="value"` pairs. These are called attributes. In this case, the destination of the link is indicated with `href="http://eloquentjavascript.net"` , where `href` stands for “hypertext reference”. Some kinds of tags do not enclose anything and thus do not need to be closed. 
The metadata tag ``` <meta charset="utf-8"> ``` is an example of this. To be able to include angle brackets in the text of a document, even though they have a special meaning in HTML, yet another form of special notation has to be introduced. A plain opening angle bracket is written as `&lt;` (“less than”), and a closing bracket is written as `&gt;` (“greater than”). In HTML, an ampersand ( `&` ) character followed by a name or character code and a semicolon ( `;` ) is called an entity and will be replaced by the character it encodes. This is analogous to the way backslashes are used in JavaScript strings. Since this mechanism gives ampersand characters a special meaning, too, they need to be escaped as `&amp;` . Inside attribute values, which are wrapped in double quotes, `&quot;` can be used to insert an actual quote character. HTML is parsed in a remarkably error-tolerant way. When tags that should be there are missing, the browser reconstructs them. The way in which this is done has been standardized, and you can rely on all modern browsers to do it in the same way. The following document will be treated just like the one shown previously: > <meta charset=utf-8> <title>My home page</title> <h1>My home page</h1> <p>Hello, I am Marijn and this is my home page. <p>I also wrote a book! Read it <a href=http://eloquentjavascript.net>here</a>. The `<html>` , `<head>` , and `<body>` tags are gone completely. The browser knows that `<meta>` and `<title>` belong in the head and that `<h1>` means the body has started. Furthermore, I am no longer explicitly closing the paragraphs since opening a new paragraph or ending the document will close them implicitly. The quotes around the attribute values are also gone. This book will usually omit the `<html>` , `<head>` , and `<body>` tags from examples to keep them short and free of clutter. But I will close tags and include quotes around attributes. I will also usually omit the doctype and `charset` declaration. This is not to be taken as an encouragement to drop these from HTML documents. Browsers will often do ridiculous things when you forget them. You should consider the doctype and the `charset` metadata to be implicitly present in examples, even when they are not actually shown in the text. ## HTML and JavaScript In the context of this book, the most important HTML tag is `<script>` . This tag allows us to include a piece of JavaScript in a document. > <h1>Testing alert</h1> <script>alert("hello!");</script> Such a script will run as soon as its `<script>` tag is encountered while the browser reads the HTML. This page will pop up a dialog when opened—the `alert` function resembles `prompt` , in that it pops up a little window, but only shows a message without asking for input. Including large programs directly in HTML documents is often impractical. The `<script>` tag can be given an `src` attribute to fetch a script file (a text file containing a JavaScript program) from a URL. > <h1>Testing alert</h1> <script src="code/hello.js"></script> The code/hello.js file included here contains the same program— `alert("hello!")` . When an HTML page references other URLs as part of itself—for example, an image file or a script—web browsers will retrieve them immediately and include them in the page. A script tag must always be closed with `</script>` , even if it refers to a script file and doesn’t contain any code. If you forget this, the rest of the page will be interpreted as part of the script. 
You can load ES modules (see Chapter 10) in the browser by giving your script tag a `type="module"` attribute. Such modules can depend on other modules by using URLs relative to themselves as module names in `import` declarations. Some attributes can also contain a JavaScript program. The `<button>` tag shown next (which shows up as a button) has an `onclick` attribute. The attribute’s value will be run whenever the button is clicked. > <button onclick="alert('Boom!');">DO NOT PRESS</button> Note that I had to use single quotes for the string in the `onclick` attribute because double quotes are already used to quote the whole attribute. I could also have used `&quot;` . ## In the sandbox Running programs downloaded from the Internet is potentially dangerous. You do not know much about the people behind most sites you visit, and they do not necessarily mean well. Running programs by people who do not mean well is how you get your computer infected by viruses, your data stolen, and your accounts hacked. Yet the attraction of the Web is that you can browse it without necessarily trusting all the pages you visit. This is why browsers severely limit the things a JavaScript program may do: it can’t look at the files on your computer or modify anything not related to the web page it was embedded in. Isolating a programming environment in this way is called sandboxing, the idea being that the program is harmlessly playing in a sandbox. But you should imagine this particular kind of sandbox as having a cage of thick steel bars over it so that the programs playing in it can’t actually get out. The hard part of sandboxing is allowing the programs enough room to be useful yet at the same time restricting them from doing anything dangerous. Lots of useful functionality, such as communicating with other servers or reading the content of the copy-paste clipboard, can also be used to do problematic, privacy-invading things. Every now and then, someone comes up with a new way to circumvent the limitations of a browser and do something harmful, ranging from leaking minor private information to taking over the whole machine that the browser runs on. The browser developers respond by fixing the hole, and all is well again—until the next problem is discovered, and hopefully publicized, rather than secretly exploited by some government agency or mafia. ## Compatibility and the browser wars In the early stages of the Web, a browser called Mosaic dominated the market. After a few years, the balance shifted to Netscape, which was then, in turn, largely supplanted by Microsoft’s Internet Explorer. At any point where a single browser was dominant, that browser’s vendor would feel entitled to unilaterally invent new features for the Web. Since most users used the most popular browser, websites would simply start using those features—never mind the other browsers. This was the dark age of compatibility, often called the browser wars. Web developers were left with not one unified Web but two or three incompatible platforms. To make things worse, the browsers in use around 2003 were all full of bugs, and of course the bugs were different for each browser. Life was hard for people writing web pages. Mozilla Firefox, a not-for-profit offshoot of Netscape, challenged Internet Explorer’s position in the late 2000s. Because Microsoft was not particularly interested in staying competitive at the time, Firefox took a lot of market share away from it. 
Around the same time, Google introduced its Chrome browser, and Apple’s Safari browser gained popularity, leading to a situation where there were four major players, rather than one. The new players had a more serious attitude toward standards and better engineering practices, giving us less incompatibility and fewer bugs. Microsoft, seeing its market share crumble, came around and adopted these attitudes in its Edge browser, which replaces Internet Explorer.

If you are starting to learn web development today, consider yourself lucky. The latest versions of the major browsers behave quite uniformly and have relatively few bugs.

> Communication must be stateless in nature [...] such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server.

The Hypertext Transfer Protocol, already mentioned in Chapter 13, is the mechanism through which data is requested and provided on the World Wide Web. This chapter describes the protocol in more detail and explains the way browser JavaScript has access to it.

## The protocol

If you type eloquentjavascript.net/18_http.html into your browser’s address bar, the browser first looks up the address of the server associated with eloquentjavascript.net and tries to open a TCP connection to it on port 80, the default port for HTTP traffic. If the server exists and accepts the connection, the browser might send something like this:

```
GET /18_http.html HTTP/1.1
Host: eloquentjavascript.net
User-Agent: Your browser's name
```

Then the server responds, through that same connection.

```
HTTP/1.1 200 OK
Content-Length: 65585
Content-Type: text/html
Last-Modified: Mon, 08 Jan 2018 10:29:45 GMT

<!doctype html>
... the rest of the document
```

The browser takes the part of the response after the blank line, its body (not to be confused with the HTML `<body>` tag), and displays it as an HTML document.

The information sent by the client is called the request. It starts with this line:

```
GET /18_http.html HTTP/1.1
```

The first word is the method of the request. `GET` means that we want to get the specified resource. Other common methods are `DELETE` to delete a resource, `PUT` to create or replace it, and `POST` to send information to it. Note that the server is not obliged to carry out every request it gets. If you walk up to a random website and tell it to `DELETE` its main page, it’ll probably refuse.

The part after the method name is the path of the resource the request applies to. In the simplest case, a resource is simply a file on the server, but the protocol doesn’t require it to be. A resource may be anything that can be transferred as if it is a file. Many servers generate the responses they produce on the fly. For example, if you open https://github.com/marijnh, the server looks in its database for a user named “marijnh”, and if it finds one, it will generate a profile page for that user.

After the resource path, the first line of the request mentions `HTTP/1.1` to indicate the version of the HTTP protocol it is using.

In practice, many sites use HTTP version 2, which supports the same concepts as version 1.1 but is a lot more complicated so that it can be faster. Browsers will automatically switch to the appropriate protocol version when talking to a given server, and the outcome of a request is the same regardless of which version is used.
Because version 1.1 is more straightforward and easier to play around with, we’ll focus on that.

The server’s response will start with a version as well, followed by the status of the response, first as a three-digit status code and then as a human-readable string.

```
HTTP/1.1 200 OK
```

Status codes starting with a 2 indicate that the request succeeded. Codes starting with 4 mean there was something wrong with the request. 404 is probably the most famous HTTP status code—it means that the resource could not be found. Codes that start with 5 mean an error happened on the server and the request is not to blame.

The first line of a request or response may be followed by any number of headers. These are lines in the form `name: value` that specify extra information about the request or response. These headers were part of the example response:

```
Content-Length: 65585
Content-Type: text/html
Last-Modified: Mon, 08 Jan 2018 10:29:45 GMT
```

This tells us the size and type of the response document. In this case, it is an HTML document of 65,585 bytes. It also tells us when that document was last modified.

For most headers, the client and server are free to decide whether to include them in a request or response. But a few are required. For example, the `Host` header, which specifies the hostname, should be included in a request because a server might be serving multiple hostnames on a single IP address, and without that header, the server won’t know which hostname the client is trying to talk to.

After the headers, both requests and responses may include a blank line followed by a body, which contains the data being sent. `GET` and `DELETE` requests don’t send along any data, but `PUT` and `POST` requests do. Similarly, some response types, such as error responses, do not require a body.

## Browsers and HTTP

As we saw in the example, a browser will make a request when we enter a URL in its address bar. When the resulting HTML page references other files, such as images and JavaScript files, those are also retrieved. A moderately complicated website can easily include anywhere from 10 to 200 resources. To be able to fetch those quickly, browsers will make several `GET` requests simultaneously, rather than waiting for the responses one at a time.

HTML pages may include forms, which allow the user to fill out information and send it to the server. This is an example of a form:

```
<form method="GET" action="example/message.html">
  <p>Name: <input type="text" name="name"></p>
  <p>Message:<br><textarea name="message"></textarea></p>
  <p><button type="submit">Send</button></p>
</form>
```

This code describes a form with two fields: a small one asking for a name and a larger one to write a message in. When you click the Send button, the form is submitted, meaning that the content of its fields is packed into an HTTP request and the browser navigates to the result of that request.

When the `<form>` element’s `method` attribute is `GET` (or is omitted), the information in the form is added to the end of the `action` URL as a query string. The browser might make a request to this URL:

```
GET /example/message.html?name=Jean&message=Yes%3F HTTP/1.1
```

The question mark indicates the end of the path part of the URL and the start of the query. It is followed by pairs of names and values, corresponding to the `name` attribute on the form field elements and the content of those elements, respectively. An ampersand character (`&`) is used to separate the pairs.
The actual message encoded in the URL is “Yes?”, but the question mark is replaced by a strange code. Some characters in query strings must be escaped. The question mark, represented as `%3F`, is one of those. There seems to be an unwritten rule that every format needs its own way of escaping characters. This one, called URL encoding, uses a percent sign followed by two hexadecimal (base 16) digits that encode the character code. In this case, 3F, which is 63 in decimal notation, is the code of a question mark character. JavaScript provides the `encodeURIComponent` and `decodeURIComponent` functions to encode and decode this format.

```
console.log(encodeURIComponent("Yes?"));
// → Yes%3F
console.log(decodeURIComponent("Yes%3F"));
// → Yes?
```

If we change the `method` attribute of the HTML form in the example we saw earlier to `POST`, the HTTP request made to submit the form will use the `POST` method and put the query string in the body of the request, rather than adding it to the URL.

```
POST /example/message.html HTTP/1.1
Content-length: 24
Content-type: application/x-www-form-urlencoded

name=Jean&message=Yes%3F
```

`GET` requests should be used for requests that do not have side effects but simply ask for information. Requests that change something on the server, for example creating a new account or posting a message, should be expressed with other methods, such as `POST`. Client-side software such as a browser knows that it shouldn’t blindly make `POST` requests but will often implicitly make `GET` requests—for example to prefetch a resource it believes the user will soon need.

We’ll come back to forms and how to interact with them from JavaScript later in the chapter.

## Fetch

The interface through which browser JavaScript can make HTTP requests is called `fetch`. Since it is relatively new, it conveniently uses promises (which is rare for browser interfaces).

```
fetch("example/data.txt").then(response => {
  console.log(response.status);
  // → 200
  console.log(response.headers.get("Content-Type"));
  // → text/plain
});
```

Calling `fetch` returns a promise that resolves to a `Response` object holding information about the server’s response, such as its status code and its headers. The headers are wrapped in a `Map`-like object that treats its keys (the header names) as case insensitive because header names are not supposed to be case sensitive. This means `headers.get("Content-Type")` and `headers.get("content-TYPE")` will return the same value.

Note that the promise returned by `fetch` resolves successfully even if the server responded with an error code. It might also be rejected if there is a network error or if the server that the request is addressed to can’t be found.

The first argument to `fetch` is the URL that should be requested. When that URL doesn’t start with a protocol name (such as http:), it is treated as relative, which means it is interpreted relative to the current document. When it starts with a slash (/), it replaces the current path, which is the part after the server name. When it does not, the part of the current path up to and including its last slash character is put in front of the relative URL.

To get at the actual content of a response, you can use its `text` method. Because the initial promise is resolved as soon as the response’s headers have been received and because reading the response body might take a while longer, this again returns a promise.
> fetch("example/data.txt") .then(resp => resp.text()) .then(text => console.log(text)); // → This is the content of data.txt A similar method, called `json` , returns a promise that resolves to the value you get when parsing the body as JSON or rejects if it’s not valid JSON. By default, `fetch` uses the `GET` method to make its request and does not include a request body. You can configure it differently by passing an object with extra options as a second argument. For example, this request tries to delete `example/data.txt` : > fetch("example/data.txt", {method: "DELETE"}).then(resp => { console.log(resp.status); // → 405 }); The 405 status code means “method not allowed”, an HTTP server’s way of saying “I can’t do that”. To add a request body, you can include a `body` option. To set headers, there’s the `headers` option. For example, this request includes a `Range` header, which instructs the server to return only part of a response. > fetch("example/data.txt", {headers: {Range: "bytes=8-19"}}) .then(resp => resp.text()) .then(console.log); // → the content The browser will automatically add some request headers, such as “Host” and those needed for the server to figure out the size of the body. But adding your own headers is often useful to include things such as authentication information or to tell the server which file format you’d like to receive. ## HTTP sandboxing Making HTTP requests in web page scripts once again raises concerns about security. The person who controls the script might not have the same interests as the person on whose computer it is running. More specifically, if I visit themafia.org, I do not want its scripts to be able to make a request to mybank.com, using identifying information from my browser, with instructions to transfer all my money to some random account. For this reason, browsers protect us by disallowing scripts to make HTTP requests to other domains (names such as themafia.org and mybank.com). This can be an annoying problem when building systems that want to access several domains for legitimate reasons. Fortunately, servers can include a header like this in their response to explicitly indicate to the browser that it is okay for the request to come from another domain: > Access-Control-Allow-Origin: * ## Appreciating HTTP When building a system that requires communication between a JavaScript program running in the browser (client-side) and a program on a server (server-side), there are several different ways to model this communication. A commonly used model is that of remote procedure calls. In this model, communication follows the patterns of normal function calls, except that the function is actually running on another machine. Calling it involves making a request to the server that includes the function’s name and arguments. The response to that request contains the returned value. When thinking in terms of remote procedure calls, HTTP is just a vehicle for communication, and you will most likely write an abstraction layer that hides it entirely. Another approach is to build your communication around the concept of resources and HTTP methods. Instead of a remote procedure called `addUser` , you use a `PUT` request to `/users/larry` . Instead of encoding that user’s properties in function arguments, you define a JSON document format (or use an existing format) that represents a user. The body of the `PUT` request to create a new resource is then such a document. 
A resource is fetched by making a `GET` request to the resource’s URL (for example, `/users/larry`), which again returns the document representing the resource.

This second approach makes it easier to use some of the features that HTTP provides, such as support for caching resources (keeping a copy on the client for fast access). The concepts used in HTTP, which are well designed, can provide a helpful set of principles to design your server interface around.

## Security and HTTPS

Data traveling over the Internet tends to follow a long, dangerous road. To get to its destination, it must hop through anything from coffee shop Wi-Fi hotspots to networks controlled by various companies and states. At any point along its route it may be inspected or even modified. If it is important that something remain secret, such as the password to your email account, or that it arrive at its destination unmodified, such as the account number you transfer money to via your bank’s website, plain HTTP is not good enough.

The secure HTTP protocol, used for URLs starting with https://, wraps HTTP traffic in a way that makes it harder to read and tamper with. Before exchanging data, the client verifies that the server is who it claims to be by asking it to prove that it has a cryptographic certificate issued by a certificate authority that the browser recognizes. Next, all data going over the connection is encrypted in a way that should prevent eavesdropping and tampering.

Thus, when it works right, HTTPS prevents other people from impersonating the website you are trying to talk to and from snooping on your communication. It is not perfect, and there have been various incidents where HTTPS failed because of forged or stolen certificates and broken software, but it is a lot safer than plain HTTP.

## Form fields

Forms were originally designed for the pre-JavaScript Web to allow web sites to send user-submitted information in an HTTP request. This design assumes that interaction with the server always happens by navigating to a new page.

But their elements are part of the DOM like the rest of the page, and the DOM elements that represent form fields support a number of properties and events that are not present on other elements. These make it possible to inspect and control such input fields with JavaScript programs and do things such as adding new functionality to a form or using forms and fields as building blocks in a JavaScript application.

A web form consists of any number of input fields grouped in a `<form>` tag. HTML allows several different styles of fields, ranging from simple on/off checkboxes to drop-down menus and fields for text input. This book won’t try to comprehensively discuss all field types, but we’ll start with a rough overview.

A lot of field types use the `<input>` tag. This tag’s `type` attribute is used to select the field’s style. These are some commonly used `<input>` types:

| Type | Field |
| --- | --- |
| `text` | A single-line text field |
| `password` | Same as `text` but hides the text that is typed |
| `checkbox` | An on/off switch |
| `radio` | (Part of) a multiple-choice field |
| `file` | Allows the user to choose a file from their computer |

Form fields do not necessarily have to appear in a `<form>` tag. You can put them anywhere in a page. Such form-less fields cannot be submitted (only a form as a whole can), but when responding to input with JavaScript, we often don’t want to submit our fields normally anyway.
```
<p><input type="text" value="abc"> (text)</p>
<p><input type="password" value="abc"> (password)</p>
<p><input type="checkbox" checked> (checkbox)</p>
<p><input type="radio" value="A" name="choice">
   <input type="radio" value="B" name="choice" checked>
   <input type="radio" value="C" name="choice"> (radio)</p>
<p><input type="file"> (file)</p>
```

The JavaScript interface for such elements differs with the type of the element.

Multiline text fields have their own tag, `<textarea>`, mostly because using an attribute to specify a multiline starting value would be awkward. The `<textarea>` tag requires a matching `</textarea>` closing tag and uses the text between those two, instead of the `value` attribute, as starting text.

```
<textarea>
one
two
three
</textarea>
```

Finally, the `<select>` tag is used to create a field that allows the user to select from a number of predefined options.

```
<select>
  <option>Pancakes</option>
  <option>Pudding</option>
  <option>Ice cream</option>
</select>
```

Whenever the value of a form field changes, it will fire a `"change"` event.

## Focus

Unlike most elements in HTML documents, form fields can get keyboard focus. When clicked or activated in some other way, they become the currently active element and the recipient of keyboard input. Thus, you can type into a text field only when it is focused.

Other fields respond differently to keyboard events. For example, a `<select>` menu tries to move to the option that contains the text the user typed and responds to the arrow keys by moving its selection up and down.

We can control focus from JavaScript with the `focus` and `blur` methods. The first moves focus to the DOM element it is called on, and the second removes focus. The value in `document.activeElement` corresponds to the currently focused element.

```
<input type="text">
<script>
  document.querySelector("input").focus();
  console.log(document.activeElement.tagName);
  // → INPUT
  document.querySelector("input").blur();
  console.log(document.activeElement.tagName);
  // → BODY
</script>
```

For some pages, the user is expected to want to interact with a form field immediately. JavaScript can be used to focus this field when the document is loaded, but HTML also provides the `autofocus` attribute, which produces the same effect while letting the browser know what we are trying to achieve. This gives the browser the option to disable the behavior when it is not appropriate, such as when the user has put the focus on something else.

Browsers traditionally also allow the user to move the focus through the document by pressing the tab key. We can influence the order in which elements receive focus with the `tabindex` attribute. The following example document will let the focus jump from the text input to the OK button, rather than going through the help link first:

```
<input type="text" tabindex=1> <a href=".">(help)</a>
<button onclick="console.log('ok')" tabindex=2>OK</button>
```

By default, most types of HTML elements cannot be focused. But you can add a `tabindex` attribute to any element that will make it focusable. A `tabindex` of -1 makes tabbing skip over an element, even if it is normally focusable.

## Disabled fields

All form fields can be disabled through their `disabled` attribute. It is an attribute that can be specified without value—the fact that it is present at all disables the element.

```
<button>I'm all right</button>
<button disabled>I'm out</button>
```

Disabled fields cannot be focused or changed, and browsers make them look gray and faded.
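The `disabled` attribute is mirrored by a DOM property of the same name, so a script can also switch a field off and back on. A minimal sketch (not from the book), assuming a page that contains a single button:

```
// Disable the button from a script, then re-enable it a second later.
let button = document.querySelector("button");
button.disabled = true;
setTimeout(() => {
  button.disabled = false;
}, 1000);
```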
When a program is in the process of handling an action caused by some button or other control that might require communication with the server and thus take a while, it can be a good idea to disable the control until the action finishes. That way, when the user gets impatient and clicks it again, they don’t accidentally repeat their action. ## The form as a whole When a field is contained in a `<form>` element, its DOM element will have a `form` property linking back to the form’s DOM element. The `<form>` element, in turn, has a property called `elements` that contains an array-like collection of the fields inside it. The `name` attribute of a form field determines the way its value will be identified when the form is submitted. It can also be used as a property name when accessing the form’s `elements` property, which acts both as an array-like object (accessible by number) and a map (accessible by name). > <form action="example/submit.html"> Name: <input type="text" name="name"><br> Password: <input type="password" name="password"><br> <button type="submit">Log in</button> </form> <script> let form = document.querySelector("form"); console.log(form.elements[1].type); // → password console.log(form.elements.password.type); // → password console.log(form.elements.name.form == form); // → true </script> A button with a `type` attribute of `submit` will, when pressed, cause the form to be submitted. Pressing enter when a form field is focused has the same effect. Submitting a form normally means that the browser navigates to the page indicated by the form’s `action` attribute, using either a `GET` or a `POST` request. But before that happens, a `"submit"` event is fired. You can handle this event with JavaScript and prevent this default behavior by calling `preventDefault` on the event object. > <form action="example/submit.html"> Value: <input type="text" name="value"> <button type="submit">Save</button> </form> <script> let form = document.querySelector("form"); form.addEventListener("submit", event => { console.log("Saving value", form.elements.value.value); event.preventDefault(); }); </script> Intercepting `"submit"` events in JavaScript has various uses. We can write code to verify that the values the user entered make sense and immediately show an error message instead of submitting the form. Or we can disable the regular way of submitting the form entirely, as in the example, and have our program handle the input, possibly using `fetch` to send it to a server without reloading the page. ## Text fields Fields created by `<textarea>` tags, or `<input>` tags with a type of `text` or `password` , share a common interface. Their DOM elements have a `value` property that holds their current content as a string value. Setting this property to another string changes the field’s content. The `selectionStart` and `selectionEnd` properties of text fields give us information about the cursor and selection in the text. When nothing is selected, these two properties hold the same number, indicating the position of the cursor. For example, 0 indicates the start of the text, and 10 indicates the cursor is after the 10th character. When part of the field is selected, the two properties will differ, giving us the start and end of the selected text. Like `value` , these properties may also be written to. Imagine you are writing an article about Khasekhemwy but have some trouble spelling his name. 
The following code wires up a `<textarea>` tag with an event handler that, when you press F2, inserts the string “Khasekhemwy” for you.

```
<textarea></textarea>
<script>
  let textarea = document.querySelector("textarea");
  textarea.addEventListener("keydown", event => {
    // The key code for F2 happens to be 113
    if (event.keyCode == 113) {
      replaceSelection(textarea, "Khasekhemwy");
      event.preventDefault();
    }
  });
  function replaceSelection(field, word) {
    let from = field.selectionStart, to = field.selectionEnd;
    field.value = field.value.slice(0, from) + word +
                  field.value.slice(to);
    // Put the cursor after the word
    field.selectionStart = from + word.length;
    field.selectionEnd = from + word.length;
  }
</script>
```

The `replaceSelection` function replaces the currently selected part of a text field’s content with the given word and then moves the cursor after that word so that the user can continue typing.

The `"change"` event for a text field does not fire every time something is typed. Rather, it fires when the field loses focus after its content was changed. To respond immediately to changes in a text field, you should register a handler for the `"input"` event instead, which fires every time the user types a character, deletes text, or otherwise manipulates the field’s content.

The following example shows a text field and a counter displaying the current length of the text in the field:

```
<input type="text"> length: <span id="length">0</span>
<script>
  let text = document.querySelector("input");
  let output = document.querySelector("#length");
  text.addEventListener("input", () => {
    output.textContent = text.value.length;
  });
</script>
```

## Checkboxes and radio buttons

A checkbox field is a binary toggle. Its value can be extracted or changed through its `checked` property, which holds a Boolean value.

```
<label>
  <input type="checkbox" id="purple"> Make this page purple
</label>
<script>
  let checkbox = document.querySelector("#purple");
  checkbox.addEventListener("change", () => {
    document.body.style.background =
      checkbox.checked ? "mediumpurple" : "";
  });
</script>
```

The `<label>` tag associates a piece of document with an input field. Clicking anywhere on the label will activate the field, which focuses it and toggles its value when it is a checkbox or radio button.

A radio button is similar to a checkbox, but it’s implicitly linked to other radio buttons with the same `name` attribute so that only one of them can be active at any time.

```
Color:
<label>
  <input type="radio" name="color" value="orange"> Orange
</label>
<label>
  <input type="radio" name="color" value="lightgreen"> Green
</label>
<label>
  <input type="radio" name="color" value="lightblue"> Blue
</label>
<script>
  let buttons = document.querySelectorAll("[name=color]");
  for (let button of Array.from(buttons)) {
    button.addEventListener("change", () => {
      document.body.style.background = button.value;
    });
  }
</script>
```

The square brackets in the CSS query given to `querySelectorAll` are used to match attributes. It selects elements whose `name` attribute is `"color"`.

## Select fields

Select fields are conceptually similar to radio buttons—they also allow the user to choose from a set of options. But where a radio button puts the layout of the options under our control, the appearance of a `<select>` tag is determined by the browser.

Select fields also have a variant that is more akin to a list of checkboxes, rather than radio boxes.
When given the `multiple` attribute, a `<select>` tag will allow the user to select any number of options, rather than just a single option. This will, in most browsers, show up differently than a normal select field, which is typically drawn as a drop-down control that shows the options only when you open it.

Each `<option>` tag has a value. This value can be defined with a `value` attribute. When that is not given, the text inside the option will count as its value. The `value` property of a `<select>` element reflects the currently selected option. For a `multiple` field, though, this property doesn’t mean much since it will give the value of only one of the currently selected options.

The `<option>` tags for a `<select>` field can be accessed as an array-like object through the field’s `options` property. Each option has a property called `selected`, which indicates whether that option is currently selected. The property can also be written to select or deselect an option.

This example extracts the selected values from a `multiple` select field and uses them to compose a binary number from individual bits. Hold control (or command on a Mac) to select multiple options.

```
<select multiple>
  <option value="1">0001</option>
  <option value="2">0010</option>
  <option value="4">0100</option>
  <option value="8">1000</option>
</select> = <span id="output">0</span>
<script>
  let select = document.querySelector("select");
  let output = document.querySelector("#output");
  select.addEventListener("change", () => {
    let number = 0;
    for (let option of Array.from(select.options)) {
      if (option.selected) {
        number += Number(option.value);
      }
    }
    output.textContent = number;
  });
</script>
```

## File fields

File fields were originally designed as a way to upload files from the user’s machine through a form. In modern browsers, they also provide a way to read such files from JavaScript programs. The field acts as a kind of gatekeeper. The script cannot simply start reading private files from the user’s computer, but if the user selects a file in such a field, the browser interprets that action to mean that the script may read the file.

A file field usually looks like a button labeled with something like “choose file” or “browse”, with information about the chosen file next to it.

```
<input type="file">
<script>
  let input = document.querySelector("input");
  input.addEventListener("change", () => {
    if (input.files.length > 0) {
      let file = input.files[0];
      console.log("You chose", file.name);
      if (file.type) console.log("It has type", file.type);
    }
  });
</script>
```

The `files` property of a file field element is an array-like object (again, not a real array) containing the files chosen in the field. It is initially empty. The reason there isn’t simply a `file` property is that file fields also support a `multiple` attribute, which makes it possible to select multiple files at the same time.

Objects in the `files` object have properties such as `name` (the filename), `size` (the file’s size in bytes, which are chunks of 8 bits), and `type` (the media type of the file, such as `text/plain` or `image/jpeg`). What it does not have is a property that contains the content of the file. Getting at that is a little more involved. Since reading a file from disk can take time, the interface must be asynchronous to avoid freezing the document.
> <input type="file" multiple> <script> let input = document.querySelector("input"); input.addEventListener("change", () => { for (let file of Array.from(input.files)) { let reader = new FileReader(); reader.addEventListener("load", () => { console.log("File", file.name, "starts with", reader.result.slice(0, 20)); }); reader.readAsText(file); } }); </script> Reading a file is done by creating a `FileReader` object, registering a `"load"` event handler for it, and calling its `readAsText` method, giving it the file we want to read. Once loading finishes, the reader’s `result` property contains the file’s content. `FileReader` s also fire an `"error"` event when reading the file fails for any reason. The error object itself will end up in the reader’s `error` property. This interface was designed before promises became part of the language. You could wrap it in a promise like this: > function readFileText(file) { return new Promise((resolve, reject) => { let reader = new FileReader(); reader.addEventListener( "load", () => resolve(reader.result)); reader.addEventListener( "error", () => reject(reader.error)); reader.readAsText(file); }); } ## Storing data client-side Simple HTML pages with a bit of JavaScript can be a great format for “mini applications”—small helper programs that automate basic tasks. By connecting a few form fields with event handlers, you can do anything from converting between centimeters and inches to computing passwords from a master password and a website name. When such an application needs to remember something between sessions, you cannot use JavaScript bindings—those are thrown away every time the page is closed. You could set up a server, connect it to the Internet, and have your application store something there. We will see how to do that in Chapter 20. But that’s a lot of extra work and complexity. Sometimes it is enough to just keep the data in the browser. The `localStorage` object can be used to store data in a way that survives page reloads. This object allows you to file string values under names. > localStorage.setItem("username", "marijn"); console.log(localStorage.getItem("username")); // → marijn localStorage.removeItem("username"); A value in `localStorage` sticks around until it is overwritten, it is removed with `removeItem` , or the user clears their local data. Sites from different domains get different storage compartments. That means data stored in `localStorage` by a given website can, in principle, be read (and overwritten) only by scripts on that same site. Browsers do enforce a limit on the size of the data a site can store in `localStorage` . That restriction, along with the fact that filling up people’s hard drives with junk is not really profitable, prevents the feature from eating up too much space. The following code implements a crude note-taking application. It keeps a set of named notes and allows the user to edit notes and create new ones. 
> Notes: <select></select> <button>Add</button><br> <textarea style="width: 100%"></textarea> <script> let list = document.querySelector("select"); let note = document.querySelector("textarea"); let state; function setState(newState) { list.textContent = ""; for (let name of Object.keys(newState.notes)) { let option = document.createElement("option"); option.textContent = name; if (newState.selected == name) option.selected = true; list.appendChild(option); } note.value = newState.notes[newState.selected]; localStorage.setItem("Notes", JSON.stringify(newState)); state = newState; } setState(JSON.parse(localStorage.getItem("Notes")) || { notes: {"shopping list": "Carrots\nRaisins"}, selected: "shopping list" }); list.addEventListener("change", () => { setState({notes: state.notes, selected: list.value}); }); note.addEventListener("change", () => { setState({ notes: Object.assign({}, state.notes, {[state.selected]: note.value}), selected: state.selected }); }); document.querySelector("button") .addEventListener("click", () => { let name = prompt("Note name"); if (name) setState({ notes: Object.assign({}, state.notes, {[name]: ""}), selected: name }); }); </script> The script gets its starting state from the `"Notes"` value stored in `localStorage` or, if that is missing, creates an example state that has only a shopping list in it. Reading a field that does not exist from `localStorage` will yield `null` . Passing `null` to `JSON.parse` will make it parse the string `"null"` and return `null` . Thus, the `||` operator can be used to provide a default value in a situation like this. The `setState` method makes sure the DOM is showing a given state and stores the new state to `localStorage` . Event handlers call this function to move to a new state. The use of `Object.assign` in the example is intended to create a new object that is a clone of the old `state.notes` , but with one property added or overwritten. `Object.assign` takes its first argument and adds all properties from any further arguments to it. Thus, giving it an empty object will cause it to fill a fresh object. The square brackets notation in the third argument is used to create a property whose name is based on some dynamic value. There is another object, similar to `localStorage` , called `sessionStorage` . The difference between the two is that the content of `sessionStorage` is forgotten at the end of each session, which for most browsers means whenever the browser is closed. In this chapter, we discussed how the HTTP protocol works. A client sends a request, which contains a method (usually `GET` ) and a path that identifies a resource. The server then decides what to do with the request and responds with a status code and a response body. Both requests and responses may contain headers that provide additional information. The interface through which browser JavaScript can make HTTP requests is called `fetch` . Making a request looks like this: > fetch("/18_http.html").then(r => r.text()).then(text => { console.log(`The page starts with ${text.slice(0, 15)}`); }); Browsers make `GET` requests to fetch the resources needed to display a web page. A page may also contain forms, which allow information entered by the user to be sent as a request for a new page when the form is submitted. HTML can represent various types of form fields, such as text fields, checkboxes, multiple-choice fields, and file pickers. Such fields can be inspected and manipulated with JavaScript. 
They fire the `"change"` event when changed, fire the `"input"` event when text is typed, and receive keyboard events when they have keyboard focus. Properties like `value` (for text and select fields) or `checked` (for checkboxes and radio buttons) are used to read or set the field’s content.

When a form is submitted, a `"submit"` event is fired on it. A JavaScript handler can call `preventDefault` on that event to disable the browser’s default behavior. Form field elements may also occur outside of a form tag.

When the user has selected a file from their local file system in a file picker field, the `FileReader` interface can be used to access the content of this file from a JavaScript program.

The `localStorage` and `sessionStorage` objects can be used to save information in a way that survives page reloads. The first object saves the data forever (or until the user decides to clear it), and the second saves it until the browser is closed.

### Content negotiation

One of the things HTTP can do is called content negotiation. The `Accept` request header is used to tell the server what type of document the client would like to get. Many servers ignore this header, but when a server knows of various ways to encode a resource, it can look at this header and send the one that the client prefers.

The URL https://eloquentjavascript.net/author is configured to respond with either plaintext, HTML, or JSON, depending on what the client asks for. These formats are identified by the standardized media types `text/plain`, `text/html`, and `application/json`.

Send requests to fetch all three formats of this resource. Use the `headers` property in the options object passed to `fetch` to set the header named `Accept` to the desired media type. Finally, try asking for a bogus media type, such as `application/rainbows+unicorns`, and see which status code that produces.

`// Your code here.`

Base your code on the `fetch` examples earlier in the chapter. Asking for a bogus media type will return a response with code 406, “Not acceptable”, which is the code a server should return when it can’t fulfill the `Accept` header.

### A JavaScript workbench

Build an interface that allows people to type and run pieces of JavaScript code.

Put a button next to a `<textarea>` field that, when pressed, uses the `Function` constructor we saw in Chapter 10 to wrap the text in a function and call it. Convert the return value of the function, or any error it raises, to a string and display it below the text field.

```
<textarea id="code">return "hi";</textarea>
<button id="button">Run</button>
<pre id="output"></pre>
<script>
  // Your code here.
</script>
```

Use `document.querySelector` or `document.getElementById` to get access to the elements defined in your HTML. An event handler for `"click"` or `"mousedown"` events on the button can get the `value` property of the text field and call `Function` on it.

Make sure you wrap both the call to `Function` and the call to its result in a `try` block so you can catch the exceptions it produces. In this case, we really don’t know what type of exception we are looking for, so catch everything.

The `textContent` property of the output element can be used to fill it with a string message. Or, if you want to keep the old content around, create a new text node using `document.createTextNode` and append it to the element. Remember to add a newline character to the end so that not all output appears on a single line.
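One possible shape for that code, following the hints above. This is only a sketch, not the book’s official solution, and it assumes the `#code`, `#button`, and `#output` elements from the HTML snippet:

```
let code = document.querySelector("#code");
let button = document.querySelector("#button");
let output = document.querySelector("#output");

button.addEventListener("click", () => {
  try {
    // Wrap the typed text in a function and run it.
    let result = Function(code.value)();
    output.textContent = String(result) + "\n";
  } catch (error) {
    // Catch anything the code (or the Function constructor) throws.
    output.textContent = "Error: " + String(error) + "\n";
  }
});
```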
### Conway’s Game of Life

Conway’s Game of Life is a simple simulation that creates artificial “life” on a grid, each cell of which is either alive or not. Each generation (turn), the following rules are applied:

* Any live cell with fewer than two or more than three live neighbors dies.
* Any live cell with two or three live neighbors lives on to the next generation.
* Any dead cell with exactly three live neighbors becomes a live cell.

A neighbor is defined as any adjacent cell, including diagonally adjacent ones.

Note that these rules are applied to the whole grid at once, not one square at a time. That means the counting of neighbors is based on the situation at the start of the generation, and changes happening to neighbor cells during this generation should not influence the new state of a given cell.

Implement this game using whichever data structure you find appropriate. Use `Math.random` to populate the grid with a random pattern initially. Display it as a grid of checkbox fields, with a button next to it to advance to the next generation. When the user checks or unchecks the checkboxes, their changes should be included when computing the next generation.

```
<div id="grid"></div>
<button id="next">Next generation</button>
<script>
  // Your code here.
</script>
```

To solve the problem of having the changes conceptually happen at the same time, try to see the computation of a generation as a pure function, which takes one grid and produces a new grid that represents the next turn.

Representing the matrix can be done in the way shown in Chapter 6. You can count live neighbors with two nested loops, looping over adjacent coordinates in both dimensions. Take care not to count cells outside of the field and to ignore the cell in the center, whose neighbors we are counting.

Ensuring that changes to checkboxes take effect on the next generation can be done in two ways. An event handler could notice these changes and update the current grid to reflect them, or you could generate a fresh grid from the values in the checkboxes before computing the next turn. If you choose to go with event handlers, you might want to attach attributes that identify the position that each checkbox corresponds to so that it is easy to find out which cell to change.

To draw the grid of checkboxes, you can either use a `<table>` element (see Chapter 14) or simply put them all in the same element and put `<br>` (line break) elements between the rows.

> I look at the many colors before me. I look at my blank canvas. Then, I try to apply colors like words that shape poems, like notes that shape music.

The material from the previous chapters gives you all the elements you need to build a basic web application. In this chapter, we will do just that.

Our application will be a pixel drawing program, where you can modify a picture pixel by pixel by manipulating a zoomed-in view of it, shown as a grid of colored squares. You can use the program to open image files, scribble on them with your mouse or other pointer device, and save them. This is what it will look like:

Painting on a computer is great. You don’t need to worry about materials, skill, or talent. You just start smearing.

## Components

The interface for the application shows a big `<canvas>` element on top, with a number of form fields below it. The user draws on the picture by selecting a tool from a `<select>` field and then clicking, touching, or dragging across the canvas.
There are tools for drawing single pixels or rectangles, for filling an area, and for picking a color from the picture. We will structure the editor interface as a number of components, objects that are responsible for a piece of the DOM and that may contain other components inside them. The state of the application consists of the current picture, the selected tool, and the selected color. We’ll set things up so that the state lives in a single value, and the interface components always base the way they look on the current state. To see why this is important, let’s consider the alternative—distributing pieces of state throughout the interface. Up to a certain point, this is easier to program. We can just put in a color field and read its value when we need to know the current color. But then we add the color picker—a tool that lets you click the picture to select the color of a given pixel. To keep the color field showing the correct color, that tool would have to know that it exists and update it whenever it picks a new color. If you ever add another place that makes the color visible (maybe the mouse cursor could show it), you have to update your color-changing code to keep that synchronized. In effect, this creates a problem where each part of the interface needs to know about all other parts, which is not very modular. For small applications like the one in this chapter, that may not be a problem. For bigger projects, it can turn into a real nightmare. To avoid this nightmare on principle, we’re going to be strict about data flow. There is a state, and the interface is drawn based on that state. An interface component may respond to user actions by updating the state, at which point the components get a chance to synchronize themselves with this new state. In practice, each component is set up so that when it is given a new state, it also notifies its child components, insofar as those need to be updated. Setting this up is a bit of a hassle. Making this more convenient is the main selling point of many browser programming libraries. But for a small application like this, we can do it without such infrastructure. Updates to the state are represented as objects, which we’ll call actions. Components may create such actions and dispatch them—give them to a central state management function. That function computes the next state, after which the interface components update themselves to this new state. We’re taking the messy task of running a user interface and applying some structure to it. Though the DOM-related pieces are still full of side effects, they are held up by a conceptually simple backbone: the state update cycle. The state determines what the DOM looks like, and the only way DOM events can change the state is by dispatching actions to the state. There are many variants of this approach, each with its own benefits and problems, but their central idea is the same: state changes should go through a single well-defined channel, not happen all over the place. Our components will be classes conforming to an interface. Their constructor is given a state—which may be the whole application state or some smaller value if it doesn’t need access to everything—and uses that to build up a `dom` property. This is the DOM element that represents the component. Most constructors will also take some other values that won’t change over time, such as the function they can use to dispatch an action. Each component has a `syncState` method that is used to synchronize it to a new state value. 
The method takes one argument, the state, which is of the same type as the first argument to its constructor.

## The state

The application state will be an object with `picture`, `tool`, and `color` properties. The picture is itself an object that stores the width, height, and pixel content of the picture. The pixels are stored in an array, in the same way as the matrix class from Chapter 6—row by row, from top to bottom.

```
class Picture {
  constructor(width, height, pixels) {
    this.width = width;
    this.height = height;
    this.pixels = pixels;
  }
  static empty(width, height, color) {
    let pixels = new Array(width * height).fill(color);
    return new Picture(width, height, pixels);
  }
  pixel(x, y) {
    return this.pixels[x + y * this.width];
  }
  draw(pixels) {
    let copy = this.pixels.slice();
    for (let {x, y, color} of pixels) {
      copy[x + y * this.width] = color;
    }
    return new Picture(this.width, this.height, copy);
  }
}
```

We want to be able to treat a picture as an immutable value, for reasons that we’ll get back to later in the chapter. But we also sometimes need to update a whole bunch of pixels at a time. To be able to do that, the class has a `draw` method that expects an array of updated pixels—objects with `x`, `y`, and `color` properties—and creates a new picture with those pixels overwritten. This method uses `slice` without arguments to copy the entire pixel array—the start of the slice defaults to 0, and the end defaults to the array’s length.

The `empty` method uses two pieces of array functionality that we haven’t seen before. The `Array` constructor can be called with a number to create an empty array of the given length. The `fill` method can then be used to fill this array with a given value. These are used to create an array in which all pixels have the same color.

Colors are stored as strings containing traditional CSS color codes made up of a hash sign (`#`) followed by six hexadecimal (base-16) digits—two for the red component, two for the green component, and two for the blue component. This is a somewhat cryptic and inconvenient way to write colors, but it is the format the HTML color input field uses, and it can be used in the `fillStyle` property of a canvas drawing context, so for the ways we’ll use colors in this program, it is practical enough.

Black, where all components are zero, is written `"#000000"`, and bright pink looks like `"#ff00ff"`, where the red and blue components have the maximum value of 255, written `ff` in hexadecimal digits (which use a to f to represent digits 10 to 15).

We’ll allow the interface to dispatch actions as objects whose properties overwrite the properties of the previous state. The color field, when the user changes it, could dispatch an object like `{color: field.value}`, from which this update function can compute a new state.

```
function updateState(state, action) {
  return Object.assign({}, state, action);
}
```

This rather cumbersome pattern, in which `Object.assign` is used to first add the properties of `state` to an empty object and then overwrite some of those with the properties from `action`, is common in JavaScript code that uses immutable objects. A more convenient notation for this, in which the triple-dot operator is used to include all properties from another object in an object expression, is in the final stages of being standardized. With that addition, you could write `{...state, ...action}` instead. At the time of writing, this doesn’t yet work in all browsers.
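Since the book was written, that spread notation has been standardized and is supported by current browsers, so the same update function can also be written like this (a sketch of the equivalent code, not a change to the program):

```
function updateState(state, action) {
  // Copy all properties of state, then let action's properties overwrite them.
  return {...state, ...action};
}
```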
## DOM building

One of the main things that interface components do is creating DOM structure. We again don’t want to directly use the verbose DOM methods for that, so here’s a slightly expanded version of the `elt` function:

```
function elt(type, props, ...children) {
  let dom = document.createElement(type);
  if (props) Object.assign(dom, props);
  for (let child of children) {
    if (typeof child != "string") dom.appendChild(child);
    else dom.appendChild(document.createTextNode(child));
  }
  return dom;
}
```

The main difference between this version and the one we used in Chapter 16 is that it assigns properties to DOM nodes, not attributes. This means we can’t use it to set arbitrary attributes, but we can use it to set properties whose value isn’t a string, such as `onclick`, which can be set to a function to register a click event handler.

This allows the following style of registering event handlers:

```
<body>
  <script>
    document.body.appendChild(elt("button", {
      onclick: () => console.log("click")
    }, "The button"));
  </script>
</body>
```

The first component we’ll define is the part of the interface that displays the picture as a grid of colored boxes. This component is responsible for two things: showing a picture and communicating pointer events on that picture to the rest of the application.

As such, we can define it as a component that knows about only the current picture, not the whole application state. Because it doesn’t know how the application as a whole works, it cannot directly dispatch actions. Rather, when responding to pointer events, it calls a callback function provided by the code that created it, which will handle the application-specific parts.

```
const scale = 10;

class PictureCanvas {
  constructor(picture, pointerDown) {
    this.dom = elt("canvas", {
      onmousedown: event => this.mouse(event, pointerDown),
      ontouchstart: event => this.touch(event, pointerDown)
    });
    this.syncState(picture);
  }
  syncState(picture) {
    if (this.picture == picture) return;
    this.picture = picture;
    drawPicture(this.picture, this.dom, scale);
  }
}
```

We draw each pixel as a 10-by-10 square, as determined by the `scale` constant. To avoid unnecessary work, the component keeps track of its current picture and does a redraw only when `syncState` is given a new picture.

The actual drawing function sets the size of the canvas based on the scale and picture size and fills it with a series of squares, one for each pixel.

```
function drawPicture(picture, canvas, scale) {
  canvas.width = picture.width * scale;
  canvas.height = picture.height * scale;
  let cx = canvas.getContext("2d");

  for (let y = 0; y < picture.height; y++) {
    for (let x = 0; x < picture.width; x++) {
      cx.fillStyle = picture.pixel(x, y);
      cx.fillRect(x * scale, y * scale, scale, scale);
    }
  }
}
```

When the left mouse button is pressed while the mouse is over the picture canvas, the component calls the `pointerDown` callback, giving it the position of the pixel that was clicked—in picture coordinates. This will be used to implement mouse interaction with the picture. The callback may return another callback function to be notified when the pointer is moved to a different pixel while the button is held down.
> PictureCanvas.prototype.mouse = function(downEvent, onDown) { if (downEvent.button != 0) return; let pos = pointerPosition(downEvent, this.dom); let onMove = onDown(pos); if (!onMove) return; let move = moveEvent => { if (moveEvent.buttons == 0) { this.dom.removeEventListener("mousemove", move); } else { let newPos = pointerPosition(moveEvent, this.dom); if (newPos.x == pos.x && newPos.y == pos.y) return; pos = newPos; onMove(newPos); } }; this.dom.addEventListener("mousemove", move); }; function pointerPosition(pos, domNode) { let rect = domNode.getBoundingClientRect(); return {x: Math.floor((pos.clientX - rect.left) / scale), y: Math.floor((pos.clientY - rect.top) / scale)}; } Since we know the size of the pixels and we can use `getBoundingClientRect` to find the position of the canvas on the screen, it is possible to go from mouse event coordinates ( `clientX` and `clientY` ) to picture coordinates. These are always rounded down so that they refer to a specific pixel. With touch events, we have to do something similar, but using different events and making sure we call `preventDefault` on the `"touchstart"` event to prevent panning. > PictureCanvas.prototype.touch = function(startEvent, onDown) { let pos = pointerPosition(startEvent.touches[0], this.dom); let onMove = onDown(pos); startEvent.preventDefault(); if (!onMove) return; let move = moveEvent => { let newPos = pointerPosition(moveEvent.touches[0], this.dom); if (newPos.x == pos.x && newPos.y == pos.y) return; pos = newPos; onMove(newPos); }; let end = () => { this.dom.removeEventListener("touchmove", move); this.dom.removeEventListener("touchend", end); }; this.dom.addEventListener("touchmove", move); this.dom.addEventListener("touchend", end); }; For touch events, `clientX` and `clientY` aren’t available directly on the event object, but we can use the coordinates of the first touch object in the `touches` property. ## The application To make it possible to build the application piece by piece, we’ll implement the main component as a shell around a picture canvas and a dynamic set of tools and controls that we pass to its constructor. The controls are the interface elements that appear below the picture. They’ll be provided as an array of component constructors. The tools do things like drawing pixels or filling in an area. The application shows the set of available tools as a `<select>` field. The currently selected tool determines what happens when the user interacts with the picture with a pointer device. The set of available tools is provided as an object that maps the names that appear in the drop-down field to functions that implement the tools. Such functions get a picture position, a current application state, and a `dispatch` function as arguments. They may return a move handler function that gets called with a new position and a current state when the pointer moves to a different pixel.
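To make the shape of these tool functions concrete, here is a toy tool that only logs the color under the pointer and keeps logging while you drag. It is a sketch for illustration (the name `inspect` is ours); the editor's real tools follow below.
> // A sketch of the tool protocol: (position, state, dispatch) in,
// and optionally a move handler out. This toy tool never dispatches.
function inspect(pos, state, dispatch) {
  console.log("color at start:", state.picture.pixel(pos.x, pos.y));
  return (newPos, newState) => {
    console.log("color while dragging:",
                newState.picture.pixel(newPos.x, newPos.y));
  };
}
// It could be offered next to the real tools, for example as
// tools: {draw, fill, rectangle, pick, inspect}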
> class PixelEditor { constructor(state, config) { let {tools, controls, dispatch} = config; this.state = state; this.canvas = new PictureCanvas(state.picture, pos => { let tool = tools[this.state.tool]; let onMove = tool(pos, this.state, dispatch); if (onMove) return pos => onMove(pos, this.state); }); this.controls = controls.map( Control => new Control(state, config)); this.dom = elt("div", {}, this.canvas.dom, elt("br"), ...this.controls.reduce( (a, c) => a.concat(" ", c.dom), [])); } syncState(state) { this.state = state; this.canvas.syncState(state.picture); for (let ctrl of this.controls) ctrl.syncState(state); } } The pointer handler given to `PictureCanvas` calls the currently selected tool with the appropriate arguments and, if that returns a move handler, adapts it to also receive the state. All controls are constructed and stored in `this.controls` so that they can be updated when the application state changes. The call to `reduce` introduces spaces between the controls’ DOM elements. That way they don’t look so pressed together. The first control is the tool selection menu. It creates a `<select>` element with an option for each tool and sets up a `"change"` event handler that updates the application state when the user selects a different tool. > class ToolSelect { constructor(state, {tools, dispatch}) { this.select = elt("select", { onchange: () => dispatch({tool: this.select.value}) }, ...Object.keys(tools).map(name => elt("option", { selected: name == state.tool }, name))); this.dom = elt("label", null, "🖌 Tool: ", this.select); } syncState(state) { this.select.value = state.tool; } } By wrapping the label text and the field in a `<label>` element, we tell the browser that the label belongs to that field so that you can, for example, click the label to focus the field. We also need to be able to change the color, so let’s add a control for that. An HTML `<input>` element with a `type` attribute of `color` gives us a form field that is specialized for selecting colors. Such a field’s value is always a CSS color code in `"#RRGGBB"` format (red, green, and blue components, two digits per color). The browser will show a color picker interface when the user interacts with it. This control creates such a field and wires it up to stay synchronized with the application state’s `color` property. > class ColorSelect { constructor(state, {dispatch}) { this.input = elt("input", { type: "color", value: state.color, onchange: () => dispatch({color: this.input.value}) }); this.dom = elt("label", null, "🎨 Color: ", this.input); } syncState(state) { this.input.value = state.color; } } Before we can draw anything, we need to implement the tools that will control the functionality of mouse or touch events on the canvas. The most basic tool is the draw tool, which changes any pixel you click or tap to the currently selected color. It dispatches an action that updates the picture to a version in which the pointed-at pixel is given the currently selected color. > function draw(pos, state, dispatch) { function drawPixel({x, y}, state) { let drawn = {x, y, color: state.color}; dispatch({picture: state.picture.draw([drawn])}); } drawPixel(pos, state); return drawPixel; } The function immediately calls the `drawPixel` function but then also returns it so that it is called again for newly touched pixels when the user drags or swipes over the picture. To draw larger shapes, it can be useful to quickly create rectangles.
The `rectangle` tool draws a rectangle between the point where you start dragging and the point that you drag to. > function rectangle(start, state, dispatch) { function drawRectangle(pos) { let xStart = Math.min(start.x, pos.x); let yStart = Math.min(start.y, pos.y); let xEnd = Math.max(start.x, pos.x); let yEnd = Math.max(start.y, pos.y); let drawn = []; for (let y = yStart; y <= yEnd; y++) { for (let x = xStart; x <= xEnd; x++) { drawn.push({x, y, color: state.color}); } } dispatch({picture: state.picture.draw(drawn)}); } drawRectangle(start); return drawRectangle; } An important detail in this implementation is that when dragging, the rectangle is redrawn on the picture from the original state. That way, you can make the rectangle larger and smaller again while creating it, without the intermediate rectangles sticking around in the final picture. This is one of the reasons why immutable picture objects are useful—we’ll see another reason later. Implementing flood fill is somewhat more involved. This is a tool that fills the pixel under the pointer and all adjacent pixels that have the same color. “Adjacent” means directly horizontally or vertically adjacent, not diagonally. This picture illustrates the set of pixels colored when the flood fill tool is used at the marked pixel: Interestingly, the way we’ll do this looks a bit like the pathfinding code from Chapter 7. Whereas that code searched through a graph to find a route, this code searches through a grid to find all “connected” pixels. The problem of keeping track of a branching set of possible routes is similar. > const around = [{dx: -1, dy: 0}, {dx: 1, dy: 0}, {dx: 0, dy: -1}, {dx: 0, dy: 1}]; function fill({x, y}, state, dispatch) { let targetColor = state.picture.pixel(x, y); let drawn = [{x, y, color: state.color}]; for (let done = 0; done < drawn.length; done++) { for (let {dx, dy} of around) { let x = drawn[done].x + dx, y = drawn[done].y + dy; if (x >= 0 && x < state.picture.width && y >= 0 && y < state.picture.height && state.picture.pixel(x, y) == targetColor && !drawn.some(p => p.x == x && p.y == y)) { drawn.push({x, y, color: state.color}); } } } dispatch({picture: state.picture.draw(drawn)}); } The array of drawn pixels doubles as the function’s work list. For each pixel reached, we have to see whether any adjacent pixels have the same color and haven’t already been painted over. The loop counter lags behind the length of the `drawn` array as new pixels are added. Any pixels ahead of it still need to be explored. When it catches up with the length, no unexplored pixels remain, and the function is done. The final tool is a color picker, which allows you to point at a color in the picture to use it as the current drawing color. > function pick(pos, state, dispatch) { dispatch({color: state.picture.pixel(pos.x, pos.y)}); } We can now test our application! > <div></div> <script> let state = { tool: "draw", color: "#000000", picture: Picture.empty(60, 30, "#f0f0f0") }; let app = new PixelEditor(state, { tools: {draw, fill, rectangle, pick}, controls: [ToolSelect, ColorSelect], dispatch(action) { state = updateState(state, action); app.syncState(state); } }); document.querySelector("div").appendChild(app.dom); </script> ## Saving and loading When we’ve drawn our masterpiece, we’ll want to save it for later. We should add a button for downloading the current picture as an image file.
This control provides that button: > class SaveButton { constructor(state) { this.picture = state.picture; this.dom = elt("button", { onclick: () => this.save() }, "💾 Save"); } save() { let canvas = elt("canvas"); drawPicture(this.picture, canvas, 1); let link = elt("a", { href: canvas.toDataURL(), download: "pixelart.png" }); document.body.appendChild(link); link.click(); link.remove(); } syncState(state) { this.picture = state.picture; } } The component keeps track of the current picture so that it can access it when saving. To create the image file, it uses a `<canvas>` element that it draws the picture on (at a scale of one pixel per pixel). The `toDataURL` method on a canvas element creates a URL that starts with `data:` . Unlike `http:` and `https:` URLs, data URLs contain the whole resource in the URL. They are usually very long, but they allow us to create working links to arbitrary pictures, right here in the browser. To actually get the browser to download the picture, we then create a link element that points at this URL and has a `download` attribute. Such links, when clicked, make the browser show a file save dialog. We add that link to the document, simulate a click on it, and remove it again. You can do a lot with browser technology, but sometimes the way to do it is rather odd. And it gets worse. We’ll also want to be able to load existing image files into our application. To do that, we again define a button component. > class LoadButton { constructor(_, {dispatch}) { this.dom = elt("button", { onclick: () => startLoad(dispatch) }, "📁 Load"); } syncState() {} } function startLoad(dispatch) { let input = elt("input", { type: "file", onchange: () => finishLoad(input.files[0], dispatch) }); document.body.appendChild(input); input.click(); input.remove(); } To get access to a file on the user’s computer, we need the user to select the file through a file input field. But I don’t want the load button to look like a file input field, so we create the file input when the button is clicked and then pretend that this file input itself was clicked. When the user has selected a file, we can use `FileReader` to get access to its contents, again as a data URL. That URL can be used to create an `<img>` element, but because we can’t get direct access to the pixels in such an image, we can’t create a `Picture` object from that. > function finishLoad(file, dispatch) { if (file == null) return; let reader = new FileReader(); reader.addEventListener("load", () => { let image = elt("img", { onload: () => dispatch({ picture: pictureFromImage(image) }), src: reader.result }); }); reader.readAsDataURL(file); } To get access to the pixels, we must first draw the picture to a `<canvas>` element. The canvas context has a `getImageData` method that allows a script to read its pixels. So, once the picture is on the canvas, we can access it and construct a `Picture` object. 
> function pictureFromImage(image) { let width = Math.min(100, image.width); let height = Math.min(100, image.height); let canvas = elt("canvas", {width, height}); let cx = canvas.getContext("2d"); cx.drawImage(image, 0, 0); let pixels = []; let {data} = cx.getImageData(0, 0, width, height); function hex(n) { return n.toString(16).padStart(2, "0"); } for (let i = 0; i < data.length; i += 4) { let [r, g, b] = data.slice(i, i + 3); pixels.push("#" + hex(r) + hex(g) + hex(b)); } return new Picture(width, height, pixels); } We’ll limit the size of images to 100 by 100 pixels since anything bigger will look huge on our display and might slow down the interface. The `data` property of the object returned by `getImageData` is an array of color components. For each pixel in the rectangle specified by the arguments, it contains four values, which represent the red, green, blue, and alpha components of the pixel’s color, as numbers between 0 and 255. The alpha part represents opacity—when it is zero, the pixel is fully transparent, and when it is 255, it is fully opaque. For our purpose, we can ignore it. The two hexadecimal digits per component, as used in our color notation, correspond precisely to the 0 to 255 range—two base-16 digits can express 16² = 256 different numbers. The `toString` method of numbers can be given a base as argument, so `n.toString(16)` will produce a string representation in base 16. We have to make sure that each number takes up two digits, so the `hex` helper function calls `padStart` to add a leading zero when necessary. We can load and save now! That leaves one more feature before we’re done. ## Undo history Half of the process of editing is making little mistakes and correcting them. So an important feature in a drawing program is an undo history. To be able to undo changes, we need to store previous versions of the picture. Since it’s an immutable value, that is easy. But it does require an additional field in the application state. We’ll add a `done` array to keep previous versions of the picture. Maintaining this property requires a more complicated state update function that adds pictures to the array. But we don’t want to store every change, only changes a certain amount of time apart. To be able to do that, we’ll need a second property, `doneAt` , tracking the time at which we last stored a picture in the history. > function historyUpdateState(state, action) { if (action.undo == true) { if (state.done.length == 0) return state; return Object.assign({}, state, { picture: state.done[0], done: state.done.slice(1), doneAt: 0 }); } else if (action.picture && state.doneAt < Date.now() - 1000) { return Object.assign({}, state, action, { done: [state.picture, ...state.done], doneAt: Date.now() }); } else { return Object.assign({}, state, action); } } When the action is an undo action, the function takes the most recent picture from the history and makes that the current picture. It sets `doneAt` to zero so that the next change is guaranteed to store the picture back in the history, allowing you to revert to it another time if you want. Otherwise, if the action contains a new picture and the last time we stored something is more than a second (1000 milliseconds) ago, the `done` and `doneAt` properties are updated to store the previous picture. The undo button component doesn’t do much. It dispatches undo actions when clicked and disables itself when there is nothing to undo.
> class UndoButton { constructor(state, {dispatch}) { this.dom = elt("button", { onclick: () => dispatch({undo: true}), disabled: state.done.length == 0 }, "⮪ Undo"); } syncState(state) { this.dom.disabled = state.done.length == 0; } } ## Let’s draw To set up the application, we need to create a state, a set of tools, a set of controls, and a dispatch function. We can pass them to the `PixelEditor` constructor to create the main component. Since we’ll need to create several editors in the exercises, we first define some bindings. > const startState = { tool: "draw", color: "#000000", picture: Picture.empty(60, 30, "#f0f0f0"), done: [], doneAt: 0 }; const baseTools = {draw, fill, rectangle, pick}; const baseControls = [ ToolSelect, ColorSelect, SaveButton, LoadButton, UndoButton ]; function startPixelEditor({state = startState, tools = baseTools, controls = baseControls}) { let app = new PixelEditor(state, { tools, controls, dispatch(action) { state = historyUpdateState(state, action); app.syncState(state); } }); return app.dom; } When destructuring an object or array, you can use `=` after a binding name to give the binding a default value, which is used when the property is missing or holds `undefined` . The `startPixelEditor` function makes use of this to accept an object with a number of optional properties as an argument. If you don’t provide a `tools` property, for example, `tools` will be bound to `baseTools` . This is how we get an actual editor on the screen: > <div></div> <script> document.querySelector("div") .appendChild(startPixelEditor({})); </script> Go ahead and draw something. I’ll wait. ## Why is this so hard? Browser technology is amazing. It provides a powerful set of interface building blocks, ways to style and manipulate them, and tools to inspect and debug your applications. The software you write for the browser can be run on almost every computer and phone on the planet. At the same time, browser technology is ridiculous. You have to learn a large number of silly tricks and obscure facts to master it, and the default programming model it provides is so problematic that most programmers prefer to cover it in several layers of abstraction rather than deal with it directly. And though the situation is definitely improving, it mostly does so in the form of more elements being added to address shortcomings—creating even more complexity. A feature used by a million websites can’t really be replaced. Even if it could, it would be hard to decide what it should be replaced with. Technology never exists in a vacuum—we’re constrained by our tools and the social, economic, and historical factors that produced them. This can be annoying, but it is generally more productive to try to build a good understanding of how the existing technical reality works—and why it is the way it is—than to rage against it or hold out for another reality. New abstractions can be helpful. The component model and data flow convention I used in this chapter is a crude form of that. As mentioned, there are libraries that try to make user interface programming more pleasant. At the time of writing, React and Angular are popular choices, but there’s a whole cottage industry of such frameworks. If you’re interested in programming web applications, I recommend investigating a few of them to understand how they work and what benefits they provide. There is still room for improvement in our program. Let’s add a few more features as exercises. ### Keyboard bindings Add keyboard shortcuts to the application.
The first letter of a tool’s name selects the tool, and control-Z or command-Z activates undo. Do this by modifying the `PixelEditor` component. Add a `tabIndex` property of 0 to the wrapping `<div>` element so that it can receive keyboard focus. Note that the property corresponding to the `tabindex` attribute is called `tabIndex` , with a capital I, and our `elt` function expects property names. Register the key event handlers directly on that element. This means you have to click, touch, or tab to the application before you can interact with it with the keyboard. Remember that keyboard events have `ctrlKey` and `metaKey` (for the command key on Mac) properties that you can use to see whether those keys are held down. > <div></div> <script> // The original PixelEditor class. Extend the constructor. class PixelEditor { constructor(state, config) { let {tools, controls, dispatch} = config; this.state = state; this.canvas = new PictureCanvas(state.picture, pos => { let tool = tools[this.state.tool]; let onMove = tool(pos, this.state, dispatch); if (onMove) { return pos => onMove(pos, this.state, dispatch); } }); this.controls = controls.map( Control => new Control(state, config)); this.dom = elt("div", {}, this.canvas.dom, elt("br"), ...this.controls.reduce( (a, c) => a.concat(" ", c.dom), [])); } syncState(state) { this.state = state; this.canvas.syncState(state.picture); for (let ctrl of this.controls) ctrl.syncState(state); } } document.querySelector("div") .appendChild(startPixelEditor({})); </script> The `key` property of events for letter keys will be the lowercase letter itself, if shift isn’t being held. We’re not interested in key events with shift here. A `"keydown"` handler can inspect its event object to see whether it matches any of the shortcuts. You can automatically get the list of first letters from the `tools` object so that you don’t have to write them out. When the key event matches a shortcut, call `preventDefault` on it and dispatch the appropriate action. ### Efficient drawing During drawing, the majority of work that our application does happens in `drawPicture` . Creating a new state and updating the rest of the DOM isn’t very expensive, but repainting all the pixels on the canvas is quite a bit of work. Find a way to make the `syncState` method of `PictureCanvas` faster by redrawing only the pixels that actually changed. Remember that `drawPicture` is also used by the save button, so if you change it, either make sure the changes don’t break the old use or create a new version with a different name. Also note that changing the size of a `<canvas>` element, by setting its `width` or `height` properties, clears it, making it entirely transparent again. > <div></div> <script> // Change this method PictureCanvas.prototype.syncState = function(picture) { if (this.picture == picture) return; this.picture = picture; drawPicture(this.picture, this.dom, scale); }; // You may want to use or change this as well function drawPicture(picture, canvas, scale) { canvas.width = picture.width * scale; canvas.height = picture.height * scale; let cx = canvas.getContext("2d"); for (let y = 0; y < picture.height; y++) { for (let x = 0; x < picture.width; x++) { cx.fillStyle = picture.pixel(x, y); cx.fillRect(x * scale, y * scale, scale, scale); } } } document.querySelector("div") .appendChild(startPixelEditor({})); </script> This exercise is a good example of how immutable data structures can make code faster.
Because we have both the old and the new picture, we can compare them and redraw only the pixels that changed color, saving more than 99 percent of the drawing work in most cases. You can either write a new function `updatePicture` or have `drawPicture` take an extra argument, which may be undefined or the previous picture. For each pixel, the function checks whether a previous picture was passed with the same color at this position and skips the pixel when that is the case. Because the canvas gets cleared when we change its size, you should also avoid touching its `width` and `height` properties when the old picture and the new picture have the same size. If they are different, which will happen when a new picture has been loaded, you can set the binding holding the old picture to null after changing the canvas size because you shouldn’t skip any pixels after you’ve changed the canvas size. ### Circles Define a tool called `circle` that draws a filled circle when you drag. The center of the circle lies at the point where the drag or touch gesture starts, and its radius is determined by the distance dragged. > <div></div> <script> function circle(pos, state, dispatch) { // Your code here } let dom = startPixelEditor({ tools: Object.assign({}, baseTools, {circle}) }); document.querySelector("div").appendChild(dom); </script> You can take some inspiration from the `rectangle` tool. Like that tool, you’ll want to keep drawing on the starting picture, rather than the current picture, when the pointer moves. To figure out which pixels to color, you can use the Pythagorean theorem. First figure out the distance between the current pointer position and the start position by taking the square root ( `Math.sqrt` ) of the sum of the square ( `Math.pow(x, 2)` ) of the difference in x-coordinates and the square of the difference in y-coordinates. Then loop over a square of pixels around the start position, whose sides are at least twice the radius, and color those that are within the circle’s radius, again using the Pythagorean formula to figure out their distance from the center. Make sure you don’t try to color pixels that are outside of the picture’s boundaries. ### Proper lines This is a more advanced exercise than the preceding two, and it will require you to design a solution to a nontrivial problem. Make sure you have plenty of time and patience before starting to work on this exercise, and do not get discouraged by initial failures. On most browsers, when you select the `draw` tool and quickly drag across the picture, you don’t get a closed line. Rather, you get dots with gaps between them because the `"mousemove"` or `"touchmove"` events did not fire quickly enough to hit every pixel. Improve the `draw` tool to make it draw a full line. This means you have to make the motion handler function remember the previous position and connect that to the current one. To do this, since the pixels can be an arbitrary distance apart, you’ll have to write a general line drawing function. A line between two pixels is a connected chain of pixels, as straight as possible, going from the start to the end. Diagonally adjacent pixels count as connected. So a slanted line should look like the picture on the left, not the picture on the right. Finally, if we have code that draws a line between two arbitrary points, we might as well use it to also define a `line` tool, which draws a straight line between the start and end of a drag. > <div></div> <script> // The old draw tool. Rewrite this.
function draw(pos, state, dispatch) { function drawPixel({x, y}, state) { let drawn = {x, y, color: state.color}; dispatch({picture: state.picture.draw([drawn])}); } drawPixel(pos, state); return drawPixel; } function line(pos, state, dispatch) { // Your code here } let dom = startPixelEditor({ tools: {draw, line, fill, rectangle, pick} }); document.querySelector("div").appendChild(dom); </script> The thing about the problem of drawing a pixelated line is that it is really four similar but slightly different problems. Drawing a horizontal line from the left to the right is easy—you loop over the x-coordinates and color a pixel at every step. If the line has a slight slope (less than 45 degrees or ¼π radians), you can interpolate the y-coordinate along the slope. You still need one pixel per x position, with the y position of those pixels determined by the slope. But as soon as your slope goes across 45 degrees, you need to switch the way you treat the coordinates. You now need one pixel per y position since the line goes up more than it goes left. And then, when you cross 135 degrees, you have to go back to looping over the x-coordinates, but from right to left. You don’t actually have to write four loops. Since drawing a line from A to B is the same as drawing a line from B to A, you can swap the start and end positions for lines going from right to left and treat them as going left to right. So you need two different loops. The first thing your line drawing function should do is check whether the difference between the x-coordinates is larger than the difference between the y-coordinates. If it is, this is a horizontal-ish line, and if not, a vertical-ish one. Make sure you compare the absolute values of the x and y difference, which you can get with `Math.abs` . Once you know along which axis you will be looping, you can check whether the start point has a higher coordinate along that axis than the endpoint and swap them if necessary. A succinct way to swap the values of two bindings in JavaScript uses destructuring assignment like this: > [start, end] = [end, start]; Then you can compute the slope of the line, which determines the amount the coordinate on the other axis changes for each step you take along your main axis. With that, you can run a loop along the main axis while also tracking the corresponding position on the other axis, and you can draw pixels on every iteration. Make sure you round the non-main axis coordinates since they are likely to be fractional and the `draw` method doesn’t respond well to fractional coordinates. Drawing is deception. Browsers give us several ways to display graphics. The simplest way is to use styles to position and color regular DOM elements. This can get you quite far, as the game in the previous chapter showed. By adding partially transparent background images to the nodes, we can make them look exactly the way we want. It is even possible to rotate or skew nodes with the `transform` style. But we’d be using the DOM for something that it wasn’t originally designed for. Some tasks, such as drawing a line between arbitrary points, are extremely awkward to do with regular HTML elements. There are two alternatives. The first is DOM-based but utilizes Scalable Vector Graphics (SVG), rather than HTML. Think of SVG as a document-markup dialect that focuses on shapes rather than text. You can embed an SVG document directly in an HTML document or include it with an `<img>` tag. The second alternative is called a canvas.
A canvas is a single DOM element that encapsulates a picture. It provides a programming interface for drawing shapes onto the space taken up by the node. The main difference between a canvas and an SVG picture is that in SVG the original description of the shapes is preserved so that they can be moved or resized at any time. A canvas, on the other hand, converts the shapes to pixels (colored dots on a raster) as soon as they are drawn and does not remember what these pixels represent. The only way to move a shape on a canvas is to clear the canvas (or the part of the canvas around the shape) and redraw it with the shape in a new position. ## SVG This book will not go into SVG in detail, but I will briefly explain how it works. At the end of the chapter, I’ll come back to the trade-offs that you must consider when deciding which drawing mechanism is appropriate for a given application. This is an HTML document with a simple SVG picture in it: > <p>Normal HTML here.</p> <svg xmlns="http://www.w3.org/2000/svg"> <circle r="50" cx="50" cy="50" fill="red"/> <rect x="120" y="5" width="90" height="90" stroke="blue" fill="none"/> </svg> The `xmlns` attribute changes an element (and its children) to a different XML namespace. This namespace, identified by a URL, specifies the dialect that we are currently speaking. The `<circle>` and `<rect>` tags, which do not exist in HTML, do have a meaning in SVG—they draw shapes using the style and position specified by their attributes. These tags create DOM elements, just like HTML tags, that scripts can interact with. For example, this changes the `<circle>` element to be colored cyan instead: > let circle = document.querySelector("circle"); circle.setAttribute("fill", "cyan"); Canvas graphics can be drawn onto a `<canvas>` element. You can give such an element `width` and `height` attributes to determine its size in pixels. A new canvas is empty, meaning it is entirely transparent and thus shows up as empty space in the document. The `<canvas>` tag is intended to allow different styles of drawing. To get access to an actual drawing interface, we first need to create a context, an object whose methods provide the drawing interface. There are currently two widely supported drawing styles: `"2d"` for two-dimensional graphics and `"webgl"` for three-dimensional graphics through the OpenGL interface. This book won’t discuss WebGL—we’ll stick to two dimensions. But if you are interested in three-dimensional graphics, I do encourage you to look into WebGL. It provides a direct interface to graphics hardware and allows you to render even complicated scenes efficiently, using JavaScript. You create a context with the `getContext` method on the `<canvas>` DOM element. > <p>Before canvas.</p> <canvas width="120" height="60"></canvas> <p>After canvas.</p> <script> let canvas = document.querySelector("canvas"); let context = canvas.getContext("2d"); context.fillStyle = "red"; context.fillRect(10, 10, 100, 50); </script> After creating the context object, the example draws a red rectangle 100 pixels wide and 50 pixels high, with its top-left corner at coordinates (10,10). Just like in HTML (and SVG), the coordinate system that the canvas uses puts (0,0) at the top-left corner, and the positive y-axis goes down from there. So (10,10) is 10 pixels below and to the right of the top-left corner.
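Because SVG shapes stay in the DOM, scripts can also react to events on them and keep changing their attributes long after they were drawn, something a canvas cannot do once its pixels are on the screen. A small sketch:
> // A sketch: the circle from the SVG example above stays a live DOM
// element, so we can toggle its fill whenever it is clicked.
let svgCircle = document.querySelector("circle");
svgCircle.addEventListener("click", () => {
  let current = svgCircle.getAttribute("fill");
  svgCircle.setAttribute("fill", current == "cyan" ? "red" : "cyan");
});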
## Lines and surfaces In the canvas interface, a shape can be filled, meaning its area is given a certain color or pattern, or it can be stroked, which means a line is drawn along its edge. The same terminology is used by SVG. The `fillRect` method fills a rectangle. It takes first the x- and y-coordinates of the rectangle’s top-left corner, then its width, and then its height. A similar method, `strokeRect` , draws the outline of a rectangle. Neither method takes any further parameters. The color of the fill, thickness of the stroke, and so on, are not determined by an argument to the method (as you might reasonably expect) but rather by properties of the context object. The `fillStyle` property controls the way shapes are filled. It can be set to a string that specifies a color, using the color notation used by CSS. The `strokeStyle` property works similarly but determines the color used for a stroked line. The width of that line is determined by the `lineWidth` property, which may contain any positive number. > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); cx.strokeStyle = "blue"; cx.strokeRect(5, 5, 50, 50); cx.lineWidth = 5; cx.strokeRect(135, 5, 50, 50); </script> When no `width` or `height` attribute is specified, as in the example, a canvas element gets a default width of 300 pixels and height of 150 pixels. ## Paths A path is a sequence of lines. The 2D canvas interface takes a peculiar approach to describing such a path. It is done entirely through side effects. Paths are not values that can be stored and passed around. Instead, if you want to do something with a path, you make a sequence of method calls to describe its shape. > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); cx.beginPath(); for (let y = 10; y < 100; y += 10) { cx.moveTo(10, y); cx.lineTo(90, y); } cx.stroke(); </script> This example creates a path with a number of horizontal line segments and then strokes it using the `stroke` method. Each segment created with `lineTo` starts at the path’s current position. That position is usually the end of the last segment, unless `moveTo` was called. In that case, the next segment would start at the position passed to `moveTo` . When filling a path (using the `fill` method), each shape is filled separately. A path can contain multiple shapes—each `moveTo` motion starts a new one. But the path needs to be closed (meaning its start and end are in the same position) before it can be filled. If the path is not already closed, a line is added from its end to its start, and the shape enclosed by the completed path is filled. > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); cx.beginPath(); cx.moveTo(50, 10); cx.lineTo(10, 70); cx.lineTo(90, 70); cx.fill(); </script> This example draws a filled triangle. Note that only two of the triangle’s sides are explicitly drawn. The third, from the bottom-right corner back to the top, is implied and wouldn’t be there when you stroke the path. You could also use the `closePath` method to explicitly close a path by adding an actual line segment back to the path’s start. This segment is drawn when stroking the path.
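The difference only becomes visible when stroking: the implicit closing that `fill` performs does not add a stroked segment, while `closePath` does. The following sketch strokes the same triangle twice, left open on the left and explicitly closed on the right.
> <canvas></canvas> <script>
// A sketch: the left triangle misses its third side; the right one
// calls closePath first, so all three sides are stroked.
let cx = document.querySelector("canvas").getContext("2d");
cx.beginPath();
cx.moveTo(50, 10); cx.lineTo(10, 70); cx.lineTo(90, 70);
cx.stroke();
cx.beginPath();
cx.moveTo(150, 10); cx.lineTo(110, 70); cx.lineTo(190, 70);
cx.closePath();
cx.stroke();
</script>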
Imagine this control point as attracting the line, giving it its curve. The line won’t go through the control point, but its direction at the start and end points will be such that a straight line in that direction would point toward the control point. The following example illustrates this: > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); cx.beginPath(); cx.moveTo(10, 90); // control=(60,10) goal=(90,90) cx.quadraticCurveTo(60, 10, 90, 90); cx.lineTo(60, 10); cx.closePath(); cx.stroke(); </script> We draw a quadratic curve from the left to the right, with (60,10) as control point, and then draw two line segments going through that control point and back to the start of the line. The result somewhat resembles a Star Trek insignia. You can see the effect of the control point: the lines leaving the lower corners start off in the direction of the control point and then curve toward their target. The `bezierCurveTo` method draws a similar kind of curve. Instead of a single control point, this one has two—one for each of the line’s endpoints. Here is a similar sketch to illustrate the behavior of such a curve: > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); cx.beginPath(); cx.moveTo(10, 90); // control1=(10,10) control2=(90,10) goal=(50,90) cx.bezierCurveTo(10, 10, 90, 10, 50, 90); cx.lineTo(90, 10); cx.lineTo(10, 10); cx.closePath(); cx.stroke(); </script> The two control points specify the direction at both ends of the curve. The farther they are away from their corresponding point, the more the curve will “bulge” in that direction. Such curves can be hard to work with—it’s not always clear how to find the control points that provide the shape you are looking for. Sometimes you can compute them, and sometimes you’ll just have to find a suitable value by trial and error. The `arc` method is a way to draw a line that curves along the edge of a circle. It takes a pair of coordinates for the arc’s center, a radius, and then a start angle and end angle. Those last two parameters make it possible to draw only part of the circle. The angles are measured in radians, not degrees. This means a full circle has an angle of 2π, or `2 * Math.PI` , which is about 6.28. The angle starts counting at the point to the right of the circle’s center and goes clockwise from there. You can use a start of 0 and an end bigger than 2π (say, 7) to draw a full circle. > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); cx.beginPath(); // center=(50,50) radius=40 angle=0 to 7 cx.arc(50, 50, 40, 0, 7); // center=(150,50) radius=40 angle=0 to ½π cx.arc(150, 50, 40, 0, 0.5 * Math.PI); cx.stroke(); </script> The resulting picture contains a line from the right of the full circle (first call to `arc` ) to the right of the quarter-circle (second call). Like other path-drawing methods, a line drawn with `arc` is connected to the previous path segment. You can call `moveTo` or start a new path to avoid this. ## Drawing a pie chart Imagine you’ve just taken a job at EconomiCorp, Inc., and your first assignment is to draw a pie chart of its customer satisfaction survey results. The `results` binding contains an array of objects that represent the survey responses.
> const results = [ {name: "Satisfied", count: 1043, color: "lightblue"}, {name: "Neutral", count: 563, color: "lightgreen"}, {name: "Unsatisfied", count: 510, color: "pink"}, {name: "No comment", count: 175, color: "silver"} ]; To draw a pie chart, we draw a number of pie slices, each made up of an arc and a pair of lines to the center of that arc. We can compute the angle taken up by each arc by dividing a full circle (2π) by the total number of responses and then multiplying that number (the angle per response) by the number of people who picked a given choice. > <canvas width="200" height="200"></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); let total = results .reduce((sum, {count}) => sum + count, 0); // Start at the top let currentAngle = -0.5 * Math.PI; for (let result of results) { let sliceAngle = (result.count / total) * 2 * Math.PI; cx.beginPath(); // center=100,100, radius=100 // from current angle, clockwise by slice's angle cx.arc(100, 100, 100, currentAngle, currentAngle + sliceAngle); currentAngle += sliceAngle; cx.lineTo(100, 100); cx.fillStyle = result.color; cx.fill(); } </script> But a chart that doesn’t tell us what the slices mean isn’t very helpful. We need a way to draw text to the canvas. ## Text A 2D canvas drawing context provides the methods `fillText` and `strokeText` . The latter can be useful for outlining letters, but usually `fillText` is what you need. It will fill the outline of the given text with the current `fillStyle` . > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); cx.font = "28px Georgia"; cx.fillStyle = "fuchsia"; cx.fillText("I can draw text, too!", 10, 50); </script> You can specify the size, style, and font of the text with the `font` property. This example just gives a font size and family name. It is also possible to add `italic` or `bold` to the start of the string to select a style. The last two arguments to `fillText` and `strokeText` provide the position at which the font is drawn. By default, they indicate the position of the start of the text’s alphabetic baseline, which is the line that letters “stand” on, not counting hanging parts in letters such as j or p. You can change the horizontal position by setting the `textAlign` property to `"end"` or `"center"` and the vertical position by setting `textBaseline` to `"top"` , `"middle"` , or `"bottom"` . We’ll come back to our pie chart, and the problem of labeling the slices, in the exercises at the end of the chapter. ## Images In computer graphics, a distinction is often made between vector graphics and bitmap graphics. The first is what we have been doing so far in this chapter—specifying a picture by giving a logical description of shapes. Bitmap graphics, on the other hand, don’t specify actual shapes but rather work with pixel data (rasters of colored dots). The `drawImage` method allows us to draw pixel data onto a canvas. This pixel data can originate from an `<img>` element or from another canvas. The following example creates a detached `<img>` element and loads an image file into it. But it cannot immediately start drawing from this picture because the browser may not have loaded it yet. To deal with this, we register a `"load"` event handler and do the drawing after the image has loaded.
> <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); let img = document.createElement("img"); img.src = "img/hat.png"; img.addEventListener("load", () => { for (let x = 10; x < 200; x += 30) { cx.drawImage(img, x, 10); } }); </script> By default, `drawImage` will draw the image at its original size. You can also give it two additional arguments to set a different width and height. When `drawImage` is given nine arguments, it can be used to draw only a fragment of an image. The second through fifth arguments indicate the rectangle (x, y, width, and height) in the source image that should be copied, and the sixth to ninth arguments give the rectangle (on the canvas) into which it should be copied. This can be used to pack multiple sprites (image elements) into a single image file and then draw only the part you need. For example, we have this picture containing a game character in multiple poses: By alternating which pose we draw, we can show an animation that looks like a walking character. To animate a picture on a canvas, the `clearRect` method is useful. It resembles `fillRect` , but instead of coloring the rectangle, it makes it transparent, removing the previously drawn pixels. We know that each sprite, each subpicture, is 24 pixels wide and 30 pixels high. The following code loads the image and then sets up an interval (repeated timer) to draw the next frame: > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); let img = document.createElement("img"); img.src = "img/player.png"; let spriteW = 24, spriteH = 30; img.addEventListener("load", () => { let cycle = 0; setInterval(() => { cx.clearRect(0, 0, spriteW, spriteH); cx.drawImage(img, // source rectangle cycle * spriteW, 0, spriteW, spriteH, // destination rectangle 0, 0, spriteW, spriteH); cycle = (cycle + 1) % 8; }, 120); }); </script> The `cycle` binding tracks our position in the animation. For each frame, it is incremented and then clipped back to the 0 to 7 range by using the remainder operator. This binding is then used to compute the x-coordinate that the sprite for the current pose has in the picture. ## Transformation But what if we want our character to walk to the left instead of to the right? We could draw another set of sprites, of course. But we can also instruct the canvas to draw the picture the other way round. Calling the `scale` method will cause anything drawn after it to be scaled. This method takes two parameters, one to set a horizontal scale and one to set a vertical scale. > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); cx.scale(3, .5); cx.beginPath(); cx.arc(50, 50, 40, 0, 7); cx.lineWidth = 3; cx.stroke(); </script> Scaling will cause everything about the drawn image, including the line width, to be stretched out or squeezed together as specified. Scaling by a negative amount will flip the picture around. The flipping happens around point (0,0), which means it will also flip the direction of the coordinate system. When a horizontal scaling of -1 is applied, a shape drawn at x position 100 will end up at what used to be position -100. So to turn a picture around, we can’t simply add `cx.scale(-1, 1)` before the call to `drawImage` because that would move our picture outside of the canvas, where it won’t be visible. You could adjust the coordinates given to `drawImage` to compensate for this by drawing the image at x position -50 instead of 0.
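As a sketch of that first option, the following assumes an image and context like those in the sprite example above (after the image has loaded) and compensates for the flipped x-axis in the coordinates passed to `drawImage` ; drawing at `-spriteW` puts the mirrored sprite back at the left edge of the canvas.
> // A sketch: flip the x-axis, draw at a negative x to compensate,
// then flip back so later drawing is unaffected.
cx.scale(-1, 1);
cx.drawImage(img, 0, 0, spriteW, spriteH,
             -spriteW, 0, spriteW, spriteH);
cx.scale(-1, 1);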
Another solution, which doesn’t require the code that does the drawing to know about the scale change, is to adjust the axis around which the scaling happens. There are several other methods besides `scale` that influence the coordinate system for a canvas. You can rotate subsequently drawn shapes with the `rotate` method and move them with the `translate` method. The interesting—and confusing—thing is that these transformations stack, meaning that each one happens relative to the previous transformations. So if we translate by 10 horizontal pixels twice, everything will be drawn 20 pixels to the right. If we first move the center of the coordinate system to (50,50) and then rotate by 20 degrees (about 0.1π radians), that rotation will happen around point (50,50). But if we first rotate by 20 degrees and then translate by (50,50), the translation will happen in the rotated coordinate system and thus produce a different orientation. The order in which transformations are applied matters. To flip a picture around the vertical line at a given x position, we can do the following: > function flipHorizontally(context, around) { context.translate(around, 0); context.scale(-1, 1); context.translate(-around, 0); } We move the y-axis to where we want our mirror to be, apply the mirroring, and finally move the y-axis back to its proper place in the mirrored universe. The following picture explains why this works: This shows the coordinate systems before and after mirroring across the central line. The triangles are numbered to illustrate each step. If we draw a triangle at a positive x position, it would, by default, be in the place where triangle 1 is. A call to `flipHorizontally` first does a translation to the right, which gets us to triangle 2. It then scales, flipping the triangle over to position 3. This is not where it should be, if it were mirrored in the given line. The second `translate` call fixes this—it “cancels” the initial translation and makes triangle 4 appear exactly where it should. We can now draw a mirrored character at position (100,0) by flipping the world around the character’s vertical center. > <canvas></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); let img = document.createElement("img"); img.src = "img/player.png"; let spriteW = 24, spriteH = 30; img.addEventListener("load", () => { flipHorizontally(cx, 100 + spriteW / 2); cx.drawImage(img, 0, 0, spriteW, spriteH, 100, 0, spriteW, spriteH); }); </script> ## Storing and clearing transformations Transformations stick around. Everything else we draw after drawing that mirrored character would also be mirrored. That might be inconvenient. It is possible to save the current transformation, do some drawing and transforming, and then restore the old transformation. This is usually the proper thing to do for a function that needs to temporarily transform the coordinate system. First, we save whatever transformation the code that called the function was using. Then the function does its thing, adding more transformations on top of the current transformation. Finally, we revert to the transformation we started with. The `save` and `restore` methods on the 2D canvas context do this transformation management. They conceptually keep a stack of transformation states. When you call `save` , the current state is pushed onto the stack, and when you call `restore` , the state on top of the stack is taken off and used as the context’s current transformation.
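A small sketch of that pattern: a helper (the name `drawRotatedSquare` is ours) that rotates only its own drawing and leaves the caller's coordinate system exactly as it found it.
> // A sketch: save() and restore() keep the rotation local to this helper.
function drawRotatedSquare(cx, x, y, size, angle) {
  cx.save();
  cx.translate(x + size / 2, y + size / 2); // rotate around the center
  cx.rotate(angle);
  cx.fillRect(-size / 2, -size / 2, size, size);
  cx.restore();
}
// Anything drawn afterwards uses the original, untransformed coordinates.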
You can also call `resetTransform` to fully reset the transformation. The `branch` function in the following example illustrates what you can do with a function that changes the transformation and then calls a function (in this case itself), which continues drawing with the given transformation. This function draws a treelike shape by drawing a line, moving the center of the coordinate system to the end of the line, and calling itself twice—first rotated to the left and then rotated to the right. Every call reduces the length of the branch drawn, and the recursion stops when the length drops below 8. > <canvas width="600" height="300"></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); function branch(length, angle, scale) { cx.fillRect(0, 0, 1, length); if (length < 8) return; cx.save(); cx.translate(0, length); cx.rotate(-angle); branch(length * scale, angle, scale); cx.rotate(2 * angle); branch(length * scale, angle, scale); cx.restore(); } cx.translate(300, 0); branch(60, 0.5, 0.8); </script> If the calls to `save` and `restore` were not there, the second recursive call to `branch` would end up with the position and rotation created by the first call. It wouldn’t be connected to the current branch but rather to the innermost, rightmost branch drawn by the first call. The resulting shape might also be interesting, but it is definitely not a tree. ## Back to the game We now know enough about canvas drawing to start working on a canvas-based display system for the game from the previous chapter. The new display will no longer be showing just colored boxes. Instead, we’ll use `drawImage` to draw pictures that represent the game’s elements. We define another display object type called `CanvasDisplay` , supporting the same interface as `DOMDisplay` from Chapter 16, namely, the methods `syncState` and `clear` . This object keeps a little more information than `DOMDisplay` . Rather than using the scroll position of its DOM element, it tracks its own viewport, which tells us what part of the level we are currently looking at. Finally, it keeps a `flipPlayer` property so that even when the player is standing still, it keeps facing the direction it last moved in. > class CanvasDisplay { constructor(parent, level) { this.canvas = document.createElement("canvas"); this.canvas.width = Math.min(600, level.width * scale); this.canvas.height = Math.min(450, level.height * scale); parent.appendChild(this.canvas); this.cx = this.canvas.getContext("2d"); this.flipPlayer = false; this.viewport = { left: 0, top: 0, width: this.canvas.width / scale, height: this.canvas.height / scale }; } clear() { this.canvas.remove(); } } The `syncState` method first computes a new viewport and then draws the game scene at the appropriate position. > CanvasDisplay.prototype.syncState = function(state) { this.updateViewport(state); this.clearDisplay(state.status); this.drawBackground(state.level); this.drawActors(state.actors); }; Contrary to `DOMDisplay` , this display style does have to redraw the background on every update. Because shapes on a canvas are just pixels, after we draw them there is no good way to move them (or remove them). The only way to update the canvas display is to clear it and redraw the scene. We may also have scrolled, which requires the background to be in a different position. The `updateViewport` method is similar to `DOMDisplay` ’s `scrollPlayerIntoView` method. 
It checks whether the player is too close to the edge of the screen and moves the viewport when this is the case. > CanvasDisplay.prototype.updateViewport = function(state) { let view = this.viewport, margin = view.width / 3; let player = state.player; let center = player.pos.plus(player.size.times(0.5)); if (center.x < view.left + margin) { view.left = Math.max(center.x - margin, 0); } else if (center.x > view.left + view.width - margin) { view.left = Math.min(center.x + margin - view.width, state.level.width - view.width); } if (center.y < view.top + margin) { view.top = Math.max(center.y - margin, 0); } else if (center.y > view.top + view.height - margin) { view.top = Math.min(center.y + margin - view.height, state.level.height - view.height); } }; The calls to `Math.max` and `Math.min` ensure that the viewport does not end up showing space outside of the level. `Math.max(x, 0)` makes sure the resulting number is not less than zero. `Math.min` similarly guarantees that a value stays below a given bound. When clearing the display, we’ll use a slightly different color depending on whether the game is won (brighter) or lost (darker). > CanvasDisplay.prototype.clearDisplay = function(status) { if (status == "won") { this.cx.fillStyle = "rgb(68, 191, 255)"; } else if (status == "lost") { this.cx.fillStyle = "rgb(44, 136, 214)"; } else { this.cx.fillStyle = "rgb(52, 166, 251)"; } this.cx.fillRect(0, 0, this.canvas.width, this.canvas.height); }; To draw the background, we run through the tiles that are visible in the current viewport, using the same trick used in the `touches` method from the previous chapter. > let otherSprites = document.createElement("img"); otherSprites.src = "img/sprites.png"; CanvasDisplay.prototype.drawBackground = function(level) { let {left, top, width, height} = this.viewport; let xStart = Math.floor(left); let xEnd = Math.ceil(left + width); let yStart = Math.floor(top); let yEnd = Math.ceil(top + height); for (let y = yStart; y < yEnd; y++) { for (let x = xStart; x < xEnd; x++) { let tile = level.rows[y][x]; if (tile == "empty") continue; let screenX = (x - left) * scale; let screenY = (y - top) * scale; let tileX = tile == "lava" ? scale : 0; this.cx.drawImage(otherSprites, tileX, 0, scale, scale, screenX, screenY, scale, scale); } } }; Tiles that are not empty are drawn with `drawImage` . The `otherSprites` image contains the pictures used for elements other than the player. It contains, from left to right, the wall tile, the lava tile, and the sprite for a coin. Background tiles are 20 by 20 pixels since we will use the same scale that we used in `DOMDisplay` . Thus, the offset for lava tiles is 20 (the value of the `scale` binding), and the offset for walls is 0. We don’t bother waiting for the sprite image to load. Calling `drawImage` with an image that hasn’t been loaded yet will simply do nothing. Thus, we might fail to draw the game properly for the first few frames, while the image is still loading, but that is not a serious problem. Since we keep updating the screen, the correct scene will appear as soon as the loading finishes. The walking character shown earlier will be used to represent the player. The code that draws it needs to pick the right sprite and direction based on the player’s current motion. The first eight sprites contain a walking animation. When the player is moving along a floor, we cycle through them based on the current time. We want to switch frames every 60 milliseconds, so the time is divided by 60 first. 
When the player is standing still, we draw the ninth sprite. During jumps, which are recognized by the fact that the vertical speed is not zero, we use the tenth, rightmost sprite. Because the sprites are slightly wider than the player object—24 instead of 16 pixels to allow some space for feet and arms—the method has to adjust the x-coordinate and width by a given amount ( `playerXOverlap` ). > let playerSprites = document.createElement("img"); playerSprites.src = "img/player.png"; const playerXOverlap = 4; CanvasDisplay.prototype.drawPlayer = function(player, x, y, width, height){ width += playerXOverlap * 2; x -= playerXOverlap; if (player.speed.x != 0) { this.flipPlayer = player.speed.x < 0; } let tile = 8; if (player.speed.y != 0) { tile = 9; } else if (player.speed.x != 0) { tile = Math.floor(Date.now() / 60) % 8; } this.cx.save(); if (this.flipPlayer) { flipHorizontally(this.cx, x + width / 2); } let tileX = tile * width; this.cx.drawImage(playerSprites, tileX, 0, width, height, x, y, width, height); this.cx.restore(); }; The `drawPlayer` method is called by `drawActors` , which is responsible for drawing all the actors in the game. > CanvasDisplay.prototype.drawActors = function(actors) { for (let actor of actors) { let width = actor.size.x * scale; let height = actor.size.y * scale; let x = (actor.pos.x - this.viewport.left) * scale; let y = (actor.pos.y - this.viewport.top) * scale; if (actor.type == "player") { this.drawPlayer(actor, x, y, width, height); } else { let tileX = (actor.type == "coin" ? 2 : 1) * scale; this.cx.drawImage(otherSprites, tileX, 0, width, height, x, y, width, height); } } }; When drawing something that is not the player, we look at its type to find the offset of the correct sprite. The lava tile is found at offset 20, and the coin sprite is found at 40 (two times `scale` ). We have to subtract the viewport’s position when computing the actor’s position since (0,0) on our canvas corresponds to the top left of the viewport, not the top left of the level. We could also have used `translate` for this. Either way works. This document plugs the new display into `runGame` : > <body> <script> runGame(GAME_LEVELS, CanvasDisplay); </script> </body> ## Choosing a graphics interface So when you need to generate graphics in the browser, you can choose between plain HTML, SVG, and canvas. There is no single best approach that works in all situations. Each option has strengths and weaknesses. Plain HTML has the advantage of being simple. It also integrates well with text. Both SVG and canvas allow you to draw text, but they won’t help you position that text or wrap it when it takes up more than one line. In an HTML-based picture, it is much easier to include blocks of text. SVG can be used to produce crisp graphics that look good at any zoom level. Unlike HTML, it is designed for drawing and is thus more suitable for that purpose. Both SVG and HTML build up a data structure (the DOM) that represents your picture. This makes it possible to modify elements after they are drawn. If you need to repeatedly change a small part of a big picture in response to what the user is doing or as part of an animation, doing it in a canvas can be needlessly expensive. The DOM also allows us to register mouse event handlers on every element in the picture (even on shapes drawn with SVG). You can’t do that with canvas. But canvas’s pixel-oriented approach can be an advantage when drawing a huge number of tiny elements.
The fact that it does not build up a data structure but only repeatedly draws onto the same pixel surface gives canvas a lower cost per shape. There are also effects, such as rendering a scene one pixel at a time (for example, using a ray tracer) or postprocessing an image with JavaScript (blurring or distorting it), that can be realistically handled only by a pixel-based approach. In some cases, you may want to combine several of these techniques. For example, you might draw a graph with SVG or canvas but show textual information by positioning an HTML element on top of the picture. For nondemanding applications, it really doesn’t matter much which interface you choose. The display we built for our game in this chapter could have been implemented using any of these three graphics technologies since it does not need to draw text, handle mouse interaction, or work with an extraordinarily large number of elements. In this chapter we discussed techniques for drawing graphics in the browser, focusing on the `<canvas>` element. A canvas node represents an area in a document that our program may draw on. This drawing is done through a drawing context object, created with the `getContext` method. The 2D drawing interface allows us to fill and stroke various shapes. The context’s `fillStyle` property determines how shapes are filled. The `strokeStyle` and `lineWidth` properties control the way lines are drawn. Rectangles and pieces of text can be drawn with a single method call. The `fillRect` and `strokeRect` methods draw rectangles, and the `fillText` and `strokeText` methods draw text. To create custom shapes, we must first build up a path. Calling `beginPath` starts a new path. A number of other methods add lines and curves to the current path. For example, `lineTo` can add a straight line. When a path is finished, it can be filled with the `fill` method or stroked with the `stroke` method. Moving pixels from an image or another canvas onto our canvas is done with the `drawImage` method. By default, this method draws the whole source image, but by giving it more parameters, you can copy a specific area of the image. We used this for our game by copying individual poses of the game character out of an image that contained many such poses. Transformations allow you to draw a shape in multiple orientations. A 2D drawing context has a current transformation that can be changed with the `translate` , `scale` , and `rotate` methods. These will affect all subsequent drawing operations. A transformation state can be saved with the `save` method and restored with the `restore` method. When showing an animation on a canvas, the `clearRect` method can be used to clear part of the canvas before redrawing it. ### Shapes Write a program that draws the following shapes on a canvas:
* A trapezoid (a rectangle that is wider on one side)
* A red diamond (a rectangle rotated 45 degrees or ¼π radians)
* A zigzagging line
* A spiral made up of 100 straight line segments
* A yellow star
When drawing the last two, you may want to refer to the explanation of `Math.cos` and `Math.sin` in Chapter 14, which describes how to get coordinates on a circle using these functions. I recommend creating a function for each shape. Pass the position, and optionally other properties such as the size or the number of points, as parameters. The alternative, which is to hard-code numbers all over your code, tends to make the code needlessly hard to read and modify.
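For instance, a parameterized trapezoid function might be sketched like this (a sketch only; the specific coordinates, sizes, and the `slant` parameter are arbitrary illustrative choices, not part of the exercise):

```javascript
// Draws a trapezoid whose bottom edge is wider than its top edge by 2 * slant.
// (x, y) is the top-left corner of the bounding box.
function trapezoid(cx, x, y, width, height, slant) {
  cx.beginPath();
  cx.moveTo(x + slant, y);                      // top-left corner
  cx.lineTo(x + slant + width, y);              // top-right corner
  cx.lineTo(x + 2 * slant + width, y + height); // bottom-right corner
  cx.lineTo(x, y + height);                     // bottom-left corner
  cx.closePath();
  cx.stroke();
}
// Example use, given a 2D context cx: trapezoid(cx, 10, 10, 60, 60, 20);
```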
> <canvas width="600" height="200"></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); // Your code here. </scriptThe trapezoid (1) is easiest to draw using a path. Pick suitable center coordinates and add each of the four corners around the center. The diamond (2) can be drawn the straightforward way, with a path, or the interesting way, with a `rotate` transformation. To use rotation, you will have to apply a trick similar to what we did in the `flipHorizontally` function. Because you want to rotate around the center of your rectangle and not around the point (0,0), you must first `translate` to there, then rotate, and then translate back. Make sure you reset the transformation after drawing any shape that creates one. For the zigzag (3) it becomes impractical to write a new call to `lineTo` for each line segment. Instead, you should use a loop. You can have each iteration draw either two line segments (right and then left again) or one, in which case you must use the evenness ( `% 2` ) of the loop index to determine whether to go left or right. You’ll also need a loop for the spiral (4). If you draw a series of points, with each point moving further along a circle around the spiral’s center, you get a circle. If, during the loop, you vary the radius of the circle on which you are putting the current point and go around more than once, the result is a spiral. The star (5) depicted is built out of `quadraticCurveTo` lines. You could also draw one with straight lines. Divide a circle into eight pieces for a star with eight points, or however many pieces you want. Draw lines between these points, making them curve toward the center of the star. With `quadraticCurveTo` , you can use the center as the control point. ### The pie chart Earlier in the chapter, we saw an example program that drew a pie chart. Modify this program so that the name of each category is shown next to the slice that represents it. Try to find a pleasing-looking way to automatically position this text that would work for other data sets as well. You may assume that categories are big enough to leave ample room for their labels. You might need `Math.sin` and `Math.cos` again, which are described in Chapter 14. > <canvas width="600" height="300"></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); let total = results .reduce((sum, {count}) => sum + count, 0); let currentAngle = -0.5 * Math.PI; let centerX = 300, centerY = 150; // Add code to draw the slice labels in this loop. for (let result of results) { let sliceAngle = (result.count / total) * 2 * Math.PI; cx.beginPath(); cx.arc(centerX, centerY, 100, currentAngle, currentAngle + sliceAngle); currentAngle += sliceAngle; cx.lineTo(centerX, centerY); cx.fillStyle = result.color; cx.fill(); } </script> You will need to call `fillText` and set the context’s `textAlign` and `textBaseline` properties in such a way that the text ends up where you want it. A sensible way to position the labels would be to put the text on the line going from the center of the pie through the middle of the slice. You don’t want to put the text directly against the side of the pie but rather move the text out to the side of the pie by a given number of pixels. The angle of this line is `currentAngle + 0.` . 
The following code finds a position on this line 120 pixels from the center: > let middleAngle = currentAngle + 0.5 * sliceAngle; let textX = Math.cos(middleAngle) * 120 + centerX; let textY = Math.sin(middleAngle) * 120 + centerY; For `textBaseline` , the value `"middle"` is probably appropriate when using this approach. What to use for `textAlign` depends on which side of the circle we are on. On the left, it should be `"right"` , and on the right, it should be `"left"` , so that the text is positioned away from the pie. If you are not sure how to find out which side of the circle a given angle is on, look to the explanation of `Math.cos` in Chapter 14. The cosine of an angle tells us which x-coordinate it corresponds to, which in turn tells us exactly which side of the circle we are on. ### A bouncing ball Use the technique that we saw in Chapter 14 and Chapter 16 to draw a box with a bouncing ball in it. The ball moves at a constant speed and bounces off the box’s sides when it hits them. > <canvas width="400" height="400"></canvas> <script> let cx = document.querySelector("canvas").getContext("2d"); let lastTime = null; function frame(time) { if (lastTime != null) { updateAnimation(Math.min(100, time - lastTime) / 1000); } lastTime = time; requestAnimationFrame(frame); } requestAnimationFrame(frame); function updateAnimation(step) { // Your code here. } </script> A box is easy to draw with `strokeRect` . Define a binding that holds its size or define two bindings if your box’s width and height differ. To create a round ball, start a path and call ``` arc(x, y, radius, 0, 7) ``` , which creates an arc going from zero to more than a whole circle. Then fill the path. To model the ball’s position and speed, you can use the `Vec` class from Chapter 16 (which is available on this page). Give it a starting speed, preferably one that is not purely vertical or horizontal, and for every frame multiply that speed by the amount of time that elapsed. When the ball gets too close to a vertical wall, invert the x component in its speed. Likewise, invert the y component when it hits a horizontal wall. After finding the ball’s new position and speed, use `clearRect` to delete the scene and redraw it using the new position. ### Precomputed mirroring One unfortunate thing about transformations is that they slow down the drawing of bitmaps. The position and size of each pixel has to be transformed, and though it is possible that browsers will get cleverer about transformation in the future, they currently cause a measurable increase in the time it takes to draw a bitmap. In a game like ours, where we are drawing only a single transformed sprite, this is a nonissue. But imagine that we need to draw hundreds of characters or thousands of rotating particles from an explosion. Think of a way to allow us to draw an inverted character without loading additional image files and without having to make transformed `drawImage` calls every frame. The key to the solution is the fact that we can use a canvas element as a source image when using `drawImage` . It is possible to create an extra `<canvas>` element, without adding it to the document, and draw our inverted sprites to it, once. When drawing an actual frame, we just copy the already inverted sprites to the main canvas. Some care would be required because images do not load instantly. We do the inverted drawing only once, and if we do it before the image loads, it won’t draw anything. 
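A sketch of that idea might look like the following. It assumes the `playerSprites` image and the `flipHorizontally` helper defined earlier in the chapter; the sprite count and pixel sizes are illustrative numbers, and the extra canvas name is invented here:

```javascript
// An extra, off-screen canvas that will hold mirrored copies of the sprites.
let flippedPlayerSprites = document.createElement("canvas");
flippedPlayerSprites.width = 240;   // 10 sprites of 24 pixels each (illustrative)
flippedPlayerSprites.height = 30;

playerSprites.addEventListener("load", () => {
  let flipCx = flippedPlayerSprites.getContext("2d");
  for (let i = 0; i < 10; i++) {
    flipCx.save();
    // Mirror each sprite in place, around the center of its own 24-pixel slot,
    // so tile offsets stay the same as for the unflipped sheet.
    flipHorizontally(flipCx, i * 24 + 12);
    flipCx.drawImage(playerSprites, i * 24, 0, 24, 30,
                     i * 24, 0, 24, 30);
    flipCx.restore();
  }
});

// Drawing a left-facing player then becomes a plain, untransformed copy:
// cx.drawImage(flippedPlayerSprites, tileX, 0, width, height, x, y, width, height);
```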
A `"load"` handler on the image can be used to draw the inverted images to the extra canvas. This canvas can be used as a drawing source immediately (it’ll simply be blank until we draw the character onto it). If you have knowledge, let others light their candles at it. A skill-sharing meeting is an event where people with a shared interest come together and give small, informal presentations about things they know. At a gardening skill-sharing meeting, someone might explain how to cultivate celery. Or in a programming skill-sharing group, you could drop by and tell people about Node.js. Such meetups—also often called users’ groups when they are about computers—are a great way to broaden your horizon, learn about new developments, or simply meet people with similar interests. Many larger cities have JavaScript meetups. They are typically free to attend, and I’ve found the ones I’ve visited to be friendly and welcoming. In this final project chapter, our goal is to set up a website for managing talks given at a skill-sharing meeting. Imagine a small group of people meeting up regularly in the office of one of the members to talk about unicycling. The previous organizer of the meetings moved to another town, and nobody stepped forward to take over this task. We want a system that will let the participants propose and discuss talks among themselves, without a central organizer. Just like in the previous chapter, some of the code in this chapter is written for Node.js, and running it directly in the HTML page that you are looking at is unlikely to work. The full code for the project can be downloaded from https://eloquentjavascript.net/code/skillsharing.zip. ## Design There is a server part to this project, written for Node.js, and a client part, written for the browser. The server stores the system’s data and provides it to the client. It also serves the files that implement the client-side system. The server keeps the list of talks proposed for the next meeting, and the client shows this list. Each talk has a presenter name, a title, a summary, and an array of comments associated with it. The client allows users to propose new talks (adding them to the list), delete talks, and comment on existing talks. Whenever the user makes such a change, the client makes an HTTP request to tell the server about it. The application will be set up to show a live view of the current proposed talks and their comments. Whenever someone, somewhere, submits a new talk or adds a comment, all people who have the page open in their browsers should immediately see the change. This poses a bit of a challenge—there is no way for a web server to open a connection to a client, nor is there a good way to know which clients are currently looking at a given website. A common solution to this problem is called long polling, which happens to be one of the motivations for Node’s design. ## Long polling To be able to immediately notify a client that something changed, we need a connection to that client. Since web browsers do not traditionally accept connections and clients are often behind routers that would block such connections anyway, having the server initiate this connection is not practical. We can arrange for the client to open the connection and keep it around so that the server can use it to send information when it needs to do so. But an HTTP request allows only a simple flow of information: the client sends a request, the server comes back with a single response, and that is it. 
There is a technology called WebSockets, supported by modern browsers, that makes it possible to open connections for arbitrary data exchange. But using them properly is somewhat tricky. In this chapter, we use a simpler technique—long polling—where clients continuously ask the server for new information using regular HTTP requests, and the server stalls its answer when it has nothing new to report. As long as the client makes sure it constantly has a polling request open, it will receive information from the server quickly after it becomes available. For example, if Fatma has our skill-sharing application open in her browser, that browser will have made a request for updates and will be waiting for a response to that request. When Iman submits a talk on Extreme Downhill Unicycling, the server will notice that Fatma is waiting for updates and send a response containing the new talk to her pending request. Fatma’s browser will receive the data and update the screen to show the talk. To prevent connections from timing out (being aborted because of a lack of activity), long polling techniques usually set a maximum time for each request, after which the server will respond anyway, even though it has nothing to report, after which the client will start a new request. Periodically restarting the request also makes the technique more robust, allowing clients to recover from temporary connection failures or server problems. A busy server that is using long polling may have thousands of waiting requests, and thus TCP connections, open. Node, which makes it easy to manage many connections without creating a separate thread of control for each one, is a good fit for such a system. ## HTTP interface Before we start designing either the server or the client, let’s think about the point where they touch: the HTTP interface over which they communicate. We will use JSON as the format of our request and response body. Like in the file server from Chapter 20, we’ll try to make good use of HTTP methods and headers. The interface is centered around the `/talks` path. Paths that do not start with `/talks` will be used for serving static files—the HTML and JavaScript code for the client-side system. A `GET` request to `/talks` returns a JSON document like this: > [{"title": "Unituning", "presenter": "Jamal", "summary": "Modifying your cycle for extra style", "comments": []}] Creating a new talk is done by making a `PUT` request to a URL like `/talks/Unituning` , where the part after the second slash is the title of the talk. The `PUT` request’s body should contain a JSON object that has `presenter` and `summary` properties. Since talk titles may contain spaces and other characters that may not appear normally in a URL, title strings must be encoded with the `encodeURIComponent` function when building up such a URL. > console.log("/talks/" + encodeURIComponent("How to Idle")); // → /talks/How%20to%20Idle A request to create a talk about idling might look something like this: > PUT /talks/How%20to%20Idle HTTP/1.1 Content-Type: application/json Content-Length: 92 {"presenter": "Maureen", "summary": "Standing still on a unicycle"} Such URLs also support `GET` requests to retrieve the JSON representation of a talk and `DELETE` requests to delete a talk. Adding a comment to a talk is done with a `POST` request to a URL like `/talks/Unituning/comments` , with a JSON body that has `author` and `message` properties.
> POST /talks/Unituning/comments HTTP/1.1 Content-Type: application/json Content-Length: 72 {"author": "Iman", "message": "Will you talk about raising a cycle?"} To support long polling, `GET` requests to `/talks` may include extra headers that inform the server to delay the response if no new information is available. We’ll use a pair of headers normally intended to manage caching: `ETag` and `If-None-Match` . Servers may include an `ETag` (“entity tag”) header in a response. Its value is a string that identifies the current version of the resource. Clients, when they later request that resource again, may make a conditional request by including an `If-None-Match` header whose value holds that same string. If the resource hasn’t changed, the server will respond with status code 304, which means “not modified”, telling the client that its cached version is still current. When the tag does not match, the server responds as normal. We need something like this, where the client can tell the server which version of the list of talks it has, and the server responds only when that list has changed. But instead of immediately returning a 304 response, the server should stall the response and return only when something new is available or a given amount of time has elapsed. To distinguish long polling requests from normal conditional requests, we give them another header, `Prefer: wait=90` , which tells the server that the client is willing to wait up to 90 seconds for the response. The server will keep a version number that it updates every time the talks change and will use that as the `ETag` value. Clients can make requests like this to be notified when the talks change: > GET /talks HTTP/1.1 If-None-Match: "4" Prefer: wait=90 (time passes) HTTP/1.1 200 OK Content-Type: application/json ETag: "5" Content-Length: 295 [....] The protocol described here does not do any access control. Everybody can comment, modify talks, and even delete them. (Since the Internet is full of hooligans, putting such a system online without further protection probably wouldn’t end well.) ## The server Let’s start by building the server-side part of the program. The code in this section runs on Node.js. ### Routing Our server will use `createServer` to start an HTTP server. In the function that handles a new request, we must distinguish between the various kinds of requests (as determined by the method and the path) that we support. This can be done with a long chain of `if` statements, but there is a nicer way. A router is a component that helps dispatch a request to the function that can handle it. You can tell the router, for example, that `PUT` requests with a path that matches the regular expression `/^\/talks\/([^\/]+)$/` ( `/talks/` followed by a talk title) can be handled by a given function. In addition, it can help extract the meaningful parts of the path (in this case the talk title), wrapped in parentheses in the regular expression, and pass them to the handler function. There are a number of good router packages on NPM, but here we’ll write one ourselves to illustrate the principle.
This is `router.js` , which we will later `require` from our server module: > const {parse} = require("url"); module.exports = class Router { constructor() { this.routes = []; } add(method, url, handler) { this.routes.push({method, url, handler}); } resolve(context, request) { let path = parse(request.url).pathname; for (let {method, url, handler} of this.routes) { let match = url.exec(path); if (!match || request.method != method) continue; let urlParts = match.slice(1).map(decodeURIComponent); return handler(context, urlParts, request); } return null; } }; The module exports the `Router` class. A router object allows new handlers to be registered with the `add` method and can resolve requests with its `resolve` method. The latter will return a response when a handler was found, and `null` otherwise. It tries the routes one at a time (in the order in which they were defined) until a matching one is found. The handler functions are called with the `context` value (which will be the server instance in our case), match strings for any groups they defined in their regular expression, and the request object. The strings have to be URL-decoded since the raw URL may contain `%20` -style codes. ### Serving files When a request matches none of the request types defined in our router, the server must interpret it as a request for a file in the `public` directory. It would be possible to use the file server defined in Chapter 20 to serve such files, but we neither need nor want to support `PUT` and `DELETE` requests on files, and we would like to have advanced features such as support for caching. So let’s use a solid, well-tested static file server from NPM instead. I opted for `ecstatic` . This isn’t the only such server on NPM, but it works well and fits our purposes. The `ecstatic` package exports a function that can be called with a configuration object to produce a request handler function. We use the `root` option to tell the server where it should look for files. The handler function accepts `request` and `response` parameters and can be passed directly to `createServer` to create a server that serves only files. We want to first check for requests that we should handle specially, though, so we wrap it in another function. > const {createServer} = require("http"); const Router = require("./router"); const ecstatic = require("ecstatic"); const router = new Router(); const defaultHeaders = {"Content-Type": "text/plain"}; class SkillShareServer { constructor(talks) { this.talks = talks; this.version = 0; this.waiting = []; let fileServer = ecstatic({root: "./public"}); this.server = createServer((request, response) => { let resolved = router.resolve(this, request); if (resolved) { resolved.catch(error => { if (error.status != null) return error; return {body: String(error), status: 500}; }).then(({body, status = 200, headers = defaultHeaders}) => { response.writeHead(status, headers); response.end(body); }); } else { fileServer(request, response); } }); } start(port) { this.server.listen(port); } stop() { this.server.close(); } } This uses a similar convention as the file server from the previous chapter for responses—handlers return promises that resolve to objects describing the response. It wraps the server in an object that also holds its state. ### Talks as resources The talks that have been proposed are stored in the `talks` property of the server, an object whose property names are the talk titles. 
These will be exposed as HTTP resources under `/talks/[title]` , so we need to add handlers to our router that implement the various methods that clients can use to work with them. The handler for requests that `GET` a single talk must look up the talk and respond either with the talk’s JSON data or with a 404 error response. > const talkPath = /^\/talks\/([^\/]+)$/; router.add("GET", talkPath, async (server, title) => { if (title in server.talks) { return {body: JSON.stringify(server.talks[title]), headers: {"Content-Type": "application/json"}}; } else { return {status: 404, body: `No talk '${title}' found`}; } }); Deleting a talk is done by removing it from the `talks` object. > router.add("DELETE", talkPath, async (server, title) => { if (title in server.talks) { delete server.talks[title]; server.updated(); } return {status: 204}; }); The `updated` method, which we will define later, notifies waiting long polling requests about the change. To retrieve the content of a request body, we define a function called `readStream` , which reads all content from a readable stream and returns a promise that resolves to a string. > function readStream(stream) { return new Promise((resolve, reject) => { let data = ""; stream.on("error", reject); stream.on("data", chunk => data += chunk.toString()); stream.on("end", () => resolve(data)); }); } One handler that needs to read request bodies is the `PUT` handler, which is used to create new talks. It has to check whether the data it was given has `presenter` and `summary` properties, which are strings. Any data coming from outside the system might be nonsense, and we don’t want to corrupt our internal data model or crash when bad requests come in. If the data looks valid, the handler stores an object that represents the new talk in the `talks` object, possibly overwriting an existing talk with this title, and again calls `updated` . > router.add("PUT", talkPath, async (server, title, request) => { let requestBody = await readStream(request); let talk; try { talk = JSON.parse(requestBody); } catch (_) { return {status: 400, body: "Invalid JSON"}; } if (!talk || typeof talk.presenter != "string" || typeof talk.summary != "string") { return {status: 400, body: "Bad talk data"}; } server.talks[title] = {title, presenter: talk.presenter, summary: talk.summary, comments: []}; server.updated(); return {status: 204}; }); Adding a comment to a talk works similarly. We use `readStream` to get the content of the request, validate the resulting data, and store it as a comment when it looks valid. > router.add("POST", /^\/talks\/([^\/]+)\/comments$/, async (server, title, request) => { let requestBody = await readStream(request); let comment; try { comment = JSON.parse(requestBody); } catch (_) { return {status: 400, body: "Invalid JSON"}; } if (!comment || typeof comment.author != "string" || typeof comment.message != "string") { return {status: 400, body: "Bad comment data"}; } else if (title in server.talks) { server.talks[title].comments.push(comment); server.updated(); return {status: 204}; } else { return {status: 404, body: `No talk '${title}' found`}; } }); Trying to add a comment to a nonexistent talk returns a 404 error. ### Long polling support The most interesting aspect of the server is the part that handles long polling. When a `GET` request comes in for `/talks` , it may be either a regular request or a long polling request. 
There will be multiple places in which we have to send an array of talks to the client, so we first define a helper method that builds up such an array and includes an `ETag` header in the response. > SkillShareServer.prototype.talkResponse = function() { let talks = []; for (let title of Object.keys(this.talks)) { talks.push(this.talks[title]); } return { body: JSON.stringify(talks), headers: {"Content-Type": "application/json", "ETag": `"${this.version}"`} }; }; The handler itself needs to look at the request headers to see whether `If-None-Match` and `Prefer` headers are present. Node stores headers, whose names are specified to be case insensitive, under their lowercase names. > router.add("GET", /^\/talks$/, async (server, request) => { let tag = /"(.*)"/.exec(request.headers["if-none-match"]); let wait = /\bwait=(\d+)/.exec(request.headers["prefer"]); if (!tag || tag[1] != server.version) { return server.talkResponse(); } else if (!wait) { return {status: 304}; } else { return server.waitForChanges(Number(wait[1])); } }); If no tag was given or a tag was given that doesn’t match the server’s current version, the handler responds with the list of talks. If the request is conditional and the talks did not change, we consult the `Prefer` header to see whether we should delay the response or respond right away. Callback functions for delayed requests are stored in the server’s `waiting` array so that they can be notified when something happens. The `waitForChanges` method also immediately sets a timer to respond with a 304 status when the request has waited long enough. > SkillShareServer.prototype.waitForChanges = function(time) { return new Promise(resolve => { this.waiting.push(resolve); setTimeout(() => { if (!this.waiting.includes(resolve)) return; this.waiting = this.waiting.filter(r => r != resolve); resolve({status: 304}); }, time * 1000); }); }; Registering a change with `updated` increases the `version` property and wakes up all waiting requests. > SkillShareServer.prototype.updated = function() { this.version++; let response = this.talkResponse(); this.waiting.forEach(resolve => resolve(response)); this.waiting = []; }; That concludes the server code. If we create an instance of `SkillShareServer` and start it on port 8000, the resulting HTTP server serves files from the `public` subdirectory alongside a talk-managing interface under the `/talks` URL. > new SkillShareServer(Object.create(null)).start(8000); ## The client The client-side part of the skill-sharing website consists of three files: a tiny HTML page, a style sheet, and a JavaScript file. ### HTML It is a widely used convention for web servers to try to serve a file named `index.html` when a request is made directly to a path that corresponds to a directory. The file server module we use, `ecstatic` , supports this convention. When a request is made to the path `/` , the server looks for the file `./public/index.html` ( `./public` being the root we gave it) and returns that file if found. Thus, if we want a page to show up when a browser is pointed at our server, we should put it in `public/index.html` . This is our index file: > <meta charset="utf-8"> <title>Skill Sharing</title> <link rel="stylesheet" href="skillsharing.css"> <h1>Skill Sharing</h1> <script src="skillsharing_client.js"></script> It defines the document title and includes a style sheet, which defines a few styles to, among other things, make sure there is some space between talks.
At the bottom, it adds a heading at the top of the page and loads the script that contains the client-side application. ### Actions The application state consists of the list of talks and the name of the user, and we’ll store it in a `{talks, user}` object. We don’t allow the user interface to directly manipulate the state or send off HTTP requests. Rather, it may emit actions that describe what the user is trying to do. The `handleAction` function takes such an action and makes it happen. Because our state updates are so simple, state changes are handled in the same function. > function handleAction(state, action) { if (action.type == "setUser") { localStorage.setItem("userName", action.user); return Object.assign({}, state, {user: action.user}); } else if (action.type == "setTalks") { return Object.assign({}, state, {talks: action.talks}); } else if (action.type == "newTalk") { fetchOK(talkURL(action.title), { method: "PUT", headers: {"Content-Type": "application/json"}, body: JSON.stringify({ presenter: state.user, summary: action.summary }) }).catch(reportError); } else if (action.type == "deleteTalk") { fetchOK(talkURL(action.talk), {method: "DELETE"}) .catch(reportError); } else if (action.type == "newComment") { fetchOK(talkURL(action.talk) + "/comments", { method: "POST", headers: {"Content-Type": "application/json"}, body: JSON.stringify({ author: state.user, message: action.message }) }).catch(reportError); } return state; } We’ll store the user’s name in `localStorage` so that it can be restored when the page is loaded. The actions that need to involve the server make network requests, using `fetch` , to the HTTP interface described earlier. We use a wrapper function, `fetchOK` , which makes sure the returned promise is rejected when the server returns an error code. > function fetchOK(url, options) { return fetch(url, options).then(response => { if (response.status < 400) return response; else throw new Error(response.statusText); }); } This helper function is used to build up a URL for a talk with a given title. > function talkURL(title) { return "talks/" + encodeURIComponent(title); } When the request fails, we don’t want to have our page just sit there, doing nothing without explanation. So we define a function called `reportError` , which at least shows the user a dialog that tells them something went wrong. > function reportError(error) { alert(String(error)); } ### Rendering components We’ll use an approach similar to the one we saw in Chapter 19, splitting the application into components. But since some of the components either never need to update or are always fully redrawn when updated, we’ll define those not as classes but as functions that directly return a DOM node. For example, here is a component that shows the field where the user can enter their name: > function renderUserField(name, dispatch) { return elt("label", {}, "Your name: ", elt("input", { type: "text", value: name, onchange(event) { dispatch({type: "setUser", user: event.target.value}); } })); } The `elt` function used to construct DOM elements is the one we used in Chapter 19. A similar function is used to render talks, which include a list of comments and a form for adding a new comment. 
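For reference, a minimal version of such an `elt` helper, consistent with the way it is called in this chapter, could look like this (a sketch only; the actual definition lives in the Chapter 19 code):

```javascript
// Build a DOM element of the given type, assign the given properties,
// and append the children, wrapping plain strings in text nodes.
function elt(type, props, ...children) {
  let dom = document.createElement(type);
  if (props) Object.assign(dom, props);
  for (let child of children) {
    if (typeof child != "string") dom.appendChild(child);
    else dom.appendChild(document.createTextNode(child));
  }
  return dom;
}
```

The talk renderer itself looks like this: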
> function renderTalk(talk, dispatch) { return elt( "section", {className: "talk"}, elt("h2", null, talk.title, " ", elt("button", { type: "button", onclick() { dispatch({type: "deleteTalk", talk: talk.title}); } }, "Delete")), elt("div", null, "by ", elt("strong", null, talk.presenter)), elt("p", null, talk.summary), ...talk.comments.map(renderComment), elt("form", { onsubmit(event) { event.preventDefault(); let form = event.target; dispatch({type: "newComment", talk: talk.title, message: form.elements.comment.value}); form.reset(); } }, elt("input", {type: "text", name: "comment"}), " ", elt("button", {type: "submit"}, "Add comment"))); } The `"submit"` event handler calls `form.reset` to clear the form’s content after creating a `"newComment"` action. When creating moderately complex pieces of DOM, this style of programming starts to look rather messy. There’s a widely used (non-standard) JavaScript extension called JSX that lets you write HTML directly in your scripts, which can make such code prettier (depending on what you consider pretty). Before you can actually run such code, you have to run a program on your script to convert the pseudo-HTML into JavaScript function calls much like the ones we use here. Comments are simpler to render. > function renderComment(comment) { return elt("p", {className: "comment"}, elt("strong", null, comment.author), ": ", comment.message); } Finally, the form that the user can use to create a new talk is rendered like this: > function renderTalkForm(dispatch) { let title = elt("input", {type: "text"}); let summary = elt("input", {type: "text"}); return elt("form", { onsubmit(event) { event.preventDefault(); dispatch({type: "newTalk", title: title.value, summary: summary.value}); event.target.reset(); } }, elt("h3", null, "Submit a Talk"), elt("label", null, "Title: ", title), elt("label", null, "Summary: ", summary), elt("button", {type: "submit"}, "Submit")); } ### Polling To start the app we need the current list of talks. Since the initial load is closely related to the long polling process—the `ETag` from the load must be used when polling—we’ll write a function that keeps polling the server for `/talks` and calls a callback function when a new set of talks is available. > async function pollTalks(update) { let tag = undefined; for (;;) { let response; try { response = await fetchOK("/talks", { headers: tag && {"If-None-Match": tag, "Prefer": "wait=90"} }); } catch (e) { console.log("Request failed: " + e); await new Promise(resolve => setTimeout(resolve, 500)); continue; } if (response.status == 304) continue; tag = response.headers.get("ETag"); update(await response.json()); } } This is an `async` function so that looping and waiting for the request is easier. It runs an infinite loop that, on each iteration, retrieves the list of talks—either normally or, if this isn’t the first request, with the headers included that make it a long polling request. When a request fails, the function waits a moment and then tries again. This way, if your network connection goes away for a while and then comes back, the application can recover and continue updating. The promise resolved via `setTimeout` is a way to force the `async` function to wait. When the server gives back a 304 response, that means a long polling request timed out, so the function should just immediately start the next request. If the response is a normal 200 response, its body is read as JSON and passed to the callback, and its `ETag` header value is stored for the next iteration.
### The application The following component ties the whole user interface together: > class SkillShareApp { constructor(state, dispatch) { this.dispatch = dispatch; this.talkDOM = elt("div", {className: "talks"}); this.dom = elt("div", null, renderUserField(state.user, dispatch), this.talkDOM, renderTalkForm(dispatch)); this.syncState(state); } syncState(state) { if (state.talks != this.talks) { this.talkDOM.textContent = ""; for (let talk of state.talks) { this.talkDOM.appendChild( renderTalk(talk, this.dispatch)); } this.talks = state.talks; } } } When the talks change, this component redraws all of them. This is simple but also wasteful. We’ll get back to that in the exercises. We can start the application like this: > function runApp() { let user = localStorage.getItem("userName") || "Anon"; let state, app; function dispatch(action) { state = handleAction(state, action); app.syncState(state); } pollTalks(talks => { if (!app) { state = {user, talks}; app = new SkillShareApp(state, dispatch); document.body.appendChild(app.dom); } else { dispatch({type: "setTalks", talks}); } }).catch(reportError); } runApp(); If you run the server and open two browser windows for http://localhost:8000 next to each other, you can see that the actions you perform in one window are immediately visible in the other. The following exercises will involve modifying the system defined in this chapter. To work on them, make sure you download the code first (https://eloquentjavascript.net/code/skillsharing.zip), have Node installed https://nodejs.org, and have installed the project’s dependency with `npm install` . ### Disk persistence The skill-sharing server keeps its data purely in memory. This means that when it crashes or is restarted for any reason, all talks and comments are lost. Extend the server so that it stores the talk data to disk and automatically reloads the data when it is restarted. Do not worry about efficiency—do the simplest thing that works. The simplest solution I can come up with is to encode the whole `talks` object as JSON and dump it to a file with `writeFile` . There is already a method ( `updated` ) that is called every time the server’s data changes. It can be extended to write the new data to disk. Pick a filename, for example `./talks.json` . When the server starts, it can try to read that file with `readFile` , and if that succeeds, the server can use the file’s contents as its starting data. Beware, though. The `talks` object started as a prototype-less object so that the `in` operator could reliably be used. `JSON.parse` will return regular objects with `Object.prototype` as their prototype. If you use JSON as your file format, you’ll have to copy the properties of the object returned by `JSON.parse` into a new, prototype-less object. ### Comment field resets The wholesale redrawing of talks works pretty well because you usually can’t tell the difference between a DOM node and its identical replacement. But there are exceptions. If you start typing something in the comment field for a talk in one browser window and then, in another, add a comment to that talk, the field in the first window will be redrawn, removing both its content and its focus. In a heated discussion, where multiple people are adding comments at the same time, this would be annoying. Can you come up with a way to solve it? The best way to do this is probably to make talks component objects, with a `syncState` method, so that they can be updated to show a modified version of the talk. 
During normal operation, the only way a talk can be changed is by adding more comments, so the `syncState` method can be relatively simple. The difficult part is that, when a changed list of talks comes in, we have to reconcile the existing list of DOM components with the talks on the new list—deleting components whose talk was deleted and updating components whose talk changed. To do this, it might be helpful to keep a data structure that stores the talk components under the talk titles so that you can easily figure out whether a component exists for a given talk. You can then loop over the new array of talks, and for each of them, either synchronize an existing component or create a new one. To delete components for deleted talks, you’ll have to also loop over the components and check whether the corresponding talks still exist.
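A rough sketch of that structure follows. The `Talk` class and the `talkMap` property are names invented here for illustration, the talk header and comment form are elided, and `elt` and `renderComment` are the helpers from earlier in the chapter:

```javascript
class Talk {
  constructor(talk, dispatch) {
    this.commentDOM = elt("div");             // holds the rendered comments
    this.dom = elt("section", {className: "talk"},
                   elt("h2", null, talk.title),
                   this.commentDOM);          // header details and form omitted
    this.syncState(talk);
  }
  syncState(talk) {
    this.talk = talk;
    this.commentDOM.textContent = "";         // only the comments are redrawn
    for (let comment of talk.comments) {
      this.commentDOM.appendChild(renderComment(comment));
    }
  }
}

SkillShareApp.prototype.syncState = function(state) {
  if (state.talks == this.talks) return;
  this.talks = state.talks;
  this.talkMap = this.talkMap || Object.create(null);
  for (let talk of state.talks) {
    let component = this.talkMap[talk.title];
    if (component) {
      component.syncState(talk);              // update an existing component
      // (a fuller solution would also recreate the component when the
      //  presenter or summary changed)
    } else {
      component = new Talk(talk, this.dispatch);
      this.talkMap[talk.title] = component;
      this.talkDOM.appendChild(component.dom);
    }
  }
  for (let title of Object.keys(this.talkMap)) {
    if (!state.talks.some(talk => talk.title == title)) {
      this.talkMap[title].dom.remove();       // drop components for deleted talks
      delete this.talkMap[title];
    }
  }
};
```

Because an existing comment field lives inside a component that is merely updated rather than replaced, its content and focus survive when other people add comments.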
Package ‘abind’ October 12, 2022 Version 1.4-5 Date 2016-06-19 Title Combine Multidimensional Arrays Author <NAME> <<EMAIL>> and <NAME> Maintainer <NAME> <<EMAIL>> Description Combine multidimensional arrays into a single array. This is a generalization of 'cbind' and 'rbind'. Works with vectors, matrices, and higher-dimensional arrays. Also provides functions 'adrop', 'asub', and 'afill' for manipulating, extracting and replacing data in arrays. Depends R (>= 1.5.0) Imports methods, utils License LGPL (>= 2) NeedsCompilation no Repository CRAN Date/Publication 2016-07-21 19:18:05 R topics documented: abind, acorn, adrop, afill, asub abind Combine multi-dimensional arrays Description Combine multi-dimensional arrays. This is a generalization of cbind and rbind. Takes a sequence of vectors, matrices, or arrays and produces a single array of the same or higher dimension. Usage abind(..., along=N, rev.along=NULL, new.names=NULL, force.array=TRUE, make.names=use.anon.names, use.anon.names=FALSE, use.first.dimnames=FALSE, hier.names=FALSE, use.dnns=FALSE) Arguments ... Any number of vectors, matrices, arrays, or data frames. The dimensions of all the arrays must match, except on one dimension (specified by along=). If these arguments are named, the name will be used for the name of the dimension along which the arrays are joined. Vectors are treated as having a dim attribute of length one. Alternatively, there can be one (and only one) list argument supplied, whose components are the objects to be bound together. Names of the list components are treated in the same way as argument names. along (optional) The dimension along which to bind the arrays. The default is the last dimension, i.e., the maximum length of the dim attribute of the supplied arrays. along= can take any non-negative value up to the minimum length of the dim attribute of supplied arrays plus one. When along= has a fractional value, a value less than 1, or a value greater than N (N is the maximum of the lengths of the dim attribute of the objects to be bound together), a new dimension is created in the result. In these cases, the dimensions of all arguments must be identical. rev.along (optional) Alternate way to specify the dimension along which to bind the arrays: along = N + 1 - rev.along. This is provided mainly to allow easy specification of along = N + 1 (by supplying rev.along=0). If both along and rev.along are supplied, the supplied value of along is ignored. new.names (optional) If new.names is a list, it is the first choice for the dimnames attribute of the result. It should have the same structure as a dimnames attribute. If the names for a particular dimension are NULL, names for this dimension are constructed in other ways. If new.names is a character vector, it is used for dimension names in the same way as argument names are used. Zero length ("") names are ignored. force.array (optional) If FALSE, rbind or cbind are called when possible, i.e., when the arguments are all vectors, and along is not 1, or when the arguments are vectors or matrices or data frames and along is 1 or 2. If rbind or cbind are used, they will preserve the data.frame classes (or any other class that r/cbind preserve). Otherwise, abind will convert objects to class array. Thus, to guarantee that an array object is returned, supply the argument force.array=TRUE. Note that the use of rbind or cbind introduces some subtle changes in the way default dimension names are constructed: see the examples below.
make.names (optional) If TRUE, the last resort for dimnames for the along dimension will be the deparsed versions of anonymous arguments. This can result in cumbersome names when arguments are expressions. The default is FALSE. use.anon.names (optional) use.anon.names is a deprecated synonym for make.names. use.first.dimnames (optional) When dimension names are present on more than one argument, should dimension names for the result be taken from the first available (the default is to take them from the last available, which is the same behavior as rbind and cbind). hier.names (optional) If TRUE, dimension names on the concatenated dimension will be composed of the argument name and the dimension names of the objects being bound. If a single list argument is supplied, then the names of the components serve as the argument names. hier.names can also have values "before" or "after"; these determine the order in which the argument name and the dimension name are put together (TRUE has the same effect as "before"). use.dnns (default FALSE) Use names on dimensions, e.g., so that names(dimnames(x)) is non-empty. When there are multiple possible sources for names of dimnames, the value of use.first.dimnames determines the result. Details The dimensions of the supplied vectors or arrays do not need to be identical, e.g., arguments can be a mixture of vectors and matrices. abind coerces arguments by the addition of one dimension in order to make them consistent with other arguments and along=. The extra dimension is added in the place specified by along=. The default action of abind is to concatenate on the last dimension, rather than increase the number of dimensions. For example, the result of calling abind with vectors is a longer vector (see first example below). This differs from the action of rbind and cbind which is to return a matrix when called with vectors. abind can be made to behave like cbind on vectors by specifying along=2, and like rbind by specifying along=0. The dimnames of the returned object are pieced together from the dimnames of the arguments, and the names of the arguments. Names for each dimension are searched for in the following order: new.names, argument name, dimnames (or names) attribute of last argument, dimnames (or names) attribute of second last argument, etc. (Supplying the argument use.first.dimnames=TRUE changes this to cause abind to use dimnames or names from the first argument first. The default behavior is the same as for rbind and cbind: use dimnames from later arguments.) If some names are supplied for the along dimension (either as argument names or dimnames in arguments), names are constructed for anonymous arguments unless use.anon.names=FALSE. Value An array with a dim attribute calculated as follows. Let rMin=min(sapply(list(...), function(x) length(dim(x)))) and rMax=max(sapply(list(...), function(x) length(dim(x)))) (where the length of the dimensions of a vector are taken to be 1). Then rMax should be equal to or one greater than rMin. If along refers to an existing dimension, then the length of the dim attribute of the result is rMax. If along does not refer to an existing dimension, then rMax should equal rMin and the length of the dim attribute of the result will be rMax+1. rbind or cbind are called to compute the result if (a) force.array=FALSE; and (b) the result will be a two-dimensional object. Note It would be nice to make abind() an S3 generic, but S3 generics cannot dispatch off anonymous arguments.
The ability of abind() to accept a single list argument removes much of the need for constructs like do.call("abind", list.of.arrays). Instead, just do abind(list.of.arrays). The direct construct is preferred because the do.call() construct can sometimes consume more memory during evaluation. Author(s) <NAME> <<EMAIL>> and <NAME> Examples # Five different ways of binding together two matrices x <- matrix(1:12,3,4) y <- x+100 dim(abind(x,y,along=0)) # binds on new dimension before first dim(abind(x,y,along=1)) # binds on first dimension dim(abind(x,y,along=1.5)) dim(abind(x,y,along=2)) dim(abind(x,y,along=3)) dim(abind(x,y,rev.along=1)) # binds on last dimension dim(abind(x,y,rev.along=0)) # binds on new dimension after last # Unlike cbind or rbind in that the default is to bind # along the last dimension of the inputs, which for vectors # means the result is a vector (because a vector is # treated as an array with length(dim(x))==1). abind(x=1:4,y=5:8) # Like cbind abind(x=1:4,y=5:8,along=2) abind(x=1:4,matrix(5:20,nrow=4),along=2) abind(1:4,matrix(5:20,nrow=4),along=2) # Like rbind abind(x=1:4,matrix(5:20,nrow=4),along=1) abind(1:4,matrix(5:20,nrow=4),along=1) # Create a 3-d array out of two matrices abind(x=matrix(1:16,nrow=4),y=matrix(17:32,nrow=4),along=3) # Use of hier.names abind(x=cbind(a=1:3,b=4:6), y=cbind(a=7:9,b=10:12), hier.names=TRUE) # Use a list argument abind(list(x=x, y=x), along=3) # Use lapply(..., get) to get the objects an <- c('x','y') names(an) <- an abind(lapply(an, get), along=3) acorn Return a corner of an array object (like head) Description Return a small corner of an array object, like head() or tail() but taking only a few slices on each dimension. Usage acorn(x, n=6, m=5, r=1, ...) Arguments x An array (including a matrix or a data frame) n,m,r Numbers of items on each dimension. A negative number is interpreted as this many items at the end (like tail). ... Further arguments specifying numbers of slices to return on each dimension. Details Like head() for multidimensional arrays, with two differences: (1) returns just a few items on each dimension, and (2) negative numbers are treated like tail(). Value An object like x with fewer elements on each dimension. Author(s) <NAME> <<EMAIL>> Examples x <- array(1:24,dim=c(4,3,2),dimnames=rev(list(letters[1:2],LETTERS[1:3],letters[23:26]))) acorn(x) acorn(x, 3) acorn(x, -3) acorn(x, 3, -2) adrop Drop dimensions of an array object Description Drop degenerate dimensions of an array object. Offers less automaticity and more control than the base drop() function. adrop() is an S3 generic, with one method, adrop.default, supplied in the abind package. Usage adrop(x, drop = TRUE, named.vector = TRUE, one.d.array = FALSE, ...) Arguments x An array (including a matrix) drop A logical or numeric vector describing exactly which dimensions to drop. It is intended that this argument be supplied always. The default is very rarely useful (drop=TRUE means drop the first dimension of a 1-d array). named.vector Optional, defaults to TRUE. Controls whether a vector result has names derived from the dimnames of x. one.d.array Optional, defaults to FALSE. If TRUE, a one-dimensional array result will be an object with a dim attribute of length 1, and possibly a dimnames attribute. If FALSE, a one-dimensional result will be a vector object (named if named.vector==TRUE). ... There are no additional arguments allowed for adrop.default but other methods may use them.
Details Dimensions can only be dropped if their extent is one, i.e., dimension i of array x can be dropped only if dim(x)[i]==1. It is an error to request adrop to drop a dimension whose extent is not 1. A 1-d array can be converted to a named vector by supplying drop=NULL (which means drop no dimensions, and return a 1-d array result as a named vector). Value If x is an object with a dim attribute (e.g., a matrix or array), then adrop returns an object like x, but with the requested extents of length one removed. Any accompanying dimnames attribute is adjusted and returned with x. Author(s) <NAME> <<EMAIL>> See Also abind Examples x <- array(1:24,dim=c(2,3,4),dimnames=list(letters[1:2],LETTERS[1:3],letters[23:26])) adrop(x[1,,,drop=FALSE],drop=1) adrop(x[,1,,drop=FALSE],drop=2) adrop(x[,,1,drop=FALSE],drop=3) adrop(x[1,1,1,drop=FALSE],drop=1) adrop(x[1,1,1,drop=FALSE],drop=2) adrop(x[1,1,1,drop=FALSE],drop=3) adrop(x[1,1,1,drop=FALSE],drop=1:2) adrop(x[1,1,1,drop=FALSE],drop=1:2,one.d=TRUE) adrop(x[1,1,1,drop=FALSE],drop=1:2,named=FALSE) dim(adrop(x[1,1,1,drop=FALSE],drop=1:2,one.d=TRUE)) dimnames(adrop(x[1,1,1,drop=FALSE],drop=1:2,one.d=TRUE)) names(adrop(x[1,1,1,drop=FALSE],drop=1:2,one.d=TRUE)) dim(adrop(x[1,1,1,drop=FALSE],drop=1:2)) dimnames(adrop(x[1,1,1,drop=FALSE],drop=1:2)) names(adrop(x[1,1,1,drop=FALSE],drop=1:2)) afill Fill an array with subarrays Description Fill an array with subarrays. afill uses the dimension names in the value in determining how to fill the LHS, unlike standard array assignment, which ignores dimension names in the value. afill() is an S3 generic, with one method, afill.default, supplied in the abind package. Usage afill(x, ..., excess.ok = FALSE, local = TRUE) <- value Arguments x An array to be changed ... Arguments that specify indices for x. If length(dim(value)) < length(dim(x)), then exactly length(dim(x)) anonymous arguments must be supplied, with empty ones corresponding to dimensions of x that are supplied in value. excess.ok If there are elements of the dimensions of value that are not found in the corresponding dimensions of x, they will be discarded if excess.ok=TRUE. local Should the assignment be done on a copy of x, and the result returned (normal behavior). If local=FALSE the assignment will be done directly on the actual argument supplied as x, which can be more space efficient. value A vector or array, with dimension names that match some dimensions of x Details The simplest use of afill is to fill a sub-matrix. Here is an example of this usage: > (x <- matrix(0, ncol=3, nrow=4, dimnames=list(letters[1:4], LETTERS[24:26]))) X Y Z a 0 0 0 b 0 0 0 c 0 0 0 d 0 0 0 > (y <- matrix(1:4, ncol=2, nrow=2, dimnames=list(letters[2:3], LETTERS[25:26]))) Y Z b 1 3 c 2 4 > afill(x) <- y > x X Y Z a 0 0 0 b 0 1 3 c 0 2 4 d 0 0 0 > The above usage is equivalent (when x and y have appropriately matching dimnames) to > x[match(rownames(y), rownames(x)), match(colnames(y), colnames(x))] <- y A more complex usage of afill is to fill a sub-matrix in a slice of a higher-dimensional array.
In this case, indices for x must be supplied as arguments to afill, with the dimensions corresponding to those of value being empty, e.g.:
> x <- array(0, dim=c(2,4,3), dimnames=list(LETTERS[1:2], letters[1:4], LETTERS[24:26]))
> y <- matrix(1:4, ncol=2, nrow=2, dimnames=list(letters[2:3], LETTERS[25:26]))
> afill(x, 1, , ) <- y
> x[1,,]
  X Y Z
a 0 0 0
b 0 1 3
c 0 2 4
d 0 0 0
> x[2,,]
  X Y Z
a 0 0 0
b 0 0 0
c 0 0 0
d 0 0 0
>
The most complex usage of afill is to fill a sub-matrix in multiple slices of a higher-dimensional array. Again, indices for x must be supplied as arguments to afill, with the dimensions corresponding to those of value being empty. Indices in which all slices should be filled can be supplied as TRUE. E.g.:
> x <- array(0, dim=c(2,4,3), dimnames=list(LETTERS[1:2], letters[1:4], LETTERS[24:26]))
> y <- matrix(1:4, ncol=2, nrow=2, dimnames=list(letters[2:3], LETTERS[25:26]))
> afill(x, TRUE, , ) <- y
> x[1,,]
  X Y Z
a 0 0 0
b 0 1 3
c 0 2 4
d 0 0 0
> x[2,,]
  X Y Z
a 0 0 0
b 0 1 3
c 0 2 4
d 0 0 0
>
In the above usage, afill takes care of replicating value in the appropriate fashion (which is not straightforward in some cases).
Value
The object x is changed. The return value of the assignment is the parts of the object x that are changed. This is similar to how regular subscript-replacement behaves, e.g., the expression x[2:3] <- 1:2 returns the vector 1:2, not the entire object x. However, note that there can be differences in some cases.
Author(s)
<NAME> <<EMAIL>>
See Also
Extract
Examples
# fill a submatrix defined by the dimnames on y
(x <- matrix(0, ncol=3, nrow=4, dimnames=list(letters[1:4], LETTERS[24:26])))
(y <- matrix(1:4, ncol=2, nrow=2, dimnames=list(letters[2:3], LETTERS[25:26])))
afill(x) <- y
x
all.equal(asub(x, dimnames(y)), y) # TRUE
# fill a slice in a higher dimensional array
x <- array(0, dim=c(2,4,3), dimnames=list(LETTERS[1:2], letters[1:4], LETTERS[24:26]))
y <- matrix(1:4, ncol=2, nrow=2, dimnames=list(letters[2:3], LETTERS[25:26]))
afill(x, 1, , ) <- y
x[1,,]
x[2,,]
all.equal(asub(x, c(1,dimnames(y))), y) # TRUE
# fill multiple slices
x <- array(0, dim=c(2,4,3), dimnames=list(LETTERS[1:2], letters[1:4], LETTERS[24:26]))
y <- matrix(1:4, ncol=2, nrow=2, dimnames=list(letters[2:3], LETTERS[25:26]))
afill(x, TRUE, , ) <- y
x[1,,]
x[2,,]
all.equal(asub(x, c(1,dimnames(y))), y) # TRUE
all.equal(asub(x, c(2,dimnames(y))), y) # TRUE
asub Arbitrary subsetting of array-like objects at specified indices
Description
Subset array-like objects at specified indices. asub() is an S3 generic, with one method, asub.default, supplied in the abind package.
Usage
asub(x, idx, dims = seq(len = max(length(dim(x)), 1)), drop = NULL, ...)
Arguments
x The object to index
idx A list of indices (e.g., a list of a mixture of integer, character, and logical vectors, but can actually be anything). Can be just a vector in the case that length(dims)==1. NULL entries in the list will be treated as empty indices.
dims The dimensions on which to index (a numeric or integer vector). The default is all of the dimensions.
drop The ’drop’ argument to index with (the default is to not supply a ’drop’ argument).
... There are no additional arguments allowed for asub.default but other methods may use them.
Details
Constructs and evaluates an expression to do the requested indexing.
E.g., for x with length(dim(x))==4 the call asub(x, list(c("a","b"), 3:5), 2:3) will construct and evaluate the expression x[, c("a","b"), 3:5, ], and the call asub(x, 1, 2, drop=FALSE) will construct and evaluate the expression x[, 1, , , drop=FALSE].
asub checks that the elements of dims are in the range 1 to length(dim(x)) (in the case that x is a vector, length(x) is used for dim(x)). Other than that, no checks are made on the suitability of components of idx as indices for x. If the components of idx have any out-of-range values or unsuitable types, this will be left to the subsetting method for x to catch.
Value
A subset of x, as returned by x[...].
Author(s)
<NAME> <<EMAIL>>
See Also
Extract
Examples
x <- array(1:24,dim=c(2,3,4),dimnames=list(letters[1:2],LETTERS[1:3],letters[23:26]))
asub(x, 1, 1, drop=FALSE)
asub(x, list(1:2,3:4), c(1,3))
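The Details above state that asub() merely builds and evaluates the corresponding bracket expression. The following is a minimal sketch of that equivalence, assuming the abind package is attached; the 4-d array x below is constructed purely for illustration and is not part of the package documentation.

```r
library(abind)

# Illustrative 4-d array with names on the second dimension
x <- array(seq_len(2 * 3 * 5 * 2), dim = c(2, 3, 5, 2),
           dimnames = list(NULL, c("a", "b", "c"), NULL, NULL))

# asub(x, list(c("a","b"), 3:5), 2:3) should evaluate the same expression
# as x[, c("a","b"), 3:5, ], as described in the Details section
identical(asub(x, list(c("a", "b"), 3:5), dims = 2:3),
          x[, c("a", "b"), 3:5, ])          # expected TRUE

# A single index on a single dimension may be passed as a plain vector
identical(asub(x, 1, dims = 2, drop = FALSE),
          x[, 1, , , drop = FALSE])          # expected TRUE
```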
BVAR
cran
R
Package ‘BVAR’ March 8, 2023 Type Package Title Hierarchical Bayesian Vector Autoregression Version 1.0.4 Date 2023-03-08 Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-6642-2543>), <NAME> [aut] (<https://orcid.org/0000-0002-3562-3414>), <NAME> [ctb], <NAME> [dtc], <NAME> [dtc] Maintainer <NAME> <<EMAIL>> Description Estimation of hierarchical Bayesian vector autoregressive models following Kuschnig & Vashold (2021) <doi:10.18637/jss.v100.i14>. Implements hierarchical prior selection for conjugate priors in the fashion of Giannone, Lenza & Primiceri (2015) <doi:10.1162/REST_a_00483>. Functions to compute and identify impulse responses, calculate forecasts, forecast error variance decompositions and scenarios are available. Several methods to print, plot and summarise results facilitate analysis. URL https://github.com/nk027/bvar BugReports https://github.com/nk027/bvar/issues Depends R (>= 3.3.0) Imports mvtnorm, stats, graphics, utils, grDevices Suggests coda, vars, tinytest License GPL-3 | file LICENSE Encoding UTF-8 LazyData true RoxygenNote 7.2.3 NeedsCompilation no Repository CRAN Date/Publication 2023-03-08 19:30:06 UTC R topics documented: BVAR-packag... 2 bva... 3 bv_dumm... 5 bv_fcas... 7 bv_ir... 8 bv_metropoli... 10 bv_minnesot... 11 bv_prior... 14 cod... 15 coef.bva... 16 companio... 18 density.bva... 19 fitted.bva... 20 fred_q... 22 fred_transfor... 23 irf.bva... 24 logLik.bva... 26 par_bva... 27 plot.bva... 29 plot.bvar_fcas... 30 plot.bvar_ir... 32 predict.bva... 34 summary.bva... 36 BVAR-package BVAR: Hierarchical Bayesian vector autoregression Description Estimation of hierarchical Bayesian vector autoregressive models following Kuschnig & Vashold (2021). Implements hierarchical prior selection for conjugate priors in the fashion of Giannone, Lenza & Primiceri (2015) <doi:10.1162/REST_a_00483>. Functions to compute and identify im- pulse responses, calculate forecasts, forecast error variance decompositions and scenarios are avail- able. Several methods to print, plot and summarise results facilitate analysis. References <NAME>. and <NAME>. and <NAME>. (2015) Prior Selection for Vector Autoregressions. The Review of Economics and Statistics, 97:2, 436-451, doi:10.1162/REST_a_00483. <NAME>. and <NAME>. (2021) BVAR: Bayesian Vector Autoregressions with Hierarchical Prior Selection in R. Journal of Statistical Software, 14, 1-27, doi:10.18637/jss.v100.i14. bvar Hierarchical Bayesian vector autoregression Description Used to estimate hierarchical Bayesian Vector Autoregression (VAR) models in the fashion of Gian- none, Lenza and Primiceri (2015). Priors are adjusted and added via bv_priors. The Metropolis- Hastings step can be modified with bv_mh. Usage bvar( data, lags, n_draw = 10000L, n_burn = 5000L, n_thin = 1L, priors = bv_priors(), mh = bv_mh(), fcast = NULL, irf = NULL, verbose = TRUE, ... ) Arguments data Numeric matrix or dataframe. Note that observations are expected to be ordered from earliest to latest, and variables in the columns. lags Integer scalar. Lag order of the model. n_draw, n_burn Integer scalar. The number of iterations to (a) cycle through and (b) burn at the start. n_thin Integer scalar. Every n_thin’th iteration is stored. For a given memory require- ment thinning reduces autocorrelation, while increasing effective sample size. priors Object from bv_priors with prior settings. Used to adjust the Minnesota prior, add custom dummy priors, and choose hyperparameters for hierarchical estima- tion. 
mh Object from bv_mh with settings for the Metropolis-Hastings step. Used to tune automatic adjustment of the acceptance rate within the burn-in period, or manually adjust the proposal variance.
fcast Object from bv_fcast with forecast settings. Options include the horizon and settings for conditional forecasts, i.e. scenario analysis. May also be calculated ex-post using predict.bvar.
irf Object from bv_irf with settings for the calculation of impulse responses and forecast error variance decompositions. Options include the horizon and different identification schemes. May also be calculated ex-post using irf.bvar.
verbose Logical scalar. Whether to print intermediate results and progress.
... Not used.
Details
The model can be expressed as: yt = a0 + A1 yt−1 + ... + Ap yt−p + εt
See Kuschnig and Vashold (2021) and Giannone, Lenza and Primiceri (2015) for further information.
Methods for a bvar object and its derivatives can be used to:
• predict and analyse scenarios;
• evaluate shocks and the variance of forecast errors;
• visualise forecasts and impulse responses, parameters and residuals;
• retrieve coefficients and the variance-covariance matrix;
• calculate fitted and residual values.
Note that these methods generally work by calculating quantiles from the posterior draws. The full posterior may be retrieved directly from the objects. The function str can be very helpful for this.
Value
Returns a list of class bvar with the following elements:
• beta - Numeric array with draws from the posterior of the VAR coefficients. Also see coef.bvar.
• sigma - Numeric array with draws from the posterior of the variance-covariance matrix. Also see vcov.bvar.
• hyper - Numeric matrix with draws from the posterior of the hierarchically treated hyperparameters.
• ml - Numeric vector with the marginal likelihood (with respect to the hyperparameters) that determines the acceptance probability.
• optim - List with outputs of optim, which is used to find starting values for the hyperparameters.
• prior - Prior settings from bv_priors.
• call - Call to the function. See match.call.
• meta - List with meta information. Includes the number of variables, accepted draws, number of iterations, and data.
• variables - Character vector with the column names of data. If missing, variables are named iteratively.
• explanatories - Character vector with names of explanatory variables. Formatting is akin to: "FEDFUNDS-lag1".
• fcast - Forecasts from predict.bvar.
• irf - Impulse responses from irf.bvar.
Author(s)
<NAME>, <NAME>
References
<NAME>. and Lenza, M. and <NAME>. (2015) Prior Selection for Vector Autoregressions. The Review of Economics and Statistics, 97:2, 436-451, doi:10.1162/REST_a_00483.
<NAME>. and <NAME>. (2021) BVAR: Bayesian Vector Autoregressions with Hierarchical Prior Selection in R. Journal of Statistical Software, 14, 1-27, doi:10.18637/jss.v100.i14.
See Also bv_priors; bv_mh; bv_fcast; bv_irf; predict.bvar; irf.bvar; plot.bvar; Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) # Calculate and store forecasts and impulse responses predict(x) <- predict(x, horizon = 8) irf(x) <- irf(x, horizon = 8, fevd = FALSE) ## Not run: # Check convergence of the hyperparameters with a trace and density plot plot(x) # Plot forecasts and impulse responses plot(predict(x)) plot(irf(x)) # Check coefficient values and variance-covariance matrix summary(x) ## End(Not run) bv_dummy Dummy prior settings Description Allows the creation of dummy observation priors for bv_priors. See the Details section for infor- mation on common dummy priors. Usage bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 5, fun) bv_soc(mode = 1, sd = 1, min = 0.0001, max = 50) bv_sur(mode = 1, sd = 1, min = 0.0001, max = 50) Arguments mode, sd Numeric scalar. Mode / standard deviation of the parameter. Note that the mode of psi is set automatically by default, and would need to be provided as vector. min, max Numeric scalar. Minimum / maximum allowed value. Note that for psi these are set automatically or need to provided as vectors. fun Function taking Y, lags and the prior’s parameter par to generate and return a named list with elements X and Y (numeric matrices). Details Dummy priors are often used to "reduce the importance of the deterministic component implied by VARs estimated conditioning on the initial observations" (<NAME> Primiceri, 2015, p. 440). One such prior is the sum-of-coefficients (SOC) prior, which imposes the notion that a no-change forecast is optimal at the beginning of a time series. Its key parameter µ controls the tightness - i.e. for low values the model is pulled towards a form with as many unit roots as variables and no cointegration. Another such prior is the single-unit-root (SUR) prior, that allows for cointegration relationships in the data. It pushes variables either towards their unconditional mean or towards the presence of at least one unit root. These priors are implemented via Theil mixed estimation, i.e. by adding dummy-observations on top of the data matrix. They are available via the functions bv_soc and bv_sur. Value Returns a named list of class bv_dummy for bv_priors. Functions • bv_soc(): Sum-of-coefficients dummy prior • bv_sur(): Single-unit-root dummy prior References <NAME>. and <NAME>. and <NAME>. (2015) Prior Selection for Vector Autoregressions. The Review of Economics and Statistics, 97:2, 436-451, doi:10.1162/REST_a_00483. 
See Also bv_priors; bv_minnesota Examples # Create a sum-of-coefficients prior add_soc <- function(Y, lags, par) { soc <- if(lags == 1) {diag(Y[1, ]) / par} else { diag(colMeans(Y[1:lags, ])) / par } Y_soc <- soc X_soc <- cbind(rep(0, ncol(Y)), matrix(rep(soc, lags), nrow = ncol(Y))) return(list("Y" = Y_soc, "X" = X_soc)) } soc <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_soc) # Create a single-unit-root prior add_sur <- function(Y, lags, par) { sur <- if(lags == 1) {Y[1, ] / par} else { colMeans(Y[1:lags, ]) / par } Y_sur <- sur X_sur <- c(1 / par, rep(sur, lags)) return(list("Y" = Y_sur, "X" = X_sur)) } sur <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_sur) # Add the new custom dummy priors bv_priors(hyper = "auto", soc = soc, sur = sur) bv_fcast Forecast settings Description Provide forecast settings to predict.bvar. Allows adjusting the horizon of forecasts, and for setting up conditional forecasts. See the Details section for further information. Usage bv_fcast(horizon = 12, cond_path = NULL, cond_vars = NULL) Arguments horizon Integer scalar. Horizon for which to compute forecasts. cond_path Optional numeric vector or matrix used for conditional forecasts. Supply vari- able path(s) on which forecasts are conditioned on. Unrestricted future realisa- tions should be filled with NA. Note that not all variables can be restricted at the same time. cond_vars Optional character or numeric vector. Used to subset cond_path to specific vari- able(s) via name or position. Not needed when cond_path is constructed for all variables. Details Conditional forecasts are calculated using the algorithm by Waggoner and Zha (1999). They are set up by imposing a path on selected variables. Value Returns a named list of class bv_fcast with options for bvar or predict.bvar. References Waggoner, <NAME>., & <NAME>. (1999). Conditional Forecasts in Dynamic Multivariate Models. Review of Economics and Statistics, 81:4, 639-651, doi:10.1162/003465399558508. See Also predict.bvar; plot.bvar_fcast Examples # Set forecast-horizon to 20 time periods for unconditional forecasts bv_fcast(horizon = 20) # Define a path for the second variable (in the initial six periods). bv_fcast(cond_path = c(1, 1, 1, 1, 1, 1), cond_var = 2) # Constrain the paths of the first and third variables. paths <- matrix(NA, nrow = 10, ncol = 2) paths[1:5, 1] <- 1 paths[1:10, 2] <- 2 bv_fcast(cond_path = paths, cond_var = c(1, 3)) bv_irf Impulse response settings and identification Description Provides settings for the computation of impulse responses to bvar, irf.bvar or fevd.bvar. Al- lows setting the horizon for which impulse responses should be computed, whether or not forecast error variance decompositions (FEVDs) should be included as well as if and what kind of identifica- tion should be used. See the Details section for further information on identification. Identification can be achieved via Cholesky decomposition, sign restrictions (Rubio-Ramirez, Waggoner and Zha, 2010), and zero and sign restrictions (Arias, Rubio-Ramirez and Waggoner, 2018). Usage bv_irf( horizon = 12, fevd = FALSE, identification = TRUE, sign_restr = NULL, sign_lim = 1000 ) Arguments horizon Integer scalar. The horizon for which impulse responses (and FEVDs) should be computed. Note that the first period corresponds to impacts i.e. contempora- neous effects. fevd Logical scalar. Whether or not forecast error variance decompositions should be calculated. identification Logical scalar. 
Whether or not the shocks used for calculating impulses should be identified. Defaults to TRUE, i.e. identification via Cholesky decomposition of the VCOV-matrix unless sign_restr is provided.
sign_restr Elements inform about expected impacts of certain shocks. Can be either 1, −1 or 0 depending on whether a positive, a negative or no contemporaneous effect of a certain shock is expected. Elements set to NA indicate that there are no particular expectations for the contemporaneous effects. The default value is NULL. Note that in order to be fully identified at least M ∗ (M − 1)/2 restrictions have to be set and a maximum of M − j zero restrictions can be imposed on the j’th column.
sign_lim Integer scalar. Maximum number of tries to find suitable matrices for fitting sign or zero and sign restrictions.
Details
Identification can be performed via Cholesky decomposition, sign restrictions, or zero and sign restrictions. The algorithm for generating suitable sign restrictions follows Rubio-Ramirez, Waggoner and Zha (2010), while the one for zero and sign restrictions follows Arias, Rubio-Ramirez and Waggoner (2018). Note the possibility of finding no suitable zero/sign restrictions.
Value
Returns a named list of class bv_irf with options for bvar, irf.bvar or fevd.bvar.
References
Rubio-Ramirez, <NAME>. and <NAME>. and <NAME>. (2010) Structural Vector Autoregressions: Theory of Identification and Algorithms for Inference. The Review of Economic Studies, 77, 665-696, doi:10.1111/j.1467-937X.2009.00578.x.
Arias, J.E. and <NAME>. and Waggoner, <NAME>. (2018) Inference Based on Structural Vector Autoregressions Identified with Sign and Zero Restrictions: Theory and Applications. Econometrica, 86, 2, 685-720, doi:10.3982/ECTA14468.
See Also
irf.bvar; plot.bvar_irf
Examples
# Set impulse responses to a horizon of 20 time periods and enable FEVD
# (Identification is performed via Cholesky decomposition)
bv_irf(horizon = 20, fevd = TRUE)
# Set up structural impulse responses using sign restrictions
signs <- matrix(c(1, NA, NA, -1, 1, -1, -1, 1, 1), nrow = 3)
bv_irf(sign_restr = signs)
# Set up structural impulse responses using zero and sign restrictions
zero_signs <- matrix(c(1, 0, NA, -1, 1, 0, -1, 1, 1), nrow = 3)
bv_irf(sign_restr = zero_signs)
# Prepare to estimate unidentified impulse responses
bv_irf(identification = FALSE)
bv_metropolis Metropolis-Hastings settings
Description
Function to provide settings for the Metropolis-Hastings step in bvar. Options include scaling the inverse Hessian that is used to draw parameter proposals and automatic scaling to achieve certain acceptance rates.
Usage
bv_metropolis( scale_hess = 0.01, adjust_acc = FALSE, adjust_burn = 0.75, acc_lower = 0.25, acc_upper = 0.45, acc_change = 0.01 )
bv_mh( scale_hess = 0.01, adjust_acc = FALSE, adjust_burn = 0.75, acc_lower = 0.25, acc_upper = 0.45, acc_change = 0.01 )
Arguments
scale_hess Numeric scalar or vector. Scaling parameter, determining the range of hyperparameter draws. Should be calibrated so a reasonable acceptance rate is reached. If provided as vector the length must equal the number of hyperparameters (one per variable for psi).
adjust_acc Logical scalar. Whether or not to further scale the variability of parameter draws during the burn-in phase.
adjust_burn Numeric scalar. How much of the burn-in phase should be used to scale parameter variability. See Details.
acc_lower, acc_upper Numeric scalar. Lower (upper) bound of the target acceptance rate. Required if adjust_acc is set to TRUE.
acc_change Numeric scalar. Percent change applied to the Hessian matrix for tuning accep- tance rate. Required if adjust_acc is set to TRUE. Details Note that adjustment of the acceptance rate by scaling the parameter draw variability can only be done during the burn-in phase, as otherwise the resulting draws do not feature the desirable properties of a Markov chain. After the parameter draws have been scaled, some additional draws should be burnt. Value Returns a named list of class bv_metropolis with options for bvar. Examples # Increase the scaling parameter bv_mh(scale_hess = 1) # Turn on automatic scaling of the acceptance rate to [20%, 40%] bv_mh(adjust_acc = TRUE, acc_lower = 0.2, acc_upper = 0.4) # Increase the rate of automatic scaling bv_mh(adjust_acc = TRUE, acc_lower = 0.2, acc_upper = 0.4, acc_change = 0.1) # Use only 50% of the burn-in phase to adjust scaling bv_mh(adjust_acc = TRUE, adjust_burn = 0.5) bv_minnesota Minnesota prior settings Description Provide settings for the Minnesota prior to bv_priors. See the Details section for further informa- tion. Usage bv_minnesota( lambda = bv_lambda(), alpha = bv_alpha(), psi = bv_psi(), var = 10000000, b = 1 ) bv_mn( lambda = bv_lambda(), alpha = bv_alpha(), psi = bv_psi(), var = 10000000, b = 1 ) bv_lambda(mode = 0.2, sd = 0.4, min = 0.0001, max = 5) bv_alpha(mode = 2, sd = 0.25, min = 1, max = 3) bv_psi(scale = 0.004, shape = 0.004, mode = "auto", min = "auto", max = "auto") Arguments lambda List constructed via bv_lambda. Arguments are mode, sd, min and max. May also be provided as a numeric vector of length 4. alpha List constructed via bv_alpha. Arguments are mode, sd, min and max. High values for mode may affect invertibility of the augmented data matrix. May also be provided as a numeric vector of length 4. psi List with elements scale, shape of the prior as well as mode and optionally min and max. The length of these needs to match the number of variables (i.e. columns) in the data. By default mode is set automatically to the square-root of the innovations variance after fitting an AR(p) model to the data. If arima fails due to a non-stationary time series the order of integration is incremented by 1. By default min / max are set to mode divided / multiplied by 100. var Numeric scalar with the prior variance on the model’s constant. b Numeric scalar, vector or matrix with the prior mean. A scalar is applied to all variables, with a default value of 1. Consider setting it to 0 for growth rates. A vector needs to match the number of variables (i.e. columns) in the data, with a prior mean per variable. If provided, a matrix needs to have a column per variable (M ), and M ∗ p + 1 rows, where p is the number of lags applied. mode, sd Numeric scalar. Mode / standard deviation of the parameter. Note that the mode of psi is set automatically by default, and would need to be provided as vector. min, max Numeric scalar. Minimum / maximum allowed value. Note that for psi these are set automatically or need to provided as vectors. scale, shape Numeric scalar. Scale and shape parameters of a Gamma distribution. Details Essentially this prior imposes the hypothesis, that individual variables all follow random walk pro- cesses. This parsimonious specification typically performs well in forecasts of macroeconomic time series and is often used as a benchmark for evaluating accuracy (Kilian and Lütkepohl, 2017). The key parameter is λ (lambda), which controls the tightness of the prior. 
The parameter α (alpha) governs variance decay with increasing lag order, while ψ (psi) controls the prior’s standard de- viation on lags of variables other than the dependent. The Minnesota prior is often refined with additional priors, trying to minimise the importance of conditioning on initial observations. See bv_dummy for more information on such priors. Value Returns a list of class bv_minnesota with options for bvar. Functions • bv_lambda(): Tightness of the Minnesota prior • bv_alpha(): Variance decay with increasing lag order • bv_psi(): Prior standard deviation on other lags References <NAME>. and <NAME>. (2017). Structural Vector Autoregressive Analysis. Cambridge Univer- sity Press, doi:10.1017/9781108164818 See Also bv_priors; bv_dummy Examples # Adjust alpha and the Minnesota prior variance. bv_mn(alpha = bv_alpha(mode = 0.5, sd = 1, min = 1e-12, max = 10), var = 1e6) # Optionally use a vector as shorthand bv_mn(alpha = c(0.5, 1, 1e-12, 10), var = 1e6) # Only adjust lambda's standard deviation bv_mn(lambda = bv_lambda(sd = 2)) # Provide prior modes for psi (for a VAR with three variables) bv_mn(psi = bv_psi(mode = c(0.7, 0.3, 0.9))) bv_priors Prior settings Description Function to provide priors and their parameters to bvar. Used for adjusting the parameters treated as hyperparameters, the Minnesota prior and adding various dummy priors through the ellipsis parameter. Note that treating ψ (psi) as a hyperparameter in a model with many variables may lead to very low acceptance rates and thus hinder convergence. Usage bv_priors(hyper = "auto", mn = bv_mn(), ...) Arguments hyper Character vector. Used to specify the parameters to be treated as hyperparame- ters. May also be set to "auto" or "full" for an automatic / full subset. Other allowed values are the Minnesota prior’s parameters "lambda", "alpha" and "psi" as well as the names of additional dummy priors included via .... mn List of class "bv_minnesota". Options for the Minnesota prior, set via bv_mn. ... Optional lists of class bv_dummy with options for dummy priors. Must be as- signed a name in the function call. Created with bv_dummy. Value Returns a named list of class bv_priors with options for bvar. See Also bv_mn; bv_dummy Examples # Extend the hyperparameters to the full Minnesota prior bv_priors(hyper = c("lambda", "alpha", "psi")) # Alternatively # bv_priors("full") # Add a dummy prior via `bv_dummy()` # Re-create the single-unit-root prior add_sur <- function(Y, lags, par) { sur <- if(lags == 1) {Y[1, ] / par} else { colMeans(Y[1:lags, ]) / par } Y_sur <- sur X_sur <- c(1 / par, rep(sur, lags)) return(list("Y" = Y_sur, "X" = X_sur)) } sur <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_sur) # Add the new prior bv_priors(hyper = "auto", sur = sur) coda Methods for coda Markov chain Monte Carlo objects Description Methods to convert parameter and/or coefficient draws from bvar to coda’s mcmc (or mcmc.list) format for further processing. Usage as.mcmc.bvar( x, vars = NULL, vars_response = NULL, vars_impulse = NULL, chains = list(), ... ) as.mcmc.bvar_chains( x, vars = NULL, vars_response = NULL, vars_impulse = NULL, chains = list(), ... ) Arguments x A bvar object, obtained from bvar. vars Character vector used to select variables. Elements are matched to hyperpa- rameters or coefficients. Coefficients may be matched based on the dependent variable (by providing the name or position) or the explanatory variables (by providing the name and the desired lag). 
See the example section for a demonstration. Defaults to NULL, i.e. all hyperparameters.
vars_response, vars_impulse Optional character or integer vectors used to select coefficients. Dependent variables are specified with vars_response, explanatory ones with vars_impulse. See the example section for a demonstration.
chains List with additional bvar objects. If provided, an object of class mcmc.list is returned.
... Other parameters for as.mcmc.
Value
Returns a coda mcmc (or mcmc.list) object.
See Also
bvar; mcmc; mcmc.list
Examples
library("coda")
# Access a subset of the fred_qd dataset
data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")]
# Transform it to be stationary
data <- fred_transform(data, codes = c(5, 5, 1), lag = 4)
# Estimate two BVARs using one lag, default settings and very few draws
x <- bvar(data, lags = 1, n_draw = 750L, n_burn = 250L, verbose = FALSE)
y <- bvar(data, lags = 1, n_draw = 750L, n_burn = 250L, verbose = FALSE)
# Convert the hyperparameter lambda
as.mcmc(x, vars = c("lambda"))
# Convert coefficients for the first dependent, use chains in method
as.mcmc(structure(list(x, y), class = "bvar_chains"), vars = "CPIAUCSL")
# Convert the coefs of variable three's first lag, use in the generic
as.mcmc(x, vars = "FEDFUNDS-lag1", chains = y)
# Convert hyperparameters and constant coefficient values for variable 1
as.mcmc(x, vars = c("lambda", "CPI", "constant"))
# Specify coefficient values to convert in an alternative way
as.mcmc(x, vars_impulse = c("FED", "CPI"), vars_response = "UNRATE")
coef.bvar Coefficient and VCOV methods for Bayesian VARs
Description
Retrieves coefficient / variance-covariance values from Bayesian VAR models generated with bvar. Note that coefficients are available for every stored draw and one may retrieve (a) credible intervals via the conf_bands argument, or (b) means via the type argument.
Usage
## S3 method for class 'bvar'
coef( object, type = c("quantile", "mean"), conf_bands = 0.5, companion = FALSE, ... )
## S3 method for class 'bvar'
vcov(object, type = c("quantile", "mean"), conf_bands = 0.5, ...)
Arguments
object A bvar object, obtained from bvar.
type Character scalar. Whether to return quantile or mean values. Note that conf_bands is ignored for mean values.
conf_bands Numeric vector of confidence bands to apply. E.g. for bands at 5%, 10%, 90% and 95% set this to c(0.05, 0.1). Note that the median, i.e. 0.5 is always included.
companion Logical scalar. Whether to retrieve the companion matrix of coefficients. See companion.bvar.
... Not used.
Value
Returns a numeric array of class bvar_coefs or bvar_vcovs at the specified values.
See Also
bvar; companion.bvar
Examples
# Access a subset of the fred_qd dataset
data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")]
# Transform it to be stationary
data <- fred_transform(data, codes = c(5, 5, 1), lag = 4)
# Estimate a BVAR using one lag, default settings and very few draws
x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE)
# Get coefficient values at the 10%, 50% and 90% quantiles
coef(x, conf_bands = 0.10)
# Only get the median of the variance-covariance matrix
vcov(x, conf_bands = 0.5)
companion Retrieve companion matrix from a Bayesian VAR
Description
Calculates the companion matrix for Bayesian VARs generated via bvar.
Usage
companion(object, ...)
## S3 method for class 'bvar'
companion(object, type = c("quantile", "mean"), conf_bands = 0.5, ...)
Arguments
object A bvar object, obtained from bvar.
... Not used.
type Character scalar.
Whether to return quantile or mean values. Note that conf_bands is ignored for mean values. conf_bands Numeric vector of confidence bands to apply. E.g. for bands at 5%, 10%, 90% and 95% set this to c(0.05, 0.1). Note that the median, i.e. 0.5 is always included. Value Returns a numeric array/matrix of class bvar_comp with the VAR’s coefficents in companion form at the specified values. See Also bvar; coef.bvar Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) # Get companion matrices for confidence bands at 10%, 50% and 90% companion(x, conf_bands = 0.10) density.bvar Density methods for Bayesian VARs Description Calculates densities of hyperparameters or coefficient draws from Bayesian VAR models generated via bvar. Wraps standard density outputs into a list. Usage ## S3 method for class 'bvar' density(x, vars = NULL, vars_response = NULL, vars_impulse = NULL, ...) ## S3 method for class 'bvar_density' plot(x, mar = c(2, 2, 2, 0.5), mfrow = c(length(x), 1), ...) independent_index(var, n_vars, lag) Arguments x A bvar object, obtained from bvar. vars Character vector used to select variables. Elements are matched to hyperpa- rameters or coefficients. Coefficients may be matched based on the dependent variable (by providing the name or position) or the explanatory variables (by providing the name and the desired lag). See the example section for a demon- stration. Defaults to NULL, i.e. all hyperparameters. vars_response, vars_impulse Optional character or integer vectors used to select coefficents. Dependent vari- ables are specified with vars_response, explanatory ones with vars_impulse. See the example section for a demonstration. ... Fed to density or par. mar Numeric vector. Margins for par. mfrow Numeric vector. Rows for par. var, n_vars, lag Integer scalars. Retrieve the position of lag lag of variable var given n_vars total variables. Value Returns a list with outputs of density. See Also bvar; density Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) # Get densities of the hyperparameters density(x) # Plot them plot(density(x)) # Only get the densities associated with dependent variable 1 density(x, vars_response = "CPI") # Check out the constant's densities plot(density(x, vars_impulse = 1)) # Get the densities of variable three's first lag density(x, vars = "FEDFUNDS-lag1") # Get densities of lambda and the coefficients of dependent variable 2 density(x, vars = c("lambda", "UNRATE")) fitted.bvar Fitted and residual methods for Bayesian VARs Description Calculates fitted or residual values for Bayesian VAR models generated with bvar. Usage ## S3 method for class 'bvar' fitted(object, type = c("quantile", "mean"), conf_bands = 0.5, ...) ## S3 method for class 'bvar' residuals(object, type = c("quantile", "mean"), conf_bands = 0.5, ...) ## S3 method for class 'bvar_resid' plot(x, vars = NULL, mar = c(2, 2, 2, 0.5), ...) Arguments object A bvar object, obtained from bvar. type Character scalar. 
Whether to return quantile or mean values. Note that conf_bands is ignored for mean values. conf_bands Numeric vector of confidence bands to apply. E.g. for bands at 5%, 10%, 90% and 95% set this to c(0.05, 0.1). Note that the median, i.e. 0.5 is always included. ... Not used. x Object of class bvar_fitted / bvar_resid. vars Character vector used to select variables. Elements are matched to hyperpa- rameters or coefficients. Coefficients may be matched based on the dependent variable (by providing the name or position) or the explanatory variables (by providing the name and the desired lag). See the example section for a demon- stration. Defaults to NULL, i.e. all hyperparameters. mar Numeric vector. Margins for par. Value Returns a numeric array of class bvar_fitted or bvar_resid at the specified values. See Also bvar Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) # Get fitted values and adjust confidence bands to 10%, 50% and 90% fitted(x, conf_bands = 0.10) # Get the residuals of variable 1 resid(x, vars = 1) ## Not run: # Get residuals and plot them plot(residuals(x)) ## End(Not run) fred_qd FRED-MD and FRED-QD: Databases for Macroeconomic Research Description FRED-MD and FRED-QD are large macroeconomic databases, containing monthly and quarterly time series that are frequently used in the literature. They are intended to facilitate the reproduction of empirical work and simplify data related tasks. Included datasets are provided as is - transforma- tion codes are available in system.file("fred_trans.rds", package = "BVAR"). These can be applied automatically with fred_transform. Usage fred_qd fred_md Format A data.frame object with dates as rownames. An object of class data.frame with 769 rows and 118 columns. Details The versions of FRED-MD and FRED-QD that are provided here are licensed under a modified ODC-BY 1.0 license that can be found in the provided LICENSE file. The provided versions are subset to variables that are either in public domain or for which we were given permission to use. For further details see McCracken and Ng (2016) or https://research.stlouisfed.org/econ/ mccracken/fred-databases/. We would like to thank <NAME> and <NAME>, Adri- <NAME> and the Federal Reserve Bank of St. Louis for creating, updating and making available the datasets and many of the contained time series. We also thank all other owners of included time series that permitted their use. Source https://research.stlouisfed.org/econ/mccracken/fred-databases/ References <NAME>. and <NAME>. (2016) FRED-MD: A Monthly Database for Macroeconomic Re- search. Journal of Business & Economic Statistics, 34:4, 574-589, doi:10.1080/07350015.2015.1086655. <NAME>., & <NAME>. (2020). FRED-QD: A Quarterly Database for Macroeconomic Re- search w26872. National Bureau of Economic Research. See Also fred_transform fred_transform FRED transformation and subset helper Description Apply transformations given by FRED-MD or FRED-QD and generate rectangular subsets. See fred_qd for information on data and the Details section for information on the transformations. Call without arguments to retrieve available codes / all FRED suggestions. 
Usage fred_transform( data, type = c("fred_qd", "fred_md"), codes, na.rm = TRUE, lag = 1L, scale = 100 ) fred_code(vars, type = c("fred_qd", "fred_md"), table = FALSE) Arguments data A data.frame with FRED-QD or FRED-MD time series. The column names are used to find the correct transformation. type Character scalar. Whether data stems from the FRED-QD or the FRED-MD database. codes Integer vector. Transformation code(s) to apply to data. Overrides automatic lookup of transformation codes. na.rm Logical scalar. Whether to subset to rows without any NA values. A warning is thrown if rows are non-sequential. lag Integer scalar. Number of lags to apply when taking differences. See diff. scale Numeric scalar. Scaling to apply to log differences. vars Character vector. Names of the variables to look for. table Logical scalar. Whether to return a table of matching transformation codes in- stead of just the codes. Details FRED-QD and FRED-MD include a transformation code for every variable. All codes are pro- vided in system.file("fred_trans.csv", package = "BVAR"). The transformation codes are as follows: 1. 1 - no transformation; 2. 2 - first differences - ∆xt ; 3. 3 - second differences - ∆2 xt ; 4. 4 - log transformation - log xt ; 5. 5 - log differences - ∆ log xt ; 6. 6 - log second differences - ∆2 log xt ; 7. 7 - percent change differences - ∆xt /xt−1 − 1; Note that the transformation codes of FRED-MD and FRED-QD may differ for the same series. Value fred_transform returns a data.frame object with applied transformations. fred_code returns transformation codes, or a data.frame of matching transformation codes. See Also fred_qd Examples # Transform a subset of FRED-QD fred_transform(fred_qd[, c("GDPC1", "INDPRO", "FEDFUNDS")]) # Get info on transformation codes for unemployment variables fred_code("UNRATE", table = TRUE) # Get the transformation code for GDPC1 fred_code("GDPC1", type = "fred_qd") # Transform all of FRED-MD ## Not run: fred_transform(fred_md, type = "fred_md") ## End(Not run) irf.bvar Impulse response and forecast error methods for Bayesian VARs Description Retrieves / calculates impulse response functions (IRFs) and/or forecast error variance decompo- sitions (FEVDs) for Bayesian VARs generated via bvar. If the object is already present and no settings are supplied it is simply retrieved, otherwise it will be calculated ex-post. Note that FEVDs require the presence / calculation of IRFs. To store the results you may want to assign the output using the setter function (irf(x) <- irf(x)). May also be used to update confidence bands. Usage ## S3 method for class 'bvar' irf(x, ..., conf_bands, n_thin = 1L) ## S3 method for class 'bvar' fevd(x, ..., conf_bands, n_thin = 1L) irf(x, ...) irf(x) <- value fevd(x, ...) fevd(x) <- value ## S3 method for class 'bvar_irf' summary(object, vars_impulse = NULL, vars_response = NULL, ...) Arguments x, object A bvar object, obtained from bvar. Summary and print methods take in a bvar_irf / bvar_fevd object. ... A bv_irf object or arguments to be fed into bv_irf. Contains settings for the IRFs / FEVDs. conf_bands Numeric vector of confidence bands to apply. E.g. for bands at 5%, 10%, 90% and 95% set this to c(0.05, 0.1). Note that the median, i.e. 0.5 is always included. n_thin Integer scalar. Every n_thin’th draw in x is used to calculate, others are dropped. value A bvar_irf object to assign. vars_impulse, vars_response Optional numeric or character vector. 
Used to subset the summary method’s outputs to certain variables by position or name (must be available). Defaults to NULL, i.e. all variables. Value Returns a list of class bvar_irf including IRFs and optionally FEVDs at desired confidence bands. The fevd method only returns a the nested bvar_fevd object. The summary method returns a numeric array of impulse responses at the specified confidence bands. See Also plot.bvar_irf; bv_irf Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 600L, n_burn = 100L, verbose = FALSE) # Compute + store IRF with a longer horizon, no identification and thinning irf(x) <- irf(x, bv_irf(horizon = 24L, identification = FALSE), n_thin = 5L) # Update the confidence bands of the IRFs irf(x, conf_bands = c(0.01, 0.05, 0.1)) # Recalculate with sign restrictions provided via the ellipsis irf(x, sign_restr = matrix(c(1, NA, NA, -1, 1, -1, -1, 1, 1), nrow = 3)) # Recalculate with zero and sign restrictions provided via the ellipsis irf(x, sign_restr = matrix(c(1, 0, 1, NA, 1, 1, -1, -1, 1), nrow = 3)) # Calculate the forecast error variance decomposition fevd(x) # Get a summary of the saved impulse response function summary(x) # Limit the summary to responses of variable #2 summary(x, vars_response = 2L) logLik.bvar Log-Likelihood method for Bayesian VARs Description Calculates the log-likelihood of Bayesian VAR models generated with bvar. Usage ## S3 method for class 'bvar' logLik(object, ...) Arguments object A bvar object, obtained from bvar. ... Not used. Value Returns an object of class logLik. See Also bvar Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) # Calculate the log-likelihood logLik(x) par_bvar Parallel hierarchical Bayesian vector autoregression Description Wrapper for bvar to simplify parallel computation via parLapply. Make sure to properly start and stop the provided cluster. Usage par_bvar( cl, n_runs = length(cl), data, lags, n_draw = 10000L, n_burn = 5000L, n_thin = 1L, priors = bv_priors(), mh = bv_mh(), fcast = NULL, irf = NULL ) Arguments cl A cluster object obtained from makeCluster. n_runs The number of parallel runs to calculate. Defaults to the length of cl, i.e. the number of registered nodes. data Numeric matrix or dataframe. Note that observations are expected to be ordered from earliest to latest, and variables in the columns. lags Integer scalar. Lag order of the model. n_draw, n_burn Integer scalar. The number of iterations to (a) cycle through and (b) burn at the start. n_thin Integer scalar. Every n_thin’th iteration is stored. For a given memory require- ment thinning reduces autocorrelation, while increasing effective sample size. priors Object from bv_priors with prior settings. Used to adjust the Minnesota prior, add custom dummy priors, and choose hyperparameters for hierarchical estima- tion. mh Object from bv_mh with settings for the Metropolis-Hastings step. Used to tune automatic adjustment of the acceptance rate within the burn-in period, or manu- ally adjust the proposal variance. 
fcast Object from bv_fcast with forecast settings. Options include the horizon and settings for conditional forecasts, i.e. scenario analysis. May also be calculated ex-post using predict.bvar.
irf Object from bv_irf with settings for the calculation of impulse responses and forecast error variance decompositions. Options include the horizon and different identification schemes. May also be calculated ex-post using irf.bvar.
Value
Returns a list of class bvar_chain with bvar objects.
See Also
bvar; parLapply
Examples
library("parallel")
cl <- makeCluster(2L)
# Access a subset of the fred_qd dataset
data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")]
# Transform it to be stationary
data <- fred_transform(data, codes = c(5, 5, 1), lag = 4)
# A single run using one lag, default settings and very few draws
x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE)
# Two parallel runs
y <- par_bvar(cl, n_runs = 2, data = data, lags = 1, n_draw = 1000L, n_burn = 200L)
stopCluster(cl)
# Plot lambda for all of the runs
## Not run:
plot(x, type = "full", vars = "lambda", chains = y)
# Convert the hyperparameter lambda to a coda mcmc.list object
coda::as.mcmc(y, vars = "lambda")
## End(Not run)
plot.bvar Plotting method for Bayesian VARs
Description
Method to plot traces and densities of coefficient, hyperparameter and marginal likelihood draws obtained from bvar. Several types of plot are available via the argument type, including traces, densities, plots of forecasts and impulse responses.
Usage
## S3 method for class 'bvar'
plot( x, type = c("full", "trace", "density", "irf", "fcast"), vars = NULL, vars_response = NULL, vars_impulse = NULL, chains = list(), mar = c(2, 2, 2, 0.5), ... )
Arguments
x A bvar object, obtained from bvar.
type A string with the type of plot desired. The default option "full" plots both densities and traces.
vars Character vector used to select variables. Elements are matched to hyperparameters or coefficients. Coefficients may be matched based on the dependent variable (by providing the name or position) or the explanatory variables (by providing the name and the desired lag). See the example section for a demonstration. Defaults to NULL, i.e. all hyperparameters.
vars_response, vars_impulse Optional character or integer vectors used to select coefficients. Dependent variables are specified with vars_response, explanatory ones with vars_impulse. See the example section for a demonstration.
chains List of bvar objects. Contents are then added to trace and density plots to help assess convergence.
mar Numeric vector. Margins for par.
... Other graphical parameters for par.
Value
Returns x invisibly.
See Also
bvar; plot.bvar_fcast; plot.bvar_irf.
Examples
# Access a subset of the fred_qd dataset
data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")]
# Transform it to be stationary
data <- fred_transform(data, codes = c(5, 5, 1), lag = 4)
# Estimate a BVAR using one lag, default settings and very few draws
x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE)
# Plot full traces and densities
plot(x)
# Only plot the marginal likelihood's trace
plot(x, "trace", "ml")
# Access IRF and forecast plotting functions
plot(x, type = "irf", vars_response = 2)
plot(x, type = "fcast", vars = 2)
plot.bvar_fcast Plotting method for Bayesian VAR predictions
Description
Plotting method for forecasts obtained from predict.bvar. Forecasts of all or a subset of the available variables can be plotted.
Usage ## S3 method for class 'bvar_fcast' plot( x, vars = NULL, col = "#737373", t_back = 1, area = FALSE, fill = "#808080", variables = NULL, orientation = c("vertical", "horizontal"), mar = c(2, 2, 2, 0.5), ... ) Arguments x A bvar_fcast object, obtained from predict.bvar. vars Optional numeric or character vector. Used to subset the plot to certain variables by position or name (must be available). Defaults to NULL, i.e. all variables. col Character vector. Colour(s) of the lines delineating credible intervals. Single values will be recycled if necessary. Recycled HEX color codes are varied in transparency if not provided (e.g. "#737373FF"). Lines can be bypassed by setting this to "transparent". t_back Integer scalar. Number of observed datapoints to plot ahead of the forecast. area Logical scalar. Whether to fill the credible intervals using polygon. fill Character vector. Colour(s) to fill the credible intervals with. See col for more information. variables Optional character vector. Names of all variables in the object. Used to subset and title. Taken from x$variables if available. orientation String indicating the orientation of the plots. Defaults to "v" (i.e. vertical); may be set to "h" (i.e. horizontal). mar Numeric vector. Margins for par. ... Other graphical parameters for par. Value Returns x invisibly. See Also bvar; predict.bvar Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) # Store predictions ex-post predict(x) <- predict(x) # Plot forecasts for all available variables plot(predict(x)) # Subset to variables in positions 1 and 3 via their name plot(predict(x), vars = c("CPI", "FED")) # Subset via position, increase the plotted forecast horizon and past data plot(predict(x, horizon = 20), vars = c(1, 3), t_back = 10) # Adjust confidence bands and the plot's orientation plot(predict(x, conf_bands = 0.25), orientation = "h") # Draw areas inbetween the confidence bands and skip drawing lines plot(predict(x), col = "transparent", area = TRUE) # Plot a conditional forecast (with a constrained second variable). plot(predict(x, cond_path = c(1, 1, 1, 1, 1, 1), cond_var = 2)) plot.bvar_irf Plotting method for Bayesian VAR impulse responses Description Plotting method for impulse responses obtained from irf.bvar. Impulse responses of all or a subset of the available variables can be plotted. Usage ## S3 method for class 'bvar_irf' plot( x, vars_response = NULL, vars_impulse = NULL, col = "#737373", area = FALSE, fill = "#808080", variables = NULL, mar = c(2, 2, 2, 0.5), ... ) Arguments x A bvar_irf object, obtained from irf.bvar. vars_impulse, vars_response Optional numeric or character vector. Used to subset the plot’s impulses / re- sponses to certain variables by position or name (must be available). Defaults to NULL, i.e. all variables. col Character vector. Colour(s) of the lines delineating credible intervals. Single values will be recycled if necessary. Recycled HEX color codes are varied in transparency if not provided (e.g. "#737373FF"). Lines can be bypassed by setting this to "transparent". area Logical scalar. Whether to fill the credible intervals using polygon. fill Character vector. Colour(s) to fill the credible intervals with. See col for more information. 
variables Optional character vector. Names of all variables in the object. Used to subset and title. Taken from x$variables if available. mar Numeric vector. Margins for par. ... Other graphical parameters for par. Value Returns x invisibly. See Also bvar; irf.bvar Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) # Store IRFs ex-post irf(x) <- irf(x) # Plot impulse responses for all available variables plot(irf(x)) # Subset to impulse variables in positions 2 and 3 via their name plot(irf(x), vars_impulse = c(2, 3)) # Subset via position and increase the plotted IRF horizon plot(irf(x, horizon = 20), vars_impulse = c("UNRATE", "FED")) # Adjust confidence bands and subset to one response variables plot(irf(x, conf_bands = 0.25), vars_response = "CPI") # Draw areas inbetween the confidence bands and skip drawing lines plot(irf(x), col = "transparent", area = TRUE) # Subset to a specific impulse and response plot(irf(x), vars_response = "CPI", vars_impulse = "FED") predict.bvar Predict method for Bayesian VARs Description Retrieves / calculates forecasts for Bayesian VARs generated via bvar. If a forecast is already present and no settings are supplied it is simply retrieved, otherwise it will be calculated. To store the results you may want to assign the output using the setter function (predict(x) <- predict(x)). May also be used to update confidence bands. Usage ## S3 method for class 'bvar' predict(object, ..., conf_bands, n_thin = 1L, newdata) predict(object) <- value ## S3 method for class 'bvar_fcast' summary(object, vars = NULL, ...) Arguments object A bvar object, obtained from bvar. Summary and print methods take in a bvar_fcast object. ... A bv_fcast object or parameters to be fed into bv_fcast. Contains settings for the forecast. conf_bands Numeric vector of confidence bands to apply. E.g. for bands at 5%, 10%, 90% and 95% set this to c(0.05, 0.1). Note that the median, i.e. 0.5 is always included. n_thin Integer scalar. Every n_thin’th draw in object is used to predict, others are dropped. newdata Optional numeric matrix or dataframe. Used to base the prediction on. value A bvar_fcast object to assign. vars Optional numeric or character vector. Used to subset the summary to certain variables by position or name (must be available). Defaults to NULL, i.e. all variables. Value Returns a list of class bvar_fcast including forecasts at desired confidence bands. The summary method returns a numeric array of forecast paths at the specified confidence bands. 
See Also plot.bvar_fcast; bv_fcast Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) # Calculate a forecast with an increased horizon y <- predict(x, horizon = 20) # Add some confidence bands and store the forecast predict(x) <- predict(x, conf_bands = c(0.05, 0.16)) # Recalculate with different settings and increased thinning predict(x, bv_fcast(24L), n_thin = 10L) # Simulate some new data to predict on predict(x, newdata = matrix(rnorm(300), ncol = 3)) # Calculate a conditional forecast (with a constrained second variable). predict(x, cond_path = c(1, 1, 1, 1, 1, 1), cond_var = 2) # Get a summary of the stored forecast summary(x) # Only get the summary for variable #2 summary(x, vars = 2L) summary.bvar Summary method for Bayesian VARs Description Retrieves several outputs of interest, including the median coefficient matrix, the median variance- covariance matrix, and the log-likelihood. Separate summary methods exist for impulse responses and forecasts. Usage ## S3 method for class 'bvar' summary(object, ...) Arguments object A bvar object, obtained from bvar. ... Not used. Value Returns a list of class bvar_summary with elements that can can be accessed individually: • bvar - the bvar object provided. • coef - coefficient values from coef.bvar. • vcov - VCOV values from vcov.bvar. • logLik - the log-likelihood from logLik. See Also bvar; predict.bvar; irf.bvar Examples # Access a subset of the fred_qd dataset data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")] # Transform it to be stationary data <- fred_transform(data, codes = c(5, 5, 1), lag = 4) # Estimate a BVAR using one lag, default settings and very few draws x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE) summary(x)
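The Value section of bvar above notes that the full posterior is stored in the fitted object (beta, sigma, hyper) and that methods such as coef, predict and irf summarise it by quantiles. The following is a minimal sketch of working with those raw draws directly, assuming the BVAR package is attached and reusing the fred_qd example data from the help pages above; the assumption that the first margin of the beta array indexes posterior draws is made for illustration only.

```r
library("BVAR")

# Estimate a small model as in the examples above
data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")]
data <- fred_transform(data, codes = c(5, 5, 1), lag = 4)
x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE)

# Documented elements holding the raw posterior draws
dim(x$beta)   # draws from the posterior of the VAR coefficients
dim(x$sigma)  # draws from the posterior of the variance-covariance matrix
str(x$hyper)  # draws of the hierarchically treated hyperparameters

# A custom posterior summary computed straight from the draws, e.g. a 68%
# credible interval for one coefficient (assumes draws are on the first margin)
quantile(x$beta[, 1, 1], probs = c(0.16, 0.5, 0.84))

# Compare with the packaged quantile summaries
coef(x, conf_bands = 0.16)
```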
github.com/owenthereal/upterm
go
Go
README
---
### Upterm
[Upterm](https://github.com/owenthereal/upterm) is an open-source tool enabling developers to share terminal sessions securely over the web. It’s perfect for remote pair programming, accessing computers behind NATs/firewalls, remote debugging, and more. See this [blog post](https://owenou.com/upterm) for an in-depth description of Upterm.
#### 🎥 Quick Demo
[![demo](https://raw.githubusercontent.com/owenthereal/upterm/gh-pages/demo.gif)](https://asciinema.org/a/efeKPxxzKi3pkyu9LWs1yqdbB)
#### 🚀 Getting Started
#### Installation
##### Mac
```
brew install owenthereal/upterm/upterm
```
##### Standalone
`upterm` can be easily installed as an executable. Download the latest [compiled binaries](https://github.com/owenthereal/upterm/releases) and put them in your executable path.
##### From source
```
git clone [email protected]:owenthereal/upterm.git
cd upterm
go install ./cmd/upterm/...
```
#### 🔧 Basic Usage
1. Host starts a terminal session:
```
upterm host
```
2. Host retrieves and shares the SSH connection string:
```
upterm session current
```
3. Client connects using the shared string:
```
ssh [email protected]
```
#### 📘 Quick Reference
Dive into more commands and advanced usage in the [documentation](https://github.com/owenthereal/upterm/blob/v0.13.0/docs/upterm.md). Below are some notable highlights:
##### Command Execution
Host a session with any desired command:
```
upterm host -- docker run --rm -ti ubuntu bash
```
##### Access Control
Host a session with specified client public key(s) authorized to connect:
```
upterm host --authorized-key PATH_TO_PUBLIC_KEY
```
Authorize specified GitHub, GitLab, or SourceHut users with their corresponding public keys:
```
upterm host --github-user username
upterm host --gitlab-user username
upterm host --srht-user username
```
##### Force command
Host a session initiating `tmux new -t pair-programming`, while ensuring clients join with `tmux attach -t pair-programming`. This mirrors functionality provided by tmate:
```
upterm host --force-command 'tmux attach -t pair-programming' -- tmux new -t pair-programming
```
##### WebSocket Connection
In scenarios where your host restricts SSH transport, establish a connection to `uptermd.upterm.dev` (or your self-hosted server) via WebSocket:
```
upterm host --server wss://uptermd.upterm.dev -- bash
```
Clients can connect to the host session via WebSocket as well:
```
ssh -o ProxyCommand='upterm proxy wss://[email protected]' [email protected]:443
```
#### 💡 Tips
##### Resolving Tmux Session Display Issue
**Issue**: The command `upterm session current` does not display the current session when used within Tmux.
**Cause**: This occurs because `upterm session current` requires the `UPTERM_ADMIN_SOCKET` environment variable, which is set in the specified command. Tmux, however, does not carry over environment variables not on its default list to any Tmux session unless instructed to do so ([Reference](http://man.openbsd.org/i386/tmux.1#GLOBAL_AND_SESSION_ENVIRONMENT)).
**Solution**: To rectify this, add the following line to your `~/.tmux.conf`:
```
set-option -ga update-environment " UPTERM_ADMIN_SOCKET"
```
##### Identifying Upterm Session
**Issue**: It might be unclear whether your shell command is running in an upterm session, especially with common shell commands like `bash` or `zsh`.
**Solution**: To provide a clear indication, amend your `~/.bashrc` or `~/.zshrc` with the following line.
This decorates your prompt with an emoji whenever the shell command is running in an upterm session:
```
export PS1="$([[ ! -z "${UPTERM_ADMIN_SOCKET}" ]] && echo -e '\xF0\x9F\x86\x99 ')$PS1" # Add an emoji to the prompt if `UPTERM_ADMIN_SOCKET` exists
```
#### ⚙️ How it works
Upterm starts an SSH server (a.k.a. `sshd`) on the host machine and sets up a reverse SSH tunnel to an [Upterm server](https://github.com/owenthereal/upterm/tree/master/cmd/uptermd) (a.k.a. `uptermd`). Clients connect to a terminal session over the public internet via `uptermd` using `ssh` or `ssh` over WebSocket.
![upterm flowchart](https://raw.githubusercontent.com/owenthereal/upterm/gh-pages/upterm-flowchart.svg?sanitize=true)
#### 🛠️ Deployment
##### Kubernetes
You can deploy uptermd to a Kubernetes cluster. Install it with [helm](https://helm.sh):
```
helm repo add upterm https://upterm.dev
helm repo update
helm install uptermd upterm/uptermd
```
##### Heroku
The cheapest way to deploy a worry-free [Upterm server](https://github.com/owenthereal/upterm/tree/master/cmd/uptermd) (a.k.a. `uptermd`) is to use [Heroku](https://heroku.com). Heroku offers [free Dyno hours](https://www.heroku.com/pricing) which should be sufficient for most casual uses.
You can deploy with one click of the following button:
[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy)
You can also automate the deployment with [Heroku Terraform](https://devcenter.heroku.com/articles/using-terraform-with-heroku). The Heroku Terraform scripts are in the [terraform/heroku folder](https://github.com/owenthereal/upterm/blob/v0.13.0/terraform/heroku). A [utility script](https://github.com/owenthereal/upterm/blob/v0.13.0/bin/heroku-install) is provided for your convenience to automate everything:
```
git clone https://github.com/owenthereal/upterm
cd upterm
```
Provision uptermd in Heroku Common Runtime. Follow instructions.
```
bin/heroku-install
```
Provision uptermd in Heroku Private Spaces. Follow instructions.
```
TF_VAR_heroku_region=REGION TF_VAR_heroku_space=SPACE_NAME TF_VAR_heroku_team=TEAM_NAME bin/heroku-install
```
You **must** use WebSocket as the protocol for a Heroku-deployed Uptermd server because the platform only supports HTTP/HTTPS routing. This is how you host a session and join a session:
Use the Heroku-deployed Uptermd server via WebSocket
```
upterm host --server wss://YOUR_HEROKU_APP_URL -- YOUR_COMMAND
```
A client connects to the host session via WebSocket
```
ssh -o ProxyCommand='upterm proxy wss://TOKEN@YOUR_HEROKU_APP_URL' TOKEN@YOUR_HEROKU_APP_URL:443
```
##### Digital Ocean
There is a utility script that makes provisioning [Digital Ocean Kubernetes](https://www.digitalocean.com/products/kubernetes) and an Upterm server easier:
```
TF_VAR_do_token=$DO_PAT \
TF_VAR_uptermd_host=uptermd.upterm.dev \
TF_VAR_uptermd_acme_email=YOUR_EMAIL \
TF_VAR_uptermd_helm_repo=http://localhost:8080 \
TF_VAR_uptermd_host_keys_dir=PATH_TO_HOST_KEYS \
bin/do-install
```
##### Systemd
A hardened systemd service is provided in `systemd/uptermd.service`. You can use it to easily run a secured `uptermd` on your machine:
```
cp systemd/uptermd.service /etc/systemd/system/uptermd.service
systemctl daemon-reload
systemctl start uptermd
```
#### ⚖️ Comparison with Prior Art
Upterm stands as a modern alternative to [Tmate](https://tmate.io). Tmate originates as a fork from an older iteration of Tmux, extending terminal sharing capabilities atop Tmux 2.x.
However, Tmate has no plans to align with the latest Tmux updates, compelling Tmate and Tmux users to manage two separate configurations; for instance, they need to [bind identical keys twice, conditionally](https://github.com/tmate-io/tmate/issues/108).
On the flip side, Upterm is architected from the ground up as an independent solution, not a fork. It connects the input and output of any shell command between a host and its clients, going beyond just `tmux`. This paves the way for securely sharing terminal sessions using containers.
Written in Go, Upterm is more hack-friendly than Tmate, which is written in C, like Tmux. Because the Upterm CLI and server (`uptermd`) compile into a single binary, you can quickly [deploy your pairing server](#readme-hammer_and_wrench-deployment) to any cloud environment without extra dependencies.
#### License
[Apache 2.0](https://github.com/owenthereal/upterm/raw/master/LICENSE)
GeneralizedHyperbolic
cran
R
Package ‘GeneralizedHyperbolic’ October 12, 2022 Version 0.8-4 Date 2018-05-15 Title The Generalized Hyperbolic Distribution Author <NAME> <<EMAIL>> Maintainer <NAME> <<EMAIL>> Depends R (>= 3.0.1) Imports DistributionUtils, MASS Suggests VarianceGamma, actuar, SkewHyperbolic, RUnit Encoding latin1 Description Functions for the hyperbolic and related distributions. Density, distribution and quantile functions and random number generation are provided for the hyperbolic distribution, the generalized hyperbolic distribution, the generalized inverse Gaussian distribution and the skew-Laplace distribution. Additional functionality is provided for the hyperbolic distribution, normal inverse Gaussian distribution and generalized inverse Gaussian distribution, including fitting of these distributions to data. Linear models with hyperbolic errors may be fitted using hyperblmFit. License GPL (>= 2) URL https://r-forge.r-project.org/projects/rmetrics/ NeedsCompilation no Repository CRAN Date/Publication 2018-05-15 14:38:15 UTC R topics documented: ArkansasRive... 3 Functions for Moment... 4 Generalized Inverse Gaussia... 5 GeneralizedHyperboli... 8 GeneralizedHyperbolicDistributio... 10 GeneralizedHyperbolicPlot... 13 ghypCalcRang... 15 ghypChangePar... 16 ghypCheckPar... 18 ghypMo... 19 ghypPara... 21 ghypScal... 22 gigCalcRang... 23 gigChangePar... 25 gigCheckPar... 26 gigFi... 27 gigFitStar... 30 gigHessia... 32 gigMo... 33 gigPara... 36 GIGPlot... 37 hyperbCalcRang... 38 hyperbChangePar... 40 hyperbCvMTes... 41 hyperbFi... 43 hyperbFitStar... 46 hyperbHessia... 48 hyperbl... 49 Hyperboli... 54 hyperbPara... 57 HyperbPlot... 58 hyperbWSqTabl... 59 mamqua... 60 momRecursio... 61 nervePuls... 62 NI... 63 nigCalcRang... 66 nigFi... 68 nigFitStar... 71 nigHessia... 73 nigPara... 74 nigPlot... 75 plotShapeTriangl... 77 resistor... 78 SandP50... 79 SkewLaplac... 79 SkewLaplacePlot... 81 Specific Generalized Hyperbolic Moments and Mode . . . . . . . . . . . . . . . . . . . 83 Specific Generalized Inverse Gaussian Moments and Mode . . . . . . . . . . . . . . . . 84 Specific Hyperbolic Distribution Moments and Mode . . . . . . . . . . . . . . . . . . . 86 Specific Normal Inverse Gaussian Distribution Moments and Mode . . . . . . . . . . . 87 summary.gigFi... 89 summary.hyperbFi... 90 summary.hyperbl... 92 summary.nigFi... 94 traffi... 96 ArkansasRiver Soil Electrical Conductivity Description Electrical conductivity of soil paste extracts from the Lower Arkansas River Valley, at sites upstream and downstream of the John Martin Reservoir. Usage data(ArkansasRiver) Format The format is: List of 2 $ upstream : num [1:823] 2.37 3.53 3.06 3.35 3.07 ... $ downstream: num [1:435] 8.75 6.59 5.09 6.03 5.64 ... Details Electrical conductivity is a measure of soil water salinity. Source This data set was supplied by <NAME> (<<EMAIL>>). References <NAME> and <NAME> (2011) Regional assessment of soil water salinity across an extensively irrigated river valley. 
Journal of Irrigation and Drainage Engineering, doi:10.1061/(ASCE)IR.1943- 4774.0000411 Examples data(ArkansasRiver) lapply(ArkansasRiver, summary) upstream <- ArkansasRiver[[1]] downstream <- ArkansasRiver[[2]] ## Fit normal inverse Gaussian ## Hyperbolic can also be fitted but fit is not as good fitUpstream <- nigFit(upstream) summary(fitUpstream) par(mfrow = c(2,2)) plot(fitUpstream) fitDownstream <- nigFit(downstream) summary(fitDownstream) plot(fitDownstream) par(mfrow = c(1,1)) ## Combined plot to compare ## Reproduces Figure 3 from Morway and Gates (2011) hist(upstream, col = "grey", xlab = "", ylab = "", cex.axis = 1.25, main = "", breaks = seq(0,20, by = 1), xlim = c(0,15), las = 1, ylim = c(0,0.5), freq = FALSE) param <- coef(fitUpstream) nigDens <- function(x) dnig(x, param = param) curve(nigDens, 0, 15, n = 201, add = TRUE, ylab = NULL, col = "red", lty = 1, lwd = 1.7) hist(downstream, add = TRUE, col = "black", angle = 45, density = 15, breaks = seq(0,20, by = 1), freq = FALSE) param <- coef(fitDownstream) nigDens <- function(x) dnig(x, param = param) curve(nigDens, 0, 15, n = 201, add = TRUE, ylab = NULL, col = "red", lty = 1, lwd = 1.7) mtext(expression(EC[e]), side = 1, line = 3, cex = 1.25) mtext("Frequency", side = 2, line = 3, cex = 1.25) legend(x = 7.5, y = 0.250, c("Upstream Region","Downstream Region"), col = c("black","black"), density = c(NA,25), fill = c("grey","black"), angle = c(NA,45), cex = 1.25, bty = "n", xpd = TRUE) Functions for Moments Functions for Calculating Moments Description Functions used to calculate the mean, variance, skewness and kurtosis of a hyperbolic distribution. Not expected to be called directly by users. Usage RLambda(zeta, lambda = 1) SLambda(zeta, lambda = 1) MLambda(zeta, lambda = 1) WLambda1(zeta, lambda = 1) WLambda2(zeta, lambda = 1) WLambda3(zeta, lambda = 1) WLambda4(zeta, lambda = 1) gammaLambda1(hyperbPi, zeta, lambda = 1) gammaLambda1(hyperbPi, zeta, lambda = 1) Arguments hyperbPi Value of the parameter π of the hyperbolic distribution. zeta Value of the parameter ζ of the hyperbolic distribution. lambda Parameter related to order of Bessel functions. Value The functions RLambda and SLambda are used in the calculation of the mean and variance. They are functions of the Bessel functions of the third kind, implemented in R as besselK. The other functions are used in calculation of higher moments. See <NAME>. and Blæsild, P. (1981) for details of the calculations. The parameterization of the hyperbolic distribution used for this and other components of the HyperbolicDist package is the (π, ζ) one. See hyperbChangePars to transfer between param- eterizations. Author(s) <NAME> <<EMAIL>>, <NAME>, <NAME> References Barndorff-Nielsen, O. and Blæsild, P (1981). Hyperbolic distributions and ramifications: contribu- tions to theory and application. In Statistical Distributions in Scientific Work, eds., <NAME>., Patil, <NAME>., and <NAME>., Vol. 4, pp. 19–44. Dordrecht: Reidel. Barndorff-Nielsen, O. and Blæsild, P (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and Read, <NAME>., Vol. 3, pp. 700–707. New York: Wiley. See Also dhyperb, hyperbMean,hyperbChangePars, besselK Generalized Inverse Gaussian Generalized Inverse Gaussian Distribution Description Density function, cumulative distribution function, quantile function and random number genera- tion for the generalized inverse Gaussian distribution with parameter vector param. 
Utility routines are included for the derivative of the density function and to find suitable break points for use in determining the distribution function.
Usage
dgig(x, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda), KOmega = NULL)
pgig(q, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda), lower.tail = TRUE, ibfTol = .Machine$double.eps^(0.85), nmax = 200)
qgig(p, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda), lower.tail = TRUE, method = c("spline", "integrate"), nInterpol = 501, uniTol = 10^(-7), ibfTol = .Machine$double.eps^(0.85), nmax = 200, ...)
rgig(n, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda))
rgig1(n, chi = 1, psi = 1, param = c(chi, psi))
ddgig(x, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda), KOmega = NULL)
Arguments
x, q Vector of quantiles.
p Vector of probabilities.
n Number of observations to be generated.
chi A shape parameter that by default holds a value of 1.
psi Another shape parameter that is set to 1 by default.
lambda Shape parameter of the GIG distribution. Common to all forms of parameterization. By default this is set to 1.
param Parameter vector taking the form c(chi, psi, lambda) for rgig, or c(chi, psi) for rgig1.
method Character. If "spline" quantiles are found from a spline approximation to the distribution function. If "integrate", the distribution function used is always obtained by integration.
lower.tail Logical. If TRUE, probabilities are P(X ≤ x), otherwise they are P(X > x).
KOmega Sets the value of the Bessel function in the density or derivative of the density. See Details.
ibfTol Value of tolerance to be passed to incompleteBesselK by pgig.
nmax Value of maximum order of the approximating series to be passed to incompleteBesselK by pgig.
nInterpol The number of points used in qgig for cubic spline interpolation (see splinefun) of the distribution function.
uniTol Value of tol in calls to uniroot. See uniroot.
... Passes arguments to uniroot. See Details.
Details
The generalized inverse Gaussian distribution has density

f(x) = \frac{(\psi/\chi)^{\lambda/2}}{2 K_\lambda(\sqrt{\psi\chi})} \, x^{\lambda - 1} \exp\!\left(-\tfrac{1}{2}\left(\frac{\chi}{x} + \psi x\right)\right), \qquad x > 0,

where Kλ() is the modified Bessel function of the third kind with order λ. The generalized inverse Gaussian distribution is investigated in detail in Jörgensen (1982).
Use gigChangePars to convert from the (δ, γ), (α, β), or (ω, η) parameterizations to the (χ, ψ) parameterization used above.
pgig calls the function incompleteBesselK from the package DistributionUtils to integrate the density function dgig. This can be expected to be accurate to about 13 decimal places on a 32-bit computer, and is often more accurate. The algorithm used is due to Slevinsky and Safouhi (2010).
Calculation of quantiles using qgig permits the use of two different methods. Both methods use uniroot to find the value of x for which a given q is equal to F(x), where F denotes the cumulative distribution function. The difference is in how the numerical approximation to F is obtained. The obvious and more accurate method is to calculate the value of F(x) whenever it is required, using a call to pgig. This is what is done if the method is specified as "integrate". It is clear that the time required for this approach is roughly linear in the number of quantiles being calculated. A Q-Q plot of a large data set will clearly take some time. The alternative (and default) method is that for the major part of the distribution a spline approximation to F(x) is calculated and quantiles found using uniroot with this approximation.
For extreme values (for which the tail probability is less than 10−7 ), the integration method is still used even when the method specifed is "spline". If accurate probabilities or quantiles are required, tolerances (intTol and uniTol) should be set to small values, say 10−10 or 10−12 with method = "integrate". Generally then accuracy might be expected to be at least 10−9 . If the default values of the functions are used, accuracy can only be expected to be around 10−4 . Note that on 32-bit systems .Machine$double.eps^0.25 = 0.0001220703 is a typical value. Generalized inverse Gaussian observations are obtained via the algorithm of Dagpunar (1989). Value dgig gives the density, pgig gives the distribution function, qgig gives the quantile function, and rgig generates random variates. rgig1 generates random variates in the special case where λ = 1. ddgig gives the derivative of dgig. Author(s) <NAME> <<EMAIL>>, <NAME>, and <NAME>. References <NAME>. (1989). An easily implemented generalised inverse Gaussian generator. Commun. Statist. -Simula., 18, 703–710. <NAME>. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Lec- ture Notes in Statistics, Vol. 9, Springer-Verlag, New York. Slevinsky, <NAME>., and <NAME> (2010) A recursive algorithm for the G transformation and accurate computation of incomplete Bessel functions. Appl. Numer. Math., In press. See Also safeIntegrate, integrate for its shortfalls, splinefun, uniroot and gigChangePars for chang- ing parameters to the (χ, ψ) parameterization, dghyp for the generalized hyperbolic distribution. Examples param <- c(2, 3, 1) gigRange <- gigCalcRange(param = param, tol = 10^(-3)) par(mfrow = c(1, 2)) curve(dgig(x, param = param), from = gigRange[1], to = gigRange[2], n = 1000) title("Density of the\n Generalized Inverse Gaussian") curve(pgig(x, param = param), from = gigRange[1], to = gigRange[2], n = 1000) title("Distribution Function of the\n Generalized Inverse Gaussian") dataVector <- rgig(500, param = param) curve(dgig(x, param = param), range(dataVector)[1], range(dataVector)[2], n = 500) hist(dataVector, freq = FALSE, add = TRUE) title("Density and Histogram\n of the Generalized Inverse Gaussian") DistributionUtils::logHist(dataVector, main = "Log-Density and Log-Histogram\n of the Generalized Inverse Gaussian") curve(log(dgig(x, param = param)), add = TRUE, range(dataVector)[1], range(dataVector)[2], n = 500) par(mfrow = c(2, 1)) curve(dgig(x, param = param), from = gigRange[1], to = gigRange[2], n = 1000) title("Density of the\n Generalized Inverse Gaussian") curve(ddgig(x, param = param), from = gigRange[1], to = gigRange[2], n = 1000) title("Derivative of the Density\n of the Generalized Inverse Gaussian") GeneralizedHyperbolic The Package ‘GeneralizedHyperbolic’: Summary Information Description This package provides a collection of functions for working with the generalized hyperbolic and related distributions. For the hyperbolic distribution functions are provided for the density function, distribution function, quantiles, random number generation and fitting the hyperbolic distribution to data (hyperbFit). The function hyperbChangePars will interchange parameter values between different parameter- izations. The mean, variance, skewness, kurtosis and mode of a given hyperbolic distribution are given by hyperbMean, hyperbVar, hyperbSkew, hyperbKurt, and hyperbMode respectively. For assessing the fit of the hyperbolic distribution to a set of data, the log-histogram is useful. 
See logHist from package DistributionUtils. Q-Q and P-P plots are also provided for assessing the fit of a hyperbolic distribution. A Crämer-von~Mises test of the goodness of fit of data to a hyperbolic distribution is given by hyperbCvMTest. S3 print, plot and summary methods are provided for the output of hyperbFit. For the generalized hyperbolic distribution functions are provided for the density function, dis- tribution function, quantiles, and for random number generation. The function ghypChangePars will interchange parameter values between different parameterizations. The mean, variance, and mode of a given generalized hyperbolic distribution are given by ghypMean, ghypVar, ghypSkew, ghypKurt, and ghypMode respectively. Q-Q and P-P plots are also provided for assessing the fit of a generalized hyperbolic distribution. For the generalized inverse Gaussian distribution functions are provided for the density function, distribution function, quantiles, and for random number generation. The function gigChangePars will interchange parameter values between different parameterizations. The mean, variance, skew- ness, kurtosis and mode of a given generalized inverse Gaussian distribution are given by gigMean, gigVar, gigSkew, gigKurt, and gigMode respectively. Q-Q and P-P plots are also provided for assessing the fit of a generalized inverse Gaussian distribution. For the skew-Laplace distribution functions are provided for the density function, distribution func- tion, quantiles, and for random number generation. Q-Q and P-P plots are also provided for assess- ing the fit of a skew-Laplace distribution. Acknowledgements A number of students have worked on the package: <NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME>. Thanks to <NAME> and <NAME> for their willingness to answer my questions, and to all the core group for the development of R. Special thanks also to <NAME> without whose advice, this package would be far inferior. LICENCE This package and its documentation are usable under the terms of the "GNU General Public Li- cense", a copy of which is distributed with the package. Author(s) <NAME> <<EMAIL>> References Barndorff-Nielsen, O. (1977) Exponentially decreasing distributions for the logarithm of particle size, Proc. Roy. Soc. Lond., A353, 401–419. Barndorff-Nielsen, O. and <NAME> (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley. <NAME>., <NAME>. and <NAME>. (1992) Statistics of particle size data. Appl. Statist., 41, 127–146. <NAME>. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Lec- ture Notes in Statistics, Vol. 9, Springer-Verlag, New York. <NAME>. (1999) The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg. GeneralizedHyperbolicDistribution Generalized Hyperbolic Distribution Description Density function, distribution function, quantiles and random number generation for the generalized hyperbolic distribution, with parameters α (tail), β (skewness), δ (peakness), µ (location) and λ (shape). Usage dghyp(x, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda)) pghyp(q, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda), lower.tail = TRUE, subdivisions = 100, intTol = .Machine$double.eps^0.25, valueOnly = TRUE, ...) 
qghyp(p, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda), lower.tail = TRUE, method = c("spline", "integrate"), nInterpol = 501, uniTol = .Machine$double.eps^0.25, subdivisions = 100, intTol = uniTol, ...)
rghyp(n, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda))
ddghyp(x, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda))
Arguments
x, q Vector of quantiles.
p Vector of probabilities.
n Number of random variates to be generated.
mu Location parameter µ, default is 0.
delta Scale parameter δ, default is 1.
alpha Tail parameter α, default is 1.
beta Skewness parameter β, default is 0.
lambda Shape parameter λ, default is 1.
param Specifying the parameters as a vector of the form c(mu, delta, alpha, beta, lambda).
method Character. If "spline" quantiles are found from a spline approximation to the distribution function. If "integrate", the distribution function used is always obtained by integration.
lower.tail Logical. If TRUE, probabilities are P(X ≤ x), otherwise they are P(X > x).
subdivisions The maximum number of subdivisions used to integrate the density and determine the accuracy of the distribution function calculation.
intTol Value of rel.tol and hence abs.tol in calls to integrate. See integrate.
valueOnly Logical. If valueOnly = TRUE calls to pghyp only return the value obtained for the integral. If valueOnly = FALSE an estimate of the accuracy of the numerical integration is also returned.
nInterpol Number of points used in qghyp for cubic spline interpolation of the distribution function.
uniTol Value of tol in calls to uniroot. See uniroot.
... Passes additional arguments to integrate in pghyp and qghyp, and to uniroot in qghyp.
Details
Users may either specify the values of the parameters individually or as a vector. If both forms are specified, then the values specified by the vector param will overwrite the other ones. In addition the parameter values are examined by calling the function ghypCheckPars to see if they are valid.
The density function is

f(x) = c(\lambda, \alpha, \beta, \delta) \, \frac{K_{\lambda - 1/2}\!\left(\alpha\sqrt{\delta^2 + (x - \mu)^2}\right)}{\left(\sqrt{\delta^2 + (x - \mu)^2}\,/\,\alpha\right)^{1/2 - \lambda}} \, e^{\beta(x - \mu)}

where Kν() is the modified Bessel function of the third kind with order ν, and

c(\lambda, \alpha, \beta, \delta) = \frac{\left(\sqrt{\alpha^2 - \beta^2}\,/\,\delta\right)^{\lambda}}{\sqrt{2\pi}\, K_\lambda\!\left(\delta\sqrt{\alpha^2 - \beta^2}\right)}

Use ghypChangePars to convert from the (ρ, ζ), (ξ, χ), (ᾱ, β̄), or (π, ζ) parameterizations to the (α, β) parameterization used above.
pghyp uses the function integrate to numerically integrate the density function. The integration is from -Inf to x if x is to the left of the mode, and from x to Inf if x is to the right of the mode. The probability calculated this way is subtracted from 1 if required. Integration in this manner appears to make calculation of the quantile function more stable in extreme cases.
Calculation of quantiles using qghyp permits the use of two different methods. Both methods use uniroot to find the value of x for which a given q is equal to F(x), where F denotes the cumulative distribution function. The difference is in how the numerical approximation to F is obtained. The obvious and more accurate method is to calculate the value of F(x) whenever it is required, using a call to pghyp. This is what is done if the method is specified as "integrate". It is clear that the time required for this approach is roughly linear in the number of quantiles being calculated. A Q-Q plot of a large data set will clearly take some time.
The alternative (and default) method is that for the major part of the distribution a spline approximation to F (x) is calculated and quantiles found using uniroot with this approximation. For extreme values (for which the tail probability is less than 10−7 ), the integration method is still used even when the method specifed is "spline". If accurate probabilities or quantiles are required, tolerances (intTol and uniTol) should be set to small values, say 10−10 or 10−12 with method = "integrate". Generally then accuracy might be expected to be at least 10−9 . If the default values of the functions are used, accuracy can only be expected to be around 10−4 . Note that on 32-bit systems .Machine$double.eps^0.25 = 0.0001220703 is a typical value. Value dghyp gives the density function, pghyp gives the distribution function, qghyp gives the quantile function and rghyp generates random variates. An estimate of the accuracy of the approximation to the distribution function can be found by setting valueOnly = FALSE in the call to pghyp which returns a list with components value and error. ddghyp gives the derivative of dghyp. Author(s) <NAME> <<EMAIL>> References <NAME>., and <NAME> (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>., and <NAME>., Vol. 3, pp. 700–707.-New York: Wiley. <NAME>., and Sörenson,M. (2003). Hyperbolic processes in finance. In Handbook of Heavy Tailed Distributions in Finance,ed., <NAME>. pp. 212–248. Elsevier Science B.~V. <NAME>. (1989). An easily implemented generalised inverse Gaussian generator Commun. Statist.-Simula., 18, 703–710. <NAME>. (1999) The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg. See Also dhyperb for the hyperbolic distribution, dgig for the generalized inverse Gaussian distribution, safeIntegrate, integrate for its shortfalls, also splinefun, uniroot and ghypChangePars for changing parameters to the (α, β) parameterization. Examples param <- c(0, 1, 3, 1, 1/2) ghypRange <- ghypCalcRange(param = param, tol = 10^(-3)) par(mfrow = c(1, 2)) ### curves of density and distribution curve(dghyp(x, param = param), ghypRange[1], ghypRange[2], n = 1000) title("Density of the \n Generalized Hyperbolic Distribution") curve(pghyp(x, param = param), ghypRange[1], ghypRange[2], n = 500) title("Distribution Function of the \n Generalized Hyperbolic Distribution") ### curves of density and log density par(mfrow = c(1, 2)) data <- rghyp(1000, param = param) curve(dghyp(x, param = param), range(data)[1], range(data)[2], n = 1000, col = 2) hist(data, freq = FALSE, add = TRUE) title("Density and Histogram of the\n Generalized Hyperbolic Distribution") DistributionUtils::logHist(data, main = "Log-Density and Log-Histogram of\n the Generalized Hyperbolic Distribution") curve(log(dghyp(x, param = param)), range(data)[1], range(data)[2], n = 500, add = TRUE, col = 2) ### plots of density and derivative par(mfrow = c(2, 1)) curve(dghyp(x, param = param), ghypRange[1], ghypRange[2], n = 1000) title("Density of the\n Generalized Hyperbolic Distribution") curve(ddghyp(x, param = param), ghypRange[1], ghypRange[2], n = 1000) title("Derivative of the Density of the\n Generalized Hyperbolic Distribution") GeneralizedHyperbolicPlots Generalized Hyperbolic Quantile-Quantile and Percent-Percent Plots Description qqghyp produces a generalized hyperbolic Q-Q plot of the values in y. 
ppghyp produces a generalized hyperbolic P-P (percent-percent) or probability plot of the values in y. Graphical parameters may be given as arguments to qqghyp, and ppghyp. Usage qqghyp(y, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda), main = "Generalized Hyperbolic Q-Q Plot", xlab = "Theoretical Quantiles", ylab = "Sample Quantiles", plot.it = TRUE, line = TRUE, ...) ppghyp(y, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda), main = "Generalized Hyperbolic P-P Plot", xlab = "Uniform Quantiles", ylab = "Probability-integral-transformed Data", plot.it = TRUE, line = TRUE, ...) Arguments y The data sample. mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. lambda λ is the shape parameter and dictates the shape that the distribution shall take. Default value is 1. param Parameters of the generalized hyperbolic distribution. xlab, ylab, main Plot labels. plot.it Logical. Should the result be plotted? line Add line through origin with unit slope. ... Further graphical parameters. Value For qqghyp and ppghyp, a list with components: x The x coordinates of the points that are to be plotted. y The y coordinates of the points that are to be plotted. References Wilk, <NAME>. and <NAME>. (1968) Probability plotting methods for the analysis of data. Biometrika. 55, 1–17. See Also ppoints, dghyp. Examples par(mfrow = c(1, 2)) y <- rghyp(200, param = c(2, 2, 2, 1, 2)) qqghyp(y, param = c(2, 2, 2, 1, 2), line = FALSE) abline(0, 1, col = 2) ppghyp(y, param = c(2, 2, 2, 1, 2)) ghypCalcRange Range of a Generalized Hyperbolic Distribution Description Given the parameter vector Theta of a generalized hyperbolic distribution, this function determines the range outside of which the density function is negligible, to a specified tolerance. The parameter- ization used is the (α, β) one (see dghyp). To use another parameterization, use ghypChangePars. Usage ghypCalcRange(mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda), tol = 10^(-5), density = TRUE, ...) Arguments mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. lambda λ is the shape parameter and dictates the shape that the distribution shall take. Default value is 1. param Value of parameter vector specifying the generalized hyperbolic distribution. This takes the form c(mu, delta,alpha, beta, lambda). tol Tolerance. density Logical. If TRUE, the bounds are for the density function. If FALSE, they should be for the probability distribution, but this has not yet been implemented. ... Extra arguments for calls to uniroot. Details The particular generalized hyperbolic distribution being considered is specified by the value of the parameter value param. If density = TRUE, the function gives a range, outside of which the density is less than the given tolerance. Useful for plotting the density. Also used in determining break points for the separate sections over which numerical integration is used to determine the distribution function. The points are found by using uniroot on the density function. 
If density = FALSE, the function returns the message: "Distribution function bounds not yet implemented". Value A two-component vector giving the lower and upper ends of the range. Author(s) <NAME> <<EMAIL>> References <NAME> <NAME> (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley. See Also dghyp, ghypChangePars Examples param <- c(0, 1, 5, 3, 1) maxDens <- dghyp(ghypMode(param = param), param = param) ghypRange <- ghypCalcRange(param = param, tol = 10^(-3) * maxDens) ghypRange curve(dghyp(x, param = param), ghypRange[1], ghypRange[2]) ## Not run: ghypCalcRange(param = param, tol = 10^(-3), density = FALSE) ghypChangePars Change Parameterizations of the Generalized Hyperbolic Distribution Description This function interchanges between the following 5 parameterizations of the generalized hyperbolic distribution: 1. µ, δ, α, β, λ 2. µ, δ, ρ, ζ, λ 3. µ, δ, ξ, χ, λ 4. µ, δ, ᾱ, β̄, λ 5. µ, δ, π, ζ, λ The first four are the parameterizations given in Prause (1999). The final parameterization has proven useful in fitting. Usage ghypChangePars(from, to, param, noNames = FALSE) Arguments from The set of parameters to change from. to The set of parameters to change to. param "from" parameter vector consisting of 5 numerical elements. noNames Logical. When TRUE, suppresses the parameter names in the output. Details In the 5 parameterizations, the following must be positive: 1. α, δ 2. ζ, δ 3. ξ, δ 4. ᾱ, δ 5. ζ, δ Furthermore, note that in the first parameterization α must be greater than the absolute value of β; in the third parameterization, ξ must be less than one, and the absolute value of χ must be less than ξ; and in the fourth parameterization, ᾱ must be greater than the absolute value of β̄. Value A numerical vector of length 5 representing param in the to parameterization. Author(s) <NAME> <<EMAIL>>, <NAME>, <NAME> References Barndorff-Nielsen, O. and <NAME>. (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley. Prause, K. (1999) The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg. See Also dghyp Examples param1 <- c(0, 3, 2, 1, 2) # Parameterization 1 param2 <- ghypChangePars(1, 2, param1) # Convert to parameterization 2 param2 # Parameterization 2 ghypChangePars(2, 1, param2) # Back to parameterization 1 ghypCheckPars Check Parameters of the Generalized Hyperbolic Distribution Description Given a putative set of parameters for the generalized hyperbolic distribution, the functions checks if they are in the correct range, and if they correspond to the boundary cases. Usage ghypCheckPars(param) Arguments param Numeric. Putative parameter values for a generalized hyperblic distribution. Details The vector param takes the form c(mu, delta, alpha, beta, lambda). If alpha is negative, an error is returned. If lambda is 0 then the absolute value of beta must be less than alpha and delta must be greater than zero. If either of these conditions are false, than a error is returned. If lambda is greater than 0 the absolute value of beta must be less than alpha. delta must also be non-negative. When either one of these is not true, an error is returned. If lambda is less than 0 then the absolute value of beta must be equal to alpha. 
delta must be greater than 0, if both conditions are not true, an error is returned. Value A list with components: case Either "" or "error". errMessage An appropriate error message if an error was found, the empty string "" other- wise. Author(s) <NAME> <<EMAIL>> References Paolella, Marc S. (2007) Intermediate Probability: A Computational Approach, Chichester: Wiley See Also dghyp Examples ghypCheckPars(c(0, 2.5, -0.5, 1, 0)) # error ghypCheckPars(c(0, 2.5, 0.5, 0, 0)) # normal ghypCheckPars(c(0, 1, 1, -1, 0)) # error ghypCheckPars(c(2, 0, 1, 0.5, 0)) # error ghypCheckPars(c(0, 5, 2, 1.5, 0)) # normal ghypCheckPars(c(0, -2.5, -0.5, 1, 1)) # error ghypCheckPars(c(0, -1, 0.5, 1, 1)) # error ghypCheckPars(c(0, 0, -0.5, -1, 1)) # error ghypCheckPars(c(2, 0, 0.5, 0, -1)) # error ghypCheckPars(c(2, 0, 1, 0.5, 1)) # skew laplace ghypCheckPars(c(0, 1, 1, 1, -1)) # skew hyperbolic ghypMom Calculate Moments of the Generalized Hyperbolic Distribution Description Function to calculate raw moments, mu moments, central moments and moments about any other given location for the generalized hyperbolic distribution. Usage ghypMom(order, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda), momType = c("raw", "central", "mu"), about = 0) Arguments order Numeric. The order of the moment to be calculated. Not permitted to be a vector. Must be a positive whole number except for moments about zero. mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. lambda λ is the shape parameter and dictates the shape that the distribution shall take. Default value is 1. param Numeric. The parameter vector specifying the generalized hyperbolic distribu- tion. Of the form c(mu, delta, alpha, beta, lambda) (see dghyp). momType Common types of moments to be calculated, default is "raw". See Details. about Numeric. The point around which the moment is to be calculated. Details Checking whether order is a whole number is carried out using the function is.wholenumber. momType can be either "raw" (moments about zero), "mu" (moments about mu), or "central" (mo- ments about mean). If one of these moment types is specified, then there is no need to specify the about value. For moments about any other location, the about value must be specified. In the case that both momType and about are specified and contradicting, the function will always calculate the moments based on about rather than momType. To calculate moments of the generalized hyperbolic distribution, the function firstly calculates mu moments by formula defined below and then transforms mu moments to central moments or raw moments or moments about any other locations as required by calling momChangeAbout. The mu moments are obtained from the recursion formula given in Scott, Würtz and Tran (2011). Value The moment specified. Author(s) <NAME> <<EMAIL>> References <NAME>., <NAME>., <NAME>. and <NAME>. (2011) Moments of the generalized hyperbolic distribution. Comp. Statistics., 26, 459–476. See Also ghypChangePars and from package DistributionUtils: logHist, is.wholenumber, momChangeAbout, and momIntegrated. Further, ghypMean, ghypVar, ghypSkew, ghypKurt. 
Examples param <- c(1, 2, 2, 1, 2) mu <- param[1] ### mu moments m1 <- ghypMean(param = param) m1 - mu ghypMom(1, param = param, momType = "mu") ## Comparison, using momIntegrated from pkg 'DistributionUtils': momIntegrated <- DistributionUtils :: momIntegrated momIntegrated("ghyp", order = 1, param = param, about = mu) ghypMom(2, param = param, momType = "mu") momIntegrated("ghyp", order = 2, param = param, about = mu) ghypMom(10, param = param, momType = "mu") momIntegrated("ghyp", order = 10, param = param, about = mu) ### raw moments ghypMean(param = param) ghypMom(1, param = param, momType = "raw") momIntegrated("ghyp", order = 1, param = param, about = 0) ghypMom(2, param = param, momType = "raw") momIntegrated("ghyp", order = 2, param = param, about = 0) ghypMom(10, param = param, momType = "raw") momIntegrated("ghyp", order = 10, param = param, about = 0) ### central moments ghypMom(1, param = param, momType = "central") momIntegrated("ghyp", order = 1, param = param, about = m1) ghypVar(param = param) ghypMom(2, param = param, momType = "central") momIntegrated("ghyp", order = 2, param = param, about = m1) ghypMom(10, param = param, momType = "central") momIntegrated("ghyp", order = 10, param = param, about = m1) ghypParam Parameter Sets for the Generalized Hyperbolic Distribution Description These objects store different parameter sets of the generalized hyperbolic distribution as matrices for testing or demonstration purposes. The parameter sets ghypSmallShape and ghypLargeShape have a constant location parameter of µ = 0, and constant scale parameter δ = 1. In ghypSmallParam and ghypLargeParam the values of the location and scale parameters vary. In these parameter sets the location parameter µ = 0 takes values from {0, 1} and {-1, 0, 1, 2} respectively. For the scale parameter δ, values are drawn from {1, 5} and {1, 2, 5, 10} respectively. For the shape parameters α and β the approach is more complex. The values for these shape parameters were chosen by choosing values of ξ and χ which range over the shape triangle, then the function ghypChangePars was applied to convert them to the α, β parameterization. The resulting α, β values were then rounded to three decimal places. See the examples for the values of ξ and χ for the large parameter sets. The values of λ are drawn from {-0.5, 0, 1} in ghypSmallShape and {-1, -0.5, 0, 0.5, 1, 2} in ghypLargeShape. Usage ghypSmallShape ghypLargeShape ghypSmallParam ghypLargeParam Format ghypSmallShape: a 22 by 5 matrix; ghypLargeShape: a 90 by 5 matrix; ghypSmallParam: a 84 by 5 matrix; ghypLargeParam: a 1440 by 5 matrix. Author(s) <NAME> <<EMAIL>> Examples data(ghypParam) plotShapeTriangle() xis <- rep(c(0.1,0.3,0.5,0.7,0.9), 1:5) chis <- c(0,-0.25,0.25,-0.45,0,0.45,-0.65,-0.3,0.3,0.65, -0.85,-0.4,0,0.4,0.85) points(chis, xis, pch = 20, col = "red") ## Testing the accuracy of ghypMean for (i in 1:nrow(ghypSmallParam)) { param <- ghypSmallParam[i, ] x <- rghyp(1000, param = param) sampleMean <- mean(x) funMean <- ghypMean(param = param) difference <- abs(sampleMean - funMean) print(difference) } ghypScale Rescale a generalized hyperbolic distribution Description Given a specific mean and standard deviation will rescale any given generalized hyperbolic dis- tribution to have the same shape but the specified mean and standard deviation. Can be used to standardize a generalized hyperbolic distribution to have mean zero and standard deviation one. 
Usage ghypScale(newMean, newSD, mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda)) Arguments newMean Numeric. The required mean of the rescaled distribution. newSD Numeric. The required standard deviation of the rescaled distribution. mu Numeric. Location parameter µ of the starting distribution, default is0. delta Numeric. Scale parameter δ of the starting distribution, default is 1. alpha Numeric. Tail parameter α of the starting distribution, default is 1. beta Numeric. Skewness parameter β of the starting distribution, default is 0. lambda Numeric. Shape parameter λ of the starting distribution, default is 1. param Numeric. Specifying the parameters of the starting distribution as a vector of the form c(mu,delta,alpha,beta,lambda). Value A numerical vector of length 5 giving the value of the parameters in the rescaled generalized hyper- bolic distribution in the usual (α, β) parameterization. Author(s) <NAME> <<EMAIL>> Examples param <- c(2,10,0.1,0.07,-0.5) # a normal inverse Gaussian ghypMean(param = param) ghypVar(param = param) ## convert to standardized parameters (newParam <- ghypScale(0, 1, param = param)) ghypMean(param = newParam) ghypVar(param = newParam) ## try some other mean and sd (newParam <- ghypScale(1, 1, param = param)) ghypMean(param = newParam) sqrt(ghypVar(param = newParam)) (newParam <- ghypScale(10, 2, param = param)) ghypMean(param = newParam) sqrt(ghypVar(param = newParam)) gigCalcRange Range of a Generalized Inverse Gaussian Distribution Description Given the parameter vector param of a generalized inverse Gaussian distribution, this function de- termines the range outside of which the density function is negligible, to a specified tolerance. The parameterization used is the (χ, ψ) one (see dgig). To use another parameterization, use gigChangePars. Usage gigCalcRange(chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda), tol = 10^(-5), density = TRUE, ...) Arguments chi A shape parameter that by default holds a value of 1. psi Another shape parameter that is set to 1 by default. lambda Shape parameter of the GIG distribution. Common to all forms of parameteri- zation. By default this is set to 1. param Value of parameter vector specifying the generalized inverse Gaussian distribu- tion. tol Tolerance. density Logical. If TRUE, the bounds are for the density function. If FALSE, they should be for the probability distribution, but this has not yet been implemented. ... Extra arguments for calls to uniroot. Details The particular generalized inverse Gaussian distribution being considered is specified by the value of the parameter value param. If density = TRUE, the function gives a range, outside of which the density is less than the given tolerance. Useful for plotting the density. Also used in determining break points for the separate sections over which numerical integration is used to determine the distribution function. The points are found by using uniroot on the density function. If density = FALSE, the function returns the message: "Distribution function bounds not yet implemented". Value A two-component vector giving the lower and upper ends of the range. Author(s) <NAME> <<EMAIL>> References Jörgensen, B. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Lec- ture Notes in Statistics, Vol. 9, Springer-Verlag, New York. 
See Also dgig, gigChangePars Examples param <- c(2.5, 0.5, 5) maxDens <- dgig(gigMode(param = param), param = param) gigRange <- gigCalcRange(param = param, tol = 10^(-3) * maxDens) gigRange curve(dgig(x, param = param), gigRange[1], gigRange[2]) ## Not run: gigCalcRange(param = param, tol = 10^(-3), density = FALSE) gigChangePars Change Parameterizations of the Generalized Inverse Gaussian Dis- tribution Description This function interchanges between the following 4 parameterizations of the generalized inverse Gaussian distribution: 1. (χ, ψ, λ) 2. (δ, γ, λ) 3. (α, β, λ) 4. (ω, η, λ) See Jörgensen (1982) and Dagpunar (1989) Usage gigChangePars(from, to, param, noNames = FALSE) Arguments from The set of parameters to change from. to The set of parameters to change to. param “from” parameter vector consisting of 3 numerical elements. noNames Logical. When TRUE, suppresses the parameter names in the output. Details The range of λ is the whole real line. In each parameterization, the other two parameters must take positive values. Value A numerical vector of length 3 representing param in the “to” parameterization. Author(s) <NAME> <<EMAIL>> References Jörgensen, B. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Lec- ture Notes in Statistics, Vol. 9, Springer-Verlag, New York. <NAME>. (1989). An easily implemented generalised inverse Gaussian generator, Commun. Statist.—Simula., 18, 703–710. See Also dgig Examples param1 <- c(2.5, 0.5, 5) # Parameterisation 1 param2 <- gigChangePars(1, 2, param1) # Convert to parameterization 2 param2 # Parameterization 2 gigChangePars(2, 1, as.numeric(param2)) # Convert back to parameterization 1 gigCheckPars Check Parameters of the Generalized Inverse Gaussian Distribution Description Given a putative set of parameters for the generalized inverse Gaussian distribution, the functions checks if they are in the correct range, and if they correspond to the boundary cases. Usage gigCheckPars(param, ...) Arguments param Numeric. Putative parameter values for a generalized inverse Gaussian distribu- tion. ... Further arguments for calls to all.equal. Details The vector param takes the form c(chi, psi, lambda). If either chi or psi is negative, an error is returned. If chi is 0 (to within tolerance allowed by all.equal) then psi and lambda must be positive or an error is returned. If these conditions are satisfied, the distribution is identified as a gamma distribution. If psi is 0 (to within tolerance allowed by all.equal) then chi must be positive and lambda must be negative or an error is returned. If these conditions are satisfied, the distribution is identified as an inverse gamma distribution. If both chi and psi are positive, then the distribution is identified as a normal generalized inverse Gaussian distribution. Value A list with components: case Whichever of "error", "gamma", invgamma, or "normal" is identified by the function. errMessage An appropriate error message if an error was found, the empty string "" other- wise. Author(s) <NAME> <<EMAIL>> References Paolella, <NAME>. 
(2007) Intermediate Probability: A Computational Approach, Chichester: Wiley See Also dgig Examples gigCheckPars(c(5, 2.5, -0.5)) # normal gigCheckPars(c(-5, 2.5, 0.5)) # error gigCheckPars(c(5, -2.5, 0.5)) # error gigCheckPars(c(-5, -2.5, 0.5)) # error gigCheckPars(c(0, 2.5, 0.5)) # gamma gigCheckPars(c(0, 2.5, -0.5)) # error gigCheckPars(c(0, 0, 0.5)) # error gigCheckPars(c(0, 0, -0.5)) # error gigCheckPars(c(5, 0, 0.5)) # error gigCheckPars(c(5, 0, -0.5)) # invgamma gigFit Fit the Generalized Inverse Gausssian Distribution to Data Description Fits a generalized inverse Gaussian distribution to data. Displays the histogram, log-histogram (both with fitted densities), Q-Q plot and P-P plot for the fit which has the maximum likelihood. Usage gigFit(x, freq = NULL, paramStart = NULL, startMethod = c("Nelder-Mead","BFGS"), startValues = c("LM","GammaIG","MoM","Symb","US"), method = c("Nelder-Mead","BFGS","nlm"), stand = TRUE, plots = FALSE, printOut = FALSE, controlBFGS = list(maxit = 200), controlNM = list(maxit = 1000), maxitNLM = 1500, ...) ## S3 method for class 'gigFit' print(x, digits = max(3, getOption("digits") - 3), ...) ## S3 method for class 'gigFit' plot(x, which = 1:4, plotTitles = paste(c("Histogram of ", "Log-Histogram of ", "Q-Q Plot of ", "P-P Plot of "), x$obsName, sep = ""), ask = prod(par("mfcol")) < length(which) & dev.interactive(), ...) ## S3 method for class 'gigFit' coef(object, ...) ## S3 method for class 'gigFit' vcov(object, ...) Arguments x Data vector for gigFit. Object of class "gigFit" for print.gigFit and plot.gigFit. freq A vector of weights with length equal to length(x). paramStart A user specified starting parameter vector param taking the form c(chi, psi, lambda). startMethod Method used by gigFitStartMoM in calls to optim. startValues Code giving the method of determining starting values for finding the maximum likelihood estimate of param. method Different optimisation methods to consider. See Details. stand Logical. If TRUE, the data is first standardized by dividing by the sample standard deviation. plots Logical. If FALSE suppresses printing of the histogram, log-histogram, Q-Q plot and P-P plot. printOut Logical. If FALSE suppresses printing of results of fitting. controlBFGS A list of control parameters for optim when using the "BFGS" optimisation. controlNM A list of control parameters for optim when using the "Nelder-Mead" optimi- sation. maxitNLM A positive integer specifying the maximum number of iterations when using the "nlm" optimisation. digits Desired number of digits when the object is printed. which If a subset of the plots is required, specify a subset of the numbers 1:4. plotTitles Titles to appear above the plots. ask Logical. If TRUE, the user is asked before each plot, see par(ask = .). ... Passes arguments to optim, par, hist, logHist, qqgig and ppgig. object Object of class "gigFit" for coef.gigFit and for vcov.gigFit. Details Possible values of the argument startValues are the following: • "LM"Based on fitting linear models to the upper tails of the data x and the inverse of the data 1/x. • "GammaIG"Based on fitting gamma and inverse gamma distributions. • "MoM"Method of moments. • "Symb"Not yet implemented. • "US"User-supplied. If startValues = "US" then a value must be supplied for paramStart. For the details concerning the use of paramStart, startMethod, and startValues, see gigFitStart. The three optimisation methods currently available are: • "BFGS"Uses the quasi-Newton method "BFGS" as documented in optim. 
• "Nelder-Mead"Uses an implementation of the Nelder and Mead method as documented in optim. • "nlm"Uses the nlm function in R. For details of how to pass control information for optimisation using optim and nlm, see optim and nlm. When method = "nlm" is used, warnings may be produced. These do not appear to be a problem. Value gigFit returns a list with components: param A vector giving the maximum likelihood estimate of param, as c(chi, psi, lambda). maxLik The value of the maximised log-likelihood. method Optimisation method used. conv Convergence code. See the relevant documentation (either optim or nlm) for details on convergence. iter Number of iterations of optimisation routine. obs The data used to fit the generalized inverse Gaussian distribution. obsName A character string with the actual x argument name. paramStart Starting value of param returned by call to gigFitStart. svName Descriptive name for the method finding start values. startValues Acronym for the method of finding start values. breaks The cell boundaries found by a call to hist. midpoints The cell midpoints found by a call to hist. empDens The estimated density found by a call to hist. Author(s) <NAME> <<EMAIL>>, <NAME> References Jörgensen, B. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Lec- ture Notes in Statistics, Vol. 9, Springer-Verlag, New York. See Also optim, par, hist, logHist (pkg DistributionUtils), qqgig, ppgig, and gigFitStart. Examples param <- c(1, 1, 1) dataVector <- rgig(500, param = param) ## See how well gigFit works gigFit(dataVector) ##gigFit(dataVector, plots = TRUE) ## See how well gigFit works in the limiting cases ## Gamma case dataVector2 <- rgamma(500, shape = 1, rate = 1) gigFit(dataVector2) ## Inverse gamma require(actuar) dataVector3 <- rinvgamma(500, shape = 1, rate = 1) gigFit(dataVector3) ## Use nlm instead of default gigFit(dataVector, method = "nlm") gigFitStart Find Starting Values for Fitting a Generalized Inverse Gaussian Dis- tribution Description Finds starting values for input to a maximum likelihood routine for fitting the generalized inverse Gaussian distribution to data. Usage gigFitStart(x, startValues = c("LM","GammaIG","MoM","Symb","US"), paramStart = NULL, startMethodMoM = c("Nelder-Mead","BFGS"), ...) gigFitStartMoM(x, paramStart = NULL, startMethodMoM = "Nelder-Mead", ...) gigFitStartLM(x, ...) Arguments x Data vector. startValues Acronym indicating the method to use for obtaining starting values to be used as input to gigFit. See Details. paramStart Starting values for param if startValues = "US". startMethodMoM Method used by call to optim in finding method of moments estimates. ... Passes arguments to optim and calls to hist. Details Possible values of the argument startValues are the following: • "LM"Based on fitting linear models to the upper tails of the data x and the inverse of the data 1/x. • "GammaIG"Based on fitting gamma and inverse gamma distributions. • "MoM"Method of moments. • "Symb"Not yet implemented. • "US"User-supplied. If startValues = "US" then a value must be supplied for paramStart. When startValues = "MoM" an initial optimisation is needed to find the starting values. This opti- misations starts from arbitrary values, c(1,1,1) for the parameters (χ, ψ, λ) and calls optim with the method given by startMethodMoM. Other starting values for the method of moments can be used by supplying a value for paramStart. The default method of finding starting values is "LM". 
Testing indicates this is quite fast and finds good starting values. In addition, it does not require any starting values itself. gigFitStartMoM is called by gigFitStart and implements the method of moments approach. gigFitStartLM is called by gigFitStart and implements the linear models approach. Value gigFitStart returns a list with components: paramStart A vector with elements chi, psi, and lambda giving the starting value of param. breaks The cell boundaries found by a call to hist. midpoints The cell midpoints found by a call to hist. empDens The estimated density found by a call to hist. gigFitStartMoM and gigFitStartLM each return paramStart, the starting value of param, to the calling function gigFitStart Author(s) <NAME> <<EMAIL>>, <NAME> See Also dgig, gigFit. Examples param <- c(1, 1, 1) dataVector <- rgig(500, param = param) gigFitStart(dataVector) gigHessian Calculate Two-Sided Hessian for the Generalized Inverse Gaussian Distribution Description Calculates the Hessian of a function, either exactly or approximately. Used to obtaining the infor- mation matrix for maximum likelihood estimation. Usage gigHessian(x, param, hessianMethod = "tsHessian", whichParam = 1) Arguments x Data vector. param The maximum likelihood estimates parameter vector of the generalized inverse Gaussian distribution. There are five different sets of parameterazations can be used in this function, the first four sets are listed in gigChangePars and the last set is the log scale of the first set of the parameterization, i.e., mu,log(delta),Pi,log(zeta). hessianMethod Only the approximate method ("tsHessian") has actually been implemented so far. whichParam Numeric. A number between indicating which parameterization the argument param relates to. Only parameterization 1 is available so far. Details The approximate Hessian is obtained via a call to tsHessian from the package DistributionUtils. summary.gigFit calls the function gigHessian to calculate the Hessian matrix when the argument hessian = TRUE. Value gigHessian gives the approximate Hessian matrix for the data vector x and the estimated parameter vector param. Author(s) <NAME> <<EMAIL>>, <NAME> Examples ### Calculate the approximate Hessian using gigHessian: param <- c(1,1,1) dataVector <- rgig(500, param = param) fit <- gigFit(dataVector) coef <- coef(fit) gigHessian(x = dataVector, param = coef, hessianMethod = "tsHessian", whichParam = 1) ### Or calculate the approximate Hessian using summary.gigFit method: summary(fit, hessian = TRUE) gigMom Calculate Moments of the Generalized Inverse Gaussian Distribution Description Functions to calculate raw moments and moments about a given location for the generalized inverse Gaussian (GIG) distribution, including the gamma and inverse gamma distributions as special cases. Usage gigRawMom(order, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda)) gigMom(order, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda), about = 0) gammaRawMom(order, shape = 1, rate = 1, scale = 1/rate) Arguments order Numeric. The order of the moment to be calculated. Not permitted to be a vector. Must be a positive whole number except for moments about zero. chi A shape parameter that by default holds a value of 1. psi Another shape parameter that is set to 1 by default. lambda Shape parameter of the GIG distribution. Common to all forms of parameteri- zation. By default this is set to 1. param Numeric. The parameter vector specifying the GIG distribution. Of the form c(chi, psi, lambda) (see dgig). about Numeric. 
The point around which the moment is to be calculated.
shape Numeric. The shape parameter, must be non-negative, not permitted to be a vector.
scale Numeric. The scale parameter, must be positive, not permitted to be a vector.
rate Numeric. The rate parameter, an alternative way to specify the scale.
Details
The vector param of parameters is examined using gigCheckPars to see if the parameters are valid for the GIG distribution and if they correspond to the special cases, which are the gamma and inverse gamma distributions. Checking whether order is a whole number is carried out using the function is.wholenumber.
Raw moments (moments about zero) are calculated using the functions gigRawMom or gammaRawMom. For moments not about zero, the function momChangeAbout is used to derive moments about another point from raw moments. Note that raw moments of the inverse gamma distribution can be obtained from the raw moments of the gamma distribution because of the relationship between the two distributions. An alternative implementation of raw moments of the gamma and inverse gamma distributions may be found in the package actuar, and these may be faster since they are written in C.
To calculate the raw moments of the GIG distribution it is convenient to use the alternative parameterization of the GIG in terms of ω and η, given as parameterization 3 in gigChangePars. The raw moment of the GIG distribution of order k is then
η^k K_{λ+k}(ω) / K_λ(ω)
where K_λ() is the modified Bessel function of the third kind of order λ.
The raw moment of the gamma distribution of order k with shape parameter α and rate parameter β is
β^(−k) Γ(α + k) / Γ(α)
The raw moment of order k of the inverse gamma distribution with shape parameter α and rate parameter β is the raw moment of order −k of the gamma distribution with shape parameter α and rate parameter 1/β.
Value
The moment specified. In the case of raw moments, Inf is returned if the moment is infinite.
Author(s)
<NAME> <<EMAIL>>
References
Paolella, <NAME>. (2007) Intermediate Probability: A Computational Approach, Chichester: Wiley.
See Also
gigCheckPars, gigChangePars and, from package DistributionUtils: is.wholenumber, momChangeAbout, momIntegrated. Further, gigMean, gigVar, gigSkew, gigKurt.
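As a quick numerical check of the raw-moment formula in the Details above (an illustrative sketch, not one of the package's own examples; it assumes the usual identities ω = √(χψ) and η = √(χ/ψ) for the (χ, ψ, λ) parameterization):

## Check the k-th raw GIG moment against the Bessel-function formula,
## with the GeneralizedHyperbolic package attached.
param <- c(5, 2.5, -0.5)                      # c(chi, psi, lambda)
chi <- param[1]; psi <- param[2]; lambda <- param[3]
omega <- sqrt(chi * psi)                      # assumed: omega = sqrt(chi * psi)
eta <- sqrt(chi / psi)                        # assumed: eta = sqrt(chi / psi)
k <- 2
## eta^k K_{lambda + k}(omega) / K_lambda(omega)
eta^k * besselK(omega, lambda + k) / besselK(omega, lambda)
gigRawMom(k, param = param)                   # should agree closely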
Examples ## Computations, using momIntegrated from pkg 'DistributionUtils': momIntegrated <- DistributionUtils :: momIntegrated ### Raw moments of the generalized inverse Gaussian distribution param <- c(5, 2.5, -0.5) gigRawMom(1, param = param) momIntegrated("gig", order = 1, param = param, about = 0) gigRawMom(2, param = param) momIntegrated("gig", order = 2, param = param, about = 0) gigRawMom(10, param = param) momIntegrated("gig", order = 10, param = param, about = 0) gigRawMom(2.5, param = param) ### Moments of the generalized inverse Gaussian distribution param <- c(5, 2.5, -0.5) (m1 <- gigRawMom(1, param = param)) gigMom(1, param = param) gigMom(2, param = param, about = m1) (m2 <- momIntegrated("gig", order = 2, param = param, about = m1)) gigMom(1, param = param, about = m1) gigMom(3, param = param, about = m1) momIntegrated("gig", order = 3, param = param, about = m1) ### Raw moments of the gamma distribution shape <- 2 rate <- 3 param <- c(shape, rate) gammaRawMom(1, shape, rate) momIntegrated("gamma", order = 1, shape = shape, rate = rate, about = 0) gammaRawMom(2, shape, rate) momIntegrated("gamma", order = 2, shape = shape, rate = rate, about = 0) gammaRawMom(10, shape, rate) momIntegrated("gamma", order = 10, shape = shape, rate = rate, about = 0) ### Moments of the inverse gamma distribution param <- c(5, 0, -0.5) gigRawMom(2, param = param) # Inf gigRawMom(-2, param = param) momIntegrated("invgamma", order = -2, shape = -param[3], rate = param[1]/2, about = 0) ### An example where the moment is infinite: inverse gamma param <- c(5, 0, -0.5) gigMom(1, param = param) gigMom(2, param = param) gigParam Parameter Sets for the Generalized Inverse Gaussian Distribution Description These objects store different parameter sets of the generalized inverse Gaussian distribution as ma- trices for testing or demonstration purposes. The parameter sets gigSmallParam and gigLargeParam give combinations of values of the param- eters χ, ψ and λ. For gigSmallParam, the values of χ and ψ are chosen from {0.1, 0.5, 2, 5, 20, 50}, and the values of λ from {-0.5, 0, 0.5, 1, 5}. For gigLargeParam, the values of χ and ψ are chosen from {0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100}, and the values of λ from {-2, -1, -0.5, 0, 0.1, 0.2, 0.5, 1, 2, 5, 10}. Usage gigSmallParam gigLargeParam Format gigSmallParam: a 125 by 3 matrix; gigLargeParam: a 1100 by 3 matrix. Author(s) <NAME> <<EMAIL>> Examples data(gigParam) ## Check values of chi and psi plot(gigLargeParam[, 1], gigLargeParam[, 2]) ### Check all three parameters pairs(gigLargeParam, labels = c(expression(chi),expression(psi),expression(lambda))) ## Testing the accuracy of gigMean for (i in 1:nrow(gigSmallParam)) { param <- gigSmallParam[i, ] x <- rgig(1000, param = param) sampleMean <- mean(x) funMean <- gigMean(param = param) difference <- abs(sampleMean - funMean) print(difference) } GIGPlots Generalized Inverse Gaussian Quantile-Quantile and Percent-Percent Plots Description qqgig produces a generalized inverse Gaussian QQ plot of the values in y. ppgig produces a generalized inverse Gaussian PP (percent-percent) or probability plot of the values in y. If line = TRUE, a line with zero intercept and unit slope is added to the plot. Graphical parameters may be given as arguments to qqgig, and ppgig. Usage qqgig(y, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda), main = "GIG Q-Q Plot", xlab = "Theoretical Quantiles", ylab = "Sample Quantiles", plot.it = TRUE, line = TRUE, ...) 
ppgig(y, chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda), main = "GIG P-P Plot", xlab = "Uniform Quantiles", ylab = "Probability-integral-transformed Data", plot.it = TRUE, line = TRUE, ...) Arguments y The data sample. chi A shape parameter that by default holds a value of 1. psi Another shape parameter that is set to 1 by default. lambda Shape parameter of the GIG distribution. Common to all forms of parameteri- zation. By default this is set to 1. param Parameters of the generalized inverse Gaussian distribution. xlab, ylab, main Plot labels. plot.it Logical. TRUE denotes the results should be plotted. line Logical. If TRUE, a line with zero intercept and unit slope is added to the plot. ... Further graphical parameters. Value For qqgig and ppgig, a list with components: x The x coordinates of the points that are be plotted. y The y coordinates of the points that are be plotted. References <NAME>. and <NAME>. (1968) Probability plotting methods for the analysis of data. Biometrika. 55, 1–17. See Also ppoints, dgig. Examples par(mfrow = c(1, 2)) y <- rgig(1000, param = c(2, 3, 1)) qqgig(y, param = c(2, 3, 1), line = FALSE) abline(0, 1, col = 2) ppgig(y, param = c(2, 3, 1)) hyperbCalcRange Range of a Hyperbolic Distribution Description Given the parameter vector param of a hyperbolic distribution, this function calculates the range outside of which the distribution has negligible probability, or the density function is negligible, to a specified tolerance. The parameterization used is the (α, β) one (see dhyperb). To use another parameterization, use hyperbChangePars. Usage hyperbCalcRange(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), tol = 10^(-5), density = TRUE, ...) Arguments mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. param Value of parameter vector specifying the hyperbolic distribution. This takes the form c(mu, delta, alpha, beta). tol Tolerance. density Logical. If FALSE, the bounds are for the probability distribution. If TRUE, they are for the density function. ... Extra arguments for calls to uniroot. Details The particular hyperbolic distribution being considered is specified by the value of the parameter value param. If density = FALSE, the function calculates the effective range of the distribution, which is used in calculating the distribution function and quantiles, and may be used in determining the range when plotting the distribution. By effective range is meant that the probability of an observation being greater than the upper end is less than the specified tolerance tol. Likewise for being smaller than the lower end of the range. Note that this has not been implemented yet. If density = TRUE, the function gives a range, outside of which the density is less than the given tolerance. Useful for plotting the density. Value A two-component vector giving the lower and upper ends of the range. Author(s) <NAME> <<EMAIL>>, <NAME>, <NAME> References Barndorff-Nielsen, O. and Blæsild, P (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley. 
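As an illustrative check (not one of the package examples), the density-based range returned by hyperbCalcRange can be compared with the spread of a simulated sample:

## Most simulated observations should fall inside the reported range.
param <- c(0, 1, 3, 1)                         # c(mu, delta, alpha, beta)
rng <- hyperbCalcRange(param = param, tol = 10^(-5), density = TRUE)
rng
x <- rhyperb(10000, param = param)
range(x)                                       # typically lies within rng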
See Also dhyperb, hyperbChangePars Examples par(mfrow = c(1, 2)) param <- c(0, 1, 3, 1) hyperbRange <- hyperbCalcRange(param = param, tol = 10^(-3)) hyperbRange curve(phyperb(x, param = param), hyperbRange[1], hyperbRange[2]) maxDens <- dhyperb(hyperbMode(param = param), param = param) hyperbRange <- hyperbCalcRange(param = param, tol = 10^(-3) * maxDens, density = TRUE) hyperbRange curve(dhyperb(x, param = param), hyperbRange[1], hyperbRange[2]) hyperbChangePars Change Parameterizations of the Hyperbolic Distribution Description This function interchanges between the following 4 parameterizations of the hyperbolic distribution: 1. µ, δ, π, ζ 2. µ, δ, α, β 3. µ, δ, φ, γ 4. µ, δ, ξ, χ The first three are given in Barndorff-Nielsen and Blæsild (1983), and the fourth in Prause (1999) Usage hyperbChangePars(from, to, param, noNames = FALSE) Arguments from The set of parameters to change from. to The set of parameters to change to. param "from" parameter vector consisting of 4 numerical elements. noNames Logical. When TRUE, suppresses the parameter names in the output. Details In the 4 parameterizations, the following must be positive: 1. ζ, δ 2. α, δ 3. φ, γ, δ 4. ξ, δ Furthermore, note that in the second parameterization α must be greater than the absolute value of β, while in the fourth parameterization, ξ must be less than one, and the absolute value of χ must be less than ξ. Value A numerical vector of length 4 representing param in the to parameterization. Author(s) <NAME> <<EMAIL>>, <NAME>, <NAME> References Barndorff-Nielsen, O. and Blæsild, P. (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley. <NAME>. (1999) The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg. See Also dhyperb Examples param1 <- c(2, 1, 3, 1) # Parameterization 1 param2 <- hyperbChangePars(1, 2, param1) # Convert to parameterization 2 param2 # Parameterization 2 hyperbChangePars(2, 1, param2) # Back to parameterization 1 hyperbCvMTest Cramer-von~Mises Test of a Hyperbolic Distribution Description Carry out a Crämer-von~Mises test of a hyperbolic distribution where the parameters of the distri- bution are estimated, or calculate the p-value for such a test. Usage hyperbCvMTest(x, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), conf.level = 0.95, ...) hyperbCvMTestPValue(delta = 1, alpha = 1, beta = 0, Wsq, digits = 3) ## S3 method for class 'hyperbCvMTest' print(x, prefix = "\t", ...) Arguments x A numeric vector of data values for hyperbCvMTest, or object of class "hyperbCvMTest" for print.hyperbCvMTest. mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. param Parameters of the hyperbolic distribution taking the form c(mu, delta, alpha, beta). conf.level Confidence level of the the confidence interval. ... Further arguments to be passed to or from methods. Wsq Value of the test statistic in the Crämer-von~Mises test of the hyperbolic distri- bution. digits Number of decimal places for p-value. prefix Character(s) to be printed before the description of the test. Details hyperbCvMTest carries out a Crämer-von~Mises goodness-of-fit test of the hyperbolic distribution. 
The parameter param must be given in the (α, β) parameterization. hyperbCvMTestPValue calculates the p-value of the test, and is not expected to be called by the user. The method used is interpolation in Table 5 given in Puig & Stephens (2001), which assumes all the parameters of the distribution are unknown. Since the table used is limited, large p-values are simply given as “>~0.25” and very small ones as “<~0.01”. The table is created as the matrix wsqTable when the package GeneralizedHyperbolic is invoked. print.hyperbCvMTest prints the output from the Crämer-von~Mises goodness-of-fit test for the hyperbolic distribution in very similar format to that provided by print.htest. The only reason for having a special print method is that p-values can be given as less than some value or greater than some value, such as “<\ ~0.01”, or “>\ ~0.25”. Value hyperbCvMTest returns a list with class hyperbCvMTest containing the following components: statistic The value of the test statistic. method A character string with the value “Crämer-von~Mises test of hyperbolic distri- bution”. data.name A character string giving the name(s) of the data. parameter The value of the parameter param p.value The p-value of the test. warn A warning if the parameter values are outside the limits of the table given in Puig & Stephens (2001). hyperbCvMTestPValue returns a list with the elements p.value and warn only. Author(s) <NAME>, <NAME> References Puig, Pedro and Stephens, <NAME>. (2001), Goodness-of-fit tests for the hyperbolic distribution. The Canadian Journal of Statistics/La Revue Canadienne de Statistique, 29, 309–320. Examples param <- c(2, 2, 2, 1.5) dataVector <- rhyperb(500, param = param) fittedparam <- hyperbFit(dataVector)$param hyperbCvMTest(dataVector, param = fittedparam) dataVector <- rnorm(1000) fittedparam <- hyperbFit(dataVector, startValues = "FN")$param hyperbCvMTest(dataVector, param = fittedparam) hyperbFit Fit the Hyperbolic Distribution to Data Description Fits a hyperbolic distribution to data. Displays the histogram, log-histogram (both with fitted den- sities), Q-Q plot and P-P plot for the fit which has the maximum likelihood. Usage hyperbFit(x, freq = NULL, paramStart = NULL, startMethod = c("Nelder-Mead","BFGS"), startValues = c("BN","US","FN","SL","MoM"), criterion = "MLE", method = c("Nelder-Mead","BFGS","nlm", "L-BFGS-B","nlminb","constrOptim"), plots = FALSE, printOut = FALSE, controlBFGS = list(maxit = 200), controlNM = list(maxit = 1000), maxitNLM = 1500, controlLBFGSB = list(maxit = 200), controlNLMINB = list(), controlCO = list(), ...) ## S3 method for class 'hyperbFit' print(x, digits = max(3, getOption("digits") - 3), ...) ## S3 method for class 'hyperbFit' plot(x, which = 1:4, plotTitles = paste(c("Histogram of ","Log-Histogram of ", "Q-Q Plot of ","P-P Plot of "), x$obsName, sep = ""), ask = prod(par("mfcol")) < length(which) & dev.interactive(), ...) ## S3 method for class 'hyperbFit' coef(object, ...) ## S3 method for class 'hyperbFit' vcov(object, ...) Arguments x Data vector for hyperbFit. Object of class "hyperbFit" for print.hyperbFit and plot.hyperbFit. freq A vector of weights with length equal to length(x). paramStart A user specified starting parameter vector param taking the form c(mu, delta, alpha, beta). startMethod Method used by hyperbFitStart in calls to optim. startValues Code giving the method of determining starting values for finding the maximum likelihood estimate of param. criterion Currently only "MLE" is implemented. 
method Different optimisation methods to consider. See Details. plots Logical. If FALSE suppresses printing of the histogram, log-histogram, Q-Q plot and P-P plot. printOut Logical. If FALSE suppresses printing of results of fitting. controlBFGS A list of control parameters for optim when using the "BFGS" optimisation. controlNM A list of control parameters for optim when using the "Nelder-Mead" optimi- sation. maxitNLM A positive integer specifying the maximum number of iterations when using the "nlm" optimisation. controlLBFGSB A list of control parameters for optim when using the "L-BFGS-B" optimisation. controlNLMINB A list of control parameters for nlminb when using the "nlminb" optimisation. controlCO A list of control parameters for constrOptim when using the "constrOptim" optimisation. digits Desired number of digits when the object is printed. which If a subset of the plots is required, specify a subset of the numbers 1:4. plotTitles Titles to appear above the plots. ask Logical. If TRUE, the user is asked before each plot, see par(ask = .). ... Passes arguments to par, hist, logHist, qqhyperb and pphyperb. object Object of class "hyperbFit" for coef.hyperbFit and for vcov.hyperbFit. Details startMethod can be either "BFGS" or "Nelder-Mead". startValues can be one of the following: • "US"User-supplied. • "BN"Based on Barndorff-Nielsen (1977). • "FN"A fitted normal distribution. • "SL"Based on a fitted skew-Laplace distribution. • "MoM"Method of moments. For the details concerning the use of paramStart, startMethod, and startValues, see hyperbFitStart. The six optimisation methods currently available are: • "BFGS"Uses the quasi-Newton method "BFGS" as documented in optim. • "Nelder-Mead"Uses an implementation of the Nelder and Mead method as documented in optim. • "nlm"Uses the nlm function in R. • "L-BFGS-B"Uses the quasi-Newton method with box constraints "L-BFGS-B" as documented in optim. • "nlminb"Uses the nlminb function in R. • "constrOptim"Uses the constrOptim function in R. For details of how to pass control information for optimisation using optim, nlm, nlminb and constrOptim, see optim, nlm, nlminb and constrOptim. When method = "nlm" is used, warnings may be produced. These do not appear to be a problem. Value hyperbFit returns a list with components: param A vector giving the maximum likelihood estimate of param, as c(mu, delta, alpha, beta). maxLik The value of the maximised log-likelihood. method Optimisation method used. conv Convergence code. See the relevant documentation (either optim or nlm) for details on convergence. iter Number of iterations of optimisation routine. obs The data used to fit the hyperbolic distribution. obsName A character string with the actual x argument name. paramStart Starting value of param returned by call to hyperbFitStart. svName Descriptive name for the method finding start values. startValues Acronym for the method of finding start values. breaks The cell boundaries found by a call to hist. midpoints The cell midpoints found by a call to hist. empDens The estimated density found by a call to hist. Author(s) <NAME> <<EMAIL>>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> References Barndorff-Nielsen, O. (1977) Exponentially decreasing distributions for the logarithm of particle size, Proc. Roy. Soc. Lond. A353, 401–419. <NAME>., <NAME>. and <NAME>. (1992) Statistics of particle size data. Appl. Statist. 41, 127–146. 
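As an illustrative sketch (not one of the package examples), the components listed under Value can be inspected directly from the fitted object:

## Inspect a hyperbFit object.
param <- c(2, 2, 2, 1)
dataVector <- rhyperb(500, param = param)
fit <- hyperbFit(dataVector)
fit$param                     # ML estimates, c(mu, delta, alpha, beta)
fit$maxLik                    # maximised log-likelihood
fit$conv                      # optimiser convergence code
summary(fit, hessian = TRUE)  # Hessian-based summary (see hyperbHessian)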
See Also optim, nlm, nlminb, constrOptim, par, hist, logHist (pkg DistributionUtils), qqhyperb, pphyperb, dskewlap and hyperbFitStart. Examples param <- c(2, 2, 2, 1) dataVector <- rhyperb(500, param = param) ## See how well hyperbFit works hyperbFit(dataVector) hyperbFit(dataVector, plots = TRUE) fit <- hyperbFit(dataVector) par(mfrow = c(1, 2)) plot(fit, which = c(1, 3)) ## Use nlm instead of default hyperbFit(dataVector, method = "nlm") hyperbFitStart Find Starting Values for Fitting a Hyperbolic Distribution Description Finds starting values for input to a maximum likelihood routine for fitting hyperbolic distribution to data. Usage hyperbFitStart(x, startValues = c("BN","US","FN","SL","MoM"), paramStart = NULL, startMethodSL = c("Nelder-Mead","BFGS"), startMethodMoM = c("Nelder-Mead","BFGS"), ...) hyperbFitStartMoM(x, startMethodMoM = "Nelder-Mead", ...) Arguments x Data vector. startValues Vector of the different starting values to consider. See Details. paramStart Starting values for param if startValues = "US". startMethodSL Method used by call to optim in finding skew Laplace estimates. startMethodMoM Method used by call to optim in finding method of moments estimates. ... Passes arguments to hist and optim. Details Possible values of the argument startValues are the following: • "US"User-supplied. • "BN"Based on Barndorff-Nielsen (1977). • "FN"A fitted normal distribution. • "SL"Based on a fitted skew-Laplace distribution. • "MoM"Method of moments. If startValues = "US" then a value must be supplied for paramStart. If startValues = "MoM", hyperbFitStartMoM is called. These starting values are based on Barndorff- Nielsen et al (1985). If startValues = "SL", or startValues = "MoM" an initial optimisation is needed to find the start- ing values. These optimisations call optim. Value hyperbFitStart returns a list with components: paramStart A vector with elements mu, delta, alpha and beta giving the starting value of param. breaks The cell boundaries found by a call to hist. midpoints The cell midpoints found by a call to hist. empDens The estimated density found by a call to hist. hyperbFitStartMoM returns only the method of moments estimates as a vector with elements mu, delta, alpha and beta. Author(s) <NAME> <<EMAIL>>, <NAME>, <NAME>, <NAME>, <NAME> References <NAME>. (1977) Exponentially decreasing distributions for the logarithm of particle size, Proc. Roy. Soc. Lond., A353, 401–419. <NAME>., <NAME>., <NAME>., and <NAME>. (1985). The fascination of sand. In A celebration of statistics, The ISI Centenary Volume, eds., <NAME>. and <NAME>., pp. 57–87. New York: Springer-Verlag. <NAME>., <NAME>. and <NAME>. (1992) Statistics of particle size data. Appl. Statist., 41, 127–146. See Also dhyperb, dskewlap, hyperbFit, hist, and optim. Examples param <- c(2, 2, 2, 1) dataVector <- rhyperb(500, param = param) hyperbFitStart(dataVector, startValues = "FN") hyperbFitStartMoM(dataVector) hyperbFitStart(dataVector, startValues = "MoM") hyperbHessian Calculate Two-Sided Hessian for the Hyperbolic Distribution Description Calculates the Hessian of a function, either exactly or approximately. Used to obtain the information matrix for maximum likelihood estimation. Usage hyperbHessian(x, param, hessianMethod = "exact", whichParam = 1:5) sumX(x, mu, delta, r, k) Arguments x Data vector. param The maximum likelihood estimates parameter vector of the hyperbolic distribu- tion. 
There are five different sets of parameterazations can be used in this func- tion, the first four sets are listed in hyperbChangePars and the last set is the log scale of the first set of the parameterization, i.e., mu,log(delta),Pi,log(zeta). hessianMethod Two methods are available to calculate the Hessian exactly ( "exact") or ap- proximately ("tsHessian"). whichParam Numeric. A number between 1 to 5 indicating which set of the parameterization is the specified value in argument param belong to. mu Value of the parameter µ of the hyperbolic distribution. delta Value of the parameter δ of the hyperbolic distribution. r Parameter used in calculating a cumulative sum of the data vector x. k Parameter used in calculating a cumulative sum of the data vector x. Details The formulae for the exact Hessian are derived by Maple software with some simplifications. For now, the exact Hessian can only be obtained based on the first, second or the last parame- terization sets. The approximate Hessian is obtained via a call to tsHessian from the package DistributionUtils. summary.hyperbFit calls the function hyperbHessian to calculate the Hes- sian matrix when the argument hessian = TRUE. Value hyperbHessian gives the approximate or exact Hessian matrix for the data vector x and the esti- mated parameter vector param. sumX is a sum term used in calculating the exact Hessian. It is called by hyperbHessian when the argument hessianMethod = "exact". It is not expected to be called directly by users. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> Examples ### Calculate the exact Hessian using hyperbHessian: param <- c(2, 2, 2, 1) dataVector <- rhyperb(500, param = param) fit <- hyperbFit(dataVector, method = "BFGS") coef <- coef(fit) hyperbHessian(x = dataVector, param = coef, hessianMethod = "exact", whichParam = 2) ### Or calculate the exact Hessian using summary.hyperbFit method: summary(fit, hessian = TRUE) ## Calculate the approximate Hessian: summary(fit, hessian = TRUE, hessianMethod = "tsHessian") hyperblm Fitting Linear Models with Hyperbolic Errors Description Fits linear models with hyperbolic errors. Can be used to carry out linear regression for data ex- hibiting heavy tails and skewness. Displays the histogram, log-histogram (both with fitted error distribution), Q-Q plot and residuals vs. fitted values plot for the fitted linear model. Usage hyperblm(formula, data, subset, weights, na.action, xx = FALSE, y = FALSE, contrasts = NULL, offset, method = "Nelder-Mead", startMethod = "Nelder-Mead", startStarts = "BN", paramStart = NULL, maxiter = 100, tolerance = 0.0001, controlBFGS = list(maxit = 1000), controlNM = list(maxit = 10000), maxitNLM = 10000, controlCO = list(), silent = TRUE, ...) ## S3 method for class 'hyperblm' print(x, digits = max(3, getOption("digits")-3), ...) ## S3 method for class 'hyperblm' coef(object, ...) ## S3 method for class 'hyperblm' plot(x, breaks = "FD", plotTitles = c("Residuals vs Fitted Values", "Histogram of residuals", "Log-Histogram of residuals", "Q-Q Plot"), ...) Arguments formula an object of class "formula" (or one that can be coerced to that class): a sym- bolic description of the model to be fitted. The details of model specification are given under ‘Details’. data an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which lm is called. 
subset an optional vector specifying a subset of observations to be used in the fitting process. weights an optional vector of weights to be used in the fitting process. Should be NULL or a numeric vector. If non-NULL, weighted least squares is used with weights weights (that is, minimizing sum(w*e^2)); otherwise ordinary least squares is used. See also ‘Details’, na.action A function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options, and is na.fail if that is unset. The ‘factory-fresh’ default is na.omit. Another possible value is NULL, no action. Value na.exclude can be useful. xx, y Logicals. If TRUE, the corresponding components of the fit (the explanatory matrix and the response vector) are returned. contrasts An optional list. See the contrasts.arg of model.matrix.default. offset An optional vector. See Details. method Character. Possible values are "BFGS", "Nelder-Mead" and "nlm". See Details. startMethod Character. Possible values are "BFGS" and "Nelder-Mead". See Details. startStarts Character. Possible values are "BN", "FN", "SL", "US" and "MoM". See Details. paramStart An optional vector. A vector of parameter start values for the optimization rou- tine. See Details. maxiter Numeric. The maximum number of two-stage optimization alternating itera- tions. See Details. tolerance Numeric. The two-stage optimization convergence ratio. See Details. controlBFGS, controlNM Lists. Lists of control parameters for optim when using corresponding (BFGS, Nelder-Mead) optimisation method in first stage. See optim. maxitNLM Numeric. The maximum number of iterations for the NLM optimizer. controlCO List. A list of control parameters for constrOptim in second stage. silent Logical. If TRUE, the error messgae of optimizer will not be displayed. x An object of class "hyperblm". object An object of class "hyperblm". breaks May be a vector, a single number or a character string. See hist. plotTitles Titles to appear above the plots. digits Numeric. Desired number of digits when the object is printed. ... Passes additional arguments to function hyperbFitStand, optim and constrOptim. Details Models for hyperblm are specified symbolically. A typical model has the form response ~ terms where response is the (numeric) response vector and terms is a series of terms which specifies a linear predictor for response. A terms specification of the form first + second indicates all the terms in first together with all the terms in second with duplicates removed. A specification of the form first:second indicates the set of terms obtained by taking the interactions of all terms in first with all terms in second. The specification first*second indicates the cross of first and second. This is the same as first + second + first:second. If the formula includes an offset, this is evaluated and subtracted from the response. If response is a matrix a linear model is fitted separately by least-squares to each column of the matrix. See model.matrix for some further details. The terms in the formula will be re-ordered so that main effects come first, followed by the interactions, all second-order, all third-order and so on. A formula has an implied intercept term. To remove this use either y ~ x - 1 or y ~ 0 + x. See formula for more details of allowed formulae. 
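For concreteness, some equivalent formula specifications are sketched below; the data frame dat and the variables y, x1 and x2 are hypothetical and only for illustration:

## Not run:
hyperblm(y ~ x1 + x2, data = dat)        # main effects only
hyperblm(y ~ x1 * x2, data = dat)        # same as x1 + x2 + x1:x2
hyperblm(y ~ 0 + x1 + x2, data = dat)    # suppress the implied intercept
## End(Not run)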
Non-NULL weights can be used to indicate that different observations have different variances (with the values in weights being inversely proportional to the variances); or equivalently, when the elements of weights are positive integers wi , that each response yi is the mean of wi unit-weight observations (including the case that there are wi observations equal to yi and the data have been summarized). hyperblm calls the lower level function hyperblmFit for the actual numerical computations. All of weights, subset and offset are evaluated in the same way as variables in formula, that is first in data and then in the environment of formula. hyperblmFit uses a two-stage alternating optimization routine. The quality of parameter start values (especially the error distribution parameters) is crucial to the routine’s convergence. The user can specify the start values via the paramStart argument, otherwise the function finds reliable start values by calling the hyperbFitStand function. startMethod in the argument list is the optimization method for function hyperbFitStandStart which finds the start values for function hyperbFitStand. It is set to "Nelder-Mead" by default due to the robustness of this optimizer. The "BFGS" method is also implemented as it is relatively fast to converge. Since "BFGS" method is a quasi-Newton method it will not as robust and for some data will not achieve convergence. startStarts is the method used to find the start values for function hyperbFitStandStart which includes: • "BN"A method from Barndorff-Nielsen (1977) based on estimates of ψ and γ the absolute slopes of the left and right asymptotes to the log density function • "FN"Based on a fitted normal distribution as it is a limit of the hyperbolic distribution • "SL"Based on a fitted skew-Laplace distribution for which the log density has the form of two straight line with absolute slopes 1/α, 1/β • "MoM"A method of moment approach • "US"User specified method is the method used in stage one of the two-stage alternating optimization routine. As the startMethod, it is set to "Nelder-Mead" by default. Besides "BFGS","nlm" is also implemented as a alternative. Since BFGS method is a quasi-Newton method it will not as robust and for some data will not achieve convergence. If the maximum of the ratio the change of the individual coefficients is smaller than tolerance then the routine assumes convergence, otherwise if the alternating iteration number exceeds maxiter with the maximum of the ratio the change of the individual coefficients larger than tolerance, the routine is considered not to have converged. Value hyperblm returns an object of class "hyperblm" which is a list containing: coefficients A named vector of regression coefficients. distributionParams A named vector of fitted hyperbolic error distribution parameters. fitted.values The fitted values from the model. residuals The remainder after subtracting fitted values from response. mle The maximum likelihood value of the model. method The optimization method for stage one. paramStart The start values of parameters that the user specified (only where relevant). residsParamStart The start values of parameters obtained by hyperbFitStand (only where rele- vant). call The matched call. terms The terms object used. contrasts The contrasts used (only where relevant). xlevels The levels of the factors used in the fitting (only where relevant). offset The offset used (only where relevant) xNames The names of each explanatory variables. 
If explanatory variables don’t have names then they will be named x. yVec The response vector. xMatrix The explanatory variables matrix. iterations Number of two-stage alternating iterations to convergence. convergence The convergence code for two stage optimization: 0 is the system converged, 1 is first stage does not converge, 2 is second stage does not converge, 3 is the both stages do not converge. breaks The cell boundaries found by a call the hist. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> References Barndorff-Nielsen, O. (1977) Exponentially decreasing distributions for the logarithm of particle size, Proc. Roy. Soc. Lond., A353, 401–419. <NAME>. (1999). The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg. <NAME> (2005). hypReg: A Function for Fitting a Linear Regression Model in R with Hyperbolic Error. Masters Thesis, Statistics Faculty, University of Auckland. Paolella, <NAME>. (2007). Intermediate Probability: A Computational Approach. pp. 415 -Chichester: Wiley. Scott, <NAME>. and <NAME> and <NAME>, (2011). Fitting the Hyperbolic Distribu- tion with R: A Case Study of Optimization Techniques. In preparation. <NAME>. and <NAME>. (2003). Confidence intervals by the profile likelihood method, with applications in veterinary epidemiology. ISVEE X. See Also print.hyperblm prints the regression result in a table. coef.hyperblm obtains the regression coefficients and error distribution parameters of the fitted model. summary.hyperblm obtains a summary output of class hyperblm object. print.summary.hyperblm prints the summary output in a table. plot.hyperblm obtains a residual vs fitted value plot, a histgram of residuals with error distribution density curve on top, a histgram of log residuals with error distribution error density curve on top and a QQ plot. hyperblmFit, optim, nlm, constrOptim, hist, hyperbFitStand, hyperbFitStandStart. Examples ### stackloss data example ## Not run: airflow <- stackloss[, 1] temperature <- stackloss[, 2] acid <- stackloss[, 3] stack <- stackloss[, 4] hyperblm.fit <- hyperblm(stack ~ airflow + temperature + acid) coef.hyperblm(hyperblm.fit) plot.hyperblm(hyperblm.fit, breaks = 20) summary.hyperblm(hyperblm.fit, hessian = FALSE) ## End(Not run) Hyperbolic Hyperbolic Distribution Description Density function, distribution function, quantiles and random number generation for the hyperbolic distribution with parameter vector param. Utility routines are included for the derivative of the density function and to find suitable break points for use in determining the distribution function. Usage dhyperb(x, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) phyperb(q, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), lower.tail = TRUE, subdivisions = 100, intTol = .Machine$double.eps^0.25, valueOnly = TRUE, ...) qhyperb(p, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), lower.tail = TRUE, method = c("spline", "integrate"), nInterpol = 501, uniTol = .Machine$double.eps^0.25, subdivisions = 100, intTol = uniTol, ...) rhyperb(n, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) ddhyperb(x, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) Arguments x,q Vector of quantiles. p Vector of probabilities. n Number of observations to be generated. mu µ is the location parameter. By default this is set to 0. 
delta δ is the scale parameter of the distribution. A default value of 1 has been set.
alpha α is the tail parameter, with a default value of 1.
beta β is the skewness parameter, by default this is 0.
param Parameter vector taking the form c(mu, delta, alpha, beta).
method Character. If "spline", quantiles are found from a spline approximation to the distribution function. If "integrate", the distribution function used is always obtained by integration.
lower.tail Logical. If lower.tail = TRUE, the cumulative density is taken from the lower tail.
subdivisions The maximum number of subdivisions used to integrate the density and determine the accuracy of the distribution function calculation.
intTol Value of rel.tol and hence abs.tol in calls to integrate. See integrate.
valueOnly Logical. If valueOnly = TRUE calls to pghyp only return the value obtained for the integral. If valueOnly = FALSE an estimate of the accuracy of the numerical integration is also returned.
nInterpol Number of points used in qghyp for cubic spline interpolation of the distribution function.
uniTol Value of tol in calls to uniroot. See uniroot.
... Passes arguments to uniroot. See Details.
Details
The hyperbolic distribution has density
f(x) = 1 / (2δ √(1 + π²) K₁(ζ)) · exp(−ζ [√(1 + π²) √(1 + ((x − µ)/δ)²) − π(x − µ)/δ])
where K₁() is the modified Bessel function of the third kind with order 1.
A succinct description of the hyperbolic distribution is given in Barndorff-Nielsen and Blæsild (1983). Three different possible parameterizations are described in that paper. A fourth parameterization is given in Prause (1999). All use location and scale parameters µ and δ. There are two other parameters in each case.
Use hyperbChangePars to convert from the (π, ζ), (φ, γ) or (ξ, χ) parameterizations to the (α, β) parameterization used above.
Each of the functions is a wrapper for its equivalent generalized hyperbolic counterpart. For example, dhyperb calls dghyp. See dghyp.
The hyperbolic distribution is a special case of the generalized hyperbolic distribution (Barndorff-Nielsen and Blæsild (1983)). The generalized hyperbolic distribution can be represented as a particular mixture of the normal distribution where the mixing distribution is the generalized inverse Gaussian. rhyperb uses this representation to generate observations from the hyperbolic distribution. Generalized inverse Gaussian observations are obtained via the algorithm of Dagpunar (1989).
Value
dhyperb gives the density, phyperb gives the distribution function, qhyperb gives the quantile function and rhyperb generates random variates. An estimate of the accuracy of the approximation to the distribution function may be found by setting accuracy = TRUE in the call to phyperb, which then returns a list with components value and error.
ddhyperb gives the derivative of dhyperb.
Author(s)
<NAME> <<EMAIL>>, <NAME>, <NAME>, <NAME>
References
Barndorff-Nielsen, O. and Blæsild, P. (1983). Hyperbolic distributions. In Encyclopedia of Statistical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley.
<NAME>. (1989). An easily implemented generalized inverse Gaussian generator. Commun. Statist. -Simula., 18, 703–710.
<NAME>. (1999) The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg.
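The mixture representation mentioned in the Details can be illustrated directly. This is a sketch, not a package example; the GIG mixing parameters χ = δ², ψ = α² − β², λ = 1 are the usual textbook mapping for the hyperbolic case and are assumed here rather than taken from this manual:

## Hyperbolic variates as a normal variance-mean mixture with GIG mixing.
param <- c(0, 2, 1.5, 0.5)                     # c(mu, delta, alpha, beta)
mu <- param[1]; delta <- param[2]; alpha <- param[3]; beta <- param[4]
w <- rgig(5000, param = c(delta^2, alpha^2 - beta^2, 1))  # assumed mapping
simData <- mu + beta * w + sqrt(w) * rnorm(5000)
## Compare the simulated values with dhyperb on the log scale
DistributionUtils::logHist(simData, main = "Mixture representation check")
curve(log(dhyperb(x, param = param)), add = TRUE,
      range(simData)[1], range(simData)[2], n = 500)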
See Also safeIntegrate, integrate for its shortfalls, splinefun, uniroot and hyperbChangePars for changing parameters to the (α, β) parameterization, dghyp for the generalized hyperbolic distribu- tion. Examples param <- c(0, 2, 1, 0) hyperbRange <- hyperbCalcRange(param = param, tol = 10^(-3)) par(mfrow = c(1, 2)) curve(dhyperb(x, param = param), from = hyperbRange[1], to = hyperbRange[2], n = 1000) title("Density of the\n Hyperbolic Distribution") curve(phyperb(x, param = param), from = hyperbRange[1], to = hyperbRange[2], n = 1000) title("Distribution Function of the\n Hyperbolic Distribution") dataVector <- rhyperb(500, param = param) curve(dhyperb(x, param = param), range(dataVector)[1], range(dataVector)[2], n = 500) hist(dataVector, freq = FALSE, add =TRUE) title("Density and Histogram\n of the Hyperbolic Distribution") DistributionUtils::logHist(dataVector, main = "Log-Density and Log-Histogram\n of the Hyperbolic Distribution") curve(log(dhyperb(x, param = param)), add = TRUE, range(dataVector)[1], range(dataVector)[2], n = 500) par(mfrow = c(2, 1)) curve(dhyperb(x, param = param), from = hyperbRange[1], to = hyperbRange[2], n = 1000) title("Density of the\n Hyperbolic Distribution") curve(ddhyperb(x, param = param), from = hyperbRange[1], to = hyperbRange[2], n = 1000) title("Derivative of the Density\n of the Hyperbolic Distribution") hyperbParam Parameter Sets for the Hyperbolic Distribution Description These objects store different parameter sets of the hyperbolic distribution as matrices for testing or demonstration purposes. The parameter sets hyperbSmallShape and hyperbLargeShape have a constant location parameter of µ = 0, and constant scale parameter δ = 1. In hyperbSmallParam and hyperbLargeParam the values of the location and scale parameters vary. In these parameter sets the location parameter µ = 0 takes values from {0, 1} and {-1, 0, 1, 2} respectively. For the scale parameter δ, values are drawn from {1, 5} and {1, 2, 5, 10} respectively. For the shape parameters α and β the approach is more complex. The values for these shape parameters were chosen by choosing values of ξ and χ which range over the shape triangle, then the function hyperbChangePars was applied to convert them to the α, β parameterization. The resulting α, β values were then rounded to three decimal places. See the examples for the values of ξ and χ for the large parameter sets. Usage hyperbSmallShape hyperbLargeShape hyperbSmallParam hyperbLargeParam Format hyperbSmallShape: a 7 by 4 matrix; hyperbLargeShape: a 15 by 4 matrix; hyperbSmallParam: a 28 by 4 matrix; hyperbLargeParam: a 240 by 4 matrix. Author(s) <NAME> <<EMAIL>> Examples data(hyperbParam) plotShapeTriangle() xis <- rep(c(0.1,0.3,0.5,0.7,0.9), 1:5) chis <- c(0,-0.25,0.25,-0.45,0,0.45,-0.65,-0.3,0.3,0.65, -0.85,-0.4,0,0.4,0.85) points(chis, xis, pch = 20, col = "red") ## Testing the accuracy of hyperbMean for (i in 1:nrow(hyperbSmallParam)) { param <- hyperbSmallParam[i, ] x <- rhyperb(1000, param = param) sampleMean <- mean(x) funMean <- hyperbMean(param = param) difference <- abs(sampleMean - funMean) print(difference) } HyperbPlots Hyperbolic Quantile-Quantile and Percent-Percent Plots Description qqhyperb produces a hyperbolic Q-Q plot of the values in y. pphyperb produces a hyperbolic P-P (percent-percent) or probability plot of the values in y. Graphical parameters may be given as arguments to qqhyperb, and pphyperb. 
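A typical use of these plots (an illustrative sketch, not a package example) is to assess a fit obtained from hyperbFit:

dataVector <- rhyperb(300, param = c(0, 1, 2, 0.5))
fittedparam <- hyperbFit(dataVector)$param
qqhyperb(dataVector, param = fittedparam)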
Usage qqhyperb(y, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), main = "Hyperbolic Q-Q Plot", xlab = "Theoretical Quantiles", ylab = "Sample Quantiles", plot.it = TRUE, line = TRUE, ...) pphyperb(y, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), main = "Hyperbolic P-P Plot", xlab = "Uniform Quantiles", ylab = "Probability-integral-transformed Data", plot.it = TRUE, line = TRUE, ...) Arguments y The data sample. mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. param Parameters of the hyperbolic distribution. xlab, ylab, main Plot labels. plot.it Logical. Should the result be plotted? line Add line through origin with unit slope. ... Further graphical parameters. Value For qqhyperb and pphyperb, a list with components: x The x coordinates of the points that are to be plotted. y The y coordinates of the points that are to be plotted. References <NAME>. and <NAME>. (1968) Probability plotting methods for the analysis of data. Biometrika. 55, 1–17. See Also ppoints, dhyperb, hyperbFit Examples par(mfrow = c(1, 2)) param <- c(2, 2, 2, 1.5) y <- rhyperb(200, param = param) qqhyperb(y, param = param, line = FALSE) abline(0, 1, col = 2) pphyperb(y, param = param) hyperbWSqTable Percentage Points for the Cramer-von Mises Test of the Hyperbolic Distribution Description This gives Table 5 of Puig & Stephens (2001) which is used for testing the goodness-of-fit of the hyperbolic distribution using the Crämer-von~Mises test. It is for internal use by hyperbCvMTest and hyperbCvMTestPValue only and is not intended to be accessed by the user. It is loaded auto- matically when the package HyperbolicDist is invoked. Usage hyperbWSqTable Format The hyperbWSqTable matrix has 55 rows and 5 columns, giving percentage points of W 2 for dif- ferent values of ξ and α (the rows), and of χ (the columns). Source Puig, Pedro and Stephens, <NAME>. (2001), Goodness-of-fit tests for the hyperbolic distribution. The Canadian Journal of Statistics/La Revue Canadienne de Statistique, 29, 309–320. mamquam Size of Gravels from Mamquam River Description Size of gravels collected from a sandbar in the Mamquam River, British Columbia, Canada. Sum- mary data, giving the frequency of observations in 16 different size classes. Usage data(mamquam) Format The mamquam data frame has 16 rows and 2 columns. [, 1] midpoints midpoints of intervals (psi units) [, 2] counts number of observations in interval Details Gravel sizes are determined by passing clasts through templates of particular sizes. This gives a range in which the size of each clast lies. Sizes (in mm) are then converted into psi units by taking the base 2 logarithm of the size. The midpoints specified are the midpoints of the psi unit ranges, and counts gives the number of observations in each size range. The classes are of length 0.5 psi units. There are 3574 observations. Source Rice, <NAME> Church, Michael (1996) Sampling surficial gravels: the precision of size distri- bution percentile estimates. J. of Sedimentary Research, 66, 654–665. 
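As a small illustration of the psi-unit conversion described under Details, i.e. the base 2 logarithm of the clast size in mm (the sizes below are hypothetical):

sizeMM <- c(8, 16, 22.6, 32)
log2(sizeMM)        # sizes expressed in psi units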
Examples
data(mamquam)
str(mamquam)
### Construct data from frequency summary, taking all observations
### at midpoints of intervals
psi <- rep(mamquam$midpoints, mamquam$counts)
barplot(table(psi))
### Fit the hyperbolic distribution
hyperbFit(psi)
### Actually hyperbFit can deal with frequency data
hyperbFit(mamquam$midpoints, freq = mamquam$counts)
momRecursion Computes the moment coefficients recursively for generalized hyperbolic and related distributions
Description
This function computes all of the moment coefficients by recursion, based on Scott, Würtz and Tran (2008). See Details for the formula.
Usage
momRecursion(order = 12, printMatrix = FALSE)
Arguments
order Numeric. The order of the moment coefficients to be calculated. Not permitted to be a vector. Must be a positive whole number except for moments about zero.
printMatrix Logical. Should the coefficients matrix be printed?
Details
The moment coefficients are computed recursively as a_{1,1} = 1 and
a_{k,ℓ} = a_{k−1,ℓ−1} + (2ℓ − k + 1) a_{k−1,ℓ}
with a_{k,ℓ} = 0 for ℓ < ⌊(k + 1)/2⌋ or ℓ > k, where k = order and ℓ takes the integer values from (k + 1)/2 to k. This formula is given in Scott, Würtz and Tran (2008, working paper).
The function also calculates M, which is equal to 2ℓ − k. It is a common term which appears in the formulae for calculating moments of generalized hyperbolic and related distributions.
Value
a The non-zero moment coefficients for the specified order.
l Integers from (order+1)/2 to order. It is used when computing the moment coefficients and the mu moments.
M The common term used when computing mu moments for generalized hyperbolic and related distributions, M = 2ℓ − k, k = order.
lmin The minimum of ℓ, which is equal to (order+1)/2.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
<NAME>., <NAME>. and <NAME>. (2008) Moments of the Generalized Hyperbolic Distribution. Preprint.
Examples
momRecursion(order = 12)
# print out the coefficients matrix
momRecursion(order = 12, printMatrix = TRUE)
nervePulse Intervals Between Pulses Along a Nerve Fibre
Description
Times between successive electric pulses on the surface of isolated muscle fibres.
Usage
data(nervePulse)
Format
The nervePulse data is a vector with 799 observations.
Details
The end-plates of resting muscle fibres are the seat of spontaneous electric discharges. The occurrence of these spontaneous discharges at apparently normal synapses is studied in depth in Fatt and Katz (1952). The frequency and amplitude of these discharges were recorded. The times between successive discharges were taken in milliseconds and converted into the number of 1/50 sec intervals between successive pulses. There are 799 observations.
Source
<NAME>., <NAME>. (1952) Spontaneous subthreshold activity at motor nerve endings. J. of Physiology, 117, 109–128.
<NAME>. (1982) Statistical Properties of the Generalized Inverse Gaussian Distribution. Lecture Notes in Statistics, Vol. 9, Springer-Verlag, New York.
Examples
data(nervePulse)
str(nervePulse)
### Fit the generalized inverse Gaussian distribution
gigFit(nervePulse)
NIG Normal Inverse Gaussian Distribution
Description
Density function, distribution function, quantiles and random number generation for the normal inverse Gaussian distribution with parameter vector param. Utility routines are included for the derivative of the density function and to find suitable break points for use in determining the distribution function.
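As an illustrative aside (not one of the package examples), the NIG functions are wrappers for the generalized hyperbolic functions with λ = −1/2; the five-element parameter order c(mu, delta, alpha, beta, lambda) for dghyp is assumed here:

param <- c(0, 2, 1, 0)                    # c(mu, delta, alpha, beta)
x <- seq(-4, 4, length.out = 9)
dnig(x, param = param)
dghyp(x, param = c(param, -1/2))          # should agree with dnig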
Usage
dnig(x, mu = 0, delta = 1, alpha = 1, beta = 0,
     param = c(mu, delta, alpha, beta))
pnig(q, mu = 0, delta = 1, alpha = 1, beta = 0,
     param = c(mu, delta, alpha, beta),
     lower.tail = TRUE, subdivisions = 100,
     intTol = .Machine$double.eps^0.25, valueOnly = TRUE, ...)
qnig(p, mu = 0, delta = 1, alpha = 1, beta = 0,
     param = c(mu, delta, alpha, beta),
     lower.tail = TRUE, method = c("spline","integrate"),
     nInterpol = 501, uniTol = .Machine$double.eps^0.25,
     subdivisions = 100, intTol = uniTol, ...)
rnig(n, mu = 0, delta = 1, alpha = 1, beta = 0,
     param = c(mu, delta, alpha, beta))
ddnig(x, mu = 0, delta = 1, alpha = 1, beta = 0,
      param = c(mu, delta, alpha, beta))
Arguments
x,q Vector of quantiles.
p Vector of probabilities.
n Number of observations to be generated.
mu µ is the location parameter. By default this is set to 0.
delta δ is the scale parameter of the distribution. A default value of 1 has been set.
alpha α is the tail parameter, with a default value of 1.
beta β is the skewness parameter, by default this is 0.
param Parameter vector taking the form c(mu, delta, alpha, beta).
method Character. If "spline", quantiles are found from a spline approximation to the distribution function. If "integrate", the distribution function used is always obtained by integration.
lower.tail Logical. If lower.tail = TRUE, the cumulative density is taken from the lower tail.
subdivisions The maximum number of subdivisions used to integrate the density and determine the accuracy of the distribution function calculation.
intTol Value of rel.tol and hence abs.tol in calls to integrate. See integrate.
valueOnly Logical. If valueOnly = TRUE calls to pghyp only return the value obtained for the integral. If valueOnly = FALSE an estimate of the accuracy of the numerical integration is also returned.
nInterpol Number of points used in qghyp for cubic spline interpolation of the distribution function.
uniTol Value of tol in calls to uniroot. See uniroot.
... Passes arguments to uniroot. See Details.
Details
The normal inverse Gaussian distribution has density
f(x) = (αδ / π) e^(δ√(α² − β²)) K₁(α√(δ² + (x − µ)²)) e^(β(x − µ)) / √(δ² + (x − µ)²)
where K₁() is the modified Bessel function of the third kind with order 1.
A succinct description of the normal inverse Gaussian distribution is given in Paolella (2007). Because both the normal inverse Gaussian distribution and the hyperbolic distribution are special cases of the generalized hyperbolic distribution (with different values of λ), the normal inverse Gaussian distribution has the same sets of parameterizations as the hyperbolic distribution. Therefore one can use hyperbChangePars to interchange between different parameterizations for the normal inverse Gaussian distribution as well (see hyperbChangePars for details). Each of the functions is a wrapper for its equivalent generalized hyperbolic counterpart. For example, dnig calls dghyp.
pnig breaks the real line into eight regions in order to determine the integral of dnig. The break points determining the regions are found by nigBreaks, based on the values of small, tiny, and deriv. In the extreme tails of the distribution where the probability is tiny according to nigCalcRange, the probability is taken to be zero. In the range between where the probability is tiny and small according to nigCalcRange, an exponential approximation to the hyperbolic distribution is used. In the inner part of the distribution, the range is divided into 4 regions, 2 above the mode, and 2 below.
On each side of the mode, the break point which forms the 2 regions is where the derivative of the density function is deriv times the maximum value of the derivative on that side of the mode. In each of the 4 inner regions the numerical integration routine safeIntegrate (which is a wrapper for integrate) is used to integrate the density dnig. qnig uses the breakup of the real line into the same 8 regions as pnig. For quantiles which fall in the 2 extreme regions, the quantile is returned as -Inf or Inf as appropriate. In the range between where the probability is tiny and small according to nigCalcRange, an exponential approximation to the hyperbolic distribution is used from which the quantile may be found in closed form. In the 4 inner regions splinefun is used to fit values of the distribution function generated by pnig. The quantiles are then found using the uniroot function. pnig and qnig may generally be expected to be accurate to 5 decimal places. Recall that the normal inverse Gaussian distribution is a special case of the generalized hyperbolic distribution and the generalized hyperbolic distribution can be represented as a particular mixture of the normal distribution where the mixing distribution is the generalized inverse Gaussian. rnig uses this representation to generate observations from the normal inverse Gaussian distribution. Generalized inverse Gaussian observations are obtained via the algorithm of Dagpunar (1989). Value dnig gives the density, pnig gives the distribution function, qnig gives the quantile function and rnig generates random variates. An estimate of the accuracy of the approximation to the distribu- tion function may be found by setting accuracy = TRUE in the call to pnig which then returns a list with components value and error. ddnig gives the derivative of dnig. Author(s) <NAME> <<EMAIL>>, <NAME> References <NAME>. and <NAME> (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley. Paolella, <NAME>. (2007) Intermediate Probability: A Computational Approach, Chichester: Wiley Prause, K. (1999) The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg. See Also safeIntegrate, integrate for its shortfalls, splinefun, uniroot and hyperbChangePars for changing parameters to the (α, β) parameterization, dghyp for the generalized hyperbolic distribu- tion. 
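The mixture representation used by rnig, described above, can be sketched directly. This is an illustration only: it assumes that rgig takes param = c(chi, psi, lambda) in the same way as the generalized inverse Gaussian functions documented elsewhere in this manual, and that the normal inverse Gaussian case corresponds to lambda = -1/2 with chi = delta^2 and psi = alpha^2 - beta^2; that mapping is an assumption of the sketch, not a statement taken from this entry.

## Illustrative sketch of the normal mean-variance mixture construction:
## draw Z from the generalized inverse Gaussian with lambda = -1/2, set
## X = mu + beta * Z + sqrt(Z) * N(0, 1), and compare with dnig().
mu <- 0; delta <- 2; alpha <- 1; beta <- 0.5
zMix <- rgig(2000, param = c(delta^2, alpha^2 - beta^2, -1/2))
xSample <- mu + beta * zMix + sqrt(zMix) * rnorm(2000)
hist(xSample, freq = FALSE, breaks = 40)
curve(dnig(x, param = c(mu, delta, alpha, beta)), add = TRUE, col = "red")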
Examples param <- c(0, 2, 1, 0) nigRange <- nigCalcRange(param = param, tol = 10^(-3)) par(mfrow = c(1, 2)) curve(dnig(x, param = param), from = nigRange[1], to = nigRange[2], n = 1000) title("Density of the\n Normal Inverse Gaussian Distribution") curve(pnig(x, param = param), from = nigRange[1], to = nigRange[2], n = 1000) title("Distribution Function of the\n Normal Inverse Gaussian Distribution") dataVector <- rnig(500, param = param) curve(dnig(x, param = param), range(dataVector)[1], range(dataVector)[2], n = 500) hist(dataVector, freq = FALSE, add =TRUE) title("Density and Histogram\n of the Normal Inverse Gaussian Distribution") DistributionUtils::logHist(dataVector, main = "Log-Density and Log-Histogram\n of the Normal Inverse Gaussian Distribution") curve(log(dnig(x, param = param)), add = TRUE, range(dataVector)[1], range(dataVector)[2], n = 500) par(mfrow = c(2, 1)) curve(dnig(x, param = param), from = nigRange[1], to = nigRange[2], n = 1000) title("Density of the\n Normal Inverse Gaussian Distribution") curve(ddnig(x, param = param), from = nigRange[1], to = nigRange[2], n = 1000) title("Derivative of the Density\n of the Normal Inverse Gaussian Distribution") nigCalcRange Range of a normal inverse Gaussian Distribution Description Given the parameter vector param of a normal inverse Gaussian distribution, this function calculates the range outside of which the distribution has negligible probability, or the density function is negligible, to a specified tolerance. The parameterization used is the (α, β) one (see dnig). To use another parameterization, use hyperbChangePars. Usage nigCalcRange(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), tol = 10^(-5), density = TRUE, ...) Arguments mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. param Value of parameter vector specifying the normal inverse Gaussian distribution. This takes the form c(mu, delta, alpha, beta). tol Tolerance. density Logical. If FALSE, the bounds are for the probability distribution. If TRUE, they are for the density function. ... Extra arguments for calls to uniroot. Details The particular normal inverse Gaussian distribution being considered is specified by the parameter value param. If density = FALSE, the function calculates the effective range of the distribution, which is used in calculating the distribution function and quantiles, and may be used in determining the range when plotting the distribution. By effective range is meant that the probability of an observation being greater than the upper end is less than the specified tolerance tol. Likewise for being smaller than the lower end of the range. Note that this has not been implemented yet. If density = TRUE, the function gives a range, outside of which the density is less than the given tolerance. Useful for plotting the density. Value A two-component vector giving the lower and upper ends of the range. Author(s) <NAME> <<EMAIL>>, <NAME> References <NAME>. and <NAME> (1983). Hyperbolic distributions. In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley. Paolella, <NAME>. 
(2007) Intermediate Probability: A Computational Approach, Chichester: Wiley See Also dnig, hyperbChangePars Examples par(mfrow = c(1, 2)) param <- c(0, 1, 3, 1) nigRange <- nigCalcRange(param = param, tol = 10^(-3)) nigRange curve(pnig(x, param = param), nigRange[1], nigRange[2]) maxDens <- dnig(nigMode(param = param), param = param) nigRange <- nigCalcRange(param = param, tol = 10^(-3) * maxDens, density = TRUE) nigRange curve(dnig(x, param = param), nigRange[1], nigRange[2]) nigFit Fit the normal inverse Gaussian Distribution to Data Description Fits a normal inverse Gaussian distribution to data. Displays the histogram, log-histogram (both with fitted densities), Q-Q plot and P-P plot for the fit which has the maximum likelihood. Usage nigFit(x, freq = NULL, paramStart = NULL, startMethod = c("Nelder-Mead","BFGS"), startValues = c("FN","Cauchy","MoM","US"), criterion = "MLE", method = c("Nelder-Mead","BFGS","nlm", "L-BFGS-B","nlminb","constrOptim"), plots = FALSE, printOut = FALSE, controlBFGS = list(maxit = 200), controlNM = list(maxit = 1000), maxitNLM = 1500, controlLBFGSB = list(maxit = 200), controlNLMINB = list(), controlCO = list(), ...) ## S3 method for class 'nigFit' print(x, digits = max(3, getOption("digits") - 3), ...) ## S3 method for class 'nigFit' plot(x, which = 1:4, plotTitles = paste(c("Histogram of ","Log-Histogram of ", "Q-Q Plot of ","P-P Plot of "), x$obsName, sep = ""), ask = prod(par("mfcol")) < length(which) & dev.interactive(), ...) ## S3 method for class 'nigFit' coef(object, ...) ## S3 method for class 'nigFit' vcov(object, ...) Arguments x Data vector for nigFit. Object of class "nigFit" for print.nigFit and plot.nigFit. freq A vector of weights with length equal to length(x). paramStart A user specified starting parameter vector param taking the form c(mu, delta, alpha, beta). startMethod Method used by nigFitStart in calls to optim. startValues Code giving the method of determining starting values for finding the maximum likelihood estimate of param. criterion Currently only "MLE" is implemented. method Different optimisation methods to consider. See Details. plots Logical. If FALSE suppresses printing of the histogram, log-histogram, Q-Q plot and P-P plot. printOut Logical. If FALSE suppresses printing of results of fitting. controlBFGS A list of control parameters for optim when using the "BFGS" optimisation. controlNM A list of control parameters for optim when using the "Nelder-Mead" optimi- sation. maxitNLM A positive integer specifying the maximum number of iterations when using the "nlm" optimisation. controlLBFGSB A list of control parameters for optim when using the "L-BFGS-B" optimisation. controlNLMINB A list of control parameters for nlminb when using the "nlminb" optimisation. controlCO A list of control parameters for constrOptim when using the "constrOptim" optimisation. digits Desired number of digits when the object is printed. which If a subset of the plots is required, specify a subset of the numbers 1:4. plotTitles Titles to appear above the plots. ask Logical. If TRUE, the user is asked before each plot, see par(ask = .). ... Passes arguments to par, hist, logHist, qqnig and ppnig. object Object of class "nigFit" for coef.nigFit and for vcov.nigFit. Details startMethod can be either "BFGS" or "Nelder-Mead". startValues can be one of the following: • "US"User-supplied. • "FN"A fitted normal distribution. • "Cauchy"Based on a fitted Cauchy distribution. • "MoM"Method of moments. 
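The options listed above in use, as a brief sketch; the simulated data and parameter values are illustrative only.

## Illustrative sketch of the startValues options; "FN" is the default.
simPar <- c(0, 1, 1, 0)                      # c(mu, delta, alpha, beta)
x <- rnig(500, param = simPar)
nigFit(x)                                    # startValues = "FN"
nigFit(x, startValues = "Cauchy")
nigFit(x, startValues = "MoM", startMethod = "BFGS")
nigFit(x, startValues = "US", paramStart = simPar)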
For the details concerning the use of paramStart, startMethod, and startValues, see nigFitStart. The three optimisation methods currently available are: • "BFGS"Uses the quasi-Newton method "BFGS" as documented in optim. • "Nelder-Mead"Uses an implementation of the Nelder and Mead method as documented in optim. • "nlm"Uses the nlm function in R. For details of how to pass control information for optimisation using optim and nlm, see optim and nlm. When method = "nlm" is used, warnings may be produced. These do not appear to be a problem. Value A list with components: param A vector giving the maximum likelihood estimate of param, as c(mu, delta, alpha, beta). maxLik The value of the maximised log-likelihood. method Optimisation method used. conv Convergence code. See the relevant documentation (either optim or nlm) for details on convergence. iter Number of iterations of optimisation routine. x The data used to fit the normal inverse Gaussian distribution. xName A character string with the actual x argument name. paramStart Starting value of param returned by call to nigFitStart. svName Descriptive name for the method finding start values. startValues Acronym for the method of finding start values. breaks The cell boundaries found by a call to hist. midpoints The cell midpoints found by a call to hist. empDens The estimated density found by a call to hist. Author(s) <NAME> <<EMAIL>>, <NAME> References Barndorff-Nielsen, O. (1977) Exponentially decreasing distributions for the logarithm of particle size, Proc. Roy. Soc. Lond., A353, 401–419. <NAME>., <NAME>. and <NAME>. (1992) Statistics of particle size data. Appl. Statist., 41, 127–146. Paolella, <NAME>. (2007) Intermediate Probability: A Computational Approach, Chichester: Wiley See Also optim, nlm, par, hist, logHist, qqnig, ppnig, dskewlap and nigFitStart. Examples param <- c(2, 2, 2, 1) dataVector <- rnig(500, param = param) ## See how well nigFit works nigFit(dataVector) nigFit(dataVector, plots = TRUE) fit <- nigFit(dataVector) par(mfrow = c(1, 2)) plot(fit, which = c(1, 3)) ## Use nlm instead of default nigFit(dataVector, method = "nlm") nigFitStart Find Starting Values for Fitting a normal inverse Gaussian Distribu- tion Description Finds starting values for input to a maximum likelihood routine for fitting normal inverse Gaussian distribution to data. Usage nigFitStart(x, startValues = c("FN","Cauchy","MoM","US"), paramStart = NULL, startMethodMoM = c("Nelder-Mead","BFGS"), ...) nigFitStartMoM(x, startMethodMoM = "Nelder-Mead", ...) Arguments x data vector. startValues a character strin specifying the method for starting values to consider. See Details. paramStart starting values for param if startValues = "US". startMethodMoM Method used by call to optim in finding method of moments estimates. ... Passes arguments to hist and optim. Details Possible values of the argument startValues are the following: • "US"User-supplied. • "FN"A fitted normal distribution. • "Cauchy"Based on a fitted Cauchy distribution, from fitdistr() of the MASS package. • "MoM"Method of moments. If startValues = "US" then a value must be supplied for paramStart. If startValues = "MoM", nigFitStartMoM is called. If startValues = "MoM" an initial optimisa- tion is needed to find the starting values. These optimisations call optim. Value nigFitStart returns a list with components: paramStart A vector with elements mu, delta, alpha and beta giving the starting value of param. xName A character string with the actual x argument name. 
breaks The cell boundaries found by a call to hist. midpoints The cell midpoints found by a call to hist. empDens The estimated density found by a call to hist. nigFitStartMoM returns only the method of moments estimates as a vector with elements mu, delta, alpha and beta. Author(s) <NAME> <<EMAIL>>, <NAME> References Barndorff-Nielsen, O. (1977) Exponentially decreasing distributions for the logarithm of particle size, Proc. Roy. Soc. Lond., A353, 401–419. <NAME>., <NAME>., <NAME>., and <NAME>. (1985). The fascination of sand. In A celebration of statistics, The ISI Centenary Volume, eds., <NAME>. and <NAME>., pp. 57–87. New York: Springer-Verlag. <NAME>., <NAME>. and <NAME>. (1992) Statistics of particle size data. Appl. Statist., 41, 127–146. See Also dnig, dskewlap, nigFit, hist, optim, fitdistr. Examples param <- c(2, 2, 2, 1) dataVector <- rnig(500, param = param) nigFitStart(dataVector, startValues = "FN") nigFitStartMoM(dataVector) nigFitStart(dataVector, startValues = "MoM") nigHessian Calculate Two-Sided Hessian for the Normal Inverse Gaussian Distri- bution Description Calculates the Hessian of a function, either exactly or approximately. Used to obtaining the infor- mation matrix for maximum likelihood estimation. Usage nigHessian(x, param, hessianMethod = "tsHessian", whichParam = 1:5, ...) Arguments x Data vector. param The maximum likelihood estimates parameter vector of the normal inverse Gaus- sian distribution. The normal inverse Gaussian distribution has the same sets of parameterizations as the hyperbolic distribution.There are five different sets of parameterazations can be used in this function, the first four sets are listed in hyperbChangePars and the last set is the log scale of the first set of the param- eterization, i.e., mu,log(delta),Pi,log(zeta). hessianMethod Only the approximate method ("tsHessian") has actually been implemented so far. whichParam Numeric. A number between 1 to 5 indicating which set of the parameterization is the specified value in argument param belong to. ... Values of other parameters of the function fun if required. Details The approximate Hessian is obtained via a call to tsHessian from the package DistributionUtils. summary.nigFit calls the function nigHessian to calculate the Hessian matrix when the argument hessian = TRUE. Value nigHessian gives the approximate or exact Hessian matrix for the data vector x and the estimated parameter vector param. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> Examples ### Calculate the exact Hessian using nigHessian: param <- c(2, 2, 2, 1) dataVector <- rnig(500, param = param) fit <- nigFit(dataVector, method = "BFGS") coef=coef(fit) nigHessian(x=dataVector, param=coef, hessianMethod = "tsHessian", whichParam = 2) ### Or calculate the exact Hessian using summary.nigFit method: ### summary(fit, hessian = TRUE) ## Calculate the approximate Hessian: summary(fit, hessian = TRUE, hessianMethod = "tsHessian") nigParam Parameter Sets for the Normal Inverse Gaussian Distribution Description These objects store different parameter sets of the normal inverse Gaussian distribution as matrices for testing or demonstration purposes. The parameter sets nigSmallShape and nigLargeShape have a constant location parameter of µ = 0, and constant scale parameter δ = 1. In nigSmallParam and nigLargeParam the values of the location and scale parameters vary. In these parameter sets the location parameter µ = 0 takes values from {0, 1} and {-1, 0, 1, 2} respectively. 
For the scale parameter δ, values are drawn from {1, 5} and {1, 2, 5, 10} respectively. For the shape parameters α and β the approach is more complex. The values for these shape parameters were chosen by choosing values of ξ and χ which range over the shape triangle, then the function nigChangePars was applied to convert them to the α, β parameterization. The resulting α, β values were then rounded to three decimal places. See the examples for the values of ξ and χ for the large parameter sets. Usage nigSmallShape nigLargeShape nigSmallParam nigLargeParam Format nigSmallShape: a 7 by 4 matrix; nigLargeShape: a 15 by 4 matrix; nigSmallParam: a 28 by 4 matrix; nigLargeParam: a 240 by 4 matrix. Author(s) <NAME> <<EMAIL>> Examples data(nigParam) plotShapeTriangle() xis <- rep(c(0.1,0.3,0.5,0.7,0.9), 1:5) chis <- c(0,-0.25,0.25,-0.45,0,0.45,-0.65,-0.3,0.3,0.65, -0.85,-0.4,0,0.4,0.85) points(chis, xis, pch = 20, col = "red") ## Testing the accuracy of nigMean for (i in 1:nrow(nigSmallParam)) { param <- nigSmallParam[i, ] x <- rnig(1000, param = param) sampleMean <- mean(x) funMean <- nigMean(param = param) difference <- abs(sampleMean - funMean) print(difference) } nigPlots Normal inverse Gaussian Quantile-Quantile and Percent-Percent Plots Description qqnig produces a normal inverse Gaussian Q-Q plot of the values in y. ppnig produces a normal inverse Gaussian P-P (percent-percent) or probability plot of the values in y. Graphical parameters may be given as arguments to qqnig, and ppnig. Usage qqnig(y, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), main = "Normal inverse Gaussian Q-Q Plot", xlab = "Theoretical Quantiles", ylab = "Sample Quantiles", plot.it = TRUE, line = TRUE, ...) ppnig(y, mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta), main = "Normal inverse Gaussian P-P Plot", xlab = "Uniform Quantiles", ylab = "Probability-integral-transformed Data", plot.it = TRUE, line = TRUE, ...) Arguments y The data sample. mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. param Parameters of the normal inverse Gaussian distribution. xlab, ylab, main Plot labels. plot.it Logical. Should the result be plotted? line Add line through origin with unit slope. ... Further graphical parameters. Value For qqnig and ppnig, a list with components: x The x coordinates of the points that are to be plotted. y The y coordinates of the points that are to be plotted. References <NAME>. and <NAME>. (1968) Probability plotting methods for the analysis of data. Biometrika. 55, 1–17. See Also ppoints, dnig, nigFit Examples par(mfrow = c(1, 2)) param <- c(2, 2, 2, 1.5) y <- rnig(200, param = param) qqnig(y, param = param, line = FALSE) abline(0, 1, col = 2) ppnig(y, param = param) plotShapeTriangle Plot the Shape Triangle Description Plots the shape triangle for a hyperbolic distribution or generalized hyperbolic distribution. For the hyperbolic distribution the parameter χ is related to the skewness, and the parameter ξ is related to the kurtosis. See <NAME>. and <NAME>. (1981). Usage plotShapeTriangle(xgap = 0.025, ygap = 0.0625/2, main = "Shape Triangle", ...) Arguments xgap Gap between the left- and right-hand edges of the shape triangle and the border surrounding the graph. 
ygap Gap between the top and bottom of the shape triangle and the border surrounding the graph. main Title for the plot. ... Values of other graphical parameters. Author(s) <NAME> <<EMAIL>> References <NAME>. and <NAME> (1981). Hyperbolic distributions and ramifications: contribu- tions to theory and application. In Statistical Distributions in Scientific Work, eds., <NAME>., Patil, <NAME>., and <NAME>., Vol. 4, pp. 19–44. Dordrecht: Reidel. Examples plotShapeTriangle() resistors Resistance of One-half-ohm Resistors Description This data set gives the resistance in ohms of 500 nominally one-half-ohm resistors, presented in Hahn and Shapiro (1967). Summary data giving the frequency of observations in 28 intervals. Usage data(resistors) Format The resistors data frame has 28 rows and 2 columns. [, 1] midpoints midpoints of intervals (ohm) [, 2] counts number of observations in interval Source Hahn, <NAME>. and Shapiro, <NAME>. (1967) Statistical Models in Engineering. New York: Wiley, page 207. References Chen, Hanfeng, and Kamburowska, Grazyna (2001) Fitting data to the Johnson system. J. Statist. Comput. Simul. 70, 21–32. Examples data(resistors) str(resistors) ### Construct data from frequency summary, taking all observations ### at midpoints of intervals resistances <- rep(resistors$midpoints, resistors$counts) hist(resistances) DistributionUtils::logHist(resistances) ## Fit the hyperbolic distribution hyperbFit(resistances) ## Actually fit.hyperb can deal with frequency data hyperbFit(resistors$midpoints, freq = resistors$counts) SandP500 S\&P 500 Description This data set gives the value of Standard and Poor’s most notable stock market price index (the S&P 500) at year end, from 1800 to 2001. Usage data(SandP500) Format A vector of 202 observations. Source At the time, http://www.globalfindata.com which no longer exists. References Brown, <NAME>., <NAME>. and <NAME>. (2002) The log F: a distribution for all seasons. Computational Statistics, 17, 47–58. Examples data(SandP500) ### Consider proportional changes in the index change <- SandP500[-length(SandP500)] / SandP500[-1] hist(change) ### Fit hyperbolic distribution to changes hyperbFit(change) SkewLaplace Skew-Laplace Distribution Description Density function, distribution function, quantiles and random number generation for the skew- Laplace distribution. Usage dskewlap(x, mu = 0, alpha = 1, beta = 1, param = c(mu, alpha, beta), logPars = FALSE) pskewlap(q, mu = 0, alpha = 1, beta = 1, param = c(mu, alpha, beta)) qskewlap(p, mu = 0, alpha = 1, beta = 1, param = c(mu, alpha, beta)) rskewlap(n, mu = 0, alpha = 1, beta = 1, param = c(mu, alpha, beta)) Arguments x, q Vector of quantiles. p Vector of probabilities. n Number of observations to be generated. mu The location parameter, set to 0 by default. alpha, beta The shape parameters, both set to 1 by default. param Vector of parameters of the skew-Laplace distribution: µ, α and β. logPars Logical. If TRUE the second and third components of param are taken to be log(α) and log(β) respectively. Details The central skew-Laplace has mode zero, and is a mixture of a (negative) exponential distribution with mean β, and the negative of an exponential distribution with mean α. The weights of the positive and negative components are proportional to their means. The general skew-Laplace distribution is a shifted central skew-Laplace distribution, where the mode is given by µ. 
The density is given by: $$f(x) = \frac{e^{(x-\mu)/\alpha}}{\alpha+\beta} \quad \text{for } x \le \mu, \qquad f(x) = \frac{e^{-(x-\mu)/\beta}}{\alpha+\beta} \quad \text{for } x \ge \mu.$$ Value dskewlap gives the density, pskewlap gives the distribution function, qskewlap gives the quantile function and rskewlap generates random variates. The distribution function is obtained by elementary integration of the density function. Random variates are generated from exponential observations using the characterization of the skew-Laplace as a mixture of exponential observations. Author(s) <NAME> <<EMAIL>>, <NAME>, <NAME> References <NAME>., <NAME>. and <NAME>. (1992) Statistics of particle size data. Appl. Statist., 41, 127–146. See Also hyperbFitStart Examples param <- c(1, 1, 2) par(mfrow = c(1, 2)) curve(dskewlap(x, param = param), from = -5, to = 8, n = 1000) title("Density of the\n Skew-Laplace Distribution") curve(pskewlap(x, param = param), from = -5, to = 8, n = 1000) title("Distribution Function of the\n Skew-Laplace Distribution") dataVector <- rskewlap(500, param = param) curve(dskewlap(x, param = param), range(dataVector)[1], range(dataVector)[2], n = 500) hist(dataVector, freq = FALSE, add = TRUE) title("Density and Histogram\n of the Skew-Laplace Distribution") DistributionUtils::logHist(dataVector, main = "Log-Density and Log-Histogram\n of the Skew-Laplace Distribution") curve(log(dskewlap(x, param = param)), add = TRUE, range(dataVector)[1], range(dataVector)[2], n = 500)

SkewLaplacePlots Skew-Laplace Quantile-Quantile and Percent-Percent Plots Description qqskewlap produces a skew-Laplace Q-Q plot of the values in y. ppskewlap produces a skew-Laplace P-P (percent-percent) or probability plot of the values in y. If line = TRUE, a line with zero intercept and unit slope is added to the plot. Graphical parameters may be given as arguments to qqskewlap, and ppskewlap. Usage qqskewlap(y, mu = 0, alpha = 1, beta = 1, param = c(mu, alpha, beta), main = "Skew-Laplace Q-Q Plot", xlab = "Theoretical Quantiles", ylab = "Sample Quantiles", plot.it = TRUE, line = TRUE, ...) ppskewlap(y, mu = 0, alpha = 1, beta = 1, param = c(mu, alpha, beta), main = "Skew-Laplace P-P Plot", xlab = "Uniform Quantiles", ylab = "Probability-integral-transformed Data", plot.it = TRUE, line = TRUE, ...) Arguments y The data sample. mu The location parameter, set to 0 by default. alpha, beta The shape parameters, both set to 1 by default. param Parameters of the skew-Laplace distribution. xlab, ylab, main Plot labels. plot.it Logical. If TRUE, the results are plotted. line Logical. If TRUE, a line with zero intercept and unit slope is added to the plot. ... Further graphical parameters. Value For qqskewlap and ppskewlap, a list with components: x The x coordinates of the points that are to be plotted. y The y coordinates of the points that are to be plotted. References Wilk, <NAME>. and <NAME>. (1968) Probability plotting methods for the analysis of data. Biometrika. 55, 1–17. See Also ppoints, dskewlap. Examples par(mfrow = c(1, 2)) y <- rskewlap(1000, param = c(2, 0.5, 1)) qqskewlap(y, param = c(2, 0.5, 1), line = FALSE) abline(0, 1, col = 2) ppskewlap(y, param = c(2, 0.5, 1))

Specific Generalized Hyperbolic Moments and Mode Moments and Mode of the Generalized Hyperbolic Distribution Description Functions to calculate the mean, variance, skewness, kurtosis and mode of a specific generalized hyperbolic distribution.
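Because the normal inverse Gaussian distribution is the λ = -1/2 special case of the generalized hyperbolic distribution (as noted in the NIG entries of this manual), a small consistency sketch, using the parameter orderings shown in the Usage sections of the two moments entries; the parameter values are illustrative only.

## Illustrative check: the generalized hyperbolic moment functions with
## lambda = -1/2 should reproduce the normal inverse Gaussian ones.
par4 <- c(2, 2, 2, 1)                        # c(mu, delta, alpha, beta)
par5 <- c(par4, -1/2)                        # c(mu, delta, alpha, beta, lambda)
c(ghyp = ghypMean(param = par5), nig = nigMean(param = par4))
c(ghyp = ghypVar(param = par5),  nig = nigVar(param = par4))
c(ghyp = ghypKurt(param = par5), nig = nigKurt(param = par4))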
Usage ghypMean(mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda)) ghypVar(mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda)) ghypSkew(mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda)) ghypKurt(mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda)) ghypMode(mu = 0, delta = 1, alpha = 1, beta = 0, lambda = 1, param = c(mu, delta, alpha, beta, lambda)) Arguments mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. lambda λ is the shape parameter and dictates the shape that the distribution shall take. Default value is 1. param Parameter vector of the generalized hyperbolic distribution. Value ghypMean gives the mean of the generalized hyperbolic distribution, ghypVar the variance, ghypSkew the skewness, ghypKurt the kurtosis, and ghypMode the mode. The formulae used for the mean is given in Prause (1999). The variance, skewness and kurtosis are obtained using the recursive for- mula implemented in ghypMom which can calculate moments of all orders about any point. The mode is found by a numerical optimisation using optim. For the special case of the hyperbolic distribution a formula for the mode is available, see hyperbMode. The parameterization of the generalized hyperbolic distribution used for these functions is the (α, β) one. See ghypChangePars to transfer between parameterizations. Author(s) <NAME> <<EMAIL>>, <NAME> 84 Specific Generalized Inverse Gaussian Moments and Mode References Prause, K. (1999) The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg. See Also dghyp, ghypChangePars, besselK, RLambda. Examples param <- c(2, 2, 2, 1, 2) ghypMean(param = param) ghypVar(param = param) ghypSkew(param = param) ghypKurt(param = param) ghypMode(param = param) maxDens <- dghyp(ghypMode(param = param), param = param) ghypRange <- ghypCalcRange(param = param, tol = 10^(-3) * maxDens) curve(dghyp(x, param = param), ghypRange[1], ghypRange[2]) abline(v = ghypMode(param = param), col = "blue") abline(v = ghypMean(param = param), col = "red") Specific Generalized Inverse Gaussian Moments and Mode Moments and Mode of the Generalized Inverse Gaussian Distribution Description Functions to calculate the mean, variance, skewness, kurtosis and mode of a specific generalized inverse Gaussian distribution. Usage gigMean(chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda)) gigVar(chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda)) gigSkew(chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda)) gigKurt(chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda)) gigMode(chi = 1, psi = 1, lambda = 1, param = c(chi, psi, lambda)) Specific Generalized Inverse Gaussian Moments and Mode 85 Arguments chi A shape parameter that by default holds a value of 1. psi Another shape parameter that is set to 1 by default. lambda Shape parameter of the GIG distribution. Common to all forms of parameteri- zation. By default this is set to 1. param Parameter vector of the generalized inverse Gaussian distribution. 
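A small sampling sketch of these functions, in the spirit of the nigMean accuracy test shown earlier in this manual; the parameter values follow the example later in this entry and the sample size is arbitrary.

## Illustrative sketch: compare gigMean()/gigVar() with sample moments of
## draws from rgig(); agreement should be approximate, not exact.
par3 <- c(5, 2.5, -0.5)                      # c(chi, psi, lambda)
zGig <- rgig(10000, param = par3)
c(sample = mean(zGig), theoretical = gigMean(param = par3))
c(sample = var(zGig),  theoretical = gigVar(param = par3))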
Value gigMean gives the mean of the generalized inverse Gaussian distribution, gigVar the variance, gigSkew the skewness, gigKurt the kurtosis, and gigMode the mode. The formulae used are as given in Jorgensen (1982), pp. 13–17. Note that the kurtosis is the standardised fourth cumulant or what is sometimes called the kurtosis excess. (See http://mathworld.wolfram.com/Kurtosis. html for a discussion.) The parameterization used for the generalized inverse Gaussian distribution is the (χ, ψ) one (see dgig). To use another parameterization, use gigChangePars. Author(s) <NAME> <<EMAIL>> References Jorgensen, B. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Lec- ture Notes in Statistics, Vol. 9, Springer-Verlag, New York. See Also dgig, gigChangePars, besselK Examples param <- c(5, 2.5, -0.5) gigMean(param = param) gigVar(param = param) gigSkew(param = param) gigKurt(param = param) gigMode(param = param) Specific Hyperbolic Distribution Moments and Mode Moments and Mode of the Hyperbolic Distribution Description Functions to calculate the mean, variance, skewness, kurtosis and mode of a specific hyperbolic distribution. Usage hyperbMean(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) hyperbVar(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) hyperbSkew(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) hyperbKurt(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) hyperbMode(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) Arguments mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. param Parameter vector of the hyperbolic distribution. Details The formulae used for the mean, variance and mode are as given in Barndorff-Nielsen and Blæsild (1983), p. 702. The formulae used for the skewness and kurtosis are those of Barndorff-Nielsen and Blæsild (1981), Appendix 2. Note that the variance, skewness and kurtosis can be obtained from the functions for the generalized hyperbolic distribution as special cases. Likewise other moments can be obtained from the function ghypMom which implements a recursive method to moments of any desired order. Note that functions for the generalized hyperbolic distribution use a different parameterization, so care is required. Value hyperbMean gives the mean of the hyperbolic distribution, hyperbVar the variance, hyperbSkew the skewness, hyperbKurt the kurtosis and hyperbMode the mode. Note that the kurtosis is the standardised fourth cumulant or what is sometimes called the kurtosis excess. (See http://mathworld.wolfram.com/Kurtosis.html for a discussion.) Specific Normal Inverse Gaussian Distribution Moments and Mode 87 The parameterization of the hyperbolic distribution used for this and other components of the GeneralizedHyperbolic package is the (α, β) one. See hyperbChangePars to transfer between parameterizations. Author(s) <NAME> <<EMAIL>>, <NAME>, <NAME> References Barndorff-Nielsen, O. and Blæsild, P (1981). Hyperbolic distributions and ramifications: contribu- tions to theory and application. In Statistical Distributions in Scientific Work, eds., <NAME>., Patil, <NAME>., and <NAME>., Vol. 4, pp. 19–44. Dordrecht: Reidel. <NAME>. and Blæsild, P (1983). Hyperbolic distributions. 
In Encyclopedia of Statis- tical Sciences, eds., <NAME>., <NAME>. and <NAME>., Vol. 3, pp. 700–707. New York: Wiley. See Also dhyperb, hyperbChangePars, besselK, ghypMom, ghypMean, ghypVar, ghypSkew, ghypKurt Examples param <- c(2, 2, 2, 1) hyperbMean(param = param) hyperbVar(param = param) hyperbSkew(param = param) hyperbKurt(param = param) hyperbMode(param = param) Specific Normal Inverse Gaussian Distribution Moments and Mode Moments and Mode of the Normal Inverse Gaussian Distribution Description Functions to calculate the mean, variance, skewness, kurtosis and mode of a specific normal inverse Gaussian distribution. Usage nigMean(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) nigVar(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) nigSkew(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) nigKurt(mu = 0, delta = 1, alpha = 1, beta = 0, 88 Specific Normal Inverse Gaussian Distribution Moments and Mode param = c(mu, delta, alpha, beta)) nigMode(mu = 0, delta = 1, alpha = 1, beta = 0, param = c(mu, delta, alpha, beta)) Arguments mu µ is the location parameter. By default this is set to 0. delta δ is the scale parameter of the distribution. A default value of 1 has been set. alpha α is the tail parameter, with a default value of 1. beta β is the skewness parameter, by default this is 0. param Parameter vector of the normal inverse Gaussian distribution. Details The mean, variance, skewness, kurtosis and mode for the normal inverse Gaussian distribution can be obtained from the functions for the generalized hyperbolic distribution as special cases (i.e., λ = -1/2). Likewise other moments can be obtained from the function ghypMom which implements a recursive method to moments of any desired order. The proper formulae for the mean, variance and skewness of the normal inverse Gaussian distribu- tion can be found in Paolella, Marc S. (2007), Chapter 9, p325. Value nigMean gives the mean of the normal inverse Gaussian distribution, nigVar the variance, nigSkew the skewness, nigKurt the kurtosis and nigMode the mode. Note that the kurtosis is the standardised fourth cumulant or what is sometimes called the kurtosis excess. (See http://mathworld.wolfram.com/Kurtosis.html for a discussion.) The parameterization of the normal inverse Gaussian distribution used for this and other components of the GeneralizedHyperbolic package is the (α, β) one. See hyperbChangePars to transfer between parameterizations. Author(s) <NAME> <<EMAIL>>, <NAME> References Paolella, Marc S. (2007) Intermediate Probability: A Computational Approach, Chichester: Wiley See Also dnig, hyperbChangePars, besselK, ghypMom, ghypMean, ghypVar, ghypSkew, ghypKurt Examples param <- c(2, 2, 2, 1) nigMean(param = param) nigVar(param = param) nigSkew(param = param) nigKurt(param = param) nigMode(param = param) summary.gigFit Summarizing Normal Inverse Gaussian Distribution Fit Description summary Method for class "gigFit". Usage ## S3 method for class 'gigFit' summary(object, hessian = FALSE, hessianMethod = "tsHessian", ...) ## S3 method for class 'summary.gigFit' print(x, digits = max(3, getOption("digits") - 3), ...) Arguments object An object of class "gigFit", resulting from a call to gigFit. hessian Logical. If TRUE the Hessian is printed. hessianMethod The two-sided Hessian approximation given by tsHessian from the package DistributionUtils is the only method implemented so far. 
x An object of class "summary.gigFit", resulting from a call to summary.gigFit. digits The number of significant digits to use when printing. ... Further arguments passed to or from other methods. Details If hessian = FALSE no calculations are performed, the class of object is simply changed from gigFit to summary.gigFit so that it can be passed to print.summary.gigFit for printing in a convenient form. If hessian = TRUE the Hessian is calculated via a call to gigHessian and the standard errors of the parameter estimates are calculated using the Hessian and these are added to the original list object. The class of the object returned is again changed to summary.gigFit. Value summary.gigFit returns a list comprised of the original object object and additional elements hessian and sds if hessian = TRUE, otherwise it returns the original object. The class of the object returned is changed to summary.gigFit. See gigFit for the composition of an object of class gigFit. If the Hessian and standard errors have not been added to the object x, print.summary.gigFit prints a summary in the same format as print.gigFit. When the Hessian and standard errors are available, the Hessian is printed and the standard errors for the parameter estimates are printed in parentheses beneath the parameter estimates, in the manner of fitdistr in the package MASS. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> See Also gigFit, summary, gigHessian. Examples ### Continuing the gigFit(.) example: param <- c(1,1,1) dataVector <- rgig(500, param = param) fit <- gigFit(dataVector) print(fit) summary(fit, hessian = TRUE, hessianMethod = "tsHessian") summary.hyperbFit Summarizing Hyperbolic Distribution Fit Description summary Method for class "hyperbFit". Usage ## S3 method for class 'hyperbFit' summary(object, hessian = FALSE, hessianMethod = "exact", ...) ## S3 method for class 'summary.hyperbFit' print(x, digits = max(3, getOption("digits") - 3), ...) Arguments object An object of class "hyperbFit", resulting from a call to hyperbFit. hessian Logical. If TRUE the Hessian is printed. hessianMethod Two methods are available to calculate the Hessian exactly ("exact") or ap- proximately ("tsHessian"). x An object of class "summary.hyperbFit", resulting from a call to summary.hyperbFit. digits The number of significant digits to use when printing. ... Further arguments passed to or from other methods. Details If hessian = FALSE no calculations are performed, the class of object is simply changed from hyperbFit to summary.hyperbFit so that it can be passed to print.summary.hyperbFit for printing in a convenient form. If hessian = TRUE the Hessian is calculated via a call to hyperbHessian and the standard errors of the parameter estimates are calculated using the Hessian and these are added to the original list object. The class of the object returned is again changed to summary.hyperbFit. Value summary.hyperbFit returns a list comprised of the original object object and additional elements hessian and sds if hessian = TRUE, otherwise it returns the original object. The class of the object returned is changed to summary.hyperbFit. See hyperbFit for the composition of an object of class hyperbFit. If the Hessian and standard errors have not been added to the object x, print.summary.hyperbFit prints a summary in the same format as print.hyperbFit. 
When the Hessian and standard errors are available, the Hessian is printed and the standard errors for the parameter estimates are printed in parentheses beneath the parameter estimates, in the manner of fitdistr in the package MASS. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> See Also hyperbFit, summary, hyperbHessian, tsHessian. Examples ### Continuing the hyperbFit(.) example: param <- c(2, 2, 2, 1) dataVector <- rhyperb(500, param = param) fit <- hyperbFit(dataVector, method = "BFGS") print(fit) summary(fit, hessian = TRUE) summary.hyperblm Summary Output of Hyperbolic Regression Description It obtains summary output from class ’hyperblm’ object. The summary output incldes the standard error, t-statistics, p values of the coefficients estimates. Also the estimated parameters of hyper- bolic error distribution, the maximum likelihood, the stage one optimization method, the two-stage alternating iterations and the convergence code. Usage ## S3 method for class 'hyperblm' summary(object, hessian = FALSE, nboots = 1000, ...) ## S3 method for class 'summary.hyperblm' print(x, Arguments object An object of class "hyperblm". x An object of class "summary.hyperblm" resulting from a call to summary.hyperblm. hessian Logical. If is TRUE, the standard error is calculated by the hessian matrix and the also hessian matrix is returned. Otherwise, the standard error is approximated by bootstrapping. See Details. nboots Numeric. Number of bootstrap simulations to obtain the bootstrap estimate of parameters standard errors. digits Numeric. Desired number of digits when the object is printed. ... Passes additional arguments to functions bSE, hyperblmhessian. Details The function summary.hyperblm provides two approaches to obtain the standard error of parame- ters due to the fact that approximated hessian matrix is not stable for such complex optimization. The first approach is by approximated hessian matrix. The setting in the argument list is hessian = TRUE. The Hessian matrix is approximated by function tsHessian. However it may not be reliable for some error distribution parameters, for instance, the function obtains negative variance from the Hessian matrix. The second approach is by parametric bootstrapping. The setting in the argument list is hessian = FALSE which is also the default setting. The default number of bootstrap stimula- tions is 1000, but users can increase this when accuracy has priority over efficiency. Although the bootstrapping is fairly slow, it provides reliable standard errors. Value summary.hyperblm returns an object of class summary.hyperblm which is a list containing: coefficients A names vector of regression coefficients. distributionParams A named vector of fitted hyperbolic error distribution parameters. fitted.values The fitted mean values. residuals The remaining after subtract fitted values from response. MLE The maximum likelihood value of the model. method The optimization method for stage one. paramStart The start values of parameters that the user specified (only where relevant). residsParamStart The start values of parameters returned by hyperbFitStand (only where rele- vant). call The matched call. terms The terms object used. contrasts The contrasts used (only where relevant). xlevels The levels of the factors used in the fitting (only where relevant). offset The offset used (only where relevant). xNames The names of each explanatory variables. If explanatory variables don’t have names then they shall be named x. yVec The response vector. 
xMatrix The explanatory variables matrix. iterations Number of two-stage alternating iterations to convergency. convergence The convergence code for two-stage optimization: 0 if the system converged; 1 if first stage did not converge, 2 if the second stage did not converge, 3 if the both stages did not converge. breaks The cell boundaries found by a call the hist. hessian Hessian Matrix. Only where Hessian = TRUE. tval t-statistics of regression coefficient estimates. rdf Degrees of freedom. pval P-values of regression coefficients estimates. sds Standard errors of regression coefficient estimates. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> References <NAME>. (1977). Exponentially Decreasing Distribution for the Logarithm of Particle Size. In Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, Vol. 353, pp. 401–419. <NAME>. (1999). The generalized hyperbolic models: Estimation, financial derivatives and risk measurement. PhD Thesis, Mathematics Faculty, University of Freiburg. <NAME> (2005). hypReg: A Function for Fitting a Linear Regression Model in R with Hyperbolic Error. Masters Thesis, Statistics Faculty, University of Auckland. Paolella, <NAME>. (2007). Intermediate Probability: A Compitational Approach. pp. 415 -Chichester: Wiley. Scott, <NAME>. and <NAME> and <NAME>, (2011). Fitting the Hyperbolic Distribu- tion with R: A Case Study of Optimization Techniques. In preparation. <NAME>. and <NAME>. (2003). Confidence intervals by the profile likelihood method, with applications in veterinary epidemiology. ISVEE X. See Also print.summary.hyperblm prints the summary output in a table. hyperblm fits linear model with hyperbolic error distribution. print.hyperblm prints the regression result in a table. coef.hyperblm obtains the regression coefficients and error distribution parameters of the fitted model. plot.hyperblm obtains a residual vs fitted value plot, a histgram of residuals with error distribution density curve on top, a histgram of log residuals with error distribution error density curve on top and a QQ plot. tsHessian Examples ## stackloss data example # airflow <- stackloss[, 1] # temperature <- stackloss[, 2] # acid <- stackloss[, 3] # stack <- stackloss[, 4] # hyperblm.fit <- hyperblm(stack ~ airflow + temperature + acid, # tolerance = 1e-11) # coef.hyperblm(hyperblm.fit) # plot.hyperblm(hyperblm.fit, breaks = 20) # summary.hyperblm(hyperblm.fit, hessian = FALSE) summary.nigFit Summarizing Normal Inverse Gaussian Distribution Fit Description summary Method for class "nigFit". Usage ## S3 method for class 'nigFit' summary(object, hessian = FALSE, hessianMethod = "tsHessian", ...) ## S3 method for class 'summary.nigFit' print(x, digits = max(3, getOption("digits") - 3), ...) Arguments object An object of class "nigFit", resulting from a call to nigFit. hessian Logical. If TRUE the Hessian is printed. hessianMethod The two-sided Hessian approximation given by tsHessian from the package DistributionUtils is the only method implemented so far. x An object of class "summary.nigFit", resulting from a call to summary.nigFit. digits The number of significant digits to use when printing. ... Further arguments passed to or from other methods. Details If hessian = FALSE no calculations are performed, the class of object is simply changed from nigFit to summary.nigFit so that it can be passed to print.summary.nigFit for printing in a convenient form. 
If hessian = TRUE the Hessian is calculated via a call to nigHessian and the standard errors of the parameter estimates are calculated using the Hessian and these are added to the original list object. The class of the object returned is again changed to summary.nigFit. Value summary.nigFit returns a list comprised of the original object object and additional elements hessian and sds if hessian = TRUE, otherwise it returns the original object. The class of the object returned is changed to summary.nigFit. See nigFit for the composition of an object of class nigFit. If the Hessian and standard errors have not been added to the object x, print.summary.nigFit prints a summary in the same format as print.nigFit. When the Hessian and standard errors are available, the Hessian is printed and the standard errors for the parameter estimates are printed in parentheses beneath the parameter estimates, in the manner of fitdistr in the package MASS. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> See Also nigFit, summary, nigHessian. Examples ### Continuing the nigFit(.) example: param <- c(2, 2, 2, 1) dataVector <- rnig(500, param = param) fit <- nigFit(dataVector, method = "BFGS") print(fit) summary(fit, hessian = TRUE, hessianMethod = "tsHessian") traffic Intervals Between Vehicles on a Road Description Intervals between the times that 129 successive vehicles pass a point on a road, measured in seconds. Usage data(traffic) Format The traffic data is a vector of 128 observations. Source <NAME>. (1963) Statistical estimation of density functions Sankhya: The Indian Journal of Statistics, Series A, Vol. 25, No. 3, 245–254. <NAME>. (1982) Statistical Properties of the Generalized Inverse Gaussian Distribution. Lec- ture Notes in Statistics, Vol. 9, Springer-Verlag, New York Examples data(traffic) str(traffic) ### Fit the generalized inverse Gaussian distribution gigFit(traffic)
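As an illustrative extension of the example above (a sketch only, assuming the fitted parameter vector is returned in the component fit$param in the c(chi, psi, lambda) order used elsewhere in this manual, and that dgig accepts a param argument in the same way as the other density functions):

## Sketch: summarise the GIG fit to the traffic data and overlay the fitted
## log-density on a log-histogram of the observations.
data(traffic)
fit <- gigFit(traffic)
summary(fit, hessian = TRUE)                 # standard errors via tsHessian
DistributionUtils::logHist(traffic)
curve(log(dgig(x, param = fit$param)), add = TRUE, col = "red")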
tomcat
rust
Rust
Struct tomcat::Extensions === ``` pub struct Extensions { /* private fields */ } ``` A type map of protocol extensions. `Extensions` can be used by `Request` and `Response` to store extra data derived from the underlying protocol. Implementations --- ### impl Extensions #### pub fn new() -> Extensions Create an empty `Extensions`. #### pub fn insert<T>(&mut self, val: T) -> Option<T>where    T: 'static + Send + Sync, Insert a type into this `Extensions`. If a extension of this type already existed, it will be returned. ##### Example ``` let mut ext = Extensions::new(); assert!(ext.insert(5i32).is_none()); assert!(ext.insert(4u8).is_none()); assert_eq!(ext.insert(9i32), Some(5i32)); ``` #### pub fn get<T>(&self) -> Option<&T>where    T: 'static + Send + Sync, Get a reference to a type previously inserted on this `Extensions`. ##### Example ``` let mut ext = Extensions::new(); assert!(ext.get::<i32>().is_none()); ext.insert(5i32); assert_eq!(ext.get::<i32>(), Some(&5i32)); ``` #### pub fn get_mut<T>(&mut self) -> Option<&mutT>where    T: 'static + Send + Sync, Get a mutable reference to a type previously inserted on this `Extensions`. ##### Example ``` let mut ext = Extensions::new(); ext.insert(String::from("Hello")); ext.get_mut::<String>().unwrap().push_str(" World"); assert_eq!(ext.get::<String>().unwrap(), "Hello World"); ``` #### pub fn remove<T>(&mut self) -> Option<T>where    T: 'static + Send + Sync, Remove a type from this `Extensions`. If a extension of this type existed, it will be returned. ##### Example ``` let mut ext = Extensions::new(); ext.insert(5i32); assert_eq!(ext.remove::<i32>(), Some(5i32)); assert!(ext.get::<i32>().is_none()); ``` #### pub fn clear(&mut self) Clear the `Extensions` of all inserted extensions. ##### Example ``` let mut ext = Extensions::new(); ext.insert(5i32); ext.clear(); assert!(ext.get::<i32>().is_none()); ``` #### pub fn is_empty(&self) -> bool Check whether the extension set is empty or not. ##### Example ``` let mut ext = Extensions::new(); assert!(ext.is_empty()); ext.insert(5i32); assert!(!ext.is_empty()); ``` #### pub fn len(&self) -> usize Get the numer of extensions available. ##### Example ``` let mut ext = Extensions::new(); assert_eq!(ext.len(), 0); ext.insert(5i32); assert_eq!(ext.len(), 1); ``` #### pub fn extend(&mut self, other: Extensions) Extends `self` with another `Extensions`. If an instance of a specific type exists in both, the one in `self` is overwritten with the one from `other`. ##### Example ``` let mut ext_a = Extensions::new(); ext_a.insert(8u8); ext_a.insert(16u16); let mut ext_b = Extensions::new(); ext_b.insert(4u8); ext_b.insert("hello"); ext_a.extend(ext_b); assert_eq!(ext_a.len(), 3); assert_eq!(ext_a.get::<u8>(), Some(&4u8)); assert_eq!(ext_a.get::<u16>(), Some(&16u16)); assert_eq!(ext_a.get::<&'static str>().copied(), Some("hello")); ``` Trait Implementations --- ### impl Debug for Extensions #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. #### fn default() -> Extensions Returns the “default value” for a type. Read moreAuto Trait Implementations --- ### impl !RefUnwindSafe for Extensions ### impl Send for Extensions ### impl Sync for Extensions ### impl Unpin for Extensions ### impl !UnwindSafe for Extensions Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. 
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Struct tomcat::HeaderMap === ``` pub struct HeaderMap<T = HeaderValue> { /* private fields */ } ``` A set of HTTP headers `HeaderMap` is an multimap of `HeaderName` to values. Examples --- Basic usage ``` let mut headers = HeaderMap::new(); headers.insert(HOST, "example.com".parse().unwrap()); headers.insert(CONTENT_LENGTH, "123".parse().unwrap()); assert!(headers.contains_key(HOST)); assert!(!headers.contains_key(LOCATION)); assert_eq!(headers[HOST], "example.com"); headers.remove(HOST); assert!(!headers.contains_key(HOST)); ``` Implementations --- ### impl HeaderMap<HeaderValue#### pub fn new() -> HeaderMap<HeaderValueCreate an empty `HeaderMap`. The map will be created without any capacity. This function will not allocate. ##### Examples ``` let map = HeaderMap::new(); assert!(map.is_empty()); assert_eq!(0, map.capacity()); ``` ### impl<T> HeaderMap<T#### pub fn with_capacity(capacity: usize) -> HeaderMap<TCreate an empty `HeaderMap` with the specified capacity. The returned map will allocate internal storage in order to hold about `capacity` elements without reallocating. However, this is a “best effort” as there are usage patterns that could cause additional allocations before `capacity` headers are stored in the map. More capacity than requested may be allocated. ##### Examples ``` let map: HeaderMap<u32> = HeaderMap::with_capacity(10); assert!(map.is_empty()); assert_eq!(12, map.capacity()); ``` #### pub fn len(&self) -> usize Returns the number of headers stored in the map. This number represents the total number of **values** stored in the map. This number can be greater than or equal to the number of **keys** stored given that a single key may have more than one associated value. ##### Examples ``` let mut map = HeaderMap::new(); assert_eq!(0, map.len()); map.insert(ACCEPT, "text/plain".parse().unwrap()); map.insert(HOST, "localhost".parse().unwrap()); assert_eq!(2, map.len()); map.append(ACCEPT, "text/html".parse().unwrap()); assert_eq!(3, map.len()); ``` #### pub fn keys_len(&self) -> usize Returns the number of keys stored in the map. This number will be less than or equal to `len()` as each key may have more than one associated value. 
##### Examples ``` let mut map = HeaderMap::new(); assert_eq!(0, map.keys_len()); map.insert(ACCEPT, "text/plain".parse().unwrap()); map.insert(HOST, "localhost".parse().unwrap()); assert_eq!(2, map.keys_len()); map.insert(ACCEPT, "text/html".parse().unwrap()); assert_eq!(2, map.keys_len()); ``` #### pub fn is_empty(&self) -> bool Returns true if the map contains no elements. ##### Examples ``` let mut map = HeaderMap::new(); assert!(map.is_empty()); map.insert(HOST, "hello.world".parse().unwrap()); assert!(!map.is_empty()); ``` #### pub fn clear(&mut self) Clears the map, removing all key-value pairs. Keeps the allocated memory for reuse. ##### Examples ``` let mut map = HeaderMap::new(); map.insert(HOST, "hello.world".parse().unwrap()); map.clear(); assert!(map.is_empty()); assert!(map.capacity() > 0); ``` #### pub fn capacity(&self) -> usize Returns the number of headers the map can hold without reallocating. This number is an approximation as certain usage patterns could cause additional allocations before the returned capacity is filled. ##### Examples ``` let mut map = HeaderMap::new(); assert_eq!(0, map.capacity()); map.insert(HOST, "hello.world".parse().unwrap()); assert_eq!(6, map.capacity()); ``` #### pub fn reserve(&mut self, additional: usize) Reserves capacity for at least `additional` more headers to be inserted into the `HeaderMap`. The header map may reserve more space to avoid frequent reallocations. Like with `with_capacity`, this will be a “best effort” to avoid allocations until `additional` more headers are inserted. Certain usage patterns could cause additional allocations before the number is reached. ##### Panics Panics if the new allocation size overflows `usize`. ##### Examples ``` let mut map = HeaderMap::new(); map.reserve(10); ``` #### pub fn get<K>(&self, key: K) -> Option<&T>where    K: AsHeaderName, Returns a reference to the value associated with the key. If there are multiple values associated with the key, then the first one is returned. Use `get_all` to get all values associated with a given key. Returns `None` if there are no values associated with the key. ##### Examples ``` let mut map = HeaderMap::new(); assert!(map.get("host").is_none()); map.insert(HOST, "hello".parse().unwrap()); assert_eq!(map.get(HOST).unwrap(), &"hello"); assert_eq!(map.get("host").unwrap(), &"hello"); map.append(HOST, "world".parse().unwrap()); assert_eq!(map.get("host").unwrap(), &"hello"); ``` #### pub fn get_mut<K>(&mut self, key: K) -> Option<&mutT>where    K: AsHeaderName, Returns a mutable reference to the value associated with the key. If there are multiple values associated with the key, then the first one is returned. Use `entry` to get all values associated with a given key. Returns `None` if there are no values associated with the key. ##### Examples ``` let mut map = HeaderMap::default(); map.insert(HOST, "hello".to_string()); map.get_mut("host").unwrap().push_str("-world"); assert_eq!(map.get(HOST).unwrap(), &"hello-world"); ``` #### pub fn get_all<K>(&self, key: K) -> GetAll<'_, T>where    K: AsHeaderName, Returns a view of all values associated with a key. The returned view does not incur any allocations and allows iterating the values associated with the key. See `GetAll` for more details. Returns `None` if there are no values associated with the key. 
##### Examples ``` let mut map = HeaderMap::new(); map.insert(HOST, "hello".parse().unwrap()); map.append(HOST, "goodbye".parse().unwrap()); let view = map.get_all("host"); let mut iter = view.iter(); assert_eq!(&"hello", iter.next().unwrap()); assert_eq!(&"goodbye", iter.next().unwrap()); assert!(iter.next().is_none()); ``` #### pub fn contains_key<K>(&self, key: K) -> boolwhere    K: AsHeaderName, Returns true if the map contains a value for the specified key. ##### Examples ``` let mut map = HeaderMap::new(); assert!(!map.contains_key(HOST)); map.insert(HOST, "world".parse().unwrap()); assert!(map.contains_key("host")); ``` #### pub fn iter(&self) -> Iter<'_, TAn iterator visiting all key-value pairs. The iteration order is arbitrary, but consistent across platforms for the same crate version. Each key will be yielded once per associated value. So, if a key has 3 associated values, it will be yielded 3 times. ##### Examples ``` let mut map = HeaderMap::new(); map.insert(HOST, "hello".parse().unwrap()); map.append(HOST, "goodbye".parse().unwrap()); map.insert(CONTENT_LENGTH, "123".parse().unwrap()); for (key, value) in map.iter() { println!("{:?}: {:?}", key, value); } ``` #### pub fn iter_mut(&mut self) -> IterMut<'_, TAn iterator visiting all key-value pairs, with mutable value references. The iterator order is arbitrary, but consistent across platforms for the same crate version. Each key will be yielded once per associated value, so if a key has 3 associated values, it will be yielded 3 times. ##### Examples ``` let mut map = HeaderMap::default(); map.insert(HOST, "hello".to_string()); map.append(HOST, "goodbye".to_string()); map.insert(CONTENT_LENGTH, "123".to_string()); for (key, value) in map.iter_mut() { value.push_str("-boop"); } ``` #### pub fn keys(&self) -> Keys<'_, TAn iterator visiting all keys. The iteration order is arbitrary, but consistent across platforms for the same crate version. Each key will be yielded only once even if it has multiple associated values. ##### Examples ``` let mut map = HeaderMap::new(); map.insert(HOST, "hello".parse().unwrap()); map.append(HOST, "goodbye".parse().unwrap()); map.insert(CONTENT_LENGTH, "123".parse().unwrap()); for key in map.keys() { println!("{:?}", key); } ``` #### pub fn values(&self) -> Values<'_, TAn iterator visiting all values. The iteration order is arbitrary, but consistent across platforms for the same crate version. ##### Examples ``` let mut map = HeaderMap::new(); map.insert(HOST, "hello".parse().unwrap()); map.append(HOST, "goodbye".parse().unwrap()); map.insert(CONTENT_LENGTH, "123".parse().unwrap()); for value in map.values() { println!("{:?}", value); } ``` #### pub fn values_mut(&mut self) -> ValuesMut<'_, TAn iterator visiting all values mutably. The iteration order is arbitrary, but consistent across platforms for the same crate version. ##### Examples ``` let mut map = HeaderMap::default(); map.insert(HOST, "hello".to_string()); map.append(HOST, "goodbye".to_string()); map.insert(CONTENT_LENGTH, "123".to_string()); for value in map.values_mut() { value.push_str("-boop"); } ``` #### pub fn drain(&mut self) -> Drain<'_, TClears the map, returning all entries as an iterator. The internal memory is kept for reuse. For each yielded item that has `None` provided for the `HeaderName`, then the associated header name is the same as that of the previously yielded item. The first yielded item will have `HeaderName` set. 
##### Examples ``` let mut map = HeaderMap::new(); map.insert(HOST, "hello".parse().unwrap()); map.append(HOST, "goodbye".parse().unwrap()); map.insert(CONTENT_LENGTH, "123".parse().unwrap()); let mut drain = map.drain(); assert_eq!(drain.next(), Some((Some(HOST), "hello".parse().unwrap()))); assert_eq!(drain.next(), Some((None, "goodbye".parse().unwrap()))); assert_eq!(drain.next(), Some((Some(CONTENT_LENGTH), "123".parse().unwrap()))); assert_eq!(drain.next(), None); ``` #### pub fn entry<K>(&mut self, key: K) -> Entry<'_, T>where    K: IntoHeaderName, Gets the given key’s corresponding entry in the map for in-place manipulation. ##### Examples ``` let mut map: HeaderMap<u32> = HeaderMap::default(); let headers = &[ "content-length", "x-hello", "Content-Length", "x-world", ]; for &header in headers { let counter = map.entry(header).or_insert(0); *counter += 1; } assert_eq!(map["content-length"], 2); assert_eq!(map["x-hello"], 1); ``` #### pub fn try_entry<K>(&mut self, key: K) -> Result<Entry<'_, T>, InvalidHeaderName>where    K: AsHeaderName, Gets the given key’s corresponding entry in the map for in-place manipulation. ##### Errors This method differs from `entry` by allowing types that may not be valid `HeaderName`s to passed as the key (such as `String`). If they do not parse as a valid `HeaderName`, this returns an `InvalidHeaderName` error. #### pub fn insert<K>(&mut self, key: K, val: T) -> Option<T>where    K: IntoHeaderName, Inserts a key-value pair into the map. If the map did not previously have this key present, then `None` is returned. If the map did have this key present, the new value is associated with the key and all previous values are removed. **Note** that only a single one of the previous values is returned. If there are multiple values that have been previously associated with the key, then the first one is returned. See `insert_mult` on `OccupiedEntry` for an API that returns all values. The key is not updated, though; this matters for types that can be `==` without being identical. ##### Examples ``` let mut map = HeaderMap::new(); assert!(map.insert(HOST, "world".parse().unwrap()).is_none()); assert!(!map.is_empty()); let mut prev = map.insert(HOST, "earth".parse().unwrap()).unwrap(); assert_eq!("world", prev); ``` #### pub fn append<K>(&mut self, key: K, value: T) -> boolwhere    K: IntoHeaderName, Inserts a key-value pair into the map. If the map did not previously have this key present, then `false` is returned. If the map did have this key present, the new value is pushed to the end of the list of values currently associated with the key. The key is not updated, though; this matters for types that can be `==` without being identical. ##### Examples ``` let mut map = HeaderMap::new(); assert!(map.insert(HOST, "world".parse().unwrap()).is_none()); assert!(!map.is_empty()); map.append(HOST, "earth".parse().unwrap()); let values = map.get_all("host"); let mut i = values.iter(); assert_eq!("world", *i.next().unwrap()); assert_eq!("earth", *i.next().unwrap()); ``` #### pub fn remove<K>(&mut self, key: K) -> Option<T>where    K: AsHeaderName, Removes a key from the map, returning the value associated with the key. Returns `None` if the map does not contain the key. If there are multiple values associated with the key, then the first one is returned. See `remove_entry_mult` on `OccupiedEntry` for an API that yields all values. 
##### Examples ``` let mut map = HeaderMap::new(); map.insert(HOST, "hello.world".parse().unwrap()); let prev = map.remove(HOST).unwrap(); assert_eq!("hello.world", prev); assert!(map.remove(HOST).is_none()); ``` Trait Implementations --- ### impl<T> Clone for HeaderMap<T>where    T: Clone, #### fn clone(&self) -> HeaderMap<TReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. Extends a collection with the contents of an iterator. 🔬This is a nightly-only experimental API. (`extend_one`)Extends a collection with exactly one element.#### fn extend_reserve(&mut self, additional: usize) 🔬This is a nightly-only experimental API. (`extend_one`)Reserves capacity in a collection for the given number of additional elements. Extend a `HeaderMap` with the contents of another `HeaderMap`. This function expects the yielded items to follow the same structure as `IntoIter`. ##### Panics This panics if the first yielded item does not have a `HeaderName`. ##### Examples ``` let mut map = HeaderMap::new(); map.insert(ACCEPT, "text/plain".parse().unwrap()); map.insert(HOST, "hello.world".parse().unwrap()); let mut extra = HeaderMap::new(); extra.insert(HOST, "foo.bar".parse().unwrap()); extra.insert(COOKIE, "hello".parse().unwrap()); extra.append(COOKIE, "world".parse().unwrap()); map.extend(extra); assert_eq!(map["host"], "foo.bar"); assert_eq!(map["accept"], "text/plain"); assert_eq!(map["cookie"], "hello"); let v = map.get_all("host"); assert_eq!(1, v.iter().count()); let v = map.get_all("cookie"); assert_eq!(2, v.iter().count()); ``` #### fn extend_one(&mut self, item: A) 🔬This is a nightly-only experimental API. (`extend_one`)Extends a collection with exactly one element.#### fn extend_reserve(&mut self, additional: usize) 🔬This is a nightly-only experimental API. (`extend_one`)Reserves capacity in a collection for the given number of additional elements. Creates a value from an iterator. #### fn index(&self, index: K) -> &T ##### Panics Using the index operator will cause a panic if the header you’re querying isn’t set. #### type Output = T The returned type after indexing.### impl<'a, T> IntoIterator for &'a HeaderMap<T#### type Item = (&'a HeaderName, &'aT) The type of the elements being iterated over.#### type IntoIter = Iter<'a, TWhich kind of iterator are we turning this into?#### fn into_iter(self) -> Iter<'a, TCreates an iterator from a value. The type of the elements being iterated over.#### type IntoIter = IterMut<'a, TWhich kind of iterator are we turning this into?#### fn into_iter(self) -> IterMut<'a, TCreates an iterator from a value. For each yielded item that has `None` provided for the `HeaderName`, then the associated header name is the same as that of the previously yielded item. The first yielded item will have `HeaderName` set. ##### Examples Basic usage. ``` let mut map = HeaderMap::new(); map.insert(header::CONTENT_LENGTH, "123".parse().unwrap()); map.insert(header::CONTENT_TYPE, "json".parse().unwrap()); let mut iter = map.into_iter(); assert_eq!(iter.next(), Some((Some(header::CONTENT_LENGTH), "123".parse().unwrap()))); assert_eq!(iter.next(), Some((Some(header::CONTENT_TYPE), "json".parse().unwrap()))); assert!(iter.next().is_none()); ``` Multiple values per key. 
``` let mut map = HeaderMap::new(); map.append(header::CONTENT_LENGTH, "123".parse().unwrap()); map.append(header::CONTENT_LENGTH, "456".parse().unwrap()); map.append(header::CONTENT_TYPE, "json".parse().unwrap()); map.append(header::CONTENT_TYPE, "html".parse().unwrap()); map.append(header::CONTENT_TYPE, "xml".parse().unwrap()); let mut iter = map.into_iter(); assert_eq!(iter.next(), Some((Some(header::CONTENT_LENGTH), "123".parse().unwrap()))); assert_eq!(iter.next(), Some((None, "456".parse().unwrap()))); assert_eq!(iter.next(), Some((Some(header::CONTENT_TYPE), "json".parse().unwrap()))); assert_eq!(iter.next(), Some((None, "html".parse().unwrap()))); assert_eq!(iter.next(), Some((None, "xml".parse().unwrap()))); assert!(iter.next().is_none()); ``` #### type Item = (Option<HeaderName>, T) The type of the elements being iterated over.#### type IntoIter = IntoIter<TWhich kind of iterator are we turning this into?### impl<T> PartialEq<HeaderMap<T>> for HeaderMap<T>where    T: PartialEq<T>, #### fn eq(&self, other: &HeaderMap<T>) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. Try to convert a `HashMap` into a `HeaderMap`. #### Examples ``` use std::collections::HashMap; use std::convert::TryInto; use http::HeaderMap; let mut map = HashMap::new(); map.insert("X-Custom-Header".to_string(), "my value".to_string()); let headers: HeaderMap = (&map).try_into().expect("valid headers"); assert_eq!(headers["X-Custom-Header"], "my value"); ``` #### type Error = Error The type returned in the event of a conversion error.#### fn try_from(    c: &'a HashMap<K, V, RandomState>) -> Result<HeaderMap<T>, <HeaderMap<T> as TryFrom<&'a HashMap<K, V, RandomState>>>::ErrorPerforms the conversion.### impl<T> Eq for HeaderMap<T>where    T: Eq, Auto Trait Implementations --- ### impl<T> RefUnwindSafe for HeaderMap<T>where    T: RefUnwindSafe, ### impl<T> Send for HeaderMap<T>where    T: Send, ### impl<T> Sync for HeaderMap<T>where    T: Sync, ### impl<T> Unpin for HeaderMap<T>where    T: Unpin, ### impl<T> UnwindSafe for HeaderMap<T>where    T: UnwindSafe, Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. 
#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

Struct tomcat::RequestBuilder
===

```
pub struct RequestBuilder { /* private fields */ }
```

A builder to construct the properties of a `Request`.

To construct a `RequestBuilder`, refer to the `Client` documentation.

Implementations
---

### impl RequestBuilder

#### pub fn header<K, V>(self, key: K, value: V) -> RequestBuilder where HeaderName: TryFrom<K>, <HeaderName as TryFrom<K>>::Error: Into<Error>, HeaderValue: TryFrom<V>, <HeaderValue as TryFrom<V>>::Error: Into<Error>

Add a `Header` to this Request.

#### pub fn headers(self, headers: HeaderMap<HeaderValue>) -> RequestBuilder

Add a set of Headers to the existing ones on this Request.

The headers will be merged in to any already set.

#### pub fn basic_auth<U, P>(self, username: U, password: Option<P>) -> RequestBuilder where U: Display, P: Display

Enable HTTP basic authentication.

```
let client = reqwest::Client::new();
let resp = client.delete("http://httpbin.org/delete")
    .basic_auth("admin", Some("good password"))
    .send()
    .await?;
```

#### pub fn bearer_auth<T>(self, token: T) -> RequestBuilder where T: Display

Enable HTTP bearer authentication.

#### pub fn body<T>(self, body: T) -> RequestBuilder where T: Into<Body>

Set the request body.

#### pub fn timeout(self, timeout: Duration) -> RequestBuilder

Enables a request timeout.

The timeout is applied from when the request starts connecting until the response body has finished. It affects only this request and overrides the timeout configured using `ClientBuilder::timeout()`.

#### pub fn query<T>(self, query: &T) -> RequestBuilder where T: Serialize + ?Sized

Modify the query string of the URL.

Modifies the URL of this request, adding the parameters provided. This method appends and does not overwrite. This means that it can be called multiple times and that existing query parameters are not overwritten if the same key is used. The key will simply show up twice in the query string. Calling `.query(&[("foo", "a"), ("foo", "b")])` gives `"foo=a&foo=b"`.

##### Note

This method does not support serializing a single key-value pair. Instead of using `.query(("key", "val"))`, use a sequence, such as `.query(&[("key", "val")])`. It’s also possible to serialize structs and maps into a key-value pair.

##### Errors

This method will fail if the object you provide cannot be serialized into a query string.

#### pub fn version(self, version: Version) -> RequestBuilder

Set HTTP version.

#### pub fn form<T>(self, form: &T) -> RequestBuilder where T: Serialize + ?Sized

Send a form body.

Sets the body to the url encoded serialization of the passed value, and also sets the `Content-Type: application/x-www-form-urlencoded` header.
```
let mut params = HashMap::new();
params.insert("lang", "rust");

let client = reqwest::Client::new();
let res = client.post("http://httpbin.org")
    .form(&params)
    .send()
    .await?;
```

##### Errors

This method fails if the passed value cannot be serialized into url encoded format.

#### pub fn json<T>(self, json: &T) -> RequestBuilder where T: Serialize + ?Sized

Send a JSON body.

##### Optional

This requires the optional `json` feature enabled.

##### Errors

Serialization can fail if `T`’s implementation of `Serialize` decides to fail, or if `T` contains a map with non-string keys.

#### pub fn fetch_mode_no_cors(self) -> RequestBuilder

Disable CORS on fetching the request.

##### WASM

This option is only effective with the WebAssembly target. The request mode will be set to ‘no-cors’.

#### pub fn build(self) -> Result<Request, Error>

Build a `Request`, which can be inspected, modified and executed with `Client::execute()`.

#### pub fn send(self) -> impl Future<Output = Result<Response, Error>>

Constructs the Request and sends it to the target URL, returning a future Response.

##### Errors

This method fails if there was an error while sending the request, a redirect loop was detected or the redirect limit was exhausted.

##### Example

```
let response = reqwest::Client::new()
    .get("https://hyper.rs")
    .send()
    .await?;
```

#### pub fn try_clone(&self) -> Option<RequestBuilder>

Attempt to clone the RequestBuilder.

`None` is returned if the RequestBuilder can not be cloned, i.e. if the request body is a stream.

##### Examples

```
let client = reqwest::Client::new();
let builder = client.post("http://httpbin.org/post")
    .body("from a &str!");
let clone = builder.try_clone();
assert!(clone.is_some());
```

Trait Implementations
---

### impl Debug for RequestBuilder

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.

Auto Trait Implementations
---

### impl !RefUnwindSafe for RequestBuilder
### impl Send for RequestBuilder
### impl Sync for RequestBuilder
### impl Unpin for RequestBuilder
### impl !UnwindSafe for RequestBuilder

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

Struct tomcat::Response
===

```
pub struct Response {
    pub status: u16,
    pub bytes: Vec<u8>,
    pub content_length: Option<u64>,
    pub headers: HeaderMap,
    pub remote_addr: SocketAddr,
    pub text: String,
    pub text_with_charset: String,
    pub url: String,
    pub version: Version,
}
```

Fields
---

`status: u16`, `bytes: Vec<u8>`, `content_length: Option<u64>`, `headers: HeaderMap`, `remote_addr: SocketAddr`, `text: String`, `text_with_charset: String`, `url: String`, `version: Version`

Implementations
---

### impl Response

#### pub fn new(status: u16, bytes: Vec<u8>, content_length: Option<u64>, headers: HeaderMap, remote_addr: SocketAddr, text: String, text_with_charset: String, url: String, version: Version) -> Self

Trait Implementations
---

### impl Clone for Response

#### fn clone(&self) -> Response

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations
---

### impl RefUnwindSafe for Response
### impl Send for Response
### impl Sync for Response
### impl Unpin for Response
### impl UnwindSafe for Response

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToOwned for T where T: Clone

#### type Owned = T

The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning.
#### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Struct tomcat::Version === ``` pub struct Version(_); ``` Represents a version of the HTTP spec. Implementations --- ### impl Version #### pub const HTTP_09: Version = Version(Http::Http09) `HTTP/0.9` #### pub const HTTP_10: Version = Version(Http::Http10) `HTTP/1.0` #### pub const HTTP_11: Version = Version(Http::Http11) `HTTP/1.1` #### pub const HTTP_2: Version = Version(Http::H2) `HTTP/2.0` #### pub const HTTP_3: Version = Version(Http::H3) `HTTP/3.0` Trait Implementations --- ### impl Clone for Version #### fn clone(&self) -> Version Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. #### fn default() -> Version Returns the “default value” for a type. #### fn hash<__H>(&self, state: &mut__H)where    __H: Hasher, Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mutH)where    H: Hasher, Feeds a slice of this type into the given `Hasher`. #### fn cmp(&self, other: &Version) -> Ordering This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Self Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Self Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere    Self: PartialOrd<Self>, Restrict a value to a certain interval. #### fn eq(&self, other: &Version) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. #### fn partial_cmp(&self, other: &Version) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. Read more1.0.0 · source#### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. Read more1.0.0 · source#### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. Read more1.0.0 · source#### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. Read more1.0.0 · source#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. 
### impl Eq for Version ### impl StructuralEq for Version ### impl StructuralPartialEq for Version Auto Trait Implementations --- ### impl RefUnwindSafe for Version ### impl Send for Version ### impl Sync for Version ### impl Unpin for Version ### impl UnwindSafe for Version Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Enum tomcat::SocketAddr === 1.0.0 · ``` pub enum SocketAddr { V4(SocketAddrV4), V6(SocketAddrV6), } ``` An internet socket address, either IPv4 or IPv6. Internet socket addresses consist of an IP address, a 16-bit port number, as well as possibly some version-dependent additional information. See `SocketAddrV4`’s and `SocketAddrV6`’s respective documentation for more details. The size of a `SocketAddr` instance may vary depending on the target operating system. Examples --- ``` use std::net::{IpAddr, Ipv4Addr, SocketAddr}; let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); assert_eq!("127.0.0.1:8080".parse(), Ok(socket)); assert_eq!(socket.port(), 8080); assert_eq!(socket.is_ipv4(), true); ``` Variants --- ### `V4(SocketAddrV4)` An IPv4 socket address. ### `V6(SocketAddrV6)` An IPv6 socket address. Implementations --- ### impl SocketAddr #### pub fn parse_ascii(b: &[u8]) -> Result<SocketAddr, AddrParseError🔬This is a nightly-only experimental API. (`addr_parse_ascii`)Parse a socket address from a slice of bytes. 
``` #![feature(addr_parse_ascii)] use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr}; let socket_v4 = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); let socket_v6 = SocketAddr::new(IpAddr::V6(Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 1)), 8080); assert_eq!(SocketAddr::parse_ascii(b"127.0.0.1:8080"), Ok(socket_v4)); assert_eq!(SocketAddr::parse_ascii(b"[::1]:8080"), Ok(socket_v6)); ``` ### impl SocketAddr 1.7.0 (const: unstable) · source#### pub fn new(ip: IpAddr, port: u16) -> SocketAddr Creates a new socket address from an IP address and a port number. ##### Examples ``` use std::net::{IpAddr, Ipv4Addr, SocketAddr}; let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); assert_eq!(socket.ip(), IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1))); assert_eq!(socket.port(), 8080); ``` 1.7.0 (const: unstable) · source#### pub fn ip(&self) -> IpAddr Returns the IP address associated with this socket address. ##### Examples ``` use std::net::{IpAddr, Ipv4Addr, SocketAddr}; let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); assert_eq!(socket.ip(), IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1))); ``` 1.9.0 · source#### pub fn set_ip(&mut self, new_ip: IpAddr) Changes the IP address associated with this socket address. ##### Examples ``` use std::net::{IpAddr, Ipv4Addr, SocketAddr}; let mut socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); socket.set_ip(IpAddr::V4(Ipv4Addr::new(10, 10, 0, 1))); assert_eq!(socket.ip(), IpAddr::V4(Ipv4Addr::new(10, 10, 0, 1))); ``` const: unstable · source#### pub fn port(&self) -> u16 Returns the port number associated with this socket address. ##### Examples ``` use std::net::{IpAddr, Ipv4Addr, SocketAddr}; let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); assert_eq!(socket.port(), 8080); ``` 1.9.0 · source#### pub fn set_port(&mut self, new_port: u16) Changes the port number associated with this socket address. ##### Examples ``` use std::net::{IpAddr, Ipv4Addr, SocketAddr}; let mut socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); socket.set_port(1025); assert_eq!(socket.port(), 1025); ``` 1.16.0 (const: unstable) · source#### pub fn is_ipv4(&self) -> bool Returns `true` if the IP address in this `SocketAddr` is an `IPv4` address, and `false` otherwise. ##### Examples ``` use std::net::{IpAddr, Ipv4Addr, SocketAddr}; let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080); assert_eq!(socket.is_ipv4(), true); assert_eq!(socket.is_ipv6(), false); ``` 1.16.0 (const: unstable) · source#### pub fn is_ipv6(&self) -> bool Returns `true` if the IP address in this `SocketAddr` is an `IPv6` address, and `false` otherwise. ##### Examples ``` use std::net::{IpAddr, Ipv6Addr, SocketAddr}; let socket = SocketAddr::new(IpAddr::V6(Ipv6Addr::new(0, 0, 0, 0, 0, 65535, 0, 1)), 8080); assert_eq!(socket.is_ipv4(), false); assert_eq!(socket.is_ipv6(), true); ``` Trait Implementations --- ### impl Clone for SocketAddr #### fn clone(&self) -> SocketAddr Returns a copy of the value. Performs copy-assignment from `source`. #### fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. #### fn deserialize<D>(    deserializer: D) -> Result<SocketAddr, <D as Deserializer<'de>>::Error>where    D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. 
Read more1.17.0 · source### impl<I> From<(I, u16)> for SocketAddrwhere    I: Into<IpAddr>, #### fn from(pieces: (I, u16)) -> SocketAddr Converts a tuple struct (Into<`IpAddr`>, `u16`) into a `SocketAddr`. This conversion creates a `SocketAddr::V4` for an `IpAddr::V4` and creates a `SocketAddr::V6` for an `IpAddr::V6`. `u16` is treated as port of the newly created `SocketAddr`. 1.16.0 · source### impl From<SocketAddrV4> for SocketAddr #### fn from(sock4: SocketAddrV4) -> SocketAddr Converts a `SocketAddrV4` into a `SocketAddr::V4`. 1.16.0 · source### impl From<SocketAddrV6> for SocketAddr #### fn from(sock6: SocketAddrV6) -> SocketAddr Converts a `SocketAddrV6` into a `SocketAddr::V6`. ### impl FromStr for SocketAddr #### type Err = AddrParseError The associated error which can be returned from parsing.#### fn from_str(s: &str) -> Result<SocketAddr, AddrParseErrorParses a string `s` to return a value of this type. #### fn hash<__H>(&self, state: &mut__H)where    __H: Hasher, Feeds this value into the given `Hasher`. Read more1.3.0 · source#### fn hash_slice<H>(data: &[Self], state: &mutH)where    H: Hasher, Feeds a slice of this type into the given `Hasher`. #### fn cmp(&self, other: &SocketAddr) -> Ordering This method returns an `Ordering` between `self` and `other`. Read more1.21.0 · source#### fn max(self, other: Self) -> Self Compares and returns the maximum of two values. Read more1.21.0 · source#### fn min(self, other: Self) -> Self Compares and returns the minimum of two values. Read more1.50.0 · source#### fn clamp(self, min: Self, max: Self) -> Selfwhere    Self: PartialOrd<Self>, Restrict a value to a certain interval. #### fn eq(&self, other: &SocketAddr) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. #### fn partial_cmp(&self, other: &SocketAddr) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. This method tests less than (for `self` and `other`) and is used by the `<` operator. This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. This method tests greater than (for `self` and `other`) and is used by the `>` operator. This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn serialize<S>(    &self,    serializer: S) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error>where    S: Serializer, Serialize this value into the given Serde serializer. #### type Iter = IntoIter<SocketAddrReturned iterator over socket addresses which this type may correspond to. ### impl Eq for SocketAddr ### impl StructuralEq for SocketAddr ### impl StructuralPartialEq for SocketAddr ### impl ToSocketAddrs for SocketAddr Auto Trait Implementations --- ### impl RefUnwindSafe for SocketAddr ### impl Send for SocketAddr ### impl Sync for SocketAddr ### impl Unpin for SocketAddr ### impl UnwindSafe for SocketAddr Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. 
#### fn equivalent(&self, key: &K) -> bool

Compare self to `key` and return `true` if they are equal.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToOwned for T where T: Clone

#### type Owned = T

The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning.

#### default fn to_string(&self) -> String

Converts the given value to a `String`.

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

Trait tomcat::IntoUrl
===

```
pub trait IntoUrl: IntoUrlSealed { }
```

A trait to try to convert some type into a `Url`.

This trait is “sealed”, such that only types within reqwest can implement it.
Implementations on Foreign Types
---

### impl<'a> IntoUrl for &'a str
### impl IntoUrl for Url
### impl<'a> IntoUrl for &'a String
### impl IntoUrl for String

Implementors
---

Function tomcat::get
===

```
pub async fn get(
    url: impl ToString + IntoUrl
) -> Result<Response, Box<dyn Error>>
```

non-blocking get
---

```
#[tokio::main]
async fn main(){
    use tomcat::*;
    if let Ok(res) = get("https://www.spacex.com").await{
        assert_eq!(200,res.status);
        assert_eq!(r#"{"content-type": "text/html; charset=utf-8", "vary": "Accept-Encoding", "date": "Sun, 09 Oct 2022 18:49:44 GMT", "connection": "keep-alive", "keep-alive": "timeout=5", "transfer-encoding": "chunked"}"#,format!("{:?}",res.headers));
        println!("{}",res.text);
        println!("{}",res.text_with_charset);
        println!("{}",res.url);
        println!("{}",res.remote_addr);
        println!("{:?}",res.version);
    }
}
```

Function tomcat::get_blocking
===

```
pub fn get_blocking(
    url: impl ToString + IntoUrl
) -> Result<Response, Box<dyn Error>>
```

blocking get
---

```
fn main(){
    use tomcat::*;
    if let Ok(res) = get_blocking("https://www.spacex.com"){
        assert_eq!(200,res.status);
        assert_eq!(r#"{"content-type": "text/html; charset=utf-8", "vary": "Accept-Encoding", "date": "Sun, 09 Oct 2022 18:49:44 GMT", "connection": "keep-alive", "keep-alive": "timeout=5", "transfer-encoding": "chunked"}"#,format!("{:?}",res.headers));
        println!("{}",res.text);
        println!("{}",res.text_with_charset);
        println!("{}",res.url);
        println!("{}",res.remote_addr);
        println!("{:?}",res.version);
    }
}
```

Function tomcat::post
===

```
pub async fn post(
    url: impl ToString + IntoUrl
) -> Result<RequestBuilder, Box<dyn Error>>
```

non-blocking post
---

```
if let Ok(req) = tomcat::post("https://api.openai.com/v1/completions").await {
    let res = req
        .header(header::CONTENT_TYPE, "application/json")
        .header("Authorization", &auth_header_val)
        .body(body).send().await.unwrap();
    let text = res.text().await.unwrap();
    let json: OpenAIResponse = match serde_json::from_str(&text){
        Ok(response) => response,
        Err(_) => {
            println!("Error calling OpenAI. Check environment variable OPENAI_KEY");
            std::process::exit(1);
        }
    };
}
```

Function tomcat::post_blocking
===

```
pub fn post_blocking(
    url: impl ToString + IntoUrl
) -> Result<RequestBuilder, Box<dyn Error>>
```

blocking post
---

```
if let Ok(req) = tomcat::post_blocking("https://api.openai.com/v1/completions"){
    let res = req
        .header(header::CONTENT_TYPE, "application/json")
        .header("Authorization", &auth_header_val)
        .body(body).send().unwrap();
    let text = res.text().unwrap();
    let json: OpenAIResponse = match serde_json::from_str(&text){
        Ok(response) => response,
        Err(_) => {
            println!("Error calling OpenAI. Check environment variable OPENAI_KEY");
            std::process::exit(1);
        }
    };
}
```
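The reference entries above cover each function in isolation. The following is a small end-to-end sketch, not taken from the crate's own documentation, showing how the pieces fit together: `tomcat::get` is awaited, and the plain public fields of the returned `Response`, including its `HeaderMap`, are inspected. A `tokio` runtime, network access, and the example URL are assumptions.

```
// Illustrative sketch (not from the crate docs): fetch a page with tomcat::get
// and inspect the Response fields documented above.
// Assumes a tokio runtime and network access; the URL is arbitrary.
use tomcat::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let res = get("https://www.rust-lang.org").await?;

    // Plain public fields on tomcat::Response.
    println!("status:  {}", res.status);
    println!("url:     {}", res.url);
    println!("peer:    {}", res.remote_addr);
    println!("version: {:?}", res.version);
    println!("body:    {} bytes", res.bytes.len());

    // `headers` is a HeaderMap, so the HeaderMap API described earlier applies.
    if let Some(ct) = res.headers.get("content-type") {
        println!("content-type: {:?}", ct);
    }
    Ok(())
}
```

Because `Response` exposes plain fields rather than accessor methods, nothing further needs to be awaited once `get` returns: the body is already buffered in `bytes` and decoded into `text`.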
notebook
readthedoc
Python
Jupyter Notebook 6.5.4 documentation

The Jupyter Notebook[](#the-jupyter-notebook)
===

* [Installation](https://jupyter.readthedocs.io/en/latest/install.html)
* [Starting the Notebook](https://jupyter.readthedocs.io/en/latest/running.html)

The Jupyter Notebook[](#the-jupyter-notebook)
---

### Introduction[](#introduction)

The notebook extends the console-based approach to interactive computing in a qualitatively new direction, providing a web-based application suitable for capturing the whole computation process: developing, documenting, and executing code, as well as communicating the results. The Jupyter notebook combines two components:

**A web application**: a browser-based tool for interactive authoring of documents which combine explanatory text, mathematics, computations and their rich media output.

**Notebook documents**: a representation of all content visible in the web application, including inputs and outputs of the computations, explanatory text, mathematics, images, and rich media representations of objects.

See also

See the [installation guide](https://docs.jupyter.org/en/latest/install.html#install) on how to install the notebook and its dependencies.

#### Main features of the web application[](#main-features-of-the-web-application)

* In-browser editing for code, with automatic syntax highlighting, indentation, and tab completion/introspection.
* The ability to execute code from the browser, with the results of computations attached to the code which generated them.
* Displaying the result of computation using rich media representations, such as HTML, LaTeX, PNG, SVG, etc. For example, publication-quality figures rendered by the [matplotlib](https://matplotlib.org) library can be included inline.
* In-browser editing for rich text using the [Markdown](https://daringfireball.net/projects/markdown/syntax) markup language, which can provide commentary for the code and is not limited to plain text.
* The ability to easily include mathematical notation within markdown cells using LaTeX, rendered natively by [MathJax](https://www.mathjax.org/).

#### Notebook documents[](#notebook-documents)

Notebook documents contain the inputs and outputs of an interactive session as well as additional text that accompanies the code but is not meant for execution. In this way, notebook files can serve as a complete computational record of a session, interleaving executable code with explanatory text, mathematics, and rich representations of resulting objects. These documents are internally [JSON](https://en.wikipedia.org/wiki/JSON) files and are saved with the `.ipynb` extension. Since JSON is a plain text format, they can be version-controlled and shared with colleagues.

Notebooks may be exported to a range of static formats, including HTML (for example, for blog posts), reStructuredText, LaTeX, PDF, and slide shows, via the [nbconvert](https://nbconvert.readthedocs.io/en/latest/) command.

Furthermore, any `.ipynb` notebook document available from a public URL can be shared via the Jupyter Notebook Viewer (nbviewer). This service loads the notebook document from the URL and renders it as a static web page. The results may thus be shared with a colleague, or as a public blog post, without other users needing to install the Jupyter notebook themselves. In effect, nbviewer is simply [nbconvert](https://nbconvert.readthedocs.io/en/latest/) as a web service, so you can do your own static conversions with nbconvert, without relying on nbviewer.
See also [Details on the notebook JSON file format](https://nbformat.readthedocs.io/en/latest/format_description.html#notebook-file-format) #### Notebooks and privacy[](#notebooks-and-privacy) Because you use Jupyter in a web browser, some people are understandably concerned about using it with sensitive data. However, if you followed the standard [install instructions](https://jupyter.readthedocs.io/en/latest/install.html), Jupyter is actually running on your own computer. If the URL in the address bar starts with `http://localhost:` or `http://127.0.0.1:`, it’s your computer acting as the server. Jupyter doesn’t send your data anywhere else—and as it’s open source, other people can check that we’re being honest about this. You can also use Jupyter remotely: your company or university might run the server for you, for instance. If you want to work with sensitive data in those cases, talk to your IT or data protection staff about it. We aim to ensure that other pages in your browser or other users on the same computer can’t access your notebook server. See [Security in the Jupyter notebook server](index.html#server-security) for more about this. ### Starting the notebook server[](#starting-the-notebook-server) You can start running a notebook server from the command line using the following command: ``` jupyter notebook ``` This will print some information about the notebook server in your console, and open a web browser to the URL of the web application (by default, `http://127.0.0.1:8888`). The landing page of the Jupyter notebook web application, the **dashboard**, shows the notebooks currently available in the notebook directory (by default, the directory from which the notebook server was started). You can create new notebooks from the dashboard with the `New Notebook` button, or open existing ones by clicking on their name. You can also drag and drop `.ipynb` notebooks and standard `.py` Python source code files into the notebook list area. When starting a notebook server from the command line, you can also open a particular notebook directly, bypassing the dashboard, with `jupyter notebook my_notebook.ipynb`. The `.ipynb` extension is assumed if no extension is given. When you are inside an open notebook, the File | Open… menu option will open the dashboard in a new browser tab, to allow you to open another notebook from the notebook directory or to create a new notebook. Note You can start more than one notebook server at the same time, if you want to work on notebooks in different directories. By default the first notebook server starts on port 8888, and later notebook servers search for ports near that one. You can also manually specify the port with the `--port` option. #### Creating a new notebook document[](#creating-a-new-notebook-document) A new notebook may be created at any time, either from the dashboard, or using the File ‣ New menu option from within an active notebook. The new notebook is created within the same directory and will open in a new browser tab. It will also be reflected as a new entry in the notebook list on the dashboard. #### Opening notebooks[](#opening-notebooks) An open notebook has **exactly one** interactive session connected to a kernel, which will execute code sent by the user and communicate back results. This kernel remains active if the web browser window is closed, and reopening the same notebook from the dashboard will reconnect the web application to the same kernel. 
In the dashboard, notebooks with an active kernel have a `Shutdown` button next to them, whereas notebooks without an active kernel have a `Delete` button in its place. Other clients may connect to the same kernel. When each kernel is started, the notebook server prints to the terminal a message like this: ``` [NotebookApp] Kernel started: 87f7d2c0-13e3-43df-8bb8-1bd37aaf3373 ``` This long string is the kernel’s ID which is sufficient for getting the information necessary to connect to the kernel. If the notebook uses the IPython kernel, you can also see this connection data by running the `%connect_info` [magic](https://ipython.readthedocs.io/en/stable/interactive/tutorial.html#magics-explained), which will print the same ID information along with other details. You can then, for example, manually start a Qt console connected to the *same* kernel from the command line, by passing a portion of the ID: ``` $ jupyter qtconsole --existing 87f7d2c0 ``` Without an ID, `--existing` will connect to the most recently started kernel. With the IPython kernel, you can also run the `%qtconsole` [magic](https://ipython.readthedocs.io/en/stable/interactive/tutorial.html#magics-explained) in the notebook to open a Qt console connected to the same kernel. See also [Decoupled two-process model](https://ipython.readthedocs.io/en/stable/overview.html#ipythonzmq) ### Notebook user interface[](#notebook-user-interface) When you create a new notebook document, you will be presented with the **notebook name**, a **menu bar**, a **toolbar** and an empty **code cell**. **Notebook name**: The name displayed at the top of the page, next to the Jupyter logo, reflects the name of the `.ipynb` file. Clicking on the notebook name brings up a dialog which allows you to rename it. Thus, renaming a notebook from “Untitled0” to “My first notebook” in the browser, renames the `Untitled0.ipynb` file to `My first notebook.ipynb`. **Menu bar**: The menu bar presents different options that may be used to manipulate the way the notebook functions. **Toolbar**: The tool bar gives a quick way of performing the most-used operations within the notebook, by clicking on an icon. **Code cell**: the default type of cell; read on for an explanation of cells. ### Structure of a notebook document[](#structure-of-a-notebook-document) The notebook consists of a sequence of cells. A cell is a multiline text input field, and its contents can be executed by using ``Shift`-`Enter``, or by clicking either the “Play” button the toolbar, or Cell, Run in the menu bar. The execution behavior of a cell is determined by the cell’s type. There are three types of cells: **code cells**, **markdown cells**, and **raw cells**. Every cell starts off being a **code cell**, but its type can be changed by using a drop-down on the toolbar (which will be “Code”, initially), or via [keyboard shortcuts](#keyboard-shortcuts). For more information on the different things you can do in a notebook, see the [collection of examples](https://nbviewer.jupyter.org/github/jupyter/notebook/tree/6.4.x/docs/source/examples/Notebook/). #### Code cells[](#code-cells) A *code cell* allows you to edit and write new code, with full syntax highlighting and tab completion. The programming language you use depends on the *kernel*, and the default kernel (IPython) runs Python code. When a code cell is executed, code that it contains is sent to the kernel associated with the notebook. 
The results that are returned from this computation are then displayed in the notebook as the cell’s *output*. The output is not limited to text: many other forms of output are also possible, including `matplotlib` figures and HTML tables (as used, for example, in the `pandas` data analysis package). This is known as IPython’s *rich display* capability.

See also

[Rich Output](https://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/IPython%20Kernel/Rich%20Output.ipynb) example notebook

#### Markdown cells[](#markdown-cells)

You can document the computational process in a literate way, alternating descriptive text with code, using *rich text*. In IPython this is accomplished by marking up text with the Markdown language. The corresponding cells are called *Markdown cells*. The Markdown language provides a simple way to perform this text markup, that is, to specify which parts of the text should be emphasized (italics), bold, form lists, etc.

If you want to provide structure for your document, you can use markdown headings. Markdown headings consist of 1 to 6 hash signs (`#`) followed by a space and the title of your section. The markdown heading will be converted to a clickable link for a section of the notebook. It is also used as a hint when exporting to other document formats, like PDF. When a Markdown cell is executed, the Markdown code is converted into the corresponding formatted rich text. Markdown allows arbitrary HTML code for formatting.

Within Markdown cells, you can also include *mathematics* in a straightforward way, using standard LaTeX notation: `$...$` for inline mathematics and `$$...$$` for displayed mathematics. When the Markdown cell is executed, the LaTeX portions are automatically rendered in the HTML output as equations with high quality typography. This is made possible by [MathJax](https://www.mathjax.org/), which supports a [large subset](https://docs.mathjax.org/en/latest/input/tex/index.html) of LaTeX functionality. Standard mathematics environments defined by LaTeX and AMS-LaTeX (the `amsmath` package) also work, such as `\begin{equation}...\end{equation}` and `\begin{align}...\end{align}`. New LaTeX macros may be defined using standard methods, such as `\newcommand`, by placing them anywhere *between math delimiters* in a Markdown cell. These definitions are then available throughout the rest of the IPython session.

See also

[Working with Markdown Cells](https://nbviewer.jupyter.org/github/jupyter/notebook/blob/6.4.x/docs/source/examples/Notebook/Working%20With%20Markdown%20Cells.ipynb) example notebook

#### Raw cells[](#raw-cells)

*Raw* cells provide a place in which you can write *output* directly. Raw cells are not evaluated by the notebook. When passed through [nbconvert](https://nbconvert.readthedocs.io/en/latest/), raw cells arrive in the destination format unmodified. For example, you can type full LaTeX into a raw cell, which will only be rendered by LaTeX after conversion by nbconvert.
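As a small illustration (not part of the original page), the LaTeX carried by a Markdown cell might look like the snippet below; MathJax renders it in the browser, while the same text placed in a raw cell would be passed through untouched until nbconvert hands it to LaTeX:

```
Inline mathematics sits between single dollar signs, e.g. $e^{i\pi} + 1 = 0$,
while displayed equations use double dollar signs or an AMS environment:

$$
\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}
$$

\begin{align}
  \nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} \\
  \nabla \cdot \mathbf{B} &= 0
\end{align}
```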
This is much more convenient for interactive exploration than breaking up a computation into scripts that must be executed together, as was previously necessary, especially if parts of them take a long time to run.

To interrupt a calculation which is taking too long, use the Kernel, Interrupt menu option, or the `i,i` keyboard shortcut. Similarly, to restart the whole computational process, use the Kernel, Restart menu option or the `0,0` shortcut.

A notebook may be downloaded as a `.ipynb` file or converted to a number of other formats using the menu option File, Download as.

See also

[Running Code in the Jupyter Notebook](https://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Running%20Code.ipynb) example notebook

[Notebook Basics](https://nbviewer.jupyter.org/github/jupyter/notebook/blob/6.4.x/docs/source/examples/Notebook/Notebook%20Basics.ipynb) example notebook

#### Keyboard shortcuts[](#keyboard-shortcuts)

All actions in the notebook can be performed with the mouse, but keyboard shortcuts are also available for the most common ones. The essential shortcuts to remember are the following:

* `Shift-Enter`: run cell. Execute the current cell, show any output, and jump to the next cell below. If `Shift-Enter` is invoked on the last cell, it makes a new cell below. This is equivalent to clicking the Cell, Run menu item, or the Play button in the toolbar.
* `Esc`: Command mode. In command mode, you can navigate around the notebook using keyboard shortcuts.
* `Enter`: Edit mode. In edit mode, you can edit text in cells.

For the full list of available shortcuts, click Help, Keyboard Shortcuts in the notebook menus.

### Plotting[](#plotting)

One major feature of the Jupyter notebook is the ability to display plots that are the output of running code cells. The IPython kernel is designed to work seamlessly with the [matplotlib](https://matplotlib.org) plotting library to provide this functionality. Specific plotting library integration is a feature of the kernel.

### Installing kernels[](#installing-kernels)

For information on how to install a Python kernel, refer to the [IPython install page](https://ipython.org/install.html). The Jupyter wiki has a long list of [Kernels for other languages](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels). They usually come with instructions on how to make the kernel available in the notebook.

### Trusting Notebooks[](#trusting-notebooks)

To prevent untrusted code from executing on users’ behalf when notebooks open, we store a signature of each trusted notebook. The notebook server verifies this signature when a notebook is opened. If no matching signature is found, Javascript and HTML output will not be displayed until they are regenerated by re-executing the cells.

Any notebook that you have fully executed yourself will be considered trusted, and its HTML and Javascript output will be displayed on load. If you need to see HTML or Javascript output without re-executing, and you are sure the notebook is not malicious, you can tell Jupyter to trust it at the command-line with:

```
$ jupyter trust mynotebook.ipynb
```

See [Security in notebook documents](index.html#notebook-security) for more details about the trust mechanism.

### Browser Compatibility[](#browser-compatibility)

The Jupyter Notebook aims to support the latest versions of these browsers:

* Chrome
* Safari
* Firefox

Up to date versions of Opera and Edge may also work, but if they don’t, please use one of the supported browsers.
Using Safari with HTTPS and an untrusted certificate is known to not work (websockets will fail).

User interface components[](#user-interface-components)
---

When opening bug reports or sending emails to the Jupyter mailing list, it is useful to know the names of different UI components so that other developers and users have an easier time helping you diagnose your problems. This section will familiarize you with the names of UI elements within the Notebook and the different Notebook modes.

### Notebook Dashboard[](#notebook-dashboard)

When you launch `jupyter notebook`, the first page that you encounter is the Notebook Dashboard.

### Notebook Editor[](#notebook-editor)

Once you've selected a Notebook to edit, the Notebook will open in the Notebook Editor.

### Interactive User Interface Tour of the Notebook[](#interactive-user-interface-tour-of-the-notebook)

If you would like to learn more about the specific elements within the Notebook Editor, you can go through the user interface tour by selecting *Help* in the menubar and then selecting *User Interface Tour*.

#### Edit Mode and Notebook Editor[](#edit-mode-and-notebook-editor)

When a cell is in edit mode, the Cell Mode Indicator will change to reflect the cell's state. This state is indicated by a small pencil icon on the top right of the interface. When the cell is in command mode, there is no icon in that location.

### File Editor[](#file-editor)

Now let's say that you've chosen to open a Markdown file instead of a Notebook file whilst in the Notebook Dashboard. If so, the file will be opened in the File Editor.

Notebook Examples[](#notebook-examples)
---

The pages in this section are all converted notebook files. You can also [view these notebooks on nbviewer](https://nbviewer.jupyter.org/github/jupyter/notebook/blob/6.4.x/docs/source/examples/Notebook/).

### What is the Jupyter Notebook?[](#What-is-the-Jupyter-Notebook?)

#### Introduction[](#Introduction)

The Jupyter Notebook is an **interactive computing environment** that enables users to author notebook documents that include:

- Live code
- Interactive widgets
- Plots
- Narrative text
- Equations
- Images
- Video

These documents provide a **complete and self-contained record of a computation** that can be converted to various formats and shared with others using email, [Dropbox](https://www.dropbox.com/), version control systems (like git/[GitHub](https://github.com)) or [nbviewer.jupyter.org](https://nbviewer.jupyter.org).

##### Components[](#Components)

The Jupyter Notebook combines three components:

* **The notebook web application**: An interactive web application for writing and running code interactively and authoring notebook documents.
* **Kernels**: Separate processes started by the notebook web application that run users' code in a given language and return output back to the notebook web application. The kernel also handles things like computations for interactive widgets, tab completion and introspection.
* **Notebook documents**: Self-contained documents that contain a representation of all content visible in the notebook web application, including inputs and outputs of the computations, narrative text, equations, images, and rich media representations of objects. Each notebook document has its own kernel.

#### Notebook web application[](#Notebook-web-application)

The notebook web application enables users to:

* **Edit code in the browser**, with automatic syntax highlighting, indentation, and tab completion/introspection.
* **Run code from the browser**, with the results of computations attached to the code which generated them.
* See the results of computations with **rich media representations**, such as HTML, LaTeX, PNG, SVG, PDF, etc.
* Create and use **interactive JavaScript widgets**, which bind interactive user interface controls and visualizations to reactive kernel-side computations.
* Author **narrative text** using the [Markdown](https://daringfireball.net/projects/markdown/) markup language.
* Include mathematical equations using **LaTeX syntax in Markdown**, which are rendered in-browser by [MathJax](https://www.mathjax.org/).

#### Kernels[](#Kernels)

Through Jupyter's kernel and messaging architecture, the Notebook allows code to be run in a range of different programming languages. For each notebook document that a user opens, the web application starts a kernel that runs the code for that notebook. Each kernel is capable of running code in a single programming language, and there are kernels available in the following languages:

* Python (<https://github.com/ipython/ipython>)
* Julia (<https://github.com/JuliaLang/IJulia.jl>)
* R (<https://github.com/IRkernel/IRkernel>)
* Ruby (<https://github.com/minrk/iruby>)
* Haskell (<https://github.com/gibiansky/IHaskell>)
* Scala (<https://github.com/Bridgewater/scala-notebook>)
* node.js (<https://gist.github.com/Carreau/4279371>)
* Go (<https://github.com/takluyver/igo>)

The default kernel runs Python code. The notebook provides a simple way for users to pick which of these kernels is used for a given notebook.

Each of these kernels communicates with the notebook web application and web browser using a JSON over ZeroMQ/WebSockets message protocol that is described [here](https://jupyter-client.readthedocs.io/en/latest/messaging.html#messaging). Most users don't need to know about these details, but it helps to understand that "kernels run code."

#### Notebook documents[](#Notebook-documents)

Notebook documents contain the **inputs and outputs** of an interactive session as well as **narrative text** that accompanies the code but is not meant for execution. **Rich output** generated by running code, including HTML, images, video, and plots, is embedded in the notebook, which makes it a complete and self-contained record of a computation.

When you run the notebook web application on your computer, notebook documents are just **files on your local filesystem with a** `.ipynb` **extension**. This allows you to use familiar workflows for organizing your notebooks into folders and sharing them with others.

Notebooks consist of a **linear sequence of cells**. There are three basic cell types:

* **Code cells:** Input and output of live code that is run in the kernel
* **Markdown cells:** Narrative text with embedded LaTeX equations
* **Raw cells:** Unformatted text that is included, without modification, when notebooks are converted to different formats using nbconvert

Internally, notebook documents are [JSON](https://en.wikipedia.org/wiki/JSON) **data** with **binary values** [base64](https://en.wikipedia.org/wiki/Base64) encoded. This allows them to be **read and manipulated programmatically** by any programming language. Because JSON is a text format, notebook documents are version control friendly.

**Notebooks can be exported** to different static formats including HTML, reStructuredText, LaTeX, PDF, and slide shows ([reveal.js](https://revealjs.com)) using Jupyter's `nbconvert` utility.
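Because a notebook document is just JSON on disk, it can be inspected with a few lines of Python. The sketch below only illustrates that point; it assumes a notebook named `mynotebook.ipynb` (a placeholder filename) exists in the current directory and simply counts the cells of each type:

```python
# Minimal sketch: a .ipynb file is plain JSON, so it can be read and
# manipulated programmatically. "mynotebook.ipynb" is a placeholder filename.
import json
from collections import Counter

with open("mynotebook.ipynb") as f:
    nb = json.load(f)

# Tally code, markdown, and raw cells in the notebook document.
print(Counter(cell["cell_type"] for cell in nb["cells"]))
```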
Furthermore, any notebook document available from a **public URL or on GitHub can be shared** via [nbviewer](https://nbviewer.jupyter.org). This service loads the notebook document from the URL and renders it as a static web page. The resulting web page may thus be shared with others **without their needing to install the Jupyter Notebook**.

### Notebook Basics[](#Notebook-Basics)

#### The Notebook dashboard[](#The-Notebook-dashboard)

When you first start the notebook server, your browser will open to the notebook dashboard. The dashboard serves as a home page for the notebook. Its main purpose is to display the notebooks and files in the current directory. For example, here is a screenshot of the dashboard page for the `examples` directory in the Jupyter repository:

The top of the notebook list displays clickable breadcrumbs of the current directory. By clicking on these breadcrumbs or on sub-directories in the notebook list, you can navigate your file system.

To create a new notebook, click on the "New" button at the top of the list and select a kernel from the dropdown (as seen below). Which kernels are listed depends on what's installed on the server. Some of the kernels in the screenshot below may not be available to you.

Notebooks and files can be uploaded to the current directory by dragging a notebook file onto the notebook list or by clicking the "click here" text above the list.

The notebook list shows green "Running" text and a green notebook icon next to running notebooks (as seen below). Notebooks remain running until you explicitly shut them down; closing the notebook's page is not sufficient.

To shut down, delete, duplicate, or rename a notebook, check the checkbox next to it and an array of controls will appear at the top of the notebook list (as seen below). You can also use the same operations on directories and files when applicable.

To see all of your running notebooks along with their directories, click on the "Running" tab:

This view provides a convenient way to track notebooks that you start as you navigate the file system in a long-running notebook server.

#### Overview of the Notebook UI[](#Overview-of-the-Notebook-UI)

If you create a new notebook or open an existing one, you will be taken to the notebook user interface (UI). This UI allows you to run code and author notebook documents interactively. The notebook UI has the following main areas:

* Menu
* Toolbar
* Notebook area and cells

The notebook has an interactive tour of these elements that can be started in the "Help:User Interface Tour" menu item.

#### Modal editor[](#Modal-editor)

Starting with IPython 2.0, the Jupyter Notebook has a modal user interface. This means that the keyboard does different things depending on which mode the Notebook is in. There are two modes: edit mode and command mode.

##### Edit mode[](#Edit-mode)

Edit mode is indicated by a green cell border and a prompt showing in the editor area:

When a cell is in edit mode, you can type into the cell, like a normal text editor.

Enter edit mode by pressing `Enter` or using the mouse to click on a cell's editor area.

##### Command mode[](#Command-mode)

Command mode is indicated by a grey cell border with a blue left margin:

When you are in command mode, you are able to edit the notebook as a whole, but not type into individual cells. Most importantly, in command mode, the keyboard is mapped to a set of shortcuts that let you perform notebook and cell actions efficiently.
For example, if you are in command mode and you press `c`, you will copy the current cell - no modifier is needed. Don't try to type into a cell in command mode; unexpected things will happen!

Enter command mode by pressing `Esc` or using the mouse to click *outside* a cell's editor area.

#### Mouse navigation[](#Mouse-navigation)

All navigation and actions in the Notebook are available using the mouse through the menubar and toolbar, which are both above the main Notebook area:

The first idea of mouse based navigation is that **cells can be selected by clicking on them.** The currently selected cell gets a grey or green border depending on whether the notebook is in edit or command mode. If you click inside a cell's editor area, you will enter edit mode. If you click on the prompt or output area of a cell you will enter command mode.

If you are running this notebook in a live session (not on <http://nbviewer.jupyter.org>), try selecting different cells and going between edit and command mode. Try typing into a cell.

The second idea of mouse based navigation is that **cell actions usually apply to the currently selected cell**. Thus if you want to run the code in a cell, you would select it and click the Run button in the toolbar or the "Cell:Run" menu item. Similarly, to copy a cell you would select it and click the copy button in the toolbar or the "Edit:Copy" menu item. With this simple pattern, you should be able to do almost everything you need with the mouse.

Markdown cells have one other state that can be modified with the mouse. These cells can either be rendered or unrendered. When they are rendered, you will see a nice formatted representation of the cell's contents. When they are unrendered, you will see the raw text source of the cell. To render the selected cell with the mouse, click the Run button in the toolbar or the "Cell:Run" menu item. To unrender the selected cell, double click on the cell.

#### Keyboard Navigation[](#Keyboard-Navigation)

The modal user interface of the Jupyter Notebook has been optimized for efficient keyboard usage. This is made possible by having two different sets of keyboard shortcuts: one set that is active in edit mode and another in command mode.

The most important keyboard shortcuts are `Enter`, which enters edit mode, and `Esc`, which enters command mode.

In edit mode, most of the keyboard is dedicated to typing into the cell's editor. Thus, in edit mode there are relatively few shortcuts. In command mode, the entire keyboard is available for shortcuts, so there are many more. The `Help`->`Keyboard Shortcuts` dialog lists the available shortcuts.

We recommend learning the command mode shortcuts in the following rough order:

1. Basic navigation: `enter`, `shift-enter`, `up/k`, `down/j`
2. Saving the notebook: `s`
3. Change Cell types: `y`, `m`, `1-6`, `t`
4. Cell creation: `a`, `b`
5. Cell editing: `x`, `c`, `v`, `d`, `z`
6. Kernel operations: `i`, `0` (press twice)

### Running Code[](#Running-Code)

First and foremost, the Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel and therefore runs Python code.
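For instance, a quick way to confirm which interpreter the kernel behind this notebook is using is to run a short cell such as the following (a minimal sketch):

```python
# Inspect the Python interpreter that the IPython kernel is running on.
import sys

print(sys.version)      # Python version string
print(sys.executable)   # path to the interpreter serving this kernel
```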
#### Code cells allow you to enter and run code[](#Code-cells-allow-you-to-enter-and-run-code)

Run a code cell using `Shift-Enter` or by pressing the Run button in the toolbar above:

```
[2]:
```

```
a = 10
```

```
[3]:
```

```
print(a)
```

```
10
```

There are two other keyboard shortcuts for running code:

* `Alt-Enter` runs the current cell and inserts a new one below.
* `Ctrl-Enter` runs the current cell and enters command mode.

#### Managing the Kernel[](#Managing-the-Kernel)

Code is run in a separate process called the Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the interrupt button in the toolbar above.

```
[4]:
```

```
import time
time.sleep(10)
```

If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via ctypes to segfault the Python interpreter:

```
[5]:
```

```
import sys
from ctypes import CDLL
# This will crash a Linux or Mac system
# equivalent calls can be made on Windows

# Uncomment these lines if you would like to see the segfault
# dll = 'dylib' if sys.platform == 'darwin' else 'so.6'
# libc = CDLL("libc.%s" % dll)
# libc.time(-1)  # BOOM!!
```

#### Cell menu[](#Cell-menu)

The "Cell" menu has a number of menu items for running code in different ways. These include:

* Run and Select Below
* Run and Insert Below
* Run All
* Run All Above
* Run All Below

#### Restarting the kernels[](#Restarting-the-kernels)

The kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. This is done by clicking on the restart button in the toolbar above.

#### sys.stdout and sys.stderr[](#sys.stdout-and-sys.stderr)

The stdout and stderr streams are displayed as text in the output area.

```
[6]:
```

```
print("hi, stdout")
```

```
hi, stdout
```

```
[7]:
```

```
from __future__ import print_function
print('hi, stderr', file=sys.stderr)
```

```
hi, stderr
```

#### Output is asynchronous[](#Output-is-asynchronous)

All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end.

```
[8]:
```

```
import time, sys
for i in range(8):
    print(i)
    time.sleep(0.5)
```

```
0 1 2 3 4 5 6 7
```

#### Large outputs[](#Large-outputs)

To better handle large outputs, the output area can be collapsed.
Run the following cell and then single- or double-click on the active area to the left of the output:

```
[9]:
```

```
for i in range(50):
    print(i)
```

```
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
```

Beyond a certain point, output will scroll automatically:

```
[10]:
```

```
for i in range(500):
    print(2**i - 1)
```

```
0 1 3 7 15 31 63 127 255 511 1023 2047 4095 8191 16383 32767 65535 131071 262143 524287 1048575
... (output truncated; the cell prints 500 values in total)
```
### Markdown Cells[](#Markdown-Cells)

Text can be added to Jupyter Notebooks using Markdown cells. You can change the cell type to Markdown by using the `Cell` menu, the toolbar, or the key shortcut `m`. Markdown is a popular markup language that is a superset of HTML. Its specification can be found here: <https://daringfireball.net/projects/markdown/>

#### Markdown basics[](#Markdown-basics)

You can make text *italic* or **bold** by surrounding a block of text with a single or double `*`, respectively.

You can build nested itemized or enumerated lists:

* One
  + Sublist
    - This
  + Sublist
    - That
    - The other thing
* Two
  + Sublist
* Three
  + Sublist

Now another list:

1. Here we go
    1. Sublist
    2. Sublist
2. There we go
3. Now this

You can add horizontal rules:

---

Here is a blockquote:

> Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one– and preferably only one –obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea – let's do more of those!

And shorthand for links: [Jupyter's website](https://jupyter.org)

You can use backslash `\` to generate literal characters which would otherwise have special meaning in the Markdown syntax.

```
\*literal asterisks\*
*literal asterisks*
```

Use double backslash `\\` to generate the literal `$` symbol.
#### Headings[](#Headings)

You can add headings by starting a line with one (or multiple) `#` followed by a space, as in the following example:

```
# Heading 1
# Heading 2
## Heading 2.1
## Heading 2.2
```

#### Embedded code[](#Embedded-code)

You can embed code meant for illustration instead of execution in Python:

```
def f(x):
    """a docstring"""
    return x**2
```

or other languages:

```
for (i=0; i<n; i++) {
  printf("hello %d\n", i);
  x += 4;
}
```

#### LaTeX equations[](#LaTeX-equations)

Courtesy of MathJax, you can include mathematical expressions both inline: \(e^{i\pi} + 1 = 0\) and displayed:

\begin{equation}
e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i
\end{equation}

Inline expressions can be added by surrounding the LaTeX code with `$`:

```
$e^{i\pi} + 1 = 0$
```

Expressions on their own line are surrounded by `\begin{equation}` and `\end{equation}`:

```
\begin{equation}
e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i
\end{equation}
```

#### GitHub flavored markdown[](#GitHub-flavored-markdown)

The Notebook webapp supports GitHub flavored Markdown, meaning that you can use triple backticks for code blocks:

````
```python
print "Hello World"
```

```javascript
console.log("Hello World")
```
````

Gives:

```python
print "Hello World"
```

```javascript
console.log("Hello World")
```

And a table like this:

```
| This | is |
|---|---|
| a | table|
```

A nice HTML Table:

| This | is |
| --- | --- |
| a | table |

#### General HTML[](#General-HTML)

Because Markdown is a superset of HTML you can even add things like HTML tables:

| Header 1 | Header 2 |
| --- | --- |
| row 1, cell 1 | row 1, cell 2 |
| row 2, cell 1 | row 2, cell 2 |

#### Local files[](#Local-files)

If you have local files in your Notebook directory, you can refer to these files in Markdown cells directly:

```
[subdirectory/]<filename>
```

For example, in the images folder, we have the Python logo:

```
<img src="../images/python_logo.svg" />
```

and a video with the HTML5 video tag:

```
<video controls src="../images/animation.m4v">animation</video>
```

These do not embed the data into the notebook file, and require that the files exist when you are viewing the notebook.

##### Security of local files[](#Security-of-local-files)

Note that this means that the Jupyter notebook server also acts as a generic file server for files inside the same tree as your notebooks. Access is not granted outside the notebook folder so you have strict control over what files are visible, but for this reason it is highly recommended that you do not run the notebook server with a notebook directory at a high level in your filesystem (e.g. your home directory).

When you run the notebook in a password-protected manner, local file access is restricted to authenticated users unless read-only views are active.

##### Markdown attachments[](#Markdown-attachments)

Since Jupyter notebook version 5.0, in addition to referencing external files you can attach a file to a Markdown cell. To do so, drag the file into a Markdown cell while editing it:

Files are stored in cell metadata and will be automatically scrubbed at save-time if not referenced. You can recognize attached images from other files by their URL, which starts with `attachment:`. For the image above:

```
![pycon-logo.jpg](attachment:pycon-logo.jpg)
```

Keep in mind that attached files will increase the size of your notebook. You can manually edit the attachment by using the `View > Cell Toolbar > Attachment` menu, but you should not need to.
### Keyboard Shortcut Customization[](#Keyboard-Shortcut-Customization)

Starting with Jupyter Notebook 5.0, you can customize the `command` mode shortcuts from within the Notebook Application itself.

Head to the **``Help``** menu and select the **``Edit keyboard Shortcuts``** item. A dialog will guide you through the process of adding custom keyboard shortcuts.

Keyboard shortcuts set from within the Notebook Application will be persisted to your configuration file. A single action may have several shortcuts attached to it.

### Keyboard Shortcut Customization (Pre Notebook 5.0)[](#Keyboard-Shortcut-Customization-(Pre-Notebook-5.0))

Starting with IPython 2.0, keyboard shortcuts in command and edit mode are fully customizable. These customizations are made using the Jupyter JavaScript API. Here is an example that makes the `r` key available for running a cell:

```
[ ]:
```

```
%%javascript

Jupyter.keyboard_manager.command_shortcuts.add_shortcut('r', {
    help : 'run cell',
    help_index : 'zz',
    handler : function (event) {
        IPython.notebook.execute_cell();
        return false;
    }}
);
```

By default the keypress `r`, while in command mode, changes the type of the selected cell to `raw`. This shortcut is overridden by the code in the previous cell, and thus the action will no longer be available via the keypress `r`.

There are a couple of points to mention about this API:

* The `help_index` field is used to sort the shortcuts in the Keyboard Shortcuts help dialog. It defaults to `zz`.
* When a handler returns `false`, it indicates that the event should stop propagating and the default action should not be performed. For further details about the `event` object or event handling, see the jQuery docs.
* If you don’t need a `help` or `help_index` field, you can simply pass a function as the second argument to `add_shortcut`.

```
[ ]:
```

```
%%javascript

Jupyter.keyboard_manager.command_shortcuts.add_shortcut('r', function (event) {
    IPython.notebook.execute_cell();
    return false;
});
```

Likewise, to remove a shortcut, use `remove_shortcut`:

```
[ ]:
```

```
%%javascript

Jupyter.keyboard_manager.command_shortcuts.remove_shortcut('r');
```

If you want your keyboard shortcuts to be active for all of your notebooks, put the above API calls into your `custom.js` file.

Of course, we provide names for the majority of existing actions so that you do not have to re-write everything; here is, for example, how to bind `r` back to its initial behavior:

```
[ ]:
```

```
%%javascript

Jupyter.keyboard_manager.command_shortcuts.add_shortcut('r', 'jupyter-notebook:change-cell-to-raw');
```

### Embracing web standards[](#Embracing-web-standards)

One of the main reasons why we developed the current notebook web application was to embrace web technology. By being a pure web application using HTML, JavaScript, and CSS, the Notebook can get all web technology improvements for free. Thus, as browser support for different media extends, the notebook web app should be able to stay compatible without modification. This is also true of the performance of the user interface, as the speed of JavaScript VMs increases.

The other advantage of using only web technology is that the code of the interface is fully accessible to the end user and is modifiable live. Even if this task is not always easy, we strive to keep our code as accessible and reusable as possible. This should allow us - with minimal effort - to develop small extensions that customize the behavior of the web interface.
#### Tampering with the Notebook application[](#Tampering-with-the-Notebook-application)

The first tool available to you, and one you should be aware of, is your browser’s “developer tools”. The exact naming can change across browsers and might require the installation of extensions. But basically they allow you to inspect/modify the DOM and interact with the JavaScript code that runs the frontend.

* In Chrome and Safari, Developer tools are in the menu `View > Developer > JavaScript Console`
* In Firefox you might need to install [Firebug](http://getfirebug.com/)

Those will be your best friends to debug and try different approaches for your extensions.

##### Injecting JS[](#Injecting-JS)

###### Using magics[](#Using-magics)

The above tools can be tedious when editing long JavaScript files. Therefore we provide the `%%javascript` magic. This allows you to quickly inject JavaScript into the notebook. Still, the JavaScript injected this way will not survive a page reload. Hence, it is a good tool for testing and refining a script.

You might see here and there people modifying CSS and injecting JS into the notebook by reading file(s) and publishing them into the notebook. Not only does this often break the flow of the notebook and leave re-execution broken, it also means that you need to re-execute those cells in the entire notebook every time you need to update the code.

This can still be useful in some cases, like the `%autosave` magic that allows you to control the time between each save. But this can be replaced by a JavaScript dropdown menu to select the save interval.

```
[ ]:
```

```
## you can inspect the autosave code to see what it does.
%autosave??
```

###### custom.js[](#custom.js)

To inject JavaScript we provide an entry point: `custom.js`, which allows the user to execute and load other resources into the notebook. JavaScript code in `custom.js` will be executed when the notebook app starts and can then be used to customize almost anything in the UI and in the behavior of the notebook.

`custom.js` can be found at `~/.jupyter/custom/custom.js`. You can share your `custom.js` with others.

###### Back to theory[](#Back-to-theory)

```
[ ]:
```

```
from jupyter_core.paths import jupyter_config_dir
jupyter_dir = jupyter_config_dir()
jupyter_dir
```

and custom js is in

```
[ ]:
```

```
import os.path
custom_js_path = os.path.join(jupyter_dir, 'custom', 'custom.js')
```

```
[ ]:
```

```
# my custom js
if os.path.isfile(custom_js_path):
    with open(custom_js_path) as f:
        print(f.read())
else:
    print("You don't have a custom.js file")
```

Note that `custom.js` is meant to be modified by the user. When writing a script, you can define it in a separate file and add a line of configuration into `custom.js` that will fetch and execute the file.

**Warning**: even if modifications of `custom.js` take effect immediately after a browser refresh (unless the browser cache is aggressive), *creating* a file in the `static/` directory needs a **server restart**.

#### Exercise:[](#Exercise-:)

* Create a `custom.js` in the right location with the following content:

```
alert("hello world from custom.js")
```

* Restart your server and open any notebook.
* Be greeted by custom.js

Have a look at the [default custom.js](https://github.com/jupyter/notebook/blob/4.0.x/notebook/static/custom/custom.js) to see its contents and for more explanation.

##### For the quick ones:[](#For-the-quick-ones-:)

We’ve seen above that you can change the autosave rate by using a magic.
This is typically something I don’t want to type every time, and that I don’t like to embed into my workflow and documents (readers don’t care what my autosave time is). Let’s build an extension that allows us to do it.

Create a dropdown element in the toolbar (DOM `Jupyter.toolbar.element`). You will need:

* `Jupyter.notebook.set_autosave_interval(milliseconds)`
* to know that 1 min = 60 sec, and 1 sec = 1000 ms

```
var label = jQuery('<label/>').text('AutoScroll Limit:');
var select = jQuery('<select/>')
    //.append(jQuery('<option/>').attr('value', '2').text('2min (default)'))
    .append(jQuery('<option/>').attr('value', undefined).text('disabled'))

// TODO:
//the_toolbar_element.append(label)
//the_toolbar_element.append(select);

select.change(function() {
    var val = jQuery(this).val() // val will be the value in [2]
    // TODO
    // this will be called when dropdown changes
});

var time_m = [1, 5, 10, 15, 30];
for (var i=0; i < time_m.length; i++) {
    var ts = time_m[i];
    //[2] ____ this will be `val` on [1]
    //     |
    //     v
    select.append($('<option/>').attr('value', ts).text(ts + 'min'));
    // this will fill up the dropdown `select` with
    // 1 min
    // 5 min
    // 10 min
    // 15 min
    // ...
}
```

###### A non-interactive example first[](#A-non-interactive-example-first)

I like my Cython to be nicely highlighted:

```
Jupyter.config.cell_magic_highlight['magic_text/x-cython'] = {}
Jupyter.config.cell_magic_highlight['magic_text/x-cython'].reg = [/^%%cython/]
```

`text/x-cython` is the name of the CodeMirror mode; the `magic_` prefix will just patch the mode so that the first line that contains a magic does not screw up the highlighting. `reg` is a list of regular expressions that will trigger the change of mode.

###### Get more documentation[](#Get-more-documentation)

Sadly, you will have to read the js source file (but there are lots of comments) and/or build the JavaScript documentation using yuidoc. If you have `node` and `yui-doc` installed:

```
$ cd ~/jupyter/notebook/notebook/static/notebook/js/
$ yuidoc . --server
warn: (yuidoc): Failed to extract port, setting to the default :3000
info: (yuidoc): Starting [email protected] using [email protected] with [email protected]
info: (yuidoc): Scanning for yuidoc.json file.
info: (yuidoc): Starting YUIDoc with the following options:
info: (yuidoc): { port: 3000, nocode: false, paths: [ '.' ], server: true, outdir: './out' }
info: (yuidoc): Scanning for yuidoc.json file.
info: (server): Starting server: http://127.0.0.1:3000
```

and browse <http://127.0.0.1:3000> to get the documentation.

###### Some convenience methods[](#Some-convenience-methods)

By browsing the documentation you will see that we have some convenience methods that allow us to avoid re-inventing the UI every time:

```
Jupyter.toolbar.add_buttons_group([
    {
        'label'   : 'run qtconsole',
        'icon'    : 'fa-terminal', // select your icon from http://fontawesome.io/icons/
        'callback': function(){Jupyter.notebook.kernel.execute('%qtconsole')}
    }
    // add more buttons here if needed.
]);
```

with a [lot of icons](http://fontawesome.io/icons/) you can select from.

#### Cell Metadata[](#Cell-Metadata)

The most requested feature is generally to be able to distinguish an individual cell in the notebook, or run a specific action on it. To do so, you can either use `Jupyter.notebook.get_selected_cell()`, or rely on `CellToolbar`. This allows you to register a set of actions and graphical elements that will be attached to individual cells.
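Cell metadata is also easy to manipulate outside the browser. The snippet below is not part of the original tutorial; it is a minimal sketch using the `nbformat` package and a hypothetical notebook name, tagging every code cell with a custom metadata field much like the CellToolbar UI described next does interactively:

```
import nbformat

# Hypothetical notebook file; substitute any .ipynb in the current directory.
nb = nbformat.read("example.ipynb", as_version=4)

# Tag each code cell with a custom metadata field, leaving existing values alone.
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.metadata.setdefault("difficulty", "Easy")

nbformat.write(nb, "example.ipynb")
```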
##### Cell Toolbar[](#Cell-Toolbar)

You can see some examples of what can be done by toggling the `Cell Toolbar` selector in the toolbar on top of the notebook. It provides two default `presets`: `Default` and `slideshow`. `Default` allows the user to edit the metadata attached to each cell manually.

First we define a function that takes as its first parameter a DOM element into which to inject UI elements. The second parameter is the cell this element is registered with. Then we will need to register that function and give it a name.

###### Register a callback[](#Register-a-callback)

```
[ ]:
```

```
%%javascript
var CellToolbar = Jupyter.CellToolbar
var toggle = function(div, cell) {
    var button_container = $(div)

    // let's create a button that shows the current value of the metadata
    var button = $('<button/>').addClass('btn btn-mini').text(String(cell.metadata.foo));

    // On click, change the metadata value and update the button label
    button.click(function(){
        var v = cell.metadata.foo;
        cell.metadata.foo = !v;
        button.text(String(!v));
    })

    // add the button to the DOM div.
    button_container.append(button);
}

// now we register the callback under the name foo to give the
// user the ability to use it later
CellToolbar.register_callback('tuto.foo', toggle);
```

###### Registering a preset[](#Registering-a-preset)

This function can now be part of many `presets` of the CellToolbar.

```
[ ]:
```

```
%%javascript
Jupyter.CellToolbar.register_preset('Tutorial 1',['tuto.foo','default.rawedit'])
Jupyter.CellToolbar.register_preset('Tutorial 2',['slideshow.select','tuto.foo'])
```

You should now have access to two presets:

* Tutorial 1
* Tutorial 2

Check that the buttons you defined share state when you toggle presets. Also check that the metadata of the cell is modified when you click the button, and that when the notebook is saved and reloaded the metadata is still available.

###### Exercise:[](#Exercise:)

Try to wrap all the code in a file, put this file in `{jupyter_dir}/custom/<a-name>.js`, and add

```
require(['custom/<a-name>']);
```

in `custom.js` to have this script automatically loaded in all your notebooks.

`require` is provided by a [JavaScript library](http://requirejs.org/) that allows you to express dependencies. For a simple extension like the previous one we directly mutate the global namespace, but for more complex extensions you could pass a callback to the `require([...], <callback>)` call, to allow the user to pass configuration information to your plugin.

In Python terms,

```
require(['a/b', 'c/d'], function( e, f){
    e.something()
    f.something()
})
```

could be read as

```
import a.b as e
import c.d as f
e.something()
f.something()
```

See for example @damianavila’s [“ZenMode” plugin](https://github.com/ipython-contrib/jupyter_contrib_nbextensions/blob/b29c698394239a6931fa4911440550df214812cb/src/jupyter_contrib_nbextensions/nbextensions/zenmode/main.js#L32):

```
// read that as
// import custom.zenmode.main as zenmode
require(['custom/zenmode/main'],function(zenmode){
    zenmode.background('images/back12.jpg');
})
```

###### For the quickest[](#For-the-quickest)

Try to use [the following](https://github.com/ipython/ipython/blob/1.x/IPython/html/static/notebook/js/celltoolbar.js#L367) to bind a dropdown list to `cell.metadata.difficulty.select`.
It should be able to take the 4 following values : * `<None>` * `Easy` * `Medium` * `Hard` We will use it to customize the output of the converted notebook depending on the tag on each cell ``` [1]: ``` ``` # %load soln/celldiff.js ``` ``` [ ]: ``` ``` ``` ### Importing Jupyter Notebooks as Modules[](#Importing-Jupyter-Notebooks-as-Modules) It is a common problem that people want to import code from Jupyter Notebooks. This is made difficult by the fact that Notebooks are not plain Python files, and thus cannot be imported by the regular Python machinery. Fortunately, Python provides some fairly sophisticated [hooks](https://www.python.org/dev/peps/pep-0302/) into the import machinery, so we can actually make Jupyter notebooks importable without much difficulty, and only using public APIs. ``` [ ]: ``` ``` import io, os, sys, types ``` ``` [ ]: ``` ``` from IPython import get_ipython from nbformat import read from IPython.core.interactiveshell import InteractiveShell ``` Import hooks typically take the form of two objects: 1. a Module **Loader**, which takes a module name (e.g. `'IPython.display'`), and returns a Module 2. a Module **Finder**, which figures out whether a module might exist, and tells Python what **Loader** to use ``` [ ]: ``` ``` def find_notebook(fullname, path=None): """find a notebook, given its fully qualified name and an optional path This turns "foo.bar" into "foo/bar.ipynb" and tries turning "Foo_Bar" into "Foo Bar" if Foo_Bar does not exist. """ name = fullname.rsplit('.', 1)[-1] if not path: path = [''] for d in path: nb_path = os.path.join(d, name + ".ipynb") if os.path.isfile(nb_path): return nb_path # let import Notebook_Name find "Notebook Name.ipynb" nb_path = nb_path.replace("_", " ") if os.path.isfile(nb_path): return nb_path ``` #### Notebook Loader[](#Notebook-Loader) Here we have our Notebook Loader. It’s actually quite simple - once we figure out the filename of the module, all it does is: 1. load the notebook document into memory 2. create an empty Module 3. execute every cell in the Module namespace Since IPython cells can have extended syntax, the IPython transform is applied to turn each of these cells into their pure-Python counterparts before executing them. If all of your notebook cells are pure-Python, this step is unnecessary. 
``` [ ]: ``` ``` class NotebookLoader(object): """Module Loader for Jupyter Notebooks""" def __init__(self, path=None): self.shell = InteractiveShell.instance() self.path = path def load_module(self, fullname): """import a notebook as a module""" path = find_notebook(fullname, self.path) print ("importing Jupyter notebook from %s" % path) # load the notebook object with io.open(path, 'r', encoding='utf-8') as f: nb = read(f, 4) # create the module and add it to sys.modules # if name in sys.modules: # return sys.modules[name] mod = types.ModuleType(fullname) mod.__file__ = path mod.__loader__ = self mod.__dict__['get_ipython'] = get_ipython sys.modules[fullname] = mod # extra work to ensure that magics that would affect the user_ns # actually affect the notebook module's ns save_user_ns = self.shell.user_ns self.shell.user_ns = mod.__dict__ try: for cell in nb.cells: if cell.cell_type == 'code': # transform the input to executable Python code = self.shell.input_transformer_manager.transform_cell(cell.source) # run the code in themodule exec(code, mod.__dict__) finally: self.shell.user_ns = save_user_ns return mod ``` #### The Module Finder[](#The-Module-Finder) The finder is a simple object that tells you whether a name can be imported, and returns the appropriate loader. All this one does is check, when you do: ``` import mynotebook ``` it checks whether `mynotebook.ipynb` exists. If a notebook is found, then it returns a NotebookLoader. Any extra logic is just for resolving paths within packages. ``` [ ]: ``` ``` class NotebookFinder(object): """Module finder that locates Jupyter Notebooks""" def __init__(self): self.loaders = {} def find_module(self, fullname, path=None): nb_path = find_notebook(fullname, path) if not nb_path: return key = path if path: # lists aren't hashable key = os.path.sep.join(path) if key not in self.loaders: self.loaders[key] = NotebookLoader(path) return self.loaders[key] ``` #### Register the hook[](#Register-the-hook) Now we register the `NotebookFinder` with `sys.meta_path` ``` [ ]: ``` ``` sys.meta_path.append(NotebookFinder()) ``` After this point, my notebooks should be importable. Let’s look at what we have in the CWD: ``` [ ]: ``` ``` ls nbpackage ``` So I should be able to `import nbpackage.mynotebook`. ``` [ ]: ``` ``` import nbpackage.mynotebook ``` ##### Aside: displaying notebooks[](#Aside:-displaying-notebooks) Here is some simple code to display the contents of a notebook with syntax highlighting, etc. ``` [ ]: ``` ``` from pygments import highlight from pygments.lexers import PythonLexer from pygments.formatters import HtmlFormatter from IPython.display import display, HTML formatter = HtmlFormatter() lexer = PythonLexer() # publish the CSS for pygments highlighting display(HTML(""" <style type='text/css'> %s </style> """ % formatter.get_style_defs() )) ``` ``` [ ]: ``` ``` def show_notebook(fname): """display a short summary of the cells of a notebook""" with io.open(fname, 'r', encoding='utf-8') as f: nb = read(f, 4) html = [] for cell in nb.cells: html.append("<h4>%s cell</h4>" % cell.cell_type) if cell.cell_type == 'code': html.append(highlight(cell.source, lexer, formatter)) else: html.append("<pre>%s</pre>" % cell.source) display(HTML('\n'.join(html))) show_notebook(os.path.join("nbpackage", "mynotebook.ipynb")) ``` So my notebook has some code cells, one of which contains some IPython syntax. Let’s see what happens when we import it ``` [ ]: ``` ``` from nbpackage import mynotebook ``` Hooray, it imported! Does it work? 
``` [ ]: ``` ``` mynotebook.foo() ``` Hooray again! Even the function that contains IPython syntax works: ``` [ ]: ``` ``` mynotebook.has_ip_syntax() ``` #### Notebooks in packages[](#Notebooks-in-packages) We also have a notebook inside the `nb` package, so let’s make sure that works as well. ``` [ ]: ``` ``` ls nbpackage/nbs ``` Note that the `__init__.py` is necessary for `nb` to be considered a package, just like usual. ``` [ ]: ``` ``` show_notebook(os.path.join("nbpackage", "nbs", "other.ipynb")) ``` ``` [ ]: ``` ``` from nbpackage.nbs import other other.bar(5) ``` So now we have importable notebooks, from both the local directory and inside packages. I can even put a notebook inside IPython, to further demonstrate that this is working properly: ``` [ ]: ``` ``` import shutil from IPython.paths import get_ipython_package_dir utils = os.path.join(get_ipython_package_dir(), 'utils') shutil.copy(os.path.join("nbpackage", "mynotebook.ipynb"), os.path.join(utils, "inside_ipython.ipynb") ) ``` and import the notebook from `IPython.utils` ``` [ ]: ``` ``` from IPython.utils import inside_ipython inside_ipython.whatsmyname() ``` This approach can even import functions and classes that are defined in a notebook using the `%%cython` magic. ### Connecting to an existing IPython kernel using the Qt Console[](#Connecting-to-an-existing-IPython-kernel-using-the-Qt-Console) #### The Frontend/Kernel Model[](#The-Frontend/Kernel-Model) The traditional IPython (`ipython`) consists of a single process that combines a terminal based UI with the process that runs the users code. While this traditional application still exists, the modern Jupyter consists of two processes: * Kernel: this is the process that runs the users code. * Frontend: this is the process that provides the user interface where the user types code and sees results. Jupyter currently has 3 frontends: * Terminal Console (`jupyter console`) * Qt Console (`jupyter qtconsole`) * Notebook (`jupyter notebook`) The Kernel and Frontend communicate over a ZeroMQ/JSON based messaging protocol, which allows multiple Frontends (even of different types) to communicate with a single Kernel. This opens the door for all sorts of interesting things, such as connecting a Console or Qt Console to a Notebook’s Kernel. For example, you may want to connect a Qt console to your Notebook’s Kernel and use it as a help browser, calling `??` on objects in the Qt console (whose pager is more flexible than the one in the notebook). This Notebook describes how you would connect another Frontend to an IPython Kernel that is associated with a Notebook. The commands currently given here are specific to the IPython kernel. #### Manual connection[](#Manual-connection) To connect another Frontend to a Kernel manually, you first need to find out the connection information for the Kernel using the `%connect_info` magic: ``` [ ]: ``` ``` %connect_info ``` You can see that this magic displays everything you need to connect to this Notebook’s Kernel. #### Automatic connection using a new Qt Console[](#Automatic-connection-using-a-new-Qt-Console) You can also start a new Qt Console connected to your current Kernel by using the `%qtconsole` magic. This will detect the necessary connection information and start the Qt Console for you automatically. ``` [ ]: ``` ``` a = 10 ``` ``` [ ]: ``` ``` %qtconsole ``` The Markdown parser included in the Jupyter Notebook is MathJax-aware. 
This means that you can freely mix in mathematical expressions using the [MathJax subset of Tex and LaTeX](https://docs.mathjax.org/en/latest/input/tex/). [Some examples from the MathJax demos site](https://mathjax.github.io/MathJax-demos-web/) are reproduced below, as well as the Markdown+TeX source. ### Motivating Examples[](#Motivating-Examples) #### The Lorenz Equations[](#The-Lorenz-Equations) ##### Source[](#Source) ``` \begin{align} \dot{x} & = \sigma(y-x) \\ \dot{y} & = \rho x - y - xz \\ \dot{z} & = -\beta z + xy \end{align} ``` ##### Display[](#Display) \(\begin{align} \dot{x} & = \sigma(y-x) \\ \dot{y} & = \rho x - y - xz \\ \dot{z} & = -\beta z + xy \end{align}\) #### The Cauchy-Schwarz Inequality[](#The-Cauchy-Schwarz-Inequality) ##### Source[](#id1) ``` \begin{equation*} \left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right) \end{equation*} ``` ##### Display[](#id2) \(\begin{equation*} \left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right) \end{equation*}\) #### A Cross Product Formula[](#A-Cross-Product-Formula) ##### Source[](#id3) ``` \begin{equation*} \mathbf{V}_1 \times \mathbf{V}_2 = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial X}{\partial u} & \frac{\partial Y}{\partial u} & 0 \\ \frac{\partial X}{\partial v} & \frac{\partial Y}{\partial v} & 0 \end{vmatrix} \end{equation*} ``` ##### Display[](#id4) \(\begin{equation*} \mathbf{V}_1 \times \mathbf{V}_2 = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial X}{\partial u} & \frac{\partial Y}{\partial u} & 0 \\ \frac{\partial X}{\partial v} & \frac{\partial Y}{\partial v} & 0 \end{vmatrix} \end{equation*}\) #### The probability of getting (k) heads when flipping (n) coins is[](#The-probability-of-getting-(k)-heads-when-flipping-(n)-coins-is) ##### Source[](#id5) ``` \begin{equation*} P(E) = {n \choose k} p^k (1-p)^{ n-k} \end{equation*} ``` ##### Display[](#id6) \(\begin{equation*} P(E) = {n \choose k} p^k (1-p)^{ n-k} \end{equation*}\) #### An Identity of Ramanujan[](#An-Identity-of-Ramanujan) ##### Source[](#id7) ``` \begin{equation*} \frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} = 1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}} {1+\frac{e^{-8\pi}} {1+\ldots} } } } \end{equation*} ``` ##### Display[](#id8) \(\begin{equation*} \frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} = 1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}} {1+\frac{e^{-8\pi}} {1+\ldots} } } } \end{equation*}\) #### A Rogers-Ramanujan Identity[](#A-Rogers-Ramanujan-Identity) ##### Source[](#id9) ``` \begin{equation*} 1 + \frac{q^2}{(1-q)}+\frac{q^6}{(1-q)(1-q^2)}+\cdots = \prod_{j=0}^{\infty}\frac{1}{(1-q^{5j+2})(1-q^{5j+3})}, \quad\quad \text{for $|q|<1$}. \end{equation*} ``` ##### Display[](#id10) \[\begin{equation*} 1 + \frac{q^2}{(1-q)}+\frac{q^6}{(1-q)(1-q^2)}+\cdots = \prod_{j=0}^{\infty}\frac{1}{(1-q^{5j+2})(1-q^{5j+3})}, \quad\quad \text{for $|q|<1$}. 
\end{equation*}\] #### Maxwell’s Equations[](#Maxwell's-Equations) ##### Source[](#id11) ``` \begin{align} \nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\ \nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\ \nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\ \nabla \cdot \vec{\mathbf{B}} & = 0 \end{align} ``` ##### Display[](#id12) \(\begin{align} \nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\ \nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\ \nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\ \nabla \cdot \vec{\mathbf{B}} & = 0 \end{align}\) #### Equation Numbering and References[](#Equation-Numbering-and-References) Equation numbering and referencing will be available in a future version of the Jupyter notebook. #### Inline Typesetting (Mixing Markdown and TeX)[](#Inline-Typesetting-(Mixing-Markdown-and-TeX)) While display equations look good for a page of samples, the ability to mix math and *formatted* **text** in a paragraph is also important. ##### Source[](#id13) ``` This expression $\sqrt{3x-1}+(1+x)^2$ is an example of a TeX inline equation in a [Markdown-formatted](https://daringfireball.net/projects/markdown/) sentence. ``` ##### Display[](#id14) This expression \(\sqrt{3x-1}+(1+x)^2\) is an example of a TeX inline equation in a [Markdown-formatted](https://daringfireball.net/projects/markdown/) sentence. #### Other Syntax[](#Other-Syntax) You will notice in other places on the web that `$$` are needed explicitly to begin and end MathJax typesetting. This is **not** required if you will be using TeX environments, but the Jupyter notebook will accept this syntax on legacy notebooks. #### Source[](#id15) ``` $$ \begin{array}{c} y_1 \\\ y_2 \mathtt{t}_i \\\ z_{3,4} \end{array} $$ ``` ``` $$ \begin{array}{c} y_1 \cr y_2 \mathtt{t}_i \cr y_{3} \end{array} $$ ``` ``` $$\begin{eqnarray} x' &=& &x \sin\phi &+& z \cos\phi \\ z' &=& - &x \cos\phi &+& z \sin\phi \\ \end{eqnarray}$$ ``` ``` $$ x=4 $$ ``` #### Display[](#id16) \[\begin{split}\begin{array}{c} y_1 \\\ y_2 \mathtt{t}_i \\\ z_{3,4} \end{array}\end{split}\] \[\begin{array}{c} y_1 \cr y_2 \mathtt{t}_i \cr y_{3} \end{array}\] \[\begin{split}\begin{eqnarray} x' &=& &x \sin\phi &+& z \cos\phi \\ z' &=& - &x \cos\phi &+& z \sin\phi \\ \end{eqnarray}\end{split}\] \[x=4\] What to do when things go wrong[](#what-to-do-when-things-go-wrong) --- First, have a look at the common problems listed below. If you can figure it out from these notes, it will be quicker than asking for help. Check that you have the latest version of any packages that look relevant. Unfortunately it’s not always easy to figure out what packages are relevant, but if there was a bug that’s already been fixed, it’s easy to upgrade and get on with what you wanted to do. ### Jupyter fails to start[](#jupyter-fails-to-start) * Have you [installed it](https://jupyter.org/install.html)? ;-) * If you’re using a menu shortcut or Anaconda launcher to start it, try opening a terminal or command prompt and running the command `jupyter notebook`. * If it can’t find `jupyter`, you may need to configure your `PATH` environment variable. If you don’t know what that means, and don’t want to find out, just (re)install Anaconda with the default settings, and it should set up PATH correctly. 
* If Jupyter gives an error that it can’t find `notebook`, check with pip or conda that the `notebook` package is installed.
* Try running `jupyter-notebook` (with a hyphen). This should normally be the same as `jupyter notebook` (with a space), but if there’s any difference, the version with the hyphen is the ‘real’ launcher, and the other one wraps that.

### Jupyter doesn’t load or doesn’t work in the browser[](#jupyter-doesn-t-load-or-doesn-t-work-in-the-browser)

* Try in another browser (e.g. if you normally use Firefox, try with Chrome). This helps pin down where the problem is.
* Try disabling any browser extensions and/or any Jupyter extensions you have installed.
* Some internet security software can interfere with Jupyter. If you have security software, try turning it off temporarily, and look in the settings for a more long-term solution.
* In the address bar, try changing between `localhost` and `127.0.0.1`. They should be the same, but in some cases it makes a difference.

### Jupyter can’t start a kernel[](#jupyter-can-t-start-a-kernel)

Files called *kernel specs* tell Jupyter how to start different kinds of kernels. To see where these are on your system, run `jupyter kernelspec list`:

```
$ jupyter kernelspec list
Available kernels:
  python3    /home/takluyver/.local/lib/python3.6/site-packages/ipykernel/resources
  bash       /home/takluyver/.local/share/jupyter/kernels/bash
  ir         /home/takluyver/.local/share/jupyter/kernels/ir
```

There’s a special fallback for the Python kernel: if it doesn’t find a real kernelspec, but it can import the `ipykernel` package, it provides a kernel which will run in the same Python environment as the notebook server. A path ending in `ipykernel/resources`, like in the example above, is this default kernel. The default often does what you want, so if the `python3` kernelspec points somewhere else and you can’t start a Python kernel, try deleting or renaming that kernelspec folder to expose the default.

If your problem is with another kernel, not the Python one we maintain, you may need to look for support about that kernel.

### Python Environments[](#python-environments)

Multiple Python environments, whether based on Anaconda or Python virtual environments, are often the source of reported issues. In many cases, these issues stem from the Notebook server running in one environment while the kernel and/or its resources derive from another environment. Indicators of this scenario include:

* `import` statements within code cells producing `ImportError` or `ModuleNotFoundError` exceptions.
* General kernel startup failures exhibited by nothing happening when attempting to execute a cell.

In these situations, take a close look at your environment structure and ensure all packages required by your notebook’s code are installed in the correct environment. If you need to run the kernel from a different environment than your Notebook server, check out [IPython’s documentation](https://ipython.readthedocs.io/en/stable/install/kernel_install.html#kernels-for-different-environments) for using kernels from different environments, as this is the recommended approach. Anaconda’s [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels) package might also be an option for you in these scenarios.
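A quick way to confirm which environment a kernel is actually running in is to compare the interpreter the kernel reports with the one your Notebook server was started from. This snippet is not from the original guide and uses only the standard library:

```
import sys

# Run this in a notebook cell: it shows which Python the *kernel* is using.
# Compare it with the interpreter running the notebook server
# (e.g. the output of `which jupyter` / `where jupyter` in your terminal).
print("executable:", sys.executable)
print("prefix:    ", sys.prefix)
```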
Another thing to check is the `kernel.json` file that will be located in the aforementioned *kernel specs* directory identified by running `jupyter kernelspec list`. This file will contain an `argv` stanza that includes the actual command to run when launching the kernel. Oftentimes, when reinstalling Python environments, a previous `kernel.json` will reference a Python executable from an old or non-existent location. As a result, whenever you encounter kernel startup issues it’s a good idea to validate the `argv` stanza and ensure that all file references exist and are appropriate (a short sketch that automates this check appears at the end of this section).

### Windows Systems[](#windows-systems)

Although Jupyter Notebook is primarily developed on the various flavors of the Unix operating system, it also supports Microsoft Windows - which introduces its own set of commonly encountered issues, particularly in the areas of security, process management and lower-level libraries.

#### pywin32 Issues[](#pywin32-issues)

The primary package for interacting with Windows’ primitives is `pywin32`.

* The creation of the kernel’s communication file relies on `jupyter_core`’s `secure_write()` function. This function ensures a file is created to which only the owner of the file has access. If libraries like `pywin32` are not properly installed, issues can arise when it’s necessary to use the native Windows libraries. Here’s a portion of such a traceback:

```
File "c:\users\jovyan\python\myenv.venv\lib\site-packages\jupyter_core\paths.py", line 424, in secure_write
  win32_restrict_file_to_user(fname)
File "c:\users\jovyan\python\myenv.venv\lib\site-packages\jupyter_core\paths.py", line 359, in win32_restrict_file_to_user
  import win32api
ImportError: DLL load failed: The specified module could not be found.
```

* As noted earlier, the installation of `pywin32` can be problematic on Windows configurations. When such an issue occurs, you may need to revisit how the environment was set up. Pay careful attention to whether you’re running the 32- or 64-bit version of Windows and be sure to install appropriate packages for that environment. Here’s a portion of such a traceback:

```
File "C:\Users\jovyan\AppData\Roaming\Python\Python37\site-packages\jupyter_core\paths.py", line 435, in secure_write
  win32_restrict_file_to_user(fname)
File "C:\Users\jovyan\AppData\Roaming\Python\Python37\site-packages\jupyter_core\paths.py", line 361, in win32_restrict_file_to_user
  import win32api
ImportError: DLL load failed: %1 is not a valid Win32 application
```

##### Resolving pywin32 Issues[](#resolving-pywin32-issues)

> In this case, your `pywin32` module may not be installed correctly and the following should be attempted:
>
> ```
> pip install --upgrade pywin32
> ```
>
> or:
>
> ```
> conda install --force-reinstall pywin32
> ```
>
> followed by:
>
> ```
> python.exe Scripts/pywin32_postinstall.py -install
> ```
>
> where `Scripts` is located in the active Python’s installation location.

* Another common failure specific to Windows environments is the location of various Python commands. On `*nix` systems, these typically reside in the `bin` directory of the active Python environment. However, on Windows, these tend to reside in the `Scripts` folder - which is a sibling to `bin`. As a result, when encountering kernel startup issues, again, check the `argv` stanza and verify it’s pointing to a valid file. You may find that it’s pointing into `bin` when `Scripts` is correct, or that the referenced file does not include its `.exe` extension - typically resulting in `FileNotFoundError` exceptions.
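To automate the `argv` checks described above, the following sketch (not part of the original guide; the kernelspec path is hypothetical and should be replaced with one reported by `jupyter kernelspec list`) loads a `kernel.json` and verifies that the interpreter it points to actually exists:

```
import json
from pathlib import Path

# Hypothetical location; substitute a directory from `jupyter kernelspec list`.
kernel_json = Path.home() / ".local/share/jupyter/kernels/python3/kernel.json"

spec = json.loads(kernel_json.read_text())
print("argv:", spec["argv"])

# Only the first argv entry is expected to be a real path on disk; placeholders
# such as {connection_file} are filled in by Jupyter at launch time.
interpreter = Path(spec["argv"][0])
print("interpreter exists:", interpreter.exists())
```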
### This Worked An Hour Ago[](#this-worked-an-hour-ago)

The Jupyter stack is very complex and rightfully so; there’s a lot going on. On occasion you might find the system working perfectly well; then, suddenly, you can’t get past a certain cell due to `import` failures. In these situations, it’s best to ask yourself whether any new Python files were added to your notebook development area.

These issues are usually evident from carefully analyzing the traceback produced in the notebook error or the Notebook server’s command window. In these cases, you’ll typically find the Python kernel code (from `IPython` and `ipykernel`) performing *its* imports and notice a file from your notebook development area included in that traceback, followed by an `AttributeError`:

```
File "C:\Users\jovyan\anaconda3\lib\site-packages\ipykernel\connect.py", line 13, in <module>
  from IPython.core.profiledir import ProfileDir
File "C:\Users\jovyan\anaconda3\lib\site-packages\IPython\__init__.py", line 55, in <module>
  from .core.application import Application
...
File "C:\Users\jovyan\anaconda3\lib\site-packages\ipython_genutils\path.py", line 13, in <module>
  import random
File "C:\Users\jovyan\Desktop\Notebooks\random.py", line 4, in <module>
  rand_set = random.sample(english_words_lower_set, 12)
AttributeError: module 'random' has no attribute 'sample'
```

What has happened is that you have named a file (here `random.py`) that conflicts with an installed package used by the kernel software, and this conflict prevents the kernel’s startup. Checking the module’s `__file__` attribute (for example, `print(random.__file__)`) shows which file was actually imported.

**Resolution**: You’ll need to rename your file. A best practice would be to prefix or *namespace* your files so as not to conflict with any Python package.

### Asking for help[](#asking-for-help)

As with any problem, try searching to see if someone has already found an answer. If you can’t find an existing answer, you can ask questions at:

* The [Jupyter Discourse Forum](https://discourse.jupyter.org/)
* The [jupyter-notebook tag on Stackoverflow](https://stackoverflow.com/questions/tagged/jupyter-notebook)
* Peruse the [jupyter/help repository on GitHub](https://github.com/jupyter/help) (read-only)
* Or in an issue on another repository, if it’s clear which component is responsible. Typical repositories include:
  + [jupyter_core](https://github.com/jupyter/jupyter_core) - `secure_write()` and file path issues
  + [jupyter_client](https://github.com/jupyter/jupyter_client) - kernel management issues found in the Notebook server’s command window
  + [IPython](https://github.com/ipython/ipython) and [ipykernel](https://github.com/ipython/ipykernel) - kernel runtime issues typically found in the Notebook server’s command window and/or notebook cell execution

#### Gathering Information[](#gathering-information)

Should you find that your problem warrants an issue being opened in [notebook](https://github.com/jupyter/notebook), please don’t forget to provide details like the following:

* What error messages do you see (within your notebook and, more importantly, in the Notebook server’s command window)?
* What platform are you on?
* How did you install Jupyter?
* What have you tried already?

The `jupyter troubleshoot` command collects a lot of information about your installation, which can also be useful.

When providing textual information, it’s most helpful if you can *scrape* the contents into the issue rather than providing a screenshot. This enables others to select pieces of that content so they can search more efficiently and try to help.

Remember that it’s not anyone’s job to help you. We want Jupyter to work for you, but we can’t always help everyone individually.

Changelog[](#changelog)
---

A summary of changes in the Jupyter notebook.
For more detailed information, see [GitHub](https://github.com/jupyter/notebook). Use `pip install notebook --upgrade` or `conda upgrade notebook` to upgrade to the latest release. We strongly recommend that you upgrade pip to version 9+ of pip before upgrading `notebook`. Use `pip install pip --upgrade` to upgrade pip. Check pip version with `pip --version`. ### 6.5.4[](#id1) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.5.3...428e00bef821acb42f13843fb2949d225d081d56)) #### Enhancements made[](#enhancements-made) * Add show_banner trait to control the banner display [#6808](https://github.com/jupyter/notebook/pull/6808) ([@echarles](https://github.com/echarles)) #### Contributors to this release[](#contributors-to-this-release) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2023-03-06&to=2023-04-06&type=c)) [@echarles](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aecharles+updated%3A2023-03-06..2023-04-06&type=Issues) | [@github-actions](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Agithub-actions+updated%3A2023-03-06..2023-04-06&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajtpio+updated%3A2023-03-06..2023-04-06&type=Issues) | [@krassowski](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akrassowski+updated%3A2023-03-06..2023-04-06&type=Issues) | [@yuvipanda](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ayuvipanda+updated%3A2023-03-06..2023-04-06&type=Issues) ### 6.5.3[](#id2) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.5.2...7939fc595db4ffc3482031365d17ef72a02c085e)) #### Enhancements made[](#id3) * Add a banner and log to information about the migration to Notebook 7 plan [#6742](https://github.com/jupyter/notebook/pull/6742) ([@echarles](https://github.com/echarles)) * Add sys_info to page template for 6.5.x [#6668](https://github.com/jupyter/notebook/pull/6668) ([@juhasch](https://github.com/juhasch)) #### Bugs fixed[](#bugs-fixed) * Add .mo and .json files for translations [#6728](https://github.com/jupyter/notebook/pull/6728) ([@frenzymadness](https://github.com/frenzymadness)) * Apply PR #6609 to 6.5.x (Fix rename_file and delete_file to handle hidden files properly) [#6660](https://github.com/jupyter/notebook/pull/6660) ([@yacchin1205](https://github.com/yacchin1205)) #### Other merged PRs[](#other-merged-prs) * Fix ru_RU translation [#6745](https://github.com/jupyter/notebook/pull/6745) ([@andrii-i](https://github.com/andrii-i)) * Update kernel translation [#6744](https://github.com/jupyter/notebook/pull/6744) ([@JasonWeill](https://github.com/JasonWeill)) #### Contributors to this release[](#id4) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-10-30&to=2023-03-06&type=c)) [@andrii-i](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aandrii-i+updated%3A2022-10-30..2023-03-06&type=Issues) | [@echarles](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aecharles+updated%3A2022-10-30..2023-03-06&type=Issues) | [@fcollonval](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Afcollonval+updated%3A2022-10-30..2023-03-06&type=Issues) | [@frenzymadness](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Afrenzymadness+updated%3A2022-10-30..2023-03-06&type=Issues) | 
[@github-actions](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Agithub-actions+updated%3A2022-10-30..2023-03-06&type=Issues) | [@JasonWeill](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AJasonWeill+updated%3A2022-10-30..2023-03-06&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajtpio+updated%3A2022-10-30..2023-03-06&type=Issues) | [@juhasch](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajuhasch+updated%3A2022-10-30..2023-03-06&type=Issues) | [@RRosio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3ARRosio+updated%3A2022-10-30..2023-03-06&type=Issues) | [@venkatasg](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Avenkatasg+updated%3A2022-10-30..2023-03-06&type=Issues) | [@yacchin1205](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ayacchin1205+updated%3A2022-10-30..2023-03-06&type=Issues) ### 6.5.2[](#id5) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.5.1...8a00144fa9afc26ff9a29c9abf12db4014f52293)) #### Bugs fixed[](#id6) * Ensure custom preload is correctly handled [#6580](https://github.com/jupyter/notebook/pull/6580) ([@echarles](https://github.com/echarles)) * Fix: jQuery-UI 404 Error by updating dependency path in static template [#6578](https://github.com/jupyter/notebook/pull/6578) ([@RRosio](https://github.com/RRosio)) #### Maintenance and upkeep improvements[](#maintenance-and-upkeep-improvements) * Depend on nbclassic 0.4.7 [#6593](https://github.com/jupyter/notebook/pull/6593) ([@echarles](https://github.com/echarles)) #### Contributors to this release[](#id7) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-10-13&to=2022-10-30&type=c)) [@bnavigator](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Abnavigator+updated%3A2022-10-13..2022-10-30&type=Issues) | [@echarles](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aecharles+updated%3A2022-10-13..2022-10-30&type=Issues) | [@fcollonval](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Afcollonval+updated%3A2022-10-13..2022-10-30&type=Issues) | [@github-actions](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Agithub-actions+updated%3A2022-10-13..2022-10-30&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajtpio+updated%3A2022-10-13..2022-10-30&type=Issues) | [@RRosio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3ARRosio+updated%3A2022-10-13..2022-10-30&type=Issues) ### 6.5.1[](#id8) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.5.0...67546dad676025b70b8b5f061c42ed31029f5dac)) #### Merged PRs[](#merged-prs) * fix: pin temporary to nbclassic 0.4.5 [#6570](https://github.com/jupyter/notebook/pull/6570) ([@echarles](https://github.com/echarles)) #### Contributors to this release[](#id9) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-10-13&to=2022-10-13&type=c)) [@echarles](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aecharles+updated%3A2022-10-13..2022-10-13&type=Issues) ### 6.5.0[](#id10) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.5.0rc0...3c7afbdff6ec33e61531b3cbe3bc20b8440d3181)) #### Bugs fixed[](#id11) * Forward port of #6461 - Fix a typo in exception text [#6545](https://github.com/jupyter/notebook/pull/6545) 
([@krassowski](https://github.com/krassowski)) * Normalise `os_path` [#6540](https://github.com/jupyter/notebook/pull/6540) ([@krassowski](https://github.com/krassowski)) #### Contributors to this release[](#id12) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-08-30&to=2022-10-13&type=c)) [@echarles](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aecharles+updated%3A2022-08-30..2022-10-13&type=Issues) | [@krassowski](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akrassowski+updated%3A2022-08-30..2022-10-13&type=Issues) ### 6.5.0rc0[](#rc0) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.5.0b0...6d7109a6f39b8ad92d06ebf34e8dbbca5c9cbaf6)) #### Merged PRs[](#id13) * Update redirect logic and tests [#6511](https://github.com/jupyter/notebook/pull/6511) ([@RRosio](https://github.com/RRosio)) #### Contributors to this release[](#id14) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-07-29&to=2022-08-30&type=c)) [@github-actions](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Agithub-actions+updated%3A2022-07-29..2022-08-30&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajtpio+updated%3A2022-07-29..2022-08-30&type=Issues) | [@RRosio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3ARRosio+updated%3A2022-07-29..2022-08-30&type=Issues) ### 6.5.0b0[](#b0) No merged PRs ### 6.5.0a0[](#a0) ([Full Changelog](https://github.com/jupyter/notebook/compare/6.4.12...87d57658aaeccaffb5242a3b7b95702636922e8c)) #### Maintenance and upkeep improvements[](#id15) * Selenium test updates [#6484](https://github.com/jupyter/notebook/pull/6484) ([@ericsnekbytes](https://github.com/ericsnekbytes)) * Make notebook 6.5.x point to nbclassic static assets [#6474](https://github.com/jupyter/notebook/pull/6474) ([@ericsnekbytes](https://github.com/ericsnekbytes)) #### Documentation improvements[](#documentation-improvements) * Update contributing docs to reflect changes to build process [#6488](https://github.com/jupyter/notebook/pull/6488) ([@RRosio](https://github.com/RRosio)) * Fix Check Release/link_check CI Job [#6485](https://github.com/jupyter/notebook/pull/6485) ([@RRosio](https://github.com/RRosio)) #### Contributors to this release[](#id16) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-06-07&to=2022-07-26&type=c)) [@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2022-06-07..2022-07-26&type=Issues) | [@echarles](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aecharles+updated%3A2022-06-07..2022-07-26&type=Issues) | [@ericsnekbytes](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aericsnekbytes+updated%3A2022-06-07..2022-07-26&type=Issues) | [@github-actions](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Agithub-actions+updated%3A2022-06-07..2022-07-26&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajtpio+updated%3A2022-06-07..2022-07-26&type=Issues) | [@ofek](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aofek+updated%3A2022-06-07..2022-07-26&type=Issues) | [@RRosio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3ARRosio+updated%3A2022-06-07..2022-07-26&type=Issues) ### 6.4.12[](#id17) ([Full 
Changelog](https://github.com/jupyter/notebook/compare/v6.4.11...6.4.12) * Address security advisory [GHSA-v7vq-3x77-87vg](https://github.com/jupyter/notebook/security/advisories/GHSA-v7vq-3x77-87vg) ### 6.4.11[](#id18) ([Full Changelog](https://github.com/jupyter/notebook/compare/6.4.10...3911672959fcd35cf4a1b1ad7c9c8a5651c17ae6)) #### Bugs fixed[](#id19) * Update further to ipykernel comm refactoring [#6358](https://github.com/jupyter/notebook/pull/6358) ([@echarles](https://github.com/echarles)) #### Maintenance and upkeep improvements[](#id20) * Add testpath to the test dependencies. [#6357](https://github.com/jupyter/notebook/pull/6357) ([@echarles](https://github.com/echarles)) * Temporary workaround to fix js-tests related to sanitizer js loading by phantomjs [#6356](https://github.com/jupyter/notebook/pull/6356) ([@echarles](https://github.com/echarles)) * Use place-hold.it instead of plaecehold.it to create image placeholders [#6320](https://github.com/jupyter/notebook/pull/6320) ([@echarles](https://github.com/echarles)) * Migrate to python 3.7+ [#6260](https://github.com/jupyter/notebook/pull/6260) - Fixes [#6256](https://github.com/jupyter/notebook/pull/6256) ([@penguinolog](https://github.com/penguinolog)) #### Contributors to this release[](#id21) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-03-15&to=2022-04-18&type=c)) [@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2022-03-15..2022-04-18&type=Issues) | [@echarles](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aecharles+updated%3A2022-03-15..2022-04-18&type=Issues) | [@fcollonval](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Afcollonval+updated%3A2022-03-15..2022-04-18&type=Issues) | [@github-actions](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Agithub-actions+updated%3A2022-03-15..2022-04-18&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajtpio+updated%3A2022-03-15..2022-04-18&type=Issues) | [@penguinolog](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Apenguinolog+updated%3A2022-03-15..2022-04-18&type=Issues) ### 6.4.9[](#id22) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.4.7...9e3a7001117e64a24ead07b888bd055fdd66faf3)) #### Maintenance and upkeep improvements[](#id23) * Update links and fix check-release [#6310](https://github.com/jupyter/notebook/pull/6310) ([@blink1073](https://github.com/blink1073)) * Update 6.4.x branch with some missing commits [#6308](https://github.com/jupyter/notebook/pull/6308) ([@kycutler](https://github.com/kycutler)) #### Other merged PRs[](#id24) * Specify minimum version of nbconvert required [#6286](https://github.com/jupyter/notebook/pull/6286) ([@adamjstewart](https://github.com/adamjstewart)) #### Contributors to this release[](#id25) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-01-12&to=2022-03-14&type=c)) [@adamjstewart](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aadamjstewart+updated%3A2022-01-12..2022-03-14&type=Issues) | [@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2022-01-12..2022-03-14&type=Issues) | [@github-actions](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Agithub-actions+updated%3A2022-01-12..2022-03-14&type=Issues) | 
[@jtpio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajtpio+updated%3A2022-01-12..2022-03-14&type=Issues) | [@kevin-bates](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akevin-bates+updated%3A2022-01-12..2022-03-14&type=Issues) | [@kycutler](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akycutler+updated%3A2022-01-12..2022-03-14&type=Issues) | [@Zsailer](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AZsailer+updated%3A2022-01-12..2022-03-14&type=Issues) ### 6.4.8[](#id26) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.4.7...479902d83a691253e0cff8439a33577e82408317)) #### Bugs fixed[](#id27) * Fix to remove potential memory leak on Jupyter Notebooks ZMQChannelHandler code [#6251](https://github.com/jupyter/notebook/pull/6251) ([@Vishwajeet0510](https://github.com/Vishwajeet0510)) #### Contributors to this release[](#id28) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2022-01-12&to=2022-01-25&type=c)) [@Vishwajeet0510](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AVishwajeet0510+updated%3A2022-01-12..2022-01-25&type=Issues) ### 6.4.7[](#id29) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.4.6...b77b5e38b8fa1a20150d7fa4d735dbf1c4f00418)) #### Bugs fixed[](#id30) * Fix Chinese punctuation [#6268](https://github.com/jupyter/notebook/pull/6268) ([@LiHua-Official](https://github.com/LiHua-Official)) * Add date field to kernel message header [#6265](https://github.com/jupyter/notebook/pull/6265) ([@kevin-bates](https://github.com/kevin-bates)) * Fix deprecation warning [#6253](https://github.com/jupyter/notebook/pull/6253) ([@tornaria](https://github.com/tornaria)) #### Maintenance and upkeep improvements[](#id31) * Enforce labels on PRs [#6235](https://github.com/jupyter/notebook/pull/6235) ([@blink1073](https://github.com/blink1073)) * Fix: CI error for python 3.6 & macOS [#6215](https://github.com/jupyter/notebook/pull/6215) ([@penguinolog](https://github.com/penguinolog)) #### Other merged PRs[](#id32) * handle KeyError when get session [#6245](https://github.com/jupyter/notebook/pull/6245) ([@ccw630](https://github.com/ccw630)) * Updated doc for passwd [#6209](https://github.com/jupyter/notebook/pull/6209) ([@antoinecarme](https://github.com/antoinecarme)) #### Contributors to this release[](#id33) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2021-11-16&to=2022-01-12&type=c)) [@antoinecarme](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aantoinecarme+updated%3A2021-11-16..2022-01-12&type=Issues) | [@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2021-11-16..2022-01-12&type=Issues) | [@ccw630](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Accw630+updated%3A2021-11-16..2022-01-12&type=Issues) | [@kevin-bates](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akevin-bates+updated%3A2021-11-16..2022-01-12&type=Issues) | [@LiHua-Official](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3ALiHua-Official+updated%3A2021-11-16..2022-01-12&type=Issues) | [@penguinolog](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Apenguinolog+updated%3A2021-11-16..2022-01-12&type=Issues) | [@tornaria](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Atornaria+updated%3A2021-11-16..2022-01-12&type=Issues) ### 
6.4.6[](#id34) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.4.5...160c27d3c23dafe8b42240571db21b0d5cbae2fe)) #### Bugs fixed[](#id35) * Fix `asyncio` error when opening notebooks [#6221](https://github.com/jupyter/notebook/pull/6221) ([@dleen](https://github.com/dleen)) * Change to use a universal Chinese translation on certain words [#6218](https://github.com/jupyter/notebook/pull/6218) ([@jackexu](https://github.com/jackexu)) * Fix Chinese translation typo [#6211](https://github.com/jupyter/notebook/pull/6211) ([@maliubiao](https://github.com/maliubiao) * Fix `send2trash` tests failing on Windows [#6127](https://github.com/jupyter/notebook/pull/6127) ([@dolfinus](https://github.com/dolfinus)) #### Maintenance and upkeep improvements[](#id36) * TST: don’t look in user site for serverextensions [#6233](https://github.com/jupyter/notebook/pull/6233) ([@bnavigator](https://github.com/bnavigator)) * Enable terminal tests as `pywinpty` is ported for python 3.9 [#6228](https://github.com/jupyter/notebook/pull/6228) ([@nsait-linaro](https://github.com/nsait-linaro)) #### Contributors to this release[](#id37) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2021-10-19&to=2021-11-16&type=c)) [@bnavigator](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Abnavigator+updated%3A2021-10-19..2021-11-16&type=Issues) | [@dleen](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Adleen+updated%3A2021-10-19..2021-11-16&type=Issues) | [@dolfinus](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Adolfinus+updated%3A2021-10-19..2021-11-16&type=Issues) | [@jackexu](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajackexu+updated%3A2021-10-19..2021-11-16&type=Issues) | [@kevin-bates](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akevin-bates+updated%3A2021-10-19..2021-11-16&type=Issues) | [@maliubiao](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Amaliubiao+updated%3A2021-10-19..2021-11-16&type=Issues) | [@nsait-linaro](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ansait-linaro+updated%3A2021-10-19..2021-11-16&type=Issues) | [@takluyver](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Atakluyver+updated%3A2021-10-19..2021-11-16&type=Issues) | [@Zsailer](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AZsailer+updated%3A2021-10-19..2021-11-16&type=Issues) ### 6.4.5[](#id38) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.4.4...ccd9665571107e02a325a738b8baebd6532b2d3d)) #### Bug fixes[](#bug-fixes) * Recover from failure to render mimetype [#6181](https://github.com/jupyter/notebook/pull/6181) ([@martinRenou](https://github.com/martinRenou)) #### Maintenance and upkeep improvements[](#id39) * Fix crypto handling [#6197](https://github.com/jupyter/notebook/pull/6197) ([@blink1073](https://github.com/blink1073)) * Fix `jupyter_client` warning [#6178](https://github.com/jupyter/notebook/pull/6178) ([@martinRenou](https://github.com/martinRenou)) #### Documentation improvements[](#id40) * Fix nbsphinx settings [#6200](https://github.com/jupyter/notebook/pull/6200) ([@mgeier](https://github.com/mgeier)) * Fully revert the pinning of `nbsphinx` to 0.8.6 [#6201](https://github.com/jupyter/notebook/pull/6201) ([@kevin-bates](https://github.com/kevin-bates)) * Pin `nbsphinx` to 0.8.6, clean up orphaned resources [#6194](https://github.com/jupyter/notebook/pull/6194) 
([@kevin-bates](https://github.com/kevin-bates)) * Fix typo in docstring [#6188](https://github.com/jupyter/notebook/pull/6188) ([@jgarte](https://github.com/jgarte)) #### Contributors to this release[](#id41) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2021-09-03&to=2021-10-19&type=c)) [@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2021-09-03..2021-10-19&type=Issues) | [@jgarte](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajgarte+updated%3A2021-09-03..2021-10-19&type=Issues) | [@kevin-bates](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akevin-bates+updated%3A2021-09-03..2021-10-19&type=Issues) | [@martinRenou](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AmartinRenou+updated%3A2021-09-03..2021-10-19&type=Issues) | [@mgeier](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Amgeier+updated%3A2021-09-03..2021-10-19&type=Issues) ### 6.4.4[](#id42) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.4.3...c06c340574e1d2207940c5bd1190eb73d82ab945)) #### Documentation improvements[](#id43) * Update Manual Release Instructions [#6152](https://github.com/jupyter/notebook/pull/6152) ([@blink1073](https://github.com/blink1073)) #### Other merged PRs[](#id44) * Use default JupyterLab CSS sanitizer options for Markdown [#6160](https://github.com/jupyter/notebook/pull/6160) ([@krassowski](https://github.com/krassowski)) * Fix syntax highlight [#6128](https://github.com/jupyter/notebook/pull/6128) ([@massongit](https://github.com/massongit)) #### Contributors to this release[](#id45) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2021-08-11&to=2021-09-03&type=c)) [@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2021-08-11..2021-09-03&type=Issues) | [@kevin-bates](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akevin-bates+updated%3A2021-08-11..2021-09-03&type=Issues) | [@krassowski](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akrassowski+updated%3A2021-08-11..2021-09-03&type=Issues) | [@massongit](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Amassongit+updated%3A2021-08-11..2021-09-03&type=Issues) | [@minrk](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aminrk+updated%3A2021-08-11..2021-09-03&type=Issues) | [@Zsailer](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AZsailer+updated%3A2021-08-11..2021-09-03&type=Issues) ### 6.4.3[](#id46) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.4.2...c373bd89adaaddffbb71747ebbcfe8a749cae0a8)) #### Bugs fixed[](#id47) * Add @babel/core dependency [#6133](https://github.com/jupyter/notebook/pull/6133) ([@afshin](https://github.com/afshin)) * Switch webpack to production mode [#6131](https://github.com/jupyter/notebook/pull/6131) ([@afshin](https://github.com/afshin)) #### Maintenance and upkeep improvements[](#id48) * Clean up link checking [#6130](https://github.com/jupyter/notebook/pull/6130) ([@blink1073](https://github.com/blink1073)) #### Contributors to this release[](#id49) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2021-08-06&to=2021-08-10&type=c)) [@afshin](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aafshin+updated%3A2021-08-06..2021-08-10&type=Issues) | 
[@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2021-08-06..2021-08-10&type=Issues) | [@Zsailer](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AZsailer+updated%3A2021-08-06..2021-08-10&type=Issues) ### 6.4.2[](#id50) ([Full Changelog](https://github.com/jupyter/notebook/compare/v6.4.0...999e8322bcd24e0ed62b180c19ec13db3f48165b)) #### Bugs fixed[](#id51) * Add missing file to manifest [#6122](https://github.com/jupyter/notebook/pull/6122) ([@afshin](https://github.com/afshin)) * Fix issue #3218 [#6108](https://github.com/jupyter/notebook/pull/6108) ([@Nazeeh21](https://github.com/Nazeeh21)) * Fix version of jupyter-packaging in pyproject.toml [#6101](https://github.com/jupyter/notebook/pull/6101) ([@frenzymadness](https://github.com/frenzymadness)) * “#element”.tooltip is not a function on home page fixed. [#6070](https://github.com/jupyter/notebook/pull/6070) ([@ilayh123](https://github.com/ilayh123)) #### Maintenance and upkeep improvements[](#id52) * Enhancements to the desktop entry [#6099](https://github.com/jupyter/notebook/pull/6099) ([@Amr-Ibra](https://github.com/Amr-Ibra)) * Add missing spaces to help messages in config file [#6085](https://github.com/jupyter/notebook/pull/6085) ([@saiwing-yeung](https://github.com/saiwing-yeung)) #### Contributors to this release[](#id53) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2021-05-17&to=2021-08-06&type=c)) [@afshin](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aafshin+updated%3A2021-05-17..2021-08-06&type=Issues) | [@Amr-Ibra](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AAmr-Ibra+updated%3A2021-05-17..2021-08-06&type=Issues) | [@frenzymadness](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Afrenzymadness+updated%3A2021-05-17..2021-08-06&type=Issues) | [@ilayh123](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ailayh123+updated%3A2021-05-17..2021-08-06&type=Issues) | [@kevin-bates](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akevin-bates+updated%3A2021-05-17..2021-08-06&type=Issues) | [@Nazeeh21](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3ANazeeh21+updated%3A2021-05-17..2021-08-06&type=Issues) | [@saiwing-yeung](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Asaiwing-yeung+updated%3A2021-05-17..2021-08-06&type=Issues) ### 6.4.0[](#id54) ([Full Changelog](https://github.com/jupyter/notebook/compare/6.3.0...80eb286f316838afc76a9a84b06f54e7dccb6c86)) #### Bugs fixed[](#id55) * Fix Handling of Encoded Paths in Save As Dialog [#6030](https://github.com/jupyter/notebook/pull/6030) ([@afshin](https://github.com/afshin)) * Fix: split_cell doesn’t always split cell [#6017](https://github.com/jupyter/notebook/pull/6017) ([@gamestrRUS](https://github.com/gamestrRUS)) * Correct ‘Content-Type’ headers [#6026](https://github.com/jupyter/notebook/pull/6026) ([@faucct](https://github.com/faucct)) * Fix skipped tests & remove deprecation warnings [#6018](https://github.com/jupyter/notebook/pull/6018) ([@befeleme](https://github.com/befeleme)) * [Gateway] Track only this server’s kernels [#5980](https://github.com/jupyter/notebook/pull/5980) ([@kevin-bates](https://github.com/kevin-bates)) * Bind the HTTPServer in start [#6061](https://github.com/jupyter/notebook/pull/6061) #### Maintenance and upkeep improvements[](#id56) * Revert “do not apply asyncio patch for tornado >=6.1” 
[#6052](https://github.com/jupyter/notebook/pull/6052) ([@minrk](https://github.com/minrk)) * Use Jupyter Releaser [#6048](https://github.com/jupyter/notebook/pull/6048) ([@afshin](https://github.com/afshin)) * Add Workflow Permissions for Lock Bot [#6042](https://github.com/jupyter/notebook/pull/6042) ([@jtpio](https://github.com/jtpio)) * Fixes related to the recent changes in the documentation [#6021](https://github.com/jupyter/notebook/pull/6021) ([@frenzymadness](https://github.com/frenzymadness)) * Add maths checks in CSS reference test [#6035](https://github.com/jupyter/notebook/pull/6035) ([@stef4k](https://github.com/stef4k)) * Add Issue Lock and Answered Bots [#6019](https://github.com/jupyter/notebook/pull/6019) ([@afshin](https://github.com/afshin)) #### Documentation improvements[](#id57) * Spelling correction [#6045](https://github.com/jupyter/notebook/pull/6045) ([@wggillen](https://github.com/wggillen)) * Minor typographical and comment changes [#6025](https://github.com/jupyter/notebook/pull/6025) ([@misterhay](https://github.com/misterhay)) * Fixes related to the recent changes in the documentation [#6021](https://github.com/jupyter/notebook/pull/6021) ([@frenzymadness](https://github.com/frenzymadness)) * Fix readthedocs environment [#6020](https://github.com/jupyter/notebook/pull/6020) ([@blink1073](https://github.com/blink1073)) #### Contributors to this release[](#id58) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2021-03-22&to=2021-05-12&type=c)) [@afshin](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aafshin+updated%3A2021-03-22..2021-05-12&type=Issues) | [@befeleme](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Abefeleme+updated%3A2021-03-22..2021-05-12&type=Issues) | [@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2021-03-22..2021-05-12&type=Issues) | [@faucct](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Afaucct+updated%3A2021-03-22..2021-05-12&type=Issues) | [@frenzymadness](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Afrenzymadness+updated%3A2021-03-22..2021-05-12&type=Issues) | [@gamestrRUS](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AgamestrRUS+updated%3A2021-03-22..2021-05-12&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajtpio+updated%3A2021-03-22..2021-05-12&type=Issues) | [@kevin-bates](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akevin-bates+updated%3A2021-03-22..2021-05-12&type=Issues) | [@minrk](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aminrk+updated%3A2021-03-22..2021-05-12&type=Issues) | [@misterhay](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Amisterhay+updated%3A2021-03-22..2021-05-12&type=Issues) | [@stef4k](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Astef4k+updated%3A2021-03-22..2021-05-12&type=Issues) | [@wggillen](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Awggillen+updated%3A2021-03-22..2021-05-12&type=Issues) ### 6.3.0[](#id59) #### Merged PRs[](#id60) * Add square logo and desktop entry files [#6010](https://github.com/jupyter/notebook/pull/6010) ([@befeleme](https://github.com/befeleme)) * Modernize Changelog [#6008](https://github.com/jupyter/notebook/pull/6008) ([@afshin](https://github.com/afshin)) * Add missing “import inspect” [#5999](https://github.com/jupyter/notebook/pull/5999) 
([@mgeier](https://github.com/mgeier)) * Add Codecov badge to README [#5989](https://github.com/jupyter/notebook/pull/5989) ([@thomasrockhu](https://github.com/thomasrockhu)) * Remove configuration for nosetests from setup.cfg [#5986](https://github.com/jupyter/notebook/pull/5986) ([@frenzymadness](https://github.com/frenzymadness)) * Update security.rst [#5978](https://github.com/jupyter/notebook/pull/5978) ([@dlrice](https://github.com/dlrice)) * Docs-Translations: Updated Hindi and Chinese Readme.md [#5976](https://github.com/jupyter/notebook/pull/5976) ([@rjn01](https://github.com/rjn01)) * Allow /metrics by default if auth is off [#5974](https://github.com/jupyter/notebook/pull/5974) ([@blairdrummond](https://github.com/blairdrummond)) * Skip terminal tests on Windows 3.9+ (temporary) [#5968](https://github.com/jupyter/notebook/pull/5968) ([@kevin-bates](https://github.com/kevin-bates)) * Update GatewayKernelManager to derive from AsyncMappingKernelManager [#5966](https://github.com/jupyter/notebook/pull/5966) ([@kevin-bates](https://github.com/kevin-bates)) * Drop use of deprecated pyzmq.ioloop [#5965](https://github.com/jupyter/notebook/pull/5965) ([@kevin-bates](https://github.com/kevin-bates)) * Drop support for Python 3.5 [#5962](https://github.com/jupyter/notebook/pull/5962) ([@kevin-bates](https://github.com/kevin-bates)) * Allow jupyter_server-based contents managers in notebook [#5957](https://github.com/jupyter/notebook/pull/5957) ([@afshin](https://github.com/afshin)) * Russian translation fixes [#5954](https://github.com/jupyter/notebook/pull/5954) ([@insolor](https://github.com/insolor)) * Increase culling test idle timeout [#5952](https://github.com/jupyter/notebook/pull/5952) ([@kevin-bates](https://github.com/kevin-bates)) * Re-enable support for answer_yes flag [#5941](https://github.com/jupyter/notebook/pull/5941) ([@afshin](https://github.com/afshin)) * Replace Travis and Appveyor with Github Actions [#5938](https://github.com/jupyter/notebook/pull/5938) ([@kevin-bates](https://github.com/kevin-bates)) * DOC: Server extension, extra docs on configuration/authentication. 
[#5937](https://github.com/jupyter/notebook/pull/5937) ([@Carreau](https://github.com/Carreau)) #### Contributors to this release[](#id61) ([GitHub contributors page for this release](https://github.com/jupyter/notebook/graphs/contributors?from=2021-01-13&to=2021-03-18&type=c)) [@abielhammonds](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aabielhammonds+updated%3A2021-01-13..2021-03-18&type=Issues) | [@afshin](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aafshin+updated%3A2021-01-13..2021-03-18&type=Issues) | [@ajharry](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aajharry+updated%3A2021-01-13..2021-03-18&type=Issues) | [@Alokrar](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AAlokrar+updated%3A2021-01-13..2021-03-18&type=Issues) | [@befeleme](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Abefeleme+updated%3A2021-01-13..2021-03-18&type=Issues) | [@blairdrummond](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablairdrummond+updated%3A2021-01-13..2021-03-18&type=Issues) | [@blink1073](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ablink1073+updated%3A2021-01-13..2021-03-18&type=Issues) | [@bollwyvl](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Abollwyvl+updated%3A2021-01-13..2021-03-18&type=Issues) | [@Carreau](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3ACarreau+updated%3A2021-01-13..2021-03-18&type=Issues) | [@ChenChenDS](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AChenChenDS+updated%3A2021-01-13..2021-03-18&type=Issues) | [@cosmoscalibur](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Acosmoscalibur+updated%3A2021-01-13..2021-03-18&type=Issues) | [@dlrice](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Adlrice+updated%3A2021-01-13..2021-03-18&type=Issues) | [@dwanneruchi](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Adwanneruchi+updated%3A2021-01-13..2021-03-18&type=Issues) | [@ElisonSherton](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AElisonSherton+updated%3A2021-01-13..2021-03-18&type=Issues) | [@FazeelUsmani](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AFazeelUsmani+updated%3A2021-01-13..2021-03-18&type=Issues) | [@frenzymadness](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Afrenzymadness+updated%3A2021-01-13..2021-03-18&type=Issues) | [@goerz](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Agoerz+updated%3A2021-01-13..2021-03-18&type=Issues) | [@insolor](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ainsolor+updated%3A2021-01-13..2021-03-18&type=Issues) | [@jasongrout](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ajasongrout+updated%3A2021-01-13..2021-03-18&type=Issues) | [@JianghuiDu](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AJianghuiDu+updated%3A2021-01-13..2021-03-18&type=Issues) | [@JuzerShakir](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AJuzerShakir+updated%3A2021-01-13..2021-03-18&type=Issues) | [@kevin-bates](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Akevin-bates+updated%3A2021-01-13..2021-03-18&type=Issues) | [@Khalilsqu](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AKhalilsqu+updated%3A2021-01-13..2021-03-18&type=Issues) | 
[@meeseeksdev](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ameeseeksdev+updated%3A2021-01-13..2021-03-18&type=Issues) | [@mgeier](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Amgeier+updated%3A2021-01-13..2021-03-18&type=Issues) | [@michaelpedota](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Amichaelpedota+updated%3A2021-01-13..2021-03-18&type=Issues) | [@mjbright](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Amjbright+updated%3A2021-01-13..2021-03-18&type=Issues) | [@MSeal](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AMSeal+updated%3A2021-01-13..2021-03-18&type=Issues) | [@ncoughlin](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ancoughlin+updated%3A2021-01-13..2021-03-18&type=Issues) | [@NTimmons](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3ANTimmons+updated%3A2021-01-13..2021-03-18&type=Issues) | [@ProsperousHeart](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AProsperousHeart+updated%3A2021-01-13..2021-03-18&type=Issues) | [@rjn01](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Arjn01+updated%3A2021-01-13..2021-03-18&type=Issues) | [@slw07g](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Aslw07g+updated%3A2021-01-13..2021-03-18&type=Issues) | [@stenivan](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Astenivan+updated%3A2021-01-13..2021-03-18&type=Issues) | [@takluyver](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Atakluyver+updated%3A2021-01-13..2021-03-18&type=Issues) | [@thomasrockhu](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Athomasrockhu+updated%3A2021-01-13..2021-03-18&type=Issues) | [@wgilpin](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Awgilpin+updated%3A2021-01-13..2021-03-18&type=Issues) | [@wxtt522](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Awxtt522+updated%3A2021-01-13..2021-03-18&type=Issues) | [@yuvipanda](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3Ayuvipanda+updated%3A2021-01-13..2021-03-18&type=Issues) | [@Zsailer](https://github.com/search?q=repo%3Ajupyter%2Fnotebook+involves%3AZsailer+updated%3A2021-01-13..2021-03-18&type=Issues) ### 6.2.0[](#id62) ### Merged PRs[](#id63) * Increase minimum tornado version ([5933](https://github.com/jupyter/notebook/pull/5933)) * Adjust skip decorators to avoid remaining dependency on nose ([5932](https://github.com/jupyter/notebook/pull/5932)) * Ensure that cell ids persist after save ([5928](https://github.com/jupyter/notebook/pull/5928)) * Add reconnection to Gateway (form nb2kg) ([5924](https://github.com/jupyter/notebook/pull/5924)) * Fix some typos ([5917](https://github.com/jupyter/notebook/pull/5917)) * Handle TrashPermissionError, now that it exist ([5894](https://github.com/jupyter/notebook/pull/5894)) Thank you to all the contributors: * @kevin-bates * @mishaschwartz * @oyvsyo * @user202729 * @stefanor ### 6.1.6[](#id64) ### Merged PRs[](#id65) * do not require nose for testing ([5826](https://github.com/jupyter/notebook/pull/5826)) * [docs] Update Chinese and Hindi readme.md ([5823](https://github.com/jupyter/notebook/pull/5823)) * Add support for creating terminals via GET ([5813](https://github.com/jupyter/notebook/pull/5813)) * Made doc translations in Hindi and Chinese ([5787](https://github.com/jupyter/notebook/pull/5787)) Thank you to all the contributors: * @pgajdos * @rjn01 * @kevin-bates * @virejdasani ### 
6.1.5[](#id66) 6.1.5 is a security release, fixing one vulnerability: * Fix open redirect vulnerability GHSA-c7vm-f5p4-8fqh (CVE to be assigned) ### 6.1.4[](#id67) * Fix broken links to jupyter documentation ([5686](https://github.com/jupyter/notebook/pull/5686)) * Add additional entries to troubleshooting section ([5695](https://github.com/jupyter/notebook/pull/5695)) * Revert change in page alignment ([5703](https://github.com/jupyter/notebook/pull/5703)) * Bug fix: remove double encoding in download files ([5720](https://github.com/jupyter/notebook/pull/5720)) * Fix typo for Check in zh_CN ([5730](https://github.com/jupyter/notebook/pull/5730)) * Require a file name in the “Save As” dialog ([5733](https://github.com/jupyter/notebook/pull/5733)) Thank you to all the contributors: * bdbai * <NAME> * <NAME> * <NAME> * <NAME> ### 6.1.3[](#id68) * Title new buttons with label if action undefined ([5676](https://github.com/jupyter/notebook/pull/5676)) Thank you to all the contributors: * <NAME> ### 6.1.2[](#id69) * Fix russian message format for delete/duplicate actions ([5662](https://github.com/jupyter/notebook/pull/5662)) * Remove unnecessary import of bind_unix_socket ([5666](https://github.com/jupyter/notebook/pull/5666)) * Tooltip style scope fix ([5672](https://github.com/jupyter/notebook/pull/5672)) Thank you to all the contributors: * <NAME> * <NAME> * <NAME> ### 6.1.1[](#id70) * Prevent inclusion of requests_unixsocket on Windows ([5650](https://github.com/jupyter/notebook/pull/5650)) Thank you to all the contributors: * <NAME> ### 6.1.0[](#id71) Please note that this repository is currently maintained by a skeleton crew of maintainers from the Jupyter community. For our approach moving forward, please see this [notice](https://github.com/jupyter/notebook#notice) from the README. Thank you. Here is an enumeration of changes made since the last release and included in 6.1.0. * Remove deprecated encoding parameter for Python 3.9 compatibility. ([5174](https://github.com/jupyter/notebook/pull/5174)) * Add support for async kernel management ([4479](https://github.com/jupyter/notebook/pull/4479)) * Fix typo in password_required help message ([5320](https://github.com/jupyter/notebook/pull/5320)) * Gateway only: Ensure launch and request timeouts are in sync ([5317](https://github.com/jupyter/notebook/pull/5317)) * Update Markdown Cells example to HTML5 video tag ([5411](https://github.com/jupyter/notebook/pull/5411)) * Integrated LoginWidget into edit to enable users to logout from the t… ([5406](https://github.com/jupyter/notebook/pull/5406)) * Update message about minimum Tornado version ([5222](https://github.com/jupyter/notebook/pull/5222)) * Logged notebook type ([5425](https://github.com/jupyter/notebook/pull/5425)) * Added nl language ([5354](https://github.com/jupyter/notebook/pull/5354)) * Add UNIX socket support to notebook server. 
([4835](https://github.com/jupyter/notebook/pull/4835)) * Update CodeMirror dependency ([5198](https://github.com/jupyter/notebook/pull/5198)) * Tree added download multiple files ([5351](https://github.com/jupyter/notebook/pull/5351)) * Toolbar buttons tooltip: show help instead of label ([5107](https://github.com/jupyter/notebook/pull/5107)) * Remove unnecessary import of requests_unixsocket ([5451](https://github.com/jupyter/notebook/pull/5451)) * Add ability to cull terminals and track last activity ([5372](https://github.com/jupyter/notebook/pull/5372)) * Code refactoring notebook.js ([5352](https://github.com/jupyter/notebook/pull/5352)) * Install terminado for docs build ([5462](https://github.com/jupyter/notebook/pull/5462)) * Convert notifications JS test to selenium ([5455](https://github.com/jupyter/notebook/pull/5455)) * Add cell attachments to markdown example ([5412](https://github.com/jupyter/notebook/pull/5412)) * Add Japanese document ([5231](https://github.com/jupyter/notebook/pull/5231)) * Migrate Move multiselection test to selenium ([5158](https://github.com/jupyter/notebook/pull/5158)) * Use `cmdtrl-enter` to run a cell ([5120](https://github.com/jupyter/notebook/pull/5120)) * Fix broken “Raw cell MIME type” dialog ([5385](https://github.com/jupyter/notebook/pull/5385)) * Make a notebook writable after successful save-as ([5296](https://github.com/jupyter/notebook/pull/5296)) * Add actual watch script ([4738](https://github.com/jupyter/notebook/pull/4738)) * Added `--autoreload` flag to `NotebookApp` ([4795](https://github.com/jupyter/notebook/pull/4795)) * Enable check_origin on gateway websocket communication ([5471](https://github.com/jupyter/notebook/pull/5471)) * Restore detection of missing terminado package ([5465](https://github.com/jupyter/notebook/pull/5465)) * Culling: ensure `last_activity` attr exists before use ([5355](https://github.com/jupyter/notebook/pull/5355)) * Added functionality to allow filter kernels by Jupyter Enterprise Gat… ([5484](https://github.com/jupyter/notebook/pull/5484)) * ‘Play’ icon for run-cell toolbar button ([2922](https://github.com/jupyter/notebook/pull/2922)) * Bump minimum version of jQuery to 3.5.0 ([5491](https://github.com/jupyter/notebook/pull/5491)) * Remove old JS markdown tests, add a new one in selenium ([5497](https://github.com/jupyter/notebook/pull/5497)) * Add support for more RTL languages ([5036](https://github.com/jupyter/notebook/pull/5036)) * Make markdown cells stay RTL in edit mode ([5037](https://github.com/jupyter/notebook/pull/5037)) * Unforce RTL output display ([5039](https://github.com/jupyter/notebook/pull/5039)) * Fixed multicursor backspacing ([4880](https://github.com/jupyter/notebook/pull/4880)) * Implemented Split Cell for multicursor ([4824](https://github.com/jupyter/notebook/pull/4824)) * Alignment issue [FIXED] ([3173](https://github.com/jupyter/notebook/pull/3173)) * MathJax: Support for `\gdef` ([4407](https://github.com/jupyter/notebook/pull/4407)) * Another (Minor) Duplicate Code Reduction ([5316](https://github.com/jupyter/notebook/pull/5316)) * Update readme regarding maintenance ([5500](https://github.com/jupyter/notebook/pull/5500)) * Document contents chunks ([5508](https://github.com/jupyter/notebook/pull/5508)) * Backspace deletes empty line ([5516](https://github.com/jupyter/notebook/pull/5516)) * The dropdown submenu at notebook page is not keyboard accessible ([4732](https://github.com/jupyter/notebook/pull/4732)) * Tooltips visible through keyboard navigation for specified 
buttons ([4729](https://github.com/jupyter/notebook/pull/4729)) * Fix for recursive symlink ([4670](https://github.com/jupyter/notebook/pull/4670)) * Fix for the terminal shutdown issue ([4180](https://github.com/jupyter/notebook/pull/4180)) * Add japanese translation files ([4490](https://github.com/jupyter/notebook/pull/4490)) * Workaround for socket permission errors on Cygwin ([4584](https://github.com/jupyter/notebook/pull/4584)) * Implement optional markdown header and footer files ([4043](https://github.com/jupyter/notebook/pull/4043)) * Remove double link when using `custom_display_url` ([5544](https://github.com/jupyter/notebook/pull/5544)) * Respect `cell.is_editable` during find-and-replace ([5545](https://github.com/jupyter/notebook/pull/5545)) * Fix exception causes all over the codebase ([5556](https://github.com/jupyter/notebook/pull/5556) * Improve login shell heuristics ([5588](https://github.com/jupyter/notebook/pull/5588)) * Added support for `JUPYTER_TOKEN_FILE` ([5587](https://github.com/jupyter/notebook/pull/5587)) * Kill notebook itself when server cull idle kernel ([5593](https://github.com/jupyter/notebook/pull/5593)) * Implement password hashing with bcrypt ([3793](https://github.com/jupyter/notebook/pull/3793)) * Fix broken links ([5600](https://github.com/jupyter/notebook/pull/5600)) * Russian internationalization support ([5571](https://github.com/jupyter/notebook/pull/5571)) * Add a metadata tag to override notebook direction (ltr/rtl) ([5052](https://github.com/jupyter/notebook/pull/5052)) * Paste two images from clipboard in markdown cell ([5598](https://github.com/jupyter/notebook/pull/5598)) * Add keyboard shortcuts to menu dropdowns ([5525](https://github.com/jupyter/notebook/pull/5525)) * Update codemirror to `5.56.0+components1` ([5637](https://github.com/jupyter/notebook/pull/5637)) Thank you to all the contributors: * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * berendjan * <NAME> * bzinberg * <NAME> * <NAME> * <NAME> * <NAME> * dmpe * dylanzjy * dSchurch * <NAME> * ErwinRussel * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * levinxo * <NAME> * <NAME> * <NAME> * <NAME> * mattn * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * PierreMB * pinarkavak * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * taohan16 * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> ### 6.0.3[](#id72) * Dependency updates to fix startup issues on Windows platform * Add support for nbconvert 6.x * Creation of recent tab Thanks for all the contributors: * <NAME> * <NAME> * ahangsleben * <NAME> * <NAME> * <NAME> * Min RK * forest0 * <NAME> * <NAME> * <NAME> * <NAME> * krinsman * TPartida * <NAME> * <NAME> ### 6.0.2[](#id73) * Update JQuery dependency to version 3.4.1 to fix security vulnerability (CVE-2019-11358) * Update CodeMirror to version 5.48.4 to fix Python formatting issues * Continue removing obsolete Python 2.x code/dependencies * Multiple documentation updates Thanks for all the contributors: * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> ### 6.0.1[](#id74) * Attempt to re-establish websocket connection to Gateway ([4777](https://github.com/jupyter/notebook/pull/4777)) * Add missing react-dom js to package data ([4772](https://github.com/jupyter/notebook/pull/4772)) Thanks for all the contributors: * <NAME> * Min RK ### 6.0[](#id75) This is the first major release of the Jupyter Notebook since version 5.0 (March 
2017). We encourage users to start trying JupyterLab, which has just announced its 1.0 release in preparation for a future transition.

* Remove Python 2.x support in favor of Python 3.5 and higher.
* Multiple accessibility enhancements and bug-fixes.
* Multiple translation enhancements and bug-fixes.
* Remove deprecated ANSI CSS styles.
* Native support to forward requests to Jupyter Gateway(s) (Embedded NB2KG).
* Use JavaScript to redirect users to notebook homepage.
* Enhanced SSL/TLS security by using PROTOCOL_TLS, which selects the highest SSL/TLS protocol version available that both the client and server support. When PROTOCOL_TLS is not available, use PROTOCOL_SSLv23.
* Add `?no_track_activity=1` argument to allow API requests to not be registered as activity (e.g. API calls by external activity monitors).
* A kernel shutting down due to an idle timeout is no longer considered an activity-updating event.
* Further improve compatibility with tornado 6 with improved checks for when websockets are closed.
* Launch the browser with a local file which redirects to the server address including the authentication token. This prevents another logged-in user from stealing the token from command line arguments and authenticating to the server. The single-use token previously used to mitigate this has been removed. Thanks to Dr. <NAME> for suggesting the local file approach.
* Respect nbconvert entrypoints as sources for exporters.
* Update CodeMirror to 5.37, which includes f-string syntax for Python 3.6.
* Update jquery-ui to 1.12.
* Execute cells by clicking icon in input prompt.
* New “Save as” menu option.
* When serving on a loopback interface, protect against DNS rebinding by checking the `Host` header from the browser. This check can be disabled if necessary by setting `NotebookApp.allow_remote_access`. (Disabled by default while we work out some Mac issues in [3754](https://github.com/jupyter/notebook/issues/3754)).
* Add kernel_info_timeout traitlet to enable restarting slow kernels.
* Add `custom_display_host` config option to override displayed URL.
* Add /metrics endpoint for Prometheus Metrics.
* Optimize large file uploads.
* Allow access control headers to be overridden in jupyter_notebook_config.py to support greater CORS and proxy configuration flexibility.
* Add support for terminals on Windows.
* Add a “restart and run all” button to the toolbar.
* Frontend/extension-config: allow default json files in a .d directory.
* Allow setting token via jupyter_token env.
* Cull idle kernels using `--MappingKernelManager.cull_idle_timeout`.
* Allow read-only notebooks to be trusted.
* Convert JS tests to Selenium.

The following security fixes were included in previous minor releases of Jupyter Notebook and are also included in version 6.0:

* Fix Open Redirect vulnerability (CVE-2019-10255) where certain malicious URLs could redirect from the Jupyter login page to a malicious site after a successful login.
* Contains a security fix for a cross-site inclusion (XSSI) vulnerability (CVE-2019–9644), where files at a known URL could be included in a page from an unauthorized website if the user is logged into a Jupyter server. The fix involves setting the `X-Content-Type-Options: nosniff` header, and applying CSRF checks previously on all non-GET API requests to GET requests to API endpoints and the /files/ endpoint.
* Check Host header to more securely protect localhost deployments from DNS rebinding. This is a pre-emptive measure, not fixing a known vulnerability.
Use `.NotebookApp.allow_remote_access` and `.NotebookApp.local_hostnames` to configure access. * Upgrade bootstrap to 3.4, fixing an XSS vulnerability, which has been assigned [CVE-2018-14041](https://nvd.nist.gov/vuln/detail/CVE-2018-14041). * Contains a security fix preventing malicious directory names from being able to execute javascript. * Contains a security fix preventing nbconvert endpoints from executing javascript with access to the server API. CVE request pending. Thanks for all the contributors: * <NAME> * <NAME>, MBA * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * Gabriel * <NAME> * <NAME> * Gestalt LUR * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * Min RK * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * Sally * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * Steve (G<NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * Tim * <NAME> * <NAME> * <NAME> * Todd * <NAME> * <NAME> * <NAME> * Victor * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME>anda * ashley teoh * nullptr ### 5.7.8[](#id76) * Fix regression in restarting kernels in 5.7.5. The restart handler would return before restart was completed. * Further improve compatibility with tornado 6 with improved checks for when websockets are closed. * Fix regression in 5.7.6 on Windows where .js files could have the wrong mime-type. * Fix Open Redirect vulnerability (CVE-2019-10255) where certain malicious URLs could redirect from the Jupyter login page to a malicious site after a successful login. 5.7.7 contained only a partial fix for this issue. ### 5.7.6[](#id77) 5.7.6 contains a security fix for a cross-site inclusion (XSSI) vulnerability (CVE-2019–9644), where files at a known URL could be included in a page from an unauthorized website if the user is logged into a Jupyter server. The fix involves setting the `X-Content-Type-Options: nosniff` header, and applying CSRF checks previously on all non-GET API requests to GET requests to API endpoints and the /files/ endpoint. The attacking page is able to access some contents of files when using Internet Explorer through script errors, but this has not been demonstrated with other browsers. ### 5.7.5[](#id78) * Fix compatibility with tornado 6 ([4392](https://github.com/jupyter/notebook/pull/4392), [4449](https://github.com/jupyter/notebook/pull/4449)). * Fix opening integer filedescriptor during startup on Python 2 ([4349](https://github.com/jupyter/notebook/pull/4349)) * Fix compatibility with asynchronous [KernelManager.restart_kernel]{.title-ref} methods ([4412](https://github.com/jupyter/notebook/pull/4412)) ### 5.7.4[](#id79) 5.7.4 fixes a bug introduced in 5.7.3, in which the `list_running_servers()` function attempts to parse HTML files as JSON, and consequently crashes ([4284](https://github.com/jupyter/notebook/pull/4284)). 
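For context on the 5.7.4 fix above: `list_running_servers()` is the public helper in `notebook.notebookapp` that reads the `nbserver-*.json` files from the Jupyter runtime directory. A minimal usage sketch (illustrative only, not part of the release notes; the printed keys are typical examples, not a guaranteed schema):

```python
# Minimal sketch: enumerate locally running notebook servers via the helper
# that the 5.7.4 fix repaired. Printed keys are examples, not a full schema.
from notebook.notebookapp import list_running_servers

for info in list_running_servers():
    # Each entry is a dict loaded from an nbserver-<pid>.json runtime file.
    print(info.get("url"), info.get("notebook_dir"), info.get("pid"))
```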
### 5.7.3[](#id80) 5.7.3 contains one security improvement and one security fix: * Launch the browser with a local file which redirects to the server address including the authentication token ([4260](https://github.com/jupyter/notebook/pull/4260)). This prevents another logged-in user from stealing the token from command line arguments and authenticating to the server. The single-use token previously used to mitigate this has been removed. Thanks to Dr. <NAME> for suggesting the local file approach. * Upgrade bootstrap to 3.4, fixing an XSS vulnerability, which has been assigned [CVE-2018-14041](https://nvd.nist.gov/vuln/detail/CVE-2018-14041) ([4271](https://github.com/jupyter/notebook/pull/4271)). ### 5.7.2[](#id81) 5.7.2 contains a security fix preventing malicious directory names from being able to execute javascript. CVE request pending. ### 5.7.1[](#id82) 5.7.1 contains a security fix preventing nbconvert endpoints from executing javascript with access to the server API. CVE request pending. ### 5.7.0[](#id83) New features: * Update to CodeMirror to 5.37, which includes f-string syntax for Python 3.6 ([3816](https://github.com/jupyter/notebook/pull/3816)) * Update jquery-ui to 1.12 ([3836](https://github.com/jupyter/notebook/pull/3836)) * Check Host header to more securely protect localhost deployments from DNS rebinding. This is a pre-emptive measure, not fixing a known vulnerability ([3766](https://github.com/jupyter/notebook/pull/3766)). Use `.NotebookApp.allow_remote_access` and `.NotebookApp.local_hostnames` to configure access. * Allow access-control-allow-headers to be overridden ([3886](https://github.com/jupyter/notebook/pull/3886)) * Allow configuring max_body_size and max_buffer_size ([3829](https://github.com/jupyter/notebook/pull/3829)) * Allow configuring get_secure_cookie keyword-args ([3778](https://github.com/jupyter/notebook/pull/3778)) * Respect nbconvert entrypoints as sources for exporters ([3879](https://github.com/jupyter/notebook/pull/3879)) * Include translation sources in source distributions ([3925](https://github.com/jupyter/notebook/pull/3925), [3931](https://github.com/jupyter/notebook/pull/3931)) * Various improvements to documentation ([3799](https://github.com/jupyter/notebook/pull/3799), [3800](https://github.com/jupyter/notebook/pull/3800), [3806](https://github.com/jupyter/notebook/pull/3806), [3883](https://github.com/jupyter/notebook/pull/3883), [3908](https://github.com/jupyter/notebook/pull/3908)) Fixing problems: * Fix breadcrumb link when running with a base url ([3905](https://github.com/jupyter/notebook/pull/3905)) * Fix possible type error when closing activity stream ([3907](https://github.com/jupyter/notebook/pull/3907)) * Disable metadata editing for non-editable cells ([3744](https://github.com/jupyter/notebook/pull/3744)) * Fix some styling and alignment of prompts caused by regressions in 5.6.0. * Enter causing page reload in shortcuts editor ([3871](https://github.com/jupyter/notebook/pull/3871)) * Fix uploading to the same file twice ([3712](https://github.com/jupyter/notebook/pull/3712)) See the 5.7 milestone on GitHub for a complete list of [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A5.7) involved in this release. 
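As a companion to the configuration-related entries in the 5.7.0 notes above (Host-header checking, `allow_remote_access`/`local_hostnames`, and the configurable body/buffer size limits), here is a minimal, illustrative `jupyter_notebook_config.py` fragment; the values are placeholders, not recommendations:

```python
# jupyter_notebook_config.py -- illustrative sketch of the 5.7.0-era options
# mentioned above; adjust the values for your own deployment.
c = get_config()  # injected by Jupyter when this config file is loaded

# Host-header / DNS-rebinding protection
c.NotebookApp.allow_remote_access = False      # keep the loopback-only default
c.NotebookApp.local_hostnames = ["localhost"]  # hostnames treated as local

# Request size limits made configurable in 5.7.0 (values in bytes)
c.NotebookApp.max_body_size = 512 * 1024 * 1024
c.NotebookApp.max_buffer_size = 512 * 1024 * 1024
```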
Thanks to the following contributors: * <NAME> * <NAME> * <NAME> * bxy007 * <NAME> * <NAME> * <NAME> * Gabriel * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> * Mokkapati, Praneet(ES) * <NAME> * <NAME> * <NAME> * <NAME> * <NAME> ### 5.6.0[](#id84) New features: * Execute cells by clicking icon in input prompt ([3535](https://github.com/jupyter/notebook/pull/3535), [3687](https://github.com/jupyter/notebook/pull/3687)) * New “Save as” menu option ([3289](https://github.com/jupyter/notebook/pull/3289)) * When serving on a loopback interface, protect against DNS rebinding by checking the `Host` header from the browser ([3714](https://github.com/jupyter/notebook/pull/3714)). This check can be disabled if necessary by setting `NotebookApp.allow_remote_access`. (Disabled by default while we work out some Mac issues in [3754](https://github.com/jupyter/notebook/issues/3754)). * Add kernel_info_timeout traitlet to enable restarting slow kernels ([3665](https://github.com/jupyter/notebook/pull/3665)) * Add `custom_display_host` config option to override displayed URL ([3668](https://github.com/jupyter/notebook/pull/3668)) * Add /metrics endpoint for Prometheus Metrics ([3490](https://github.com/jupyter/notebook/pull/3490)) * Update to MathJax 2.7.4 ([3751](https://github.com/jupyter/notebook/pull/3751)) * Update to jQuery 3.3 ([3655](https://github.com/jupyter/notebook/pull/3655)) * Update marked to 0.4 ([3686](https://github.com/jupyter/notebook/pull/3686)) Fixing problems: * Don’t duplicate token in displayed URL ([3656](https://github.com/jupyter/notebook/pull/3656)) * Clarify displayed URL when listening on all interfaces ([3703](https://github.com/jupyter/notebook/pull/3703)) * Don’t trash non-empty directories on Windows ([3673](https://github.com/jupyter/notebook/pull/3673)) * Include LICENSE file in wheels ([3671](https://github.com/jupyter/notebook/pull/3671)) * Don’t show “0 active kernels” when starting the notebook ([3696](https://github.com/jupyter/notebook/pull/3696)) Testing: * Add find replace test ([3630](https://github.com/jupyter/notebook/pull/3630)) * Selenium test for deleting all cells ([3601](https://github.com/jupyter/notebook/pull/3601)) * Make creating a new notebook more robust ([3726](https://github.com/jupyter/notebook/pull/3726)) Thanks to the following contributors: * <NAME> ([arovit](https://github.com/arovit)) * lucasoshiro ([lucasoshiro](https://github.com/lucasoshiro)) * <NAME> ([mpacer](https://github.com/mpacer)) * <NAME> ([takluyver](https://github.com/takluyver)) * Todd ([toddrme2178](https://github.com/toddrme2178)) * <NAME> ([yuvipanda](https://github.com/yuvipanda)) See the 5.6 milestone on GitHub for a complete list of [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A5.6) involved in this release. 
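The `/metrics` endpoint added in 5.6.0 serves Prometheus text-format metrics over the regular notebook server port; a small illustrative fetch (the URL and token below are placeholders for a local server):

```python
# Illustrative only: read the Prometheus metrics exposed at /metrics (5.6.0+).
# Replace the URL and token with the values of your own local server.
from urllib.request import Request, urlopen

req = Request(
    "http://localhost:8888/metrics",
    headers={"Authorization": "token YOUR-TOKEN-HERE"},  # notebook token auth
)
with urlopen(req) as resp:
    print(resp.read().decode("utf-8")[:400])  # first few metric lines
```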
### 5.5.0[](#id85) New features: * The files list now shows file sizes ([3539](https://github.com/jupyter/notebook/pull/3539)) * Add a quit button in the dashboard ([3004](https://github.com/jupyter/notebook/pull/3004)) * Display hostname in the terminal when running remotely ([3356](https://github.com/jupyter/notebook/pull/3356), [3593](https://github.com/jupyter/notebook/pull/3593)) * Add slides exportation/download to the menu ([3287](https://github.com/jupyter/notebook/pull/3287)) * Add any extra installed nbconvert exporters to the “Download as” menu ([3323](https://github.com/jupyter/notebook/pull/3323)) * Editor: warning when overwriting a file that is modified on disk ([2783](https://github.com/jupyter/notebook/pull/2783)) * Display a warning message if cookies are not enabled ([3511](https://github.com/jupyter/notebook/pull/3511)) * Basic `__version__` reporting for extensions ([3541](https://github.com/jupyter/notebook/pull/3541)) * Add `NotebookApp.terminals_enabled` config option ([3478](https://github.com/jupyter/notebook/pull/3478)) * Make buffer time between last modified on disk and last modified on last save configurable ([3273](https://github.com/jupyter/notebook/pull/3273)) * Allow binding custom shortcuts for ‘close and halt’ ([3314](https://github.com/jupyter/notebook/pull/3314)) * Add description for ‘Trusted’ notification ([3386](https://github.com/jupyter/notebook/pull/3386)) * Add `settings['activity_sources']` ([3401](https://github.com/jupyter/notebook/pull/3401)) * Add an `output_updated.OutputArea` event ([3560](https://github.com/jupyter/notebook/pull/3560)) Fixing problems: * Fixes to improve web accessibility ([3507](https://github.com/jupyter/notebook/pull/3507)) * Fixed color contrast issue in tree.less ([3336](https://github.com/jupyter/notebook/pull/3336)) * Allow cancelling upload of large files ([3373](https://github.com/jupyter/notebook/pull/3373)) * Don’t clear login cookie on requests without cookie ([3380](https://github.com/jupyter/notebook/pull/3380)) * Don’t trash files on different device to home dir on Linux ([3304](https://github.com/jupyter/notebook/pull/3304)) * Clear waiting asterisks when restarting kernel ([3494](https://github.com/jupyter/notebook/pull/3494)) * Fix output prompt when `execution_count` missing ([3236](https://github.com/jupyter/notebook/pull/3236)) * Make the ‘changed on disk’ dialog work when displayed twice ([3589](https://github.com/jupyter/notebook/pull/3589)) * Fix going back to root directory with history in notebook list ([3411](https://github.com/jupyter/notebook/pull/3411)) * Allow defining keyboard shortcuts for missing actions ([3561](https://github.com/jupyter/notebook/pull/3561)) * Prevent default on pageup/pagedown when completer is active ([3500](https://github.com/jupyter/notebook/pull/3500)) * Prevent default event handling on new terminal ([3497](https://github.com/jupyter/notebook/pull/3497)) * ConfigManager should not write out default values found in the .d directory ([3485](https://github.com/jupyter/notebook/pull/3485)) * Fix leak of iopub object in activity monitoring ([3424](https://github.com/jupyter/notebook/pull/3424)) * Javascript lint in notebooklist.js ([3409](https://github.com/jupyter/notebook/pull/3409)) * Some Javascript syntax fixes ([3294](https://github.com/jupyter/notebook/pull/3294)) * Convert native for loop to `Array.forEach()` ([3477](https://github.com/jupyter/notebook/pull/3477)) * Disable cache when downloading nbconvert output 
([3484](https://github.com/jupyter/notebook/pull/3484)) * Add missing digestmod arg to HMAC ([3399](https://github.com/jupyter/notebook/pull/3399)) * Log OSErrors failing to create less-critical files during startup ([3384](https://github.com/jupyter/notebook/pull/3384)) * Use powershell on Windows ([3379](https://github.com/jupyter/notebook/pull/3379)) * API spec improvements, API handler improvements ([3368](https://github.com/jupyter/notebook/pull/3368)) * Set notebook to dirty state after change to kernel metadata ([3350](https://github.com/jupyter/notebook/pull/3350)) * Use CSP header to treat served files as belonging to a separate origin ([3341](https://github.com/jupyter/notebook/pull/3341)) * Don’t install gettext into builtins ([3330](https://github.com/jupyter/notebook/pull/3330)) * Add missing `import _` ([3316](https://github.com/jupyter/notebook/pull/3316), [3326](https://github.com/jupyter/notebook/pull/3326)) * Write `notebook.json` file atomically ([3305](https://github.com/jupyter/notebook/pull/3305)) * Fix clicking with modifiers, page title updates ([3282](https://github.com/jupyter/notebook/pull/3282)) * Upgrade jQuery to version 2.2 ([3428](https://github.com/jupyter/notebook/pull/3428)) * Upgrade xterm.js to 3.1.0 ([3189](https://github.com/jupyter/notebook/pull/3189)) * Upgrade moment.js to 2.19.3 ([3562](https://github.com/jupyter/notebook/pull/3562)) * Upgrade CodeMirror to 5.35 ([3372](https://github.com/jupyter/notebook/pull/3372)) * “Require” pyzmq>=17 ([3586](https://github.com/jupyter/notebook/pull/3586)) Documentation: * Documentation updates and organisation ([3584](https://github.com/jupyter/notebook/pull/3584)) * Add section in docs about privacy ([3571](https://github.com/jupyter/notebook/pull/3571)) * Add explanation on how to change the type of a cell to Markdown ([3377](https://github.com/jupyter/notebook/pull/3377)) * Update docs with confd implementation details ([3520](https://github.com/jupyter/notebook/pull/3520)) * Add more information for where `jupyter_notebook_config.py` is located ([3346](https://github.com/jupyter/notebook/pull/3346)) * Document options to enable nbextensions in specific sections ([3525](https://github.com/jupyter/notebook/pull/3525)) * jQuery attribute selector value MUST be surrounded by quotes ([3527](https://github.com/jupyter/notebook/pull/3527)) * Do not execute special notebooks with nbsphinx ([3360](https://github.com/jupyter/notebook/pull/3360)) * Other minor fixes in [3288](https://github.com/jupyter/notebook/pull/3288), [3528](https://github.com/jupyter/notebook/pull/3528), [3293](https://github.com/jupyter/notebook/pull/3293), [3367](https://github.com/jupyter/notebook/pull/3367) Testing: * Testing with Selenium & Sauce labs ([3321](https://github.com/jupyter/notebook/pull/3321)) * Selenium utils + markdown rendering tests ([3458](https://github.com/jupyter/notebook/pull/3458)) * Convert insert cell tests to Selenium ([3508](https://github.com/jupyter/notebook/pull/3508)) * Convert prompt numbers tests to Selenium ([3554](https://github.com/jupyter/notebook/pull/3554)) * Convert delete cells tests to Selenium ([3465](https://github.com/jupyter/notebook/pull/3465)) * Convert undelete cell tests to Selenium ([3475](https://github.com/jupyter/notebook/pull/3475)) * More selenium testing utilities ([3412](https://github.com/jupyter/notebook/pull/3412)) * Only check links when build is trigger by Travis Cron job ([3493](https://github.com/jupyter/notebook/pull/3493)) * Fix Appveyor build errors 
([3430](https://github.com/jupyter/notebook/pull/3430)) * Undo patches in teardown before attempting to delete files ([3459](https://github.com/jupyter/notebook/pull/3459)) * Get tests running with tornado 5 ([3398](https://github.com/jupyter/notebook/pull/3398)) * Unpin ipykernel version on Travis ([3223](https://github.com/jupyter/notebook/pull/3223)) Thanks to the following contributors: * <NAME> ([arovit](https://github.com/arovit)) * <NAME> ([ashleytqy](https://github.com/ashleytqy)) * <NAME> ([bollwyvl](https://github.com/bollwyvl)) * <NAME> ([cancan101](https://github.com/cancan101)) * <NAME> ([ckilcrease](https://github.com/ckilcrease)) * dabuside ([dabuside](https://github.com/dabuside)) * <NAME> ([damianavila](https://github.com/damianavila)) * <NAME> ([danagilliann](https://github.com/danagilliann)) * <NAME> ([dhirschfeld](https://github.com/dhirschfeld)) * <NAME> ([ehengao](https://github.com/ehengao)) * <NAME> ([elgalu](https://github.com/elgalu)) * <NAME> ([evandam](https://github.com/evandam)) * forbxy ([forbxy](https://github.com/forbxy)) * <NAME> ([gnestor](https://github.com/gnestor)) * <NAME> ([hendrixet](https://github.com/hendrixet)) * <NAME> ([hroncok](https://github.com/hroncok)) * <NAME> ([ivanov](https://github.com/ivanov)) * <NAME> ([kant](https://github.com/kant)) * <NAME> ([kevin-bates](https://github.com/kevin-bates)) * <NAME> ([maartenbreddels](https://github.com/maartenbreddels)) * <NAME> ([mdboom](https://github.com/mdboom)) * Min RK ([minrk](https://github.com/minrk)) * <NAME> ([mpacer](https://github.com/mpacer)) * <NAME> ([parente](https://github.com/parente)) * <NAME> ([paulmasson](https://github.com/paulmasson)) * <NAME> ([philippjfr](https://github.com/philippjfr)) * <NAME> ([Shels1909](https://github.com/Shels1909)) * <NAME> ([Sheshtawy](https://github.com/Sheshtawy)) * <NAME> ([SimonBiggs](https://github.com/SimonBiggs)) * <NAME> (`@sunilhari`) * <NAME> ([takluyver](https://github.com/takluyver)) * <NAME> ([tklever](https://github.com/tklever)) * <NAME> ([unnamedplay-r](https://github.com/unnamedplay-r)) * <NAME> ([vaibhavsagar](https://github.com/vaibhavsagar)) * <NAME> ([whosford](https://github.com/whosford)) * Hong ([xuhdev](https://github.com/xuhdev)) See the 5.5 milestone on GitHub for a complete list of [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A5.5) involved in this release. ### 5.4.1[](#id86) A security release to fix [CVE-2018-8768](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8768). Thanks to [Alex](https://hackerone.com/pisarenko) for identifying this bug, and <NAME> and <NAME> at Quantopian for verifying it and bringing it to our attention. ### 5.4.0[](#id87) * Fix creating files and folders after navigating directories in the dashboard ([3264](https://github.com/jupyter/notebook/pull/3264)). * Enable printing notebooks in colour, removing the CSS that made everything black and white ([3212](https://github.com/jupyter/notebook/pull/3212)). * Limit the completion options displayed in the notebook to 1000, to avoid performance issues with very long lists ([3195](https://github.com/jupyter/notebook/pull/3195)). * Accessibility improvements in `tree.html` ([3271](https://github.com/jupyter/notebook/pull/3271)). * Added alt-text to the kernel logo image in the notebook UI ([3228](https://github.com/jupyter/notebook/pull/3228)). * Added a test on Travis CI to flag if symlinks are accidentally introduced in the future. 
This should prevent the issue that necessitated the 5.3.1 release ([3227](https://github.com/jupyter/notebook/pull/3227)).
* Use lowercase letters for random IDs generated in our Javascript ([3264](https://github.com/jupyter/notebook/pull/3264)).
* Removed duplicate code setting `TextCell.notebook` ([3256](https://github.com/jupyter/notebook/pull/3256)).

Thanks to the following contributors:

* <NAME> ([asoderman](https://github.com/asoderman))
* <NAME> ([Carreau](https://github.com/Carreau))
* <NAME> ([minrk](https://github.com/minrk))
* <NAME> ([ns23](https://github.com/ns23))
* <NAME> ([takluyver](https://github.com/takluyver))
* <NAME> ([yuvipanda](https://github.com/yuvipanda))

See the 5.4 milestone on GitHub for a complete list of [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A5.4) involved in this release.

### 5.3.1[](#id88)

Replaced a symlink in the repository with a copy, to fix issues installing on Windows ([3220](https://github.com/jupyter/notebook/pull/3220)).

### 5.3.0[](#id89)

This release introduces a couple of notable improvements, such as terminal support for Windows and support for the OS trash (files deleted from the notebook dashboard are moved to the OS trash rather than deleted permanently).

* Add support for terminals on Windows ([3087](https://github.com/jupyter/notebook/pull/3087)).
* Add a “restart and run all” button to the toolbar ([2965](https://github.com/jupyter/notebook/pull/2965)).
* Send files to the OS trash mechanism on delete ([1968](https://github.com/jupyter/notebook/pull/1968)).
* Allow programmatic copy to clipboard ([3088](https://github.com/jupyter/notebook/pull/3088)).
* Use DOM History API for navigating between directories in the file browser ([3115](https://github.com/jupyter/notebook/pull/3115)).
* Add translated files to folder (docs-translations) ([3065](https://github.com/jupyter/notebook/pull/3065)).
* Allow non-empty dirs to be deleted ([3108](https://github.com/jupyter/notebook/pull/3108)).
* Set cookie on base_url ([2959](https://github.com/jupyter/notebook/pull/2959)).
* Allow token-authenticated requests cross-origin by default ([2920](https://github.com/jupyter/notebook/pull/2920)).
* Change cull_idle_timeout_minimum to 1 from 300 ([2910](https://github.com/jupyter/notebook/pull/2910)).
* Config option to shut down the server after n seconds with no kernels ([2963](https://github.com/jupyter/notebook/pull/2963)).
* Display a “close” button on load notebook error ([3176](https://github.com/jupyter/notebook/pull/3176)).
* Add action to the command palette to run CodeMirror’s “indentAuto” on selection ([3175](https://github.com/jupyter/notebook/pull/3175)).
* Add option to specify extra services ([3158](https://github.com/jupyter/notebook/pull/3158)).
* Warn_bad_name should not use global name ([3160](https://github.com/jupyter/notebook/pull/3160)).
* Avoid overflow of hidden form ([3148](https://github.com/jupyter/notebook/pull/3148)).
* Fix shutdown trans loss ([3147](https://github.com/jupyter/notebook/pull/3147)).
* Find available kernelspecs more efficiently ([3136](https://github.com/jupyter/notebook/pull/3136)).
* Don’t try to translate missing help strings ([3122](https://github.com/jupyter/notebook/pull/3122)).
* Frontend/extension-config: allow default JSON files in a `.d` directory ([3116](https://github.com/jupyter/notebook/pull/3116)).
* Use `requirejs` vs. `require` ([3097](https://github.com/jupyter/notebook/pull/3097)).
* Fixes some ui bugs in firefox #3044 ([3058](https://github.com/jupyter/notebook/pull/3058)). * Compare non-specific language code when choosing to use arabic numerals ([3055](https://github.com/jupyter/notebook/pull/3055)). * Fix save-script deprecation ([3053](https://github.com/jupyter/notebook/pull/3053)). * Include moment locales in package_data ([3051](https://github.com/jupyter/notebook/pull/3051)). * Fix moment locale loading in bidi support ([3048](https://github.com/jupyter/notebook/pull/3048)). * Tornado 5: periodiccallback loop arg will be removed ([3034](https://github.com/jupyter/notebook/pull/3034)). * Use [/files]{.title-ref} prefix for pdf-like files ([3031](https://github.com/jupyter/notebook/pull/3031)). * Add folder for document translation ([3022](https://github.com/jupyter/notebook/pull/3022)). * When login-in via token, let a chance for user to set the password ([3008](https://github.com/jupyter/notebook/pull/3008)). * Switch to jupyter_core implementation of ensure_dir_exists ([3002](https://github.com/jupyter/notebook/pull/3002)). * Send http shutdown request on ‘stop’ subcommand ([3000](https://github.com/jupyter/notebook/pull/3000)). * Work on loading ui translations ([2969](https://github.com/jupyter/notebook/pull/2969)). * Fix ansi inverse ([2967](https://github.com/jupyter/notebook/pull/2967)). * Add send2trash to requirements for building docs ([2964](https://github.com/jupyter/notebook/pull/2964)). * I18n readme.md improvement ([2962](https://github.com/jupyter/notebook/pull/2962)). * Add ‘reason’ field to json error responses ([2958](https://github.com/jupyter/notebook/pull/2958)). * Add some padding for stream outputs ([3194](https://github.com/jupyter/notebook/pull/3194)). * Always use setuptools in `setup.py` ([3206](https://github.com/jupyter/notebook/pull/3206)). * Fix clearing cookies on logout when `base_url` is configured ([3207](https://github.com/jupyter/notebook/pull/3207)). Thanks to the following contributors: * bacboc ([bacboc](https://github.com/bacboc)) * <NAME> ([blink1073](https://github.com/blink1073)) * <NAME> ([Carreau](https://github.com/Carreau)) * ChungJooHo ([ChungJooHo](https://github.com/ChungJooHo)) * edida ([edida](https://github.com/edida)) * <NAME> (`ferdas`) * forbxy ([forbxy](https://github.com/forbxy)) * <NAME> ([gnestor](https://github.com/gnestor)) * <NAME> ([jcb91](https://github.com/jcb91)) * JocelynDelalande ([JocelynDelalande](https://github.com/JocelynDelalande)) * <NAME> ([karthikb351](https://github.com/karthikb351)) * <NAME> ([kevin-bates](https://github.com/kevin-bates)) * <NAME> ([kirit93](https://github.com/kirit93)) * <NAME> ([Naereen](https://github.com/Naereen)) * <NAME> ([maartenbreddels](https://github.com/maartenbreddels)) * Madhu94 ([Madhu94](https://github.com/Madhu94)) * <NAME> ([mgeier](https://github.com/mgeier)) * <NAME> ([mheilman](https://github.com/mheilman)) * Min RK ([minrk](https://github.com/minrk)) * PHaeJin ([PHaeJin](https://github.com/PHaeJin)) * Sukneet ([Sukneet](https://github.com/Sukneet)) * <NAME> ([takluyver](https://github.com/takluyver)) See the 5.3 milestone on GitHub for a complete list of [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A5.3) involved in this release. ### 5.2.1[](#id90) * Fix invisible CodeMirror cursor at specific browser zoom levels ([2983](https://github.com/jupyter/notebook/pull/2983)). * Fix nbconvert handler causing broken export to PDF ([2981](https://github.com/jupyter/notebook/pull/2981)). 
* Fix the prompt_area argument of the output area constructor. ([2961](https://github.com/jupyter/notebook/pull/2961)). * Handle a compound extension in new_untitled ([2949](https://github.com/jupyter/notebook/pull/2949)). * Allow disabling offline message buffering ([2916](https://github.com/jupyter/notebook/pull/2916)). Thanks to the following contributors: * <NAME> ([blink1073](https://github.com/blink1073)) * <NAME> ([gnestor](https://github.com/gnestor)) * <NAME> ([jasongrout](https://github.com/jasongrout)) * <NAME> ([minrk](https://github.com/minrk)) * <NAME> ([mpacer](https://github.com/mpacer)) See the 5.2.1 milestone on GitHub for a complete list of [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A5.2.1) involved in this release. ### 5.2.0[](#id91) * Allow setting token via jupyter_token env ([2921](https://github.com/jupyter/notebook/pull/2921)). * Fix some errors caused by raising 403 in get_current_user ([2919](https://github.com/jupyter/notebook/pull/2919)). * Register contents_manager.files_handler_class directly ([2917](https://github.com/jupyter/notebook/pull/2917)). * Update viewable_extensions ([2913](https://github.com/jupyter/notebook/pull/2913)). * Show edit shortcuts modal after shortcuts modal is hidden ([2912](https://github.com/jupyter/notebook/pull/2912)). * Improve edit/view behavior ([2911](https://github.com/jupyter/notebook/pull/2911)). * The root directory of the notebook server should never be hidden ([2907](https://github.com/jupyter/notebook/pull/2907)). * Fix notebook require config to match tools/build-main ([2888](https://github.com/jupyter/notebook/pull/2888)). * Give page constructor default arguments ([2887](https://github.com/jupyter/notebook/pull/2887)). * Fix codemirror.less to match codemirror’s expected padding layout ([2880](https://github.com/jupyter/notebook/pull/2880)). * Add x-xsrftoken to access-control-allow-headers ([2876](https://github.com/jupyter/notebook/pull/2876)). * Buffer messages when websocket connection is interrupted ([2871](https://github.com/jupyter/notebook/pull/2871)). * Load locale dynamically only when not en-us ([2866](https://github.com/jupyter/notebook/pull/2866)). * Changed key strength to 2048 bits ([2861](https://github.com/jupyter/notebook/pull/2861)). * Resync jsversion with python version ([2860](https://github.com/jupyter/notebook/pull/2860)). * Allow copy operation on modified, read-only notebook ([2854](https://github.com/jupyter/notebook/pull/2854)). * Update error handling on apihandlers ([2853](https://github.com/jupyter/notebook/pull/2853)). * Test python 3.6 on travis, drop 3.3 ([2852](https://github.com/jupyter/notebook/pull/2852)). * Avoid base64-literals in image tests ([2851](https://github.com/jupyter/notebook/pull/2851)). * Upgrade xterm.js to 2.9.2 ([2849](https://github.com/jupyter/notebook/pull/2849)). * Changed all python variables named file to file_name to not override built_in file ([2830](https://github.com/jupyter/notebook/pull/2830)). * Add more doc tests ([2823](https://github.com/jupyter/notebook/pull/2823)). * Typos fix ([2815](https://github.com/jupyter/notebook/pull/2815)). * Rename and update license [ci skip] ([2810](https://github.com/jupyter/notebook/pull/2810)). * Travis builds doc ([2808](https://github.com/jupyter/notebook/pull/2808)). * Pull request i18n ([2804](https://github.com/jupyter/notebook/pull/2804)). 
* Factor out output_prompt_function, as is done with input prompt ([2774](https://github.com/jupyter/notebook/pull/2774)). * Use rfc5987 encoding for filenames ([2767](https://github.com/jupyter/notebook/pull/2767)). * Added path to the resources metadata, the same as in from_filename(…) in nbconvert.exporters.py ([2753](https://github.com/jupyter/notebook/pull/2753)). * Make “extrakeys” consistent for notebook and editor ([2745](https://github.com/jupyter/notebook/pull/2745)). * Bidi support ([2357](https://github.com/jupyter/notebook/pull/2357)). Special thanks to [samarsultan](https://github.com/samarsultan) and the Arabic Competence and Globalization Center Team at IBM Egypt for adding RTL (right-to-left) support to the notebook! See the 5.2 milestone on GitHub for a complete list of [issues](https://github.com/jupyter/notebook/issues?utf8=%E2%9C%93&q=is%3Aissue%20milestone%3A5.2) and [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A5.2) involved in this release. ### 5.1.0[](#id92) * Preliminary i18n implementation ([2140](https://github.com/jupyter/notebook/pull/2140)). * Expose URL with auth token in notebook UI ([2666](https://github.com/jupyter/notebook/pull/2666)). * Fix search background style ([2387](https://github.com/jupyter/notebook/pull/2387)). * List running notebooks without requiring `--allow-root` ([2421](https://github.com/jupyter/notebook/pull/2421)). * Allow session of type other than notebook ([2559](https://github.com/jupyter/notebook/pull/2559)). * Fix search background style ([2387](https://github.com/jupyter/notebook/pull/2387)). * Fix some Markdown styling issues ([2571](https://github.com/jupyter/notebook/pull/2571)), ([2691](https://github.com/jupyter/notebook/pull/2691)) and ([2534](https://github.com/jupyter/notebook/pull/2534)). * Remove keymaps that conflict with non-English keyboards ([2535](https://github.com/jupyter/notebook/pull/2535)). * Add session-specific favicons (notebook, terminal, file) ([2452](https://github.com/jupyter/notebook/pull/2452)). * Add /api/shutdown handler ([2507](https://github.com/jupyter/notebook/pull/2507)). * Include metadata when copying a cell ([2349](https://github.com/jupyter/notebook/pull/2349)). * Stop notebook server from command line ([2388](https://github.com/jupyter/notebook/pull/2388)). * Improve “View” and “Edit” file handling in dashboard ([2449](https://github.com/jupyter/notebook/pull/2449)) and ([2402](https://github.com/jupyter/notebook/pull/2402)). * Provide a promise to replace use of the `app_initialized.NotebookApp` event ([2710](https://github.com/jupyter/notebook/pull/2710)). * Fix disabled collapse/expand output button ([2681](https://github.com/jupyter/notebook/pull/2681)). * Cull idle kernels using `--MappingKernelManager.cull_idle_timeout` ([2215](https://github.com/jupyter/notebook/pull/2215)). * Allow read-only notebooks to be trusted ([2718](https://github.com/jupyter/notebook/pull/2718)). See the 5.1 milestone on GitHub for a complete list of [issues](https://github.com/jupyter/notebook/issues?utf8=%E2%9C%93&q=is%3Aissue%20milestone%3A5.1) and [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A5.1) involved in this release. ### 5.0.0[](#id93) This is the first major release of the Jupyter Notebook since version 4.0 was created by the “Big Split” of IPython and Jupyter. We encourage users to start trying JupyterLab in preparation for a future transition. 
We have merged more than 300 pull requests since 4.0. Some of the major user-facing changes are described here.

#### File sorting in the dashboard[](#file-sorting-in-the-dashboard)

Files in the dashboard may now be sorted by last modified date or name ([943](https://github.com/jupyter/notebook/pull/943)):

#### Cell tags[](#cell-tags)

There is a new cell toolbar for adding *cell tags* ([2048](https://github.com/jupyter/notebook/pull/2048)):

Cell tags are a lightweight way to customise the behaviour of tools working with notebooks; we’re working on building support for them into tools like [nbconvert](https://nbconvert.readthedocs.io/en/latest/) and [nbval](https://github.com/computationalmodelling/nbval). To start using tags, select `Tags` in the `View > Cell Toolbar` menu in a notebook.

The UI for editing cell tags is basic for now; we hope to improve it in future releases.

#### Table style[](#table-style)

The default styling for tables in the notebook has been updated ([1776](https://github.com/jupyter/notebook/pull/1776)).

Before:

After:

#### Customise keyboard shortcuts[](#customise-keyboard-shortcuts)

You can now edit keyboard shortcuts for *Command Mode* within the UI ([1347](https://github.com/jupyter/notebook/pull/1347)):

See the `Help > Edit Keyboard Shortcuts` menu item and follow the instructions.

#### Other additions[](#other-additions)

* You can copy and paste cells between notebooks, using `Ctrl-C` and `Ctrl-V` (`Cmd-C` and `Cmd-V` on Mac).
* It’s easier to configure a password for the notebook with the new `jupyter notebook password` command ([2007](https://github.com/jupyter/notebook/pull/2007)).
* The file list can now be ordered by *last modified* or by *name* ([943](https://github.com/jupyter/notebook/pull/943)).
* Markdown cells now support attachments. Simply drag and drop an image from your desktop to a markdown cell to add it. Unlike relative links that you enter manually, attachments are embedded in the notebook itself. An unreferenced attachment will be automatically scrubbed from the notebook on save ([621](https://github.com/jupyter/notebook/pull/621)).
* Undoing cell deletion now supports undeleting multiple cells. Cells may not be in the same order as before their deletion, depending on the actions you took in the meantime, but this should help reduce the impact of accidentally deleting code.
* The file browser now has *Edit* and *View* buttons.
* The file browser now supports moving multiple files at once ([1088](https://github.com/jupyter/notebook/pull/1088)).
* The Notebook will refuse to run as root unless the `--allow-root` flag is given ([1115](https://github.com/jupyter/notebook/pull/1115)).
* Keyboard shortcuts are now declarative ([1234](https://github.com/jupyter/notebook/pull/1234)).
* Toggling line numbers can now affect all cells ([1312](https://github.com/jupyter/notebook/pull/1312)).
* Add more visible *Trusted* and *Untrusted* notifications ([1658](https://github.com/jupyter/notebook/pull/1658)).
* The favicon (browser shortcut icon) now changes to indicate when the kernel is busy ([1837](https://github.com/jupyter/notebook/pull/1837)).
* Header and toolbar visibility is now persisted in nbconfig and across sessions ([1769](https://github.com/jupyter/notebook/pull/1769)).
* Load server extensions with ConfigManager so that merge happens recursively, unlike normal config values, to make it load more consistently with frontend extensions([2108](https://github.com/jupyter/notebook/pull/2108)). * The notebook server now supports the [bundler API](https://jupyter-notebook.readthedocs.io/en/stable/extending/bundler_extensions.html) from the [jupyter_cms incubator project](https://github.com/jupyter-incubator/contentmanagement) ([1579](https://github.com/jupyter/notebook/pull/1579)). * The notebook server now provides information about kernel activity in its kernel resource API ([1827](https://github.com/jupyter/notebook/pull/1827)). Remember that upgrading `notebook` only affects the user interface. Upgrading kernels and libraries may also provide new features, better stability and integration with the notebook interface. ### 4.4.0[](#id94) * Allow override of output callbacks to redirect output messages. This is used to implement the ipywidgets Output widget, for example. * Fix an async bug in message handling by allowing comm message handlers to return a promise which halts message processing until the promise resolves. See the 4.4 milestone on GitHub for a complete list of [issues](https://github.com/jupyter/notebook/issues?utf8=%E2%9C%93&q=is%3Aissue%20milestone%3A4.4) and [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A4.4) involved in this release. ### 4.3.2[](#id95) 4.3.2 is a patch release with a bug fix for CodeMirror and improved handling of the “editable” cell metadata field. * Monkey-patch for CodeMirror that resolves [#2037](https://github.com/jupyter/notebook/issues/2037) without breaking [#1967](https://github.com/jupyter/notebook/issues/1967) * Read-only (`"editable": false`) cells can be executed but cannot be split, merged, or deleted See the 4.3.2 milestone on GitHub for a complete list of [issues](https://github.com/jupyter/notebook/issues?utf8=%E2%9C%93&q=is%3Aissue%20milestone%3A4.3.2) and [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A4.3.2) involved in this release. ### 4.3.1[](#id96) 4.3.1 is a patch release with a security patch, a couple bug fixes, and improvements to the newly-released token authentication. **Security fix**: * CVE-2016-9971. Fix CSRF vulnerability, where malicious forms could create untitled files and start kernels (no remote execution or modification of existing files) for users of certain browsers (Firefox, Internet Explorer / Edge). All previous notebook releases are affected. Bug fixes: * Fix carriage return handling * Make the font size more robust against fickle browsers * Ignore resize events that bubbled up and didn’t come from window * Add Authorization to allowed CORS headers * Downgrade CodeMirror to 5.16 while we figure out issues in Safari Other improvements: * Better docs for token-based authentication * Further highlight token info in log output when autogenerated See the 4.3.1 milestone on GitHub for a complete list of [issues](https://github.com/jupyter/notebook/issues?utf8=%E2%9C%93&q=is%3Aissue%20milestone%3A4.3.1) and [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A4.3.1) involved in this release. ### 4.3.0[](#id97) 4.3 is a minor release with many bug fixes and improvements. The biggest user-facing change is the addition of token authentication, which is enabled by default. 
A token is generated and used when your browser is opened automatically, so you shouldn’t have to enter anything in the default circumstances. If you see a login page (e.g. by switching browsers, or launching on a new port with `--no-browser`), you get a login URL with the token from the command `jupyter notebook list`, which you can paste into your browser. Highlights: * API for creating mime-type based renderer extensions using `OutputArea.register_mime_type` and `Notebook.render_cell_output` methods. See [mimerender-cookiecutter](https://github.com/jupyterlab/mimerender-cookiecutter) for reference implementations and cookiecutter. * Enable token authentication by default. See `server_security`{.interpreted-text role=”ref”} for more details. * Update security docs to reflect new signature system * Switched from term.js to xterm.js Bug fixes: * Ensure variable is set if exc_info is falsey * Catch and log handler exceptions in `events.trigger` * Add debug log for static file paths * Don’t check origin on token-authenticated requests * Remove leftover print statement * Fix highlighting of Python code blocks * `json_errors` should be outermost decorator on API handlers * Fix remove old nbserver info files * Fix notebook mime type on download links * Fix carriage symbol behavior * Fix terminal styles * Update dead links in docs * If kernel is broken, start a new session * Include cross-origin check when allowing login URL redirects Other improvements: * Allow JSON output data with mime type `application/*+json` * Allow kernelspecs to have spaces in them for backward compat * Allow websocket connections from scripts * Allow `None` for post_save_hook * Upgrade CodeMirror to 5.21 * Upgrade xterm to 2.1.0 * Docs for using comms * Set `dirty` flag when output arrives * Set `ws-url` data attribute when accessing a notebook terminal * Add base aliases for nbextensions * Include `@` operator in CodeMirror IPython mode * Extend mathjax_url docstring * Load nbextension in predictable order * Improve the error messages for nbextensions * Include cross-origin check when allowing login URL redirects See the 4.3 milestone on GitHub for a complete list of [issues](https://github.com/jupyter/notebook/issues?utf8=%E2%9C%93&q=is%3Aissue%20milestone%3A4.3%20) and [pull requests](https://github.com/jupyter/notebook/pulls?utf8=%E2%9C%93&q=is%3Apr%20milestone%3A4.3%20) involved in this release. ### 4.2.3[](#id98) 4.2.3 is a small bugfix release on 4.2. > Highlights: * Fix regression in 4.2.2 that delayed loading custom.js until after `notebook_loaded` and `app_initialized` events have fired. * Fix some outdated docs and links. ### 4.2.2[](#id99) 4.2.2 is a small bugfix release on 4.2, with an important security fix. All users are strongly encouraged to upgrade to 4.2.2. > Highlights: * **Security fix**: CVE-2016-6524, where untrusted latex output could be added to the page in a way that could execute javascript. * Fix missing POST in OPTIONS responses. * Fix for downloading non-ascii filenames. * Avoid clobbering ssl_options, so that users can specify more detailed SSL configuration. * Fix inverted load order in nbconfig, so user config has highest priority. * Improved error messages here and there. ### 4.2.1[](#id100) 4.2.1 is a small bugfix release on 4.2. Highlights: * Compatibility fixes for some versions of ipywidgets * Fix for ignored CSS on Windows * Fix specifying destination when installing nbextensions ### 4.2.0[](#id101) Release 4.2 adds a new API for enabling and installing extensions. 
Extensions can now be enabled at the system-level, rather than just per-user. An API is defined for installing directly from a Python package, as well. Highlighted changes: * Upgrade MathJax to 2.6 to fix vertical-bar appearing on some equations. * Restore ability for notebook directory to be root (4.1 regression) * Large outputs are now throttled, reducing the ability of output floods to kill the browser. * Fix the notebook ignoring cell executions while a kernel is starting by queueing the messages. * Fix handling of url prefixes (e.g. JupyterHub) in terminal and edit pages. * Support nested SVGs in output. And various other fixes and improvements. ### 4.1.0[](#id102) Bug fixes: * Properly reap zombie subprocesses * Fix cross-origin problems * Fix double-escaping of the base URL prefix * Handle invalid unicode filenames more gracefully * Fix ANSI color-processing * Send keepalive messages for web terminals * Fix bugs in the notebook tour UI changes: * Moved the cell toolbar selector into the *View* menu. Added a button that triggers a “hint” animation to the main toolbar so users can find the new location. (Click here to see a [screencast](https://cloud.githubusercontent.com/assets/335567/10711889/59665a5a-7a3e-11e5-970f-86b89592880c.gif) ) > * Added *Restart & Run All* to the *Kernel* menu. Users can also bind it to a keyboard shortcut on action `restart-kernel-and-run-all-cells`. * Added multiple-cell selection. Users press `Shift-Up/Down` or `Shift-K/J` to extend selection in command mode. Various actions such as cut/copy/paste, execute, and cell type conversions apply to all selected cells. * Added a command palette for executing Jupyter actions by name. Users press `Cmd/Ctrl-Shift-P` or click the new command palette icon on the toolbar. * Added a *Find and Replace* dialog to the *Edit* menu. Users can also press `F` in command mode to show the dialog. Other improvements: * Custom KernelManager methods can be Tornado coroutines, allowing async operations. * Make clearing output optional when rewriting input with `set_next_input(replace=True)`. * Added support for TLS client authentication via `--NotebookApp.client-ca`. * Added tags to `jupyter/notebook` releases on DockerHub. `latest` continues to track the master branch. See the 4.1 milestone on GitHub for a complete list of [issues](https://github.com/jupyter/notebook/issues?page=3&q=milestone%3A4.1+is%3Aclosed+is%3Aissue&utf8=%E2%9C%93) and [pull requests](https://github.com/jupyter/notebook/pulls?q=milestone%3A4.1+is%3Aclosed+is%3Apr) handled. ### 4.0.x[](#x) #### 4.0.6[](#id103) * fix installation of mathjax support files * fix some double-escape regressions in 4.0.5 * fix a couple of cases where errors could prevent opening a notebook #### 4.0.5[](#id104) Security fixes for maliciously crafted files. * [CVE-2015-6938](http://www.openwall.com/lists/oss-security/2015/09/02/3): malicious filenames * [CVE-2015-7337](http://www.openwall.com/lists/oss-security/2015/09/16/3): malicious binary files in text editor. Thanks to <NAME> at Quantopian and <NAME> for the reports. #### 4.0.4[](#id105) * Fix inclusion of mathjax-safe extension #### 4.0.2[](#id106) * Fix launching the notebook on Windows * Fix the path searched for frontend config #### 4.0.0[](#id107) First release of the notebook as a standalone package. Comms[](#comms) --- *Comms* allow custom messages between the frontend and the kernel. They are used, for instance, in [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/) to update widget state. 
A comm consists of a pair of objects, in the kernel and the frontend, with an automatically assigned unique ID. When one side sends a message, a callback on the other side is triggered with that message data. Either side, the frontend or kernel, can open or close the comm.

See also [Custom Messages](https://jupyter-client.readthedocs.io/en/latest/messaging.html#custom-messages), the messaging specification section on comms.

### Opening a comm from the kernel[](#opening-a-comm-from-the-kernel)

First, the function to accept the comm must be available on the frontend. This can either be specified in a requirejs module, or registered in a registry, for example when an [extension](index.html#document-extending/frontend_extensions) is loaded. This example shows a frontend comm target registered in a registry:

```
Jupyter.notebook.kernel.comm_manager.register_target('my_comm_target',
    function(comm, msg) {
        // comm is the frontend comm instance
        // msg is the comm_open message, which can carry data

        // Register handlers for later messages:
        comm.on_msg(function(msg) {...});
        comm.on_close(function(msg) {...});
        comm.send({'foo': 0});
    });
```

Now that the frontend comm is registered, you can open the comm from the kernel:

```
from ipykernel.comm import Comm

# Use comm to send a message from the kernel
my_comm = Comm(target_name='my_comm_target', data={'foo': 1})
my_comm.send({'foo': 2})

# Add a callback for received messages.
@my_comm.on_msg
def _recv(msg):
    # Use msg['content']['data'] for the data in the message
    data = msg['content']['data']
```

This example uses the IPython kernel; it’s up to each language kernel what API, if any, it offers for using comms.

### Opening a comm from the frontend[](#opening-a-comm-from-the-frontend)

This is very similar to the above, but in reverse. First, a comm target must be registered in the kernel. For instance, this may be done by code displaying output: it will register a target in the kernel, and then display output containing Javascript to connect to it.

```
def target_func(comm, open_msg):
    # comm is the kernel Comm instance
    # open_msg is the comm_open message

    # Register handler for later messages
    @comm.on_msg
    def _recv(msg):
        # Use msg['content']['data'] for the data in the message
        comm.send({'echo': msg['content']['data']})

    # Send data to the frontend on creation
    comm.send({'foo': 5})

get_ipython().kernel.comm_manager.register_target('my_comm_target', target_func)
```

This example again uses the IPython kernel; the details will differ in other kernels that support comms. Refer to the specific language kernel’s documentation for comms support.

And then open the comm from the frontend:

```
const comm = Jupyter.notebook.kernel.comm_manager.new_comm('my_comm_target', {'foo': 6})
// Send data
comm.send({'foo': 7})

// Register a handler
comm.on_msg(function(msg) { console.log(msg.content.data.foo); });
```

Configuration Overview[](#configuration-overview)
---

Beyond the default configuration settings, you can configure a rich array of options to suit your workflow. Here are areas that are commonly configured when using Jupyter Notebook:

> * [Jupyter’s common configuration system](#configure-common)
> * [Notebook server](#configure-nbserver)
> * [Notebook front-end client](#configure-nbclient)
> * [Notebook extensions](#configure-nbextensions)

Let’s look at highlights of each area.

### Jupyter’s Common Configuration system[](#jupyter-s-common-configuration-system)

Jupyter applications, from the Notebook to JupyterHub to nbgrader, share a common configuration system.
The process for creating a configuration file and editing settings is similar for all the Jupyter applications.

> * [Jupyter’s Common Configuration Approach](https://jupyter.readthedocs.io/en/latest/use/config.html)
> * [Common Directories and File Locations](https://jupyter.readthedocs.io/en/latest/use/jupyter-directories.html)
> * [Language kernels](https://jupyter.readthedocs.io/en/latest/projects/kernels.html)
> * [traitlets](https://traitlets.readthedocs.io/en/latest/config.html#module-traitlets.config) provide a low-level architecture for configuration.

### Notebook server[](#notebook-server)

The Notebook server runs the language kernel and communicates with the front-end Notebook client (i.e. the familiar notebook interface).

> * Configuring the Notebook server
>
>   To create a `jupyter_notebook_config.py` file in the `.jupyter` directory, with all the defaults commented out, use the following command:
>
>   ```
>   $ jupyter notebook --generate-config
>   ```
>
>   Command line arguments for configuration settings are documented in the configuration file and the user documentation.
>
> * [Running a Notebook server](index.html#working-remotely)
> * Related: [Configuring a language kernel](https://ipython.readthedocs.io/en/latest/install/kernel_install.html) to run in the Notebook server enables your server to run other languages, like R or Julia.

### Notebook front-end client[](#notebook-front-end-client)

#### Configuring the notebook frontend[](#configuring-the-notebook-frontend)

Note: The ability to configure the notebook frontend UI and preferences is still a work in progress. This document is a rough explanation of how you can persist some configuration options for the notebook JavaScript. There is no exhaustive list of all the configuration options, as most options are passed down to other libraries, which means that invalid configuration can be ignored without any error messages.

##### How front-end configuration works[](#how-front-end-configuration-works)

The frontend configuration system works as follows:

> * get a handle on a configurable JavaScript object.
> * access its configuration attribute.
> * update its configuration attribute with a JSON patch.

##### Example - Changing the notebook’s default indentation[](#example-changing-the-notebook-s-default-indentation)

This example explains how to change the default setting `indentUnit` for CodeMirror Code Cells:

```
var cell = Jupyter.notebook.get_selected_cell();
var config = cell.config;
var patch = {
      CodeCell: {
        cm_config: {indentUnit: 2}
      }
    };
config.update(patch);
```

You can enter the previous snippet in your browser’s JavaScript console once. Then reload the notebook page in your browser. Now, the preferred indent unit should be equal to two spaces. The custom setting persists and you do not need to reissue the patch on new notebooks.

`indentUnit`, used in this example, is one of the many [CodeMirror options](https://codemirror.net/doc/manual.html#option_indentUnit) which are available for configuration.
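The same patch can also be written from the server side. The snippet below is an illustrative sketch rather than part of the official frontend-configuration instructions; it assumes the `notebook.services.config.ConfigManager` helper (the class the server itself uses, see `NotebookApp.config_manager_class` in the options reference below) and writes into `~/.jupyter/nbconfig/notebook.json`, the same file the browser snippet above ends up persisting to:

```
# Minimal sketch, assuming notebook.services.config.ConfigManager is importable
# in the environment that runs the notebook server.
from notebook.services.config import ConfigManager

cm = ConfigManager()  # defaults to ~/.jupyter/nbconfig

# Merge the indentUnit preference into the "notebook" section;
# the frontend picks it up on the next page load.
cm.update('notebook', {'CodeCell': {'cm_config': {'indentUnit': 2}}})

# Inspect what is currently persisted for that section.
print(cm.get('notebook'))
```

Either route results in the same JSON patch on disk, so the two approaches can be mixed freely.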
You can similarly change the options of the file editor by entering the following snippet in the browser’s JavaScript console once (from a file editing page):

```
var config = Jupyter.editor.config;
var patch = {
      Editor: {
        codemirror_options: {
          indentUnit: 2
        }
      }
    };
config.update(patch);
```

##### Example - Restoring the notebook’s default indentation[](#example-restoring-the-notebook-s-default-indentation)

If you want to restore a notebook frontend preference to its default value, enter a JSON patch with a `null` value for the preference setting. For example, let’s restore the indent setting `indentUnit` to its default of four spaces. Enter the following code snippet in your JavaScript console:

```
var cell = Jupyter.notebook.get_selected_cell();
var config = cell.config;
var patch = {
      CodeCell: {
        cm_config: {indentUnit: null}  // only change here.
      }
    };
config.update(patch);
```

Reload the notebook in your browser and the default indent should again be four spaces.

##### Persisting configuration settings[](#persisting-configuration-settings)

Under the hood, Jupyter persists the preferred configuration settings in `~/.jupyter/nbconfig/<section>.json`, with `<section>` depending on the page where the configuration is issued. `<section>` can take values like `notebook`, `tree`, and `editor`. A `common` section contains configuration settings shared by all pages.

##### Hiding the banner[](#id1)

A banner might be shown to users to inform them about news or updates. This banner can be hidden by launching the server with the `show_banner` trait set to `False`:

```
jupyter notebook --NotebookApp.show_banner=False
```

### Notebook extensions[](#notebook-extensions)

* [Distributing Jupyter Extensions as Python Packages](https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Distributing%20Jupyter%20Extensions%20as%20Python%20Packages.html#Distributing-Jupyter-Extensions-as-Python-Packages)
* [Extending the Notebook](https://jupyter-notebook.readthedocs.io/en/stable/extending/index.html)

[Security in Jupyter notebooks:](index.html#notebook-security) Since security policies vary from organization to organization, we encourage you to consult with your security team on settings that would be best for your use cases. Our documentation offers some responsible security practices, and we recommend becoming familiar with them.

Config file and command line options[](#config-file-and-command-line-options)
---

The notebook server can be run with a variety of command line arguments. A list of available options can be found below in the [options section](#options).

Defaults for these options can also be set by creating a file named `jupyter_notebook_config.py` in your Jupyter folder. The Jupyter folder is in your home directory, `~/.jupyter`.

To create a `jupyter_notebook_config.py` file, with all the defaults commented out, you can use the following command line:

```
$ jupyter notebook --generate-config
```

### Options[](#options)

This list of options can be generated by running the following and hitting enter:

```
$ jupyter notebook --help
```

Application.log_datefmt : Unicode. Default: `'%Y-%m-%d %H:%M:%S'`
The date format used by logging formatters for %(asctime)s

Application.log_format : Unicode. Default: `'[%(name)s]%(highlevel)s %(message)s'`
The Logging format template

Application.log_level : any of `0|10|20|30|40|50|'DEBUG'|'INFO'|'WARN'|'ERROR'|'CRITICAL'`. Default: `30`
Set the log level by value or name.
Application.logging_configDictDefault: `{}` Configure additional log handlers. The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings. This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers. If provided this should be a logging configuration dictionary, for more information see: <https://docs.python.org/3/library/logging.config.html#logging-config-dictschemaThis dictionary is merged with the base logging configuration which defines the following: * A logging formatter intended for interactive use called `console`. * A logging handler that writes to stderr called `console` which uses the formatter `console`. * A logger with the name of this application set to `DEBUG` level. This example adds a new handler that writes to a file: ``` c.Application.logging_config = { 'handlers': { 'file': { 'class': 'logging.FileHandler', 'level': 'DEBUG', 'filename': '<path/to/file>', } }, 'loggers': { '<application-name>': { 'level': 'DEBUG', # NOTE: if you don't list the default "console" # handler here then it will be disabled 'handlers': ['console', 'file'], }, } } ``` Application.show_configBoolDefault: `False` Instead of starting the Application, dump configuration to stdout Application.show_config_jsonBoolDefault: `False` Instead of starting the Application, dump configuration to stdout (as JSON) JupyterApp.answer_yesBoolDefault: `False` Answer yes to any prompts. JupyterApp.config_fileUnicodeDefault: `''` Full path of a config file. JupyterApp.config_file_nameUnicodeDefault: `''` Specify a config file to load. JupyterApp.generate_configBoolDefault: `False` Generate default config file. JupyterApp.log_datefmtUnicodeDefault: `'%Y-%m-%d %H:%M:%S'` The date format used by logging formatters for %(asctime)s JupyterApp.log_formatUnicodeDefault: `'[%(name)s]%(highlevel)s %(message)s'` The Logging format template JupyterApp.log_levelany of `0``|``10``|``20``|``30``|``40``|``50``|`’DEBUG’`|`’INFO’`|`’WARN’`|`’ERROR’`|`’CRITICAL’``Default: `30` Set the log level by value or name. JupyterApp.logging_configDictDefault: `{}` Configure additional log handlers. The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings. This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers. If provided this should be a logging configuration dictionary, for more information see: <https://docs.python.org/3/library/logging.config.html#logging-config-dictschemaThis dictionary is merged with the base logging configuration which defines the following: * A logging formatter intended for interactive use called `console`. * A logging handler that writes to stderr called `console` which uses the formatter `console`. * A logger with the name of this application set to `DEBUG` level. 
This example adds a new handler that writes to a file: ``` c.Application.logging_config = { 'handlers': { 'file': { 'class': 'logging.FileHandler', 'level': 'DEBUG', 'filename': '<path/to/file>', } }, 'loggers': { '<application-name>': { 'level': 'DEBUG', # NOTE: if you don't list the default "console" # handler here then it will be disabled 'handlers': ['console', 'file'], }, } } ``` JupyterApp.show_configBoolDefault: `False` Instead of starting the Application, dump configuration to stdout JupyterApp.show_config_jsonBoolDefault: `False` Instead of starting the Application, dump configuration to stdout (as JSON) NotebookApp.allow_credentialsBoolDefault: `False` Set the Access-Control-Allow-Credentials: true header NotebookApp.allow_originUnicodeDefault: `''` Set the Access-Control-Allow-Origin header > Use ‘*’ to allow any origin to access your server. > Takes precedence over allow_origin_pat. NotebookApp.allow_origin_patUnicodeDefault: `''` Use a regular expression for the Access-Control-Allow-Origin header > Requests from an origin matching the expression will get replies with: > > > Access-Control-Allow-Origin: origin > > > > where origin is the origin of the request. > Ignored if allow_origin is set. NotebookApp.allow_password_changeBoolDefault: `True` Allow password to be changed at login for the notebook server. > While logging in with a token, the notebook server UI will give the opportunity to > the user to enter a new password at the same time that will replace > the token login mechanism. > This can be set to false to prevent changing password from the UI/API. NotebookApp.allow_remote_accessBoolDefault: `False` Allow requests where the Host header doesn’t point to a local server > By default, requests get a 403 forbidden response if the ‘Host’ header > shows that the browser thinks it’s on a non-local domain. > Setting this option to True disables this check. > This protects against ‘DNS rebinding’ attacks, where a remote web server > serves you a page and then changes its DNS to send later requests to a > local IP, bypassing same-origin checks. > Local IP addresses (such as 127.0.0.1 and ::1) are allowed as local, > along with hostnames configured in local_hostnames. NotebookApp.allow_rootBoolDefault: `False` Whether to allow the user to run the notebook as root. NotebookApp.answer_yesBoolDefault: `False` Answer yes to any prompts. NotebookApp.authenticate_prometheusBoolDefault: `True` “Require authentication to access prometheus metrics. NotebookApp.autoreloadBoolDefault: `False` Reload the webapp when changes are made to any Python src files. NotebookApp.base_project_urlUnicodeDefault: `'/'` DEPRECATED use base_url NotebookApp.base_urlUnicodeDefault: `'/'` The base URL for the notebook server. > Leading and trailing slashes can be omitted, > and will automatically be added. NotebookApp.browserUnicodeDefault: `''` Specify what command to use to invoke a webbrowser when opening the notebook. If not specified, the default browser will be determined by the webbrowser standard library module, which allows setting of the BROWSER environment variable to override it. NotebookApp.certfileUnicodeDefault: `''` The full path to an SSL/TLS certificate file. NotebookApp.client_caUnicodeDefault: `''` The full path to a certificate authority certificate for SSL/TLS client authentication. NotebookApp.config_fileUnicodeDefault: `''` Full path of a config file. NotebookApp.config_file_nameUnicodeDefault: `''` Specify a config file to load. 
NotebookApp.config_manager_classTypeDefault: `'notebook.services.config.manager.ConfigManager'` The config manager class to use NotebookApp.contents_manager_classTypeFromClassesDefault: `'notebook.services.contents.largefilemanager.LargeFileManager'` The notebook manager class to use. NotebookApp.cookie_optionsDictDefault: `{}` Extra keyword arguments to pass to set_secure_cookie. See tornado’s set_secure_cookie docs for details. NotebookApp.cookie_secretBytesDefault: `b''` The random bytes used to secure cookies.By default this is a new random number every time you start the Notebook. Set it to a value in a config file to enable logins to persist across server sessions. Note: Cookie secrets should be kept private, do not share config files with cookie_secret stored in plaintext (you can read the value from a file). NotebookApp.cookie_secret_fileUnicodeDefault: `''` The file where the cookie secret is stored. NotebookApp.custom_display_urlUnicodeDefault: `''` Override URL shown to users. > Replace actual URL, including protocol, address, port and base URL, > with the given value when displaying URL to the users. Do not change > the actual connection URL. If authentication token is enabled, the > token is added to the custom URL automatically. > This option is intended to be used when the URL to display to the user > cannot be determined reliably by the Jupyter notebook server (proxified > or containerized setups for example). NotebookApp.default_urlUnicodeDefault: `'/tree'` The default URL to redirect to from / NotebookApp.disable_check_xsrfBoolDefault: `False` Disable cross-site-request-forgery protection > Jupyter notebook 4.3.1 introduces protection from cross-site request forgeries, > requiring API requests to either: > * originate from pages served by this server (validated with XSRF cookie and token), or > * authenticate with a token > Some anonymous compute resources still desire the ability to run code, > completely without authentication. > These services can disable all authentication and security checks, > with the full knowledge of what that implies. NotebookApp.enable_mathjaxBoolDefault: `True` Whether to enable MathJax for typesetting math/TeX > MathJax is the javascript library Jupyter uses to render math/LaTeX. It is > very large, so you may want to disable it if you have a slow internet > connection, or for offline use of the notebook. > When disabled, equations etc. will appear as their untransformed TeX source. NotebookApp.extra_nbextensions_pathListDefault: `[]` extra paths to look for Javascript notebook extensions NotebookApp.extra_servicesListDefault: `[]` handlers that should be loaded at higher priority than the default services NotebookApp.extra_static_pathsListDefault: `[]` Extra paths to search for serving static files. > This allows adding javascript/css to be available from the notebook server machine, > or overriding individual files in the IPython NotebookApp.extra_template_pathsListDefault: `[]` Extra paths to search for serving jinja templates. > Can be used to override templates from notebook.templates. NotebookApp.file_to_runUnicodeDefault: `''` No description NotebookApp.generate_configBoolDefault: `False` Generate default config file. NotebookApp.get_secure_cookie_kwargsDictDefault: `{}` Extra keyword arguments to pass to get_secure_cookie. See tornado’s get_secure_cookie docs for details. 
NotebookApp.ignore_minified_jsBoolDefault: `False` Deprecated: Use minified JS file or not, mainly use during dev to avoid JS recompilation NotebookApp.iopub_data_rate_limitFloatDefault: `1000000` (bytes/sec)Maximum rate at which stream output can be sent on iopub before they are limited. NotebookApp.iopub_msg_rate_limitFloatDefault: `1000` (msgs/sec)Maximum rate at which messages can be sent on iopub before they are limited. NotebookApp.ipUnicodeDefault: `'localhost'` The IP address the notebook server will listen on. NotebookApp.jinja_environment_optionsDictDefault: `{}` Supply extra arguments that will be passed to Jinja environment. NotebookApp.jinja_template_varsDictDefault: `{}` Extra variables to supply to jinja templates when rendering. NotebookApp.kernel_manager_classTypeDefault: `'notebook.services.kernels.kernelmanager.MappingKernelManager'` The kernel manager class to use. NotebookApp.kernel_spec_manager_classTypeDefault: `'jupyter_client.kernelspec.KernelSpecManager'` The kernel spec manager class to use. Should be a subclass of jupyter_client.kernelspec.KernelSpecManager. The Api of KernelSpecManager is provisional and might change without warning between this version of Jupyter and the next stable one. NotebookApp.keyfileUnicodeDefault: `''` The full path to a private key file for usage with SSL/TLS. NotebookApp.local_hostnamesListDefault: `['localhost']` Hostnames to allow as local when allow_remote_access is False. > Local IP addresses (such as 127.0.0.1 and ::1) are automatically accepted > as local as well. NotebookApp.log_datefmtUnicodeDefault: `'%Y-%m-%d %H:%M:%S'` The date format used by logging formatters for %(asctime)s NotebookApp.log_formatUnicodeDefault: `'[%(name)s]%(highlevel)s %(message)s'` The Logging format template NotebookApp.log_jsonBoolDefault: `False` Set to True to enable JSON formatted logs. Run “pip install notebook[json-logging]” to install the required dependent packages. Can also be set using the environment variable JUPYTER_ENABLE_JSON_LOGGING=true. NotebookApp.log_levelany of `0``|``10``|``20``|``30``|``40``|``50``|`’DEBUG’`|`’INFO’`|`’WARN’`|`’ERROR’`|`’CRITICAL’``Default: `30` Set the log level by value or name. NotebookApp.logging_configDictDefault: `{}` Configure additional log handlers. The default stderr logs handler is configured by the log_level, log_datefmt and log_format settings. This configuration can be used to configure additional handlers (e.g. to output the log to a file) or for finer control over the default handlers. If provided this should be a logging configuration dictionary, for more information see: <https://docs.python.org/3/library/logging.config.html#logging-config-dictschemaThis dictionary is merged with the base logging configuration which defines the following: * A logging formatter intended for interactive use called `console`. * A logging handler that writes to stderr called `console` which uses the formatter `console`. * A logger with the name of this application set to `DEBUG` level. This example adds a new handler that writes to a file: ``` c.Application.logging_config = { 'handlers': { 'file': { 'class': 'logging.FileHandler', 'level': 'DEBUG', 'filename': '<path/to/file>', } }, 'loggers': { '<application-name>': { 'level': 'DEBUG', # NOTE: if you don't list the default "console" # handler here then it will be disabled 'handlers': ['console', 'file'], }, } } ``` NotebookApp.login_handler_classTypeDefault: `'notebook.auth.login.LoginHandler'` The login handler class to use. 
NotebookApp.logout_handler_classTypeDefault: `'notebook.auth.logout.LogoutHandler'` The logout handler class to use. NotebookApp.mathjax_configUnicodeDefault: `'TeX-AMS-MML_HTMLorMML-full,Safe'` The MathJax.js configuration file that is to be used. NotebookApp.mathjax_urlUnicodeDefault: `''` A custom url for MathJax.js.Should be in the form of a case-sensitive url to MathJax, for example: /static/components/MathJax/MathJax.js NotebookApp.max_body_sizeIntDefault: `536870912` Sets the maximum allowed size of the client request body, specified in the Content-Length request header field. If the size in a request exceeds the configured value, a malformed HTTP message is returned to the client. Note: max_body_size is applied even in streaming mode. NotebookApp.max_buffer_sizeIntDefault: `536870912` Gets or sets the maximum amount of memory, in bytes, that is allocated for use by the buffer manager. NotebookApp.min_open_files_limitIntDefault: `0` Gets or sets a lower bound on the open file handles process resource limit. This may need to be increased if you run into an OSError: [Errno 24] Too many open files. This is not applicable when running on Windows. NotebookApp.nbserver_extensionsDictDefault: `{}` Dict of Python modules to load as notebook server extensions. Entry values can be used to enable and disable the loading of the extensions. The extensions will be loaded in alphabetical order. NotebookApp.notebook_dirUnicodeDefault: `''` The directory to use for notebooks and kernels. NotebookApp.open_browserBoolDefault: `True` Whether to open in a browser after starting.The specific browser used is platform dependent and determined by the python standard library webbrowser module, unless it is overridden using the –browser (NotebookApp.browser) configuration option. NotebookApp.passwordUnicodeDefault: `''` Hashed password to use for web authentication. > To generate, type in a python/IPython shell: > > > from notebook.auth import passwd; passwd() > > > > The string should be of the form type:salt:hashed-password. NotebookApp.password_requiredBoolDefault: `False` Forces users to use a password for the Notebook server.This is useful in a multi user environment, for instance when everybody in the LAN can access each other’s machine through ssh. In such a case, serving the notebook server on localhost is not secure since any user can connect to the notebook server via ssh. NotebookApp.portIntDefault: `8888` The port the notebook server will listen on (env: JUPYTER_PORT). NotebookApp.port_retriesIntDefault: `50` The number of additional ports to try if the specified port is not available (env: JUPYTER_PORT_RETRIES). NotebookApp.pylabUnicodeDefault: `'disabled'` DISABLED: use %pylab or %matplotlib in the notebook to enable matplotlib. NotebookApp.quit_buttonBoolDefault: `True` If True, display a button in the dashboard to quit(shutdown the notebook server). NotebookApp.rate_limit_windowFloatDefault: `3` (sec) Time window used tocheck the message and data rate limits. NotebookApp.reraise_server_extension_failuresBoolDefault: `False` Reraise exceptions encountered loading server extensions? NotebookApp.server_extensionsListDefault: `[]` DEPRECATED use the nbserver_extensions dict instead NotebookApp.session_manager_classTypeDefault: `'notebook.services.sessions.sessionmanager.SessionManager'` The session manager class to use. NotebookApp.show_bannerBoolDefault: `True` Whether the banner is displayed on the page. > By default, the banner is displayed. 
NotebookApp.show_config : Bool. Default: `False`
Instead of starting the Application, dump configuration to stdout

NotebookApp.show_config_json : Bool. Default: `False`
Instead of starting the Application, dump configuration to stdout (as JSON)

NotebookApp.shutdown_no_activity_timeout : Int. Default: `0`
Shut down the server after N seconds with no kernels or terminals running and no activity. This can be used together with culling idle kernels (MappingKernelManager.cull_idle_timeout) to shut down the notebook server when it’s not in use. This is not precisely timed: it may shut down up to a minute later. 0 (the default) disables this automatic shutdown.

NotebookApp.sock : Unicode. Default: `''`
The UNIX socket the notebook server will listen on.

NotebookApp.sock_mode : Unicode. Default: `'0600'`
The permissions mode for UNIX socket creation (default: 0600).

NotebookApp.ssl_options : Dict. Default: `{}`
Supply SSL options for the tornado HTTPServer. See the tornado docs for details.

NotebookApp.terminado_settings : Dict. Default: `{}`
Supply overrides for terminado. Currently only supports “shell_command”. On Unix, if “shell_command” is not provided, a non-login shell is launched by default when the notebook server is connected to a terminal, a login shell otherwise.

NotebookApp.terminals_enabled : Bool. Default: `True`
Set to False to disable terminals.
> This does *not* make the notebook server more secure by itself.
> Anything the user can do in a terminal, they can also do in a notebook.
> Terminals may also be automatically disabled if the terminado package is not available.

NotebookApp.token : Unicode. Default: `'<generated>'`
Token used for authenticating first-time connections to the server.
> The token can be read from the file referenced by JUPYTER_TOKEN_FILE or set directly with the JUPYTER_TOKEN environment variable.
> When no password is enabled, the default is to generate a new, random token.
> Setting to an empty string disables authentication altogether, which is NOT RECOMMENDED.

NotebookApp.tornado_settings : Dict. Default: `{}`
Supply overrides for the tornado.web.Application that the Jupyter notebook uses.

NotebookApp.trust_xheaders : Bool. Default: `False`
Whether or not to trust X-Scheme/X-Forwarded-Proto and X-Real-Ip/X-Forwarded-For headers sent by the upstream reverse proxy. Necessary if the proxy handles SSL.

NotebookApp.use_redirect_file : Bool. Default: `True`
Disable launching the browser by redirect file.
> For versions of notebook > 5.7.2, a security measure was added that prevented the authentication token used to launch the browser from being visible.
> This feature makes it difficult for other users on a multi-user system to run code in your Jupyter session as you.
> However, in some environments (like Windows Subsystem for Linux (WSL) and Chromebooks), launching a browser using a redirect file can lead to the browser failing to load.
> This is because of the difference in file structures/paths between the runtime and the browser.
> Setting this to False will disable this behavior, allowing the browser to launch by using a URL and visible token (as before).

NotebookApp.webapp_settings : Dict. Default: `{}`
DEPRECATED, use tornado_settings

NotebookApp.webbrowser_open_new : Int. Default: `2`
Specify where to open the notebook on startup. This is the `new` argument passed to the standard library method webbrowser.open. The behaviour is not guaranteed, but depends on browser support. Valid values are:
> * 2 opens a new tab,
> * 1 opens a new window,
> * 0 opens in an existing window.
See the webbrowser.open documentation for details. NotebookApp.websocket_compression_optionsAnyDefault: `None` Set the tornado compression options for websocket connections. This value will be returned from `WebSocketHandler.get_compression_options()`. None (default) will disable compression. A dict (even an empty one) will enable compression. See the tornado docs for WebSocketHandler.get_compression_options for details. NotebookApp.websocket_urlUnicodeDefault: `''` The base URL for websockets,if it differs from the HTTP server (hint: it almost certainly doesn’t). Should be in the form of an HTTP origin: ws[s]://hostname[:port] ConnectionFileMixin.connection_fileUnicodeDefault: `''` JSON file in which to store connection info [default: kernel-<pid>.json] > This file will contain the IP, ports, and authentication key needed to connect > clients to this kernel. By default, this file will be created in the security dir > of the current profile, but can be specified by absolute path. ConnectionFileMixin.control_portIntDefault: `0` set the control (ROUTER) port [default: random] ConnectionFileMixin.hb_portIntDefault: `0` set the heartbeat port [default: random] ConnectionFileMixin.iopub_portIntDefault: `0` set the iopub (PUB) port [default: random] ConnectionFileMixin.ipUnicodeDefault: `''` Set the kernel’s IP address [default localhost].If the IP address is something other than localhost, then Consoles on other machines will be able to connect to the Kernel, so be careful! ConnectionFileMixin.shell_portIntDefault: `0` set the shell (ROUTER) port [default: random] ConnectionFileMixin.stdin_portIntDefault: `0` set the stdin (ROUTER) port [default: random] ConnectionFileMixin.transportany of `'tcp'``|`’ipc’`` (case-insensitive)Default: `'tcp'` No description KernelManager.autorestartBoolDefault: `True` Should we autorestart the kernel if it dies. KernelManager.connection_fileUnicodeDefault: `''` JSON file in which to store connection info [default: kernel-<pid>.json] > This file will contain the IP, ports, and authentication key needed to connect > clients to this kernel. By default, this file will be created in the security dir > of the current profile, but can be specified by absolute path. KernelManager.control_portIntDefault: `0` set the control (ROUTER) port [default: random] KernelManager.hb_portIntDefault: `0` set the heartbeat port [default: random] KernelManager.iopub_portIntDefault: `0` set the iopub (PUB) port [default: random] KernelManager.ipUnicodeDefault: `''` Set the kernel’s IP address [default localhost].If the IP address is something other than localhost, then Consoles on other machines will be able to connect to the Kernel, so be careful! KernelManager.shell_portIntDefault: `0` set the shell (ROUTER) port [default: random] KernelManager.shutdown_wait_timeFloatDefault: `5.0` Time to wait for a kernel to terminate before killing it, in seconds. When a shutdown request is initiated, the kernel will be immediately sent an interrupt (SIGINT), followedby a shutdown_request message, after 1/2 of shutdown_wait_time`it will be sent a terminate (SIGTERM) request, and finally at the end of `shutdown_wait_time will be killed (SIGKILL). terminate and kill may be equivalent on windows. Note that this value can beoverridden by the in-use kernel provisioner since shutdown times mayvary by provisioned environment. 
KernelManager.stdin_portIntDefault: `0` set the stdin (ROUTER) port [default: random] KernelManager.transportany of `'tcp'``|`’ipc’`` (case-insensitive)Default: `'tcp'` No description Session.buffer_thresholdIntDefault: `1024` Threshold (in bytes) beyond which an object’s buffer should be extracted to avoid pickling. Session.check_pidBoolDefault: `True` Whether to check PID to protect against calls after fork. > This check can be disabled if fork-safety is handled elsewhere. Session.copy_thresholdIntDefault: `65536` Threshold (in bytes) beyond which a buffer should be sent without copying. Session.debugBoolDefault: `False` Debug output in the Session Session.digest_history_sizeIntDefault: `65536` The maximum number of digests to remember. > The digest history will be culled when it exceeds this value. Session.item_thresholdIntDefault: `64` The maximum number of items for a container to be introspected for custom serialization.Containers larger than this are pickled outright. Session.keyCBytesDefault: `b''` execution key, for signing messages. Session.keyfileUnicodeDefault: `''` path to file containing execution key. Session.metadataDictDefault: `{}` Metadata dictionary, which serves as the default top-level metadata dict for each message. Session.packerDottedObjectNameDefault: `'json'` The name of the packer for serializing messages.Should be one of ‘json’, ‘pickle’, or an import name for a custom callable serializer. Session.sessionCUnicodeDefault: `''` The UUID identifying this session. Session.signature_schemeUnicodeDefault: `'hmac-sha256'` The digest scheme used to construct the message signatures.Must have the form ‘hmac-HASH’. Session.unpackerDottedObjectNameDefault: `'json'` The name of the unpacker for unserializing messages.Only used with custom functions for packer. Session.usernameUnicodeDefault: `'username'` Username for the Session. Default is your system username. MultiKernelManager.default_kernel_nameUnicodeDefault: `'python3'` The name of the default kernel to start MultiKernelManager.kernel_manager_classDottedObjectNameDefault: `'jupyter_client.ioloop.IOLoopKernelManager'` The kernel manager class. This is configurable to allowsubclassing of the KernelManager for customized behavior. MultiKernelManager.shared_contextBoolDefault: `True` Share a single zmq.Context to talk to all my kernels MappingKernelManager.allowed_message_typesListDefault: `[]` White list of allowed kernel message types.When the list is empty, all message types are allowed. MappingKernelManager.buffer_offline_messagesBoolDefault: `True` Whether messages from kernels whose frontends have disconnected should be buffered in-memory.When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity. Disable if long-running kernels will produce too much output while no frontends are connected. MappingKernelManager.cull_busyBoolDefault: `False` Whether to consider culling kernels which are busy.Only effective if cull_idle_timeout > 0. MappingKernelManager.cull_connectedBoolDefault: `False` Whether to consider culling kernels which have one or more connections.Only effective if cull_idle_timeout > 0. MappingKernelManager.cull_idle_timeoutIntDefault: `0` Timeout (in seconds) after which a kernel is considered idle and ready to be culled.Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections. 
MappingKernelManager.cull_intervalIntDefault: `300` The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value. MappingKernelManager.default_kernel_nameUnicodeDefault: `'python3'` The name of the default kernel to start MappingKernelManager.kernel_info_timeoutFloatDefault: `60` Timeout for giving up on a kernel (in seconds).On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup). MappingKernelManager.kernel_manager_classDottedObjectNameDefault: `'jupyter_client.ioloop.IOLoopKernelManager'` The kernel manager class. This is configurable to allowsubclassing of the KernelManager for customized behavior. MappingKernelManager.root_dirUnicodeDefault: `''` No description MappingKernelManager.shared_contextBoolDefault: `True` Share a single zmq.Context to talk to all my kernels KernelSpecManager.allowed_kernelspecsSetDefault: `set()` List of allowed kernel names. > By default, all installed kernels are allowed. KernelSpecManager.ensure_native_kernelBoolDefault: `True` If there is no Python kernelspec registered and the IPythonkernel is available, ensure it is added to the spec list. KernelSpecManager.kernel_spec_classTypeDefault: `'jupyter_client.kernelspec.KernelSpec'` The kernel spec class. This is configurable to allowsubclassing of the KernelSpecManager for customized behavior. KernelSpecManager.whitelistSetDefault: `set()` Deprecated, use KernelSpecManager.allowed_kernelspecs ContentsManager.allow_hiddenBoolDefault: `False` Allow access to hidden files ContentsManager.checkpointsInstanceDefault: `None` No description ContentsManager.checkpoints_classTypeDefault: `'notebook.services.contents.checkpoints.Checkpoints'` No description ContentsManager.checkpoints_kwargsDictDefault: `{}` No description ContentsManager.files_handler_classTypeDefault: `'notebook.files.handlers.FilesHandler'` handler class to use when serving raw file requests. > Default is a fallback that talks to the ContentsManager API, > which may be inefficient, especially for large files. > Local files-based ContentsManagers can use a StaticFileHandler subclass, > which will be much more efficient. > Access to these files should be Authenticated. ContentsManager.files_handler_paramsDictDefault: `{}` Extra parameters to pass to files_handler_class. > For example, StaticFileHandlers generally expect a path argument > specifying the root directory from which to serve files. ContentsManager.hide_globsListDefault: `['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...` Glob patterns to hide in file and directory listings. ContentsManager.pre_save_hookAnyDefault: `None` Python callable or importstring thereof > To be called on a contents model prior to save. > This can be used to process the structure, > such as removing notebook outputs or other side effects that > should not be saved. > It will be called as (all arguments passed by keyword): > ``` > hook(path=path, model=model, contents_manager=self) > ``` > * model: the model to be saved. Includes file contents. > Modifying this dict will affect the file that is stored. 
> * path: the API path of the save destination > * contents_manager: this ContentsManager instance ContentsManager.root_dirUnicodeDefault: `'/'` No description ContentsManager.untitled_directoryUnicodeDefault: `'Untitled Folder'` The base name used when creating untitled directories. ContentsManager.untitled_fileUnicodeDefault: `'untitled'` The base name used when creating untitled files. ContentsManager.untitled_notebookUnicodeDefault: `'Untitled'` The base name used when creating untitled notebooks. FileManagerMixin.use_atomic_writingBoolDefault: `True` By default notebooks are saved on disk on a temporary file and then if successfully written, it replaces the old ones.This procedure, namely ‘atomic_writing’, causes some bugs on file system without operation order enforcement (like some networked fs). If set to False, the new notebook is written directly on the old one which could fail (eg: full filesystem or quota ) FileContentsManager.allow_hiddenBoolDefault: `False` Allow access to hidden files FileContentsManager.checkpointsInstanceDefault: `None` No description FileContentsManager.checkpoints_classTypeDefault: `'notebook.services.contents.checkpoints.Checkpoints'` No description FileContentsManager.checkpoints_kwargsDictDefault: `{}` No description FileContentsManager.delete_to_trashBoolDefault: `True` If True (default), deleting files will send them to theplatform’s trash/recycle bin, where they can be recovered. If False, deleting files really deletes them. FileContentsManager.files_handler_classTypeDefault: `'notebook.files.handlers.FilesHandler'` handler class to use when serving raw file requests. > Default is a fallback that talks to the ContentsManager API, > which may be inefficient, especially for large files. > Local files-based ContentsManagers can use a StaticFileHandler subclass, > which will be much more efficient. > Access to these files should be Authenticated. FileContentsManager.files_handler_paramsDictDefault: `{}` Extra parameters to pass to files_handler_class. > For example, StaticFileHandlers generally expect a path argument > specifying the root directory from which to serve files. FileContentsManager.hide_globsListDefault: `['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...` Glob patterns to hide in file and directory listings. FileContentsManager.post_save_hookAnyDefault: `None` Python callable or importstring thereof > to be called on the path of a file just saved. > This can be used to process the file on disk, > such as converting the notebook to a script or HTML via nbconvert. > It will be called as (all arguments passed by keyword): > ``` > hook(os_path=os_path, model=model, contents_manager=instance) > ``` > * path: the filesystem path to the file just written > * model: the model representing the file > * contents_manager: this ContentsManager instance FileContentsManager.pre_save_hookAnyDefault: `None` Python callable or importstring thereof > To be called on a contents model prior to save. > This can be used to process the structure, > such as removing notebook outputs or other side effects that > should not be saved. > It will be called as (all arguments passed by keyword): > ``` > hook(path=path, model=model, contents_manager=self) > ``` > * model: the model to be saved. Includes file contents. > Modifying this dict will affect the file that is stored. 
> * path: the API path of the save destination > * contents_manager: this ContentsManager instance FileContentsManager.root_dirUnicodeDefault: `''` No description FileContentsManager.save_scriptBoolDefault: `False` DEPRECATED, use post_save_hook. Will be removed in Notebook 5.0 FileContentsManager.untitled_directoryUnicodeDefault: `'Untitled Folder'` The base name used when creating untitled directories. FileContentsManager.untitled_fileUnicodeDefault: `'untitled'` The base name used when creating untitled files. FileContentsManager.untitled_notebookUnicodeDefault: `'Untitled'` The base name used when creating untitled notebooks. FileContentsManager.use_atomic_writingBoolDefault: `True` By default notebooks are saved on disk on a temporary file and then if successfully written, it replaces the old ones.This procedure, namely ‘atomic_writing’, causes some bugs on file system without operation order enforcement (like some networked fs). If set to False, the new notebook is written directly on the old one which could fail (eg: full filesystem or quota ) NotebookNotary.algorithmany of `'sha224'``|`’sha3_512’`|`’sha384’`|`’md5’`|`’sha3_384’`|`’sha512’`|`’sha256’`|`’blake2s’`|`’sha3_224’`|`’blake2b’`|`’sha3_256’`|`’sha1’``Default: `'sha256'` The hashing algorithm used to sign notebooks. NotebookNotary.data_dirUnicodeDefault: `''` The storage directory for notary secret and database. NotebookNotary.db_fileUnicodeDefault: `''` The sqlite file in which to store notebook signatures.By default, this will be in your Jupyter data directory. You can set it to ‘:memory:’ to disable sqlite writing to the filesystem. NotebookNotary.secretBytesDefault: `b''` The secret key with which notebooks are signed. NotebookNotary.secret_fileUnicodeDefault: `''` The file where the secret key is stored. NotebookNotary.store_factoryCallableDefault: `traitlets.Undefined` A callable returning the storage backend for notebook signatures.The default uses an SQLite database. AsyncMultiKernelManager.default_kernel_nameUnicodeDefault: `'python3'` The name of the default kernel to start AsyncMultiKernelManager.kernel_manager_classDottedObjectNameDefault: `'jupyter_client.ioloop.AsyncIOLoopKernelManager'` The kernel manager class. This is configurable to allowsubclassing of the AsyncKernelManager for customized behavior. AsyncMultiKernelManager.shared_contextBoolDefault: `True` Share a single zmq.Context to talk to all my kernels AsyncMultiKernelManager.use_pending_kernelsBoolDefault: `False` Whether to make kernels available before the process has started. Thekernel has a .ready future which can be awaited before connecting AsyncMappingKernelManager.allowed_message_typesListDefault: `[]` White list of allowed kernel message types.When the list is empty, all message types are allowed. AsyncMappingKernelManager.buffer_offline_messagesBoolDefault: `True` Whether messages from kernels whose frontends have disconnected should be buffered in-memory.When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity. Disable if long-running kernels will produce too much output while no frontends are connected. AsyncMappingKernelManager.cull_busyBoolDefault: `False` Whether to consider culling kernels which are busy.Only effective if cull_idle_timeout > 0. AsyncMappingKernelManager.cull_connectedBoolDefault: `False` Whether to consider culling kernels which have one or more connections.Only effective if cull_idle_timeout > 0. 
AsyncMappingKernelManager.cull_idle_timeoutIntDefault: `0` Timeout (in seconds) after which a kernel is considered idle and ready to be culled.Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections. AsyncMappingKernelManager.cull_intervalIntDefault: `300` The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value. AsyncMappingKernelManager.default_kernel_nameUnicodeDefault: `'python3'` The name of the default kernel to start AsyncMappingKernelManager.kernel_info_timeoutFloatDefault: `60` Timeout for giving up on a kernel (in seconds).On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup). AsyncMappingKernelManager.kernel_manager_classDottedObjectNameDefault: `'jupyter_client.ioloop.AsyncIOLoopKernelManager'` The kernel manager class. This is configurable to allowsubclassing of the AsyncKernelManager for customized behavior. AsyncMappingKernelManager.root_dirUnicodeDefault: `''` No description AsyncMappingKernelManager.shared_contextBoolDefault: `True` Share a single zmq.Context to talk to all my kernels AsyncMappingKernelManager.use_pending_kernelsBoolDefault: `False` Whether to make kernels available before the process has started. Thekernel has a .ready future which can be awaited before connecting GatewayKernelManager.allowed_message_typesListDefault: `[]` White list of allowed kernel message types.When the list is empty, all message types are allowed. GatewayKernelManager.buffer_offline_messagesBoolDefault: `True` Whether messages from kernels whose frontends have disconnected should be buffered in-memory.When True (default), messages are buffered and replayed on reconnect, avoiding lost messages due to interrupted connectivity. Disable if long-running kernels will produce too much output while no frontends are connected. GatewayKernelManager.cull_busyBoolDefault: `False` Whether to consider culling kernels which are busy.Only effective if cull_idle_timeout > 0. GatewayKernelManager.cull_connectedBoolDefault: `False` Whether to consider culling kernels which have one or more connections.Only effective if cull_idle_timeout > 0. GatewayKernelManager.cull_idle_timeoutIntDefault: `0` Timeout (in seconds) after which a kernel is considered idle and ready to be culled.Values of 0 or lower disable culling. Very short timeouts may result in kernels being culled for users with poor network connections. GatewayKernelManager.cull_intervalIntDefault: `300` The interval (in seconds) on which to check for idle kernels exceeding the cull timeout value. GatewayKernelManager.default_kernel_nameUnicodeDefault: `'python3'` The name of the default kernel to start GatewayKernelManager.kernel_info_timeoutFloatDefault: `60` Timeout for giving up on a kernel (in seconds).On starting and restarting kernels, we check whether the kernel is running and responsive by sending kernel_info_requests. This sets the timeout in seconds for how long the kernel can take before being presumed dead. This affects the MappingKernelManager (which handles kernel restarts) and the ZMQChannelsHandler (which handles the startup). 
GatewayKernelManager.kernel_manager_classDottedObjectNameDefault: `'jupyter_client.ioloop.AsyncIOLoopKernelManager'` The kernel manager class. This is configurable to allowsubclassing of the AsyncKernelManager for customized behavior. GatewayKernelManager.root_dirUnicodeDefault: `''` No description GatewayKernelManager.shared_contextBoolDefault: `True` Share a single zmq.Context to talk to all my kernels GatewayKernelManager.use_pending_kernelsBoolDefault: `False` Whether to make kernels available before the process has started. Thekernel has a .ready future which can be awaited before connecting GatewayKernelSpecManager.allowed_kernelspecsSetDefault: `set()` List of allowed kernel names. > By default, all installed kernels are allowed. GatewayKernelSpecManager.ensure_native_kernelBoolDefault: `True` If there is no Python kernelspec registered and the IPythonkernel is available, ensure it is added to the spec list. GatewayKernelSpecManager.kernel_spec_classTypeDefault: `'jupyter_client.kernelspec.KernelSpec'` The kernel spec class. This is configurable to allowsubclassing of the KernelSpecManager for customized behavior. GatewayKernelSpecManager.whitelistSetDefault: `set()` Deprecated, use KernelSpecManager.allowed_kernelspecs GatewayClient.auth_tokenUnicodeDefault: `None` The authorization token used in the HTTP headers. (JUPYTER_GATEWAY_AUTH_TOKEN env var) GatewayClient.ca_certsUnicodeDefault: `None` The filename of CA certificates or None to use defaults. (JUPYTER_GATEWAY_CA_CERTS env var) GatewayClient.client_certUnicodeDefault: `None` The filename for client SSL certificate, if any. (JUPYTER_GATEWAY_CLIENT_CERT env var) GatewayClient.client_keyUnicodeDefault: `None` The filename for client SSL key, if any. (JUPYTER_GATEWAY_CLIENT_KEY env var) GatewayClient.connect_timeoutFloatDefault: `40.0` The time allowed for HTTP connection establishment with the Gateway server.(JUPYTER_GATEWAY_CONNECT_TIMEOUT env var) GatewayClient.env_whitelistUnicodeDefault: `''` A comma-separated list of environment variable names that will be included, along withtheir values, in the kernel startup request. The corresponding env_whitelist configuration value must also be set on the Gateway server - since that configuration value indicates which environmental values to make available to the kernel. (JUPYTER_GATEWAY_ENV_WHITELIST env var) GatewayClient.gateway_retry_intervalFloatDefault: `1.0` The time allowed for HTTP reconnection with the Gateway server for the first time.Next will be JUPYTER_GATEWAY_RETRY_INTERVAL multiplied by two in factor of numbers of retries but less than JUPYTER_GATEWAY_RETRY_INTERVAL_MAX. (JUPYTER_GATEWAY_RETRY_INTERVAL env var) GatewayClient.gateway_retry_interval_maxFloatDefault: `30.0` The maximum time allowed for HTTP reconnection retry with the Gateway server.(JUPYTER_GATEWAY_RETRY_INTERVAL_MAX env var) GatewayClient.gateway_retry_maxIntDefault: `5` The maximum retries allowed for HTTP reconnection with the Gateway server.(JUPYTER_GATEWAY_RETRY_MAX env var) GatewayClient.headersUnicodeDefault: `'{}'` Additional HTTP headers to pass on the request. This value will be converted to a dict.(JUPYTER_GATEWAY_HEADERS env var) GatewayClient.http_pwdUnicodeDefault: `None` The password for HTTP authentication. (JUPYTER_GATEWAY_HTTP_PWD env var) GatewayClient.http_userUnicodeDefault: `None` The username for HTTP authentication. 
(JUPYTER_GATEWAY_HTTP_USER env var) GatewayClient.kernels_endpointUnicodeDefault: `'/api/kernels'` The gateway API endpoint for accessing kernel resources (JUPYTER_GATEWAY_KERNELS_ENDPOINT env var) GatewayClient.kernelspecs_endpointUnicodeDefault: `'/api/kernelspecs'` The gateway API endpoint for accessing kernelspecs (JUPYTER_GATEWAY_KERNELSPECS_ENDPOINT env var) GatewayClient.kernelspecs_resource_endpointUnicodeDefault: `'/kernelspecs'` The gateway endpoint for accessing kernelspecs resources (JUPYTER_GATEWAY_KERNELSPECS_RESOURCE_ENDPOINT env var) GatewayClient.request_timeoutFloatDefault: `40.0` The time allowed for HTTP request completion. (JUPYTER_GATEWAY_REQUEST_TIMEOUT env var) GatewayClient.urlUnicodeDefault: `None` The url of the Kernel or Enterprise Gateway server where kernel specifications are defined and kernel management takes place. If defined, this Notebook server acts as a proxy for all kernel management and kernel specification retrieval. (JUPYTER_GATEWAY_URL env var) GatewayClient.validate_certBoolDefault: `True` For HTTPS requests, determines whether the server’s certificate should be validated or not. (JUPYTER_GATEWAY_VALIDATE_CERT env var) GatewayClient.ws_urlUnicodeDefault: `None` The websocket url of the Kernel or Enterprise Gateway server. If not provided, this value will correspond to the value of the Gateway url with ‘ws’ in place of ‘http’. (JUPYTER_GATEWAY_WS_URL env var) TerminalManager.cull_inactive_timeoutIntDefault: `0` Timeout (in seconds) after which a terminal is considered inactive and ready to be culled. Values of 0 or lower disable culling. TerminalManager.cull_intervalIntDefault: `300` The interval (in seconds) on which to check for terminals exceeding the inactive timeout value. Running a notebook server[](#running-a-notebook-server) --- The [Jupyter notebook](index.html#document-notebook) web application is based on a server-client structure. The notebook server uses a [two-process kernel architecture](https://ipython.readthedocs.io/en/stable/overview.html#ipythonzmq) based on [ZeroMQ](http://zeromq.org), as well as [Tornado](http://www.tornadoweb.org) for serving HTTP requests. Note By default, a notebook server runs locally at 127.0.0.1:8888 and is accessible only from localhost. You may access the notebook server from the browser using http://127.0.0.1:8888. This document describes how you can [secure a notebook server](#notebook-server-security) and how to [run it on a public interface](#notebook-public-server). Important **This is not the multi-user server you are looking for**. This document describes how you can run a public server with a single user. This should only be done by someone who wants remote access to their personal machine. Even so, doing this requires a thorough understanding of the setup’s limitations and security implications. If you allow multiple users to access a notebook server as described in this document, their commands may collide, clobber and overwrite each other. If you want a multi-user server, the official solution is [JupyterHub](https://jupyterhub.readthedocs.io/en/latest/). To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your users on a network. This may run over the public internet, but doing so introduces additional [security concerns](https://jupyterhub.readthedocs.io/en/latest/getting-started/security-basics.html). ### Securing a notebook server[](#securing-a-notebook-server) You can protect your notebook server with a single password. 
As of notebook 5.0 this can be done automatically. To set up a password manually you can configure the `NotebookApp.password` setting in `jupyter_notebook_config.py`. #### Prerequisite: A notebook configuration file[](#prerequisite-a-notebook-configuration-file) Check to see if you have a notebook configuration file, `jupyter_notebook_config.py`. The default location for this file is your Jupyter folder located in your home directory: > * Windows: `C:\Users\USERNAME\.jupyter\jupyter_notebook_config.py` > * OS X: `/Users/USERNAME/.jupyter/jupyter_notebook_config.py` > * Linux: `/home/USERNAME/.jupyter/jupyter_notebook_config.py` If you don’t already have a Jupyter folder, or if your Jupyter folder doesn’t contain a notebook configuration file, run the following command: ``` $ jupyter notebook --generate-config ``` This command will create the Jupyter folder if necessary, and create the notebook configuration file, `jupyter_notebook_config.py`, in this folder. #### Automatic Password setup[](#automatic-password-setup) As of notebook 5.3, the first time you log in using a token, the notebook server should give you the opportunity to set up a password from the user interface. You will be presented with a form asking for the current _token_, as well as your _new password_; enter both and click on `Login and setup new password`. The next time you need to log in you’ll be able to use the new password instead of the login token; otherwise, follow the procedure below to set a password from the command line. The ability to change the password at first login time may be disabled by integrations by setting `--NotebookApp.allow_password_change=False`. Starting at notebook version 5.0, you can enter and store a password for your notebook server with a single command. **jupyter notebook password** will prompt you for your password and record the hashed password in your `jupyter_notebook_config.json`. ``` $ jupyter notebook password Enter password: **** Verify password: **** [NotebookPasswordApp] Wrote hashed password to /Users/you/.jupyter/jupyter_notebook_config.json ``` This can be used to reset a lost password, or to change your password if you believe your credentials have been leaked. Changing your password will invalidate all logged-in sessions after a server restart. #### Preparing a hashed password[](#preparing-a-hashed-password) You can prepare a hashed password manually, using the function `notebook.auth.security.passwd()`: ``` In [1]: from notebook.auth import passwd In [2]: passwd() Enter password: Verify password: Out[2]: 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed' ``` Caution `passwd()` when called with no arguments will prompt you to enter and verify your password, as in the snippet above. Although the function can also be passed a string as an argument, such as `passwd('mypassword')`, please **do not** pass a string as an argument inside an IPython session, as it will be saved in your input history. #### Adding hashed password to your notebook configuration file[](#adding-hashed-password-to-your-notebook-configuration-file) You can then add the hashed password to your `jupyter_notebook_config.py`. 
The default location for this file, `jupyter_notebook_config.py`, is your Jupyter folder in your home directory, `~/.jupyter`, e.g.: ``` c.NotebookApp.password = u'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed' ``` Automatic password setup will store the hash in `jupyter_notebook_config.json`, while this method stores the hash in `jupyter_notebook_config.py`. The `.json` configuration options take precedence over the `.py` ones, so the manually set password may not take effect if the JSON file has a password set. #### Using SSL for encrypted communication[](#using-ssl-for-encrypted-communication) When using a password, it is a good idea to also use SSL with a web certificate, so that your hashed password is not sent unencrypted by your browser. Important Web security is rapidly changing and evolving. We provide this document as a convenience to the user, and recommend that the user keep current on changes that may impact security, such as new releases of OpenSSL. The Open Web Application Security Project ([OWASP](https://www.owasp.org)) website is a good resource on general security issues and web practices. You can start the notebook server in secure protocol mode by setting the `certfile` option to your self-signed certificate, e.g. `mycert.pem`, with the command: ``` $ jupyter notebook --certfile=mycert.pem --keyfile mykey.key ``` Tip A self-signed certificate can be generated with `openssl`. For example, the following command will create a certificate valid for 365 days, writing the key to `mykey.key` and the certificate to `mycert.pem`: ``` $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mykey.key -out mycert.pem ``` When starting the notebook server, your browser may warn that your self-signed certificate is insecure or unrecognized. If you wish to have a fully compliant self-signed certificate that will not raise warnings, it is possible (but rather involved) to create one, as explained in detail in this [tutorial](https://arstechnica.com/information-technology/2009/12/how-to-get-set-with-a-secure-sertificate-for-free/). Alternatively, you may use [Let’s Encrypt](https://letsencrypt.org) to acquire a free SSL certificate and follow the steps in [Using Let’s Encrypt](#using-lets-encrypt) to set up a public server. ### Running a public notebook server[](#running-a-public-notebook-server) If you want to access your notebook server remotely via a web browser, you can do so by running a public notebook server. For optimal security when running a public notebook server, you should first secure the server with a password and SSL/HTTPS as described in [Securing a notebook server](#notebook-server-security). Start by creating a certificate file and a hashed password, as explained in [Securing a notebook server](#notebook-server-security). If you don’t already have one, create a config file for the notebook using the following command line: ``` $ jupyter notebook --generate-config ``` In the `~/.jupyter` directory, edit the notebook config file, `jupyter_notebook_config.py`. By default, the notebook config file has all fields commented out. 
The minimum set of configuration options that you should uncomment and edit in `jupyter_notebook_config.py` is the following: ``` # Set options for certfile, ip, password, and toggle off # browser auto-opening c.NotebookApp.certfile = u'/absolute/path/to/your/certificate/mycert.pem' c.NotebookApp.keyfile = u'/absolute/path/to/your/certificate/mykey.key' # Set ip to '*' to bind on all interfaces (ips) for the public server c.NotebookApp.ip = '*' c.NotebookApp.password = u'sha1:bcd259ccf...<your hashed password here>' c.NotebookApp.open_browser = False # It is a good idea to set a known, fixed port for server access c.NotebookApp.port = 9999 ``` You can then start the notebook using the `jupyter notebook` command. #### Using Let’s Encrypt[](#using-let-s-encrypt) [Let’s Encrypt](https://letsencrypt.org) provides free SSL/TLS certificates. You can also set up a public server using a [Let’s Encrypt](https://letsencrypt.org) certificate. Running a public notebook server with a Let’s Encrypt certificate is similar to the setup described in [Running a public notebook server](#notebook-public-server), with a few configuration changes. Here are the steps: 1. Create a [Let’s Encrypt certificate](https://letsencrypt.org/getting-started/). 2. Create a hashed password, as described in [Preparing a hashed password](#hashed-pw). 3. If you don’t already have a config file for the notebook, create one using the following command: ``` $ jupyter notebook --generate-config ``` 4. In the `~/.jupyter` directory, edit the notebook config file, `jupyter_notebook_config.py`. By default, the notebook config file has all fields commented out. The minimum set of configuration options that you should uncomment and edit in `jupyter_notebook_config.py` is the following: ``` # Set options for certfile, ip, password, and toggle off # browser auto-opening c.NotebookApp.certfile = u'/absolute/path/to/your/certificate/fullchain.pem' c.NotebookApp.keyfile = u'/absolute/path/to/your/certificate/privkey.pem' # Set ip to '*' to bind on all interfaces (ips) for the public server c.NotebookApp.ip = '*' c.NotebookApp.password = u'sha1:bcd259ccf...<your hashed password here>' c.NotebookApp.open_browser = False # It is a good idea to set a known, fixed port for server access c.NotebookApp.port = 9999 ``` You can then start the notebook using the `jupyter notebook` command. Important **Use ‘https’.** Keep in mind that when you enable SSL support, you must access the notebook server over `https://`, not over plain `http://`. The startup message from the server prints a reminder in the console, but *it is easy to overlook this detail and think the server is for some reason non-responsive*. **When using SSL, always access the notebook server with ‘https://’.** You may now access the public server by pointing your browser to `https://your.host.com:9999` where `your.host.com` is your public server’s domain. #### Firewall Setup[](#firewall-setup) To function correctly, the firewall on the computer running the jupyter notebook server must be configured to allow connections from client machines to the access port `c.NotebookApp.port` set in `jupyter_notebook_config.py`, so that clients can reach the web interface. The firewall must also allow connections from 127.0.0.1 (localhost) on ports from 49152 to 65535. These ports are used by the server to communicate with the notebook kernels. The kernel communication ports are chosen randomly by ZeroMQ, and may require multiple connections per kernel, so a large range of ports must be accessible. 
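For a quick sanity check that the firewall actually accepts connections on the access port, you can attempt a plain TCP connection from a client machine. The sketch below is illustrative only; it assumes the public-server example above, and `your.host.com` and `9999` are placeholders you would substitute with your own host name and `c.NotebookApp.port` value.

```
import socket

# Hypothetical values taken from the public-server example above;
# replace them with your own host name and configured port.
HOST = "your.host.com"
PORT = 9999

try:
    # A successful connection only shows the port is reachable through
    # the firewall; it says nothing about TLS or authentication.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"Port {PORT} on {HOST} is reachable.")
except OSError as exc:
    print(f"Could not reach {HOST}:{PORT} -> {exc}")
```

If the connection fails while the server is known to be running, the firewall rule for the access port is the first thing to re-check.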
### Running the notebook with a customized URL prefix[](#running-the-notebook-with-a-customized-url-prefix) The notebook dashboard, which is the landing page with an overview of the notebooks in your working directory, is typically found and accessed at the default URL `http://localhost:8888/`. If you prefer to customize the URL prefix for the notebook dashboard, you can do so by modifying `jupyter_notebook_config.py`. For example, if you prefer that the notebook dashboard be located within a sub-directory that contains other ipython files, e.g. `http://localhost:8888/ipython/`, you can do so with configuration options like the following (see above for instructions about modifying `jupyter_notebook_config.py`): ``` c.NotebookApp.base_url = '/ipython/' ``` ### Embedding the notebook in another website[](#embedding-the-notebook-in-another-website) Sometimes you may want to embed the notebook somewhere on your website, e.g. in an IFrame. To do this, you may need to override the Content-Security-Policy to allow embedding. Assuming your website is at https://mywebsite.example.com, you can embed the notebook on your website with the following configuration setting in `jupyter_notebook_config.py`: ``` c.NotebookApp.tornado_settings = { 'headers': { 'Content-Security-Policy': "frame-ancestors https://mywebsite.example.com 'self' " } } ``` When embedding the notebook in a website using an iframe, consider putting the notebook in single-tab mode. Since the notebook opens some links in new tabs by default, single-tab mode keeps the notebook from opening additional tabs. Adding the following to `~/.jupyter/custom/custom.js` will enable single-tab mode: ``` define(['base/js/namespace'], function(Jupyter){ Jupyter._target = '_self'; }); ``` ### Using a gateway server for kernel management[](#using-a-gateway-server-for-kernel-management) You are now able to redirect the management of your kernels to a Gateway Server (e.g., [Jupyter Kernel Gateway](https://jupyter-kernel-gateway.readthedocs.io/en/latest/) or [Jupyter Enterprise Gateway](https://jupyter-enterprise-gateway.readthedocs.io/en/latest/)) simply by specifying a Gateway url via the following command-line option: > ``` > $ jupyter notebook --gateway-url=http://my-gateway-server:8888 > ``` via the environment variable: > ``` > JUPYTER_GATEWAY_URL=http://my-gateway-server:8888 > ``` or in `jupyter_notebook_config.py`: > ``` > c.GatewayClient.url = 'http://my-gateway-server:8888' > ``` When provided, all kernel specifications will be retrieved from the specified Gateway server and all kernels will be managed by that server. This option makes it possible to target kernel processes on managed clusters while keeping the notebook’s own management local to the Notebook server. ### Known issues[](#known-issues) #### Proxies[](#proxies) When behind a proxy, especially if your system or browser is set to autodetect the proxy, the notebook web application might fail to connect to the server’s websockets, and present you with a warning at startup. In this case, you need to configure your system not to use the proxy for the server’s address. For example, in Firefox, go to the Preferences panel, Advanced section, Network tab, click ‘Settings…’, and add the address of the notebook server to the ‘No proxy for’ field. 
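On systems where the proxy is picked up from environment variables rather than browser settings, a quick inspection of those variables can help diagnose the same symptom for programmatic clients. The following is a small illustrative sketch, not part of the notebook server itself; it only reads the conventional `HTTP_PROXY`/`HTTPS_PROXY`/`NO_PROXY` variables.

```
import os

# Conventional proxy environment variables; tools differ in which
# casing they honour, so check both.
for name in ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY",
             "http_proxy", "https_proxy", "no_proxy"):
    value = os.environ.get(name)
    if value:
        print(f"{name} = {value}")

# If a proxy is configured, make sure the notebook server's address
# (e.g. localhost / 127.0.0.1) appears in NO_PROXY so that clients
# bypass the proxy when talking to the server.
```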
#### Content-Security-Policy (CSP)[](#content-security-policy-csp) Certain [security guidelines](https://infosec.mozilla.org/guidelines/web_security.html#content-security-policy) recommend that servers use a Content-Security-Policy (CSP) header to prevent cross-site scripting vulnerabilities, specifically limiting to `default-src https:` when possible. This directive causes two problems with Jupyter. First, it disables execution of inline javascript code, which is used extensively by Jupyter. Second, it limits communication to the https scheme, and prevents WebSockets from working because they communicate via the wss scheme (or ws for insecure communication). Jupyter uses WebSockets for interacting with kernels, so when you visit a server with such a CSP, your browser will block attempts to use wss, which will cause you to see “Connection failed” messages from jupyter notebooks, or simply no response from jupyter terminals. By looking in your browser’s javascript console, you can see any error messages that will explain what is failing. To avoid these problems, you need to add `'unsafe-inline'` and `connect-src https: wss:` to your CSP header, at least for pages served by jupyter. (That is, you can leave your CSP unchanged for other parts of your website.) Note that multiple CSP headers are allowed, but successive CSP headers can only restrict the policy; they cannot loosen it. For example, if your server sends both of these headers: > Content-Security-Policy “default-src https: ‘unsafe-inline’” > Content-Security-Policy “connect-src https: wss:” the first policy will already eliminate wss connections, so the second has no effect. Therefore, you can’t simply add the second header; you have to actually modify your CSP header to look more like this: > Content-Security-Policy “default-src https: ‘unsafe-inline’; connect-src https: wss:” #### Docker CMD[](#docker-cmd) Using `jupyter notebook` as a [Docker CMD](https://docs.docker.com/engine/reference/builder/#cmd) results in kernels repeatedly crashing, likely due to a lack of [PID reaping](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/). To avoid this, use the [tini](https://github.com/krallin/tini) `init` as your Dockerfile ENTRYPOINT: ``` # Add Tini. Tini operates as a process subreaper for jupyter. This prevents # kernel crashes. ENV TINI_VERSION v0.6.0 ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /usr/bin/tini RUN chmod +x /usr/bin/tini ENTRYPOINT ["/usr/bin/tini", "--"] EXPOSE 8888 CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0"] ``` Security in the Jupyter notebook server[](#security-in-the-jupyter-notebook-server) --- Since access to the Jupyter notebook server means access to running arbitrary code, it is important to restrict access to the notebook server. For this reason, notebook 4.3 introduces token-based authentication that is **on by default**. Note If you enable a password for your notebook server, token authentication is not enabled by default, and the behavior of the notebook server is unchanged from versions earlier than 4.3. When token authentication is enabled, the notebook uses a token to authenticate requests. This token can be provided to log in to the notebook server in three ways: * In the `Authorization` header, e.g.: ``` Authorization: token abcdef... ``` * In a URL parameter, e.g.: ``` https://my-notebook/tree/?token=abcdef... ``` * In the password field of the login form that will be shown to you if you are not logged in. 
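As an illustration of the first option, a programmatic client can authenticate REST calls by sending the token in the `Authorization` header. This is a minimal sketch: it assumes the third-party `requests` package is installed and that a server is running locally on port 8888; the base URL and the token value are placeholders you would replace with the ones printed in your server's startup log.

```
import requests  # third-party; pip install requests

BASE = "http://localhost:8888"   # assumed local server
TOKEN = "abcdef..."              # placeholder: copy the token from the server log

# List the contents of the notebook directory via the REST API,
# authenticating with the Authorization header shown above.
resp = requests.get(f"{BASE}/api/contents",
                    headers={"Authorization": f"token {TOKEN}"})
resp.raise_for_status()
for entry in resp.json().get("content", []):
    print(entry["type"], entry["path"])
```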
When you start a notebook server with token authentication enabled (default), a token is generated to use for authentication. This token is logged to the terminal, so that you can copy/paste the URL into your browser: ``` [I 11:59:16.597 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/?token=c8de56fa4deed24899803e93c227592aef6538f93025fe01 ``` If the notebook server is going to open your browser automatically (the default, unless `--no-browser` has been passed), an *additional* token is generated for launching the browser. This additional token can be used only once, and is used to set a cookie for your browser once it connects. After your browser has made its first request with this one-time-token, the token is discarded and a cookie is set in your browser. At any later time, you can see the tokens and URLs for all of your running servers with **jupyter notebook list**: ``` $ jupyter notebook list Currently running servers: http://localhost:8888/?token=abc... :: /home/you/notebooks https://0.0.0.0:9999/?token=123... :: /tmp/public http://localhost:8889/ :: /tmp/has-password ``` For servers with token-authentication enabled, the URL in the above listing will include the token, so you can copy and paste that URL into your browser to login. If a server has no token (e.g. it has a password or has authentication disabled), the URL will not include the token argument. Once you have visited this URL, a cookie will be set in your browser and you won’t need to use the token again, unless you switch browsers, clear your cookies, or start a notebook server on a new port. ### Alternatives to token authentication[](#alternatives-to-token-authentication) If a generated token doesn’t work well for you, you can set a password for your notebook. **jupyter notebook password** will prompt you for a password, and store the hashed password in your `jupyter_notebook_config.json`. New in version 5.0: **jupyter notebook password** command is added. It is possible to disable authentication altogether by setting the token and password to empty strings, but this is **NOT RECOMMENDED**, unless authentication or access restrictions are handled at a different layer in your web application: ``` c.NotebookApp.token = '' c.NotebookApp.password = '' ``` Security in notebook documents[](#security-in-notebook-documents) --- As Jupyter notebooks become more popular for sharing and collaboration, the potential for malicious people to attempt to exploit the notebook for their nefarious purposes increases. IPython 2.0 introduced a security model to prevent execution of untrusted code without explicit user input. ### The problem[](#the-problem) The whole point of Jupyter is arbitrary code execution. We have no desire to limit what can be done with a notebook, which would negatively impact its utility. Unlike other programs, a Jupyter notebook document includes output. Unlike other documents, that output exists in a context that can execute code (via Javascript). The security problem we need to solve is that no code should execute just because a user has **opened** a notebook that **they did not write**. Like any other program, once a user decides to execute code in a notebook, it is considered trusted, and should be allowed to do anything. 
### Our security model[](#our-security-model) * Untrusted HTML is always sanitized * Untrusted Javascript is never executed * HTML and Javascript in Markdown cells are never trusted * **Outputs** generated by the user are trusted * Any other HTML or Javascript (in Markdown cells, output generated by others) is never trusted * The central question of trust is “Did the current user do this?” ### The details of trust[](#the-details-of-trust) When a notebook is executed and saved, a signature is computed from a digest of the notebook’s contents plus a secret key. This is stored in a database, writable only by the current user. By default, this is located at: ``` ~/.local/share/jupyter/nbsignatures.db # Linux ~/Library/Jupyter/nbsignatures.db # OS X %APPDATA%/jupyter/nbsignatures.db # Windows ``` Each signature represents a series of outputs which were produced by code the current user executed, and are therefore trusted. When you open a notebook, the server computes its signature, and checks if it’s in the database. If a match is found, HTML and Javascript output in the notebook will be trusted at load, otherwise it will be untrusted. Any output generated during an interactive session is trusted. #### Updating trust[](#updating-trust) A notebook’s trust is updated when the notebook is saved. If there are any untrusted outputs still in the notebook, the notebook will not be trusted, and no signature will be stored. If all untrusted outputs have been removed (either via `Clear Output` or re-execution), then the notebook will become trusted. While trust is updated per output, this is only for the duration of a single session. A newly loaded notebook file is either trusted or not in its entirety. #### Explicit trust[](#explicit-trust) Sometimes re-executing a notebook to generate trusted output is not an option, either because dependencies are unavailable, or it would take a long time. Users can explicitly trust a notebook in two ways: * At the command-line, with: ``` jupyter trust /path/to/notebook.ipynb ``` * After loading the untrusted notebook, with `File / Trust Notebook` These two methods simply load the notebook, compute a new signature, and add that signature to the user’s database. ### Reporting security issues[](#reporting-security-issues) If you find a security vulnerability in Jupyter, either a failure of the code to properly implement the model described here, or a failure of the model itself, please report it to [<EMAIL>@ipython.org](mailto:security%40ipython.org). If you prefer to encrypt your security reports, you can use [`this PGP public key`](_downloads/1d303a645f2505a8fd283826fafc9908/ipython_security.asc). ### Affected use cases[](#affected-use-cases) Some use cases that work in Jupyter 1.0 became less convenient in 2.0 as a result of the security changes. We do our best to minimize these annoyances, but security is always at odds with convenience. #### Javascript and CSS in Markdown cells[](#javascript-and-css-in-markdown-cells) While never officially supported, it had become common practice to put hidden Javascript or CSS styling in Markdown cells, so that they would not be visible on the page. Since Markdown cells are now sanitized (by [Google Caja](https://developers.google.com/caja)), all Javascript (including click event handlers, etc.) and CSS will be stripped. We plan to provide a mechanism for notebook themes, but in the meantime styling the notebook can only be done via either `custom.css` or CSS in HTML output. 
The latter only has an effect if the notebook is trusted, because otherwise the output will be sanitized just like Markdown. #### Collaboration[](#collaboration) When collaborating on a notebook, people probably want to see the outputs produced by their colleagues’ most recent executions. Since each collaborator’s key will differ, this will result in each share starting in an untrusted state. There are three basic approaches to this: * re-run notebooks when you get them (not always viable) * explicitly trust notebooks via `jupyter trust` or the notebook menu (annoying, but easy) * share a notebook signatures database, and use configuration dedicated to the collaboration while working on the project. To share a signatures database among users, you can configure: ``` c.NotebookNotary.data_dir = "/path/to/signature_dir" ``` to specify a non-default path to the SQLite database (of notebook hashes, essentially). We are aware that SQLite doesn’t work well on NFS and we are [working out better ways to do this](https://github.com/jupyter/notebook/issues/1782). Distributing Jupyter Extensions as Python Packages[](#Distributing-Jupyter-Extensions-as-Python-Packages) --- ### Overview[](#Overview) #### How can the notebook be extended?[](#How-can-the-notebook-be-extended?) The Jupyter Notebook client and server application are both deeply customizable. Their behavior can be extended by creating, respectively: * nbextension: a notebook extension + a single JS file, or directory of JavaScript, Cascading StyleSheets, etc. that contains at minimum a JavaScript module packaged as an [AMD module](https://en.wikipedia.org/wiki/Asynchronous_module_definition) that exports a function `load_ipython_extension` * server extension: an importable Python module + that implements `load_jupyter_server_extension` * bundler extension: an importable Python module that adds a generated File -> Download as / Deploy as menu item trigger + that implements `bundle` #### Why create a Python package for Jupyter extensions?[](#Why-create-a-Python-package-for-Jupyter-extensions?) Since it is rare to have a server extension that does not have any frontend components (an nbextension), for convenience and consistency, all these client and server extensions with their assets can be packaged and versioned together as a Python package with a few simple commands, or as of Notebook 5.3, handled automatically by your package manager of choice. This makes installing the package of extensions easier and less error-prone for the user. ### Installation of Jupyter Extensions[](#Installation-of-Jupyter-Extensions) #### Install a Python package containing Jupyter Extensions[](#Install-a-Python-package-containing-Jupyter-Extensions) There are several ways that you may get a Python package containing Jupyter Extensions. Commonly, you will use a package manager for your system: ``` pip install helpful_package # or conda install helpful_package # or apt-get install helpful_package # where 'helpful_package' is a Python package containing one or more Jupyter Extensions ``` #### Automatic installation and Enabling[](#Automatic-installation-and-Enabling) > New in Notebook 5.3 The absolute simplest case requires no user interaction at all! Configured correctly, after installing with their package manager of choice, both server and frontend extensions can be enabled by default in the environment where they were installed, i.e. `--sys-prefix`. See the `setup.py` in the example below. 
#### Enable a Server Extension[](#Enable-a-Server-Extension) The simplest case would be to enable a server extension which has no frontend components. A `pip` user that wants their configuration stored in their home directory would type the following command: ``` jupyter serverextension enable --py helpful_package ``` Alternatively, a `virtualenv` or `conda` user can pass `--sys-prefix`, which keeps their environment isolated and reproducible. For example: ``` # Make sure that your virtualenv or conda environment is activated [source] activate my-environment jupyter serverextension enable --py helpful_package --sys-prefix ``` #### Install the nbextension assets[](#Install-the-nbextension-assets) If a package also has an nbextension with frontend assets that must be available (but not necessarily enabled by default), install these assets with the following command: ``` jupyter nbextension install --py helpful_package # or --sys-prefix if using virtualenv or conda ``` #### Enable nbextension assets[](#Enable-nbextension-assets) If a package has assets that should be loaded every time a Jupyter app (e.g. lab, notebook, dashboard, terminal) is loaded in the browser, the following command can be used to enable the nbextension: ``` jupyter nbextension enable --py helpful_package # or --sys-prefix if using virtualenv or conda ``` ### Did it work? Check by listing Jupyter Extensions.[](#Did-it-work?-Check-by-listing-Jupyter-Extensions.) After running one or more extension installation steps, you can list what is presently known about nbextensions, server extensions, or bundler extensions. The following commands will list which extensions are available, whether they are enabled, and other extension details: ``` jupyter nbextension list jupyter serverextension list jupyter bundlerextension list ``` ### Additional resources on creating and distributing packages[](#Additional-resources-on-creating-and-distributing-packages) > Of course, in addition to the files listed, there are a number of other files one needs to build a proper package. Here are some good resources: - [The Hitchhiker’s Guide to Packaging](https://the-hitchhikers-guide-to-packaging.readthedocs.io/en/latest/quickstart.html) - [Structure of the Repository](https://docs.python-guide.org/writing/structure/) by <NAME> and Real Python > How you distribute them, too, is important: - [Packaging and Distributing Projects](https://python-packaging-user-guide.readthedocs.io/tutorials/distributing-packages/) - [conda: Building packages](https://conda.io/projects/conda-build/en/latest/user-guide/tutorials/building-conda-packages.html) ### Example - Server extension[](#Example---Server-extension) #### Creating a Python package with a server extension[](#Creating-a-Python-package-with-a-server-extension) Here is an example of a Python module which contains a server extension directly within it. It has this directory structure: ``` - setup.py - MANIFEST.in - my_module/ - __init__.py ``` #### Defining the server extension[](#Defining-the-server-extension) This example shows that the server extension and its `load_jupyter_server_extension` function are defined in the `__init__.py` file. 
##### `my_module/__init__.py`[](#my_module/__init__.py) ``` def _jupyter_server_extension_paths(): return [{ "module": "my_module" }] def load_jupyter_server_extension(nbapp): nbapp.log.info("my module enabled!") ``` #### Install and enable the server extension[](#Install-and-enable-the-server-extension) Which a user can install with: ``` jupyter serverextension enable --py my_module [--sys-prefix] ``` ### Example - Server extension and nbextension[](#Example---Server-extension-and-nbextension) #### Creating a Python package with a server extension and nbextension[](#Creating-a-Python-package-with-a-server-extension-and-nbextension) Here is another server extension, with a front-end module. It assumes this directory structure: ``` - setup.py - MANIFEST.in - my_fancy_module/ - __init__.py - static/ index.js ``` #### Defining the server extension and nbextension[](#Defining-the-server-extension-and-nbextension) This example again shows that the server extension and its `load_jupyter_server_extension` function are defined in the `__init__.py` file. This time, there is also a function `_jupyter_nbextension_paths` for the nbextension. ##### `my_fancy_module/__init__.py`[](#my_fancy_module/__init__.py) ``` def _jupyter_server_extension_paths(): return [{ "module": "my_fancy_module" }] # Jupyter Extension points def _jupyter_nbextension_paths(): return [dict( section="notebook", # the path is relative to the `my_fancy_module` directory src="static", # directory in the `nbextension/` namespace dest="my_fancy_module", # _also_ in the `nbextension/` namespace require="my_fancy_module/index")] def load_jupyter_server_extension(nbapp): nbapp.log.info("my module enabled!") ``` #### Install and enable the server extension and nbextension[](#Install-and-enable-the-server-extension-and-nbextension) The user can install and enable the extensions with the following set of commands: ``` jupyter nbextension install --py my_fancy_module [--sys-prefix|--user] jupyter nbextension enable --py my_fancy_module [--sys-prefix|--system] jupyter serverextension enable --py my_fancy_module [--sys-prefix|--system] ``` #### Automatically enabling a server extension and nbextension[](#Automatically-enabling-a-server-extension-and-nbextension) > New in Notebook 5.3 Server extensions and nbextensions can be installed and enabled without any user intervention or post-install scripts beyond `<package manager> install <extension package name>` In addition to the `my_fancy_module` file tree, assume: ``` jupyter-config/ ├── jupyter_notebook_config.d/ │   └── my_fancy_module.json └── nbconfig/ └── notebook.d/ └── my_fancy_module.json ``` ##### `jupyter-config/jupyter_notebook_config.d/my_fancy_module.json`[](#jupyter-config/jupyter_notebook_config.d/my_fancy_module.json) ``` { "NotebookApp": { "nbserver_extensions": { "my_fancy_module": true } } } ``` ##### `jupyter-config/nbconfig/notebook.d/my_fancy_module.json`[](#jupyter-config/nbconfig/notebook.d/my_fancy_module.json) ``` { "load_extensions": { "my_fancy_module/index": true } } ``` Put all of them in place via: ##### `setup.py`[](#setup.py) ``` import setuptools setuptools.setup( name="MyFancyModule", ... 
include_package_data=True, data_files=[ # like `jupyter nbextension install --sys-prefix` ("share/jupyter/nbextensions/my_fancy_module", [ "my_fancy_module/static/index.js", ]), # like `jupyter nbextension enable --sys-prefix` ("etc/jupyter/nbconfig/notebook.d", [ "jupyter-config/nbconfig/notebook.d/my_fancy_module.json" ]), # like `jupyter serverextension enable --sys-prefix` ("etc/jupyter/jupyter_notebook_config.d", [ "jupyter-config/jupyter_notebook_config.d/my_fancy_module.json" ]) ], ... zip_safe=False ) ``` and last, but not least: ##### `MANIFEST.in`[](#MANIFEST.in) ``` recursive-include jupyter-config *.json recursive-include my_fancy_module/static *.js ``` As most package managers will only modify their environment, the eventual configuration will be as if the user had typed: ``` jupyter nbextension install --py my_fancy_module --sys-prefix jupyter nbextension enable --py my_fancy_module --sys-prefix jupyter serverextension enable --py my_fancy_module --sys-prefix ``` If a user manually `disable`s an extension, that configuration will override the bundled package configuration. ##### When automagical install fails[](#When-automagical-install-fails) Note this can still fail in certain situations with `pip`, requiring manual use of `install` and `enable` commands. Non-python-specific package managers (e.g. `conda`, `apt`) may choose not to implement the above behavior at the `setup.py` level, having more ways to put data files in various places at build time. ### Example - Bundler extension[](#Example---Bundler-extension) #### Creating a Python package with a bundlerextension[](#Creating-a-Python-package-with-a-bundlerextension) Here is a bundler extension that adds a *Download as -> Notebook Tarball (tar.gz)* option to the notebook *File* menu. It assumes this directory structure: ``` - setup.py - MANIFEST.in - my_tarball_bundler/ - __init__.py ``` #### Defining the bundler extension[](#Defining-the-bundler-extension) This example shows that the bundler extension and its `bundle` function are defined in the `__init__.py` file. ##### `my_tarball_bundler/__init__.py`[](#my_tarball_bundler/__init__.py) ``` import tarfile import io import os import nbformat def _jupyter_bundlerextension_paths(): """Declare bundler extensions provided by this package.""" return [{ # unique bundler name "name": "tarball_bundler", # module containing bundle function "module_name": "my_tarball_bundler", # human-readable menu item label "label" : "Notebook Tarball (tar.gz)", # group under 'deploy' or 'download' menu "group" : "download", }] def bundle(handler, model): """Create a compressed tarball containing the notebook document. 
Parameters --- handler : tornado.web.RequestHandler Handler that serviced the bundle request model : dict Notebook model from the configured ContentManager """ notebook_filename = model['name'] notebook_content = nbformat.writes(model['content']).encode('utf-8') notebook_name = os.path.splitext(notebook_filename)[0] tar_filename = '{}.tar.gz'.format(notebook_name) info = tarfile.TarInfo(notebook_filename) info.size = len(notebook_content) with io.BytesIO() as tar_buffer: with tarfile.open(tar_filename, "w:gz", fileobj=tar_buffer) as tar: tar.addfile(info, io.BytesIO(notebook_content)) # Set headers to trigger browser download handler.set_header('Content-Disposition', 'attachment; filename="{}"'.format(tar_filename)) handler.set_header('Content-Type', 'application/gzip') # Return the buffer value as the response handler.finish(tar_buffer.getvalue()) ``` See [Extending the Notebook](index.html#document-extending/index) for more documentation about writing nbextensions, server extensions, and bundler extensions. Extending the Notebook[](#extending-the-notebook) --- Certain subsystems of the notebook server are designed to be extended or overridden by users. These documents explain these systems, and show how to override the notebook’s defaults with your own custom behavior. ### Contents API[](#contents-api) The Jupyter Notebook web application provides a graphical interface for creating, opening, renaming, and deleting files in a virtual filesystem. The `ContentsManager` class defines an abstract API for translating these interactions into operations on a particular storage medium. The default implementation, `FileContentsManager`, uses the local filesystem of the server for storage and straightforwardly serializes notebooks into JSON. Users can override these behaviors by supplying custom subclasses of ContentsManager. This section describes the interface implemented by ContentsManager subclasses. We refer to this interface as the **Contents API**. #### Data Model[](#data-model) ##### Filesystem Entities[](#filesystem-entities) ContentsManager methods represent virtual filesystem entities as dictionaries, which we refer to as **models**. Models may contain the following entries: | Key | Type | Info | | --- | --- | --- | | **name** | unicode | Basename of the entity. | | **path** | unicode | Full ([API-style](#apipaths)) path to the entity. | | **type** | unicode | The entity type. One of `"notebook"`, `"file"` or `"directory"`. | | **created** | datetime | Creation date of the entity. | | **last_modified** | datetime | Last modified date of the entity. | | **content** | variable | The “content” of the entity. ([See Below](#modelcontent)) | | **mimetype** | unicode or `None` | The mimetype of `content`, if any. ([See Below](#modelcontent)) | | **format** | unicode or `None` | The format of `content`, if any. ([See Below](#modelcontent)) | Certain model fields vary in structure depending on the `type` field of the model. There are three model types: **notebook**, **file**, and **directory**. * `notebook` models + The `format` field is always `"json"`. + The `mimetype` field is always `None`. + The `content` field contains a [`nbformat.notebooknode.NotebookNode`](https://nbformat.readthedocs.io/en/latest/api.html#nbformat.NotebookNode) representing the .ipynb file represented by the model. See the [NBFormat](https://nbformat.readthedocs.io/en/latest/index.html) documentation for a full description. * `file` models + The `format` field is either `"text"` or `"base64"`. 
+ The `mimetype` field can be any mimetype string, but defaults to `text/plain` for text-format models and `application/octet-stream` for base64-format models. For files with unknown mime types (e.g. unknown file extensions), this field may be None. + The `content` field is always of type `unicode`. For text-format file models, `content` simply contains the file’s bytes after decoding as UTF-8. Non-text (`base64`) files are read as bytes, base64 encoded, and then decoded as UTF-8. * `directory` models + The `format` field is always `"json"`. + The `mimetype` field is always `None`. + The `content` field contains a list of [content-free](#contentfree) models representing the entities in the directory. Note In certain circumstances, we don’t need the full content of an entity to complete a Contents API request. In such cases, we omit the `content` and `format` keys from the model. The default value for the `mimetype` field might also not be evaluated, in which case it will be set to None. This reduced reply most commonly occurs when listing a directory, in which circumstance we represent files within the directory as content-less models to avoid having to recursively traverse and serialize the entire filesystem. **Sample Models** ``` # Notebook Model with Content { 'content': { 'metadata': {}, 'nbformat': 4, 'nbformat_minor': 0, 'cells': [ { 'cell_type': 'markdown', 'metadata': {}, 'source': 'Some **Markdown**', }, ], }, 'created': datetime(2015, 7, 25, 19, 50, 19, 19865), 'format': 'json', 'last_modified': datetime(2015, 7, 25, 19, 50, 19, 19865), 'mimetype': None, 'name': 'a.ipynb', 'path': 'foo/a.ipynb', 'type': 'notebook', 'writable': True, } # Notebook Model without Content { 'content': None, 'created': datetime.datetime(2015, 7, 25, 20, 17, 33, 271931), 'format': None, 'last_modified': datetime.datetime(2015, 7, 25, 20, 17, 33, 271931), 'mimetype': None, 'name': 'a.ipynb', 'path': 'foo/a.ipynb', 'type': 'notebook', 'writable': True } ``` ##### API Paths[](#api-paths) ContentsManager methods represent the locations of filesystem resources as **API-style paths**. Such paths are interpreted as relative to the root directory of the notebook server. For compatibility across systems, the following guarantees are made: * Paths are always `unicode`, not `bytes`. * Paths are not URL-escaped. * Paths are always forward-slash (/) delimited, even on Windows. * Leading and trailing slashes are stripped. For example, `/foo/bar/buzz/` becomes `foo/bar/buzz`. * The empty string (`""`) represents the root directory. #### Writing a Custom ContentsManager[](#writing-a-custom-contentsmanager) The default ContentsManager is designed for users running the notebook as an application on a personal computer. It stores notebooks as .ipynb files on the local filesystem, and it maps files and directories in the Notebook UI to files and directories on disk. It is possible to override how notebooks are stored by implementing your own custom subclass of `ContentsManager`. For example, if you deploy the notebook in a context where you don’t trust or don’t have access to the filesystem of the notebook server, it’s possible to write your own ContentsManager that stores notebooks and files in a database. ##### Required Methods[](#required-methods) A minimal complete implementation of a custom `ContentsManager` must implement the following methods: | `ContentsManager.get`(path[, content, type, ...]) | Get a file or directory model.
| | `ContentsManager.save`(model, path) | Save a file or directory model to path. | | `ContentsManager.delete_file`(path) | Delete the file or directory at path. | | `ContentsManager.rename_file`(old_path, new_path) | Rename a file or directory. | | `ContentsManager.file_exists`([path]) | Does a file exist at the given path? | | `ContentsManager.dir_exists`(path) | Does a directory exist at the given path? | | `ContentsManager.is_hidden`(path) | Is path a hidden directory or file? | You may be required to specify a Checkpoints object, as the default one, `FileCheckpoints`, could be incompatible with your custom ContentsManager. ##### Chunked Saving[](#chunked-saving) The contents API allows for “chunked” saving of files, i.e. saving/transmitting in partial pieces: * This can only be used when the `type` of the model is `file`. * The model should be as otherwise expected for `save()`, with an added field `chunk`. * The value of `chunk` should be an integer starting at `1`, and incrementing for each subsequent chunk, except for the final chunk, which should be indicated with a value of `-1`. * The model returned from using `save()` with `chunk` should be treated as unreliable for all chunks except the final one. * Any interaction with a file being saved in a chunked manner is unreliable until the final chunk has been saved. This includes directory listings. #### Customizing Checkpoints[](#customizing-checkpoints) Customized Checkpoint definitions allows behavior to be altered and extended. The `Checkpoints` and `GenericCheckpointsMixin` classes (from `notebook.services.contents.checkpoints`) have reusable code and are intended to be used together, but require the following methods to be implemented. | `Checkpoints.rename_checkpoint`(checkpoint_id, ...) | Rename a single checkpoint from old_path to new_path. | | `Checkpoints.list_checkpoints`(path) | Return a list of checkpoints for a given file | | `Checkpoints.delete_checkpoint`(checkpoint_id, ...) | delete a checkpoint for a file | | `GenericCheckpointsMixin.create_file_checkpoint`(...) | Create a checkpoint of the current state of a file | | `GenericCheckpointsMixin.create_notebook_checkpoint`(nb, ...) | Create a checkpoint of the current state of a file | | `GenericCheckpointsMixin.get_file_checkpoint`(...) | Get the content of a checkpoint for a non-notebook file. | | `GenericCheckpointsMixin.get_notebook_checkpoint`(...) | Get the content of a checkpoint for a notebook. | ##### No-op example[](#no-op-example) Here is an example of a no-op checkpoints object - note the mixin comes first. The docstrings indicate what each method should do or return for a more complete implementation. 
``` class NoOpCheckpoints(GenericCheckpointsMixin, Checkpoints): """requires the following methods:""" def create_file_checkpoint(self, content, format, path): """ -> checkpoint model""" def create_notebook_checkpoint(self, nb, path): """ -> checkpoint model""" def get_file_checkpoint(self, checkpoint_id, path): """ -> {'type': 'file', 'content': <str>, 'format': {'text', 'base64'}}""" def get_notebook_checkpoint(self, checkpoint_id, path): """ -> {'type': 'notebook', 'content': <output of nbformat.read>}""" def delete_checkpoint(self, checkpoint_id, path): """deletes a checkpoint for a file""" def list_checkpoints(self, path): """returns a list of checkpoint models for a given file, default just does one per file """ return [] def rename_checkpoint(self, checkpoint_id, old_path, new_path): """renames checkpoint from old path to new path""" ``` See `GenericFileCheckpoints` in `notebook.services.contents.filecheckpoints` for a more complete example. #### Testing[](#testing) `notebook.services.contents.tests` includes several test suites written against the abstract Contents API. This means that an excellent way to test a new ContentsManager subclass is to subclass our tests to make them use your ContentsManager. Note [PGContents](https://github.com/quantopian/pgcontents) is an example of a complete implementation of a custom `ContentsManager`. It stores notebooks and files in [PostgreSQL](https://www.postgresql.org/) and encodes directories as SQL relations. PGContents also provides an example of how to re-use the notebook’s tests. ### File save hooks[](#file-save-hooks) You can configure functions that are run whenever a file is saved. There are two hooks available: * `ContentsManager.pre_save_hook` runs on the API path and model with content. This can be used for things like stripping output that people don’t like adding to VCS noise. * `FileContentsManager.post_save_hook` runs on the filesystem path and model without content. This could be used to commit changes after every save, for instance. They are both called with keyword arguments: ``` pre_save_hook(model=model, path=path, contents_manager=cm) post_save_hook(model=model, os_path=os_path, contents_manager=cm) ``` #### Examples[](#examples) These can both be added to `jupyter_notebook_config.py`. 
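To illustrate the call signature first, here is a minimal, purely illustrative pre-save hook (the name `log_pre_save` is arbitrary) that only logs which notebook is about to be written; the fuller examples follow.

```
# in jupyter_notebook_config.py -- a minimal, illustrative pre-save hook
def log_pre_save(model, path, contents_manager, **kwargs):
    """Log every notebook save to the server console."""
    if model['type'] != 'notebook':
        return  # only notebooks are of interest here
    contents_manager.log.info("saving notebook at %s", path)

c.FileContentsManager.pre_save_hook = log_pre_save
```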
A pre-save hook for stripping output: ``` def scrub_output_pre_save(model, **kwargs): """scrub output before saving notebooks""" # only run on notebooks if model['type'] != 'notebook': return # only run on nbformat v4 if model['content']['nbformat'] != 4: return for cell in model['content']['cells']: if cell['cell_type'] != 'code': continue cell['outputs'] = [] cell['execution_count'] = None c.FileContentsManager.pre_save_hook = scrub_output_pre_save ``` A post-save hook to make a script equivalent whenever the notebook is saved (replacing the `--script` option in older versions of the notebook): ``` import io import os from notebook.utils import to_api_path _script_exporter = None def script_post_save(model, os_path, contents_manager, **kwargs): """convert notebooks to Python script after save with nbconvert replaces `jupyter notebook --script` """ from nbconvert.exporters.script import ScriptExporter if model['type'] != 'notebook': return global _script_exporter if _script_exporter is None: _script_exporter = ScriptExporter(parent=contents_manager) log = contents_manager.log base, ext = os.path.splitext(os_path) script, resources = _script_exporter.from_filename(os_path) script_fname = base + resources.get('output_extension', '.txt') log.info("Saving script /%s", to_api_path(script_fname, contents_manager.root_dir)) with io.open(script_fname, 'w', encoding='utf-8') as f: f.write(script) c.FileContentsManager.post_save_hook = script_post_save ``` This could be a simple call to `jupyter nbconvert --to script`, but spawning the subprocess every time is quite slow. ### Custom request handlers[](#custom-request-handlers) The notebook webserver can be interacted with using a well [defined RESTful API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/master/notebook/services/api/api.yaml). You can define custom RESTful API handlers in addition to the ones provided by the notebook. As described below, to define a custom handler you need to first write a notebook server extension. Then, in the extension, you can register the custom handler. #### Writing a notebook server extension[](#writing-a-notebook-server-extension) The notebook webserver is written in Python, hence your server extension should be written in Python too. Server extensions, like IPython extensions, are Python modules that define a specially named load function, `load_jupyter_server_extension`. This function is called when the extension is loaded. ``` def load_jupyter_server_extension(nb_server_app): """ Called when the extension is loaded. Args: nb_server_app (NotebookWebApplication): handle to the Notebook webserver instance. """ pass ``` To get the notebook server to load your custom extension, you’ll need to add it to the list of extensions to be loaded. You can do this using the config system. `NotebookApp.nbserver_extensions` is a config variable which is a dictionary of strings, each a Python module to be imported, mapping to `True` to enable or `False` to disable each extension. Because this variable is notebook config, you can set it two different ways, using config files or via the command line. For example, to get your extension to load via the command line add a double dash before the variable name, and put the Python dictionary in double quotes. If your package is “mypackage” and module is “mymodule”, this would look like `jupyter notebook --NotebookApp.nbserver_extensions="{'mypackage.mymodule':True}"` . Basically the string should be Python importable. 
Alternatively, you can have your extension loaded regardless of the command line args by setting the variable in the Jupyter config file. The default location of the Jupyter config file is `~/.jupyter/jupyter_notebook_config.py` (see [Configuration Overview](index.html#document-config_overview)). Inside the config file, you can use Python to set the variable. For example, the following config does the same as the previous command line example. ``` c = get_config() c.NotebookApp.nbserver_extensions = { 'mypackage.mymodule': True, } ``` Before continuing, it’s a good idea to verify that your extension is being loaded. Use a print statement to print something unique. Launch the notebook server and you should see your statement printed to the console. #### Registering custom handlers[](#registering-custom-handlers) Once you’ve defined a server extension, you can register custom handlers because you have a handle to the Notebook server app instance (`nb_server_app` above). However, you first need to define your custom handler. To declare a custom handler, inherit from `notebook.base.handlers.IPythonHandler`. The example below[1] is a Hello World handler: ``` from notebook.base.handlers import IPythonHandler class HelloWorldHandler(IPythonHandler): def get(self): self.finish('Hello, world!') ``` The Jupyter Notebook server use [Tornado](http://www.tornadoweb.org/en/stable/) as its web framework. For more information on how to implement request handlers, refer to the [Tornado documentation on the matter](http://www.tornadoweb.org/en/stable/web.html#request-handlers). After defining the handler, you need to register the handler with the Notebook server. See the following example: ``` web_app = nb_server_app.web_app host_pattern = '.*$' route_pattern = url_path_join(web_app.settings['base_url'], '/hello') web_app.add_handlers(host_pattern, [(route_pattern, HelloWorldHandler)]) ``` Putting this together with the extension code, the example looks like the following: ``` from notebook.utils import url_path_join from notebook.base.handlers import IPythonHandler class HelloWorldHandler(IPythonHandler): def get(self): self.finish('Hello, world!') def load_jupyter_server_extension(nb_server_app): """ Called when the extension is loaded. Args: nb_server_app (NotebookWebApplication): handle to the Notebook webserver instance. """ web_app = nb_server_app.web_app host_pattern = '.*$' route_pattern = url_path_join(web_app.settings['base_url'], '/hello') web_app.add_handlers(host_pattern, [(route_pattern, HelloWorldHandler)]) ``` ### Extra Parameters and authentication[](#extra-parameters-and-authentication) Here is a quick rundown of what you need to know to pass extra parameters to the handler and enable authentication: > * extra arguments to the `__init__` constructor are given in a dictionary after the handler class in `add_handlers`: ``` class HelloWorldHandler(IPythonHandler): def __init__(self, *args, **kwargs): self.extra = kwargs.pop('extra') ... def load_jupyter_server_extension(nb_server_app): ... web_app.add_handlers(host_pattern, [ (route_pattern, HelloWorldHandler, {"extra": nb_server_app.extra}) ]) ``` All handler methods that require authentication _MUST_ be decorated with `@tornado.web.authenticated`: ``` from tornado import web class HelloWorldHandler(IPythonHandler): ... @web.authenticated def get(self, *args, **kwargs): ... @web.authenticated def post(self, *args, **kwargs): ... ``` References: 1. 
[<NAME>’s Mindtrove](https://mindtrove.info/4-ways-to-extend-jupyter-notebook/#nb-server-exts) ### Custom front-end extensions[](#custom-front-end-extensions) This describes the basic steps to write a JavaScript extension for the Jupyter notebook front-end. This allows you to customize the behaviour of the various pages like the dashboard, the notebook, or the text editor. #### The structure of a front-end extension[](#the-structure-of-a-front-end-extension) Note The notebook front-end and Javascript API are not stable, and are subject to a lot of changes. Any extension written for the current notebook is almost guaranteed to break in the next release. A front-end extension is a JavaScript file that defines an [AMD module](https://en.wikipedia.org/wiki/Asynchronous_module_definition) which exposes at least a function called `load_ipython_extension`, which takes no arguments. We will not get into the details of what each of these terms consists of yet, but here is the minimal code needed for a working extension: ``` // file my_extension/main.js define(function(){ function load_ipython_extension(){ console.info('this is my first extension'); } return { load_ipython_extension: load_ipython_extension }; }); ``` Note Although for historical reasons the function is called `load_ipython_extension`, it does apply to the Jupyter notebook in general, and will work regardless of the kernel in use. If you are familiar with JavaScript, you can use this template to require any Jupyter module and modify its configuration, or do anything else in client-side Javascript. Your extension will be loaded at the right time during the notebook page initialisation for you to set up a listener for the various events that the page can trigger. You might want access to the current instances of the various Jupyter notebook components on the page, as opposed to the classes defined in the modules. The current instances are exposed by a module named `base/js/namespace`. If you plan on accessing instances on the page, you should `require` this module rather than accessing the global variable `Jupyter`, which will be removed in future. The following example demonstrates how to access the current notebook instance: ``` // file my_extension/main.js define([ 'base/js/namespace' ], function( Jupyter ) { function load_ipython_extension() { console.log( 'This is the current notebook application instance:', Jupyter.notebook ); } return { load_ipython_extension: load_ipython_extension }; }); ``` #### Modifying key bindings[](#modifying-key-bindings) One of the abilities of extensions is to modify key bindings, although once again this is an API which is not guaranteed to be stable. However, custom key bindings are frequently requested, and are helpful to increase accessibility, so in the following we show how to access them. Here is an example of an extension that will unbind the shortcut `0,0` in command mode, which normally restarts the kernel, and bind `0,0,0` in its place: ``` // file my_extension/main.js define([ 'base/js/namespace' ], function( Jupyter ) { function load_ipython_extension() { Jupyter.keyboard_manager.command_shortcuts.remove_shortcut('0,0'); Jupyter.keyboard_manager.command_shortcuts.add_shortcut('0,0,0', 'jupyter-notebook:restart-kernel'); } return { load_ipython_extension: load_ipython_extension }; }); ``` Note The standard keybindings might not work correctly on non-US keyboards. 
Unfortunately, this is a limitation of browser implementations and the status of keyboard event handling on the web in general. We appreciate your feedback if you have issues binding keys, or have any ideas to help improve the situation. You can see that I have used the **action name** `jupyter-notebook:restart-kernel` to bind the new shortcut. There is no API yet to access the list of all available *actions*, though the following in the JavaScript console of your browser on a notebook page should give you an idea of what is available: ``` Object.keys(require('base/js/namespace').actions._actions); ``` In this example, we changed a keyboard shortcut in **command mode**; you can also customize keyboard shortcuts in **edit mode**. However, most of the keyboard shortcuts in edit mode are handled by CodeMirror, which supports custom key bindings via a completely different API. #### Defining and registering your own actions[](#defining-and-registering-your-own-actions) As part of your front-end extension, you may wish to define actions, which can be attached to toolbar buttons, or called from the command palette. Here is an example of an extension that defines an (not very useful!) action to show an alert, and adds a toolbar button using the full action name: ``` // file my_extension/main.js define([ 'base/js/namespace' ], function( Jupyter ) { function load_ipython_extension() { var handler = function () { alert('this is an alert from my_extension!'); }; var action = { icon: 'fa-comment-o', // a font-awesome class used on buttons, etc help : 'Show an alert', help_index : 'zz', handler : handler }; var prefix = 'my_extension'; var action_name = 'show-alert'; var full_action_name = Jupyter.actions.register(action, action_name, prefix); // returns 'my_extension:show-alert' Jupyter.toolbar.add_buttons_group([full_action_name]); } return { load_ipython_extension: load_ipython_extension }; }); ``` Every action needs a name, which, when joined with its prefix to make the full action name, should be unique. Built-in actions, like the `jupyter-notebook:restart-kernel` we bound in the earlier [Modifying key bindings](#modifying-key-bindings) example, use the prefix `jupyter-notebook`. For actions defined in an extension, it makes sense to use the extension name as the prefix. For the action name, the following guidelines should be considered: * First pick a noun and a verb for the action. For example, if the action is “restart kernel,” the verb is “restart” and the noun is “kernel”. * Omit terms like “selected” and “active” by default, so “delete-cell”, rather than “delete-selected-cell”. Only provide a scope like “-all-” if it is other than the default “selected” or “active” scope. * If an action has a secondary action, separate the secondary action with “-and-”, so “restart-kernel-and-clear-output”. * Use above/below or previous/next to indicate spatial and sequential relationships. * Don’t ever use before/after as they have a temporal connotation that is confusing when used in a spatial context. * For dialogs, use a verb that indicates what the dialog will accomplish, such as “confirm-restart-kernel”. #### Installing and enabling extensions[](#installing-and-enabling-extensions) You can install your nbextension with the command: ``` jupyter nbextension install path/to/my_extension/ [--user|--sys-prefix] ``` The default installation is system-wide. You can use `--user` to do a per-user installation, or `--sys-prefix` to install to Python’s prefix (e.g. in a virtual or conda environment). 
Here, `my_extension` is the directory containing the JavaScript files. This will copy it to a Jupyter data directory (the exact location is platform dependent - see [jupyter_path](https://docs.jupyter.org/en/latest/use/jupyter-directories.html#jupyter-path)). For development, you can use the `--symlink` flag to symlink your extension rather than copying it, so there’s no need to reinstall after changes. To use your extension, you’ll also need to **enable** it, which tells the notebook interface to load it. You can do that with another command: ``` jupyter nbextension enable my_extension/main [--sys-prefix][--section='common'] ``` The argument refers to the JavaScript module containing your `load_ipython_extension` function, which is `my_extension/main.js` in this example. The `--section='common'` argument makes the extension load on all pages; by default it is loaded on the notebook view only. There is a corresponding `disable` command to stop using an extension without uninstalling it. Changed in version 4.2: Added `--sys-prefix` argument #### Kernel Specific extensions[](#kernel-specific-extensions) Warning This feature serves as a stopgap for kernel developers who need specific JavaScript injected onto the page. The availability and API are subject to change at any time. It is possible to load some JavaScript on the page on a per kernel basis. Be aware that doing so will make the browser page reload without warning as soon as the user switches the kernel. If you, a kernel developer, need a particular piece of JavaScript to be loaded on a “per kernel” basis, such as: * if you are developing a CodeMirror mode for your language * if you need to enable some specific debugging options your `kernelspecs` are allowed to contain a `kernel.js` file that defines an AMD module. The AMD module should define an onload function that will be called when the kernelspec loads, such as: * when you load a notebook that uses your kernelspec * when you change the active kernelspec of a notebook to your kernelspec. Note that adding a kernel.js to your kernelspec adds an unexpected side effect when changing kernels in the notebook. As it is impossible to “unload” JavaScript, any attempt to change the kernelspec again will save the current notebook and reload the page without confirmation. Here is an example of `kernel.js`: ``` define(function(){ return {onload: function(){ console.info('Kernel specific javascript loaded'); // do more things here, like define a codemirror mode }} }); ``` ### Customize keymaps[](#customize-keymaps) Note Declarative Custom Keymaps is a provisional feature with an unstable API which is not guaranteed to be kept in future versions of the notebook, and can be removed or changed without warning. The notebook shortcuts that are defined by Jupyter both in edit mode and command mode are configurable in the frontend configuration file `~/.jupyter/nbconfig/notebook.json`. The modification of keyboard shortcuts suffers from several limitations, mainly that your browser and OS might prevent certain shortcuts from working correctly. If this is the case, there is unfortunately not much that can be done. The second issue can arise with keyboards that have a layout different from US English. Again, even though we are aware of the issue, there is not much that can be done. Shortcuts are also limited by the underlying library that handles code and text editing: CodeMirror.
If some keyboard shortcuts conflict, the method described below might not work for creating new keyboard shortcuts, especially in the `edit` mode of the notebook. The four sections of interest in `~/.jupyter/nbconfig/notebook.json` are the following: > * `keys.command.unbind` > * `keys.edit.unbind` > * `keys.command.bind` > * `keys.edit.bind` The first two sections describe which default keyboard shortcuts not to register at notebook startup time. These are mostly useful if you need to `unbind` a default keyboard shortcut before binding it to a new `command`. The first two sections apply respectively to the `command` and `edit` mode of the notebook. They take a list of shortcuts to `unbind`. For example, to unbind the shortcut to split a cell at the position of the cursor (`Ctrl-Shift-Minus`) use the following: ``` // file ~/.jupyter/nbconfig/notebook.json { "keys": { "edit": { "unbind": [ "Ctrl-Shift-Minus" ] } } } ``` The last two sections describe which new keyboard shortcuts to register at notebook startup time and which actions they trigger. The last two sections apply respectively to the `command` and `edit` mode of the notebook. They take a dictionary with shortcuts as keys and `command` names as values. For example, to bind the shortcut `G,G,G` (press G three times in a row) in command mode to the command that restarts the kernel and runs all cells, use the following: ``` // file ~/.jupyter/nbconfig/notebook.json { "keys": { "command": { "bind": { "G,G,G": "jupyter-notebook:restart-kernel-and-run-all-cells" } } } } ``` The names of the available `commands` can be found by hovering over the right end of a row in the command palette. ### Custom bundler extensions[](#custom-bundler-extensions) The notebook server supports the writing of *bundler extensions* that transform, package, and download/deploy notebook files. As a developer, you need only write a single Python function to implement a bundler. The notebook server automatically generates a *File -> Download as* or *File -> Deploy as* menu item in the notebook front-end to trigger your bundler. Here are some examples of what you can implement using bundler extensions: * Convert a notebook file to an HTML document and publish it as a post on a blog site * Create a snapshot of the current notebook environment and bundle that definition plus notebook into a zip download * Deploy a notebook as a standalone, interactive [dashboard](https://github.com/jupyter-incubator/dashboards_bundlers) To implement a bundler extension, you must do all of the following: * Declare bundler extension metadata in your Python package * Write a bundle function that responds to bundle requests * Instruct your users on how to enable/disable your bundler extension The following sections describe these steps in detail. #### Declaring bundler metadata[](#declaring-bundler-metadata) You must provide information about the bundler extension(s) your package provides by implementing a _jupyter_bundlerextension_paths function. This function can reside anywhere in your package so long as it can be imported when enabling the bundler extension. (See [Enabling/disabling bundler extensions](#enabling-bundlers).)
``` # in mypackage.hello_bundler def _jupyter_bundlerextension_paths(): """Example "hello world" bundler extension""" return [{ 'name': 'hello_bundler', # unique bundler name 'label': 'Hello Bundler', # human-readable menu item label 'module_name': 'mypackage.hello_bundler', # module containing bundle() 'group': 'deploy' # group under 'deploy' or 'download' menu }] ``` Note that the return value is a list. By returning multiple dictionaries in the list, you allow users to enable/disable sets of bundlers all at once. #### Writing the bundle function[](#writing-the-bundle-function) At runtime, a menu item with the given label appears either in the *File -> Deploy as* or *File -> Download as* menu depending on the group value in your metadata. When a user clicks the menu item, a new browser tab opens and the notebook server invokes a bundle function in the module_name specified in the metadata. You must implement a bundle function that matches the signature of the following example: ``` # in mypackage.hello_bundler def bundle(handler, model): """Transform, convert, bundle, etc. the notebook referenced by the given model. Then issue a Tornado web response using the `handler` to redirect the user's browser, download a file, show an HTML page, etc. This function must finish the handler response before returning, either explicitly or by raising an exception. Parameters --- handler : tornado.web.RequestHandler Handler that serviced the bundle request model : dict Notebook model from the configured ContentsManager """ handler.finish('I bundled {}!'.format(model['path'])) ``` Your bundle function is free to do whatever it wants with the request and respond in any manner. For example, it may read additional query parameters from the request, issue a redirect to another site, run a local process (e.g., nbconvert), make an HTTP request to another service, etc. The caller of the bundle function is decorated with @tornado.gen.coroutine and wraps its call with tornado.gen.maybe_future. This behavior means you may handle the web request synchronously, as in the example above, or asynchronously using @tornado.gen.coroutine and yield, as in the example below. ``` from tornado import gen @gen.coroutine def bundle(handler, model): # simulate a long running IO op (e.g., deploying to a remote host) yield gen.sleep(10) # now respond handler.finish('I spent 10 seconds bundling {}!'.format(model['path'])) ``` You should prefer the second, asynchronous approach when your bundle operation is long-running and would otherwise block the notebook server main loop if handled synchronously. For more details about the data flow from menu item click to bundle function invocation, see [Bundler invocation details](#bundler-details). #### Enabling/disabling bundler extensions[](#enabling-disabling-bundler-extensions) The notebook server includes a command line interface (CLI) for enabling and disabling bundler extensions. You should document the basic commands for enabling and disabling your bundler. One possible command for enabling the hello_bundler example is the following: ``` jupyter bundlerextension enable --py mypackage.hello_bundler --sys-prefix ``` The above updates the notebook configuration file in the current conda/virtualenv environment (--sys-prefix) with the metadata returned by the mypackage.hello_bundler._jupyter_bundlerextension_paths function.
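As with the other extension types, you can confirm that the bundler is now registered in that environment by listing it:

```
jupyter bundlerextension list
```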
The corresponding command to later disable the bundler extension is the following: ``` jupyter bundlerextension disable --py mypackage.hello_bundler --sys-prefix ``` For more help using the bundlerextension subcommand, run the following. ``` jupyter bundlerextension --help ``` The output describes options for listing enabled bundlers, configuring bundlers for single users, configuring bundlers system-wide, etc. #### Example: IPython Notebook bundle (.zip)[](#example-ipython-notebook-bundle-zip) The hello_bundler example in this documentation is simplistic in the name of brevity. For more meaningful examples, see notebook/bundler/zip_bundler.py and notebook/bundler/tarball_bundler.py. You can enable them to try them like so: ``` jupyter bundlerextension enable --py notebook.bundler.zip_bundler --sys-prefix jupyter bundlerextension enable --py notebook.bundler.tarball_bundler --sys-prefix ``` #### Bundler invocation details[](#bundler-invocation-details) Support for bundler extensions comes from Python modules in notebook/bundler and JavaScript in notebook/static/notebook/js/menubar.js. The flow of data between the various components proceeds roughly as follows: 1. User opens a notebook document 2. Notebook front-end JavaScript loads notebook configuration 3. Bundler front-end JS creates menu items for all bundler extensions in the config 4. User clicks a bundler menu item 5. JS click handler opens a new browser window/tab to <notebook base_url>/bundle/<path/to/notebook>?bundler=<name> (i.e., a HTTP GET request) 6. Bundle handler validates the notebook path and bundler name 7. Bundle handler delegates the request to the bundle function in the bundler’s module_name 8. bundle function finishes the HTTP request Contributing to the Jupyter Notebook[](#contributing-to-the-jupyter-notebook) --- If you’re reading this section, you’re probably interested in contributing to Jupyter. Welcome and thanks for your interest in contributing! Please take a look at the Contributor documentation, familiarize yourself with using the Jupyter Notebook, and introduce yourself on the mailing list and share what area of the project you are interested in working on. ### General Guidelines[](#general-guidelines) For general documentation about contributing to Jupyter projects, see the [Project Jupyter Contributor Documentation](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html). ### Setting Up a Development Environment[](#setting-up-a-development-environment) #### Installing the Jupyter Notebook[](#installing-the-jupyter-notebook) Use the following steps: ``` pip install --upgrade setuptools pip git clone https://github.com/jupyter/notebook cd notebook pip install -e . ``` If you are using a system-wide Python installation and you only want to install the notebook for you, you can add `--user` to the install commands. Once you have done this, you can launch the master branch of Jupyter notebook from any directory in your system with: ``` jupyter notebook ``` #### Verification[](#verification) While running the notebook, select one of your notebook files (the file will have the extension `.ipynb`). In the top tab you will click on “Help” and then click on “About”. In the pop window you will see information about the version of Jupyter that you are running. You will see “The version of the notebook server is:”. If you are working in development mode, you will see that your version of Jupyter notebook will include the word “dev”. 
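The same version string can also be checked from a terminal, for example:

```
jupyter notebook --version
```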
If it does not include the word “dev”, you are currently not working in development mode and should follow the steps below to uninstall and reinstall Jupyter. #### Troubleshooting the Installation[](#troubleshooting-the-installation) If you do not see that your Jupyter Notebook is running in dev mode, it’s possible that you are running other instances of Jupyter Notebook. You can try the following steps: 1. Uninstall all instances of the notebook package. These include any installations you made using pip or conda. 2. Run `python3 -m pip install -e .` in the notebook repository to install the notebook from there. 3. Launch with `python3 -m notebook --port 8989`, and check that the browser is pointing to `localhost:8989` (rather than the default 8888). You don’t have to launch with port 8989; any port that is neither the default nor already in use will work. 4. Verify the installation with the steps in the previous section. If you have tried the above and still find that the notebook is not reflecting the current source code, try cleaning the repo with `git clean -xfd` and reinstalling with `pip install -e .`. #### Modifying the JavaScript and CSS[](#modifying-the-javascript-and-css) The build process for this version of the notebook grabs the static assets from the nbclassic package. Frontend changes should be made in the [nbclassic repository](https://github.com/jupyter/nbclassic). ### Running Tests[](#running-tests) #### Python Tests[](#python-tests) Install dependencies: ``` pip install -e '.[test]' ``` To run the Python tests, use: ``` pytest ``` If you want coverage statistics as well, you can run: ``` py.test --cov notebook -v --pyargs notebook ``` ### Building the Documentation[](#building-the-documentation) To build the documentation you’ll need [Sphinx](http://www.sphinx-doc.org/), [pandoc](http://pandoc.org/) and a few other packages. To install (and activate) a conda environment named `notebook_docs` containing all the necessary packages (except pandoc), use: ``` conda create -n notebook_docs pip conda activate notebook_docs # Linux and OS X activate notebook_docs # Windows pip install .[docs] ``` If you want to install the necessary packages with `pip`, use the following instead: ``` pip install .[docs] ``` Once you have installed the required packages, you can build the docs with: ``` cd docs make html ``` After that, the generated HTML files will be available at `build/html/index.html`. You may view the docs in your browser. You can automatically check if all hyperlinks are still valid: ``` make linkcheck ``` Windows users can find `make.bat` in the `docs` folder. You should also have a look at the [Project Jupyter Documentation Guide](https://jupyter.readthedocs.io/en/latest/contributing/docs-contributions/index.html). Developer FAQ[](#developer-faq) --- 1. How do I install a prerelease version such as a beta or release candidate? ``` python -m pip install notebook --pre --upgrade ``` My Notebook[](#My-Notebook) --- ``` [1]: ``` ``` def foo(): return "foo" ``` ``` [2]: ``` ``` def has_ip_syntax(): listing = !ls return listing ``` ``` [4]: ``` ``` def whatsmyname(): return __name__ ``` Other notebook[](#Other-notebook) --- This notebook just defines `bar` ``` [2]: ``` ``` def bar(x): return "bar" * x ```
marshmallow-jsonapi
readthedoc
Python
marshmallow-jsonapi 0.24.0 documentation [marshmallow-jsonapi](#) --- marshmallow-jsonapi[¶](#marshmallow-jsonapi) === Release v0.24.0. ([Changelog](index.html#changelog)) JSON API 1.0 ([https://jsonapi.org](http://jsonapi.org/)) formatting with [marshmallow](https://marshmallow.readthedocs.io). marshmallow-jsonapi provides a simple way to produce JSON API-compliant data in any Python web framework. ``` from marshmallow_jsonapi import Schema, fields class PostSchema(Schema): id = fields.Str(dump_only=True) title = fields.Str() author = fields.Relationship( related_url="/authors/{author_id}", related_url_kwargs={"author_id": "<author.id>"}, ) comments = fields.Relationship( related_url="/posts/{post_id}/comments", related_url_kwargs={"post_id": "<id>"}, # Include resource linkage many=True, include_resource_linkage=True, type_="comments", ) class Meta: type_ = "posts" strict = True post_schema = PostSchema() post_schema.dump(post) # { # "data": { # "id": "1", # "type": "posts" # "attributes": { # "title": "JSON API paints my bikeshed!" # }, # "relationships": { # "author": { # "links": { # "related": "/authors/9" # } # }, # "comments": { # "data": [ # {"id": 5, "type": "comments"}, # {"id": 12, "type": "comments"} # ], # "links": { # "related": "/posts/1/comments/" # } # } # }, # } # } ``` Installation[¶](#installation) --- ``` pip install marshmallow-jsonapi ``` Guide[¶](#guide) --- ### Quickstart[¶](#quickstart) Note The following guide assumes some familiarity with the marshmallow API. To learn more about marshmallow, see its official documentation at <https://marshmallow.readthedocs.io>. #### Declaring schemas[¶](#declaring-schemas) Let’s start with a basic post “model”. ``` class Post: def __init__(self, id, title): self.id = id self.title = title ``` Declare your schemas as you would with marshmallow. A [`Schema`](index.html#marshmallow_jsonapi.Schema) **MUST** define: * An `id` field * The `type_` class Meta option It is **RECOMMENDED** to set strict mode to [`True`](https://python.readthedocs.io/en/latest/library/constants.html#True). Automatic self-linking is supported through these Meta options: * `self_url` specifies the URL to the resource itself * `self_url_kwargs` specifies replacement fields for `self_url` * `self_url_many` specifies the URL the resource when a collection (many) are serialized ``` from marshmallow_jsonapi import Schema, fields class PostSchema(Schema): id = fields.Str(dump_only=True) title = fields.Str() class Meta: type_ = "posts" self_url = "/posts/{id}" self_url_kwargs = {"id": "<id>"} self_url_many = "/posts/" ``` These URLs can be auto-generated by specifying `self_view`, `self_view_kwargs` and `self_view_many` instead when using the [Flask integration](#flask-integration). #### Serialization[¶](#serialization) Objects will be serialized to [JSON API documents](http://jsonapi.org/format/#document-structure) with primary data. ``` post = Post(id="1", title="Django is Omakase") PostSchema().dump(post) # { # 'data': { # 'id': '1', # 'type': 'posts', # 'attributes': {'title': 'Django is Omakase'}, # 'links': {'self': '/posts/1'} # }, # 'links': {'self': '/posts/1'} # } ``` #### Relationships[¶](#relationships) The `Relationship` field is used to serialize [relationship objects](http://jsonapi.org/format/#document-resource-object-relationships). For example, a Post may have an author and comments associated with it. 
``` class User: def __init__(self, id, name): self.id = id self.name = name class Comment: def __init__(self, id, body, author): self.id = id self.body = body self.author = author class Post: def __init__(self, id, title, author, comments=None): self.id = id self.title = title self.author = author # User object self.comments = [] if comments is None else comments # Comment objects ``` To serialize links, pass a URL format string and a dictionary of keyword arguments. String arguments enclosed in `< >` will be interpreted as attributes to pull from the object being serialized. The relationship links can automatically be generated from Flask view names when using the [Flask integration](#flask-integration). ``` class PostSchema(Schema): id = fields.Str(dump_only=True) title = fields.Str() author = fields.Relationship( self_url="/posts/{post_id}/relationships/author", self_url_kwargs={"post_id": "<id>"}, related_url="/authors/{author_id}", related_url_kwargs={"author_id": "<author.id>"}, ) class Meta: type_ = "posts" user = User(id="94", name="Laura") post = Post(id="1", title="Django is Omakase", author=user) PostSchema().dump(post) # { # 'data': { # 'id': '1', # 'type': 'posts', # 'attributes': {'title': 'Django is Omakase'}, # 'relationships': { # 'author': { # 'links': { # 'self': '/posts/1/relationships/author', # 'related': '/authors/94' # } # } # } # } # } ``` ##### Resource linkages[¶](#resource-linkages) You can serialize [resource linkages](http://jsonapi.org/format/#document-resource-object-linkage) by passing `include_resource_linkage=True` and the resource `type_` argument. ``` class PostSchema(Schema): id = fields.Str(dump_only=True) title = fields.Str() author = fields.Relationship( self_url="/posts/{post_id}/relationships/author", self_url_kwargs={"post_id": "<id>"}, related_url="/authors/{author_id}", related_url_kwargs={"author_id": "<author.id>"}, # Include resource linkage include_resource_linkage=True, type_="users", ) class Meta: type_ = "posts" PostSchema().dump(post) # { # 'data': { # 'id': '1', # 'type': 'posts', # 'attributes': {'title': 'Django is Omakase'}, # 'relationships': { # 'author': { # 'data': {'type': 'users', 'id': '94'}, # 'links': { # 'self': '/posts/1/relationships/author', # 'related': '/authors/94' # } # } # } # } # } ``` ##### Compound documents[¶](#compound-documents) [Compound documents](http://jsonapi.org/format/#document-compound-documents) allow you to include related resources along with the primary resource. In order to include objects, you have to define a [`Schema`](index.html#marshmallow_jsonapi.Schema) for the respective relationship, which will be used to render those objects.
``` class PostSchema(Schema): id = fields.Str(dump_only=True) title = fields.Str() comments = fields.Relationship( related_url="/posts/{post_id}/comments", related_url_kwargs={"post_id": "<id>"}, many=True, include_resource_linkage=True, type_="comments", # define a schema for rendering included data schema="CommentSchema", ) author = fields.Relationship( self_url="/posts/{post_id}/relationships/author", self_url_kwargs={"post_id": "<id>"}, related_url="/authors/{author_id}", related_url_kwargs={"author_id": "<author.id>"}, include_resource_linkage=True, type_="users", ) class Meta: type_ = "posts" class CommentSchema(Schema): id = fields.Str(dump_only=True) body = fields.Str() author = fields.Relationship( self_url="/comments/{comment_id}/relationships/author", self_url_kwargs={"comment_id": "<id>"}, related_url="/comments/{author_id}", related_url_kwargs={"author_id": "<author.id>"}, type_="users", # define a schema for rendering included data schema="UserSchema", ) class Meta: type_ = "comments" class UserSchema(Schema): id = fields.Str(dump_only=True) name = fields.Str() class Meta: type_ = "users" ``` Just as with nested fields the `schema` can be a class or a string with a simple or fully qualified class name. Make sure to import the schema beforehand. Now you can include some data in a dump by specifying the `include_data` argument (also supports nested relations via the dot syntax). ``` armin = User(id="101", name="Armin") laura = User(id="94", name="Laura") steven = User(id="23", name="Steven") comments = [ Comment(id="5", body="Marshmallow is sweet like sugar!", author=steven), Comment(id="12", body="Flask is Fun!", author=armin), ] post = Post(id="1", title="Django is Omakase", author=laura, comments=comments) PostSchema(include_data=("comments", "comments.author")).dump(post) # { # 'data': { # 'id': '1', # 'type': 'posts', # 'attributes': {'title': 'Django is Omakase'}, # 'relationships': { # 'author': { # 'data': {'type': 'users', 'id': '94'}, # 'links': { # 'self': '/posts/1/relationships/author', # 'related': '/authors/94' # } # }, # 'comments': { # 'data': [ # {'type': 'comments', 'id': '5'}, # {'type': 'comments', 'id': '12'} # ], # 'links': { # 'related': '/posts/1/comments' # } # } # } # }, # 'included': [ # { # 'id': '5', # 'type': 'comments', # 'attributes': {'body': 'Marshmallow is sweet like sugar!'}, # 'relationships': { # 'author': { # 'data': {'type': 'users', 'id': '23'}, # 'links': { # 'self': '/comments/5/relationships/author', # 'related': '/comments/23' # } # } # } # }, # { # 'id': '12', # 'type': 'comments', # 'attributes': {'body': 'Flask is Fun!'}, # 'relationships': { # 'author': { # 'data': {'type': 'users', 'id': '101'}, # 'links': { # 'self': '/comments/12/relationships/author', # 'related': '/comments/101' # } # } # }, # # }, # { # 'id': '23', # 'type': 'users', # 'attributes': {'name': 'Steven'} # }, # { # 'id': '101', # 'type': 'users', # 'attributes': {'name': 'Armin'} # } # ] # } ``` #### Meta Information[¶](#meta-information) The [`DocumentMeta`](index.html#marshmallow_jsonapi.fields.DocumentMeta) field is used to serialize the meta object within a [document’s “top level”](http://jsonapi.org/format/#document-meta). 
``` from marshmallow_jsonapi import Schema, fields class UserSchema(Schema): id = fields.Str(dump_only=True) name = fields.Str() document_meta = fields.DocumentMeta() class Meta: type_ = "users" user = {"name": "Alice", "document_meta": {"page": {"offset": 10}}} UserSchema().dump(user) # { # "meta": { # "page": { # "offset": 10 # } # }, # "data": { # "id": "1", # "type": "users" # "attributes": {"name": "Alice"}, # } # } ``` The [`ResourceMeta`](index.html#marshmallow_jsonapi.fields.ResourceMeta) field is used to serialize the meta object within a [resource object](http://jsonapi.org/format/#document-resource-objects). ``` from marshmallow_jsonapi import Schema, fields class UserSchema(Schema): id = fields.Str(dump_only=True) name = fields.Str() resource_meta = fields.ResourceMeta() class Meta: type_ = "users" user = {"name": "Alice", "resource_meta": {"active": True}} UserSchema().dump(user) # { # "data": { # "type": "users", # "attributes": {"name": "Alice"}, # "meta": { # "active": true # } # } # } ``` #### Errors[¶](#errors) `Schema.load()` and `Schema.validate()` will return JSON API-formatted [Error objects](http://jsonapi.org/format/#error-objects). ``` from marshmallow_jsonapi import Schema, fields from marshmallow import validate, ValidationError class AuthorSchema(Schema): id = fields.Str(dump_only=True) first_name = fields.Str(required=True) last_name = fields.Str(required=True) password = fields.Str(load_only=True, validate=validate.Length(6)) twitter = fields.Str() class Meta: type_ = "authors" author_data = { "data": {"type": "users", "attributes": {"first_name": "Dan", "password": "short"}} } AuthorSchema().validate(author_data) # { # 'errors': [ # { # 'detail': 'Missing data for required field.', # 'source': { # 'pointer': '/data/attributes/last_name' # } # }, # { # 'detail': 'Shorter than minimum length 6.', # 'source': { # 'pointer': '/data/attributes/password' # } # } # ] # } ``` If an invalid “type” is passed in the input data, an [`IncorrectTypeError`](index.html#marshmallow_jsonapi.exceptions.IncorrectTypeError) is raised. ``` from marshmallow_jsonapi.exceptions import IncorrectTypeError author_data = { "data": { "type": "invalid-type", "attributes": { "first_name": "Dan", "last_name": "Gebhardt", "password": "verysecure", }, } } try: AuthorSchema().validate(author_data) except IncorrectTypeError as err: pprint(err.messages) # { # 'errors': [ # { # 'detail': 'Invalid type. Expected "users".', # 'source': { # 'pointer': '/data/type' # } # } # ] # } ``` #### Inflection[¶](#inflection) You can optionally specify a function to transform attribute names. For example, you may decide to follow JSON API’s [recommendation](http://jsonapi.org/recommendations/#naming) to use “dasherized” names. ``` from marshmallow_jsonapi import Schema, fields def dasherize(text): return text.replace("_", "-") class UserSchema(Schema): id = fields.Str(dump_only=True) first_name = fields.Str(required=True) last_name = fields.Str(required=True) class Meta: type_ = "users" inflect = dasherize UserSchema().dump(user) # { # 'data': { # 'id': '9', # 'type': 'users', # 'attributes': { # 'first-name': 'Dan', # 'last-name': 'Gebhardt' # } # } # } ``` #### Flask integration[¶](#flask-integration) marshmallow-jsonapi includes optional utilities to integrate with Flask. A Flask-specific schema in [`marshmallow_jsonapi.flask`](index.html#module-marshmallow_jsonapi.flask) can be used to auto-generate self-links based on view names instead of hard-coding URLs. 
Additionally, the `Relationship` field in the [`marshmallow_jsonapi.flask`](index.html#module-marshmallow_jsonapi.flask) module allows you to pass view names instead of path templates to generate relationship links. ``` from marshmallow_jsonapi import fields from marshmallow_jsonapi.flask import Relationship, Schema class PostSchema(Schema): id = fields.Str(dump_only=True) title = fields.Str() author = fields.Relationship( self_view="post_author", self_url_kwargs={"post_id": "<id>"}, related_view="author_detail", related_view_kwargs={"author_id": "<author.id>"}, ) comments = Relationship( related_view="post_comments", related_view_kwargs={"post_id": "<id>"}, many=True, include_resource_linkage=True, type_="comments", ) class Meta: type_ = "posts" self_view = "post_detail" self_view_kwargs = {"post_detail": "<id>"} self_view_many = "posts_list" ``` See [here](https://github.com/marshmallow-code/marshmallow-jsonapi/blob/dev/examples/flask_example.py) for a full example. API Reference[¶](#api-reference) --- ### API Reference[¶](#api-reference) #### Core[¶](#module-marshmallow_jsonapi) *class* `marshmallow_jsonapi.``Schema`(**args*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema)[¶](#marshmallow_jsonapi.Schema) Schema class that formats data according to JSON API 1.0. Must define the `type_` `class Meta` option. Example: ``` from marshmallow_jsonapi import Schema, fields def dasherize(text): return text.replace('_', '-') class PostSchema(Schema): id = fields.Str(dump_only=True) # Required title = fields.Str() author = fields.HyperlinkRelated( '/authors/{author_id}', url_kwargs={'author_id': '<author.id>'}, ) comments = fields.HyperlinkRelated( '/posts/{post_id}/comments', url_kwargs={'post_id': '<id>'}, # Include resource linkage many=True, include_resource_linkage=True, type_='comments' ) class Meta: type_ = 'posts' # Required inflect = dasherize ``` *class* `Meta`[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.Meta)[¶](#marshmallow_jsonapi.Schema.Meta) Options object for [`Schema`](#marshmallow_jsonapi.Schema). Takes the same options as [`marshmallow.Schema.Meta`](https://marshmallow.readthedocs.io/en/latest/api_reference.html#marshmallow.Schema.Meta) with the addition of: * `type_` - required, the JSON API resource type as a string. * `inflect` - optional, an inflection function to modify attribute names. * `self_url` - optional, URL to use to `self` in links * `self_url_kwargs` - optional, replacement fields for `self_url`. String arguments enclosed in `< >` will be interpreted as attributes to pull from the schema data. * `self_url_many` - optional, URL to use to `self` in top-level `links` when a collection of resources is returned. `OPTIONS_CLASS`[¶](#marshmallow_jsonapi.Schema.OPTIONS_CLASS) alias of `marshmallow_jsonapi.schema.SchemaOpts` `check_relations`(*relations*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.check_relations)[¶](#marshmallow_jsonapi.Schema.check_relations) Recursive function which checks if a relation is valid. `format_error`(*field_name*, *message*, *index=None*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.format_error)[¶](#marshmallow_jsonapi.Schema.format_error) Override-able hook to format a single error message as an Error object. See: <http://jsonapi.org/format/#error-objects`format_errors`(*errors*, *many*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.format_errors)[¶](#marshmallow_jsonapi.Schema.format_errors) Format validation errors as JSON Error objects. 
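For example, a subclass can override the `format_error` hook above to decorate every generated error object. This is only a minimal sketch: the `ArticleSchema` name and the added `title` member are illustrative, not part of the library.

```
from marshmallow_jsonapi import Schema, fields

class ArticleSchema(Schema):
    id = fields.Str(dump_only=True)
    body = fields.Str(required=True)

    class Meta:
        type_ = "articles"

    def format_error(self, field_name, message, index=None):
        # Build the default JSON API error object first...
        error = super().format_error(field_name, message, index=index)
        # ...then attach an extra member (any valid error object member works).
        error["title"] = "Validation error"
        return error

# ArticleSchema().validate({"data": {"type": "articles", "attributes": {}}})
# would now include 'title': 'Validation error' in each error object.
```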
`format_item`(*item*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.format_item)[¶](#marshmallow_jsonapi.Schema.format_item) Format a single datum as a Resource object. See: <http://jsonapi.org/format/#document-resource-objects`format_items`(*data*, *many*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.format_items)[¶](#marshmallow_jsonapi.Schema.format_items) Format data as a Resource object or list of Resource objects. See: <http://jsonapi.org/format/#document-resource-objects`format_json_api_response`(*data*, *many*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.format_json_api_response)[¶](#marshmallow_jsonapi.Schema.format_json_api_response) Post-dump hook that formats serialized data as a top-level JSON API object. See: <http://jsonapi.org/format/#document-top-level`generate_url`(*link*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.generate_url)[¶](#marshmallow_jsonapi.Schema.generate_url) Generate URL with any kwargs interpolated. `get_resource_links`(*item*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.get_resource_links)[¶](#marshmallow_jsonapi.Schema.get_resource_links) Hook for adding links to a resource object. `get_top_level_links`(*data*, *many*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.get_top_level_links)[¶](#marshmallow_jsonapi.Schema.get_top_level_links) Hook for adding links to the root of the response data. `inflect`(*text*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.inflect)[¶](#marshmallow_jsonapi.Schema.inflect) Inflect `text` if the `inflect` class Meta option is defined, otherwise do nothing. `on_bind_field`(*field_name*, *field_obj*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.on_bind_field)[¶](#marshmallow_jsonapi.Schema.on_bind_field) Schema hook override. When binding fields, set `data_key` to the inflected form of field_name. `wrap_response`(*data*, *many*)[[source]](_modules/marshmallow_jsonapi/schema.html#Schema.wrap_response)[¶](#marshmallow_jsonapi.Schema.wrap_response) Wrap data and links according to the JSON API *class* `marshmallow_jsonapi.``SchemaOpts`(*meta*, **args*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/schema.html#SchemaOpts)[¶](#marshmallow_jsonapi.SchemaOpts) #### Fields[¶](#module-marshmallow_jsonapi.fields) Includes all the fields classes from [`marshmallow.fields`](https://marshmallow.readthedocs.io/en/latest/marshmallow.fields.html#module-marshmallow.fields) as well as fields for serializing JSON API-formatted hyperlinks. *class* `marshmallow_jsonapi.fields.``BaseRelationship`(**, load_default: Any = <marshmallow.missing>, missing: Any = <marshmallow.missing>, dump_default: Any = <marshmallow.missing>, default: Any = <marshmallow.missing>, data_key: Optional[str] = None, attribute: Optional[str] = None, validate: Optional[Union[Callable[[Any], Any], Iterable[Callable[[Any], Any]]]] = None, required: bool = False, allow_none: Optional[bool] = None, load_only: bool = False, dump_only: bool = False, error_messages: Optional[Dict[str, str]] = None, metadata: Optional[Mapping[str, Any]] = None, **additional_metadata*)[[source]](_modules/marshmallow_jsonapi/fields.html#BaseRelationship)[¶](#marshmallow_jsonapi.fields.BaseRelationship) Base relationship field. This is used by [`marshmallow_jsonapi.Schema`](#marshmallow_jsonapi.Schema) to determine which fields should be formatted as relationship objects. 
See: <http://jsonapi.org/format/#document-resource-object-relationships*class* `marshmallow_jsonapi.fields.``DocumentMeta`(***kwargs*)[[source]](_modules/marshmallow_jsonapi/fields.html#DocumentMeta)[¶](#marshmallow_jsonapi.fields.DocumentMeta) Field which serializes to a “meta object” within a document’s “top level”. Examples: ``` from marshmallow_jsonapi import Schema, fields class UserSchema(Schema): id = fields.String() metadata = fields.DocumentMeta() class Meta: type_ = 'product' ``` See: <http://jsonapi.org/format/#document-meta*class* `marshmallow_jsonapi.fields.``Relationship`(*related_url=''*, *related_url_kwargs=None*, ***, *self_url=''*, *self_url_kwargs=None*, *include_resource_linkage=False*, *schema=None*, *many=False*, *type_=None*, *id_field=None*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/fields.html#Relationship)[¶](#marshmallow_jsonapi.fields.Relationship) Framework-independent field which serializes to a “relationship object”. See: <http://jsonapi.org/format/#document-resource-object-relationshipsExamples: ``` author = Relationship( related_url='/authors/{author_id}', related_url_kwargs={'author_id': '<author.id>'}, ) comments = Relationship( related_url='/posts/{post_id}/comments/', related_url_kwargs={'post_id': '<id>'}, many=True, include_resource_linkage=True, type_='comments' ) ``` This field is read-only by default. Parameters * **related_url** ([*str*](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)) – Format string for related resource links. * **related_url_kwargs** ([*dict*](https://python.readthedocs.io/en/latest/library/stdtypes.html#dict)) – Replacement fields for `related_url`. String arguments enclosed in `< >` will be interpreted as attributes to pull from the target object. * **self_url** ([*str*](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)) – Format string for self relationship links. * **self_url_kwargs** ([*dict*](https://python.readthedocs.io/en/latest/library/stdtypes.html#dict)) – Replacement fields for `self_url`. String arguments enclosed in `< >` will be interpreted as attributes to pull from the target object. * **include_resource_linkage** ([*bool*](https://python.readthedocs.io/en/latest/library/functions.html#bool)) – Whether to include a resource linkage (<http://jsonapi.org/format/#document-resource-object-linkage>) in the serialized result. * **schema** ([*marshmallow_jsonapi.Schema*](index.html#marshmallow_jsonapi.Schema)) – The schema to render the included data with. * **many** ([*bool*](https://python.readthedocs.io/en/latest/library/functions.html#bool)) – Whether the relationship represents a many-to-one or many-to-many relationship. Only affects serialization of the resource linkage. * **type** ([*str*](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)) – The type of resource. * **id_field** ([*str*](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)) – Attribute name to pull ids from if a resource linkage is included. `deserialize`(*value*, *attr=None*, *data=None*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/fields.html#Relationship.deserialize)[¶](#marshmallow_jsonapi.fields.Relationship.deserialize) Deserialize `value`. Raises **ValidationError** – If the value is not type [`dict`](https://python.readthedocs.io/en/latest/library/stdtypes.html#dict), if the value does not contain a `data` key, and if the value is required but unspecified. 
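As a concrete illustration of these constraints, here is a hedged sketch of loading a to-one relationship; the `CommentSchema`, the `users`/`comments` types, and the noted result are illustrative rather than taken from the library's docs.

```
from marshmallow_jsonapi import Schema, fields

class CommentSchema(Schema):
    id = fields.Str(dump_only=True)
    body = fields.Str()
    # Made explicitly writable here, since the field is read-only by default.
    author = fields.Relationship(type_="users", dump_only=False)

    class Meta:
        type_ = "comments"

payload = {
    "data": {
        "type": "comments",
        "attributes": {"body": "First!"},
        "relationships": {
            # The value must be a dict containing a "data" key;
            # anything else raises a ValidationError on load.
            "author": {"data": {"type": "users", "id": "42"}}
        },
    }
}

result = CommentSchema().load(payload)
# The relationship deserializes to the linked resource's id,
# e.g. result["author"] == "42" (illustrative).
```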
`extract_value`(*data*)[[source]](_modules/marshmallow_jsonapi/fields.html#Relationship.extract_value)[¶](#marshmallow_jsonapi.fields.Relationship.extract_value) Extract the id key and validate the request structure. `serialize`(*attr*, *obj*, *accessor=None*)[[source]](_modules/marshmallow_jsonapi/fields.html#Relationship.serialize)[¶](#marshmallow_jsonapi.fields.Relationship.serialize) Pulls the value for the given key from the object, applies the field’s formatting and returns the result. Parameters * **attr** – The attribute/key to get from the object. * **obj** – The object to access the attribute/key from. * **accessor** – Function used to access values from `obj`. * **kwargs** – Field-specific keyword arguments. *class* `marshmallow_jsonapi.fields.``ResourceMeta`(***kwargs*)[[source]](_modules/marshmallow_jsonapi/fields.html#ResourceMeta)[¶](#marshmallow_jsonapi.fields.ResourceMeta) Field which serializes to a “meta object” within a “resource object”. Examples: ``` from marshmallow_jsonapi import Schema, fields class UserSchema(Schema): id = fields.String() meta_resource = fields.ResourceMeta() class Meta: type_ = 'product' ``` See: <http://jsonapi.org/format/#document-resource-objects#### Flask[¶](#module-marshmallow_jsonapi.flask) Flask integration that avoids the need to hard-code URLs for links. This includes a Flask-specific schema with custom Meta options and a relationship field for linking to related resources. *class* `marshmallow_jsonapi.flask.``Relationship`(*related_view=None*, *related_view_kwargs=None*, ***, *self_view=None*, *self_view_kwargs=None*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/flask.html#Relationship)[¶](#marshmallow_jsonapi.flask.Relationship) Field which serializes to a “relationship object” with a “related resource link”. See: <http://jsonapi.org/format/#document-resource-object-relationshipsExamples: ``` author = Relationship( related_view='author_detail', related_view_kwargs={'author_id': '<author.id>'}, ) comments = Relationship( related_view='posts_comments', related_view_kwargs={'post_id': '<id>'}, many=True, include_resource_linkage=True, type_='comments' ) ``` This field is read-only by default. Parameters * **related_view** ([*str*](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)) – View name for related resource link. * **related_view_kwargs** ([*dict*](https://python.readthedocs.io/en/latest/library/stdtypes.html#dict)) – Path kwargs fields for `related_view`. String arguments enclosed in `< >` will be interpreted as attributes to pull from the target object. * **self_view** ([*str*](https://python.readthedocs.io/en/latest/library/stdtypes.html#str)) – View name for self relationship link. * **self_view_kwargs** ([*dict*](https://python.readthedocs.io/en/latest/library/stdtypes.html#dict)) – Path kwargs for `self_view`. String arguments enclosed in `< >` will be interpreted as attributes to pull from the target object. * ****kwargs** – Same keyword arguments as [`marshmallow_jsonapi.fields.Relationship`](#marshmallow_jsonapi.fields.Relationship). *class* `marshmallow_jsonapi.flask.``Schema`(**args*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/flask.html#Schema)[¶](#marshmallow_jsonapi.flask.Schema) A Flask specific schema that resolves self URLs from view names. 
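Example (a minimal sketch; the `author_detail` and `authors_list` view names are placeholders for views defined in your Flask app, not names provided by the library):

```
from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema

class AuthorSchema(Schema):
    id = fields.Str(dump_only=True)
    name = fields.Str()

    class Meta:
        type_ = "authors"
        # Self links are resolved from Flask view names instead of URL templates.
        self_view = "author_detail"
        self_view_kwargs = {"author_id": "<id>"}
        self_view_many = "authors_list"
```

Because the links are resolved through Flask's routing, dumping with this schema is expected to happen inside an active application (or request) context.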
*class* `Meta`[[source]](_modules/marshmallow_jsonapi/flask.html#Schema.Meta)[¶](#marshmallow_jsonapi.flask.Schema.Meta) Options object that takes the same options as `marshmallow-jsonapi.Schema`, but instead of `self_url`, `self_url_kwargs` and `self_url_many` has the following options to resolve the URLs from Flask views: * `self_view` - View name to resolve the self URL link from. * `self_view_kwargs` - Replacement fields for `self_view`. String attributes enclosed in `< >` will be interpreted as attributes to pull from the schema data. * `self_view_many` - View name to resolve the self URL link when a collection of resources is returned. `self_url` *= None*[¶](#marshmallow_jsonapi.flask.Schema.Meta.self_url) `self_url_kwargs` *= None*[¶](#marshmallow_jsonapi.flask.Schema.Meta.self_url_kwargs) `self_url_many` *= None*[¶](#marshmallow_jsonapi.flask.Schema.Meta.self_url_many) `OPTIONS_CLASS`[¶](#marshmallow_jsonapi.flask.Schema.OPTIONS_CLASS) alias of [`marshmallow_jsonapi.flask.SchemaOpts`](#marshmallow_jsonapi.flask.SchemaOpts) `generate_url`(*view_name*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/flask.html#Schema.generate_url)[¶](#marshmallow_jsonapi.flask.Schema.generate_url) Generate URL with any kwargs interpolated. *class* `marshmallow_jsonapi.flask.``SchemaOpts`(*meta*, **args*, ***kwargs*)[[source]](_modules/marshmallow_jsonapi/flask.html#SchemaOpts)[¶](#marshmallow_jsonapi.flask.SchemaOpts) Options to use Flask view names instead of hard coding URLs. #### Exceptions[¶](#module-marshmallow_jsonapi.exceptions) Exception classes. *exception* `marshmallow_jsonapi.exceptions.``IncorrectTypeError`(*message=None*, *actual=None*, *expected=None*)[[source]](_modules/marshmallow_jsonapi/exceptions.html#IncorrectTypeError)[¶](#marshmallow_jsonapi.exceptions.IncorrectTypeError) Raised when client provides an invalid [`type`](https://python.readthedocs.io/en/latest/library/functions.html#type) in a request. *property* `messages`[¶](#marshmallow_jsonapi.exceptions.IncorrectTypeError.messages) JSON API-formatted error representation. *exception* `marshmallow_jsonapi.exceptions.``JSONAPIError`[[source]](_modules/marshmallow_jsonapi/exceptions.html#JSONAPIError)[¶](#marshmallow_jsonapi.exceptions.JSONAPIError) Base class for all exceptions in this package. #### Utilities[¶](#module-marshmallow_jsonapi.utils) Utility functions. This module should be considered private API. `marshmallow_jsonapi.utils.``resolve_params`(*obj*, *params*, *default=<marshmallow.missing>*)[[source]](_modules/marshmallow_jsonapi/utils.html#resolve_params)[¶](#marshmallow_jsonapi.utils.resolve_params) Given a dictionary of keyword arguments, return the same dictionary except with values enclosed in `< >` resolved to attributes on `obj`. `marshmallow_jsonapi.utils.``tpl`(*val*)[[source]](_modules/marshmallow_jsonapi/utils.html#tpl)[¶](#marshmallow_jsonapi.utils.tpl) Return value within `< >` if possible, else return `None`. Project info[¶](#project-info) --- ### Changelog[¶](#changelog) #### 0.24.0 (2020-12-27)[¶](#id2) Deprecations/Removals: * Drop support for marshmallow 2, which is now EOL ([#332](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/332)). Bug fixes: * Fix behavior when serializing `None` ([#302](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/302)). Thanks [@mahenzon](https://github.com/mahenzon). Other changes: * Test against Python 3.8 and 3.9 ([#332](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/332)). 
#### 0.23.2 (2020-07-20)[¶](#id3) Bug fixes: * Import from [`collections.abc`](https://python.readthedocs.io/en/latest/library/collections.abc.html#module-collections.abc) for forward-compatibility with Python 3.10 ([#318](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/318)). Thanks [@tirkarthi](https://github.com/tirkarthi). #### 0.23.1 (2020-03-22)[¶](#id4) Bug fixes: * Fix nested fields validation error formatting ([#120](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/120)). Thanks [@mahenzon](https://github.com/mahenzon) and [@debonzi](https://github.com/debonzi) for the PRs. #### 0.23.0 (2020-02-02)[¶](#id5) * Improve performance of link generation from `Relationship` ([#277](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/277)). Thanks [@iamareebjamal](https://github.com/iamareebjamal) for reporting and fixing. #### 0.22.0 (2019-09-15)[¶](#id6) Deprecation/Removals: * Drop support for Python 2.7 and 3.5. Only Python>=3.6 is supported ([#251](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/251)). * Drop support for marshmallow 3 pre-releases. Only stable versions >=2.15.2 are supported. * Remove `fields.Meta`. Bug fixes: * Address `DeprecationWarning` raised by `Field.fail` on marshmallow 3. #### 0.21.2 (2019-07-01)[¶](#id7) Bug fixes: * marshmallow 3.0.0rc7 compatibility ([#233](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/233)). Other changes: * Format with pyupgrade and black ([#235](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/235)). * Switch to Azure Pipelines for CI ([#234](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/234)). #### 0.21.1 (2019-05-05)[¶](#id8) Bug fixes: * marshmallow 3.0.0rc6 cmpatibility ([#221](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/221)). #### 0.21.0 (2018-12-16)[¶](#id9) Bug fixes: * *Backwards-incompatible*: Revert URL quoting introduced in 0.20.2 ([#184](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/184)). If you need quoting, override `Schema.generate_url`. ``` from marshmallow_jsonapi import Schema from werkzeug.urls import url_fix class MySchema(Schema): def generate_url(self, link, **kwargs): url = super().generate_url(link, **kwargs) return url_fix(url) ``` Thanks [@kgutwin](https://github.com/kgutwin) for reporting the issue. * Fix `Relationship` deserialization behavior when `required=False` ([#177](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/177)). Thanks [@aberres](https://github.com/aberres) for reporting and [@scottwernervt](https://github.com/scottwernervt) for the fix. Other changes: * Test against Python 3.7. #### 0.20.5 (2018-10-27)[¶](#id10) Bug fixes: * Fix deserializing `id` field to non-string types ([#179](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/179)). Thanks [@aberres](https://github.com/aberres) for the catch and patch. #### 0.20.4 (2018-10-04)[¶](#id11) Bug fixes: * Fix bug where multi-level nested relationships would not be properly deserialized ([#127](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/127)). Thanks [@ww3pl](https://github.com/ww3pl) for the catch and patch. #### 0.20.3 (2018-09-13)[¶](#id12) Bug fixes: * Fix missing load validation when data is not a collection but many=True ([#161](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/161)). Thanks [@grantHarris](https://github.com/grantHarris). 
#### 0.20.2 (2018-08-15)[¶](#id13) Bug fixes: * Fix issues where generated URLs are unquoted ([#147](https://github.com/marshmallow-code/marshmallow-jsonapi/pull/147)). Thanks [@grantHarris](https://github.com/grantHarris). Other changes: * Fix tests against marshmallow 3.0.0b13. #### 0.20.1 (2018-07-15)[¶](#id14) Bug fixes: * Fix deserializing `missing` with a `Relationship` field ([#130](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/130)). Thanks [@kumy](https://github.com/kumy) for the catch and patch. #### 0.20.0 (2018-06-10)[¶](#id15) Bug fixes: * Fix serialization of `id` for `Relationship` fields when `attribute` is set ([#69](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/69)). Thanks [@jordal](https://github.com/jordal) for reporting and thanks [@scottwernervt](https://github.com/scottwernervt) for the fix. Note: The above fix could break some code that set `Relationship.id_field` before instantiating it. Set `Relationship.default_id_field` instead. ``` # before fields.Relationship.id_field = "item_id" # after fields.Relationship.default_id_field = "item_id" ``` Support: * Test refactoring and various doc improvements ([#63](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/63), [#86](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/86), [#121](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/121), [#](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/) and [#122](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/122)). Thanks [@scottwernervt](https://github.com/scottwernervt). #### 0.19.0 (2018-05-27)[¶](#id16) Features: * Schemas passed to `fields.Relationship` will inherit context from the parent schema ([#84](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/84)). Thanks [@asteinlein](https://github.com/asteinlein) and [@scottwernervt](https://github.com/scottwernervt) for the PRs. #### 0.18.0 (2018-05-19)[¶](#id17) Features: * Add `fields.ResourceMeta` for serializing a resource-level meta object ([#107](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/107)). Thanks [@scottwernervt](https://github.com/scottwernervt). Other changes: * *Backwards-incompatible*: Drop official support for Python 3.4. #### 0.17.0 (2018-04-29)[¶](#id18) Features: * Add support for marshmallow 3 ([#97](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/97)). Thanks [@rockmnew](https://github.com/rockmnew). * Thanks [@mdodsworth](https://github.com/mdodsworth) for helping with [#101](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/101). * Move meta information object to document top level ([#95](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/95)). Thanks [@scottwernervt](https://github.com/scottwernervt). #### 0.16.0 (2017-11-08)[¶](#id19) Features: * Add support for exluding or including nested fields on relationships ([#94](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/94)). Thanks [@scottwernervt](https://github.com/scottwernervt) for the PR. Other changes: * *Backwards-incompatible*: Drop support for marshmallow<2.8.0 #### 0.15.1 (2017-08-23)[¶](#id20) Bug fixes: * Fix pointer for `id` in error objects ([#90](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/90)). Thanks [@rgant](https://github.com/rgant) for the catch and patch. 
#### 0.15.0 (2017-06-27)[¶](#id21) Features: * `Relationship` field supports deserializing included data ([#83](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/83)). Thanks [@anuragagarwal561994](https://github.com/anuragagarwal561994) for the suggestion and thanks [@asteinlein](https://github.com/asteinlein) for the PR. #### 0.14.0 (2017-04-30)[¶](#id22) Features: * `Relationship` respects its passed `Schema's` `get_attribute` method when getting the `id` field for resource linkages ([#80](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/80)). Thanks [@scmmmh](https://github.com/scmmmh) for the PR. #### 0.13.0 (2017-04-18)[¶](#id23) Features: * Add support for including deeply nested relationships in compount documents ([#61](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/61)). Thanks [@mrhanky17](https://github.com/mrhanky17) for the PR. #### 0.12.0 (2017-04-16)[¶](#id24) Features: * Use default attribute value instead of raising exception if relationship is `None` on `Relationship` field ([#75](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/75)). Thanks [@akira-dev](https://github.com/akira-dev). #### 0.11.1 (2017-04-06)[¶](#id25) Bug fixes: * Fix formatting JSON pointer when serializing an invalid object at index 0 ([#77](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/77)). Thanks [@danpoland](https://github.com/danpoland) for the catch and patch. #### 0.11.0 (2017-03-12)[¶](#id26) Bug fixes: * Fix compatibility with marshmallow 3.x. Other changes: * *Backwards-incompatible*: Remove unused `utils.get_value_or_raise` function. #### 0.10.2 (2017-03-08)[¶](#id27) Bug fixes: * Fix format of error object returned when `data` key is not included in input ([#66](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/66)). Thanks [@RazerM](https://github.com/RazerM). * Fix serializing compound documents when `Relationship` is passed a schema class and `many=True` ([#67](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/67)). Thanks [@danpoland](https://github.com/danpoland) for the catch and patch. #### 0.10.1 (2017-02-05)[¶](#id28) Bug fixes: * Serialize `None` and empty lists (`[]`) to valid JSON-API objects ([#58](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/58)). Thanks [@rgant](https://github.com/rgant) for reporting and sending a PR. #### 0.10.0 (2017-01-05)[¶](#id29) Features: * Add `fields.Meta` for (de)serializing `meta` data on resource objects ([#28](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/28)). Thanks [@rubdos](https://github.com/rubdos) for the suggestion and initial work. Thanks [@RazerM](https://github.com/RazerM) for the PR. Other changes: * Test against Python 3.6. #### 0.9.0 (2016-10-08)[¶](#id30) Features: * Add Flask-specific schema with class Meta options for self link generation: `self_view`, `self_view_kwargs`, and `self_view_many` ([#51](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/51)). Thanks [@asteinlein](https://github.com/asteinlein). Bug fixes: * Fix formatting of validation error messages on newer versions of marshmallow. Other changes: * Drop official support for Python 3.3. #### 0.8.0 (2016-06-20)[¶](#id31) Features: * Add support for compound documents ([#11](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/11)). Thanks [@Tim-Erwin](https://github.com/Tim-Erwin) and [@woodb](https://github.com/woodb) for implementing this. 
* *Backwards-incompatible*: Remove `include_data` parameter from `Relationship`. Use `include_resource_linkage` instead.

#### 0.7.1 (2016-05-08)[¶](#id32)

Bug fixes:

* Format correction for error objects ([#47](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/47)). Thanks [@ZeeD26](https://github.com/ZeeD26) for the PR.

#### 0.7.0 (2016-04-03)[¶](#id33)

Features:

* Correctly format `messages` attribute of `ValidationError` raised when `type` key is missing in input ([#43](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/43)). Thanks [@ZeeD26](https://github.com/ZeeD26) for the catch and patch.
* JSON pointers for error objects for relationships will point to the `data` key ([#41](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/41)). Thanks [@cmanallen](https://github.com/cmanallen) for the PR.

#### 0.6.0 (2016-03-24)[¶](#id34)

Features:

* `Relationship` deserialization improvements: properly validate to-one and to-many relationships and validate the presence of the `data` key ([#37](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/37)). Thanks [@cmanallen](https://github.com/cmanallen) for the PR.
* `attributes` is no longer a required key in the `data` object ([#39](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/#39), [#42](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/42)). Thanks [@ZeeD26](https://github.com/ZeeD26) for reporting and [@cmanallen](https://github.com/cmanallen) for the PR.
* Added `id` serialization ([#39](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/39)). Thanks again [@cmanallen](https://github.com/cmanallen).

#### 0.5.0 (2016-02-08)[¶](#id35)

Features:

* Add relationship deserialization ([#15](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/15)).
* Allow serialization of foreign key attributes ([#32](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/32)).
* Relationship IDs serialize to strings, as is required by JSON-API ([#31](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/31)).
* `Relationship` field respects `dump_to` parameter ([#33](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/33)).

Thanks [@cmanallen](https://github.com/cmanallen) for all of these changes.

Other changes:

* The minimum supported marshmallow version is 2.3.0.

#### 0.4.2 (2015-12-21)[¶](#id36)

Bug fixes:

* Relationship names are inflected when appropriate ([#22](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/22)). Thanks [@angelosarto](https://github.com/angelosarto) for reporting.

#### 0.4.1 (2015-12-19)[¶](#id37)

Bug fixes:

* Fix serializing null and empty relationships with `flask.Relationship` ([#24](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/24)). Thanks [@floqqi](https://github.com/floqqi) for the catch and patch.

#### 0.4.0 (2015-12-06)[¶](#id38)

* Correctly serialize null and empty relationships ([#10](https://github.com/marshmallow-code/marshmallow-jsonapi/issues/10)). Thanks [@jo-tham](https://github.com/jo-tham) for the PR.
* Add `self_url`, `self_url_kwargs`, and `self_url_many` class Meta options for adding `self` links. Thanks [@asteinlein](https://github.com/asteinlein) for the PR.

#### 0.3.0 (2015-10-18)[¶](#id39)

* *Backwards-incompatible*: Replace `HyperlinkRelated` with `Relationship` field. Supports related links (`related`), relationship links (`self`), and resource linkages.
* *Backwards-incompatible*: Validate and deserialize JSON API-formatted request payloads.
* Fix error formatting when `many=True`.
* Fix error formatting in strict mode.

#### 0.2.2 (2015-09-26)[¶](#id40)

* Fix for marshmallow 2.0.0 compat.

#### 0.2.1 (2015-09-16)[¶](#id41)

* Compatibility with marshmallow>=2.0.0rc2.

#### 0.2.0 (2015-09-13)[¶](#id42)

Features:

* Add framework-independent `HyperlinkRelated` field.
* Support inflection of attribute names via the `inflect` class Meta option.

Bug fixes:

* Fix for making `HyperlinkRelated` read-only by default.

Support:

* Docs updates.
* Tested on Python 3.5.

#### 0.1.0 (2015-09-12)[¶](#id43)

* First PyPI release.
* Include Schema that serializes objects to resource objects.
* Flask-compatible HyperlinkRelated field for serializing relationships.
* Errors are formatted as JSON API error objects.

### Authors[¶](#authors)

#### Lead[¶](#lead)

* <NAME> [@sloria](https://github.com/sloria)

#### Contributors (chronological)[¶](#contributors-chronological)

* <NAME> [@jo-tham](https://github.com/jo-tham)
* <NAME> [@asteinlein](https://github.com/asteinlein)
* [@floqqi](https://github.com/floqqi)
* <NAME> [@cmanallen](https://github.com/cmanallen)
* <NAME> [@ZeeD26](https://github.com/ZeeD26)
* <NAME> [@Tim-Erwin](https://github.com/Tim-Erwin)
* <NAME> [@woodb](https://github.com/woodb)
* <NAME> [@RazerM](https://github.com/RazerM)
* <NAME> [@rgant](https://github.com/rgant)
* <NAME> [@danpoland](https://github.com/danpoland)
* <NAME> [@akira-dev](https://github.com/akira-dev)
* [@mrhanky17](https://github.com/mrhanky17)
* <NAME> [@scmmmh](https://github.com/scmmmh)
* <NAME> [@scottwernervt](https://github.com/scottwernervt)
* <NAME> [@mdodsworth](https://github.com/mdodsworth)
* <NAME> [@kumy](https://github.com/kumy)
* <NAME> [@grantHarris](https://github.com/grantHarris)
* <NAME> [@ww3pl](https://github.com/ww3pl)
* [@aberres](https://github.com/aberres)
* <NAME> [@georgealton](https://github.com/georgealton)
* <NAME> [@iamareebjamal](https://github.com/iamareebjamal)
* <NAME> [@mahenzon](https://github.com/mahenzon)
* <NAME> [@tirkarthi](https://github.com/tirkarthi)

### Contributing Guidelines[¶](#contributing-guidelines)

#### Questions, Feature Requests, Bug Reports, and Feedback…[¶](#questions-feature-requests-bug-reports-and-feedback)

…should all be reported on the [Github Issue Tracker](https://github.com/marshmallow-code/marshmallow-jsonapi/issues?state=open).

#### Setting Up for Local Development[¶](#setting-up-for-local-development)

1. Fork [marshmallow-jsonapi](https://github.com/marshmallow-code/marshmallow-jsonapi) on Github.

```
$ git clone https://github.com/marshmallow-code/marshmallow-jsonapi.git
$ cd marshmallow-jsonapi
```

2. Install development requirements. **It is highly recommended that you use a virtualenv.** Use the following command to install an editable version of marshmallow-jsonapi along with its development requirements.

```
# After activating your virtualenv
$ pip install -e '.[dev]'
```

3. Install the pre-commit hooks, which will format and lint your git staged files.

```
# The pre-commit CLI was installed above
$ pre-commit install
```

#### Git Branch Structure[¶](#git-branch-structure)

Marshmallow abides by the following branching model:

`dev`: Current development branch. **New features should branch off here**.

`X.Y-line`: Maintenance branch for release `X.Y`. **Bug fixes should be sent to the most recent release branch.** The maintainer will forward-port the fix to `dev`. Note: exceptions may be made for bug fixes that introduce large code changes.
**Always make a new branch for your work**, no matter how small. Also, **do not put unrelated changes in the same branch or pull request**. This makes it more difficult to merge your changes.

#### Pull Requests[¶](#pull-requests)

1. Create a new local branch.

```
$ git checkout -b name-of-feature dev
```

2. Commit your changes. Write [good commit messages](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).

```
$ git commit -m "Detailed commit message"
$ git push origin name-of-feature
```

3. Before submitting a pull request, check the following:

* If the pull request adds functionality, it is tested and the docs are updated.
* You’ve added yourself to `AUTHORS.rst`.

4. Submit a pull request to `marshmallow-code:dev` or the appropriate maintenance branch. The [CI](https://dev.azure.com/sloria/sloria/_build/latest?definitionId=7&branchName=dev) build must be passing before your pull request is merged.

#### Running tests[¶](#running-tests)

To run all tests:

```
$ pytest
```

To run syntax checks:

```
$ tox -e lint
```

(Optional) To run tests in all supported Python versions in their own virtual environments (must have each interpreter installed):

```
$ tox
```

#### Documentation[¶](#documentation)

Contributions to the documentation are welcome. Documentation is written in [reStructuredText](https://docutils.sourceforge.io/rst.html) (rST). A quick rST reference can be found [here](https://docutils.sourceforge.io/docs/user/rst/quickref.html). Builds are powered by [Sphinx](http://sphinx.pocoo.org/).

To build the docs in “watch” mode:

```
$ tox -e watch-docs
```

Changes in the `docs/` directory will automatically trigger a rebuild.

### License[¶](#license)

```
Copyright 2015-2020 Steven Loria and contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

Links[¶](#links)
---

* [marshmallow-jsonapi @ GitHub](https://github.com/marshmallow-code/marshmallow-jsonapi)
* [marshmallow-jsonapi @ PyPI](https://pypi.python.org/pypi/marshmallow-jsonapi)
* [Issue Tracker](https://github.com/marshmallow-code/marshmallow-jsonapi/issues)
These guides should help you get started with Bookshelf quickly. For now there's not a lot of information, because there wasn't a specific section of the documentation dedicated to guides/tutorials until recently, but we hope to begin adding more guides to make it easier for newcomers to discover as much of the general functionality of Bookshelf as possible. You can contribute to these documents by editing or creating files in the `tutorials` directory.

# Construction

## Bookshelf

The Bookshelf library is initialized by passing an initialized Knex client instance. The Knex documentation provides a number of examples for different databases.

# bookshelf.model(name, [Model], [staticProperties]) → Model source

```
// Defining and registering a model
module.exports = bookshelf.model('Customer', {
  tableName: 'customers',
  orders() {
    return this.hasMany('Order')
  }
})

// Retrieving a previously registered model
const Customer = bookshelf.model('Customer')

// Registering already defined models
// file: customer.js
const Customer = bookshelf.Model.extend({
  tableName: 'customers',
  orders() {
    return this.hasMany('Order')
  }
})
module.exports = bookshelf.model('Customer', Customer)

// file: order.js
const Order = bookshelf.Model.extend({
  tableName: 'orders',
  customer() {
    return this.belongsTo('Customer')
  }
})
module.exports = bookshelf.model('Order', Order)
```

* `name` `string` The name to save the model as, or the name of the model to retrieve if no further arguments are passed to this method.
* `[Model]` `Model` `|` `Object` The model to register. If a plain object is passed it will be converted to a `Model`. See the example above.
* `[staticProperties]` `Object` If a plain object is passed as the second argument, this can be used to specify additional static properties and methods for the new model that is created.

`Model` The registered model.

Registers a model. Omit the second argument `Model` to return a previously registered model that matches the provided name. Note that when registering a model with this method it will also be available to all relation methods, allowing you to use a string name in that case. See the calls to `hasMany()` in the examples above.

# bookshelf.plugin(plugin, options) → Bookshelf source

* `plugin` `string` `|` `array` `|` `function` The plugin or plugins to load. If you provide a string it can represent an npm package or a file somewhere in your project. You can also pass a function as an argument to add it as a plugin. Finally, it's also possible to pass an array of strings or functions to add them all at once.
* `options` `mixed` This can be anything you want and it will be passed directly to the plugin as the second argument when loading it.

`Bookshelf` The bookshelf instance for chaining.

This method provides a nice, tested, standardized way of adding plugins to a `Bookshelf` instance, injecting the current instance into the plugin, which should be a `module.exports`.

You can add a plugin by specifying a string with the name of the plugin to load. In this case it will try to find a module. It will pass the string to `require()`, so you can either require a third-party dependency by name or one of your own modules by relative path:

```
bookshelf.plugin('./bookshelf-plugins/my-favourite-plugin');
bookshelf.plugin('plugin-from-npm');
```

There are a few official plugins published in `npm`, along with many independently developed ones. See the list of available plugins.
You can also provide an array of strings or functions, which is the same as calling `bookshelf.plugin()` multiple times. In this case the same options object will be reused:

```
bookshelf.plugin(['cool-plugin', './my-plugins/even-cooler-plugin']);
```

Example plugin:

```
// Converts all string values to lower case when setting attributes on a model
module.exports = function(bookshelf) {
  bookshelf.Model = bookshelf.Model.extend({
    set(key, value, options) {
      if (!key) return this
      if (typeof value === 'string') value = value.toLowerCase()
      return bookshelf.Model.prototype.set.call(this, key, value, options)
    }
  })
}
```

# bookshelf.resolve(name) → * source

```
const Customer = bookshelf.model('Customer', {
  tableName: 'customers'
})

bookshelf.resolve = (name) => {
  if (name === 'SpecialCustomer') return Customer;
}
```

* `name` `string` The model name to resolve.

`*` The return value will depend on what your re-implementation of this function does.

Override this in your bookshelf instance to define a custom function that will resolve the location of a model or collection when using the `Bookshelf#model` method or when passing a string with a model name in any of the collection methods (e.g. `Model#hasOne`, `Model#hasMany`, etc.). This will only be used if the specified name cannot be found in the registry. Note that this function can return anything you'd like, so it's not restricted in functionality.

# bookshelf.transaction(transactionCallback) → Promise source

```
var Promise = require('bluebird')

Bookshelf.transaction((t) => {
  return new Library({name: 'Old Books'})
    .save(null, {transacting: t})
    .tap(function(model) {
      return Promise.map([
        {title: 'Canterbury Tales'},
        {title: '<NAME>'},
        {title: 'Hamlet'}
      ], (info) => {
        return new Book(info).save({'shelf_id': model.id}, {transacting: t})
      })
    })
}).then((library) => {
  console.log(library.related('books').pluck('title'))
}).catch((err) => {
  console.error(err)
})
```

* `transactionCallback` `Bookshelf~transactionCallback` Callback containing transaction logic. The callback should return a Promise.

`Promise` A promise resolving to the value returned from `transactionCallback`.

An alias to `Knex#transaction`. The `transaction` object must be passed along in the options of any relevant Bookshelf calls, to ensure all queries are on the same connection. The entire transaction block is wrapped in a Promise that will commit the transaction if it resolves successfully, or roll it back if the Promise is rejected.

Note that there is no need to explicitly call `transaction.commit()` or `transaction.rollback()` since the entire transaction will be committed if there are no errors inside the transaction block.

When fetching inside a transaction it's possible to specify a row-level lock by passing the wanted lock type in the `lock` option to `fetch`. Available options are `lock: 'forUpdate'` and `lock: 'forShare'`.

# transactionCallback(transaction) → Promise source

* `transaction` `Transaction`

`Promise` The Promise will resolve to the return value of the callback, or be rejected with an error thrown inside it. If it resolves, the entire transaction is committed, otherwise it is rolled back.

This is a transaction block to be provided to `Bookshelf#transaction`. All of the database operations inside it can be part of the same transaction by passing the `transacting: transaction` option to `fetch`, `save` or `destroy`.
Note that unless you explicitly pass the `transaction` object along to any relevant model operations, those operations will not be part of the transaction, even though they may be inside the transaction callback.

## Model

Models are simple objects representing individual database rows, specifying the tableName and any relations to other models. They can be extended with any domain-specific methods, which can handle components such as validations, computed properties, and access control.

# new Model(attributes, [options]) source

* `[hasTimestamps]` `Boolean` Initial value for `hasTimestamps`.
* `[parse=false]` `Boolean`

When defining a model you should use the `bookshelf.model` method, since it will allow you to avoid circular dependency problems. However, it's still possible to create models using the regular constructor.

When creating an instance of a model, you can pass in the initial values of the attributes, which will be `set` on the model. If you define an `initialize` function, it will be invoked when the model is created.

```
new Book({
  title: "One Thousand and One Nights",
  author: "Scheherazade"
});
```

In rare cases, if you're looking to get fancy, you may want to override `constructor`, which allows you to replace the actual constructor function for your model.

```
let Book = bookshelf.model('Book', {
  tableName: 'documents',
  constructor: function() {
    bookshelf.Model.apply(this, arguments);
    this.on('saving', function(model, attrs, options) {
      options.query.where('type', '=', 'book');
    });
  }
});
```

# model.initialize(attributes, [options]) source

* `attributes` `Object` Initial values for this model's attributes.
* `[options]` `Object` The hash of options passed to `constructor`.

Called by the `Model` constructor when creating a new instance. Override this function to add custom initialization, such as event listeners. Because plugins may override this method in subclasses, make sure to call your super (extended) class, e.g. `this.constructor.__super__.initialize.apply(this, arguments)`, as shown in the `Customer` example further below.

# Model.collection([models], [options]) → Collection source

```
Customer.collection().fetch().then((customers) => {
  // ...
})
```

* `[models]` `Model[]` Any models to be added to the collection.
* `[options]` `Object` Additional options to pass to the `Collection` constructor.
* `[comparator]` `string` `|` `function` If specified this is used to sort the collection. It can be a string representing the model attribute to sort by, or a custom function. Check the documentation for Array.prototype.sort for more info on how to use a custom comparator function. If this option is not specified the collection sort order depends on what the database returns.

`Collection` The newly created collection. It will be empty unless any models were passed as the first argument.

A simple static helper to instantiate a new `Collection`, setting the model it's called on as the collection's target model.

# Model.count([column], [options]) → Promise

```
Duck.count().then((count) => {
  console.log('number of ducks', count)
})
```

`Promise`

Shortcut to a model's `count` method so you don't need to instantiate a new model to count the number of records.
``` const Promise = require('bluebird') const compare = require('some-crypt-library') const Customer = bookshelf.model('Customer', { initialize() { this.constructor.__super__.initialize.apply(this, arguments) // Setting up a listener for the 'saving' event this.on('saving', this.validateSave) }, validateSave() { return doValidation(this.attributes) }, account() { // Defining a relation with the Account model return this.belongsTo(Account) } }, { login: Promise.method(function(email, password) { if (!email || !password) throw new Error('Email and password are both required') return new this({email: email.toLowerCase()}) .fetch() .tap(function(customer) { if (!compare(password, customer.get('password')) throw new Error('Invalid password') }) }) }) ``` `function` Constructor for new Model subclass. This static method allows you to create your own Model classes by extending `bookshelf.Model` . It correctly sets up the prototype chain, which means that subclasses created this way can be further extended and subclassed as far as you need. # Model.fetchAll() → Promise<Collection> source `Promise<Collection>` Simple helper function for retrieving all instances of the given model. # Model.forge([attributes], [options]) source `Boolean` Initial value for `hasTimestamps` . * `[parse=false]` `Boolean` A simple helper function to instantiate a new Model without needing `new` . # model.defaults :Object|Null source ``` var MyModel = bookshelf.model('MyModel', { defaults: {property1: 'foo', property2: 'bar'}, tableName: 'my_models' }) MyModel.forge({property1: 'blah'}).save().then(function(model) { // {property1: 'blah', property2: 'bar'} }) ``` This can be used to define any default values for attributes that are not present when creating or updating a model in a `save` call. The default behavior is to not use these default values on updates unless the `defaults: true` option is passed to the `save` call. For inserts the default values will always be used if present. # model.hasTimestamps :Boolean|Array source ``` var MyModel = bookshelf.model('MyModel', { hasTimestamps: true, tableName: 'my_models' }) var myModel = MyModel.forge({name: 'blah'}).save().then(function(savedModel) { // { // name: 'blah', // created_at: 'Sun Mar 25 2018 15:07:11 GMT+0100 (WEST)', // updated_at: 'Sun Mar 25 2018 15:07:11 GMT+0100 (WEST)' // } }) myModel.save({created_at: new Date(2015, 5, 2)}).then(function(updatedModel) { // { // name: 'blah', // created_at: 'Tue Jun 02 2015 00:00:00 GMT+0100 (WEST)', // updated_at: 'Sun Mar 25 2018 15:07:11 GMT+0100 (WEST)' // } }) ``` Automatically sets the current date and time on the timestamp attributes `created_at` and `updated_at` based on the type of save method. The update method will only update `updated_at` , while the insert method will set both values. To override the default attribute names, assign an array to this property. The first element will be the created column name and the second will be the updated one. If any of these elements is set to `null` that particular timestamp attribute will not be used in the model. For example, to automatically update only the `created_at` attribute set this property to `['created_at', null]` . You can override the timestamp attribute values of a model and those values will be used instead of the automatic ones when saving. 
# model.hidden :null|Array source ``` const MyModel = bookshelf.model('MyModel', { tableName: 'my_models', hidden: ['password'] }) # model.id :number|string source ``` const Television = bookshelf.model('Television', { tableName: 'televisions', idAttribute: 'coolId' }) new Television({coolId: 1}).fetch(tv => { tv.get('coolId') // 1 tv.id // 1 }) ``` A special property of models which represents their unique identifier, named by the `idAttribute` . If you set the `id` in the attributes hash, it will be copied onto the model as a direct property. Models can be retrieved by their id from collections, and the id is used when fetching models and building model relations. Note that a model's `id` property can always be accessed even when the value of its `idAttribute` is not `'id'` . # model.idAttribute :string source This tells the model which attribute to expect as the unique identifier for each database row (typically an auto-incrementing primary key named `'id'` ). Note that if you are using `parse` and `format` (to have your model's attributes in `camelCase` , but your database's columns in `snake_case` , for example) this refers to the name returned by parse ( `myId` ), not the actual database column ( `my_id` ). You can also get the parsed id attribute value by using the model's `parsedIdAttribute` method. If the table you're working with does not have a Primary-Key in the form of a single column you'll have to override it with a getter that returns `null` . Overriding with `undefined` does not cascade the default behavior of the value `'id'` . Such a getter in ES6 would look like ``` get idAttribute() { return null } ``` # model.requireFetch :boolean source ``` // Default behavior const MyModel = bookshelf.model('MyModel', { tableName: 'my_models' }) new MyModel({id: 1}).fetch().catch(error => { // Will throw NotFoundError if there are no results }) // Overriding the default behavior const MyModel = bookshelf.model('MyModel', { requireFetch: false, tableName: 'my_models' }) new MyModel({id: 1}).fetch(model => { // model will be null if there are no results }) ``` * 1.0.0 Allows defining the default behavior when there are no results when fetching a model from the database. This applies only when fetching a single model using `fetch` or `Collection#fetchOne` . You can override this model option when fetching by passing the `{require: false}` or `{require: true}` option to any of the fetch methods mentioned above. # model.tableName :string source ``` var Television = bookshelf.model('Television', { tableName: 'televisions' }); ``` A required property for any database usage, The `tableName` property refers to the database table name the model will query against. # model.visible :null|Array source ``` const MyModel = bookshelf.model('MyModel', { tableName: 'my_models', visible: ['name', 'created_at'] }) # model.belongsTo(Target, [foreignKey], [foreignKeyTarget]) → Model source ``` const Book = bookshelf.model('Book', { tableName: 'books', author() { return this.belongsTo('Author') } }) // select * from `books` where id = 1 // select * from `authors` where id = book.author_id Book.where({id: 1}).fetch({withRelated: ['author']}).then((book) => { console.log(JSON.stringify(book.related('author'))) }) ``` * `Target` `Model` `|` `string` Constructor of `Model` targeted by the join. Can be a string specifying a previously registered model with `Bookshelf#model` . * `[foreignKey]` `string` Foreign key in this model. 
By default, the `foreignKey` is assumed to be the singular form of the `Target` model's `tableName`, followed by `_id`, or `_{{idAttribute}}` if the `idAttribute` property is set.

* `[foreignKeyTarget]` `string` Column in the `Target` model's table which `foreignKey` references. This is only needed in case it's other than the `Target` model's `id` / `idAttribute`.

`Model` The return value will always be a model, even if the relation doesn't exist, but in that case the relation will be `null` when `serializing` the model.

This relationship is used when a model is a member of another `Target` model. It can be used in One-to-one associations as the inverse of a `hasOne`. It can also be used in One-to-many associations as the inverse of `hasMany`, and is the "one" side of that association. In both cases, the belongsTo relationship is used for a model that is a member of another Target model, referenced by the `foreignKey` attribute in the current model.

# model.belongsToMany(Target, [joinTableName], [foreignKey], [otherKey], [foreignKeyTarget], [otherKeyTarget]) → Collection source

```
const Account = bookshelf.model('Account', {
  tableName: 'accounts'
})

const User = bookshelf.model('User', {
  tableName: 'users',

  allAccounts() {
    return this.belongsToMany('Account')
  },

  adminAccounts() {
    return this.belongsToMany('Account').query({where: {access: 'admin'}})
  },

  viewAccounts() {
    return this.belongsToMany('Account').query({where: {access: 'readonly'}})
  }
})
```

* `Target` `Model` `|` `string` Constructor of `Model` targeted by join. Can be a string specifying a previously registered model with `Bookshelf#model`.
* `[joinTableName]` `string` Name of the joining table. Defaults to the two table names ordered alphabetically and joined by an underscore.
* `[foreignKey]` `string` Foreign key in this model. By default, the `foreignKey` is assumed to be the singular form of this model's tableName, followed by `_id` / `_{{idAttribute}}`.
* `[otherKey]` `string` Foreign key in the `Target` model. By default, this is assumed to be the singular form of the `Target` model's tableName, followed by `_id` / `_{{idAttribute}}`.
* `[foreignKeyTarget]` `string` Column in this model's table which `foreignKey` references. This is only needed if it's not the default `id` / `idAttribute`.
* `[otherKeyTarget]` `string` Column in the `Target` model's table which `otherKey` references. This is only needed if it's not the expected default of the `Target` model's `id` / `idAttribute`.

`Collection` A new empty collection that is decorated with extra pivot helper methods. See the description below for more info.

Defines a many-to-many relation, where the current model is joined to one or more of a `Target` model through another table. The default name for the joining table is the two models' table names joined by an underscore, and ordered alphabetically. For example, a `users` table and an `accounts` table would have a joining table named `accounts_users`.

The default key names in the joining table are the singular versions of the model table names, followed by `_id` / `_{{idAttribute}}`. So in the above example the columns in the joining table would be `user_id`, `account_id`, and `access`, which is used as an example of how dynamic relations can be formed using different contexts.
To customize the keys or the `tableName` used for the join table, you may specify them in the arguments to the function call:

```
this.belongsToMany(Account, 'users_accounts', 'userId', 'accountId')
```

If you wish to create a belongsToMany association where the joining table has a primary key and extra attributes in the model, you may create a `belongsToMany` `through` relation:

```
const Doctor = bookshelf.model('Doctor', {
  patients() {
    return this.belongsToMany('Patient').through('Appointment')
  }
})

const Appointment = bookshelf.model('Appointment', {
  patient() {
    return this.belongsTo('Patient')
  },

  doctor() {
    return this.belongsTo('Doctor')
  }
})

const Patient = bookshelf.model('Patient', {
  doctors() {
    return this.belongsToMany('Doctor').through('Appointment')
  }
})
```

Collections returned by a `belongsToMany` relation are decorated with several pivot helper methods. If you need more information about these methods see `attach`, `detach`, `updatePivot` and `withPivot`.

# model.clone() → Model source

`Model` Cloned instance of this model.

Returns a new instance of the model with identical `attributes`, including any relations from the cloned model.

# model.count([column], [options]) → Promise

```
new Duck().where('color', 'blue').count('name').then((count) => {
  console.log('number of blue ducks', count)
})
```

A promise resolving to the number of matching rows. By default this will be a number, except with PostgreSQL where it will be a string. Check the description to see how to return a number instead in this case.

Gets the number of matching records in the database, respecting any previous calls to `Model#query`. If the `column` argument is provided, records with a `null` value in that column will be excluded from the count.

Note that in PostgreSQL the result is a string by default. To read more about the reasons for this see the pull request that implemented it in the `node-postgres` database driver. If you're sure that the results will always be less than 2^53 (9007199254740991) you can override the default string parser like this:

```
require('pg').defaults.parseInt8 = true
```

Put this snippet before the call to `require('knex')` wherever you are initializing `knex`.

# model.destroy([options]) → Promise<Model> source

```
new User({id: 1})
  .destroy()
  .then(function(model) {
    // ...
  });
```

* `[transacting]` `Transaction` Optionally run the query in a transaction.
* `[require=true]` `Boolean` Throw a `Model.NoRowsDeletedError` if no records are affected by destroy. This is the default behavior as of version 0.13.0.
* `[debug=false]` `boolean` Whether to enable debugging mode or not. When enabled will show information about the queries being run.

`Promise<Model>` A promise resolving to the destroyed and thus empty model, i.e. all attributes are `undefined`.

`destroy` performs a `delete` on the model, using the model's `idAttribute` to constrain the query.

A `"destroying"` event is triggered on the model before being destroyed. To prevent destroying the model, throwing an error inside one of the event listeners will stop destroying the model and reject the promise.

A `"destroyed"` event is fired after the model's removal is completed.

# model.escape(attribute) → string source

* `attribute` `string` The attribute to escape.

`string` HTML-escaped value of an attribute.

Get the HTML-escaped value of an attribute.

# model.fetch([options]) → Promise<Model|null> source

* `[require=true]` `Boolean` Whether or not to reject the returned response with a `NotFoundError` if there are no results when fetching.
If set to `false` it will resolve with `null` instead. * `[columns='*']` `string` `|` `string[]` Specify columns to be retrieved. * `[transacting]` `Transaction` Optionally run the query in a transaction. * `[lock]` `string` Type of row-level lock to use. Valid options are `forShare` and `forUpdate` . This only works in conjunction with the `transacting` option, and requires a database that supports it. * `[withRelated]` `string` `|` `Object` `|` `mixed[]` Relations to be retrieved with `Model` instance. Either one or more relation names or objects mapping relation names to query callbacks. * `[debug=false]` `boolean` Whether to enable debugging mode or not. When enabled will show information about the queries being run. `Promise<Model|null>` A promise resolving to the fetched `model` or `null` if none exists and the `require: false` option is passed. Fetches a `model` from the database, using any `attributes` currently set on the model to constrain the results. A `"fetching"` event will be fired just before the record is fetched; a good place to hook into for validation. `"fetched"` event will be fired when a record is successfully retrieved. If you need to constrain the query performed by fetch, you can call `query` or `where` before calling fetch. ``` // select * from `books` where `ISBN-13` = '9780440180296' new Book({'ISBN-13': '9780440180296'}) .fetch() .then(function(model) { // outputs 'Slaughterhouse Five' console.log(model.get('title')); }); ``` If you'd like to only fetch specific columns, you may specify a `columns` property in the `options` for the fetch call, or use `query` , tapping into the Knex column method to specify which columns will be fetched. A single property, or an array of properties can be specified as a value for the `withRelated` property. You can also execute callbacks on relations queries (eg. for sorting a relation). The results of these relation queries will be loaded into a `relations` property on the model, may be retrieved with the `related` method, and will be serialized as properties on a `toJSON` call unless `{shallow: true}` is passed. ``` let Book = bookshelf.model('Book', { tableName: 'books', editions: function() { return this.hasMany('Edition'); }, chapters: function() { return this.hasMany('Chapter'); }, genre: function() { return this.belongsTo('Genre'); } }) new Book({'ISBN-13': '9780440180296'}).fetch({ withRelated: [ 'genre', 'editions', { chapters: function(query) { query.orderBy('chapter_number'); }} ] }).then(function(book) { console.log(book.related('genre').toJSON()); console.log(book.related('editions').toJSON()); console.log(book.toJSON()); }); ``` # model.fetchAll([options]) → Promise source * `[options]` `Object` Set of options to modify the request. * `[require=false]` `boolean` Whether or not to reject the returned Promise with a if no records can be fetched from the database. * `[transacting]` `Transaction` Optionally run the query in a transaction. * `[debug=false]` `boolean` Whether to enable debugging mode or not. When enabled will show information about the queries being run. This error is used to reject the Promise in the event of an empty response from the database in case the `require: true` fetch option is used. `Promise` A Promise resolving to the fetched `collection` . Fetches a collection of `models` from the database, using any query parameters currently set on the model to constrain the results. Returns a Promise that will resolve with the fetched collection. 
If there are no results it will resolve with an empty collection. If instead you wish the Promise to be rejected with a `Collection.EmptyError`, pass the `require: true` option. If you need to constrain the results, you can call the `query` or `where` methods before calling this method.

# model.fetchPage([options]) → Promise<Collection> source

```
new Car()
  .fetchPage({
    pageSize: 15, // Defaults to 10 if not specified
    page: 3, // Defaults to 1 if not specified
    withRelated: ['engine'] // Passed to Model#fetchAll
  })
  .then(function(results) {
    console.log(results) // Paginated results object with metadata example below
  })

// Pagination results:
{
  models: [
    // Regular bookshelf Collection
  ],
  // other standard Collection attributes
  // ...
  pagination: {
    rowCount: 53, // Total number of rows found for the query before pagination
    pageCount: 4, // Total number of pages of results
    page: 3, // The requested page number
    pageSize: 15 // The requested number of rows per page
  }
}
```

* `[options]` `Object` Besides the basic options that can be passed to `Model#fetchAll`, there are some additional pagination options that can be specified.
* `[pageSize]` `number` How many models to include in each page, defaulting to 10 if not specified. Used only together with the `page` option.
* `[page]` `number` Page number to retrieve. If greater than the available rows it will return an empty Collection. The first page is number `1`. Used only with the `pageSize` option.
* `[limit]` `number` How many models to include in each page, defaulting to 10 if not specified. Used only together with the `offset` option.
* `[offset]` `number` Index to begin fetching results from. The default and initial value is `0`. Used only with the `limit` option.
* `[disableCount=false]` `boolean` Whether to disable the query for counting how many records are in the full result.
* `[debug=false]` `boolean` Whether to enable debugging mode or not. When enabled will show information about the queries being run.

`Promise<Collection>` Returns a Promise that will resolve to the paginated collection of models.

This method is similar to `Model#fetchAll`, but fetches a single page of results as specified by the limit (page size) and offset (page number).

Any options that may be passed to `Model#fetchAll` may also be passed in the options to this method. Additionally, to perform pagination, you may include either an `offset` and `limit`, or a `page` and `pageSize`. By default, with no parameters or some missing parameters, `fetchPage` will use default values of `{page: 1, pageSize: 10}`.

# model.format(attributes) → Object source

* `attributes` `Object` The attributes to be converted.

`Object` Formatted attributes.

The `format` method is used to modify the current state of the model before it is persisted to the database. The `attributes` passed are a shallow clone of the `model`, and are only used for inserting/updating - the current values of the model are left intact.

Do note that `format` is used to modify the state of the model when accessing the database, so if you remove an attribute in your `format` method, that attribute will never be persisted to the database, but it will also never be used when doing a `fetch()`, which may cause unexpected results. You should be very cautious with implementations of this method that may remove the primary key from the list of attributes.

If you need to modify the database data before it is given to the model, override the `parse` method instead. That method does the opposite operation of `format`.
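As a sketch of how `format` and `parse` can mirror each other, a model that keeps camelCase attributes in memory but snake_case columns in the database might look like this (the `my_models` table and the use of lodash are illustrative assumptions):

```
const _ = require('lodash')

const MyModel = bookshelf.model('MyModel', {
  tableName: 'my_models',

  // Convert camelCase attributes to snake_case columns before hitting the database
  format(attributes) {
    return _.mapKeys(attributes, (value, key) => _.snakeCase(key))
  },

  // Convert snake_case columns back to camelCase attributes when reading
  parse(attributes) {
    return _.mapKeys(attributes, (value, key) => _.camelCase(key))
  }
})
```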
# model.get(attribute) → mixed source `note.get("title");` * `attribute` `string` The name of the attribute to retrieve. `mixed` Attribute value. Get the current value of an attribute from the model. # model.has(attribute) → Boolean source `Boolean` True if `attribute` is set, otherwise `false` . Returns `true` if the attribute contains a value that is not null or undefined. # model.hasChanged([attribute]) → Boolean source ``` Author.forge({id: 1}).fetch().then(function(author) { author.hasChanged() // false author.set('name', 'Bob') author.hasChanged('name') // true }) ``` * `[attribute]` `string` A specific attribute to check for changes. `Boolean` `true` if any attribute has changed, `false` otherwise. Alternatively, if the `attribute` argument was specified, checks if that particular attribute has changed. Returns `true` if any `attribute` has changed since the last `fetch` or `save` . If an attribute name is passed as argument, returns `true` only if that specific attribute has changed. Note that even if an attribute is changed by using the `set` method, but the new value is exactly the same as the existing one, the attribute is not considered changed. # model.hasMany(Target, [foreignKey], [foreignKeyTarget]) → Collection source ``` const Author = bookshelf.model('Author', { tableName: 'authors', books() { return this.hasMany('Book') } }) // select * from `authors` where id = 1 // select * from `books` where author_id = 1 Author.where({id: 1}).fetch({withRelated: ['books']}).then(function(author) { console.log(JSON.stringify(author.related('books'))) }) ``` `Collection` A new empty Collection. This relation specifies that this model has one or more rows in another table which match on this model's primary key. # model.hasOne(Target, [foreignKey], [foreignKeyTarget]) → Model source ``` const Record = bookshelf.model('Record', { tableName: 'health_records' }) const Patient = bookshelf.model('Patient', { tableName: 'patients', record() { return this.hasOne('Record') } }) // select * from `health_records` where `patient_id` = 1 new Patient({id: 1}).related('record').fetch().then(function(model) { // ... }) // Alternatively, if you don't need the relation loaded on the patient's relations hash: new Patient({id: 1}).record().fetch().then(function(model) { // ... }) ``` `Model` The return value will always be a model, even if the relation doesn't exist, but in that case the relation will be `null` when `serializing` the model. This relation specifies that this table has exactly one of another type of object, specified by a foreign key in the other table. # model.isNew() source ``` var modelA = new bookshelf.Model(); modelA.isNew(); // true var modelB = new bookshelf.Model({id: 1}); modelB.isNew(); // false ``` Checks for the existence of an id to determine whether the model is considered "new". 
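A small sketch tying together a few of the accessors above (`get`, `has` and `isNew`); the `Author` model and its attributes are illustrative only:

```
const author = new Author({name: 'Alice'})

author.isNew()        // true, since no id is set yet
author.get('name')    // 'Alice'
author.has('name')    // true
author.has('surname') // false, the attribute is not set

author.save().then(saved => {
  saved.isNew() // false, the database has assigned an id
})
```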
# model.load(relations, [options]) → Promise<Model> source

```
// Using an array of strings with relation names
new Posts().fetch().then(function(collection) {
  return collection.at(0).load(['author', 'content', 'comments.tags'])
}).then(function(model) {
  JSON.stringify(model)
  // {
  //   title: 'post title',
  //   author: {...},
  //   content: {...},
  //   comments: [
  //     {tags: [...]}, {tags: [...]}
  //   ]
  // }
})

// Using an object with query callbacks to filter the relations
new Posts().fetch().then(function(collection) {
  return collection.at(0).load({comments: function(qb) {
    qb.where('comments.is_approved', '=', true)
  }})
}).then(function(model) {
  JSON.stringify(model)
  // the model now includes all approved comments
})
```

`Promise<Model>` A promise resolving to this `model`.

The load method takes an array of relations to eager load attributes onto a `Model`, in a similar way that the `withRelated` option works on `fetch`. Dot separated attributes may be used to specify deep eager loading.

It is possible to pass an object with query callbacks to filter the relations to eager load. An example is presented above.

# model.morphMany(Target, [name], [columnNames], [morphValue]) → Collection source

```
let Post = bookshelf.model('Post', {
  tableName: 'posts',
  photos: function() {
    return this.morphMany('Photo', 'imageable');
  }
});
```

And with custom columnNames:

```
let Post = bookshelf.model('Post', {
  tableName: 'posts',
  photos: function() {
    return this.morphMany('Photo', 'imageable', ['ImageableType', 'ImageableId']);
  }
});
```

# model.morphOne(Target, [name], [columnNames], [morphValue]) → Model source

```
let Site = bookshelf.model('Site', {
  tableName: 'sites',
  photo: function() {
    return this.morphOne('Photo', 'imageable');
  }
});
```

And with custom `columnNames`:

```
let Site = bookshelf.model('Site', {
  tableName: 'sites',
  photo: function() {
    return this.morphOne('Photo', 'imageable', ['ImageableType', 'ImageableId']);
  }
});
```

Note that both `columnNames` and `morphValue` are optional arguments. How your argument is treated when only one is specified depends on the type. If your argument is an array, it will be assumed to contain custom `columnNames`. If it's not, it will be assumed to indicate a `morphValue`.

# model.morphTo(name, [columnNames], [Target]) → Model source

* `name` `string` Prefix for `_id` and `_type` columns.
* `[columnNames]` `string[]` Array containing two column names, where the first is the `_type` and the second is the `_id`.
* `[Target]` `Model` `|` `string` Constructor of `Model` targeted by join. Can be a string specifying a previously registered model with `Bookshelf#model`.

`Model` The related but empty model.
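Before the inverse `morphTo` side is described below, here is a small sketch of passing only a `morphValue` to the owning side, following the argument-type rule above (the `'favicon'` string is an illustrative value that mirrors the `morphTo` example further down):

```
let Site = bookshelf.model('Site', {
  tableName: 'sites',
  photo: function() {
    // Since the third argument is not an array, it is treated as a morphValue:
    // related rows are matched on imageable_type = 'favicon' instead of a value
    // derived from this model's tableName.
    return this.morphOne('Photo', 'imageable', 'favicon');
  }
});
```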
This relation is used to specify the inverse of the `morphOne` or `morphMany` relations, where the `targets` must be passed to signify which `models` are the potential opposite end of the `polymorphic relation` : And with custom column names: And with custom morphValues, the inverse of the `morphValue` of `morphOne` and `morphMany` , where the `morphValues` may be optionally set to check against a different value in the `_type` column other than the `Model#tableName` , for example, a more descriptive name, or a name that betters adheres to whatever standard you are using for models: ``` const Photo = bookshelf.model('Photo', { tableName: 'photos', imageable() { return this.morphTo('imageable', ['Site', 'favicon'], ['Post', 'cover_photo']) } }) ``` # model.off() source ``` customer.off('fetched fetching'); ship.off(); // This will remove all event listeners ``` # model.on() source ``` customer.on('fetching', function(model) { // Do something before the data is fetched from the database }) ``` # model.once(nameOrNames, callback) source # model.orderBy(sort, order) source ``` Car.forge().orderBy('color', 'ASC').fetchAll() .then(function (rows) { // ... ``` * `sort` `string` Column to sort on * `order` `string` Ascending ('ASC') or descending ('DESC') order # model.parse(attributes) → Object source ``` // Example of a parser to convert snake_case to camelCase, using lodash // This is just an example. You can use the official case converter plugin // to achieve the same functionality. model.parse = function(attrs) { return _.mapKeys(attrs, function(value, key) { return _.camelCase(key); }); }; ``` * `attributes` `Object` Hash of attributes to parse. `Object` Parsed attributes. The `parse` method is called whenever a `model` 's data is returned in a `fetch` call. The function is passed the raw database response object, and should return the `attributes` hash to be `set` on the model. The default implementation is a no-op, simply passing through the JSON response. Override this if you need to format the database responses - for example calling JSON.parse on a text field containing JSON, or explicitly typecasting a boolean in a sqlite3 database response. If you need to format your data before it is saved to the database, override the `format` method in your models. That method does the opposite operation of `parse` . # model.previous(attribute) → mixed source ``` Author.forge({id: 1}).fetch().then(function(author) { author.get('name') // Alice author.set('name', 'Bob') author.previous('name') // 'Alice' }) ``` `mixed` The previous value. Returns the value of an attribute like it was before the last change. A change is usually done with the `set` method, but it can also be done with the `save` method. This is useful for getting back the original attribute value after it's been changed. It can also be used to get the original value after a model has been saved to the database or destroyed. In case you want to get the previous value of all attributes at once you should use the `previousAttributes` method. Note that this will return `undefined` if the model hasn't been fetched, saved, destroyed or eager loaded. However, in case one of these operations did take place, it will return the current value if an attribute hasn't changed. If you want to check if an attribute has changed see the `hasChanged` method. 
# model.previousAttributes() → Object source

```
Author.forge({id: 1}).fetch().then(function(author) {
  author.get('name') // Alice
  author.set('name', 'Bob')
  author.previousAttributes() // {id: 1, name: 'Alice'}
})

Author.forge({id: 1}).fetch().then(function(author) {
  author.get('name') // Alice
  return author.save({name: 'Bob'})
}).then(function(author) {
  author.get('name') // Bob
  author.previousAttributes() // {id: 1, name: 'Alice'}
})
```

`Object` The attributes as they were before the last change, or an empty object in case the model data hasn't been fetched yet.

Returns a copy of the `model`'s attributes like they were before the last change. A change is usually done with the `set` method, but it can also be done with the `save` method. This is mostly useful for getting a diff of the model's attributes after changing some of them. It can also be used to get the previous state of a model after it has been saved to the database or destroyed.

In case you want to get the previous value of a single attribute you should use the `previous` method. Note that this will return an empty object if no changes have been made to the model and it hasn't been fetched, saved or eager loaded.

# model.query(arguments) → Model|QueryBuilder source

```
model
  .query('where', 'other_id', '=', '5')
  .fetch()
  .then(function(model) {
    // ...
  });

model
  .query({where: {other_id: '5'}, orWhere: {key: 'value'}})
  .fetch()
  .then(function(model) {
    // ...
  });

model.query(function(qb) {
  qb.where('other_person', 'LIKE', '%Demo').orWhere('other_id', '>', 10);
}).fetch()
  .then(function(model) {
    // ...
  });
```

`Model` `|` `QueryBuilder` Will return this model or, if called with no arguments, the underlying query builder.

The `query` method is used to tap into the underlying Knex query builder instance for the current model. If called with no arguments, it will return the query builder directly. Otherwise, it will call the specified method on the query builder, applying any additional arguments from the `model.query` call. If the method argument is a function, it will be called with the Knex query builder as the context and the first argument, returning the current model.

# model.refresh(options) → Promise<Model> source

* `options` `Object` A hash of options. See `Model#fetch` for details.

`Promise<Model>` A promise resolving to this model.

Update the attributes of a model, fetching it by its primary key. If no attribute matches its `idAttribute`, then fetch by all available fields.

# model.related(name) → Model|Collection|undefined source

```
new Photo({id: 1}).fetch({
  withRelated: ['account']
}).then(function(photo) {
  var account = photo.related('account') // Get the eagerly loaded account

  if (account.id) {
    // Fetch a relation that has not been eager loaded yet
    return account.related('trips').fetch()
  }
})
```

* `name` `string` The name of the relation to retrieve.

`Model` `|` `Collection` `|` `undefined` The specified relation as defined by a method on the model, or `undefined` if it does not exist.

This method returns a specified relation loaded on the relations hash on the model, or calls the associated relation method and adds it to the relations hash if one exists and has not yet been loaded.

# model.resetQuery() → Model source

`Model` Self, this method is chainable.

Used to reset the internal state of the current query builder instance.
This method is called internally each time a database action is completed by `Sync`.

# model.save([attrs], [options]) → Promise<Model> source

```
// Save with no arguments
Model.forge({id: 5, firstName: 'John', lastName: 'Smith'}).save().then((model) => {
  //...
})

// Or add attributes during save
Model.forge({id: 5}).save({firstName: 'John', lastName: 'Smith'}).then((model) => {
  //...
})

// Or, if you prefer, for a single attribute
Model.forge({id: 5}).save('name', '<NAME>').then((model) => {
  //...
})
```

* `[attrs]` `Object` Object containing the key: value pairs that you wish to save. If used with the `patch` option only these values will be saved and any values already set on the model will be ignored. Instead of specifying this argument you can provide both a `key` and `value` arguments to save a single value. This is demonstrated in the example.
* `[options]` `Object`
* `[transacting]` `Transaction` Optionally run the query in a transaction.
* `[method]` `string` Explicitly select a save method, either `"update"` or `"insert"`.
* `[defaults=false]` `Boolean` Whether or not to assign `default` attribute values on a model when performing an update or create operation.
* `[patch=false]` `Boolean` Only save attributes supplied as arguments to the `save` call, ignoring any attributes that may be already set on the model.
* `[require=true]` `Boolean` Whether or not to throw a `Model.NoRowsUpdatedError` if no records are affected by save.
* `[debug=false]` `boolean` Whether to enable debugging mode or not. When enabled will show information about the queries being run.
* `[autoRefresh=true]` `boolean` Whether to enable auto refresh such that after a model is saved it will be populated with all the attributes that are present in the database, so you don't need to manually call `refresh` to update it. This will use two queries unless the database supports the `RETURNING` statement, in which case the model will be saved and its data fetched with a single query.

`Promise<Model>` A promise resolving to the saved and updated model.

This method is used to perform either an insert or update query using the model's set `attributes`.

If the model `isNew`, any `defaults` will be set and an `insert` query will be performed. Otherwise it will `update` the record with a corresponding ID. It is also possible to set default attributes on an `update` by passing the `{defaults: true}` option in the second argument to the `save` call. This will also use the same `defaults` as the `insert` operation.

The type of operation to perform (either `insert` or `update`) can be overridden with the `method` option:

```
// This forces an insert with the specified id instead of the expected update
new Post({name: '<NAME>', id: 34})
  .save(null, {method: 'insert'})
  .then((model) => {
    // ...
  })
```

If you only wish to update with the params passed to the save, you may pass a `{patch: true}` option in the second argument to `save`:

```
// UPDATE authors SET "bio" = 'Short user bio' WHERE "id" = 1
new Author({id: 1, first_name: 'User'})
  .save({bio: 'Short user bio'}, {patch: true})
  .then((model) => {
    // ...
  })
```

Several events fire on the model when starting the save process:

* `"creating"` if the model is being inserted.
* `"updating"` event if the model is being updated.
* `"saving"` event in either case.

To prevent saving the model (for example, with validation), throwing an error inside one of these event listeners will stop the save process and reject the Promise.
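A small sketch of that validation pattern (the `User` model and `validateSave` helper are illustrative, not part of the API); a listener registered in `initialize` can cancel a save simply by throwing:

```
const User = bookshelf.model('User', {
  tableName: 'users',

  initialize() {
    this.constructor.__super__.initialize.apply(this, arguments)
    this.on('saving', this.validateSave)
  },

  validateSave(model, attrs, options) {
    // Throwing here stops the save and rejects the returned promise
    if (!model.get('email')) {
      throw new Error('An email address is required')
    }
  }
})

new User({name: 'no-email'}).save().catch(error => {
  console.log(error.message) // 'An email address is required'
})
```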
If you wish to modify the query when the `"saving"` event is fired, the `knex` query object is available in `options.query`.

After the save is complete the following events will fire:

* `"created"` if a new model was inserted in the database
* `"updated"` if an existing model was updated.
* `"saved"` event either way.

See the Events guide for further details.

# model.serialize([options]) → Object source

```
var artist = new bookshelf.Model({
  firstName: "Wassily",
  lastName: "Kandinsky"
});

artist.set({birthday: "December 16, 1866"});

console.log(JSON.stringify(artist));
// {firstName: "Wassily", lastName: "Kandinsky", birthday: "December 16, 1866"}
```

* `[shallow=false]` `Boolean` Whether to exclude relations from the output or not.
* `[omitPivot=false]` `Boolean` Whether to exclude pivot values from the output or not.
* `[hidden]` `Array` List of model attributes to exclude from the output.
* `[visible]` `Array` List of model attributes to include on the output. All other attributes will be hidden.
* `[visibility=true]` `Boolean` Whether to use visibility options or not. If set to `false` the `hidden` and `visible` options will be ignored.

`Object` Serialized model as a plain object.

Return a copy of the model's `attributes` for JSON stringification. If the `model` has any relations defined, this will also call `toJSON` on each of the related objects, and include them on the object unless `{shallow: true}` is passed as an option.

You can define a whitelist of model attributes to include on the output with the `{visible: ['list', 'of', 'attributes']}` option. The `{hidden: []}` option produces the opposite effect, hiding attributes from the output.

This method is called internally by `toJSON`. Override this function if you want to customize its output.

# model.set(attribute, [value], [options]) → Model source

```
customer.set({first_name: "Joe", last_name: "Customer"});
customer.set("telephone", "555-555-1212");
```

* `attribute` `string` `|` `Object` Attribute name, or hash of attribute names and values.
* `[value]` `mixed` If a string was provided for `attribute`, the value to be set.
* `[options]` `Object`
* `[unset=false]` `Object` Remove attributes from the model instead of setting them.

`Model` This model.

Set a hash of attributes (one or many) on the model.

# model.through(Interim, [throughForeignKey], [otherKey], [throughForeignKeyTarget], [otherKeyTarget]) → Model source

```
const Chapter = bookshelf.model('Chapter', {
  tableName: 'chapters',
  paragraphs() {
    return this.hasMany('Paragraph')
  }
})

const Book = bookshelf.model('Book', {
  tableName: 'books',
  chapters() {
    return this.hasMany('Chapter')
  }
})

const Paragraph = bookshelf.model('Paragraph', {
  tableName: 'paragraphs',
  chapter() {
    return this.belongsTo('Chapter')
  },

  // Find the book where this paragraph is included, by passing through
  // the "Chapter" model.
  book() {
    return this.belongsTo('Book').through('Chapter')
  }
})
```

* `Interim` `Model` `|` `string` Pivot model. Can be a string specifying a previously registered model with `Bookshelf#model`.
* `[throughForeignKey]` `string` Foreign key in this model. By default, the foreign key is assumed to be the singular form of the `Target` model's tableName, followed by `_id` or `_{{idAttribute}}`.
* `[otherKey]` `string` Foreign key in the `Interim` model. By default, the other key is assumed to be the singular form of this model's tableName, followed by `_id` / `_{{idAttribute}}`.
* `[throughForeignKeyTarget]` `string` Column in the `Target` model which `throughForeignKey` references, if other than the `Target` model's `id` / `idAttribute`.
* `[otherKeyTarget]` `string` Column in this model which `otherKey` references, if other than `id` / `idAttribute`.

`Model` The related but empty Model.

Helps to create dynamic relations between `models` where a `hasOne` or `belongsTo` relation may run through another `Interim` model. This is exactly like the equivalent `collection method` except that it applies to the models that the above mentioned relation methods return instead of collections.

This method creates a pivot model, which it assigns to `model.pivot` after it is created. When serializing the model with `toJSON`, the pivot model is flattened to values prefixed with `_pivot_`.

A good example of where this would be useful is if a paragraph `belongsTo` a book through a chapter. See the example above on how this can be expressed.

# model.timestamp([options]) → Object source

* `[method]` `string` Either `'insert'` or `'update'` to specify what kind of save the attribute update is for.
* `[date]` `string` Either a Date object or ms since the epoch. Specify what date is used for updating the timestamps, i.e. if something other than `new Date()` should be used.

`Object` A hash of timestamp attributes that were set.

Automatically sets the timestamp attributes on the model, if `hasTimestamps` is set to `true` or an array. It checks if the model is new and sets the `created_at` and `updated_at` attributes (or any other custom attribute names you have set) to the current date.

If the model is not new and is just being updated then only the `updated_at` attribute gets automatically updated.

If the model contains any user defined `created_at` or `updated_at` values, there won't be any automatic update of these attributes and the user supplied values will be used instead.

# model.toJSON([options]) source

* `[options]` `Object` Options passed to `Model#serialize`.

Called automatically by `JSON.stringify`. To customize serialization, override `serialize`.

# model.trigger() source

```
ship.trigger('fetched');
```

# model.triggerThen(name, […args]) → Promise source

# model.unset(attribute) → Model source

* `attribute` Attribute to unset.

`Model` This model.

Remove an attribute from the model. `unset` is a noop if the attribute doesn't exist.

Note that unsetting an attribute from the model will not affect the related record's column value when saving the model. In order to clear the value of a column in the database record, set the attribute value to `null` instead: `model.set("column_name", null)`.

# model.where(method) → Model source

```
model.where('favorite_color', '<>', 'green').fetch().then(function() { //...

// or
model.where('favorite_color', 'red').fetch().then(function() { //...

// or
model.where({favorite_color: 'red', shoe_size: 12}).fetch().then(function() { //...
```

* `method` `Object` `|` `string` Either

`Model` Self, this method is chainable.

# model.on("counting", (model, options) => source

`Promise` Counting event.

Fired before a `count` query. A promise may be returned from the event handler for async behaviour.

# model.on("created", (model, options) => source

`Promise` Created event.

Fired after an `insert` query.

* `model` `Model` The model firing the event.
* `attrs` `Object` Attributes that will be inserted.
* `options` `Object` Options object passed to `save`.

# model.on("destroyed", (model, options) => source

`Promise` Destroyed event.

Fired after a `delete` query.
A promise may be returned from the event handler for async behaviour. # model.on("destroying", (model, options) => source `Promise` Destroying event. Fired before a `delete` query. A promise may be returned from the event handler for async behaviour. Throwing an exception from the handler will reject the promise and cancel the deletion. # model.on("fetched", (model, response, options) => source * `model` `Model` The model firing the event. * `response` `Object` Knex query response. * `options` `Object` Options object passed to `fetch` . `Promise` If the handler returns a promise, `fetch` will wait for it to be resolved. Fired after a `fetch` operation. A promise may be returned from the event handler for async behaviour. ``` const MyModel = bookshelf.model('MyModel', { initialize() { this.on('fetching', function(model, columns, options) { options.query.where('status', 'active') }) } }) ``` * `model` `Model` The model which is about to be fetched. * `columns` `string[]` The columns to be retrieved by the query. * `options` `Object` Options object passed to `fetch` . * `query` `QueryBuilder` Query builder to be used for fetching. This can be used to modify or add to the query before it is executed. See example above. `Promise` Saved event. Fired after an `insert` or `update` query. `Promise` Updated event. Fired after an `update` query. # model.on("fetched:collection", (collection, response, options) => source * `collection` `Collection` The collection that has been fetched. * `response` `Object` The raw response from the underlying query builder. This will be an array with objects representing each row, similar to the output of a `serialized Model` . * `options` `Object` Options object passed to `fetchAll` . # model.on("fetching:collection", (collection, columns, options) => source * `collection` `Collection` The collection that is going to be fetched. At this point it's still empty since the fetch hasn't happened yet. * `columns` `string[]` The columns to be retrieved by the query as provided by the underlying query builder. If the `columns` option is not specified the value of this will usually be an array with a single string `'tableName.*'` . * `options` `Object` Options object passed to `fetchAll` . ## Model.NoRowsDeletedError # new Model.NoRowsDeletedError() source Thrown when no record is deleted by `destroy` unless called with the `{require: false}` option. ## Model.NoRowsUpdatedError # new Model.NoRowsUpdatedError() source Thrown when no records are saved by `save` unless called with the `{require: false}` option. ## Model.NotFoundError # new Model.NotFoundError() source ## Collection Collections are ordered sets of models returned from the database, from a `fetchAll` call. # new Collection([models], [options]) source ``` const TabSet = bookshelf.collection('TabSet', { model: Tab }) const tabs = new TabSet([tab1, tab2, tab3]) ``` * `[models]` `Model[]` Initial array of models. * `[options]` `Object` * `[comparator=false]` `Boolean` `Comparator` for collection, or `false` to disable sorting. When creating a `Collection` , you may choose to pass in the initial array of `models` . The collection's `comparator` may be included as an option. Passing `false` as the comparator option will prevent sorting. If you define an `initialize` function, it will be invoked when the collection is created. 
If you would like to customize the Collection used by your models when calling `Model#fetchAll` or `Model#fetchPage` you can use the following process:

```
const Test = bookshelf.model('Test', {
  tableName: 'test'
}, {
  collection(...args) {
    return new Tests(...args)
  }
})

const Tests = bookshelf.collection('Tests', {
  get model() {
    return Test
  },

  initialize () {
    this.constructor.__super__.initialize.apply(this, arguments)
    // Collection will emit fetching event as expected even on eager queries.
    this.on('fetching', () => {})
  },

  doStuff() {
    // This method will be available in the results collection returned
    // by Test.fetchAll() and Test.fetchPage()
  }
})
```

# collection.initialize() source

Called by the `Collection constructor` when creating a new instance. Override this function to add custom initialization, such as event listeners. Because plugins may override this method in subclasses, make sure to call your super (extended) class, e.g. `this.constructor.__super__.initialize.apply(this, arguments)`.

# Collection.extend source

`function` Constructor for new `Collection` subclass.

To create a `Collection` class of your own, extend `Bookshelf.Collection`.

# Collection.forge([models], options) source

```
var Promise = require('bluebird');

var Accounts = bookshelf.Collection.extend({
  model: Account
});

var accounts = Accounts.forge([
  {name: 'Person1'},
  {name: 'Person2'}
]);

Promise.all(accounts.invokeMap('save')).then(function() {
  // collection models should now be saved...
});
```

* `[models]` `Object[]` `|` `Model[]` Set of models (or attribute hashes) with which to initialize the collection.
* `options` `Object` Hash of options.

A simple helper function to instantiate a new Collection without needing new.

# collection.count source

```
// select count(*) from shareholders where company_id = 1 and share > 0.1;
new Company({id: 1})
  .shareholders()
  .where('share', '>', '0.1')
  .count()
  .then((count) => {
    assert(count === 3)
  })
```

Get the number of records in the collection's table.

# collection.create source

```
const { courses, ...attributes } = req.body;

Student.forge(attributes).save().tap(student =>
  Promise.map(courses, course => student.related('courses').create(course))
).then(student =>
  res.status(200).send(student)
).catch(error =>
  res.status(500).send(error.message)
);
```

* `model` `Object` A set of attributes to be set on the new model.
* `[options]` `Object`

`Promise<Model>` A promise resolving with the new `model`.

Convenience method to create a new `model` instance within a collection. Equivalent to instantiating a model with a hash of `attributes`, `saving` the model to the database then adding the model to the collection.

When used on a relation, `create` will automatically set foreign key attributes before persisting the `Model`.

# collection.fetch source

* `[require=false]` `Boolean` Whether or not to throw a `Collection.EmptyError` if no records are found. You can pass the `require: true` option to override this behavior.
* `[withRelated=[]]` `string` `|` `string[]` A relation, or list of relations, to be eager loaded as part of the `fetch` operation.
* `[debug=false]` `boolean` Whether to enable debugging mode or not. When enabled will show information about the queries being run.

`Collection.EmptyError` Thrown if no records are found.

`Promise<Collection>`

Fetch the default set of models for this collection from the database, resetting the collection when they arrive. If you wish to trigger an error if the fetched collection is empty, pass `{require: true}` as one of the options to the `fetch` call. A `"fetched"` event will be fired when records are successfully retrieved.
If you need to constrain the query performed by `fetch` , you can call the `query` method before calling `fetch` . If you'd like to only fetch specific columns, you may specify a `columns` property in the options for the `fetch` call. The `withRelated` option may be specified to fetch the models of the collection, eager loading any specified `relations` named on the model. A single property, or an array of properties can be specified as a value for the `withRelated` property. The results of these relation queries will be loaded into a relations property on the respective models, may be retrieved with the `related` method. # collection.fetchOne source ``` // select * from authors where site_id = 1 and id = 2 limit 1; new Site({id:1}) .authors() .query({where: {id: 2}}) .fetchOne() .then(function(model) { // ... }); ``` * `[require=true]` `Boolean` Whether or not to reject the returned Promise with a `Model.NotFoundError` if no records can be fetched from the database. * `[columns='*']` `string` `|` `string[]` Limit the number of columns fetched. * `[transacting]` `Transaction` Optionally run the query in a transaction. * `[lock]` `string` Type of row-level lock to use. Valid options are `forShare` and `forUpdate` . This only works in conjunction with the `transacting` option, and requires a database that supports it. * `[debug=false]` `boolean` Whether to enable debugging mode or not. When enabled will show information about the queries being run. `Promise<Model|null>` A promise resolving to the fetched `Model` or `null` if none exists and the `require: false` option is passed or `requireFetch` is set to `false` . # collection.length :Number source ``` var vanHalen = new bookshelf.Collection([eddie, alex, stone, roth]); console.log(vanHalen.length) // 4 ``` This is the total number of models in the collection. Note that this may not represent how many models there are in total in the database. # collection.load source `Promise<Collection>` A promise resolving to this `collection` . This method is used to eager load relations onto a Collection, in a similar way that the `withRelated` property works on `fetch` . Nested eager loads can be specified by separating the nested relations with `.` . # collection.add(models, [options]) → Collection source ``` const ships = new bookshelf.Collection; ships.add([ {name: "<NAME>"}, {name: "<NAME>"} ]); ``` * `models` `Object[]` `|` `Model[]` `|` `Object` `|` `Model` One or more models or raw attribute objects. * `[options]` `Object` Options for controlling how models are added. * `[merge=false]` `Boolean` If set to `true` it will merge the attributes of duplicate models with the attributes of existing models in the collection. * `[at]` `Number` If set to a number equal to or greater than 0 it will splice the model into the collection at the specified index number. `Collection` Self, this method is chainable. Add a `model` , or an array of models, to the collection. You may also pass raw attribute objects, which will be converted to proper models when being added to the collection. You can pass the `{at: index}` option to splice the model into the collection at the specified `index` . By default if you're adding models to the collection that are already present, they'll be ignored, unless you pass `{merge: true}` , in which case their `attributes` will be merged with the corresponding models. # collection.at() source Get a model from a collection, specified by index. 
Useful if your collection is sorted; if your collection isn't sorted, `at` will still retrieve models in insertion order.

# collection.attach(ids, options) → Promise<Collection> source

`Promise<Collection>` A promise resolving to the updated Collection where this method was called.

Attaches one or more `ids` or models from a foreign table to the current table, on a many-to-many relation. Creates and saves a new model and attaches the model with the related model.

```
var admin1 = new Admin({username: 'user1', password: 'test'});
var admin2 = new Admin({username: 'user2', password: 'test'});

Promise.all([admin1.save(), admin2.save()])
  .then(function() {
    return Promise.all([
      new Site({id: 1}).admins().attach([admin1, admin2]),
      new Site({id: 2}).admins().attach(admin2)
    ]);
  })
```

This method (along with `Collection#detach` and `Collection#updatePivot`) are mixed in to a `Collection` when returned by a `belongsToMany` relation.

# collection.clone() source

Create a new collection with an identical list of models as this one.

# collection.detach([ids], options) → Promise

A promise resolving to the updated Collection where this method was called.

Detach one or more related objects from their pivot tables. If a model or id is passed, it attempts to remove from the pivot table based on that foreign key. If no parameters are specified, we assume we will detach all related associations.

This method (along with `Collection#attach` and `Collection#updatePivot`) are mixed in to a `Collection` when returned by a `belongsToMany` relation.

# collection.first() → Model|undefined source

# collection.get() → Model source

```
const book = library.get(110);
```

`Model` The model, or `undefined` if it is not in the collection.

# collection.invokeThen(method, …arguments) → Promise

* `method` `string` The `model` method to invoke.
* `…arguments` `mixed` Arguments to `method`.

Promise resolving to array of results from invocation.

Shortcut for calling `Promise.all` around a `Collection#invoke`, this will delegate to the collection's `invoke` method, resolving the promise with an array of responses all async (and sync) behavior has settled. Useful for bulk saving or deleting models:

```
collection.invokeThen('save', null, options).then(function() {
  // ... all models in the collection have been saved
});

collection.invokeThen('destroy', options).then(function() {
  // ... all models in the collection have been destroyed
});
```

# collection.last() → Model|undefined source

# collection.off() source

```
ships.off('fetched') // Remove the 'fetched' event listener
```

# collection.on() source

```
const ships = new bookshelf.Collection

ships.on('fetched', function(collection) {
  // Do something after the data has been fetched from the database
})
```

# collection.once(nameOrNames, callback) source

# collection.orderBy(column, order) source

```
Cars.forge().orderBy('color', 'ASC').fetch()
  .then(function (rows) { // ...
```

* `column` `string` Column to sort on.
* `order` `string` Ascending ( `'ASC'` ) or descending ( `'DESC'` ) order.

# collection.parse(resp) source

* `resp` `Object[]` Raw database response array.

The `parse` method is called whenever a collection's data is returned in a `fetch` call. The function is passed the raw database `response` array, and should return an array to be set on the collection. The default implementation is a no-op, simply passing through the JSON response.

# collection.pluck() → mixed[] source

`mixed[]` An array of attribute values.

Pluck an attribute from each model in the collection.

# collection.push(model) → Collection source

Add a model to the end of the collection.
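A small sketch of `pluck` and `push` together (the `Ships` collection, `Ship` model and attribute values are illustrative assumptions):

```
Ships.forge().fetch().then(ships => {
  // Collect a single attribute from every model in the collection
  const names = ships.pluck('name') // e.g. ['Flying Dutchman', 'Black Pearl']

  // Append another model to the end of the collection. This only changes the
  // in-memory collection; nothing is written to the database here.
  ships.push(new Ship({name: 'Queen Anne\'s Revenge'}))

  return names
})
```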
# collection.query(arguments) → Collection|QueryBuilder source collection.query(function(qb) { qb.where('id', '>', 5).andWhere('first_name', '=', 'Test'); }).fetch() .then(function(collection) { // ... }); collection .query('where', 'other_id', '=', '5') .fetch() .then(function(collection) { // ... }); ``` `Collection` `|` `QueryBuilder` This collection or, if called with no arguments, the underlying query builder. This method is used to tap into the underlying Knex query builder instance for the current collection. If called with no arguments, it will return the query builder directly, otherwise it will call the specified `method` on the query builder, applying any additional arguments from the `collection.query` call. If the `method` argument is a function, it will be called with the Knex query builder as the context and the first argument. # collection.reduceThen(iterator, initialValue, context) → Promise * `iterator` ``` Collection~reduceThenIterator ``` * `initialValue` `mixed` * `context` `Object` Bound to `this` in the `iterator` callback. Promise resolving to the single result from the reduction. Iterate over all the models in the collection and reduce this array to a single value using the given iterator function. # collection.remove(models, [options]) → Model|Model[] source * `models` `Model` `|` `Model[]` The model, or models, to be removed. * `[options]` `Object` Set of options for the operation. * `[silent]` `Boolean` If set to `true` will not trigger a `remove` event on the removed model. `Model` `|` `Model[]` The same value passed in the `models` argument. Remove a `model` , or an array of models, from the collection. Note that this does not remove the affected models from the database. For that purpose you have to use the model's `destroy` method. If you wish to actually remove all the models in a collection from the database you can use this method: ``` myCollection.invokeThen('destroy').then(() => { // models have been destroyed }) ``` # collection.reset(models, options) → Model[] source * `models` `Object[]` `|` `Model[]` `|` `Object` `|` `Model` One or more models or raw attribute objects. * `options` `Object` See `add` . `Model[]` Array of models. Adding and removing models one at a time is all well and good, but sometimes you have so many models to change that you'd rather just update the collection in bulk. Use `reset` to replace a collection with a new list of models (or attribute hashes). Calling `collection.reset()` without passing any models as arguments will empty the entire collection. # collection.serialize([options]) → Object source * `[shallow=false]` `Boolean` Exclude relations. * `[omitPivot=false]` `Boolean` Exclude pivot values. * `[omitNew=false]` `Boolean` Exclude models that return true for isNew. `Object` Serialized model as a plain object. Return a raw array of the collection's `attributes` for JSON stringification. If the `models` have any relations defined, this will also call `toJSON` on each of the related objects, and include them on the object unless `{shallow: true}` is passed as an option. `serialize` is called internally by `toJSON` . Override this function if you want to customize its output. # collection.set(models, [options]) → Collection source ``` var vanHalen = new bookshelf.Collection([eddie, alex, stone, roth]); vanHalen.set([eddie, alex, stone, hagar]); ``` * `models` `Object[]` `|` `Model[]` `|` `Object` `|` `Model` One or more models or raw attribute objects. 
* `[options]` `Object` Options for controlling how models are added or removed.
* `[add=true]` `Boolean` If set to `true` it will add any new models to the collection, otherwise any new models will be ignored.
* `[merge=true]` `Boolean` If set to `true` it will merge the attributes of duplicate models with the attributes of existing models in the collection, otherwise duplicate models in the list will be ignored.
* `[remove=true]` `Boolean` If set to `true` any models in the collection that are not in the list will be removed from the collection, otherwise they will be kept.

The set method performs a smart update of the collection with the passed model or list of models according to the following rules:

* If a model in the list isn't yet in the collection it will be added.
* If the model is already in the collection its attributes will be merged.
* If the collection contains any models that aren't present in the list, they'll be removed.

If you'd like to customize the behavior, you can do so with the `add`, `merge` and `remove` options. Since version 0.14.0 if both `remove` and `merge` options are set to `false`, then any duplicate models present will be added to the collection, otherwise they will either be removed or merged, according to the chosen option.

# collection.through(Interim, [throughForeignKey], [otherKey], [throughForeignKeyTarget], [otherKeyTarget]) → Collection source

```
const Chapter = bookshelf.model('Chapter', {
  tableName: 'chapters',
  paragraphs() {
    return this.hasMany(Paragraph)
  }
})

const Paragraph = bookshelf.model('Paragraph', {
  tableName: 'paragraphs',
  chapter() {
    return this.belongsTo(Chapter)
  }
})

const Book = bookshelf.model('Book', {
  tableName: 'books',

  // Find all paragraphs associated with this book, by
  // passing through the "Chapter" model.
  paragraphs() {
    return this.hasMany(Paragraph).through(Chapter)
  }
})
```

* `Interim` `Model` Pivot model.
* `[throughForeignKey]` `string` Foreign key in this collection's model. This is the model that the `hasMany` or `belongsToMany` relations return. By default, the `foreignKey` is assumed to be the singular form of the `Target` model's tableName, followed by `_id` / `_{{idAttribute}}`.
* `[otherKey]` `string` Foreign key in the `Interim` model. By default, the `otherKey` is assumed to be the singular form of this model's tableName, followed by `_id` / `_{{idAttribute}}`.
* `[throughForeignKeyTarget]` `string` Column in this collection's model which `throughForeignKey` references, if other than the default of the model's `id` / `idAttribute`.
* `[otherKeyTarget]` `string` Column in the `Interim` model which `otherKey` references, if other than `id` / `idAttribute`.

`Collection` The related but empty collection.

Used to define relationships where a `hasMany` or `belongsToMany` relation passes "through" an `Interim` model. This is exactly like the equivalent `model method` except that it applies to the collections that the above mentioned relation methods return instead of individual models.

A good example of where this would be useful is if a book `hasMany` paragraphs through chapters. See the example above for how this can be used.

# collection.toJSON(options) source

* `options` Options passed to `Collection#serialize`.

Called automatically by `JSON.stringify`. To customize serialization, override `serialize`.
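Since `serialize` can be overridden on collections as well as models, here is a minimal sketch of customizing a collection's JSON output (the `Ships` collection, `Ship` model and the extra `total` field are purely illustrative):

```
const Ships = bookshelf.collection('Ships', {
  model: Ship,

  // Wrap the default serialized array in an envelope with a count.
  serialize(options) {
    const models = bookshelf.Collection.prototype.serialize.call(this, options)
    return {total: models.length, ships: models}
  }
})

Ships.forge().fetch().then(ships => {
  console.log(JSON.stringify(ships)) // {"total": 2, "ships": [...]}
})
```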
# collection.trigger() source

```
ships.trigger('fetched')
```

# collection.triggerThen(name, […args]) → Promise source

# collection.updatePivot(attributes, [options]) → Promise

* `attributes` `Object` Values to be set in the `update` query.
* `[options]` `Object` A hash of options.
* `[query]` `function` `|` `Object` Constrain the update query. Similar to the `method` argument to `Model#query`.
* `[require=false]` `Boolean` Causes the promise to be rejected with an Error if no rows were updated.
* `[transacting]` `Transaction` Optionally run the query in a transaction.

A promise resolving to the number of rows updated.

The `updatePivot` method is used exclusively on `belongsToMany` relations, and allows for updating pivot rows on the joining table. This method (along with `Collection#attach` and `Collection#detach`) is mixed into a `Collection` when returned by a `belongsToMany` relation.

# collection.where(conditions) → Collection source

```
collection
  .where('favorite_color', '<>', 'green')
  .fetch()
  .then(results => {
    // ...
  })

// or
collection
  .where('favorite_color', 'red')
  .fetch()
  .then(results => {
    // ...
  })

collection
  .where({favorite_color: 'red', shoe_size: 12})
  .fetch()
  .then(results => {
    // ...
  })
```

* `conditions` `Object` `|` `string` Either a key, operator and value passed as separate arguments, or an object of attribute/value pairs to match.

# collection.withPivot(columns) → Collection source

* `columns` `string[]` Names of columns to be included when retrieving pivot table rows.

`Collection` Self, this method is chainable.

The `withPivot` method is used exclusively on `belongsToMany` relations, and allows for additional fields to be pulled from the joining table.

```
var Tag = bookshelf.model('Tag', {
  comments: function() {
    return this.belongsToMany(Comment).withPivot(['created_at', 'order']);
  }
});
```

* countBy()
* every()
* filter()
* find()
* forEach()
* groupBy()
* includes()
* invokeMap()
* isEmpty()
* map()
* reduce()
* reduceRight()
* reject()
* some()
* sortBy()
* toArray()

# reduceThenIterator(accumulator, model, index, length) source

* `accumulator` `mixed`
* `model` `Model` The current model being iterated over.
* `index` `Number`
* `length` `Number` Total number of models being iterated over.

This iterator is used by the `reduceThen` method to iterate over all models in the collection.

# collection.on("fetched", (collection, response, options) => { ... }) source

* `collection` `Collection` The collection performing the `Collection#fetch`.
* `response` `Object` Knex query response.
* `options` `Object` Options object passed to `fetch`.

## Collection.EmptyError

# new Collection.EmptyError() source

Thrown by default when no records are found by `fetch` or `Collection#fetchOne`. This behavior can be overridden with the `Model#requireFetch` option.

## Events

# new Events() source

Base Event class inherited by `Model` and `Collection`. It's not meant to be used directly, and is only displayed here for completeness.

# events.off(nameOrNames, callback) source

* `nameOrNames` `string` The name of the event, or a space separated list of events, to stop listening to.
* `callback` `function` The callback to remove.

Remove a previously-bound callback event listener from an object. If no event name is specified, callbacks for all events will be removed.

# events.on(nameOrNames, callback) → mixed source

* `nameOrNames` `string` The name, or space separated names, of events to register a callback for.
* `callback` `function` The callback to invoke whenever the event is fired.

`mixed` The object this method is called on, returned to allow chaining.

Registers an event listener.
The callback will be invoked whenever the event is fired. The event string may also be a space-delimited list of several event names. # events.once(nameOrNames, callback) source # events.trigger(nameOrNames, […args]) source * `nameOrNames` `string` The name of the event to trigger. Also accepts a space separated list of event names. * `[…args]` `mixed` Extra arguments to pass to the event listener callback function. Trigger callbacks for the given event, or space-delimited list of events. Subsequent arguments to `trigger` will be passed along to the event callback. # events.triggerThen(name, […args]) → Promise source
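Putting the `Events` methods above together, a small illustrative sketch (the `ships` collection is hypothetical):

```
var ships = new bookshelf.Collection();

// Space-delimited names register the same callback for several events.
ships.on('fetching fetched', function() {
  console.log('fetch lifecycle event');
});

var onFetched = function(collection, response, options) {
  console.log('fetched', collection.length, 'ships with options', options);
};
ships.on('fetched', onFetched);

// Extra arguments to trigger are passed along to the callbacks.
ships.trigger('fetched', ships, {}, {});

// Remove a single listener; omitting the event name would remove all of them.
ships.off('fetched', onFetched);
```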
mjfhtml2pdf
npm
JavaScript
html2pdf === html2pdf converts any webpage or element into a printable PDF entirely client-side using [html2canvas](https://github.com/niklasvh/html2canvas) and [jsPDF](https://github.com/MrRio/jsPDF). Install --- 1. Copy `html2pdf.js` to your project directory. 2. Fetch the dependencies `html2canvas` and `jsPDF`, which can be found in the `vendor` folder. 3. Include the files in your HTML document (**order is important**, otherwise `jsPDF` will override `html2canvas` with its own internal implementation): ``` <script src="jspdf.min.js"></script><script src="html2canvas.min.js"></script><script src="html2pdf.js"></script> ``` **Note:** For best results, use the custom build of `html2canvas` found in the `vendor` folder, which contains added features and hotfixes. Usage --- ### Basic usage Including html2pdf exposes the `html2pdf` function. Calling it will create a PDF and prompt the user to save the file: ``` var element = document.getElementById('element-to-print');html2pdf(element); ``` The PDF can be configured using an optional `opt` parameter: ``` var element = document.getElementById('element-to-print');html2pdf(element, {  margin:       1,  filename:     'myfile.pdf',  image:        { type: 'jpeg', quality: 0.98 },  html2canvas:  { dpi: 192, letterRendering: true },  jsPDF:        { unit: 'in', format: 'letter', orientation: 'portrait' }}); ``` The `opt` parameter has the following optional fields: | Name | Type | Default | Description | | --- | --- | --- | --- | | margin | number or array | 0 | PDF margin. Array can be either [vMargin, hMargin] or [top, left, bottom, right]. | | filename | string | 'file.pdf' | The default filename of the exported PDF. | | image | object | {type: 'jpeg', quality: 0.95} | The image type and quality used to generate the PDF. See the Extra Features section below. | | enableLinks | boolean | true | If enabled, PDF hyperlinks are automatically added ontop of all anchor tags. | | html2canvas | object | { } | Configuration options sent directly to `html2canvas` ([see here](https://html2canvas.hertzen.com/documentation.html#available-options) for usage). | | jsPDF | object | { } | Configuration options sent directly to `jsPDF` ([see here](http://rawgit.com/MrRio/jsPDF/master/docs/jsPDF.html) for usage). | ### Extra features #### Page-breaks You may add `html2pdf`-specific page-breaks to your document by adding the CSS class `html2pdf__page-break` to any element (normally an empty `div`). During PDF creation, these elements will be given a height calculated to fill the remainder of the PDF page that they are on. Example usage: ``` <div id="element-to-print">  <span>I'm on page 1!</span>  <div class="html2pdf__page-break"></div>  <span>I'm on page 2!</span></div> ``` #### Image type and quality You may customize the image type and quality exported from the canvas by setting the `image` option. This must be an object with the following fields: | Name | Type | Default | Description | | --- | --- | --- | --- | | type | string | 'jpeg' | The image type. HTMLCanvasElement only supports 'png', 'jpeg', and 'webp' (on Chrome). | | quality | number | 0.95 | The image quality, from 0 to 1. This setting is only used for jpeg/webp (not png). | These options are limited to the available settings for [HTMLCanvasElement.toDataURL()](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toDataURL), which ignores quality settings for 'png' images. 
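For instance, a small sketch that exports a WebP-backed PDF at reduced quality (the element id, filename and option values are illustrative):

```
var element = document.getElementById('element-to-print');
html2pdf(element, {
  filename: 'report.pdf',
  // 'webp' is only supported by Chrome; the quality setting is ignored for 'png'.
  image: { type: 'webp', quality: 0.8 },
  jsPDF: { unit: 'in', format: 'letter', orientation: 'landscape' }
});
```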
To enable png image compression, try using the [canvas-png-compression shim](https://github.com/ShyykoSerhiy/canvas-png-compression), which should be an in-place solution to enable png compression via the `quality` option. Dependencies --- html2pdf depends on the external packages [`html2canvas`](https://github.com/niklasvh/html2canvas) and [`jsPDF`](https://github.com/MrRio/jsPDF). For best results, use [this custom build](https://github.com/eKoopmans/html2canvas/tree/develop) of `html2canvas`, which includes bugfixes and adds support for box-shadows and custom resolutions (via the `dpi`/`scale` options). Contributing --- ### Issues When submitting an issue, please provide reproducible code that highlights the issue, preferably by creating a fork of [this template jsFiddle](https://jsfiddle.net/o0kL8zkk/) (which has html2canvas and its dependencies already included as external resources). Remember that html2pdf uses [html2canvas](https://github.com/niklasvh/html2canvas) and [jsPDF](https://github.com/MrRio/jsPDF) as dependencies, so it's a good idea to check each of those repositories' issue trackers to see if your problem has already been addressed. ### Pull requests Right now, html2pdf is a single source file located in `/src/`. If you want to create a new feature or bugfix, feel free to fork and submit a pull request! Credits --- [<NAME>](https://github.com/eKoopmans) License --- [The MIT License](http://opensource.org/licenses/MIT) Copyright (c) 2017 <NAME> <<http://www.erik-koopmans.com/>> Readme --- ### Keywords none
naturalsort
cran
R
Package ‘naturalsort’

October 13, 2022

Type Package
Title Natural Ordering
Version 0.1.3
Suggests testthat
Date 2016-08-30
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Provides functions related to human natural ordering. It handles adjacent digits in a character sequence as a number, so that the natural sort function arranges a character vector by those numbers, not by the digit characters. This ordering is typically seen when operating systems list file names. For example, the sequence a-1.png, a-2.png, a-10.png looks naturally ordered because 1 < 2 < 10, and the natural sort algorithm arranges it that way, whereas general sort algorithms arrange it into a-1.png, a-10.png, a-2.png owing to their third and fourth characters.
License BSD_3_clause + file LICENSE
BugReports https://github.com/kos59125/naturalsort/issues
RoxygenNote 5.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2016-08-30 12:48:28

R topics documented: naturalsort-package, naturalfactor, naturalorder

naturalsort-package Natural Ordering Sort

Description
Provides functions related to natural ordering.

naturalfactor Natural Ordering Factor

Description
naturalfactor creates a factor with levels in natural order.

Usage
naturalfactor(x, levels, ordered = TRUE, ...)

Arguments
x a character vector.
levels a character vector whose elements might appear in x.
ordered logical flag that determines whether the factor is ordered.
... arguments that are passed to the factor function.

naturalorder Natural Ordering Sort

Description
Natural ordering is a kind of alphanumerical ordering. naturalorder returns the order of the argument character vector in human natural ascending or descending order. naturalsort returns the sorted vector.

Usage
naturalorder(text, decreasing = FALSE, na.last = TRUE)
naturalsort(text, decreasing = FALSE, na.last = NA)

Arguments
text a character vector to sort.
decreasing logical.
na.last logical. If NA, NAs will be removed from the result.

Value
For naturalorder, the results are indices of vector elements in natural order. For naturalsort, the results are sorted vectors.

Examples
text <- c("a-1.png", "a-2.png", "a-10.png")
print(sort(text))
print(naturalsort(text))
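A short usage sketch building on the documented functions (the data frame is illustrative; the commented results follow from the natural ordering described above):

```
library(naturalsort)

files <- c("a-1.png", "a-10.png", "a-2.png")

# Indices that arrange the vector in natural order: 1 3 2
naturalorder(files)

# Reorder a data frame by its file-name column using those indices
d <- data.frame(file = files, size = c(10, 30, 20))
d[naturalorder(d$file), ]

# A factor whose levels follow natural order (handy for plots and tables):
# levels(f) is "a-1.png" "a-2.png" "a-10.png"
f <- naturalfactor(files)
levels(f)
```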
@improbable-eng/grpc-web
npm
JavaScript
@improbable-eng/grpc-web === > Library for making gRPC-Web requests from a browser This library is intended for both JavaScript and TypeScript usage from a web browser or NodeJS (see [Usage with NodeJS](#usage-with-nodejs)). *Note: This only works if the server supports [gRPC-Web](https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md)* A Golang gRPC-Web middleware and a Golang-based gRPC-Web proxy are [available here](https://github.com/improbable-eng/grpc-web). Please see the full [gRPC-Web README](https://github.com/improbable-eng/grpc-web) for known limitations. Installation --- `@improbable-eng/grpc-web` has peer dependencies of `google-protobuf` and `@types/google-protobuf`. `npm install google-protobuf @types/google-protobuf @improbable-eng/grpc-web --save` Example Project --- There is an [example project available here](https://github.com/improbable-eng/grpc-web/tree/master/client/grpc-web-react-example) Usage Overview --- * Use [`ts-protoc-gen`](https://www.npmjs.com/package/ts-protoc-gen) with [`protoc`](https://github.com/google/protobuf) to generate `.js` and `.d.ts` files for your request and response classes. `ts-protoc-gen` can also generate gRPC service definitions with the `service=true` argument. + [Go to code generation docs](docs/code-generation.md) * Make a request using [`unary()`](docs/unary.md), [`invoke()`](docs/invoke.md) or [`client()`](docs/client.md) ``` import {grpc} from "@improbable-eng/grpc-web"; // Import code-generated data structures. import {BookService} from "./generated/proto/examplecom/library/book_service_pb_service"; import {GetBookRequest} from "./generated/proto/examplecom/library/book_service_pb"; const getBookRequest = new GetBookRequest(); getBookRequest.setIsbn(60929871); grpc.unary(BookService.GetBook, { request: getBookRequest, host: host, onEnd: res => { const { status, statusMessage, headers, message, trailers } = res; if (status === grpc.Code.OK && message) { console.log("all ok. got book: ", message.toObject()); } } }); ``` * Requests can be aborted/cancelled before they complete: ``` const request = grpc.unary(BookService.GetBook, { ... }); request.cancel(); ``` Available Request Functions --- There are three functions for making gRPC requests: ### [`grpc.unary`](docs/unary.md) This is a convenience function for making requests that consist of a single request message and single response message. It can only be used with unary methods. ``` rpc GetBook(GetBookRequest) returns (Book) {} ``` ### [`grpc.invoke`](docs/invoke.md) This is a convenience function for making requests that consist of a single request message and a stream of response messages (server-streaming). It can also be used with unary methods. ``` rpc GetBook(GetBookRequest) returns (Book) {} rpc QueryBooks(QueryBooksRequest) returns (stream Book) {} ``` ### [`grpc.client`](docs/client.md) `grpc.client` returns a client. Dependant upon [transport compatibility](docs/transport.md) this client is capable of sending multiple request messages (client-streaming) and receiving multiple response messages (server-streaming). It can be used with any type of method, but will enforce limiting the sending of messages for unary methods. 
``` rpc GetBook(GetBookRequest) returns (Book) {} rpc QueryBooks(QueryBooksRequest) returns (stream Book) {} rpc LogReadPages(stream PageRead) returns (google.protobuf.Empty) {} rpc ListenForBooks(stream QueryBooksRequest) returns (stream Book) {} ``` Usage with NodeJS --- Refer to [grpc-web-node-http-transport](https://www.npmjs.com/package/@improbable-eng/grpc-web-node-http-transport). All Docs --- * [unary()](docs/unary.md) * [invoke()](docs/invoke.md) * [client()](docs/client.md) * [Code Generation](docs/code-generation.md) * [Concepts](docs/concepts.md) * [Transport](docs/transport.md) Readme --- ### Keywords * grpc * grpc-web * protobuf * typescript * ts
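The unary example earlier in this README covers single-response calls; for server-streaming methods, a hedged sketch of `grpc.invoke` (the host, request field and generated import paths are illustrative and mirror the unary example):

```
import {grpc} from "@improbable-eng/grpc-web";

// Import code-generated data structures, as in the unary example.
import {BookService} from "./generated/proto/examplecom/library/book_service_pb_service";
import {QueryBooksRequest, Book} from "./generated/proto/examplecom/library/book_service_pb";

const queryBooksRequest = new QueryBooksRequest();
queryBooksRequest.setAuthorPrefix("Geor"); // illustrative request field

const request = grpc.invoke(BookService.QueryBooks, {
  request: queryBooksRequest,
  host: "https://example.com:9090", // illustrative host
  onMessage: (book: Book) => {
    console.log("got book: ", book.toObject());
  },
  onEnd: (code: grpc.Code, message: string, trailers: grpc.Metadata) => {
    if (code === grpc.Code.OK) {
      console.log("stream finished ok");
    } else {
      console.log("stream ended with error", code, message, trailers);
    }
  }
});

// As with unary calls, the returned handle can abort the in-flight request
// (see docs/invoke.md).
```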
github.com/wtetsu/gaze
go
Go
README [¶](#section-readme) --- ![gaze logo](https://user-images.githubusercontent.com/515948/179385932-48ea38a3-3bbb-4f45-8d68-63dc076e757d.png) Gaze is gazing at you [![Test](https://github.com/wtetsu/gaze/workflows/Test/badge.svg)](https://github.com/wtetsu/gaze/actions?query=workflow%3ATest) [![Go Report Card](https://goreportcard.com/badge/github.com/wtetsu/gaze)](https://goreportcard.com/report/github.com/wtetsu/gaze) [![Maintainability](https://api.codeclimate.com/v1/badges/bd322b9104f5fcd3e37e/maintainability)](https://codeclimate.com/github/wtetsu/gaze/maintainability) [![codecov](https://codecov.io/gh/wtetsu/gaze/branch/master/graph/badge.svg)](https://codecov.io/gh/wtetsu/gaze) [![Go Reference](https://pkg.go.dev/badge/github.com/wtetsu/gaze.svg)](https://pkg.go.dev/github.com/wtetsu/gaze) ### What is Gaze? 👁️Gaze runs a command, **right after** you save a file. It greatly helps you to focus on writing code! ![gaze02](https://user-images.githubusercontent.com/515948/73607575-1fbfe900-45fb-11ea-813e-6be6bf9ece6d.gif) --- Setting up Gaze is easy. ``` gaze . ``` Then, invoke your favorite editor on another terminal and edit it! ``` vi a.py ``` #### Installation ##### Brew (for macOS) ``` brew install gaze ``` Or, [download binary](https://github.com/wtetsu/gaze/releases) #### Usage examples * Modify a.py -> 👁️Runs `python a.py` * Modify a.rb -> 👁️Runs `rubocop` * Modify a.js -> 👁️Runs `npm run lint` * Modify a.go -> 👁️Runs `make build` * Modify Dockerfile -> 👁️Runs `docker build` * And so forth... --- Software development often requires us to repeatedly execute the same command manually. For example, when writing a simple Python script, you may create a.py file, write a few lines of code, and run `python a.py`. If the result isn't what you expected, you edit a.py and run `python a.py` again. Again and again... As a result, you may find yourself constantly switching between the editor and terminal, typing the same command repeatedly. This can be frustrating and a waste of time and energy🙄 --- 👁️Gaze runs a command for you, **right after** you save a file. #### Why Gaze? (Features) Gaze is designed as a CLI tool that accelerates your coding. * 📦 Easy to use, out-of-the-box * ⚡ Super quick reaction * 🌎 Language-agnostic, editor-agnostic * 🔧 Flexible configuration * 💻 Multiplatform (macOS, Windows, Linux) * 📝 Create-and-rename file actions handling * 🔍 Advanced options for more control + `-r`: restart (useful for server applications) + `-t 2000`: timeout (useful if you sometimes write infinite loops) * 🚀 Optimal parallel handling + See also: [Parallel handling](https://github.com/wtetsu/gaze/blob/v1.1.6/doc/parallel.md) + ![](https://github.com/wtetsu/gaze/raw/v1.1.6/doc/img/p04.png) --- Gaze was developed for supporting daily coding. Even though there are already many "update-and-run" type of tools, I would say Gaze is the best for quick coding because all the technical design decisions have been made for that purpose. ### How to use Gaze The top priority of the Gaze's design is "easy to invoke". ``` gaze . ``` Then, switch to another terminal and run `vi a.py`. Gaze executes a.py in response to your file modifications. ##### Other examples Gaze at one file. ``` gaze a.py ``` --- Specify files with pattern matching (*, **, ?, {, }) ``` gaze "*.py" ``` ``` gaze "src/**/*.rb" ``` ``` gaze "{aaa,bbb}/*.{rb,py}" ``` --- Specify an arbitrary command by `-c` option. ``` gaze "src/**/*.js" -c "eslint {{file}}" ``` --- Kill the previous one before launching a new process. 
This is useful if you are writing a server. ``` gaze -r server.py ``` --- Kill an ongoing process after 1000(ms). This is useful if you love infinite loops. ``` gaze -t 1000 complicated.py ``` --- Specify multiple commands by using quotations. ``` gaze "*.cpp" -c "gcc {{file}} -o a.out ls -l a.out ./a.out" ``` Output when a.cpp was updated. ``` [gcc a.cpp -o a.out](1/3) [ls -l a.out](2/3) -rwxr-xr-x 1 user group 42155 Mar 3 00:31 a.out [./a.out](3/3) hello, world! ``` If a certain command exited with non-zero, Gaze doesn't invoke the next command. ``` [gcc a.cpp -o a.out](1/3) a.cpp: In function 'int main()': a.cpp:5:28: error: expected ';' before '}' token printf("hello, world!\n") ^ ; } ~ exit status 1 ``` ##### Configuration Gaze is Language-agnostic. For convenience, it has useful default configurations for some major languages (e.g. Go, Python, Ruby, JavaScript, Rust, and so forth) Thanks to the default configurations, the command below is valid. ``` gaze a.py ``` The above command is equivalent to `gaze a.py -c 'python "{{file}}"'`. You can display the default YAML configuration by `gaze -y`. ``` commands: - ext: .go cmd: go run "{{file}}" - ext: .py cmd: python "{{file}}" - ext: .rb cmd: ruby "{{file}}" - ext: .js cmd: node "{{file}}" - ext: .d cmd: dmd -run "{{file}}" - ext: .groovy cmd: groovy "{{file}}" - ext: .php cmd: php "{{file}}" - ext: .java cmd: java "{{file}}" - ext: .kts cmd: kotlinc -script "{{file}}" - ext: .rs cmd: | rustc "{{file}}" -o"{{base0}}.out" ./"{{base0}}.out" - ext: .cpp cmd: | gcc "{{file}}" -o"{{base0}}.out" ./"{{base0}}.out" - ext: .ts cmd: | tsc "{{file}}" --out "{{base0}}.out" node ./"{{base0}}.out" - re: ^Dockerfile$ cmd: docker build -f "{{file}}" . ``` Note: * To specify both ext and re for one cmd is prohibited * cmd can have multiple commands. Use vertical line(|) to write multiple commands If you want to customize it, please set up your own configuration file. ``` gaze -y > ~/.gaze.yml vi ~/.gaze.yml ``` Gaze searches a configuration file according to its priority rule. 1. A file specified by -f option 2. ~/.config/gaze/gaze.yml 3. ~/.gaze.yml 4. (Default) ##### Options: ``` Usage: gaze [options...] file(s) Options: -c Command(s). -r Restart mode. Send SIGTERM to an ongoing process before invoking next. -t Timeout(ms). Send SIGTERM to an ongoing process after this time. -f Specify a YAML configuration file. -v Verbose mode. -q Quiet mode. -y Display the default YAML configuration. -h Display help. --color Color mode (0:plain, 1:colorful). --version Display version information. Examples: gaze . gaze main.go gaze a.rb b.rb gaze -c make "**/*.c" gaze -c "eslint {{file}}" "src/**/*.js" gaze -r server.py gaze -t 1000 complicated.py ``` ##### Command format You can write [Mustache](https://en.wikipedia.org/wiki/Mustache_(template_system)) templates for commands. ``` gaze -c "echo {{file}} {{ext}} {{abs}}" . ``` | Parameter | Example | | --- | --- | | {{file}} | src/mod1/main.py | | {{ext}} | .py | | {{base}} | main.py | | {{base0}} | main | | {{dir}} | src/mod1 | | {{abs}} | /my/proj/src/mod1/main.py | ### Third-party data * Great Go libraries + See [go.mod](https://github.com/wtetsu/gaze/raw/master/go.mod) and [license.json](https://github.com/wtetsu/gaze/actions/workflows/license.yml) None
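Returning to the configuration section above: a hypothetical custom `~/.gaze.yml` (typically produced by editing the output of `gaze -y`) might add or adjust a few rules like this (the extensions and commands shown are assumptions, not Gaze defaults):

```
commands:
  # Run shell scripts directly.
  - ext: .sh
    cmd: bash "{{file}}"
  # Edited Python rule: lint first, then run (a failing lint stops the chain).
  - ext: .py
    cmd: |
      flake8 "{{file}}"
      python "{{file}}"
  # Regex rule, in the same style as the Dockerfile entry in the defaults.
  - re: ^Makefile$
    cmd: make
```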
divmod
readthedoc
Markdown
Date: 2007-03-01

![Nevow logo](http://divmod.org/tracdocs/nevow_whtbck.png)

Nevow - Pronounced as the French ‘nouveau’, or ‘noo-voh’, Nevow is a web application construction kit written in Python. It is designed to allow the programmer to express as much of the view logic as desired in Python, and includes a pure Python XML expression syntax named stan to facilitate this. However it also provides rich support for designer-edited templates, using a very small XML attribute language to provide bi-directional template manipulation capability.

Nevow also includes Formless, a declarative syntax for specifying the types of method parameters and exposing these methods to the web. Forms can be rendered automatically, and form posts will be validated and input coerced, rendering error pages if appropriate. Once a form post has validated successfully, the method will be called with the coerced values.

Athena - Finally, Nevow includes Divmod Athena, a two-way bridge between JavaScript in a browser and Python on the server. Divmod Athena is compatible with Mozilla, Firefox, Windows Internet Explorer 6, Opera 9 and Camino (The Divmod Fan Club). Event handlers can be written in pure Python, and JavaScript implementation details are hidden from the programmer, with Nevow taking care of routing data to and from the server using XmlHttpRequest. Athena supports a widget authoring framework that simplifies the authoring and management of client side widgets that need to communicate with the server. Multiple widgets can be hosted on an Athena page without interfering with each other. Athena supports automatic event binding so that a DHTML event (onclick, onkeypress, etc.) is mapped to the appropriate JavaScript handler (which in turn may call the server).

* Stable: Latest release - 0.9.31
* Trunk: svn co http://divmod.org/svn/Divmod/trunk/Nevow/ Nevow

## Features¶

* XHTML templates: contain no programming logic, only nodes tagged with nevow attributes
* data/render methods: simplify the task of separating data from presentation and writing view logic
* stan: An s-expression-like syntax for expressing xml in pure python
* Athena: Cross-browser JavaScript library for sending client side events to the server and server side events to the client after the page has loaded, without causing the entire page to refresh
* formless: (take a look at formal for an alternate form library) For describing the types of objects which may be passed to methods of your classes, validating and coercing string input from either web or command-line sources, and calling your methods automatically once validation passes.
* webform: For rendering web forms based on formless type descriptions, accepting form posts and passing them to formless validators, and rendering error forms in the event validation fails

## Documentation¶

* The Nevow Guide: An introductory guide covering Nevow basics (Getting Started, Object Traversal, Object Publishing, XML Templates, Deploying Nevow Applications)
* Nevow API
* Meet Stan: An excellent tutorial on the Nevow Document Object Model by <NAME>
* Twisted Components: If you are unfamiliar with Interfaces and Adapters then Nevow may not make much sense. This is essential reading.
* Error Handling: How to create custom error (404 and 500) pages * Form Handling (A summary of Nevow form handling techniques) * JavaScript WYSIWYG Editors integration with Nevow/formal * deployment * emacs * Putting Nevow Page under Apache Proxy * Using Nevow with Genshi templates: original and dynamic * Tutorial: Using Storm with Nevow * Nevow & Athena FAQ Bleeding Docs - SURGEON GENERAL’S WARNING: Reading the docs listed below pertain to code that has not yet been released and may cause Lung Cancer, Heart Disease, Emphysema, and Pregnancy complications. * Context Removal - Conversion steps for moving from `context` -based Nevow code to `context` -less code. ## Examples¶ To run the examples yourself (Source in [source:trunk/Nevow/examples]): ``` richard@lazar:/tmp$ cd Nevow/examples/ richard@lazar:/tmp/Nevow/examples$ twistd -noy examples.tac 2005/11/02 15:18 GMT [-] Log opened. 2005/11/02 15:18 GMT [-] twistd SVN-Trunk (/usr/bin/python 2.4.2) starting up 2005/11/02 15:18 GMT [-] reactor class: twisted.internet.selectreactor.SelectReactor 2005/11/02 15:18 GMT [-] Loading examples.tac... 2005/11/02 15:18 GMT [-] Loaded. 2005/11/02 15:18 GMT [-] nevow.appserver.NevowSite starting on 8080 2005/11/02 15:18 GMT [-] Starting factory <nevow.appserver.NevowSite instance at 0xb6c8110c> ``` ... visit http://localhost:8080 and you’ll begin to appreciate the possibilities! ## Help / Support¶ You will find plenty of experts on the mailing lists and in the chatrooms who will happily help you, but please make sure you read all the documentation, study all the examples and search the mailing list archives first. The chances are that your question has already been answered. * Mailing list: The twisted-web and divmod-dev mailing list pages have subscription instructions and links to the web based archives. * IRC: Nevow developers and users can be found on Freenode in #twisted.web * Blogs: dialtone, fzZzy, Tv * Tickets (More tickets) ## Index of Nevow documents¶ * Getting started with Divmod Nevow * Nevow Tutorial: part 4 * Form Handling (A summary of Nevow form handling techniques) * Error Handling * Formless * Authentication and Authorisation * Context Removal * Nevow Guard * Unit Testing * Putting Nevow Page under Apache Proxy * Reverse Proxy * Divmod Athena * Divmod Athena * Athena FAQ * Nevow & Athena FAQ * Demo: news edit * Demo: results * Tutorial: Using Storm with Nevow * A possible approach with Nevow and Storm/twisted-integration Athena is a two-way communication channel for Nevow applications. Peruse the examples. Or make this page better. ## History¶ Athena is the best-of-breed approach to developing interactive javascript (AJAX / Comet) applications with DivmodNevow. It should be noted that it supersedes previous solutions, such as livepage. These prior solutions should not be used with Athena development. Before using Athena, you may want to check out the Getting started with Divmod Nevow. ## Development Environment¶ If you haven’t developed with JavaScript before, you may wish to set up your development environment before beginning with Athena. 
Some tips may be available, depending on your preferred tools:

* nevow-athena-emacs
* Firebug - Firefox based JavaScript debugger

## Simple LiveElement demo¶

Subclass `nevow.athena.LiveElement` and provide a `docFactory` which uses the `liveElement` renderer:

```
class MyElement(athena.LiveElement):
    docFactory = loaders.stan(T.div(render=T.directive('liveElement')))
```

Put the result onto a `nevow.athena.LivePage`:

```
class MyPage(athena.LivePage):
    docFactory = loaders.stan(T.html[
        T.head(render=T.directive('liveglue')),
        T.body(render=T.directive('myElement'))])

    def render_myElement(self, ctx, data):
        f = MyElement()
        f.setFragmentParent(self)
        return ctx.tag[f]

    def child_(self, ctx):
        return MyPage()
```

Put the page into a `nevow.appserver.NevowSite`:

```
from nevow import appserver
site = appserver.NevowSite(MyPage())
```

Hook the site up to the internet:

```
from twisted.application import service, internet
application = service.Application('Athena Demo')
webService = internet.TCPServer(8080, site)
webService.setServiceParent(application)
```

Put it all into a `.tac` file and run it:

```
twistd -noy myelement.tac
```

And hit http://localhost:8080/. You now have an extremely simple Athena page.

### Customizing Behavior¶

Add a Twisted plugin which maps your module name onto your JavaScript source file:

```
from nevow import athena

myPackage = athena.JSPackage({
    'MyModule': '/absolute/path/to/mymodule.js',
    })
```

Place this Python source file into `nevow/plugins/` (the Twisted plugin documentation describes where else you can put it, with the exception that Nevow plugins should be placed beneath a `nevow` directory as opposed to a `twisted` directory).

In the JavaScript source file (in this case, `mymodule.js`), import `Nevow.Athena`:

```
// import Nevow.Athena
```

Next, subclass the JavaScript `Nevow.Athena.Widget` class (notice the module name that was defined in the plugin file):

```
MyModule.MyWidget = Nevow.Athena.Widget.subclass('MyModule.MyWidget');
```

Now, add a method to your newly defined class:

```
MyModule.MyWidget.methods(
    function echo(self, argument) {
        alert('Echoing ' + argument);
        return argument;
    });
```

Define the JavaScript class which will correspond to your `LiveElement` subclass:

### Invoking Code in the Browser¶

Add some kind of event source (in this case, a timer, but this is incidental) which will cause the server to call a method in the browser:

```
from twisted.internet import reactor

    def __init__(self, *a, **kw):
        super(MyElement, self).__init__(*a, **kw)
        reactor.callLater(5, self.myEvent)

    def myEvent(self):
        print 'My Event Firing'
        self.callRemote('echo', 12345)
```

### Invoking Code on the Server¶

Add an event source (in this case, a user-interface element, but this is incidental) which will cause the browser to call a method on the server:

```
class MyElement(athena.LiveElement):
    docFactory = loaders.stan(T.div(render=T.directive('liveElement'))[
        T.input(type='submit', value='Push me',
                onclick='Nevow.Athena.Widget.get(this).clicked()')])
    ...
```

Update the JavaScript definition of `MyModule.MyWidget` to handle this event and actually call the server method:

```
MyModule.MyWidget.method(
    'clicked',
    function(self) {
        self.callRemote('echo', 'hello, world');
    });
```

Add a method to `MyElement` which the browser will call, and expose it to the browser:

```
class MyElement(athena.LiveElement):
    ...

    def echo(self, argument):
        print 'Echoing', argument
        return argument
    athena.expose(echo)
```

### Download the files for this tutorial:¶

## Testing¶

Visit the athena-testing or Test Driven Development with Athena

## Implementation¶

Though Divmod's use of it predates the term by several years, Athena uses what some have come to call Comet. Athena's JavaScript half makes an HTTP request before it actually needs to retrieve information from the server. The server does not respond to this request until it has something to tell the browser. In this way, the server can push events to the browser instantly.

PyFlakes is a Lint-like tool for Python, like PyChecker or PyLint. It is focused on identifying common errors quickly without executing Python code. Its primary advantage over PyChecker is that it is fast. You don't have to sit around for minutes waiting for the checker to run; it runs on most large projects in only a few seconds.

The two primary categories of defects reported by PyFlakes are:

* Names which are used but not defined or used before they are defined
* Names which are redefined without having been used

These can each take many forms. For example, PyFlakes will tell you when you have forgotten an import, mistyped a variable name, defined two functions with the same name, shadowed a variable from another scope, imported a module twice, or imported two different modules with the same name, and so on.

* 0.5.0 Release (Release Notes) or `pip install pyflakes`
* Trunk: `bzr branch lp:divmod.org && cd divmod.org/PyFlakes` (Browse)

A voice over IP application server.

* Sine provides:
  * SIP Registrar
  * SIP Proxy
  * Third party call control (3PCC)
  * Voice-mail
  * Through-the-web configuration
  * [wiki:DivmodMantissa Divmod Mantissa] integration
* Release: [http://divmod.org/trac/attachment/wiki/SoftwareReleases/Sine-0.3.0.tar.gz?format=raw Download the latest release - 0.3.0!] (Requires [wiki:DivmodMantissa Mantissa]) ([source:/tags/releases/Sine-0.3.0/NEWS.txt Release Notes])
* Bleeding Edge: svn co http://divmod.org/svn/Divmod/trunk/Sine Sine

## See also¶

* ''Development'' version of the [http://buildbot.divmod.org/apidocs/sine.html Sine API docs]

Date: 2005-10-15

Vertex is an implementation of the Q2Q protocol (sort of like P2P, but one better). There are a few moving parts in Vertex:

* PTCP: a protocol which is nearly identical to TCP, but which runs over UDP. This lets Q2Q penetrate most NAT configurations.
* JUICE ([JU]ice [I]s [C]oncurrent [E]vents): a very simple but immensely flexible protocol which forms the basis of the high-level aspects of Q2Q
* vertex: a command line tool which exposes a few features useful in many situations (such as registration and authentication)

Q2Q is a very high-level protocol (alternatively, transport) the goal of which is to make communication over the internet a possibility (if you enjoy setting up tunnels or firewall rules whenever you need to transfer a file between two computers, Q2Q may not be for you). Q2Q endpoints aren't hardware addresses or network addresses. They look a lot like email addresses and they act a lot like instant message addresses. You can hook into yours wherever you can access the internet, and you can be online or offline as you choose (Q2Q supports multiple unrelated protocols, so you also might be online for some services but offline for others). Two people with Q2Q addresses can easily communicate without out-of-band negotiation of their physical locations or the topology of their networks.
If Alice wants to talk to Bob, Alice will always just open a connection to <EMAIL>/chat. If Bob is online anywhere at all, the connection will have an opportunity to succeed (Bob might be busy or not want to talk to Alice, but that is another matter ;). The connection is authenticated in both directions, so if it does succeed Alice knows she is talking to the real Bob and vice versa. The Q2Q network has some decentralized features (there is no one server or company which can control all Q2Q addresses) and features of centralization (addresses beneath a particular domain are issued by a server for that domain; once issued, some activities require the server to be contacted again, while others do not). Vertex includes an identity server capable of hosting Q2Q addresses. Once you’ve installed the 0.1 release, you can run it like this: ``` exarkun@boson:~$ cat > q2q-standalone.tac from vertex.q2qstandalone import defaultConfig application = defaultConfig() exarkun@boson:~$ twistd -noy q2q-standalone.tac 2005/10/15 00:12 EDT [-] Log opened. 2005/10/15 00:12 EDT [-] twistd 2.0.1 (/usr/bin/python2.4 2.4.1) starting up 2005/10/15 00:12 EDT [-] reactor class: twisted.internet.selectreactor.SelectReactor 2005/10/15 00:12 EDT [-] Loading q2q-standalone.tac... 2005/10/15 00:12 EDT [-] Loaded. 2005/10/15 00:12 EDT [-] vertex.q2q.Q2QService starting on 8788 2005/10/15 00:12 EDT [-] Starting factory <Q2QService 'service'@-488b6a34> 2005/10/15 00:12 EDT [-] vertex.ptcp.PTCP starting on 8788 2005/10/15 00:12 EDT [-] Starting protocol <vertex.ptcp.PTCP instance at 0xb777884c> 2005/10/15 00:12 EDT [-] Binding PTCP/UDP 8788=8788 2005/10/15 00:12 EDT [-] vertex.q2q.Q2QBootstrapFactory starting on 8789 2005/10/15 00:12 EDT [-] Starting factory <vertex.q2q.Q2QBootstrapFactory instance at 0xb77787cc> ``` You can acquire a new Q2Q address using the vertex command line tool: ``` exarkun@boson:~$ vertex register exarkun@boson password ``` boson is a name on my local network, making this address rather useless. On the other hand, no one will be able to pretend to be me by cracking my hopelessly weak password ;) If you set up a Vertex server on a public host, you will be able to register a real, honest-to-goodness Q2Q address beneath its domain (careful - so will anyone else). vertex also offers a tool for requesting a signed certificate from the server. These certificates can be used to prove ones identity to foreign domains without involving ones home server. Another feature vertex provides is a toy file transfer application. Bob can issue a vertex receive while Alice issues a vertex send pointed at him, and the file will be transferred. Much of the real power of Q2Q is exposed to developers using two methods: listenQ2Q and connectQ2Q. These work in roughly the same way Twisted’s listenTCP and connectTCP work: they offer support for writing servers and clients that operate on the Q2Q network. * [http://divmod.org/projects/vertex Old Divmod project page for Vertex] * [http://www.swik.net/vertex Vertex on swik] * [wiki:UsingVertex Using Vertex], a work-in-progress page about using Vertex in an application * ‘’Development’’ version of [http://buildbot.divmod.org/apidocs/vertex.html Vertex API docs] * Vertex does ‘’‘not’‘’ currently pass its test suite on Windows or Mac OS X. * This [query:?group=status&component=Vertex&order=priority custom Vertex ticket query] will show you all the tickets (bugs and feature requests) filed against Vertex. * Vertex is likely usable as a Q2Q server on Linux and Windows. 
* Vertex is likely usable as a Q2Q client on Linux. * Release: [http://divmod.org/trac/attachment/wiki/SoftwareReleases/Vertex-0.3.0.tar.gz?format=raw Latest release: 0.3.0] ([source:/tags/releases/Vertex-0.3.0/NEWS.txt Release Notes]) * Trunk: svn co http://divmod.org/svn/Divmod/trunk/Vertex Vertex * Q2q is a protocol for: * * opening authenticated connections, even through NAT * allowing a user to reliably demonstrate their identity (for distributed authentication) * receiving real-time data directly from other users Q2Q provides a mechanism for a user to decide whether they want to expose their IP address to a third party before accepting a peer-to-peer connection. It is byte-stream oriented and application-agnostic. Any peer-to-peer application can use Q2Q to open connections and deliver messages. Divmod Vertex is the Divmod implemention of Q2Q. [http://swik.net/q2q Q2Q tag on swik] [[Image(http://divmod.org/tracdocs/fanclub_whtbck.png, right)]] Do you use Divmod’s code? Do you have a poster of Exarkun on your wall? Do you love our giant-killing chutzpah? Do you have a friend or family member on the Divmod team? Has work you’ve done using a Divmod project made you fantastically wealthy, such that you don’t know what to do with all your extra disposable income? Now you know. ## Joining Up¶ We offer a few different levels of membership. * Bronze Membership - $10 per month - Buy Glyph enough caffeine to get through one average day. * Silver Membership - $25 per month - Take JP out to a restaurant. * Gold Membership - $50 per month - Pay for Allen’s cell phone calls to Divmod HQ. * Platinum Membership - $100 per month - Buy Moe’s groceries for a month. * Diamond Membership - $250 per month - Buy an iPod for a tireless contributor. * fan-club-mithril - starting at $1000 per month (email to <EMAIL>) - Buy Amir a pony. ## Huh What?!¶ Our developer community has approached us to express their appreciation of our work, and to influence us to work on particular aspects of our product for the their benefit. We’ve created the Divmod Fan Club, or the ‘DFC’, to give those users a way to pay back, and a way for us to very publicly say ‘thank you!’. ## It’s Awesome - Why should I sign up?¶ Conscience. Influence. Status. ## Conscience¶ Simple: you give because you take. ## Influence¶ The club has monthly meetings over IRC where Divmod will propose a group of open-source features to be prioritized. Club members will be allowed to vote on the order those features will be implemented in. We are currently tallying the results of our first meeting - watch this space for our club-designated priorities! Members will be allowed a number of votes corresponding to the number of dollars per month that their membership costs, except Mithril members, who receive a maximum of 250 votes each, or can instead choose to apply their membership fees directly to the expense of implementing features of their choice. Members of the club who sign up for one of our Divmod commercial services (such as the soon-to-be-re-launched Divmod Mail) will occasionally get special treats, early access to new features, and extra goodies not available to the general public. Eventually they will also receive a ‘badge’ displayed next to their user-icon in public areas. 
Additionally, since we are currently using this club to beta-test our billing system, members who sign up during the beta period (prior to the first day of !PyCon) will receive an ‘arcanite alloy’ bonus, and be allowed 5 additional votes for their membership level during the first 3 meetings. Members who are currently subscribers to Divmod Mail will additionally receive this bonus indefinitely. ## Where does the money go?¶ To pay employees, particularly for time spent on community work that is not essential to Divmod’s core business. To hire or buy gifts for some of our open source contributors. To pay for hosting. Once that’s done, we may send some of this money to open-source contributors to our products whom we do not employ, and then, if we have hojillions of dollars left over, start paying for development of features in software Divmod’s code depends upon, such as SQLite and Python.
worker-route
rust
Rust
Crate worker_route
===

Worker Route is a crate designed for usage in Cloudflare Workers.

Examples
---

```
use serde::{Deserialize, Serialize};
use worker::{event, Env, Request, Response, Result, RouteContext, Router};
use worker_route::{get, Configure, Query, Service};

#[derive(Debug, Serialize, Deserialize)]
struct Bar {
    bar: String,
}

#[get("/bar")]
async fn bar(req: Query<Bar>, _: RouteContext<()>) -> Result<Response> {
    Response::from_json(&req.into_inner())
}

#[derive(Debug, Serialize, Deserialize)]
struct Foo {
    foo: String,
}

#[get("/foo")]
async fn foo(req: Query<Foo>, _: RouteContext<()>) -> Result<Response> {
    Response::from_json(&req.into_inner())
}

#[derive(Debug, Serialize, Deserialize)]
struct FooBar {
    foo: String,
    bar: String,
}

// your function can consist of (Query<T>, Request, RouteContext<()>) too
#[get("/foo-bar")]
async fn foo_bar(req: Query<FooBar>, _req: Request, _: RouteContext<()>) -> Result<Response> {
    Response::from_json(&req.into_inner())
}

#[derive(Debug, Deserialize, Serialize)]
struct Person {
    name: String,
    age: usize,
}

#[get("/person/:name/:age")]
async fn person(req: Query<Person>, _: RouteContext<()>) -> Result<Response> {
    Response::from_json(&req.into_inner())
}

fn init_routes(router: Router<'_, ()>) -> Router<'_, ()> {
    router
        .configure(bar)
        .configure(foo)
        .configure(person)
        .configure(foo_bar)
}

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    let router = Router::new();
    router.service(init_routes).run(req, env).await
}
```

Features
---

* Add routes to handler with macro attribute
* Extract query parameters or path from URL

Re-exports
---

* `pub use crate::http::HttpHeaders;`
* `pub use crate::http::HttpRequest;`
* `pub use crate::http::HttpResponse;`
* `pub use crate::http::Responder;`
* `pub use crate::http::ResponseError;`

Modules
---

* `http`: Various HTTP related types

Structs
---

* `Error`: Top level Worker-Route Error.
* `Query`: Extract typed information with the supplied struct and deserialize it with `worker::Url`.

Enums
---

* `ErrorCause`: All possible Error variants that may occur when working with `worker_route`.

Traits
---

* `Configure`: Implemented for `worker::Router` to configure the route's pattern.
* `Service`: Implemented for `worker::Router` to run external route configuration.
* `Wrap`: A handler middleware that provides access to `worker::Request`

Attribute Macros
---

* `delete`: A macro that creates a route handler with `worker::Router::delete`
* `get`: A macro that creates a route handler with `worker::Router::get`
* `head`: A macro that creates a route handler with `worker::Router::head`
* `options`: A macro that creates a route handler with `worker::Router::options`
* `patch`: A macro that creates a route handler with `worker::Router::patch`
* `post`: A macro that creates a route handler with `worker::Router::post`
* `put`: A macro that creates a route handler with `worker::Router::put`
* `route`: A macro that creates a route handler with multiple methods.
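The crate-level example above returns `worker::Response` directly; the re-exported `HttpResponse` type and `http::ResponseBuilder` can be used instead, since any type implementing `Responder` may be returned (see the `Responder` docs further below). A hedged sketch, where the route, struct and field names are illustrative and the exact handler signature is an assumption:

```
use serde::{Deserialize, Serialize};
use worker::RouteContext;
use worker_route::{get, http::ResponseBuilder, HttpResponse, Query};

#[derive(Debug, Serialize, Deserialize)]
struct Greeting {
    name: String,
}

// Illustrative handler: build the reply through ResponseBuilder and return it
// as an HttpResponse, which implements Responder.
#[get("/greet")]
async fn greet(req: Query<Greeting>, _: RouteContext<()>) -> Result<HttpResponse, worker::Error> {
    Ok(ResponseBuilder::init().json(req.into_inner()))
}
```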
Struct worker_route::http::HttpHeaders
===

```
pub struct HttpHeaders(/* private fields */);
```

A wrapper for `worker::Headers` with additional methods. It comes with two additional methods, `self.len()` and `self.is_empty()`.

Implementations
---

### impl HttpHeaders

#### pub fn new() -> Self

#### pub fn get(&self, name: &HeaderName) -> Option<String>

Returns all the values of a header within a `Headers` object with a given name.

##### Panics

Panics if `HeaderName` is constructed using the method `from_static` and the static string is an invalid header or contains spaces. E.g. the header contains an invalid header name or spaces.
#### pub fn has(&self, name: &HeaderName) -> bool Returns a boolean stating whether a `Headers` object contains a certain header. ##### Panics Panics if `HeaderName` is constructed from using the method `from_static` and the static string is an invalid header or contains spaces. Eg: Header contains invalid header’s name or spaces. #### pub fn len(&self) -> usize Returns the number of elements in the headers. #### pub fn is_empty(&self) -> bool Returns `true` if the headers contain no elements. #### pub fn append( &mut self, name: &HeaderName, value: &HeaderValue ) -> Result<(), ErrorAppend a header, keeping any that were set with an equivalent field name. ##### Errors Errors are returned if the header name or value is invalid (e.g. contains spaces) ##### Panics Panics if `HeaderName` or `HeaderValue` is constructed from using the method `from_static` and the static string is an invalid header or contains spaces. Eg: Header contains invalid header’s name or spaces. #### pub fn set( &mut self, name: &HeaderName, value: &HeaderValue ) -> Result<(), ErrorSets a new value for an existing header inside a `Headers` object, or adds the header if it does not already exist. ##### Errors Errors are returned if the header name or value is invalid (e.g. contains spaces) ##### Panics Panics if `HeaderName` or `HeaderValue` is constructed from using the method `from_static` and the static string is an invalid header or contains spaces. Eg: Header contains invalid header’s name or spaces. #### pub fn delete(&mut self, name: &HeaderName) -> Result<(), ErrorDeletes a header from a `Headers` object. ##### Errors Errors are returned if the header name or value is invalid (e.g. contains spaces) or if the JS Headers object’s guard is immutable (e.g. for an incoming request) ##### Panics Panics if `HeaderName` is constructed from using the method `from_static` and the static string is an invalid header or contains spaces. Eg: Header contains invalid header’s name or spaces. #### pub fn entries( &self ) -> Map<Map<IntoIter, fn(_: Result<JsValue, JsValue>) -> Array>, fn(_: Array) -> (String, String)Returns an iterator allowing to go through all key/value pairs contained in this object. #### pub fn keys(&self) -> impl Iterator<Item = StringReturns an iterator allowing you to go through all keys of the key/value pairs contained in this object. #### pub fn values(&self) -> impl Iterator<Item = StringReturns an iterator allowing you to go through all values of the key/value pairs contained in this object. Methods from Deref<Target = Headers> --- #### pub fn get(&self, name: &str) -> Result<Option<String>, ErrorReturns all the values of a header within a `Headers` object with a given name. Returns an error if the name is invalid (e.g. contains spaces) #### pub fn has(&self, name: &str) -> Result<bool, ErrorReturns a boolean stating whether a `Headers` object contains a certain header. Returns an error if the name is invalid (e.g. contains spaces) #### pub fn append(&mut self, name: &str, value: &str) -> Result<(), ErrorReturns an error if the name is invalid (e.g. contains spaces) #### pub fn set(&mut self, name: &str, value: &str) -> Result<(), ErrorSets a new value for an existing header inside a `Headers` object, or adds the header if it does not already exist. Returns an error if the name is invalid (e.g. contains spaces) #### pub fn delete(&mut self, name: &str) -> Result<(), ErrorDeletes a header from a `Headers` object. Returns an error if the name is invalid (e.g. 
contains spaces) or if the JS Headers object’s guard is immutable (e.g. for an incoming request) #### pub fn entries( &self ) -> Map<Map<IntoIter, fn(_: Result<JsValue, JsValue>) -> Array>, fn(_: Array) -> (String, String)Returns an iterator allowing to go through all key/value pairs contained in this object. #### pub fn keys(&self) -> impl Iterator<Item = StringReturns an iterator allowing you to go through all keys of the key/value pairs contained in this object. #### pub fn values(&self) -> impl Iterator<Item = StringReturns an iterator allowing you to go through all values of the key/value pairs contained in this object. Trait Implementations --- ### impl Clone for HttpHeaders #### fn clone(&self) -> HttpHeaders Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn default() -> HttpHeaders Returns the “default value” for a type. #### type Target = Headers The resulting type after dereferencing.#### fn deref(&self) -> &Self::Target Dereferences the value.### impl DerefMut for HttpHeaders #### fn deref_mut(&mut self) -> &mut Headers Mutably dereferences the value.### impl<'a> From<&'a mut HttpHeaders> for &'a mut Headers #### fn from(headers: &'a mut HttpHeaders) -> Self Converts to this type from the input type.### impl From<&Headers> for HttpHeaders #### fn from(headers: &Headers) -> Self Converts to this type from the input type.### impl From<&HttpHeaders> for Headers #### fn from(headers: &HttpHeaders) -> Self Converts to this type from the input type.### impl From<Headers> for HttpHeaders #### fn from(headers: Headers) -> Self Converts to this type from the input type.### impl From<HttpHeaders> for Headers #### fn from(headers: HttpHeaders) -> Self Converts to this type from the input type.### impl IntoIterator for &HttpHeaders #### type Item = (String, String) The type of the elements being iterated over.#### type IntoIter = Map<Map<IntoIter, fn(_: Result<JsValue, JsValue>) -> Array>, fn(_: Array) -> (String, String)Which kind of iterator are we turning this into?#### fn into_iter(self) -> Self::IntoIter Creates an iterator from a value. Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for HttpHeaders ### impl !Send for HttpHeaders ### impl !Sync for HttpHeaders ### impl Unpin for HttpHeaders ### impl UnwindSafe for HttpHeaders Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. 
U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct worker_route::http::HttpRequest === ``` pub struct HttpRequest { /* private fields */ } ``` Extracted from `worker::Request` mainly used for `Responder` trait. Implementations --- ### impl HttpRequest #### pub fn headers(&self) -> &HttpHeaders Returns the cloned request’s headers. #### pub fn method(&self) -> &Method Request method. #### pub fn path(&self) -> &str The path of this request. #### pub fn url(&self) -> Option<&UrlThe parsed `Url` of this `Request`. None if errors occured from parsing the `Url`. #### pub fn cookies(&self) -> impl Iterator<Item = Cookie<'_>Available on **crate feature `cookies`** only.Request cookies. Trait Implementations --- ### impl Debug for HttpRequest #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn from(req: &Request) -> Self This is constructed from code generation. Not a public method. Auto Trait Implementations --- ### impl RefUnwindSafe for HttpRequest ### impl !Send for HttpRequest ### impl !Sync for HttpRequest ### impl Unpin for HttpRequest ### impl UnwindSafe for HttpRequest Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct worker_route::http::HttpResponse === ``` pub struct HttpResponse(/* private fields */); ``` A wrapper for `worker::Response`. By using `HttpResponse`, it allows you to work with with the response object without having to work with `Result` and unecessary unwrap. Implementations --- ### impl HttpResponse #### pub fn from_response(res: Response) -> Self Constructs a response from worker::Response. #### pub fn empty() -> Self Constructs an empty response. Trait Implementations --- ### impl Debug for HttpResponse #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. 
#### fn from(res: HttpResponse) -> Self Converts to this type from the input type.### impl From<HttpResponse> for Response #### fn from(res: HttpResponse) -> Self Converts to this type from the input type.### impl From<HttpResponse> for Result<Response#### fn from(res: HttpResponse) -> Self Converts to this type from the input type.### impl From<Response> for HttpResponse #### fn from(res: Response) -> Self Converts to this type from the input type.### impl From<Result<Response, Error>> for HttpResponse #### fn from(res: Result<Response>) -> Self Converts to this type from the input type.### impl Responder for HttpResponse #### fn to_response(self, _: HttpRequest) -> HttpResponse Convert `Self` to `HttpResponse`Auto Trait Implementations --- ### impl RefUnwindSafe for HttpResponse ### impl !Send for HttpResponse ### impl !Sync for HttpResponse ### impl Unpin for HttpResponse ### impl UnwindSafe for HttpResponse Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Trait worker_route::http::Responder === ``` pub trait Responder { // Required method fn to_response(self, req: HttpRequest) -> HttpResponse; } ``` A worker’s custom response implementation. 
Examples ---
```
use serde::{Deserialize, Serialize};
use worker::{Request, Response, RouteContext};
use worker_route::{get, HttpResponse, HttpRequest, http::ResponseBuilder, Responder};

#[allow(unused)]
#[derive(Deserialize, Serialize)]
struct Foo {
    foo: String,
}

impl Responder for Foo {
    fn to_response(self, _req: HttpRequest) -> HttpResponse {
        ResponseBuilder::init().json(self)
    }
}

#[get("/custom_response")]
async fn custom_response(_: Request, _: RouteContext<()>) -> Result<Foo, worker::Error> {
    Ok(Foo { foo: String::from("Bar") })
}
```
Required Methods --- #### fn to_response(self, req: HttpRequest) -> HttpResponse Convert `Self` to `HttpResponse`.
Implementations on Foreign Types --- ### impl Responder for Response #### fn to_response(self, _: HttpRequest) -> HttpResponse ### impl Responder for &'static str #### fn to_response(self, _: HttpRequest) -> HttpResponse ### impl Responder for Vec<u8> #### fn to_response(self, _: HttpRequest) -> HttpResponse ### impl Responder for String #### fn to_response(self, _: HttpRequest) -> HttpResponse ### impl Responder for Value #### fn to_response(self, _: HttpRequest) -> HttpResponse ### impl Responder for Cow<'_, str> #### fn to_response(self, _: HttpRequest) -> HttpResponse ### impl Responder for &'static [u8] #### fn to_response(self, _: HttpRequest) -> HttpResponse
Implementors --- ### impl Responder for HttpResponse
Trait worker_route::http::ResponseError ===
```
pub trait ResponseError: Debug + Display {
    // Required methods
    fn error_response(&self, req: HttpRequest) -> HttpResponse;
    fn description(&self) -> String;

    // Provided method
    fn status_code(&self) -> StatusCode { ... }
}
```
Generate `HttpResponse` for custom error implementations.
Required Methods --- #### fn error_response(&self, req: HttpRequest) -> HttpResponse Creates a `Response` from the error. #### fn description(&self) -> String Get the underlying error message.
Provided Methods --- #### fn status_code(&self) -> StatusCode Returns the status code for an error. Defaults to 500 Internal Server Error.
Implementations on Foreign Types --- ### impl ResponseError for Error #### fn error_response(&self, req: HttpRequest) -> HttpResponse #### fn status_code(&self) -> StatusCode #### fn description(&self) -> String ### impl ResponseError for Error #### fn error_response(&self, _: HttpRequest) -> HttpResponse #### fn description(&self) -> String ### impl ResponseError for Error #### fn error_response(&self, _: HttpRequest) -> HttpResponse #### fn description(&self) -> String #### fn status_code(&self) -> StatusCode
Implementors --- ### impl ResponseError for worker_route::Error ### impl ResponseError for ToStrError
Module worker_route::http === Various HTTP-related types.
Modules ---
* `header`: HTTP header types.
Structs ---
* `ContentType`: `Content-Type` header, defined in RFC 9110 8.3.
* `Cookie` (`cookies`): Representation of an HTTP cookie.
* `HttpHeaders`: A wrapper for `worker::Headers` with additional methods.
* `HttpRequest`: Extracted from `worker::Request`, mainly used for the `Responder` trait.
* `HttpResponse`: A wrapper for `worker::Response`.
* `ResponseBuilder`: An alternative `worker::Response` builder.
* `StatusCode`: An HTTP status code (`status-code` in RFC 7230 et al.).
Enums ---
* `Body`: A wrapper for `worker::ResponseBody`.
Traits ---
* `Responder`: A worker’s custom response implementation.
* `ResponseError`: Generate `HttpResponse` for custom error implementations.
Struct worker_route::Error ===
```
pub struct Error { /* private fields */ }
```
Top level Worker-Route Error.
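As a brief, hedged sketch (not taken from the crate’s own examples), the cause of an `Error` can be inspected through the `ErrorCause` enum documented further below; the `describe` helper here is hypothetical, and the sketch assumes `worker::Error` implements `Display` as it does in the `worker` crate:
```
use worker_route::{Error, ErrorCause};

// Hypothetical helper: map each documented ErrorCause variant to a short message.
fn describe(err: &Error) -> String {
    match err.cause() {
        ErrorCause::Worker(inner) => format!("worker error: {}", inner),
        ErrorCause::Query => format!("query extraction failed: {}", err.description()),
        ErrorCause::Header => String::from("header operation failed"),
        ErrorCause::Json => String::from("response body could not be serialized"),
    }
}
```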
Implementations --- ### impl Error #### pub fn cause(&self) -> &ErrorCause Returns the underlying cause of the error. #### pub fn description(&self) -> String Get the underlying error message.
Trait Implementations --- ### impl Debug for Error #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn from(err: Error) -> Self Converts to this type from the input type. ### impl From<Error> for Error #### fn from(err: Error) -> Self Converts to this type from the input type. ### impl From<Error> for Error #### fn from(err: Error) -> Self Converts to this type from the input type. ### impl From<Error> for Error #### fn from(err: Error) -> Self Converts to this type from the input type. ### impl From<ToStrError> for Error #### fn from(err: ToStrError) -> Self Converts to this type from the input type. ### impl ResponseError for Error #### fn error_response(&self, req: HttpRequest) -> HttpResponse Creates a `Response` from the error. #### fn status_code(&self) -> StatusCode Returns the status code for an error. Get the underlying error message.
Auto Trait Implementations --- ### impl !RefUnwindSafe for Error ### impl !Send for Error ### impl !Sync for Error ### impl Unpin for Error ### impl !UnwindSafe for Error
Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToString for T where T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion.
Struct worker_route::Query ===
```
pub struct Query<T>(/* private fields */);
```
Extract typed information with the supplied struct and deserialize it with `worker::Url`. To extract typed data from `worker::Url`, `T` must implement the `DeserializeOwned` trait.
```
use serde::{Deserialize, Serialize};
use worker::{Response, Result, RouteContext};
use worker_route::{get, Query};

#[derive(Debug, Serialize, Deserialize)]
struct StructFoo {
    foo: String,
}

#[get("/foo-struct")]
async fn struct_foo(req: Query<StructFoo>, _: RouteContext<()>) -> Result<Response> {
    // works
    let StructFoo { foo } = req.into_inner();
    // rest code
}

#[derive(Debug, Serialize, Deserialize)]
struct TupleFoo(String);

#[get("/foo-tuple")]
async fn tuple_foo(req: Query<TupleFoo>, _: RouteContext<()>) -> Result<Response> {
    // you won’t even get here
    let TupleFoo(foo) = req.into_inner();
    // rest code
}
```
Notes --- The `Request` parameter can be omitted too. When omitting either `Query<T>` or `Request`, the remaining parameters must always be in the correct order.
The correct orders are: * (`Request`, `RouteContext<D: Params>`) * (`Query<T>`, `RouteContext<D: Params>`) * (`Query<T>`, `Request`, `RouteContext<D: Params>`) ``` use serde::{Deserialize, Serialize}; use worker::{Response, Request, Result, RouteContext}; use worker_route::{get, Query}; #[derive(Debug, Serialize, Deserialize)] struct Foo { foo: String, } #[get("/foo-query")] async fn without_req(req: Query<Foo>, _: RouteContext<()>) -> Result<Response> { // rest code Response::empty() } #[get("/foo-with-request")] async fn with_request(req: Query<Foo>, _: Request, _: RouteContext<()>) -> Result<Response> { // rest code Response::empty() } ``` Implementations --- ### impl<T> Query<T#### pub fn into_inner(self) -> T Acess the owned `T` ### impl<T: DeserializeOwned> Query<T#### pub fn from_query_path<D: Params>( url: &Url, ctx: &D, strict: bool ) -> Result<Self, ErrorDeserialize the given `T` from the URL query string. ``` use serde::{Deserialize, Serialize}; use worker::{console_log, Request, Response, Result, RouteContext}; use worker_route::{get, Query}; #[derive(Debug, Deserialize, Serialize)] struct Person { name: String, age: usize, } #[get("/persons/:name/:age")] async fn person(req: Request, ctx: RouteContext<()>) -> Result<Response> { let person = Query::<Person>::from_query_path(&req.url().unwrap(), &ctx, true); let Person { name, age } = person.unwrap().into_inner(); console_log!("name: {name}, age: {age}"); Response::empty() } ``` ##### Errors Currently only regular structs are supported. Errors are returned if the given `T` is not a regular struct (eg: tuple, unit). Trait Implementations --- ### impl<T> AsMut<T> for Query<T#### fn as_mut(&mut self) -> &mut T Converts this type into a mutable reference of the (usually inferred) input type.### impl<T> AsRef<T> for Query<T#### fn as_ref(&self) -> &T Converts this type into a shared reference of the (usually inferred) input type.### impl<T: Clone> Clone for Query<T#### fn clone(&self) -> Query<TReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Formats the value using the given formatter. The resulting type after dereferencing.#### fn deref(&self) -> &Self::Target Dereferences the value.### impl<T> DerefMut for Query<T#### fn deref_mut(&mut self) -> &mut T Mutably dereferences the value.### impl<T: Display> Display for Query<T#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read moreAuto Trait Implementations --- ### impl<T> RefUnwindSafe for Query<T>where T: RefUnwindSafe, ### impl<T> Send for Query<T>where T: Send, ### impl<T> Sync for Query<T>where T: Sync, ### impl<T> Unpin for Query<T>where T: Unpin, ### impl<T> UnwindSafe for Query<T>where T: UnwindSafe, Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. 
Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> Formattable for T where T: Deref, <T as Deref>::Target: Formattable, ### impl<T> Parsable for T where T: Deref, <T as Deref>::Target: Parsable,
Enum worker_route::ErrorCause ===
```
pub enum ErrorCause {
    Worker(Error),
    Query,
    Header,
    Json,
}
```
All possible Error variants that may occur when working with `worker_route`.
Variants --- ### Worker(Error) Errors occurred from `worker::Error`. ### Query Errors occurred from `Query`. ### Header Errors occurred from `HttpHeaders` operations. ### Json Errors occurred from `ResponseBuilder`.
Trait Implementations --- ### impl Debug for ErrorCause #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter.
Auto Trait Implementations --- ### impl !RefUnwindSafe for ErrorCause ### impl !Send for ErrorCause ### impl !Sync for ErrorCause ### impl Unpin for ErrorCause ### impl !UnwindSafe for ErrorCause
Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion.
Trait worker_route::Configure ===
```
pub trait Configure<D> {
    // Required method
    fn configure<F: RouteFactory<D>>(self, f: F) -> Self;
}
```
Implemented for `worker::Router` to configure the route’s pattern.
Example --- ``` use serde::{Deserialize, Serialize}; use worker::{event, Env, Request, Response, ResponseBody, Result, RouteContext, Router}; use worker_route::{get, Configure, Query, Service}; #[derive(Debug, Deserialize, Serialize)] struct Person { name: String, age: usize, } #[get("/person/:name/:age")] async fn person(req: Query<Person>, _: RouteContext<()>) -> Result<Response> { Response::from_json(&req.into_inner()) } #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { let router = Router::new(); router.configure(person).run(req, env).await } ``` Required Methods --- #### fn configure<F: RouteFactory<D>>(self, f: F) -> Self Implementations on Foreign Types --- ### impl<D> Configure<D> for Router<'_, D#### fn configure<F: RouteFactory<D>>(self, f: F) -> Self Implementors --- Trait worker_route::Service === ``` pub trait Service { // Required method fn service<F: FnOnce(Self) -> Self>(self, f: F) -> Self where Self: Sized; } ``` Implemented for `worker::Router` to run external route configuration. This trait is useful for splitting the configuration to a different module. Example --- ``` use serde::{Deserialize, Serialize}; use worker::{event, Env, Request, Response, ResponseBody, Result, RouteContext, Router}; use worker_route::{get, Service, Configure, Query}; #[derive(Debug, Serialize, Deserialize)] struct Bar { bar: String, } #[get("/bar")] async fn bar(req: Query<Bar>, _: RouteContext<()>) -> Result<Response> { Response::from_body(ResponseBody::Body(req.into_inner().bar.as_bytes().into())) } #[derive(Debug, Serialize, Deserialize)] struct Foo { foo: String, } #[get("/foo")] async fn foo(req: Query<Foo>, _: RouteContext<()>) -> Result<Response> { Response::from_body(ResponseBody::Body(req.into_inner().foo.as_bytes().into())) } #[derive(Debug, Deserialize, Serialize)] struct Person { name: String, age: usize, } #[get("/person/:name/:age")] async fn person(req: Query<Person>, _: RouteContext<()>) -> Result<Response> { Response::from_json(&req.into_inner()) } // wrapper function fn init_routes(router: Router<'_, ()>) -> Router<'_, ()> { router.configure(bar).configure(foo).configure(person) } #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { let router = Router::new(); // before // router.configure(bar).configure(foo).configure(person).run(req, env).await // after // router.service(init_routes).run(req, env).await router.service(init_routes).run(req, env).await } ``` Required Methods --- #### fn service<F: FnOnce(Self) -> Self>(self, f: F) -> Selfwhere Self: Sized, Implementations on Foreign Types --- ### impl<D> Service for Router<'_, D#### fn service<F: FnOnce(Self) -> Self>(self, f: F) -> Self Implementors --- Trait worker_route::Wrap === ``` pub trait Wrap { type Output; // Required method fn wrap(req: &Request) -> Self::Output; } ``` A handler middleware provides an access to `worker::Request` Currently this is only used to return `Cors` Examples --- ``` use worker::{Cors, Request, Response, Result, RouteContext}; use worker_route::{route, Wrap}; // Doesn't necessarily have to be a unit struct. // It can be anything. pub struct MyCors; impl Wrap for MyCors { type Output = Cors; fn wrap(req: &Request) -> Self::Output { Cors::default() } } #[route("/hello-world", method = "get", cors = MyCors)] fn hello_world(req: Request, ctx: RouteContext<()>) -> Result<String> { Ok("Hello world.".to_owned()) } ``` Required Associated Types --- #### type Output The output of the return value. 
Required Methods --- #### fn wrap(req: &Request) -> Self::Output
Implementors ---
Attribute Macro worker_route::delete ===
```
#[delete]
```
A macro that creates a route handler with `worker::Router::delete`. `worker::Router::delete_async` will be used if the handler is an async fn.
Usage ---
```
#[delete("/path")]
```
Attributes ---
* `"path"`: Worker’s path.
* `Option<cors>`: Wrap a struct that implements `worker_route::MwService`.
* `Option<lazy_cors>`: Wrap a lazily initialized Cors.
* `Option<wrap>`: Register an options handler with the provided cors. Defaults to `None`.
Examples ---
```
use worker::{Result, Request, RouteContext, Response};
use worker_route::delete;

#[delete("/path")]
async fn foo(req: Request, ctx: RouteContext<()>) -> Result<Response> {
    Response::empty()
}
```
Attribute Macro worker_route::get ===
```
#[get]
```
A macro that creates a route handler with `worker::Router::get`. `worker::Router::get_async` will be used if the handler is an async fn.
Usage ---
```
#[get("/path")]
```
Attributes ---
* `"path"`: Worker’s path.
* `Option<cors>`: Wrap a struct that implements `worker_route::MwService`.
* `Option<lazy_cors>`: Wrap a lazily initialized Cors.
* `Option<wrap>`: Register an options handler with the provided cors. Defaults to `None`.
Examples ---
```
use worker::{Result, Request, RouteContext, Response};
use worker_route::get;

#[get("/path")]
async fn foo(req: Request, ctx: RouteContext<()>) -> Result<Response> {
    Response::empty()
}
```
Attribute Macro worker_route::head ===
```
#[head]
```
A macro that creates a route handler with `worker::Router::head`. `worker::Router::head_async` will be used if the handler is an async fn.
Usage ---
```
#[head("/path")]
```
Attributes ---
* `"path"`: Worker’s path.
* `Option<cors>`: Wrap a struct that implements `worker_route::MwService`.
* `Option<lazy_cors>`: Wrap a lazily initialized Cors.
* `Option<wrap>`: Register an options handler with the provided cors. Defaults to `None`.
Examples ---
```
use worker::{Result, Request, RouteContext, Response};
use worker_route::head;

#[head("/path")]
async fn foo(req: Request, ctx: RouteContext<()>) -> Result<Response> {
    Response::empty()
}
```
Attribute Macro worker_route::options ===
```
#[options]
```
A macro that creates a route handler with `worker::Router::options`. `worker::Router::options_async` will be used if the handler is an async fn.
Usage ---
```
#[options("/path")]
```
Attributes ---
* `"path"`: Worker’s path.
* `Option<cors>`: Wrap a struct that implements `worker_route::MwService`.
* `Option<lazy_cors>`: Wrap a lazily initialized Cors.
* `Option<wrap>`: Register an options handler with the provided cors. Defaults to `None`.
Examples ---
```
use worker::{Result, Request, RouteContext, Response};
use worker_route::options;

#[options("/path")]
async fn foo(req: Request, ctx: RouteContext<()>) -> Result<Response> {
    Response::empty()
}
```
Attribute Macro worker_route::patch ===
```
#[patch]
```
A macro that creates a route handler with `worker::Router::patch`. `worker::Router::patch_async` will be used if the handler is an async fn.
Usage ---
```
#[patch("/path")]
```
Attributes ---
* `"path"`: Worker’s path.
* `Option<cors>`: Wrap a struct that implements `worker_route::MwService`.
* `Option<lazy_cors>`: Wrap a lazily initialized Cors.
* `Option<wrap>`: Register an options handler with the provided cors. Defaults to `None`.
Examples ---
```
use worker::{Result, Request, RouteContext, Response};
use worker_route::patch;

#[patch("/path")]
async fn foo(req: Request, ctx: RouteContext<()>) -> Result<Response> {
    Response::empty()
}
```
Attribute Macro worker_route::post ===
```
#[post]
```
A macro that creates a route handler with `worker::Router::post`. `worker::Router::post_async` will be used if the handler is an async fn.
Usage ---
```
#[post("/path")]
```
Attributes ---
* `"path"`: Worker’s path.
* `Option<cors>`: Wrap a struct that implements `worker_route::MwService`.
* `Option<lazy_cors>`: Wrap a lazily initialized Cors.
* `Option<wrap>`: Register an options handler with the provided cors. Defaults to `None`.
Examples ---
```
use worker::{Result, Request, RouteContext, Response};
use worker_route::post;

#[post("/path")]
async fn foo(req: Request, ctx: RouteContext<()>) -> Result<Response> {
    Response::empty()
}
```
Attribute Macro worker_route::put ===
```
#[put]
```
A macro that creates a route handler with `worker::Router::put`. `worker::Router::put_async` will be used if the handler is an async fn.
Usage ---
```
#[put("/path")]
```
Attributes ---
* `"path"`: Worker’s path.
* `Option<cors>`: Wrap a struct that implements `worker_route::MwService`.
* `Option<lazy_cors>`: Wrap a lazily initialized Cors.
* `Option<wrap>`: Register an options handler with the provided cors. Defaults to `None`.
Examples ---
```
use worker::{Result, Request, RouteContext, Response};
use worker_route::put;

#[put("/path")]
async fn foo(req: Request, ctx: RouteContext<()>) -> Result<Response> {
    Response::empty()
}
```
Attribute Macro worker_route::route ===
```
#[route]
```
A macro that creates a route handler with multiple methods.
Usage ---
```
#[route("path", method = "method", cors = "cors", lazy_cors = "lazy_cors", wrap)]
```
Attributes ---
* `"path"`: Worker’s path.
* `method`: An array of methods or a single method as a string literal.
* `Option<cors>`: Wrap a struct that implements `worker_route::MwService`.
* `Option<lazy_cors>`: Wrap a lazily initialized Cors.
* `Option<wrap>`: Register an options handler with the provided cors. Defaults to `None`.
Examples ---
```
use worker::{Result, Request, RouteContext, Response};
use worker_route::route;

#[route("/path", method = "get", method = "post")]
async fn foo(req: Request, ctx: RouteContext<()>) -> Result<Response> {
    Response::empty()
}
```
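For completeness, a hedged sketch of wiring the `foo` handler from the example above into a router, mirroring the `Configure` example earlier in these docs:
```
use worker::{event, Env, Request, Response, Result, Router};
use worker_route::Configure;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // `configure` registers both the GET and POST routes that `#[route]` generated for `foo`.
    Router::new().configure(foo).run(req, env).await
}
```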
rkyv-test
rust
Rust
Crate rkyv_test === rkyv --- rkyv (*archive*) is a zero-copy deserialization framework for Rust. It’s similar to other zero-copy deserialization frameworks such as Cap’n Proto and FlatBuffers. However, while the former have external schemas and heavily restricted data types, rkyv allows all serialized types to be defined in code and can serialize a wide variety of types that the others cannot. Additionally, rkyv is designed to have little to no overhead, and in most cases will perform exactly the same as native types. ### Design Like serde, rkyv uses Rust’s powerful trait system to serialize data without the need for reflection. Despite having a wide array of features, you also only pay for what you use. If your data checks out, the serialization process can be as simple as a `memcpy`! Like serde, this allows rkyv to perform at speeds similar to handwritten serializers. Unlike serde, rkyv produces data that is guaranteed deserialization free. If you wrote your data to disk, you can just `mmap` your file into memory, cast a pointer, and your data is ready to use. This makes it ideal for high-performance and IO-bound applications. Limited data mutation is supported through `Pin` APIs, and archived values can be truly deserialized with `Deserialize` if full mutation capabilities are needed. The book has more details on the design and capabilities of rkyv. ### Type support rkyv has a hashmap implementation that is built for zero-copy deserialization, so you can serialize your hashmaps with abandon. The implementation performs perfect hashing with the compress, hash and displace algorithm to use as little memory as possible while still performing fast lookups. It also comes with a B+ tree implementation that is built for maximum performance by splitting data into easily-pageable 4KB segments. This makes it perfect for building immutable databases and structures for bulk data. rkyv also has support for contextual serialization, deserialization, and validation. It can properly serialize and deserialize shared pointers like `Rc` and `Arc`, and can be extended to support custom contextual types. Finally, rkyv makes it possible to serialize trait objects and use them *as trait objects* without deserialization. See the `archive_dyn` crate for more details. ### Tradeoffs While rkyv is a great format for final data, it lacks a full schema system and isn’t well equipped for data migration and schema upgrades. If your use case requires these capabilities, you may need additional libraries the build these features on top of rkyv. You can use other serialization frameworks like serde with the same types as rkyv conflict-free. ### Features * `alloc`: Enables types that require the `alloc` crate. Enabled by default. * `arbitrary_enum_discriminant`: Enables the `arbitrary_enum_discriminant` feature for stable multibyte enum discriminants using `archive_le` and `archive_be`. Requires nightly. * `archive_be`: Forces archives into a big-endian format. This guarantees cross-endian compatibility optimized for big-endian architectures. * `archive_le`: Forces archives into a little-endian format. This guarantees cross-endian compatibility optimized for little-endian architectures. * `copy`: Enables copy optimizations for packed copyable data types. Requires nightly. * `copy_unsafe`: Automatically opts all potentially copyable types into copy optimization. This broadly improves performance but may cause uninitialized bytes to be copied to the output. Requires nightly. 
* `size_16`: Archives integral `*size` types as 16-bit integers. This is intended to be used only for small archives and may not handle large, more general data. * `size_32`: Archives integral `*size` types as 32-bit integers. Enabled by default. * `size_64`: Archives integral `*size` types as 64-bit integers. This is intended to be used only for very large archives and may cause unnecessary data bloat. * `std`: Enables standard library support. Enabled by default. * `strict`: Guarantees that types will have the same representations across platforms and compilations. This is already the case in practice, but this feature provides a guarantee along with C type compatibility. *Note*: Enabling `strict` will disable `Archive` implementations for tuples, as tuples do not have a C type layout. Making a generic `Tuple<T1, T2>` and deriving `Archive` for it should provide similar functionality. * `validation`: Enables validation support through `bytecheck`. ### Crate support Some common crates need to be supported by rkyv before an official integration has been made. Support is provided by rkyv for these crates, but in the future crates should depend on rkyv and provide their own implementations. The crates that already have support provided by rkyv should work toward integrating the implementations into themselves. Crates supported by rkyv: * `indexmap` * `rend` *Enabled automatically when using endian-specific archive features.* * `tinyvec` * `uuid` Support for each of these crates can be enabled with a feature of the same name. Additionally, the following external crate features are available: * `tinyvec_alloc`: Supports types behind the `alloc` feature in `tinyvec`. * `uuid_std`: Enables the `std` feature in `uuid`. ### Examples * See `Archive` for examples of how to use rkyv through the derive macro and manual implementation. * For more details on the derive macro and its capabilities, see `Archive`. * Fully worked examples using rkyv are available in the `examples` directory of the source repo. Re-exports --- `pub use rend;``pub use validation::check_archived_root_with_context;``pub use validation::check_archived_value_with_context;``pub use validation::validators::check_archived_root;``pub use validation::validators::check_archived_value;``pub use util::*;`Modules --- boxedAn archived version of `Box`. collectionsArchived versions of standard library containers. deDeserialization traits, deserializers, and adapters. ffiArchived versions of FFI types. netArchived versions of network types. nicheManually niched type replacements. opsArchived versions of `ops` types. optionAn archived version of `Option`. rcArchived versions of shared pointers. rel_ptrRelative pointer implementations and options. resultAn archived version of `Result`. serSerialization traits, serializers, and adapters. stringArchived versions of string types. timeArchived versions of `time` types. utilUtilities for common archive operations. validationValidation implementations and helper types. vecAn archived version of `Vec`. withWrapper type support and commonly used wrappers. Macros --- from_archivedReturns the unarchived value of the given archived primitive. out_fieldReturns a tuple of the field offset and a mutable pointer to the field of the given struct pointer. to_archivedReturns the archived value of the given archived primitive. Structs --- InfallibleA fallible type that cannot produce errors. Traits --- ArchiveA type that can be used without deserializing. 
ArchivePointeeAn archived type with associated metadata for its relative pointer. ArchiveUnsizedA counterpart of `Archive` that’s suitable for unsized types. DeserializeConverts a type back from its archived form. DeserializeUnsizedA counterpart of `Deserialize` that’s suitable for unsized types. FallibleA type that can produce an error. SerializeConverts a type to its archived form. SerializeUnsizedA counterpart of `Serialize` that’s suitable for unsized types. Functions --- from_bytesChecks and deserializes a value from the given bytes. Type Definitions --- ArchivedAlias for the archived version of some `Archive` type. ArchivedMetadataAlias for the archived metadata for some `ArchiveUnsized` type. FixedIsizeThe native type that `isize` is converted to for archiving. FixedUsizeThe native type that `usize` is converted to for archiving. MetadataResolverAlias for the metadata resolver for some `ArchiveUnsized` type. RawRelPtrThe default raw relative pointer. RelPtrThe default relative pointer. ResolverAlias for the resolver for some `Archive` type. Derive Macros --- ArchiveDerives `Archive` for the labeled type. DeserializeDerives `Deserialize` for the labeled type. SerializeDerives `Serialize` for the labeled type.
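The crate-level docs above defer concrete examples to `Archive`; the following is a minimal, hedged round-trip sketch. It assumes the usual rkyv 0.7 helpers `to_bytes` and `archived_root` are reachable at the crate root through the `pub use util::*;` re-export, and that the default `std`/`size_32` features are enabled:
```
use rkyv_test::{Archive, Deserialize, Serialize, Infallible};

#[derive(Archive, Serialize, Deserialize, Debug, PartialEq)]
struct Event {
    id: u32,
    name: String,
}

fn main() {
    let value = Event { id: 7, name: "flood".to_string() };

    // Serialize into an aligned byte buffer (assumed AllocSerializer with 256 bytes of scratch).
    let bytes = rkyv_test::to_bytes::<_, 256>(&value).expect("serialization failed");

    // Zero-copy access to the archived root at the end of the buffer.
    // Safety: `bytes` was produced by `to_bytes` for `Event`; with the `validation`
    // feature, `check_archived_root` performs this check at runtime instead.
    let archived = unsafe { rkyv_test::archived_root::<Event>(&bytes[..]) };
    assert_eq!(archived.id, 7);

    // Full deserialization back into the original type.
    let restored: Event = archived.deserialize(&mut Infallible).expect("deserialization failed");
    assert_eq!(restored, value);
}
```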
Trait rkyv_test::Deserialize ===
```
pub trait Deserialize<T, D: Fallible + ?Sized> {
    fn deserialize(&self, deserializer: &mut D) -> Result<T, D::Error>;
}
```
Converts a type back from its archived form. Some types may require specific deserializer capabilities, such as `Rc` and `Arc`. In these cases, the deserializer type `D` should be bound so that it implements traits that provide those capabilities (e.g. `SharedDeserializeRegistry`). This can be derived with `Deserialize`.
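As an illustration of that last point (a sketch, not taken from the crate’s docs): deserializing an archived `Rc` needs a deserializer that implements `SharedDeserializeRegistry`. The sketch assumes rkyv 0.7’s `SharedDeserializeMap` is available under the re-exported `de::deserializers` module, plus the same `to_bytes`/`archived_root` helpers as above:
```
use std::rc::Rc;
use rkyv_test::{Archive, Deserialize, Serialize};
use rkyv_test::de::deserializers::SharedDeserializeMap;

#[derive(Archive, Serialize, Deserialize, Debug, PartialEq)]
struct Shared {
    label: Rc<String>,
}

fn main() {
    let value = Shared { label: Rc::new("shared".to_string()) };
    let bytes = rkyv_test::to_bytes::<_, 256>(&value).expect("serialization failed");

    // Safety: the buffer was produced by `to_bytes` for `Shared`.
    let archived = unsafe { rkyv_test::archived_root::<Shared>(&bytes[..]) };

    // `Infallible` is not enough here: ArchivedRc can only be deserialized with a
    // deserializer that implements SharedDeserializeRegistry, so shared values are pooled.
    let restored: Shared = archived
        .deserialize(&mut SharedDeserializeMap::new())
        .expect("deserialization failed");
    assert_eq!(restored, value);
}
```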
Required Methods --- #### fn deserialize(&self, deserializer: &mutD) -> Result<T, D::ErrorDeserializes using the given deserializer Implementations on Foreign Types --- ### impl<D: Fallible + ?Sized> Deserialize<RangeFull, D> for RangeFull #### fn deserialize(&self, _: &mutD) -> Result<Self, D::Error### impl<D: Fallible + ?Sized> Deserialize<i16, D> for i16 #### fn deserialize(&self, _: &mutD) -> Result<i16, D::Error### impl<D: Fallible + ?Sized> Deserialize<i32, D> for i32 #### fn deserialize(&self, _: &mutD) -> Result<i32, D::Error### impl<D: Fallible + ?Sized> Deserialize<i64, D> for i64 #### fn deserialize(&self, _: &mutD) -> Result<i64, D::Error### impl<D: Fallible + ?Sized> Deserialize<i128, D> for i128 #### fn deserialize(&self, _: &mutD) -> Result<i128, D::Error### impl<D: Fallible + ?Sized> Deserialize<u16, D> for u16 #### fn deserialize(&self, _: &mutD) -> Result<u16, D::Error### impl<D: Fallible + ?Sized> Deserialize<u32, D> for u32 #### fn deserialize(&self, _: &mutD) -> Result<u32, D::Error### impl<D: Fallible + ?Sized> Deserialize<u64, D> for u64 #### fn deserialize(&self, _: &mutD) -> Result<u64, D::Error### impl<D: Fallible + ?Sized> Deserialize<u128, D> for u128 #### fn deserialize(&self, _: &mutD) -> Result<u128, D::Error### impl<D: Fallible + ?Sized> Deserialize<f32, D> for f32 #### fn deserialize(&self, _: &mutD) -> Result<f32, D::Error### impl<D: Fallible + ?Sized> Deserialize<f64, D> for f64 #### fn deserialize(&self, _: &mutD) -> Result<f64, D::Error### impl<D: Fallible + ?Sized> Deserialize<char, D> for char #### fn deserialize(&self, _: &mutD) -> Result<char, D::Error### impl<D: Fallible + ?Sized> Deserialize<NonZeroI16, D> for NonZeroI16 #### fn deserialize(&self, _: &mutD) -> Result<NonZeroI16, D::Error### impl<D: Fallible + ?Sized> Deserialize<NonZeroI32, D> for NonZeroI32 #### fn deserialize(&self, _: &mutD) -> Result<NonZeroI32, D::Error### impl<D: Fallible + ?Sized> Deserialize<NonZeroI64, D> for NonZeroI64 #### fn deserialize(&self, _: &mutD) -> Result<NonZeroI64, D::Error### impl<D: Fallible + ?Sized> Deserialize<NonZeroI128, D> for NonZeroI128 #### fn deserialize(&self, _: &mutD) -> Result<NonZeroI128, D::Error### impl<D: Fallible + ?Sized> Deserialize<NonZeroU16, D> for NonZeroU16 #### fn deserialize(&self, _: &mutD) -> Result<NonZeroU16, D::Error### impl<D: Fallible + ?Sized> Deserialize<NonZeroU32, D> for NonZeroU32 #### fn deserialize(&self, _: &mutD) -> Result<NonZeroU32, D::Error### impl<D: Fallible + ?Sized> Deserialize<NonZeroU64, D> for NonZeroU64 #### fn deserialize(&self, _: &mutD) -> Result<NonZeroU64, D::Error### impl<D: Fallible + ?Sized> Deserialize<NonZeroU128, D> for NonZeroU128 #### fn deserialize(&self, _: &mutD) -> Result<NonZeroU128, D::Error### impl<T: ?Sized, D: Fallible + ?Sized> Deserialize<PhantomData<T>, D> for PhantomData<T#### fn deserialize(&self, _: &mutD) -> Result<PhantomData<T>, D::Error### impl<D: Fallible + ?Sized> Deserialize<PhantomPinned, D> for PhantomPinned #### fn deserialize(&self, _: &mutD) -> Result<PhantomPinned, D::Error### impl<D: Fallible + ?Sized, T11: Archive, T10: Archive, T9: Archive, T8: Archive, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T11, T10, T9, T8, T7, T6, T5, T4, T3, T2, T1, T0), D> for (T11::Archived, T10::Archived, T9::Archived, T8::Archived, T7::Archived, T6::Archived, T5::Archived, T4::Archived, T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T11::Archived: Deserialize<T11, D>,    T10::Archived: 
Deserialize<T10, D>,    T9::Archived: Deserialize<T9, D>,    T8::Archived: Deserialize<T8, D>,    T7::Archived: Deserialize<T7, D>,    T6::Archived: Deserialize<T6, D>,    T5::Archived: Deserialize<T5, D>,    T4::Archived: Deserialize<T4, D>,    T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<(T11, T10, T9, T8, T7, T6, T5, T4, T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T10: Archive, T9: Archive, T8: Archive, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T10, T9, T8, T7, T6, T5, T4, T3, T2, T1, T0), D> for (T10::Archived, T9::Archived, T8::Archived, T7::Archived, T6::Archived, T5::Archived, T4::Archived, T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T10::Archived: Deserialize<T10, D>,    T9::Archived: Deserialize<T9, D>,    T8::Archived: Deserialize<T8, D>,    T7::Archived: Deserialize<T7, D>,    T6::Archived: Deserialize<T6, D>,    T5::Archived: Deserialize<T5, D>,    T4::Archived: Deserialize<T4, D>,    T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<(T10, T9, T8, T7, T6, T5, T4, T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T9: Archive, T8: Archive, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T9, T8, T7, T6, T5, T4, T3, T2, T1, T0), D> for (T9::Archived, T8::Archived, T7::Archived, T6::Archived, T5::Archived, T4::Archived, T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T9::Archived: Deserialize<T9, D>,    T8::Archived: Deserialize<T8, D>,    T7::Archived: Deserialize<T7, D>,    T6::Archived: Deserialize<T6, D>,    T5::Archived: Deserialize<T5, D>,    T4::Archived: Deserialize<T4, D>,    T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<(T9, T8, T7, T6, T5, T4, T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T8: Archive, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T8, T7, T6, T5, T4, T3, T2, T1, T0), D> for (T8::Archived, T7::Archived, T6::Archived, T5::Archived, T4::Archived, T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T8::Archived: Deserialize<T8, D>,    T7::Archived: Deserialize<T7, D>,    T6::Archived: Deserialize<T6, D>,    T5::Archived: Deserialize<T5, D>,    T4::Archived: Deserialize<T4, D>,    T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<(T8, T7, T6, T5, T4, T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T7, T6, T5, T4, T3, T2, T1, T0), D> for (T7::Archived, T6::Archived, T5::Archived, T4::Archived, T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T7::Archived: Deserialize<T7, D>,    T6::Archived: Deserialize<T6, D>,    T5::Archived: Deserialize<T5, D>,    T4::Archived: Deserialize<T4, D>,    
T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<(T7, T6, T5, T4, T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T6, T5, T4, T3, T2, T1, T0), D> for (T6::Archived, T5::Archived, T4::Archived, T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T6::Archived: Deserialize<T6, D>,    T5::Archived: Deserialize<T5, D>,    T4::Archived: Deserialize<T4, D>,    T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<(T6, T5, T4, T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T5, T4, T3, T2, T1, T0), D> for (T5::Archived, T4::Archived, T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T5::Archived: Deserialize<T5, D>,    T4::Archived: Deserialize<T4, D>,    T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<(T5, T4, T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T4, T3, T2, T1, T0), D> for (T4::Archived, T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T4::Archived: Deserialize<T4, D>,    T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<(T4, T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T3, T2, T1, T0), D> for (T3::Archived, T2::Archived, T1::Archived, T0::Archived) where    T3::Archived: Deserialize<T3, D>,    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(&self, deserializer: &mutD) -> Result<(T3, T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T2: Archive, T1: Archive, T0: Archive> Deserialize<(T2, T1, T0), D> for (T2::Archived, T1::Archived, T0::Archived) where    T2::Archived: Deserialize<T2, D>,    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(&self, deserializer: &mutD) -> Result<(T2, T1, T0), D::Error### impl<D: Fallible + ?Sized, T1: Archive, T0: Archive> Deserialize<(T1, T0), D> for (T1::Archived, T0::Archived) where    T1::Archived: Deserialize<T1, D>,    T0::Archived: Deserialize<T0, D>, #### fn deserialize(&self, deserializer: &mutD) -> Result<(T1, T0), D::Error### impl<D: Fallible + ?Sized, T0: Archive> Deserialize<(T0,), D> for (T0::Archived,) where    T0::Archived: Deserialize<T0, D>, #### fn deserialize(&self, deserializer: &mutD) -> Result<(T0,), D::Error### impl<T: Archive, D: Fallible + ?Sized, const N: usize> Deserialize<[T; N], D> for [T::Archived; N] where    T::Archived: Deserialize<T, D>, #### fn deserialize(&self, deserializer: &mutD) -> Result<[T; N], D::Error### impl<D: Fallible + ?Sized> Deserialize<BigEndian<AtomicI16>, D> for i16_be #### fn deserialize(&self, _: &mutD) -> Result<AtomicI16_be, D::Error### 
impl<D: Fallible + ?Sized> Deserialize<BigEndian<AtomicI32>, D> for i32_be #### fn deserialize(&self, _: &mutD) -> Result<AtomicI32_be, D::Error### impl<D: Fallible + ?Sized> Deserialize<BigEndian<AtomicI64>, D> for i64_be #### fn deserialize(&self, _: &mutD) -> Result<AtomicI64_be, D::Error### impl<D: Fallible + ?Sized> Deserialize<BigEndian<AtomicU16>, D> for u16_be #### fn deserialize(&self, _: &mutD) -> Result<AtomicU16_be, D::Error### impl<D: Fallible + ?Sized> Deserialize<BigEndian<AtomicU32>, D> for u32_be #### fn deserialize(&self, _: &mutD) -> Result<AtomicU32_be, D::Error### impl<D: Fallible + ?Sized> Deserialize<BigEndian<AtomicU64>, D> for u64_be #### fn deserialize(&self, _: &mutD) -> Result<AtomicU64_be, D::Error### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<AtomicI16>, D> for i16_le #### fn deserialize(&self, _: &mutD) -> Result<AtomicI16_le, D::Error### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<AtomicI32>, D> for i32_le #### fn deserialize(&self, _: &mutD) -> Result<AtomicI32_le, D::Error### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<AtomicI64>, D> for i64_le #### fn deserialize(&self, _: &mutD) -> Result<AtomicI64_le, D::Error### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<AtomicU16>, D> for u16_le #### fn deserialize(&self, _: &mutD) -> Result<AtomicU16_le, D::Error### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<AtomicU32>, D> for u32_le #### fn deserialize(&self, _: &mutD) -> Result<AtomicU32_le, D::Error### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<AtomicU64>, D> for u64_le #### fn deserialize(&self, _: &mutD) -> Result<AtomicU64_le, D::ErrorImplementors --- ### impl<D: Fallible + ?Sized> Deserialize<SocketAddr, D> for ArchivedSocketAddr ### impl<D: Fallible + ?Sized> Deserialize<IpAddr, D> for ArchivedIpAddr ### impl<D: Fallible + ?Sized> Deserialize<bool, D> for Archived<bool### impl<D: Fallible + ?Sized> Deserialize<i8, D> for Archived<i8### impl<D: Fallible + ?Sized> Deserialize<isize, D> for Archived<isize### impl<D: Fallible + ?Sized> Deserialize<u8, D> for Archived<u8### impl<D: Fallible + ?Sized> Deserialize<(), D> for Archived<()### impl<D: Fallible + ?Sized> Deserialize<usize, D> for Archived<usize### impl<D: Fallible + ?Sized> Deserialize<CString, D> for Archived<CString> where    CStr: DeserializeUnsized<CStr, D>, ### impl<D: Fallible + ?Sized> Deserialize<String, D> for ArchivedString where    str: DeserializeUnsized<str, D>, ### impl<D: Fallible + ?Sized> Deserialize<NonZeroI8, D> for Archived<NonZeroI8### impl<D: Fallible + ?Sized> Deserialize<NonZeroIsize, D> for Archived<NonZeroIsize### impl<D: Fallible + ?Sized> Deserialize<NonZeroU8, D> for Archived<NonZeroU8### impl<D: Fallible + ?Sized> Deserialize<NonZeroUsize, D> for Archived<NonZeroUsize### impl<D: Fallible + ?Sized> Deserialize<AtomicBool, D> for Archived<AtomicBool### impl<D: Fallible + ?Sized> Deserialize<AtomicI8, D> for Archived<AtomicI8### impl<D: Fallible + ?Sized> Deserialize<AtomicI16, D> for Archived<AtomicI16### impl<D: Fallible + ?Sized> Deserialize<AtomicI32, D> for Archived<AtomicI32### impl<D: Fallible + ?Sized> Deserialize<AtomicI64, D> for Archived<AtomicI64### impl<D: Fallible + ?Sized> Deserialize<AtomicIsize, D> for Archived<AtomicIsize### impl<D: Fallible + ?Sized> Deserialize<AtomicU8, D> for Archived<AtomicU8### impl<D: Fallible + ?Sized> Deserialize<AtomicU16, D> for Archived<AtomicU16### impl<D: Fallible + ?Sized> Deserialize<AtomicU32, D> for Archived<AtomicU32### impl<D: Fallible + ?Sized> Deserialize<AtomicU64, D> 
for Archived<AtomicU64### impl<D: Fallible + ?Sized> Deserialize<AtomicUsize, D> for Archived<AtomicUsize### impl<D: Fallible + ?Sized> Deserialize<Duration, D> for ArchivedDuration ### impl<D: Fallible + ?Sized> Deserialize<SocketAddrV4, D> for ArchivedSocketAddrV4 ### impl<D: Fallible + ?Sized> Deserialize<SocketAddrV6, D> for ArchivedSocketAddrV6 ### impl<D: Fallible + ?Sized> Deserialize<Ipv4Addr, D> for ArchivedIpv4Addr ### impl<D: Fallible + ?Sized> Deserialize<Ipv6Addr, D> for ArchivedIpv6Addr ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<char>, D> for Archived<char_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<f32>, D> for Archived<f32_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<f64>, D> for Archived<f64_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<i16>, D> for Archived<i16_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<i32>, D> for Archived<i32_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<i64>, D> for Archived<i64_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<i128>, D> for Archived<i128_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<u16>, D> for Archived<u16_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<u32>, D> for Archived<u32_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<u64>, D> for Archived<u64_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<u128>, D> for Archived<u128_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroI16>, D> for Archived<NonZeroI16_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroI32>, D> for Archived<NonZeroI32_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroI64>, D> for Archived<NonZeroI64_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroI128>, D> for Archived<NonZeroI128_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroU16>, D> for Archived<NonZeroU16_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroU32>, D> for Archived<NonZeroU32_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroU64>, D> for Archived<NonZeroU64_be### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroU128>, D> for Archived<NonZeroU128_be### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<char>, D> for Archived<char_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<f32>, D> for Archived<f32_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<f64>, D> for Archived<f64_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<i16>, D> for Archived<i16_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<i32>, D> for Archived<i32_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<i64>, D> for Archived<i64_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<i128>, D> for Archived<i128_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<u16>, D> for Archived<u16_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<u32>, D> for Archived<u32_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<u64>, D> for Archived<u64_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<u128>, D> for Archived<u128_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroI16>, D> for Archived<NonZeroI16_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroI32>, D> for Archived<NonZeroI32_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroI64>, D> for Archived<NonZeroI64_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroI128>, D> for Archived<NonZeroI128_le### impl<D: Fallible + ?Sized> 
Deserialize<LittleEndian<NonZeroU16>, D> for Archived<NonZeroU16_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroU32>, D> for Archived<NonZeroU32_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroU64>, D> for Archived<NonZeroU64_le### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroU128>, D> for Archived<NonZeroU128_le### impl<F, W, T, D> Deserialize<With<T, W>, D> for F where    F: ?Sized,    W: DeserializeWith<F, T, D>,    D: Fallible + ?Sized, ### impl<K, D> Deserialize<BTreeSet<K, Global>, D> for ArchivedBTreeSet<K::Archived> where    K: Archive + Ord,    K::Archived: Deserialize<K, D> + Ord,    D: Fallible + ?Sized, ### impl<K, D, S> Deserialize<HashSet<K, S>, D> for ArchivedHashSet<K::Archived> where    K: Archive + Hash + Eq,    K::Archived: Deserialize<K, D> + Hash + Eq,    D: Fallible + ?Sized,    S: Default + BuildHasher, ### impl<K, D, S> Deserialize<HashSet<K, S, Global>, D> for ArchivedHashSet<K::Archived> where    K: Archive + Hash + Eq,    K::Archived: Deserialize<K, D> + Hash + Eq,    D: Fallible + ?Sized,    S: Default + BuildHasher, ### impl<K: Archive + Ord, V: Archive, D: Fallible + ?Sized> Deserialize<BTreeMap<K, V, Global>, D> for ArchivedBTreeMap<K::Archived, V::Archived> where    K::Archived: Deserialize<K, D> + Ord,    V::Archived: Deserialize<V, D>, ### impl<K: Archive + Hash + Eq, V: Archive, D: Fallible + ?Sized, S: Default + BuildHasher> Deserialize<HashMap<K, V, S>, D> for ArchivedHashMap<K::Archived, V::Archived> where    K::Archived: Deserialize<K, D> + Hash + Eq,    V::Archived: Deserialize<V, D>, ### impl<K: Archive + Hash + Eq, V: Archive, D: Fallible + ?Sized, S: Default + BuildHasher> Deserialize<HashMap<K, V, S, Global>, D> for ArchivedHashMap<K::Archived, V::Archived> where    K::Archived: Deserialize<K, D> + Hash + Eq,    V::Archived: Deserialize<V, D>, ### impl<T, D> Deserialize<Box<T, Global>, D> for ArchivedBox<T::Archived> where    T: ArchiveUnsized + ?Sized,    T::Archived: DeserializeUnsized<T, D>,    D: Fallible + ?Sized, ### impl<T, D> Deserialize<Rc<T>, D> for ArchivedRc<T::Archived, RcFlavor> where    T: ArchiveUnsized + ?Sized + 'static,    T::Archived: DeserializeUnsized<T, D>,    D: SharedDeserializeRegistry + ?Sized, ### impl<T, D> Deserialize<Weak<T>, D> for ArchivedRcWeak<T::Archived, RcFlavor> where    T: Archive + 'static,    T::Archived: DeserializeUnsized<T, D>,    D: SharedDeserializeRegistry + ?Sized, ### impl<T, D> Deserialize<Weak<T>, D> for ArchivedRcWeak<T::Archived, ArcFlavor> where    T: Archive + 'static,    T::Archived: DeserializeUnsized<T, D>,    D: SharedDeserializeRegistry + ?Sized, ### impl<T, D> Deserialize<RangeInclusive<T>, D> for Archived<RangeInclusive<T>> where    T: Archive,    T::Archived: Deserialize<T, D>,    D: Fallible + ?Sized, ### impl<T, D> Deserialize<RangeToInclusive<T>, D> for Archived<RangeToInclusive<T>> where    T: Archive,    T::Archived: Deserialize<T, D>,    D: Fallible + ?Sized, ### impl<T, E, D> Deserialize<Result<T, E>, D> for ArchivedResult<T::Archived, E::Archived> where    T: Archive,    E: Archive,    D: Fallible + ?Sized,    T::Archived: Deserialize<T, D>,    E::Archived: Deserialize<E, D>, ### impl<T: Archive, D: Fallible + ?Sized> Deserialize<Option<T>, D> for ArchivedOption<T::Archived> where    T::Archived: Deserialize<T, D>, ### impl<T: Archive, D: Fallible + ?Sized> Deserialize<Vec<T, Global>, D> for ArchivedVec<T::Archived> where    [T::Archived]: DeserializeUnsized<[T], D>, ### impl<T: Archive, D: Fallible + ?Sized> 
Deserialize<Range<T>, D> for Archived<Range<T>> where T::Archived: Deserialize<T, D>,

### impl<T: Archive, D: Fallible + ?Sized> Deserialize<RangeFrom<T>, D> for Archived<RangeFrom<T>> where T::Archived: Deserialize<T, D>,

### impl<T: Archive, D: Fallible + ?Sized> Deserialize<RangeTo<T>, D> for Archived<RangeTo<T>> where T::Archived: Deserialize<T, D>,

### impl<T: ArchiveUnsized + ?Sized + 'static, D: SharedDeserializeRegistry + ?Sized> Deserialize<Arc<T>, D> for ArchivedRc<T::Archived, ArcFlavor> where T::Archived: DeserializeUnsized<T, D>,

Trait rkyv_test::Archive
===

```
pub trait Archive {
    type Archived;
    type Resolver;

    unsafe fn resolve(
        &self,
        pos: usize,
        resolver: Self::Resolver,
        out: *mut Self::Archived
    );
}
```

A type that can be used without deserializing.

`Archive` is one of three basic traits used to work with zero-copy data and controls the layout of the data in its archived zero-copy representation. The `Serialize` trait helps transform types into that representation, and the `Deserialize` trait helps transform types back out.

Types that implement `Archive` must have a well-defined archived size. Unsized types can be supported using the `ArchiveUnsized` trait, along with `SerializeUnsized` and `DeserializeUnsized`.

Archiving is done depth-first, writing any data owned by a type before writing the data for the type itself. The type must be able to create the archived type from only its own data and its resolver. Archived data is always treated as if it is tree-shaped, with the root owning its direct descendents and so on. Data that is not tree-shaped can be supported using special serializer and deserializer bounds (see `ArchivedRc` for example).

In a buffer of serialized data, objects are laid out in *reverse order*. This means that the root object is located near the end of the buffer and leaf objects are located near the beginning.

Examples
---

Most of the time, `#[derive(Archive)]` will create an acceptable implementation. You can use the `#[archive(...)]` and `#[archive_attr(...)]` attributes to control how the implementation is generated. See the `Archive` derive macro for more details.
```
use rkyv::{Archive, Deserialize, Serialize};

#[derive(Archive, Deserialize, Serialize, Debug, PartialEq)]
// This will generate a PartialEq impl between our unarchived and archived types
#[archive(compare(PartialEq))]
// We can pass attributes through to generated types with archive_attr
#[archive_attr(derive(Debug))]
struct Test {
    int: u8,
    string: String,
    option: Option<Vec<i32>>,
}

let value = Test {
    int: 42,
    string: "hello world".to_string(),
    option: Some(vec![1, 2, 3, 4]),
};

// Serializing is as easy as a single function call
let bytes = rkyv::to_bytes::<_, 256>(&value).unwrap();

// Or you can customize your serialization for better performance
// and compatibility with #![no_std] environments
use rkyv::ser::{Serializer, serializers::AllocSerializer};

let mut serializer = AllocSerializer::<0>::default();
serializer.serialize_value(&value).unwrap();
let bytes = serializer.into_serializer().into_inner();

// You can use the safe API with the validation feature turned on,
// or you can use the unsafe API (shown here) for maximum performance
let archived = unsafe { rkyv::archived_root::<Test>(&bytes[..]) };
assert_eq!(archived, &value);

// And you can always deserialize back to the original type
let deserialized: Test = archived.deserialize(&mut rkyv::Infallible).unwrap();
assert_eq!(deserialized, value);
```

*Note: the safe API requires the `validation` feature.*

Many of the core and standard library types already have `Archive` implementations available, but you may need to implement `Archive` for your own types in cases that the derive macro cannot handle.

In this example, we add our own wrapper that serializes a `&'static str` as if it's owned. Normally you can lean on the archived version of `String` to do most of the work, or use the `Inline` wrapper to do exactly this. This example does everything manually to demonstrate how to implement `Archive` for your own types.

```
use core::{slice, str};
use rkyv::{
    archived_root,
    ser::{Serializer, serializers::AlignedSerializer},
    out_field,
    AlignedVec,
    Archive,
    Archived,
    ArchiveUnsized,
    MetadataResolver,
    RelPtr,
    Serialize,
    SerializeUnsized,
};

struct OwnedStr {
    inner: &'static str,
}

struct ArchivedOwnedStr {
    // This will be a relative pointer to our string
    ptr: RelPtr<str>,
}

impl ArchivedOwnedStr {
    // This will help us get the bytes of our type as a str again.
    fn as_str(&self) -> &str {
        unsafe {
            // The as_ptr() function of RelPtr will get a pointer to the str
            &*self.ptr.as_ptr()
        }
    }
}

struct OwnedStrResolver {
    // This will be the position that the bytes of our string are stored at.
    // We'll use this to resolve the relative pointer of our
    // ArchivedOwnedStr.
    pos: usize,
    // The archived metadata for our str may also need a resolver.
    metadata_resolver: MetadataResolver<str>,
}

// The Archive implementation defines the archived version of our type and
// determines how to turn the resolver into the archived form. The Serialize
// implementations determine how to make a resolver from the original value.
impl Archive for OwnedStr {
    type Archived = ArchivedOwnedStr;
    // This is the resolver we can create our Archived version from.
    type Resolver = OwnedStrResolver;

    // The resolve function consumes the resolver and produces the archived
    // value at the given position.
    unsafe fn resolve(
        &self,
        pos: usize,
        resolver: Self::Resolver,
        out: *mut Self::Archived,
    ) {
        // We have to be careful to add the offset of the ptr field,
        // otherwise we'll be using the position of the ArchivedOwnedStr
        // instead of the position of the relative pointer.
        let (fp, fo) = out_field!(out.ptr);
        self.inner.resolve_unsized(
            pos + fp,
            resolver.pos,
            resolver.metadata_resolver,
            fo,
        );
    }
}

// We restrict our serializer types with Serializer because we need its
// capabilities to archive our type. For other types, we might need more or
// less restrictive bounds on the type of S.
impl<S: Serializer + ?Sized> Serialize<S> for OwnedStr {
    fn serialize(
        &self,
        serializer: &mut S
    ) -> Result<Self::Resolver, S::Error> {
        // This is where we want to write the bytes of our string and return
        // a resolver that knows where those bytes were written.
        // We also need to serialize the metadata for our str.
        Ok(OwnedStrResolver {
            pos: self.inner.serialize_unsized(serializer)?,
            metadata_resolver: self.inner.serialize_metadata(serializer)?
        })
    }
}

let mut serializer = AlignedSerializer::new(AlignedVec::new());
const STR_VAL: &'static str = "I'm in an OwnedStr!";
let value = OwnedStr { inner: STR_VAL };
// It works!
serializer.serialize_value(&value).expect("failed to archive test");
let buf = serializer.into_inner();
let archived = unsafe { archived_root::<OwnedStr>(buf.as_ref()) };
// Let's make sure our data got written correctly
assert_eq!(archived.as_str(), STR_VAL);
```

Required Associated Types
---

#### type Archived

The archived representation of this type.

In this form, the data can be used with zero-copy deserialization.

#### type Resolver

The resolver for this type.

It must contain all the additional information from serializing needed to make the archived type from the normal type.

Required Methods
---

#### unsafe fn resolve(&self, pos: usize, resolver: Self::Resolver, out: *mut Self::Archived)

Creates the archived version of this value at the given position and writes it to the given output.

The output should be initialized field-by-field rather than by writing a whole struct. Performing a typed copy will mark all of the padding bytes as uninitialized, but they must remain set to the value they currently have. This prevents leaking uninitialized memory to the final archive.
##### Safety * `pos` must be the position of `out` within the archive * `resolver` must be the result of serializing this object Implementations on Foreign Types --- ### impl<T: ArchiveUnsized + ?Sized> Archive for Box<T#### type Archived = ArchivedBox<<T as ArchiveUnsized>::Archived#### type Resolver = BoxResolver<<T as ArchiveUnsized>::MetadataResolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<K: Archive + Ord, V: Archive> Archive for BTreeMap<K, V> where    K::Archived: Ord, #### type Archived = ArchivedBTreeMap<<K as Archive>::Archived, <V as Archive>::Archived#### type Resolver = BTreeMapResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<K: Archive + Ord> Archive for BTreeSet<K> where    K::Archived: Ord, #### type Archived = ArchivedBTreeSet<<K as Archive>::Archived#### type Resolver = BTreeSetResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: ArchiveUnsized + ?Sized> Archive for Rc<T#### type Archived = ArchivedRc<<T as ArchiveUnsized>::Archived, RcFlavor#### type Resolver = RcResolver<<T as ArchiveUnsized>::MetadataResolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: ArchiveUnsized + ?Sized> Archive for Weak<T#### type Archived = ArchivedRcWeak<<T as ArchiveUnsized>::Archived, RcFlavor#### type Resolver = RcWeakResolver<<T as ArchiveUnsized>::MetadataResolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: ArchiveUnsized + ?Sized> Archive for Arc<T#### type Archived = ArchivedRc<<T as ArchiveUnsized>::Archived, ArcFlavor#### type Resolver = RcResolver<<T as ArchiveUnsized>::MetadataResolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: ArchiveUnsized + ?Sized> Archive for Weak<T#### type Archived = ArchivedRcWeak<<T as ArchiveUnsized>::Archived, ArcFlavor#### type Resolver = RcWeakResolver<<T as ArchiveUnsized>::MetadataResolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl Archive for String #### type Archived = ArchivedString #### type Resolver = StringResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: Archive> Archive for Vec<T#### type Archived = ArchivedVec<<T as Archive>::Archived#### type Resolver = VecResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl Archive for RangeFull #### type Archived = RangeFull #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, _: *mutSelf::Archived) ### impl<T: Archive> Archive for Range<T#### type Archived = ArchivedRange<<T as Archive>::Archived#### type Resolver = Range<<T as Archive>::Resolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: Archive> Archive for RangeInclusive<T#### type Archived = ArchivedRangeInclusive<<T as Archive>::Archived#### type Resolver = Range<<T as Archive>::Resolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: Archive> Archive for RangeFrom<T#### type Archived = ArchivedRangeFrom<<T as 
Archive>::Archived#### type Resolver = RangeFrom<<T as Archive>::Resolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: Archive> Archive for RangeTo<T#### type Archived = ArchivedRangeTo<<T as Archive>::Archived#### type Resolver = RangeTo<<T as Archive>::Resolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: Archive> Archive for RangeToInclusive<T#### type Archived = ArchivedRangeToInclusive<<T as Archive>::Archived#### type Resolver = RangeToInclusive<<T as Archive>::Resolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: Archive> Archive for Option<T#### type Archived = ArchivedOption<<T as Archive>::Archived#### type Resolver = Option<<T as Archive>::Resolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl Archive for () #### type Archived = () #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for bool #### type Archived = bool #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i8 #### type Archived = i8 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u8 #### type Archived = u8 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI8 #### type Archived = NonZeroI8 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU8 #### type Archived = NonZeroU8 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicBool #### type Archived = bool #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI8 #### type Archived = i8 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU8 #### type Archived = u8 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i16 #### type Archived = i16 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i32 #### type Archived = i32 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i64 #### type Archived = i64 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i128 #### type Archived = i128 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u16 #### type Archived = u16 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u32 #### type Archived = u32 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u64 #### type Archived = u64 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) 
### impl Archive for u128 #### type Archived = u128 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for f32 #### type Archived = f32 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for f64 #### type Archived = f64 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for char #### type Archived = char #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI16 #### type Archived = NonZeroI16 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI32 #### type Archived = NonZeroI32 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI64 #### type Archived = NonZeroI64 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI128 #### type Archived = NonZeroI128 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU16 #### type Archived = NonZeroU16 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU32 #### type Archived = NonZeroU32 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU64 #### type Archived = NonZeroU64 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU128 #### type Archived = NonZeroU128 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI16 #### type Archived = i16 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI32 #### type Archived = i32 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI64 #### type Archived = i64 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU16 #### type Archived = u16 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU32 #### type Archived = u32 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU64 #### type Archived = u64 #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl<T: ?Sized> Archive for PhantomData<T#### type Archived = PhantomData<T#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, _: *mutSelf::Archived) ### impl Archive for PhantomPinned #### type Archived = PhantomPinned #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, _: *mutSelf::Archived) ### impl Archive for usize #### type Archived = <u32 as Archive>::Archived #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl 
Archive for isize #### type Archived = <i32 as Archive>::Archived #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroUsize #### type Archived = <NonZeroU32 as Archive>::Archived #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroIsize #### type Archived = <NonZeroI32 as Archive>::Archived #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicUsize #### type Archived = <u32 as Archive>::Archived #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicIsize #### type Archived = <i32 as Archive>::Archived #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl<T: Archive, E: Archive> Archive for Result<T, E#### type Archived = ArchivedResult<<T as Archive>::Archived, <E as Archive>::Archived#### type Resolver = Result<<T as Archive>::Resolver, <E as Archive>::Resolver#### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl Archive for Duration #### type Archived = ArchivedDuration #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl<T11: Archive, T10: Archive, T9: Archive, T8: Archive, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Archive for (T11, T10, T9, T8, T7, T6, T5, T4, T3, T2, T1, T0) #### type Archived = (<T11 as Archive>::Archived, <T10 as Archive>::Archived, <T9 as Archive>::Archived, <T8 as Archive>::Archived, <T7 as Archive>::Archived, <T6 as Archive>::Archived, <T5 as Archive>::Archived, <T4 as Archive>::Archived, <T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T11 as Archive>::Resolver, <T10 as Archive>::Resolver, <T9 as Archive>::Resolver, <T8 as Archive>::Resolver, <T7 as Archive>::Resolver, <T6 as Archive>::Resolver, <T5 as Archive>::Resolver, <T4 as Archive>::Resolver, <T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T10: Archive, T9: Archive, T8: Archive, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Archive for (T10, T9, T8, T7, T6, T5, T4, T3, T2, T1, T0) #### type Archived = (<T10 as Archive>::Archived, <T9 as Archive>::Archived, <T8 as Archive>::Archived, <T7 as Archive>::Archived, <T6 as Archive>::Archived, <T5 as Archive>::Archived, <T4 as Archive>::Archived, <T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T10 as Archive>::Resolver, <T9 as Archive>::Resolver, <T8 as Archive>::Resolver, <T7 as Archive>::Resolver, <T6 as Archive>::Resolver, <T5 as Archive>::Resolver, <T4 as Archive>::Resolver, <T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T9: Archive, T8: Archive, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: 
Archive, T0: Archive> Archive for (T9, T8, T7, T6, T5, T4, T3, T2, T1, T0) #### type Archived = (<T9 as Archive>::Archived, <T8 as Archive>::Archived, <T7 as Archive>::Archived, <T6 as Archive>::Archived, <T5 as Archive>::Archived, <T4 as Archive>::Archived, <T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T9 as Archive>::Resolver, <T8 as Archive>::Resolver, <T7 as Archive>::Resolver, <T6 as Archive>::Resolver, <T5 as Archive>::Resolver, <T4 as Archive>::Resolver, <T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T8: Archive, T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Archive for (T8, T7, T6, T5, T4, T3, T2, T1, T0) #### type Archived = (<T8 as Archive>::Archived, <T7 as Archive>::Archived, <T6 as Archive>::Archived, <T5 as Archive>::Archived, <T4 as Archive>::Archived, <T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T8 as Archive>::Resolver, <T7 as Archive>::Resolver, <T6 as Archive>::Resolver, <T5 as Archive>::Resolver, <T4 as Archive>::Resolver, <T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T7: Archive, T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Archive for (T7, T6, T5, T4, T3, T2, T1, T0) #### type Archived = (<T7 as Archive>::Archived, <T6 as Archive>::Archived, <T5 as Archive>::Archived, <T4 as Archive>::Archived, <T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T7 as Archive>::Resolver, <T6 as Archive>::Resolver, <T5 as Archive>::Resolver, <T4 as Archive>::Resolver, <T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T6: Archive, T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Archive for (T6, T5, T4, T3, T2, T1, T0) #### type Archived = (<T6 as Archive>::Archived, <T5 as Archive>::Archived, <T4 as Archive>::Archived, <T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T6 as Archive>::Resolver, <T5 as Archive>::Resolver, <T4 as Archive>::Resolver, <T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T5: Archive, T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Archive for (T5, T4, T3, T2, T1, T0) #### type Archived = (<T5 as Archive>::Archived, <T4 as Archive>::Archived, <T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T5 as Archive>::Resolver, <T4 as Archive>::Resolver, <T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver, 
   out: *mutSelf::Archived) ### impl<T4: Archive, T3: Archive, T2: Archive, T1: Archive, T0: Archive> Archive for (T4, T3, T2, T1, T0) #### type Archived = (<T4 as Archive>::Archived, <T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T4 as Archive>::Resolver, <T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T3: Archive, T2: Archive, T1: Archive, T0: Archive> Archive for (T3, T2, T1, T0) #### type Archived = (<T3 as Archive>::Archived, <T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T3 as Archive>::Resolver, <T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T2: Archive, T1: Archive, T0: Archive> Archive for (T2, T1, T0) #### type Archived = (<T2 as Archive>::Archived, <T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T2 as Archive>::Resolver, <T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T1: Archive, T0: Archive> Archive for (T1, T0) #### type Archived = (<T1 as Archive>::Archived, <T0 as Archive>::Archived) #### type Resolver = (<T1 as Archive>::Resolver, <T0 as Archive>::Resolver) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T0: Archive> Archive for (T0,) #### type Archived = (<T0 as Archive>::Archived,) #### type Resolver = (<T0 as Archive>::Resolver,) #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<T: Archive, const N: usize> Archive for [T; N] #### type Archived = [<T as Archive>::Archived; N] #### type Resolver = [<T as Archive>::Resolver; N] #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl Archive for i16_be #### type Archived = BigEndian<i16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i32_be #### type Archived = BigEndian<i32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i64_be #### type Archived = BigEndian<i64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i128_be #### type Archived = BigEndian<i128#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u16_be #### type Archived = BigEndian<u16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u32_be #### type Archived = BigEndian<u32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u64_be #### type Archived = BigEndian<u64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u128_be #### type Archived = BigEndian<u128#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: 
Self::Resolver, out: *mutSelf::Archived) ### impl Archive for f32_be #### type Archived = BigEndian<f32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for f64_be #### type Archived = BigEndian<f64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for char_be #### type Archived = BigEndian<char#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI16_be #### type Archived = BigEndian<NonZeroI16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI32_be #### type Archived = BigEndian<NonZeroI32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI64_be #### type Archived = BigEndian<NonZeroI64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI128_be #### type Archived = BigEndian<NonZeroI128#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU16_be #### type Archived = BigEndian<NonZeroU16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU32_be #### type Archived = BigEndian<NonZeroU32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU64_be #### type Archived = BigEndian<NonZeroU64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU128_be #### type Archived = BigEndian<NonZeroU128#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI16_be #### type Archived = BigEndian<i16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI32_be #### type Archived = BigEndian<i32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI64_be #### type Archived = BigEndian<i64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU16_be #### type Archived = BigEndian<u16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU32_be #### type Archived = BigEndian<u32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU64_be #### type Archived = BigEndian<u64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i16_le #### type Archived = LittleEndian<i16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i32_le #### type Archived = LittleEndian<i32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i64_le #### type Archived = LittleEndian<i64#### type Resolver = () #### unsafe fn resolve(&self, _: 
usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for i128_le #### type Archived = LittleEndian<i128#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u16_le #### type Archived = LittleEndian<u16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u32_le #### type Archived = LittleEndian<u32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u64_le #### type Archived = LittleEndian<u64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for u128_le #### type Archived = LittleEndian<u128#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for f32_le #### type Archived = LittleEndian<f32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for f64_le #### type Archived = LittleEndian<f64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for char_le #### type Archived = LittleEndian<char#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI16_le #### type Archived = LittleEndian<NonZeroI16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI32_le #### type Archived = LittleEndian<NonZeroI32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI64_le #### type Archived = LittleEndian<NonZeroI64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroI128_le #### type Archived = LittleEndian<NonZeroI128#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU16_le #### type Archived = LittleEndian<NonZeroU16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU32_le #### type Archived = LittleEndian<NonZeroU32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU64_le #### type Archived = LittleEndian<NonZeroU64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for NonZeroU128_le #### type Archived = LittleEndian<NonZeroU128#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI16_le #### type Archived = LittleEndian<i16#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI32_le #### type Archived = LittleEndian<i32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicI64_le #### type Archived = LittleEndian<i64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU16_le #### type Archived = LittleEndian<u16#### 
type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU32_le #### type Archived = LittleEndian<u32#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for AtomicU64_le #### type Archived = LittleEndian<u64#### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl<K: Archive + Hash + Eq, V: Archive, S> Archive for HashMap<K, V, S> where    K::Archived: Hash + Eq, #### type Archived = ArchivedHashMap<<K as Archive>::Archived, <V as Archive>::Archived#### type Resolver = HashMapResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<K: Archive + Hash + Eq, S> Archive for HashSet<K, S> where    K::Archived: Hash + Eq, #### type Archived = ArchivedHashSet<<K as Archive>::Archived#### type Resolver = HashSetResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl Archive for CString #### type Archived = ArchivedCString #### type Resolver = CStringResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl Archive for Ipv4Addr #### type Archived = ArchivedIpv4Addr #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for Ipv6Addr #### type Archived = ArchivedIpv6Addr #### type Resolver = () #### unsafe fn resolve(&self, _: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for IpAddr #### type Archived = ArchivedIpAddr #### type Resolver = () #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl Archive for SocketAddrV4 #### type Archived = ArchivedSocketAddrV4 #### type Resolver = () #### unsafe fn resolve(&self, pos: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for SocketAddrV6 #### type Archived = ArchivedSocketAddrV6 #### type Resolver = () #### unsafe fn resolve(&self, pos: usize, _: Self::Resolver, out: *mutSelf::Archived) ### impl Archive for SocketAddr #### type Archived = ArchivedSocketAddr #### type Resolver = () #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<K: Archive + Hash + Eq, V: Archive, S> Archive for HashMap<K, V, S> where    K::Archived: Hash + Eq, #### type Archived = ArchivedHashMap<<K as Archive>::Archived, <V as Archive>::Archived#### type Resolver = HashMapResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) ### impl<K: Archive + Hash + Eq, S> Archive for HashSet<K, S> where    K::Archived: Hash + Eq, #### type Archived = ArchivedHashSet<<K as Archive>::Archived#### type Resolver = HashSetResolver #### unsafe fn resolve(    &self,    pos: usize,    resolver: Self::Resolver,    out: *mutSelf::Archived) Implementors --- ### impl<F: ?Sized, W: ArchiveWith<F>> Archive for With<F, W#### type Archived = <W as ArchiveWith<F>>::Archived #### type Resolver = <W as ArchiveWith<F>>::Resolver ### impl<K: Archive, V: Archive> Archive for Entry<&K, &V#### type Archived = Entry<<K as Archive>::Archived, <V as Archive>::Archived#### type Resolver = (<K as Archive>::Resolver, <V as Archive>::Resolver) Derive Macro rkyv_test::Archive === ``` #[derive(Archive)] { // Attributes available to this 
derive: #[archive] #[archive_attr] #[omit_bounds] #[with] } ```

Derives `Archive` for the labeled type.

Attributes
---

Additional arguments can be specified using the `#[archive(...)]` and `#[archive_attr(...)]` attributes.

`#[archive(...)]` takes the following arguments:

* `archived = "..."`: Changes the name of the generated archived type to the given value. By default, archived types are named “Archived” + `the name of the type`.
* `resolver = "..."`: Changes the name of the generated resolver type to the given value. By default, resolver types are named `the name of the type` + “Resolver”.
* `repr(...)`: *Deprecated, use `#[archive_attr(repr(...))]` instead.* Sets the representation for the archived type to the given representation. Available representation options may vary depending on features and type layout.
* `compare(...)`: Implements common comparison operators between the original and archived types. Supported comparisons are `PartialEq` and `PartialOrd` (i.e. `#[archive(compare(PartialEq, PartialOrd))]`).
* `bound(...)`: Adds additional bounds to trait implementations. This can be especially useful when dealing with recursive structures, where bounds may need to be omitted to prevent recursive type definitions. Use `archive = "..."` to specify `Archive` bounds, `serialize = "..."` to specify `Serialize` bounds, and `deserialize = "..."` to specify `Deserialize` bounds.
* `copy_safe`: States that the archived type is tightly packed with no padding bytes. This qualifies it for copy optimizations. (requires nightly)
* `as = "..."`: Instead of generating a separate archived type, this type will archive as the named type. This is useful for types which are generic over their parameters.
* `crate = "..."`: Chooses an alternative crate path to import rkyv from.

`#[archive_attr(...)]` adds the attributes passed as arguments as attributes to the generated type. This is commonly used with attributes like `derive(...)` to derive trait implementations for the archived type.

Recursive types
---

This derive macro automatically adds a type bound `field: Archive` for each field type. This can cause an overflow while evaluating trait bounds if the structure eventually references its own type, as the implementation of `Archive` for a struct depends on each field type implementing it as well.

Adding the attribute `#[omit_bounds]` to a field will suppress this trait bound and allow recursive structures. This may be too coarse for some types, in which case additional type bounds may be required with `bound(...)`.

Wrappers
---

Wrappers transparently customize archived types by providing different implementations of core traits. For example, references cannot be archived, but the `Inline` wrapper serializes a reference as if it were a field of the struct.

Wrappers can be applied to fields using the `#[with(...)]` attribute. Multiple wrappers can be used, and they are applied in reverse order (i.e. `#[with(A, B, C)]` will archive `MyType` as `With<With<With<MyType, C>, B>, A>`).

Function rkyv_test::validation::check_archived_root_with_context
===

```
pub fn check_archived_root_with_context<'a, T, C>(
    buf: &'a [u8],
    context: &mut C
) -> Result<&'a T::Archived, CheckTypeError<T::Archived, C>>
where
    T: Archive,
    T::Archived: CheckBytes<C> + Pointee<Metadata = ()>,
    C: ArchiveContext + ?Sized,
```

Checks the given archive with an additional context.

See `check_archived_value` for more details.
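For orientation, here is a minimal sketch of the related root-level entry point `check_archived_root` (documented below). It assumes the `validation` feature and default integer features, uses a hypothetical `Point` type, and mirrors the crate's own examples rather than quoting them:

```
use bytecheck::CheckBytes;
use rkyv::{Archive, Serialize};
use rkyv::validation::validators::check_archived_root;

// Deriving CheckBytes for the archived type is what enables the safe,
// validating access path.
#[derive(Archive, Serialize)]
#[archive_attr(derive(CheckBytes))]
struct Point {
    x: i32,
    y: i32,
}

let bytes = rkyv::to_bytes::<_, 256>(&Point { x: 1, y: 2 }).unwrap();

// Validates the whole buffer before handing out a reference to the root,
// so no unsafe block is needed here.
let archived = check_archived_root::<Point>(&bytes[..]).unwrap();
assert_eq!(archived.x, 1);
assert_eq!(archived.y, 2);
```

Compared with the unsafe `archived_root` call used in the `Archive` examples, the validating entry points reject malformed or untrusted buffers instead of exhibiting undefined behavior.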
Function rkyv_test::validation::check_archived_value_with_context
===

```
pub fn check_archived_value_with_context<'a, T, C>(
    buf: &'a [u8],
    pos: usize,
    context: &mut C
) -> Result<&'a T::Archived, CheckTypeError<T::Archived, C>>
where
    T: Archive,
    T::Archived: CheckBytes<C> + Pointee<Metadata = ()>,
    C: ArchiveContext + ?Sized,
```

Checks the given archive with an additional context.

See `check_archived_value` for more details.

Function rkyv_test::validation::validators::check_archived_root
===

```
pub fn check_archived_root<'a, T: Archive>(
    bytes: &'a [u8]
) -> Result<&'a T::Archived, CheckTypeError<T::Archived, DefaultValidator<'a>>>
where
    T::Archived: CheckBytes<DefaultValidator<'a>>,
```

Checks the given archive for an archived version of the given type at the root position. This is a safe alternative to `archived_root` for types that implement `CheckBytes`.

See `check_archived_value` for more details.

Function rkyv_test::validation::validators::check_archived_value
===

```
pub fn check_archived_value<'a, T: Archive>(
    bytes: &'a [u8],
    pos: usize
) -> Result<&'a T::Archived, CheckTypeError<T::Archived, DefaultValidator<'a>>>
where
    T::Archived: CheckBytes<DefaultValidator<'a>>,
```

Checks the given archive at the given position for an archived version of the given type. This is a safe alternative to `archived_value` for types that implement `CheckBytes`.

Examples
---

```
use rkyv::{
    check_archived_value,
    ser::{Serializer, serializers::AlignedSerializer},
    AlignedVec,
    Archive,
    Serialize,
};
use bytecheck::CheckBytes;

#[derive(Archive, Serialize)]
#[archive_attr(derive(CheckBytes))]
struct Example {
    name: String,
    value: i32,
}

let value = Example {
    name: "pi".to_string(),
    value: 31415926,
};

let mut serializer = AlignedSerializer::new(AlignedVec::new());
let pos = serializer.serialize_value(&value)
    .expect("failed to archive test");
let buf = serializer.into_inner();

let archived = check_archived_value::<Example>(buf.as_ref(), pos).unwrap();
```

Module rkyv_test::util
===

Utilities for common archive operations.

### Buffer access

Helper functions to get the root object of an archive under certain conditions.

### Alignment

Alignment helpers ensure that byte buffers are properly aligned when accessing and deserializing data.

Structs
---

* `AlignedBytes`: A buffer of bytes aligned to 16 bytes.
* `AlignedVec`: A vector of bytes that aligns its memory to 16 bytes.
* `Drain`: A draining iterator for `ScratchVec<T>`.
* `ScratchVec`: A vector view into serializer scratch space.

Functions
---

* `archived_root`⚠: Casts an archived value from the given byte slice by calculating the root position.
* `archived_root_mut`⚠: Casts a mutable archived value from the given byte slice by calculating the root position.
* `archived_unsized_root`⚠: Casts a `RelPtr` to the given unsized type from the given byte slice by calculating the root position.
* `archived_unsized_root_mut`⚠: Casts a `RelPtr` to the given unsized type from the given byte slice by calculating the root position.
* `archived_unsized_value`⚠: Casts a `RelPtr` to the given unsized type from the given byte slice at the given position and returns the value it points to.
* `archived_unsized_value_mut`⚠: Casts a mutable `RelPtr` to the given unsized type from the given byte slice at the given position and returns the value it points to.
* `archived_value`⚠: Casts an archived value from the given byte slice at the given position.
* `archived_value_mut`⚠: Casts a mutable archived value from the given byte slice at the given position.
from_bytes_unchecked⚠Deserializes a value from the given bytes. to_bytesSerializes the given value and returns the resulting bytes. Module rkyv_test::boxed === An archived version of `Box`. Structs --- ArchivedBoxAn archived `Box`. BoxResolverThe resolver for `Box`. Module rkyv_test::collections === Archived versions of standard library containers. Re-exports --- `pub use self::btree_map::ArchivedBTreeMap;``pub use self::hash_index::ArchivedHashIndex;``pub use self::hash_map::ArchivedHashMap;``pub use self::hash_set::ArchivedHashSet;``pub use self::index_map::ArchivedIndexMap;``pub use self::index_set::ArchivedIndexSet;`Modules --- btree_map`Archive` implementation for B-tree maps. btree_set`Archive` implementation for B-tree sets. hash_indexA helper type that archives index data for hashed collections using compress, hash and displace. hash_mapArchived hash map implementation. hash_setArchived hash set implementation. index_mapArchived index map implementation. index_setArchived index set implementation. utilUtilities for archived collections. Module rkyv_test::de === Deserialization traits, deserializers, and adapters. Modules --- deserializersDeserializers that can be used standalone and provide basic capabilities. Traits --- SharedDeserializeRegistryA registry that tracks deserialized shared memory. SharedPointerA deserializable shared pointer type. Module rkyv_test::ffi === Archived versions of FFI types. Structs --- ArchivedCStringAn archived `CString`. CStringResolverThe resolver for `CString`. Module rkyv_test::net === Archived versions of network types. Structs --- ArchivedIpv4AddrAn archived `Ipv4Addr`. ArchivedIpv6AddrAn archived `Ipv6Addr`. ArchivedSocketAddrV4An archived `SocketAddrV4`. ArchivedSocketAddrV6An archived `SocketAddrV6`. Enums --- ArchivedIpAddrAn archived `IpAddr`. ArchivedSocketAddrAn archived `SocketAddr`. Module rkyv_test::niche === Manually niched type replacements. Modules --- option_boxA niched archived `Option<Box<T>>` that uses less space. option_nonzeroNiched archived `Option<NonZero>` integers that use less space. Module rkyv_test::ops === Archived versions of `ops` types. Structs --- ArchivedRangeAn archived `Range`. ArchivedRangeFromAn archived `RangeFrom`. ArchivedRangeInclusiveAn archived `RangeInclusive`. ArchivedRangeToAn archived `RangeTo`. ArchivedRangeToInclusiveAn archived `RangeToInclusive`. Module rkyv_test::option === An archived version of `Option`. Structs --- IterAn iterator over a reference to the `Some` variant of an `ArchivedOption`. IterMutAn iterator over a mutable reference to the `Some` variant of an `ArchivedOption`. Enums --- ArchivedOptionAn archived `Option`. Module rkyv_test::rc === Archived versions of shared pointers. Modules --- validationValidation implementations for shared pointers. Structs --- ArchivedRcAn archived `Rc`. RcResolverThe resolver for `Rc`. Enums --- ArchivedRcWeakAn archived `rc::Weak`. RcWeakResolverThe resolver for `rc::Weak`. Module rkyv_test::rel_ptr === Relative pointer implementations and options. Structs --- RawRelPtrAn untyped pointer which resolves relative to its position in memory. RelPtrA pointer which resolves to relative to its position in memory. Enums --- OffsetErrorAn error where the distance between two positions cannot be represented by the offset type. RelPtrErrorErrors that can occur while creating raw relative pointers. Traits --- OffsetA offset that can be used with `RawRelPtr`. Functions --- signed_offsetCalculates the offset between two positions as an `isize`. 
Type Definitions --- RawRelPtrI8A raw relative pointer that uses an archived `i8` as the underlying offset. RawRelPtrI16A raw relative pointer that uses an archived `i16` as the underlying offset. RawRelPtrI32A raw relative pointer that uses an archived `i32` as the underlying offset. RawRelPtrI64A raw relative pointer that uses an archived `i64` as the underlying offset. RawRelPtrU8A raw relative pointer that uses an archived `u8` as the underlying offset. RawRelPtrU16A raw relative pointer that uses an archived `u16` as the underlying offset. RawRelPtrU32A raw relative pointer that uses an archived `u32` as the underlying offset. RawRelPtrU64A raw relative pointer that uses an archived `u64` as the underlying offset. Module rkyv_test::result === An archived version of `Result`. Structs --- IterAn iterator over a reference to the `Ok` variant of an `ArchivedResult`. IterMutAn iterator over a mutable reference to the `Ok` variant of an `ArchivedResult`. Enums --- ArchivedResultAn archived `Result` that represents either success (`Ok`) or failure (`Err`). Module rkyv_test::ser === Serialization traits, serializers, and adapters. Modules --- serializersSerializers that can be used standalone and provide basic capabilities. Traits --- ScratchSpaceA serializer that can allocate scratch space. SerializerA byte sink that knows where it is. SharedSerializeRegistryA registry that tracks serialized shared memory. Module rkyv_test::string === Archived versions of string types. Modules --- reprAn archived string representation that supports inlining short strings. Structs --- ArchivedStringAn archived `String`. StringResolverThe resolver for `String`. Module rkyv_test::time === Archived versions of `time` types. Structs --- ArchivedDurationAn archived `Duration`. Module rkyv_test::validation === Validation implementations and helper types. Modules --- ownedCommon validation utilities for owned containers (`Box`, `String`, `Vec`, etc.). validatorsValidators that can check archived types. Enums --- CheckArchiveErrorErrors that can occur when checking an archive. Traits --- ArchiveContextA context that can validate nonlocal archive memory. LayoutRawGets the layout of a type from its pointer. SharedContextA context that can validate shared archive memory. Functions --- check_archived_root_with_contextChecks the given archive with an additional context. check_archived_value_with_contextChecks the given archive with an additional context. Type Definitions --- CheckTypeErrorThe error type that can be produced by checking the given type with the given validator. Module rkyv_test::vec === An archived version of `Vec`. Structs --- ArchivedVecAn archived `Vec`. RawArchivedVecAn archived `Vec`. VecResolverThe resolver for `ArchivedVec`. Module rkyv_test::with === Wrapper type support and commonly used wrappers. Wrappers can be applied with the `#[with(...)]` attribute in the `Archive` macro. See `With` for examples. Structs --- AsBoxA wrapper that serializes a field into a box. AsOwnedA wrapper that serializes a `Cow` as if it were owned. AsStringA wrapper that attempts to convert a type to and from UTF-8. AsVecA wrapper that serializes associative containers as a `Vec` of key-value pairs. AtomicA wrapper that archives an atomic with an underlying atomic. CopyOptimizeA wrapper that provides specialized, performant implementations of serialization and deserialization. ImmutableA wrapper to make a type immutable. InlineA wrapper that serializes a reference inline. 
Lock: A wrapper that locks a lock and serializes the value immutably.
Map: A generic wrapper that allows wrapping an `Option`.
Niche: A wrapper that niches some type combinations.
Raw: A wrapper that provides an optimized bulk data array. This is primarily intended for large amounts of raw data, like bytes, floats, or integers.
RefAsBox: A wrapper that serializes a reference as if it were boxed.
Skip: A wrapper that skips serializing a field.
UnixTimestamp: A wrapper that converts a `SystemTime` to a `Duration` since `UNIX_EPOCH`.
Unsafe: A wrapper that allows serialize-unsafe types to be serialized.
With: A transparent wrapper for archived fields.
Enums
---
AsStringError: Errors that can occur when serializing an `AsString` wrapper.
LockError: Errors that can occur while serializing a `Lock` wrapper.
UnixTimestampError: Errors that can occur when serializing a `UnixTimestamp` wrapper.
Traits
---
ArchiveWith: A variant of `Archive` that works with `With` wrappers.
DeserializeWith: A variant of `Deserialize` that works with `With` wrappers.
SerializeWith: A variant of `Serialize` that works with `With` wrappers.
Type Definitions
---
Boxed (deprecated): A wrapper that serializes a reference as if it were boxed.
Macro rkyv_test::from_archived
===
```
macro_rules! from_archived {
    ($expr:expr) => { ... };
}
```
Returns the unarchived value of the given archived primitive.
This macro is not needed for most use cases. Its primary purpose is to simultaneously:
* Convert values from (potentially) different archived primitives to their native counterparts
* Allow transformation in `const` contexts
* Prevent linter warnings from unused `into()` calls
Users should feel free to use the more ergonomic `into()` where appropriate.
Macro rkyv_test::out_field
===
```
macro_rules! out_field {
    ($out:ident.$field:tt) => { ... };
}
```
Returns a tuple of the field offset and a mutable pointer to the field of the given struct pointer.
Examples
---
```
use core::mem::MaybeUninit;
use rkyv::out_field;

struct Example {
    a: i32,
    b: bool,
}

let mut result = MaybeUninit::<Example>::zeroed();
let out = result.as_mut_ptr();

let (a_off, a) = out_field!(out.a);
unsafe { a.write(42); }
let (b_off, b) = out_field!(out.b);
unsafe { b.write(true); }

let result = unsafe { result.assume_init() };
assert_eq!(result.a, 42);
assert_eq!(result.b, true);
```
Macro rkyv_test::to_archived
===
```
macro_rules! to_archived {
    ($expr:expr) => { ... };
}
```
Returns the archived value of the given archived primitive.
This macro is not needed for most use cases. Its primary purpose is to simultaneously:
* Convert values from (potentially) different primitives to their archived counterparts
* Allow transformation in `const` contexts
* Prevent linter warnings from unused `into()` calls
Users should feel free to use the more ergonomic `into()` where appropriate.
Struct rkyv_test::Infallible
===
```
pub struct Infallible;
```
A fallible type that cannot produce errors.
This type can be used to serialize and deserialize types that cannot fail to serialize or deserialize.
Trait Implementations
---
### impl Debug for Infallible
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Infallible
#### fn default() -> Self
Returns the “default value” for a type.
### impl Fallible for Infallible
#### type Error = Infallible
The error produced by any failing methods.
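Because `Infallible` implements `Fallible`, it can be passed wherever a deserializer is expected but no failure is possible. A minimal sketch, assuming the default feature set so that `to_bytes` and `archived_root` are available (paths use the `rkyv` crate name, as in the document's other examples):
```
use rkyv::{Deserialize, Infallible};

// Serialize an i32, view it in place, then deserialize it back.
let bytes = rkyv::to_bytes::<_, 256>(&42i32).expect("failed to serialize");
// Safety: `bytes` holds an archived i32 with its root at the default root position.
let archived = unsafe { rkyv::archived_root::<i32>(&bytes) };
// Deserializing a primitive cannot fail, so `Infallible` is a sufficient deserializer.
let value: i32 = archived.deserialize(&mut Infallible).expect("infallible");
assert_eq!(value, 42);
```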
Auto Trait Implementations --- ### impl RefUnwindSafe for Infallible ### impl Send for Infallible ### impl Sync for Infallible ### impl Unpin for Infallible ### impl UnwindSafe for Infallible Blanket Implementations --- ### impl<T> Any for T where    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more ### impl<T> ArchivePointee for T #### type ArchivedMetadata = () The archived version of the pointer metadata for this type. #### fn pointer_metadata(    &<T as ArchivePointee>::ArchivedMetadata) -> <T as Pointee>::Metadata Converts some archived metadata to the pointer metadata for itself. ### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more ### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more ### impl<F, W, T, D> Deserialize<With<T, W>, D> for F where    W: DeserializeWith<F, T, D>,    D: Fallible + ?Sized,    F: ?Sized, #### fn deserialize(    &self,    deserializer: &mutD) -> Result<With<T, W>, <D as Fallible>::ErrorDeserializes using the given deserializer ### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> LayoutRaw for T #### fn layout_raw(*const T) -> Layout Gets the layout of the type. ### impl<T> Pointee for T #### type Metadata = () The type for metadata in pointers and references to `Self`. ### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. ### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Trait rkyv_test::ArchivePointee === ``` pub trait ArchivePointee: Pointee { type ArchivedMetadata; fn pointer_metadata(         archived: &Self::ArchivedMetadata     ) -> <Self as Pointee>::Metadata; } ``` An archived type with associated metadata for its relative pointer. This is mostly used in the context of smart pointers and unsized types, and is implemented for all sized types by default. Required Associated Types --- #### type ArchivedMetadata The archived version of the pointer metadata for this type. Required Methods --- #### fn pointer_metadata(    archived: &Self::ArchivedMetadata) -> <Self as Pointee>::Metadata Converts some archived metadata to the pointer metadata for itself. 
Implementations on Foreign Types --- ### impl<T> ArchivePointee for [T] #### type ArchivedMetadata = <usize as Archive>::Archived #### fn pointer_metadata(    archived: &Self::ArchivedMetadata) -> <Self as Pointee>::Metadata ### impl ArchivePointee for str #### type ArchivedMetadata = <usize as Archive>::Archived #### fn pointer_metadata(    archived: &Self::ArchivedMetadata) -> <Self as Pointee>::Metadata ### impl ArchivePointee for CStr #### type ArchivedMetadata = <usize as Archive>::Archived #### fn pointer_metadata(    archived: &Self::ArchivedMetadata) -> <Self as Pointee>::Metadata Implementors --- ### impl<T> ArchivePointee for T #### type ArchivedMetadata = () Trait rkyv_test::ArchiveUnsized === ``` pub trait ArchiveUnsized: Pointee { type Archived: ArchivePointee + ?Sized; type MetadataResolver; unsafe fn resolve_metadata(         &self,         pos: usize,         resolver: Self::MetadataResolver,         out: *mutArchivedMetadata<Self    ); unsafe fn resolve_unsized(         &self,         from: usize,         to: usize,         resolver: Self::MetadataResolver,         out: *mutRelPtr<Self::Archived    ) { ... } } ``` A counterpart of `Archive` that’s suitable for unsized types. Unlike `Archive`, types that implement `ArchiveUnsized` must be serialized separately from their owning object. For example, whereas an `i32` might be laid out as part of a larger struct, a `Box<i32>` would serialize the `i32` somewhere in the archive and the `Box` would point to it as part of the larger struct. Because of this, the equivalent `Resolver` type for `ArchiveUnsized` is always a `usize` representing the position of the serialized value. `ArchiveUnsized` is automatically implemented for all types that implement `Archive`. Nothing special needs to be done to use them with types like `Box`, `Rc`, and `Arc`. It is also already implemented for slices and string slices, and the `rkyv_dyn` crate can be used to archive trait objects. Other unsized types must manually implement `ArchiveUnsized`. Examples --- This example shows how to manually implement `ArchiveUnsized` for an unsized type. Special care must be taken to ensure that the types are laid out correctly. ``` use core::{mem::transmute, ops::{Deref, DerefMut}}; use ptr_meta::Pointee; use rkyv::{ from_archived, to_archived, archived_unsized_value, ser::{serializers::AlignedSerializer, Serializer}, AlignedVec, Archive, Archived, ArchivedMetadata, ArchivePointee, ArchiveUnsized, FixedUsize, RelPtr, Serialize, SerializeUnsized, }; // We're going to be dealing mostly with blocks that have a trailing slice pub struct Block<H, T: ?Sized> { head: H, tail: T, } impl<H, T> Pointee for Block<H, [T]> { type Metadata = usize; } // For blocks with trailing slices, we need to store the length of the slice // in the metadata. pub struct BlockSliceMetadata { len: Archived<usize>, } // ArchivePointee is automatically derived for sized types because pointers // to sized types don't need to store any extra information. Because we're // making an unsized block, we need to define what metadata gets stored with // our data pointer. 
impl<H, T> ArchivePointee for Block<H, [T]> { // This is the extra data that needs to get stored for blocks with // trailing slices type ArchivedMetadata = BlockSliceMetadata; // We need to be able to turn our archived metadata into regular // metadata for our type fn pointer_metadata( archived: &Self::ArchivedMetadata ) -> <Self as Pointee>::Metadata { from_archived!(archived.len) as usize } } // We're implementing ArchiveUnsized for just Block<H, [T]>. We can still // implement Archive for blocks with sized tails and they won't conflict. impl<H: Archive, T: Archive> ArchiveUnsized for Block<H, [T]> { // We'll reuse our block type as our archived type. type Archived = Block<Archived<H>, [Archived<T>]>; // This is where we'd put any resolve data for our metadata. // Most of the time, this can just be () because most metadata is Copy, // but the option is there if you need it. type MetadataResolver = (); // Here's where we make the metadata for our pointer. // This also gets the position and resolver for the metadata, but we // don't need it in this case. unsafe fn resolve_metadata( &self, _: usize, _: Self::MetadataResolver, out: *mut ArchivedMetadata<Self>, ) { unsafe { out.write(BlockSliceMetadata { len: to_archived!(self.tail.len() as FixedUsize), }); } } } // The bounds we use on our serializer type indicate that we need basic // serializer capabilities, and then whatever capabilities our head and tail // types need to serialize themselves. impl< H: Serialize<S>, T: Serialize<S>, S: Serializer + ?Sized > SerializeUnsized<S> for Block<H, [T]> { // This is where we construct our unsized type in the serializer fn serialize_unsized( &self, serializer: &mut S ) -> Result<usize, S::Error> { // First, we archive the head and all the tails. This will make sure // that when we finally build our block, we don't accidentally mess // up the structure with serialized dependencies. let head_resolver = self.head.serialize(serializer)?; let mut resolvers = Vec::new(); for tail in self.tail.iter() { resolvers.push(tail.serialize(serializer)?); } // Now we align our serializer for our archived type and write it. // We can't align for unsized types so we treat the trailing slice // like an array of 0 length for now. serializer.align_for::<Block<Archived<H>, [Archived<T>; 0]>>()?; let result = unsafe { serializer.resolve_aligned(&self.head, head_resolver)? }; serializer.align_for::<Archived<T>>()?; for (item, resolver) in self.tail.iter().zip(resolvers.drain(..)) { unsafe { serializer.resolve_aligned(item, resolver)?; } } Ok(result) } // This is where we serialize the metadata for our type. In this case, // we do all the work in resolve and don't need to do anything here. 
fn serialize_metadata( &self, serializer: &mut S ) -> Result<Self::MetadataResolver, S::Error> { Ok(()) } } let value = Block { head: "Numbers 1-4".to_string(), tail: [1, 2, 3, 4], }; // We have a Block<String, [i32; 4]> but we want to it to be a // Block<String, [i32]>, so we need to do more pointer transmutation let ptr = (&value as *const Block<String, [i32; 4]>).cast::<()>(); let unsized_value = unsafe { &*transmute::<(*const (), usize), *const Block<String, [i32]>>((ptr, 4)) }; let mut serializer = AlignedSerializer::new(AlignedVec::new()); let pos = serializer.serialize_unsized_value(unsized_value) .expect("failed to archive block"); let buf = serializer.into_inner(); let archived_ref = unsafe { archived_unsized_value::<Block<String, [i32]>>(buf.as_slice(), pos) }; assert_eq!(archived_ref.head, "Numbers 1-4"); assert_eq!(archived_ref.tail.len(), 4); assert_eq!(archived_ref.tail, [1, 2, 3, 4]); ``` Required Associated Types --- #### type Archived: ArchivePointee + ?Sized The archived counterpart of this type. Unlike `Archive`, it may be unsized. This type must implement `ArchivePointee`, a trait that helps make valid pointers using archived pointer metadata. #### type MetadataResolver The resolver for the metadata of this type. Because the pointer metadata must be archived with the relative pointer and not with the structure itself, its resolver must be passed back to the structure holding the pointer. Required Methods --- #### unsafe fn resolve_metadata(    &self,    pos: usize,    resolver: Self::MetadataResolver,    out: *mutArchivedMetadata<Self>) Creates the archived version of the metadata for this value at the given position and writes it to the given output. The output should be initialized field-by-field rather than by writing a whole struct. Performing a typed copy will mark all of the padding bytes as uninitialized, but they must remain set to the value they currently have. This prevents leaking uninitialized memory to the final archive. ##### Safety * `pos` must be the position of `out` within the archive * `resolver` must be the result of serializing this object’s metadata Provided Methods --- #### unsafe fn resolve_unsized(    &self,    from: usize,    to: usize,    resolver: Self::MetadataResolver,    out: *mutRelPtr<Self::Archived>) Resolves a relative pointer to this value with the given `from` and `to` and writes it to the given output. The output should be initialized field-by-field rather than by writing a whole struct. Performing a typed copy will mark all of the padding bytes as uninitialized, but they must remain set to the value they currently have. This prevents leaking uninitialized memory to the final archive. 
##### Safety * `from` must be the position of `out` within the archive * `to` must be the position of some `Self::Archived` within the archive * `resolver` must be the result of serializing this object Implementations on Foreign Types --- ### impl<T: Archive> ArchiveUnsized for [T] #### type Archived = [<T as Archive>::Archived] #### type MetadataResolver = () #### unsafe fn resolve_metadata(    &self,    _: usize,    _: Self::MetadataResolver,    out: *mutArchivedMetadata<Self>) ### impl ArchiveUnsized for str `str` #### type Archived = str #### type MetadataResolver = () #### unsafe fn resolve_metadata(    &self,    _: usize,    _: Self::MetadataResolver,    out: *mutArchivedMetadata<Self>) ### impl ArchiveUnsized for CStr #### type Archived = CStr #### type MetadataResolver = () #### unsafe fn resolve_metadata(    &self,    _: usize,    _: Self::MetadataResolver,    out: *mutArchivedMetadata<Self>) Implementors --- ### impl<T: Archive> ArchiveUnsized for T #### type Archived = <T as Archive>::Archived #### type MetadataResolver = () Trait rkyv_test::DeserializeUnsized === ``` pub trait DeserializeUnsized<T: Pointee + ?Sized, D: Fallible + ?Sized>: ArchivePointee { unsafe fn deserialize_unsized(         &self,         deserializer: &mutD,         alloc: impl FnMut(Layout) -> *mutu8     ) -> Result<*mut(), D::Error>; fn deserialize_metadata(         &self,         deserializer: &mutD     ) -> Result<T::Metadata, D::Error>; } ``` A counterpart of `Deserialize` that’s suitable for unsized types. Required Methods --- #### unsafe fn deserialize_unsized(    &self,    deserializer: &mutD,    alloc: impl FnMut(Layout) -> *mutu8) -> Result<*mut(), D::ErrorDeserializes a reference to the given value. ##### Safety `out` must point to memory with the layout returned by `deserialized_layout`. #### fn deserialize_metadata(    &self,    deserializer: &mutD) -> Result<T::Metadata, D::ErrorDeserializes the metadata for the given type. Implementations on Foreign Types --- ### impl<T: Deserialize<U, D>, U, D: Fallible + ?Sized> DeserializeUnsized<[U], D> for [T] #### unsafe fn deserialize_unsized(    &self,    deserializer: &mutD,    alloc: impl FnMut(Layout) -> *mutu8) -> Result<*mut(), D::Error#### fn deserialize_metadata(    &self,    _: &mutD) -> Result<<[U] as Pointee>::Metadata, D::Error### impl<D: Fallible + ?Sized> DeserializeUnsized<str, D> for <str as ArchiveUnsized>::Archived #### unsafe fn deserialize_unsized(    &self,    _: &mutD,    alloc: impl FnMut(Layout) -> *mutu8) -> Result<*mut(), D::Error#### fn deserialize_metadata(    &self,    _: &mutD) -> Result<<str as Pointee>::Metadata, D::Error### impl<D: Fallible + ?Sized> DeserializeUnsized<CStr, D> for <CStr as ArchiveUnsized>::Archived #### unsafe fn deserialize_unsized(    &self,    _: &mutD,    alloc: impl FnMut(Layout) -> *mutu8) -> Result<*mut(), D::Error#### fn deserialize_metadata(    &self,    _: &mutD) -> Result<<CStr as Pointee>::Metadata, D::ErrorImplementors --- ### impl<T: Archive, D: Fallible + ?Sized> DeserializeUnsized<T, D> for T::Archived where    T::Archived: Deserialize<T, D>, Trait rkyv_test::Fallible === ``` pub trait Fallible { type Error: 'static; } ``` A type that can produce an error. This trait is always implemented by serializers and deserializers. Its purpose is to provide an error type without restricting what other capabilities the type must provide. 
When writing implementations for `Serialize` and `Deserialize`, it’s best practice to bound the serializer or deserializer by `Fallible` and then require that the serialized types support it (i.e. `S: Fallible, MyType: Serialize<S>`). Required Associated Types --- #### type Error: 'static The error produced by any failing methods. Implementors --- ### impl Fallible for SharedDeserializeMap #### type Error = SharedDeserializeMapError ### impl Fallible for AllocScratch #### type Error = AllocScratchError ### impl Fallible for SharedSerializeMap #### type Error = SharedSerializeMapError ### impl Fallible for Infallible #### type Error = Infallible ### impl Fallible for SharedValidator #### type Error = SharedError ### impl<'a> Fallible for ArchiveValidator<'a#### type Error = ArchiveError ### impl<'a> Fallible for DefaultValidator<'a#### type Error = DefaultValidatorError ### impl<A> Fallible for AlignedSerializer<A#### type Error = Infallible ### impl<M, F: Fallible> Fallible for FallbackScratch<M, F#### type Error = <F as Fallible>::Error ### impl<S: Fallible, C: Fallible, H: Fallible> Fallible for CompositeSerializer<S, C, H#### type Error = CompositeSerializerError<<S as Fallible>::Error, <C as Fallible>::Error, <H as Fallible>::Error### impl<T> Fallible for BufferScratch<T#### type Error = FixedSizeScratchError ### impl<T> Fallible for BufferSerializer<T#### type Error = BufferSerializerError ### impl<T: Fallible> Fallible for ScratchTracker<T#### type Error = <T as Fallible>::Error ### impl<W: Write> Fallible for WriteSerializer<W#### type Error = Error ### impl<const N: usize> Fallible for HeapScratch<N#### type Error = <BufferScratch<Box<[u8], Global>> as Fallible>::Error Trait rkyv_test::Serialize === ``` pub trait Serialize<S: Fallible + ?Sized>: Archive { fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error>; } ``` Converts a type to its archived form. Objects perform any supportive serialization during `serialize`. For types that reference nonlocal (pointed-to) data, this is when that data must be serialized to the output. These types will need to bound `S` to implement `Serializer` and any other required traits (e.g. `SharedSerializeRegistry`). They should then serialize their dependencies during `serialize`. See `Archive` for examples of implementing `Serialize`. Required Methods --- #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::ErrorWrites the dependencies for the object and returns a resolver that can create the archived type. 
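As a small illustration of the bounding pattern described above for `Fallible` and `Serialize`, the sketch below defines a hypothetical generic helper (`write_value` is not part of the crate) that accepts any serializer providing the capabilities its argument type needs:
```
use rkyv::ser::{ScratchSpace, Serializer};
use rkyv::Serialize;

// Writes `value` and its dependencies, returning the position of the archived root.
fn write_value<T, S>(value: &T, serializer: &mut S) -> Result<usize, S::Error>
where
    T: Serialize<S>,
    S: Serializer + ScratchSpace,
{
    serializer.serialize_value(value)
}
```
Bounding `S` this way keeps the helper usable with any concrete serializer that provides those capabilities, without committing to a specific error type.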
Implementations on Foreign Types --- ### impl<T: SerializeUnsized<S> + ?Sized, S: Fallible + ?Sized> Serialize<S> for Box<T#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<K: Serialize<S> + Ord, V: Serialize<S>, S: Serializer + ?Sized> Serialize<S> for BTreeMap<K, V> where    K::Archived: Ord, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<K: Serialize<S> + Ord, S: Serializer + ?Sized> Serialize<S> for BTreeSet<K> where    K::Archived: Ord, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T, S> Serialize<S> for Rc<T> where    T: SerializeUnsized<S> + ?Sized + 'static,    S: Serializer + SharedSerializeRegistry + ?Sized, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T, S> Serialize<S> for Weak<T> where    T: SerializeUnsized<S> + ?Sized + 'static,    S: Serializer + SharedSerializeRegistry + ?Sized, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T, S> Serialize<S> for Arc<T> where    T: SerializeUnsized<S> + ?Sized + 'static,    S: Serializer + SharedSerializeRegistry + ?Sized, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T, S> Serialize<S> for Weak<T> where    T: SerializeUnsized<S> + ?Sized + 'static,    S: Serializer + SharedSerializeRegistry + ?Sized, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for String where    str: SerializeUnsized<S>, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, S: ScratchSpace + Serializer + ?Sized> Serialize<S> for Vec<T#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for RangeFull #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for Range<T#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for RangeInclusive<T#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for RangeFrom<T#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for RangeTo<T#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for RangeToInclusive<T#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for Option<T#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for () #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for bool #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i8 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u8 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI8 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU8 #### fn 
serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicBool #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI8 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU8 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i16 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i32 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i64 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i128 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u16 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u32 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u64 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u128 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for f32 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for f64 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for char #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI16 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI32 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI64 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI128 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU16 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU32 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU64 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU128 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI16 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI32 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI64 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU16 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU32 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU64 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, 
S::Error### impl<T: ?Sized, S: Fallible + ?Sized> Serialize<S> for PhantomData<T#### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for PhantomPinned #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for usize #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for isize #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroUsize #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroIsize #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicUsize #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicIsize #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, E: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for Result<T, E#### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for Duration #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<T11: Serialize<S>, T10: Serialize<S>, T9: Serialize<S>, T8: Serialize<S>, T7: Serialize<S>, T6: Serialize<S>, T5: Serialize<S>, T4: Serialize<S>, T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T11, T10, T9, T8, T7, T6, T5, T4, T3, T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T10: Serialize<S>, T9: Serialize<S>, T8: Serialize<S>, T7: Serialize<S>, T6: Serialize<S>, T5: Serialize<S>, T4: Serialize<S>, T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T10, T9, T8, T7, T6, T5, T4, T3, T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T9: Serialize<S>, T8: Serialize<S>, T7: Serialize<S>, T6: Serialize<S>, T5: Serialize<S>, T4: Serialize<S>, T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T9, T8, T7, T6, T5, T4, T3, T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T8: Serialize<S>, T7: Serialize<S>, T6: Serialize<S>, T5: Serialize<S>, T4: Serialize<S>, T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T8, T7, T6, T5, T4, T3, T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T7: Serialize<S>, T6: Serialize<S>, T5: Serialize<S>, T4: Serialize<S>, T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T7, T6, T5, T4, T3, T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T6: Serialize<S>, T5: Serialize<S>, T4: Serialize<S>, T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T6, T5, T4, T3, T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T5: Serialize<S>, T4: Serialize<S>, T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T5, T4, T3, T2, T1, T0) #### fn 
serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T4: Serialize<S>, T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T4, T3, T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T3: Serialize<S>, T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T3, T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T2: Serialize<S>, T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T2, T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T1: Serialize<S>, T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T1, T0) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T0: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for (T0,) #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<T: Serialize<S>, S: Fallible + ?Sized, const N: usize> Serialize<S> for [T; N] #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i16_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i32_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i64_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i128_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u16_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u32_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u64_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u128_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for f32_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for f64_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for char_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI16_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI32_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI64_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI128_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU16_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU32_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU64_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU128_be #### 
fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI16_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI32_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI64_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU16_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU32_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU64_be #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i16_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i32_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i64_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for i128_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u16_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u32_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u64_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for u128_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for f32_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for f64_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for char_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI16_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI32_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI64_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroI128_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU16_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU32_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU64_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for NonZeroU128_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI16_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicI32_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> 
Serialize<S> for AtomicI64_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU16_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU32_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for AtomicU64_le #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<K, V, S, RandomState> Serialize<S> for HashMap<K, V, RandomState> where    K: Serialize<S> + Hash + Eq,    K::Archived: Hash + Eq,    V: Serialize<S>,    S: Serializer + ScratchSpace + ?Sized, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<K, S, RS> Serialize<S> for HashSet<K, RS> where    K::Archived: Hash + Eq,    K: Serialize<S> + Hash + Eq,    S: ScratchSpace + Serializer + ?Sized, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Serializer + ?Sized> Serialize<S> for CString #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for Ipv4Addr #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for Ipv6Addr #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for IpAddr #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for SocketAddrV4 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for SocketAddrV6 #### fn serialize(&self, _: &mutS) -> Result<Self::Resolver, S::Error### impl<S: Fallible + ?Sized> Serialize<S> for SocketAddr #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<K, V, S, RandomState> Serialize<S> for HashMap<K, V, RandomState> where    K: Serialize<S> + Hash + Eq,    K::Archived: Hash + Eq,    V: Serialize<S>,    S: Serializer + ScratchSpace + ?Sized, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::Error### impl<K, S, RS> Serialize<S> for HashSet<K, RS> where    K::Archived: Hash + Eq,    K: Serialize<S> + Hash + Eq,    S: ScratchSpace + Serializer + ?Sized, #### fn serialize(&self, serializer: &mutS) -> Result<Self::Resolver, S::ErrorImplementors --- ### impl<F: ?Sized, W: SerializeWith<F, S>, S: Fallible + ?Sized> Serialize<S> for With<F, W### impl<K: Serialize<S>, V: Serialize<S>, S: Fallible + ?Sized> Serialize<S> for Entry<&K, &VTrait rkyv_test::SerializeUnsized === ``` pub trait SerializeUnsized<S: Fallible + ?Sized>: ArchiveUnsized { fn serialize_unsized(&self, serializer: &mutS) -> Result<usize, S::Error>; fn serialize_metadata(         &self,         serializer: &mutS     ) -> Result<Self::MetadataResolver, S::Error>; } ``` A counterpart of `Serialize` that’s suitable for unsized types. See `ArchiveUnsized` for examples of implementing `SerializeUnsized`. Required Methods --- #### fn serialize_unsized(&self, serializer: &mutS) -> Result<usize, S::ErrorWrites the object and returns the position of the archived type. #### fn serialize_metadata(    &self,    serializer: &mutS) -> Result<Self::MetadataResolver, S::ErrorSerializes the metadata for the given type. 
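As a rough sketch of how `SerializeUnsized` is driven directly (assuming the `alloc` feature so that `AlignedSerializer` and `AlignedVec` are available), a string slice can be written through a serializer, with the returned position locating the archived bytes:
```
use rkyv::ser::serializers::AlignedSerializer;
use rkyv::{AlignedVec, SerializeUnsized};

let mut serializer = AlignedSerializer::new(AlignedVec::new());
// Writes the bytes of the str and returns the position of the archived str.
let pos = "hello world"
    .serialize_unsized(&mut serializer)
    .expect("failed to serialize str");
let buf = serializer.into_inner();
assert!(pos <= buf.len());
```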
Implementations on Foreign Types --- ### impl<T: Serialize<S>, S: ScratchSpace + Serializer + ?Sized> SerializeUnsized<S> for [T] #### fn serialize_unsized(&self, serializer: &mutS) -> Result<usize, S::Error#### fn serialize_metadata(    &self,    _: &mutS) -> Result<Self::MetadataResolver, S::Error### impl<S: Serializer + ?Sized> SerializeUnsized<S> for str #### fn serialize_unsized(&self, serializer: &mutS) -> Result<usize, S::Error#### fn serialize_metadata(    &self,    _: &mutS) -> Result<Self::MetadataResolver, S::Error### impl<S: Serializer + ?Sized> SerializeUnsized<S> for CStr #### fn serialize_unsized(&self, serializer: &mutS) -> Result<usize, S::Error#### fn serialize_metadata(    &self,    _: &mutS) -> Result<Self::MetadataResolver, S::ErrorImplementors --- ### impl<T: Serialize<S>, S: Serializer + ?Sized> SerializeUnsized<S> for T Function rkyv_test::from_bytes === ``` pub fn from_bytes<'a, T>(bytes: &'a [u8]) -> Result<T, FromBytesError<'a, T>> where     T: Archive,     T::Archived: 'a + CheckBytes<DefaultValidator<'a>> + Deserialize<T, SharedDeserializeMap>,  ``` Checks and deserializes a value from the given bytes. This function is only available with the `alloc` and `validation` features because it uses a general-purpose deserializer and performs validation on the data before deserializing. In no-alloc and high-performance environments, the deserializer should be customized for the specific situation. Examples --- ``` let value = vec![1, 2, 3, 4]; let bytes = rkyv::to_bytes::<_, 1024>(&value).expect("failed to serialize vec"); let deserialized = rkyv::from_bytes::<Vec<i32>>(&bytes).expect("failed to deserialize vec"); assert_eq!(deserialized, value); ``` Type Definition rkyv_test::Archived === ``` pub type Archived<T> = <T as Archive>::Archived; ``` Alias for the archived version of some `Archive` type. This can be useful for reducing the lengths of type definitions. 
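A short sketch of how the alias keeps signatures compact (the helper name is illustrative; assumes the default feature set so `to_bytes` and `archived_root` are available):
```
use rkyv::Archived;

// Hypothetical helper: inspect an archived Vec<i32> in place, without deserializing.
fn archived_len(archived: &Archived<Vec<i32>>) -> usize {
    archived.len()
}

let bytes = rkyv::to_bytes::<_, 256>(&vec![1, 2, 3, 4]).expect("failed to serialize");
// Safety: `bytes` holds a serialized Vec<i32> with its root at the default root position.
let archived = unsafe { rkyv::archived_root::<Vec<i32>>(&bytes) };
assert_eq!(archived_len(archived), 4);
```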
Trait Implementations --- ### impl<D: Fallible + ?Sized> Deserialize<(), D> for Archived<()#### fn deserialize(&self, _: &mutD) -> Result<(), D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicBool, D> for Archived<AtomicBool#### fn deserialize(&self, _: &mutD) -> Result<AtomicBool, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicI16, D> for Archived<AtomicI16#### fn deserialize(&self, _: &mutD) -> Result<AtomicI16, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicI32, D> for Archived<AtomicI32#### fn deserialize(&self, _: &mutD) -> Result<AtomicI32, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicI64, D> for Archived<AtomicI64#### fn deserialize(&self, _: &mutD) -> Result<AtomicI64, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicI8, D> for Archived<AtomicI8#### fn deserialize(&self, _: &mutD) -> Result<AtomicI8, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicIsize, D> for Archived<AtomicIsize#### fn deserialize(&self, _: &mutD) -> Result<AtomicIsize, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicU16, D> for Archived<AtomicU16#### fn deserialize(&self, _: &mutD) -> Result<AtomicU16, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicU32, D> for Archived<AtomicU32#### fn deserialize(&self, _: &mutD) -> Result<AtomicU32, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicU64, D> for Archived<AtomicU64#### fn deserialize(&self, _: &mutD) -> Result<AtomicU64, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicU8, D> for Archived<AtomicU8#### fn deserialize(&self, _: &mutD) -> Result<AtomicU8, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<AtomicUsize, D> for Archived<AtomicUsize#### fn deserialize(&self, _: &mutD) -> Result<AtomicUsize, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroI128>, D> for Archived<NonZeroI128_be#### fn deserialize(&self, _: &mutD) -> Result<NonZeroI128_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroI16>, D> for Archived<NonZeroI16_be#### fn deserialize(&self, _: &mutD) -> Result<NonZeroI16_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroI32>, D> for Archived<NonZeroI32_be#### fn deserialize(&self, _: &mutD) -> Result<NonZeroI32_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroI64>, D> for Archived<NonZeroI64_be#### fn deserialize(&self, _: &mutD) -> Result<NonZeroI64_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroU128>, D> for Archived<NonZeroU128_be#### fn deserialize(&self, _: &mutD) -> Result<NonZeroU128_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroU16>, D> for Archived<NonZeroU16_be#### fn deserialize(&self, _: &mutD) -> Result<NonZeroU16_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> 
Deserialize<BigEndian<NonZeroU32>, D> for Archived<NonZeroU32_be#### fn deserialize(&self, _: &mutD) -> Result<NonZeroU32_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<NonZeroU64>, D> for Archived<NonZeroU64_be#### fn deserialize(&self, _: &mutD) -> Result<NonZeroU64_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<char>, D> for Archived<char_be#### fn deserialize(&self, _: &mutD) -> Result<char_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<f32>, D> for Archived<f32_be#### fn deserialize(&self, _: &mutD) -> Result<f32_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<f64>, D> for Archived<f64_be#### fn deserialize(&self, _: &mutD) -> Result<f64_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<i128>, D> for Archived<i128_be#### fn deserialize(&self, _: &mutD) -> Result<i128_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<i16>, D> for Archived<i16_be#### fn deserialize(&self, _: &mutD) -> Result<i16_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<i32>, D> for Archived<i32_be#### fn deserialize(&self, _: &mutD) -> Result<i32_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<i64>, D> for Archived<i64_be#### fn deserialize(&self, _: &mutD) -> Result<i64_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<u128>, D> for Archived<u128_be#### fn deserialize(&self, _: &mutD) -> Result<u128_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<u16>, D> for Archived<u16_be#### fn deserialize(&self, _: &mutD) -> Result<u16_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<u32>, D> for Archived<u32_be#### fn deserialize(&self, _: &mutD) -> Result<u32_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<BigEndian<u64>, D> for Archived<u64_be#### fn deserialize(&self, _: &mutD) -> Result<u64_be, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<CString, D> for Archived<CString> where    CStr: DeserializeUnsized<CStr, D>, #### fn deserialize(&self, deserializer: &mutD) -> Result<CString, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroI128>, D> for Archived<NonZeroI128_le#### fn deserialize(&self, _: &mutD) -> Result<NonZeroI128_le, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroI16>, D> for Archived<NonZeroI16_le#### fn deserialize(&self, _: &mutD) -> Result<NonZeroI16_le, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroI32>, D> for Archived<NonZeroI32_le#### fn deserialize(&self, _: &mutD) -> Result<NonZeroI32_le, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> Deserialize<LittleEndian<NonZeroI64>, D> for Archived<NonZeroI64_le#### fn deserialize(&self, _: &mutD) -> Result<NonZeroI64_le, D::ErrorDeserializes using the given deserializer ### impl<D: Fallible + ?Sized> 
Deserialize<LittleEndian<NonZeroU128>, D> for Archived<NonZeroU128_le>
#### fn deserialize(&self, _: &mut D) -> Result<NonZeroU128_le, D::Error>
Deserializes using the given deserializer

The analogous `impl<D: Fallible + ?Sized> Deserialize<LittleEndian<T>, D> for Archived<T_le>` (with `fn deserialize(&self, _: &mut D) -> Result<T_le, D::Error>`, "Deserializes using the given deserializer") exists for each of the remaining little-endian archived primitives:

* NonZeroU16 / NonZeroU16_le
* NonZeroU32 / NonZeroU32_le
* NonZeroU64 / NonZeroU64_le
* char / char_le
* f32 / f32_le
* f64 / f64_le
* i128 / i128_le
* i16 / i16_le
* i32 / i32_le
* i64 / i64_le
* u128 / u128_le
* u16 / u16_le
* u32 / u32_le
* u64 / u64_le

### impl<D: Fallible + ?Sized> Deserialize<NonZeroI8, D> for Archived<NonZeroI8>
#### fn deserialize(&self, _: &mut D) -> Result<NonZeroI8, D::Error>
Deserializes using the given deserializer

The same impl exists for NonZeroIsize, NonZeroU8, and NonZeroUsize.

### impl<T: Archive, D: Fallible + ?Sized> Deserialize<Range<T>, D> for Archived<Range<T>> where T::Archived: Deserialize<T, D>
#### fn deserialize(&self, deserializer: &mut D) -> Result<Range<T>, D::Error>
Deserializes using the given deserializer

The same impl (with the same `T: Archive`, `T::Archived: Deserialize<T, D>`, `D: Fallible + ?Sized` bounds) exists for RangeFrom<T>, RangeInclusive<T>, RangeTo<T>, and RangeToInclusive<T>.

### impl<D: Fallible + ?Sized> Deserialize<bool, D> for Archived<bool>
#### fn deserialize(&self, _: &mut D) -> Result<bool, D::Error>
Deserializes using the given deserializer

The same impl exists for i8, isize, u8, and usize.

### impl Offset for Archived<i16>
#### fn between(from: usize, to: usize) -> Result<Self, OffsetError>
Creates a new offset between a `from` position and a `to` position.
#### fn to_isize(&self) -> isize
Gets the offset as an `isize`.

The same impl exists for Archived<i32>, Archived<i64>, Archived<u16>, Archived<u32>, and Archived<u64>.

Type Definition rkyv_test::ArchivedMetadata
===
```
pub type ArchivedMetadata<T> = <<T as ArchiveUnsized>::Archived as ArchivePointee>::ArchivedMetadata;
```
Alias for the archived metadata for some `ArchiveUnsized` type. This can be useful for reducing the lengths of type definitions.

Type Definition rkyv_test::FixedIsize
===
```
pub type FixedIsize = i32;
```
The native type that `isize` is converted to for archiving. This will be `i16`, `i32`, or `i64` when the `size_16`, `size_32`, or `size_64` features are enabled, respectively.

Type Definition rkyv_test::FixedUsize
===
```
pub type FixedUsize = u32;
```
The native type that `usize` is converted to for archiving. This will be `u16`, `u32`, or `u64` when the `size_16`, `size_32`, or `size_64` features are enabled, respectively.

Type Definition rkyv_test::MetadataResolver
===
```
pub type MetadataResolver<T> = <T as ArchiveUnsized>::MetadataResolver;
```
Alias for the metadata resolver for some `ArchiveUnsized` type. This can be useful for reducing the lengths of type definitions.

Type Definition rkyv_test::RawRelPtr
===
```
pub type RawRelPtr = RawRelPtr<Archived<isize>>;
```
The default raw relative pointer. This will use an archived `FixedIsize` to hold the offset.

Type Definition rkyv_test::RelPtr
===
```
pub type RelPtr<T> = RelPtr<T, Archived<isize>>;
```
The default relative pointer. This will use an archived `FixedIsize` to hold the offset.

Type Definition rkyv_test::Resolver
===
```
pub type Resolver<T> = <T as Archive>::Resolver;
```
Alias for the resolver for some `Archive` type. This can be useful for reducing the lengths of type definitions.

Derive Macro rkyv_test::Deserialize
===
```
#[derive(Deserialize)]
{
    // Attributes available to this derive:
    #[archive]
    #[omit_bounds]
    #[with]
}
```
Derives `Deserialize` for the labeled type.
This macro also supports the `#[archive]`, `#[omit_bounds]`, and `#[with]` attributes. See `Archive` for more information.

Derive Macro rkyv_test::Serialize
===
```
#[derive(Serialize)]
{
    // Attributes available to this derive:
    #[archive]
    #[omit_bounds]
    #[with]
}
```
Derives `Serialize` for the labeled type.
This macro also supports the `#[archive]`, `#[omit_bounds]`, and `#[with]` attributes. See `Archive` for more information.
macc
cran
R
Package ‘macc’ October 13, 2022
Type Package
Title Mediation Analysis of Causality under Confounding
Version 1.0.1
Date 2017-08-20
Author <NAME> <<EMAIL>>, <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description Performs causal mediation analysis under confounding or correlated errors. This package includes a single level mediation model, a two-level mediation model, and a three-level mediation model for data with hierarchical structures. Under the two/three-level mediation model, the correlation parameter is identifiable and is estimated based on a hierarchical-likelihood, a marginal-likelihood or a two-stage method. See <NAME>., & <NAME>. (2014), Estimating Mediation Effects under Correlated Errors with an Application to fMRI, <arXiv:1410.7217> for details.
License GPL (>= 2)
Depends lme4, nlme, optimx, MASS, car
NeedsCompilation no
Repository CRAN
Date/Publication 2017-08-23 20:03:54 UTC

R topics documented:
macc-package
env.single
env.three
env.two
macc
sim.data.multi
sim.data.single

macc-package Causal Mediation Analysis under Correlated Errors

Description
macc performs causal mediation analysis under confounding or correlated errors. This package includes a single level mediation model, a two-level mediation model and a three-level mediation model for data with hierarchical structure. Under the two/three-level mediation model, the correlation parameter is identifiable and estimated based on a hierarchical-likelihood or a two-stage method.

Details
Package: macc
Type: Package
Version: 1.0.1
Date: 2017-08-20
License: GPL (>=2)

Author(s)
<NAME> <<EMAIL>> and <NAME> <<EMAIL>>
Maintainer: <NAME> <<EMAIL>>

References
Zhao, Y., & <NAME>. (2014). Estimating Mediation Effects under Correlated Errors with an Application to fMRI. arXiv preprint arXiv:1410.7217.

env.single Simulated single-level dataset

Description
"env.single" is an R environment containing a data frame of data generated from 100 trials, the true coefficients, and the covariance matrix of the model errors.

Usage
data("env.single")

Format
An R environment.
data1 a data frame with Z the treatment assignment, M the mediator and R the outcome of interest.
Theta a 2 by 2 matrix containing the coefficients of the model.
Sigma a 2 by 2 matrix, the covariance matrix of the model errors.

Details
The number of subjects is 100. The coefficients are set to A = 0.5, C = 0.5 and B = −1. The variances of the model errors are σ_1^2 = 1, σ_2^2 = 4 and the correlation is δ = 0.5. See Section 5.1 of the reference.

References
<NAME>., & <NAME>. (2014). Estimating Mediation Effects under Correlated Errors with an Application to fMRI. arXiv preprint arXiv:1410.7217.

Examples
data(env.single)
dt <- get("data1", env.single)

env.three Simulated three-level dataset

Description
"env.three" is an R environment containing a data list generated from 50 subjects and 4 sessions, and the parameter settings used to generate the data.

Usage
data("env.three")

Format
An R environment.
data3 a list of length 50; each element contains a list of length 4 of a data frame with 3 variables.
Theta a 2 by 2 matrix, the values of the fixed effects.
Sigma the covariance matrix of the model error terms for the single level model.
n a 50 by 4 matrix, the number of trials for each subject in each session.
Psi the covariance matrix of the random effects in the mixed effects model.
Lambda the covariance matrix of the model errors in the mixed effects model.
A a 50 by 4 matrix, the A value in the single-level model for each subject and session.
B a 50 by 4 matrix, the B value in the single-level model for each subject and session.
C a 50 by 4 matrix, the C value in the single-level model for each subject and session.

Details
The number of subjects is N = 50 and the number of sessions is K = 4. Under each session of each subject, the number of trials is a random draw from a Poisson distribution with mean 100. The fixed effects are set to A = 0.5, C = 0.5, and B = −1, and the variances of the model errors are σ_{1ik}^2 = 1, σ_{2ik}^2 = 4 and the correlation is δ = 0.5. See Section 5.2 of the reference for details.

References
<NAME>., & <NAME>. (2014). Estimating Mediation Effects under Correlated Errors with an Application to fMRI. arXiv preprint arXiv:1410.7217.

Examples
data(env.three)
dt <- get("data3", env.three)

env.two Simulated two-level dataset

Description
"env.two" is an R environment containing a data list generated from 50 subjects, and the parameter settings used to generate the data.

Usage
data("env.two")

Format
An R environment.
data2 a list of length 50; each element contains a data frame with 3 variables.
Theta a 2 by 2 matrix containing the population level model coefficients.
Sigma the covariance matrix of the model error terms for the single level model.
n a 50 by 1 matrix, the number of trials for each subject.
Lambda the covariance matrix of the model errors in the coefficient regression model.
A a vector of length 50, the A value in the single-level model for each subject.
B a vector of length 50, the B value in the single-level model for each subject.
C a vector of length 50, the C value in the single-level model for each subject.

Details
The number of subjects is N = 50. For each subject, the number of trials is a random draw from a Poisson distribution with mean 100. The population level coefficients are set to A = 0.5, C = 0.5 and B = −1, and the variances of the model errors are σ_{1i}^2 = 1, σ_{2i}^2 = 4 and the correlation is δ = 0.5. See Section 5.2 of the reference for details. This is a special case of the three-level data with K = 1.

References
Zhao, Y., & <NAME>. (2014). Estimating Mediation Effects under Correlated Errors with an Application to fMRI. arXiv preprint arXiv:1410.7217.

Examples
data(env.two)
dt <- get("data2", env.two)

macc Mediation Analysis of Causality under Confounding

Description
This function performs causal mediation analysis under confounding or correlated errors for single level, two-level, and three-level mediation models.

Usage
macc(dat, model.type = c("single", "multilevel", "twolevel"),
  method = c("HL", "TS", "HL-TS"), delta = NULL, interval = c(-0.9, 0.9),
  tol = 0.001, max.itr = 500, conf.level = 0.95,
  optimizer = c("optimx", "bobyqa", "Nelder_Mead"), mix.pkg = c("nlme", "lme4"),
  random.indep = TRUE, random.var.equal = FALSE, u.int = FALSE,
  Sigma.update = TRUE, var.constraint = TRUE, random.var.update = TRUE,
  logLik.type = c("logLik", "HL"), error.indep = TRUE, error.var.equal = FALSE,
  sens.plot = FALSE, sens.interval = seq(-1, 1, by = 0.01),
  legend.pos = "topright", xlab = expression(delta), ylab = expression(hat(AB)),
  cex.lab = 1, cex.axis = 1, lgd.cex = 1, lgd.pt.cex = 1, plot.delta0 = TRUE, ...)

Arguments
dat a data frame or a list of data. When it is a data frame, it contains Z as the treatment/exposure assignment, M as the mediator and R as the outcome of interest, and model.type should be "single". Z, M and R are each in one column.
When it is a list, the list length is the number of subjects. For a two-level dataset, each list element contains one data frame with Z, M and R, and model.type should be "twolevel"; for a three-level dataset, each subject's element consists of K lists of data frames, and model.type should be "multilevel".
model.type a character string giving the model type: "single" for the single level model, "multilevel" for the three-level model and "twolevel" for the two-level model.
method a character string giving the method used for the two/three-level mediation model. When delta is given, the method can be either "HL" (hierarchical-likelihood) or "TS" (two-stage); when delta is not given, the method can be "HL", "TS" or "HL-TS". The "HL-TS" method estimates delta by the "HL" method first and uses the "TS" method to estimate the rest of the parameters. For the three-level model, when method = "HL" and u.int = TRUE, the parameters are estimated through a marginal-likelihood method.
delta a number giving the correlation between the model errors. Default value is NULL. When model.type = "single", the default will be 0. For the two/three-level model, if delta = NULL, the value of delta will be estimated.
interval a vector of length two indicating the search interval when estimating delta. Default is (-0.9,0.9).
tol a number indicating the tolerance of convergence for the "HL" method. Default is 0.001.
max.itr an integer indicating the maximum number of iterations for the "HL" method. Default is 500.
conf.level a number indicating the significance level of the confidence intervals. Default is 0.95.
optimizer a character string naming the optimizing function(s) in the mixed effects model. This is used only for the three-level model. For details, see lmerControl.
mix.pkg a character string naming the package used for the mixed effects model in a three-level mediation model.
random.indep a logical value indicating whether the random effects in the mixed effects model are independent. Default is TRUE, assuming the random effects are independent. This is used for model.type = "multilevel" only.
random.var.equal a logical value indicating whether the variances of the random effects are identical. Default is FALSE, assuming the variances are not identical. This is used for model.type = "multilevel" only.
u.int a logical value. Default is FALSE. When u.int = TRUE, a marginal-likelihood method, which integrates out the random effects in the mixed effects model, is used to estimate the parameters. This is used when model.type = "multilevel" and method = "HL" or method = "HL-TS".
Sigma.update a logical value. Default is TRUE, and the estimated variances of the errors in the single level model will be updated in each iteration when running a two/three-level mediation model.
var.constraint a logical value. Default is TRUE, and an interval constraint is added on the variance components in the two/three-level mediation model.
random.var.update a logical value. Default is TRUE, and the estimates of the variance of the random effects in the mixed effects model are updated in each iteration. This is used when model.type = "multilevel" and method = "HL" or method = "HL-TS".
logLik.type a character value indicating the type of likelihood value returned. It is used for the "TS" method. When logLik.type = "logLik", the log-likelihood of the mixed effects model is maximized; when logLik.type = "HL", the sum of the log-likelihoods of the single level model and the mixed effects model is maximized. This is used for model.type = "multilevel".
error.indep a logical value. Default is TRUE. This is used for model.type = "twolevel".
When error.indep = TRUE, the error terms in the three linear models for A, B and C are independent.
error.var.equal a logical value. Default is FALSE. This is used for model.type = "twolevel". When error.var.equal = TRUE, the variances of the error terms in the three linear models for A, B and C are assumed to be identical.
sens.plot a logical value. Default is FALSE. This is used only for the single level model. When sens.plot = TRUE, the sensitivity analysis will be performed and plotted.
sens.interval a sequence of delta values under which the sensitivity analysis is performed. Default is a sequence from -1 to 1 with increment 0.01. The elements with absolute value 1 will be excluded from the analysis.
legend.pos a character string indicating the location of the legend when sens.plot = TRUE. This is used for the single level model.
xlab a title for the x axis in the sensitivity plot.
ylab a title for the y axis in the sensitivity plot.
cex.lab the magnification to be used for x and y labels relative to the current setting of cex.
cex.axis the magnification to be used for axis annotation relative to the current setting of cex.
lgd.cex the magnification to be used for the legend relative to the current setting of cex.
lgd.pt.cex the magnification to be used for the points in the legend relative to the current setting of cex.
plot.delta0 a logical value. Default is TRUE. When plot.delta0 = TRUE, the estimates when δ = 0 are plotted.
... additional arguments to be passed.

Details
The single level mediation model is
M = ZA + E1,
R = ZC + MB + E2.
The correlation between the model errors E1 and E2 is assumed to be δ. The coefficients are estimated by maximizing the log-likelihood function. The confidence intervals of the coefficients are calculated based on the asymptotic joint distribution. The variance of the AB estimator based on either the product method or the difference method is obtained from the Delta method. Under this single level model, δ is not identifiable. Sensitivity analysis for the indirect effect (AB) can be used to assess how the findings deviate when the independence assumption (δ = 0) is violated.
The two/three-level mediation models are proposed to estimate δ from data without sensitivity analysis. They address the within/between-subject variation issue for datasets with hierarchical structure. For simplicity, we refer to the three levels of data as trials, sessions and subjects, respectively. See the reference for more details. Under the two-level mediation model, the data consist of N independent subjects and n_i trials for subject i; under the three-level mediation model, the data consist of N independent subjects, K sessions for each, and n_ik trials. Under the two-level (three-level) models, the single level mediation model is first applied to the trials from (the same session of) a single subject. The coefficients then follow a linear (mixed effects) model. Here we enforce the assumption that δ is constant across (sessions and) subjects. The parameters are estimated through a hierarchical-likelihood (or marginal likelihood) or a two-stage method.

Value
When model.type = "single",
Coefficients point estimates of the coefficients, as well as the corresponding standard errors and confidence intervals. The indirect effect is estimated by both the product (ABp) and the difference (ABd) methods.
D point estimate of the regression coefficients in matrix form.
Sigma estimated covariance matrix of the model errors.
delta the δ value used to estimate the rest of the parameters.
time the CPU time used, see system.time.
When model.type = "multilevel", delta the specified or estimated value of correlation. Coefficients the estimated fixed effects in the mixed effects model for the coefficients, as well as the corresponding confidence intervals and standard errors. Here confidence intervals and standard errors are the estimates directly from the mixed effects model, the variation of estimating these parameters is not accounted for. Cor.comp estimated correlation matrix of the random effects in the mixed effects model. Var.comp estimated variance components in the mixed effects model. Var.C2 estimated variance components for C 0 , the total effect, if a mixed effects model is considered. logLik the value of maximized log-likelihood of the mixed effects model. HL the value of hierarchical-likelihood. convergence the logic value indicating if the method converges. time the CPU time used, see system.time . When model.type = "twolevel" delta the specified or estimated value of correlation. Coefficients the estimated population level effect in the regression models. Lambda the estimated covariance matrix of the model errors in the coefficient regression models. Sigma the estimated variances of E1 and E2 for each subject. HL the value of full-likelihood (hierarchical-likelihood). convergence the logic value indicating if the method converges. Var.constraint the interval constraints used for the variances in the coefficient regression mod- els. time the CPU time used, see system.time . Author(s) <NAME>, Brown University, <<EMAIL>>; <NAME>, Brown University, <<EMAIL>>. References Zhao, Y., & Luo, X. (2014). Estimating Mediation Effects under Correlated Errors with an Appli- cation to fMRI. arXiv preprint arXiv:1410.7217. Examples # Examples with simulated data ############################################################################## # Single level mediation model # Data was generated with 500 independent trials. The correlation between model errors is 0.5. data(env.single) data.SL<-get("data1",env.single) ## Example 1: Given delta is 0.5. macc(data.SL,model.type="single",delta=0.5) # $Coefficients # Estimate SE LB UB # A 0.3572722 0.1483680 0.06647618 0.64806816 # C 0.8261253 0.2799667 0.27740060 1.37485006 # B -0.9260217 0.1599753 -1.23956743 -0.61247594 # C2 0.4952836 0.2441369 0.01678400 0.97378311 # ABp -0.3308418 0.1488060 -0.62249617 -0.03918738 # ABd -0.3308418 0.3714623 -1.05889442 0.39721087 ## Example 2: Assume the errors are independent. macc(data.SL,model.type="single",delta=0) # $Coefficients # Estimate SE LB UB # A 0.3572721688 0.14836803 0.06647618 0.6480682 # C 0.4961424716 0.24413664 0.01764345 0.9746415 # B -0.0024040905 0.15997526 -0.31594984 0.3111417 # C2 0.4952835570 0.24413691 0.01678400 0.9737831 # ABp -0.0008589146 0.05715582 -0.11288227 0.1111644 # ABd -0.0008589146 0.34526154 -0.67755910 0.6758413 ## Example 3: Sensitivity analysis (given delta is 0.5). macc(data.SL,model.type="single",delta=0.5,sens.plot=TRUE) ############################################################################## ############################################################################## # Three-level mediation model # Data was generated with 50 subjects and 4 sessions. # The correlation between model errors in the single level is 0.5. # We comment out our examples due to the computation time. data(env.three) data.ML<-get("data3",env.three) ## Example 1: Correlation is unknown and to be estimated. # Assume random effects are independent # Add an interval constraint on the variance components. 
# "HL" method # macc(data.ML,model.type="multilevel",method="HL") # $delta # [1] 0.5224803 # $Coefficients # Estimate LB UB SE # A 0.51759400 0.3692202 0.6659678 0.07570229 # C 0.56882035 0.3806689 0.7569718 0.09599742 # B -1.13624114 -1.3688690 -0.9036133 0.11868988 # C2 -0.06079748 -0.4163135 0.2947186 0.18138908 # AB.prod -0.58811160 -0.7952826 -0.3809406 0.10570145 # AB.diff -0.62961784 -1.0318524 -0.2273833 0.20522549 # # $time # user system elapsed # 44.34 3.53 17.71 # "ML" method # macc(data.ML,model.type="multilevel",method="HL",u.int=TRUE) # $delta # [1] 0.5430744 # $Coefficients # Estimate LB UB SE # A 0.51764821 0.3335094 0.7017871 0.09395011 # C 0.59652821 0.3715001 0.8215563 0.11481236 # B -1.19426328 -1.4508665 -0.9376601 0.13092240 # C2 -0.06079748 -0.4163135 0.2947186 0.18138908 # AB.prod -0.61820825 -0.8751214 -0.3612951 0.13108056 # AB.diff -0.65732570 -1.0780742 -0.2365772 0.21467155 # # $time # user system elapsed # 125.49 9.52 39.10 # "TS" method # macc(data.ML,model.type="multilevel",method="TS") # $delta # [1] 0.5013719 # $Coefficients # Estimate LB UB SE # A 0.51805823 0.3316603 0.7044561 0.09510271 # C 0.53638546 0.3066109 0.7661601 0.11723409 # B -1.07930526 -1.3386926 -0.8199179 0.13234293 # C2 -0.06079748 -0.4163135 0.2947186 0.18138908 # AB.prod -0.55914297 -0.8010745 -0.3172114 0.12343672 # AB.diff -0.59718295 -1.0204890 -0.1738769 0.21597645 # # $time # user system elapsed # 19.53 0.00 19.54 ## Example 2: Given the correlation is 0.5. # Assume random effects are independent. # Add an interval constraint on the variance components. # "HL" method macc(data.ML,model.type="multilevel",method="HL",delta=0.5) # $delta # [1] 0.5 # $Coefficients # Estimate LB UB SE # A 0.51760568 0.3692319 0.6659794 0.07570229 # C 0.53916412 0.3512951 0.7270331 0.09585330 # B -1.07675116 -1.3093989 -0.8441035 0.11869999 # C2 -0.06079748 -0.4163135 0.2947186 0.18138908 # AB.prod -0.55733252 -0.7573943 -0.3572708 0.10207419 # AB.diff -0.59996161 -1.0020641 -0.1978591 0.20515811 # # $time # user system elapsed # 2.44 0.22 1.03 ############################################################################## ############################################################################## # Two-level mediation model # Data was generated with 50 subjects. # The correlation between model errors in the single level is 0.5. # We comment out our examples due to the computation time. data(env.two) data.TL<-get("data2",env.two) ## Example 1: Correlation is unknown and to be estimated. # Assume errors in the coefficients regression models are independent. # Add an interval constraint on the variance components. # "HL" method # macc(data.TL,model.type="twolevel",method="HL") # $delta # [1] 0.5066551 # $Coefficients # Estimate # A 0.51714224 # C 0.54392056 # B -1.05048406 # C2 -0.02924135 # AB.prod -0.54324968 # AB.diff -0.57316190 # # $time # user system elapsed # 3.07 0.00 3.07 # "TS" method # macc(data.TL,model.type="twolevel",method="TS") # $delta # [1] 0.4481611 # $Coefficients # Estimate # A 0.52013697 # C 0.47945755 # B -0.90252718 # C2 -0.02924135 # AB.prod -0.46943775 # AB.diff -0.50869890 # # $time # user system elapsed # 1.60 0.00 1.59 ## Example 2: Given the correlation is 0.5. # Assume random effects are independent. # Add an interval constraint on the variance components. 
# "HL" method macc(data.TL,model.type="twolevel",method="HL",delta=0.5) # $delta # [1] 0.5 # $Coefficients # Estimate # A 0.51718063 # C 0.53543300 # B -1.03336668 # C2 -0.02924135 # AB.prod -0.53443723 # AB.diff -0.56467434 # # $time # user system elapsed # 0.21 0.00 0.20 ############################################################################## sim.data.multi Generate two/three-level simulation data Description This function generates a two/three-level dataset with given parameters. Usage sim.data.multi(Z.list, N, K = 1, Theta, Sigma, Psi = diag(rep(1, 3)), Lambda = diag(rep(1, 3))) Arguments Z.list a list of data. When K = 1 (a two-level dataset), each list is a vector containing the treatment/exposure assignment of the trials for each subject; When K > 1 (a three-level dataset), each list is a list of length K, and each contains a vector of treatment/exposure assignment. N an integer, indicates the number of subjects. K an integer, indicates the number of sessions of each subject. Theta a 2 by 2 matrix, containing the population level model coefficients. Sigma a 2 by 2 matrix, is the covariance matrix of the model errors in the single-level model. Psi the covariance matrix of the random effects in the mixed effects model of the model coefficients. This is used only when K > 1. Default is a 3-dimensional identity matrix. Lambda the covariance matrix of the model errors in the mixed effects model if K > 1 or the linear model if K = 1 of the model coefficients. Details When K > 1 (three-level data), for the nik trials in each session of each subject, the single level mediation model is Mik = Zik Aik + E1ik , Rik = Zik Cik + Mik Bik + E2ik , where Zik , Mik , Rik , E1ik , and E2ik are vectors of length nik . Sigma is the covariance matrix of (E1ik , E2ik ) (for simplicity, Sigma is the same across sessions and subjects). For coefficients Aik , Bik and Cik , we assume a mixed effects model. The random effects are from a trivariate normal distribution with mean zero and covariance Psi; and the model errors are from a trivariate normal distribution with mean zero and covariance Lambda. For the fixed effects A, B and C, the values are specified in Theta with Theta[1,1] = A, Theta[1,2] = C and Theta[2,2] = B. When K = 1 (two-level data), the single-level model coefficients Ai , Bi and Ci are assumed to follow a trivariate linear regression model, where the population level coefficients are specified in Theta and the model errors are from a trivariate normal distribution with mean zero and covariance Lambda. See Section 5.2 of the reference for details. Value data a list of data. When K = 1, each list is a contains a dataframe; when K > 1, each list is a list of length K, and within each list is a dataframe. A the value of As. When K = 1, it is a vector of length N; when K > 1, it is a N by K matrix. B the value of Bs. When K = 1, it is a vector of length N; when K > 1, it is a N by K matrix. C the value of Cs. When K = 1, it is a vector of length N; when K > 1, it is a N by K matrix. type a character indicates the type of the dataset. When K = 1, type = twolevel; when K > 1, type = multilevel Author(s) <NAME>, Brown University, <<EMAIL>>; <NAME>, Brown University, <<EMAIL>> References Zhao, Y., & Luo, X. (2014). Estimating Mediation Effects under Correlated Errors with an Appli- cation to fMRI. arXiv preprint arXiv:1410.7217. 
Examples ################################################### # Generate a two-level dataset # covariance matrix of errors delta<-0.5 Sigma<-matrix(c(1,2*delta,2*delta,4),2,2) # model coefficients A0<-0.5 B0<--1 C0<-0.5 Theta<-matrix(c(A0,0,C0,B0),2,2) # number of subjects, and trials of each set.seed(2000) N<-50 K<-1 n<-matrix(NA,N,K) for(i in 1:N) { n0<-rpois(1,100) n[i,]<-rpois(K,n0) } # treatment assignment list set.seed(100000) Z.list<-list() for(i in 1:N) { Z.list[[i]]<-rbinom(n[i,1],size=1,prob=0.5) } # Lambda rho.AB=rho.AC=rho.BC<-0 Lambda<-matrix(0,3,3) lambda2.alpha=lambda2.beta=lambda2.gamma<-0.5 Lambda[1,2]=Lambda[2,1]<-rho.AB*sqrt(lambda2.alpha*lambda2.beta) Lambda[1,3]=Lambda[3,1]<-rho.AC*sqrt(lambda2.alpha*lambda2.gamma) Lambda[2,3]=Lambda[3,2]<-rho.BC*sqrt(lambda2.beta*lambda2.gamma) diag(Lambda)<-c(lambda2.alpha,lambda2.beta,lambda2.gamma) # Data set.seed(5000) re.dat<-sim.data.multi(Z.list=Z.list,N=N,K=K,Theta=Theta,Sigma=Sigma,Lambda=Lambda) data2<-re.dat$data ################################################### ################################################### # Generate a three-level dataset # covariance matrix of errors delta<-0.5 Sigma<-matrix(c(1,2*delta,2*delta,4),2,2) # model coefficients A0<-0.5 B0<--1 C0<-0.5 Theta<-matrix(c(A0,0,C0,B0),2,2) # number of subjects, and trials of each set.seed(2000) N<-50 K<-4 n<-matrix(NA,N,K) for(i in 1:N) { n0<-rpois(1,100) n[i,]<-rpois(K,n0) } # treatment assignment list set.seed(100000) Z.list<-list() for(i in 1:N) { Z.list[[i]]<-list() for(j in 1:K) { Z.list[[i]][[j]]<-rbinom(n[i,j],size=1,prob=0.5) } } # Psi and Lambda sigma2.alpha=sigma2.beta=sigma2.gamma<-0.5 theta2.alpha=theta2.beta=theta2.gamma<-0.5 Psi<-diag(c(sigma2.alpha,sigma2.beta,sigma2.gamma)) Lambda<-diag(c(theta2.alpha,theta2.beta,theta2.gamma)) # Data set.seed(5000) re.dat<-sim.data.multi(Z.list,N,K,Theta,Sigma,Psi,Lambda) data3<-re.dat$data ################################################### sim.data.single Generate single-level simulation data Description This function generates a single-level dataset with given parameters. Usage sim.data.single(Z, Theta, Sigma) Arguments Z a vector of treatment/exposure assignment. Theta a 2 by 2 matrix, containing the model coefficients. Sigma a 2 by 2 matrix, is the covariance matrix of the model errors. Details The single level mediation model is M = ZA + E1 , R = ZC + M B + E2 . Theta[1,1] = A, Theta[1,2] = C and Theta[2,2] = B; Sigma is the covariance matrix of (E1 , E2 ). Value The function returns a dataframe with variables Z, M and R. Author(s) <NAME>, Brown University, <<EMAIL>>; <NAME>, Brown University, <<EMAIL>> References <NAME>., & <NAME>. (2014). Estimating Mediation Effects under Correlated Errors with an Appli- cation to fMRI. arXiv preprint arXiv:1410.7217. sim.data.single 17 Examples ################################################### # Generate a single-level dataset # covariance matrix of errors delta<-0.5 Sigma<-matrix(c(1,2*delta,2*delta,4),2,2) # model coefficients A0<-0.5 B0<--1 C0<-0.5 Theta<-matrix(c(A0,0,C0,B0),2,2) # number of trials n<-100 # generate a treatment assignment vector set.seed(100) Z<-matrix(rbinom(n,size=1,0.5),n,1) # Data set.seed(5000) data.single<-sim.data.single(Z,Theta,Sigma) ###################################################
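As a supplement to the documented examples, here is a minimal end-to-end sketch that simulates a single-level dataset with sim.data.single() and analyzes it with macc(). The parameter values mirror the settings shown above; object names such as dat.example are illustrative, not part of the package.

```r
# Minimal sketch (not from the package manual): simulate single-level data
# and analyze it with macc(). Parameter values follow the examples above.
library(macc)

delta <- 0.5
Sigma <- matrix(c(1, 2 * delta, 2 * delta, 4), 2, 2)  # covariance of (E1, E2)
Theta <- matrix(c(0.5, 0, 0.5, -1), 2, 2)             # A = 0.5, C = 0.5, B = -1

set.seed(100)
Z.example <- matrix(rbinom(100, size = 1, 0.5), 100, 1)   # treatment assignment
dat.example <- sim.data.single(Z.example, Theta, Sigma)   # data frame with Z, M, R

# Analysis with the correlation fixed at its true value, then a sensitivity
# analysis over a grid of delta values (delta is not identifiable at this level).
fit.example <- macc(dat.example, model.type = "single", delta = 0.5)
fit.example$Coefficients
macc(dat.example, model.type = "single", delta = 0.5, sens.plot = TRUE)
```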
SeqKat
cran
R
Package ‘SeqKat’ October 12, 2022
Type Package
Title Detection of Kataegis
Version 0.0.8
Date 2020-03-09
Author <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description Kataegis is a localized hypermutation occurring when a region is enriched in somatic SNVs. Kataegis can result from multiple cytosine deaminations catalyzed by the AID/APOBEC family of proteins. This package contains functions to detect kataegis from SNVs in BED format. This package reports two scores per kataegic event, a hypermutation score and an APOBEC mediated kataegic score. Yousif, F. et al.; The Origins and Consequences of Localized and Global Somatic Hypermutation; bioRxiv 2018 <doi:10.1101/287839>.
Depends R (>= 2.15.1), foreach, doParallel
Imports Rcpp (>= 0.11.0)
LinkingTo Rcpp
Suggests testthat, doMC, rmarkdown, knitr
License GPL-2
LazyLoad yes
RoxygenNote 6.0.1
VignetteBuilder rmarkdown, knitr
NeedsCompilation yes
Repository CRAN
Date/Publication 2020-03-11 00:40:02 UTC

R topics documented:
combine.table
final.score
get.context
get.exprobntcx
get.nucleotide.chunk.counts
get.pair
get.tn
get.toptn
get.trinucleotide.counts
seqkat
test.kataegis

combine.table Combine Table

Description
Merges overlapped windows to identify the genomic boundaries of kataegic events. This function also assigns a hypermutation score and a kataegic score to the combined windows.

Usage
combine.table(test.table, somatic, mutdistance, segnum, output.name)

Arguments
test.table Data frame of kataegis test scores
somatic Data frame of somatic variants
mutdistance The maximum intermutational distance allowed for SNVs to be grouped in the same kataegic event. Recommended value: 3.2
segnum Minimum mutation count. The minimum number of mutations required within a cluster to be identified as kataegic. Recommended value: 4
output.name Name of the generated output directory.

Author(s)
<NAME> <NAME>

Examples
load( paste0( path.package("SeqKat"), "/extdata/test/somatic.rda" ) );
load( paste0( path.package("SeqKat"), "/extdata/test/final.score.rda" ) );
combine.table( final.score, somatic, 3.2, 4, tempdir() );

final.score Final Score

Description
Assigns the hypermutation score (hm.score) and the kataegic score (k.score)

Usage
final.score(test.table, cutoff, somatic, output.name)

Arguments
test.table Data frame of kataegis test scores
cutoff The minimum hypermutation score used to classify the windows in the sliding binomial test as significant windows. The score is calculated per window as follows: -log10(binomial test p-value). Recommended value: 5
somatic Data frame of somatic variants
output.name Name of the generated output directory.

Author(s)
<NAME> <NAME>

Examples
load( paste0( path.package("SeqKat"), "/extdata/test/somatic.rda" ) );
load( paste0( path.package("SeqKat"), "/extdata/test/test.table.rda" ) );
final.score( test.table, 5, somatic, tempdir() );

get.context Get Context

Description
Gets the 5' and 3' neighboring bases of the mutated base

Usage
get.context(file, start)

Arguments
file Reference files directory
start The position of the mutation

Value
The trinucleotide context.
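To make the notion of a trinucleotide context concrete, here is a small base-R illustration; the sequence string and position are made up for the example, while get.context() itself reads the corresponding bases from the reference file passed to it.

```r
# Illustration only: what a trinucleotide context is.
# 'chrom.seq' and 'pos' are made-up values, not SeqKat objects; get.context()
# extracts the equivalent bases from the reference sequence on disk.
chrom.seq <- "ACGTTGCAATC"
pos <- 5
substr(chrom.seq, pos - 1, pos + 1)   # "TTG": 5' neighbour, mutated base, 3' neighbour
```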
Author(s)
<NAME> <NAME>

Examples
example.ref.dir <- paste0( path.package("SeqKat"), "/extdata/test/ref/" );
get.context(file.path(example.ref.dir, 'chr4.fa'), c(1582933, 1611781))

get.exprobntcx get.exprobntcx

Description
Gets the expected probability for each trinucleotide and the total number of tcx

Usage
get.exprobntcx(somatic, ref.dir, trinucleotide.count.file)

Arguments
somatic Data frame of somatic variants
ref.dir Path to a directory containing the reference genome.
trinucleotide.count.file A tab separated file containing a count of all trinucleotides present in the reference genome. This can be generated with the get.trinucleotide.counts() function in this package.

Author(s)
<NAME> <NAME>

Examples
load( paste0( path.package("SeqKat"), "/extdata/test/somatic.rda" ) );
trinucleotide.count.file <- paste0( path.package("SeqKat"), "/extdata/tn_count.txt" );
example.ref.dir <- paste0( path.package("SeqKat"), "/extdata/test/ref/" );
get.exprobntcx(somatic, example.ref.dir, trinucleotide.count.file)

get.nucleotide.chunk.counts Get Nucleotide Chunk Counts

Description
Obtains counts for all possible trinucleotides within a specified genomic region

Usage
get.nucleotide.chunk.counts(key, chr, upstream = 1, downstream = 1, start = 1, end = -1)

Arguments
key List of specified trinucleotides to count
chr Chromosome
upstream Length upstream to read
downstream Length downstream to read
start Starting position
end Ending position

Author(s)
<NAME>

Examples
example.ref.dir <- paste0( path.package("SeqKat"), "/extdata/test/ref/" );
bases.raw <- c('A','C','G','T','N');
tri.types.raw <- c( outer( c(outer(bases.raw, bases.raw, function(x, y) paste0(x,y))), bases.raw, function(x, y) paste0(x,y)) );
tri.types.raw <- sort(tri.types.raw);
get.nucleotide.chunk.counts( tri.types.raw, file.path(example.ref.dir, 'chr4.fa'), upstream = 1, downstream = 1, start = 1, end = -1 );

get.pair Get Pair

Description
Generates the reverse complement of a nucleotide sequence

Usage
get.pair(x)

Arguments
x A character string of nucleotide bases (A, C, G, T, or N)

Details
Reverses and complements the bases of the input string. Bases must be (A, C, G, T, or N).

Author(s)
<NAME>

Examples
get.pair("GATTACA")

get.tn Get Trinucleotides

Description
Counts the frequencies of each of the 32 trinucleotides in a region

Usage
get.tn(chr, start.bp, end.bp, ref.dir)

Arguments
chr Chromosome
start.bp Starting position
end.bp Ending position
ref.dir Path to a directory containing the reference genome.

Author(s)
<NAME>

Examples
example.ref.dir <- paste0( path.package("SeqKat"), "/extdata/test/ref/" );
get.tn(chr=4, start.bp=1, end.bp=-1, example.ref.dir)

get.toptn Get Top Trinucleotides

Description
Generates a tri-nucleotide summary for each sliding window

Usage
get.toptn(somatic.subset, chr, start.bp, end.bp, ref.dir)

Arguments
somatic.subset Data frame of somatic variants subset for a specific chromosome
chr Chromosome
start.bp Starting position
end.bp Ending position
ref.dir Path to a directory containing the reference genome.

Author(s)
<NAME> <NAME>

Examples
## Not run: get.toptn(somatic.subset, chr, start.bp, end.bp, ref.dir)
## End(Not run)

get.trinucleotide.counts Get Trinucleotide Counts

Description
Aggregates the total counts of each possible trinucleotide.

Usage
get.trinucleotide.counts(ref.dir, ref.name, output.dir)

Arguments
ref.dir Path to a directory containing the reference genome.
ref.name Name of the reference genome being used (e.g. hg19, GRCh38, etc.)
output.dir Path to a directory where output will be created.
Author(s) <NAME> <NAME> Examples ## Not run: get.trinucleotide.counts(ref.dir, "hg19", tempdir()); ## End(Not run) seqkat SeqKat Description Kataegis detection from SNV BED files Usage seqkat(sigcutoff = 5, mutdistance = 3.2, segnum = 4, ref.dir = NULL, bed.file = "./", output.dir = "./", chromosome = "all", chromosome.length.file = NULL, trinucleotide.count.file = NULL) Arguments sigcutoff The minimum hypermutation score used to classify the windows in the sliding binomial test as significant windows. The score is calculated per window as follows: -log10(binomial test p-value). Recommended value: 5 mutdistance The maximum intermutational distance allowed for SNVs to be grouped in the same kataegic event. Recommended value: 3.2 segnum Minimum mutation count. The minimum number of mutations required within a cluster to be identified as kataegic. Recommended value: 4 ref.dir Path to a directory containing the reference genome. Each chromosome should have its own .fa file and chromosomes X and Y are named as chr23 and chr24. The fasta files should contain no header bed.file Path to the SNV BED file. The BED file should contain the following informa- tion: Chromosome, Position, Reference allele, Alternate allele output.dir Path to a directory where output will be created. chromosome The chromosome to be analysed. This can be (1, 2, ..., 23, 24) or "all" to run sequentially on all chromosomes. chromosome.length.file A tab separated file containing the lengths of all chromosomes in the reference genome. trinucleotide.count.file A tab seprarated file containing a count of all trinucleotides present in the refer- ence genome. This can be generated with the get.trinucleotide.counts() function in this package. Details The default paramters in SeqKat have been optimized using Alexanrov’s "Signatures of mutational processes in human cancer" dataset. SeqKat accepts a BED file and outputs the results in TXT format. A file per chromosome is generated if a kataegic event is detected, otherwise no file is generated. SeqKat reports two scores per kataegic event, a hypermutation score and an APOBEC mediated kataegic score. Author(s) <NAME> <NAME> <NAME> Examples example.bed.file <- paste0( path.package("SeqKat"), "/extdata/test/PD4120a-chr4-1-2000000_test_snvs.bed" ); example.ref.dir <- paste0( path.package("SeqKat"), "/extdata/test/ref/" ); example.chromosome.length.file <- paste0( path.package("SeqKat"), "/extdata/test/length_hg19_chr_test.txt" ); seqkat( 5, 3.2, 2, bed.file = example.bed.file, output.dir = tempdir(), chromosome = "4", ref.dir = example.ref.dir, chromosome.length.file = example.chromosome.length.file ); test.kataegis Test Kataegis Description Performs exact binomial test to test the deviation of the 32 tri-nucleotides counts from expected Usage test.kataegis(chromosome.num, somatic, units, exprobntcx, output.name, ref.dir, chromosome.length.file) Arguments chromosome.num Chromosome somatic Data frame of somatic variants units Base window size exprobntcx Expected probability for each trinucleotide and total number of tcx output.name Name of the generated output directory. ref.dir Path to a directory containing the reference genome. chromosome.length.file A tab separated file containing the lengths of all chromosomes in the reference genome. 
Author(s)
<NAME>

Examples
load( paste0( path.package("SeqKat"), "/extdata/test/somatic.rda" ) );
load( paste0( path.package("SeqKat"), "/extdata/test/exprobntcx.rda" ) );
example.chromosome.length.file <- paste0( path.package("SeqKat"), "/extdata/test/length_hg19_chr_test.txt" );
example.ref.dir <- paste0( path.package("SeqKat"), "/extdata/test/ref/" );
test.kataegis( 4, somatic, 2, exprobntcx, tempdir(), example.ref.dir, example.chromosome.length.file );
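The hypermutation score referred to throughout this manual is described as -log10 of a sliding-window binomial test p-value. A minimal base-R sketch of that calculation follows; the window counts, the expected probability, and the one-sided alternative are hypothetical choices for illustration, not values or settings produced by SeqKat itself.

```r
# Illustration of the hypermutation score: -log10(binomial test p-value).
# n.window, n.total and p.expected are hypothetical numbers, and the one-sided
# alternative is an assumption made for this sketch, not SeqKat internals.
n.window   <- 12      # SNVs observed in the window
n.total    <- 400     # SNVs on the chromosome
p.expected <- 0.005   # expected fraction of SNVs falling in a window of this size

p.value  <- binom.test(n.window, n.total, p = p.expected,
                       alternative = "greater")$p.value
hm.score <- -log10(p.value)

# With the recommended cutoff (sigcutoff = 5), a window is treated as
# significant when hm.score exceeds 5, i.e. p.value < 1e-5.
hm.score > 5
```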
SE
cran
R
Package ‘SE.EQ’ October 12, 2022 Type Package Title SE-Test for Equivalence Version 1.0 Date 2020-10-06 Author <NAME> Maintainer <NAME> <<EMAIL>> Description Implements the SE-test for equivalence according to Hoffelder et al. (2015) <DOI:10.1080/10543406.2014.920344>. The SE-test for equivalence is a multivariate two-sample equivalence test. Distance measure of the test is the sum of standardized differences between the expected values or in other words: the sum of effect sizes (SE) of all components of the two multivariate samples. The test is an asymptotically valid test for normally distributed data (see Hoffelder et al.,2015). The function SE.EQ() implements the SE-test for equivalence according to Hoffelder et al. (2015). The function SE.EQ.dissolution.profiles() implements a variant of the SE-test for equivalence for similarity analyses of dissolution profiles as mentioned in Suarez-Sharp et al.(2020) <DOI:10.1208/s12248-020-00458-9>). The equivalence margin used in SE.EQ.dissolution.profiles() is analogically defined as for the T2EQ approach according to Hoffelder (2019) <DOI:10.1002/bimj.201700257>) by means of a systematic shift in location of 10 [\% of label claim] of both dissolution profile populations. SE.EQ.dissolution.profiles() checks whether the weighted mean of the differences of the expected values of both dissolution profile populations is statistically significantly smaller than 10 [\% of label claim]. The weights are built up by the inverse variances. Imports MASS License GPL-3 NeedsCompilation no Depends R (>= 3.5.0) Repository CRAN Date/Publication 2020-10-13 14:10:05 UTC R topics documented: SE.EQ-packag... 2 ex_data_JoB... 6 SE.E... 7 SE.EQ.dissolution.profile... 8 SE.EQ-package SE-Test for Equivalence Description Implements the SE-test for equivalence according to Hoffelder et al. (2015) <DOI:10.1080/10543406.2014.920344>. The SE-test for equivalence is a multivariate two-sample equivalence test. Distance measure of the test is the sum of standardized differences between the expected values or in other words: the sum of effect sizes (SE) of all components of the two multivariate samples. The test is an asymptotically valid test for normally distributed data (see Hoffelder et al.,2015). The function SE.EQ() implements the SE-test for equivalence according to Hoffelder et al. (2015). The function SE.EQ.dissolution.profiles() implements a variant of the SE-test for equivalence for similarity anal- yses of dissolution profiles as mentioned in Suarez-Sharp et al.(2020) <DOI:10.1208/s12248-020- 00458-9>). The equivalence margin used in SE.EQ.dissolution.profiles() is analogically defined as for the T2EQ approach according to Hoffelder (2019) <DOI:10.1002/bimj.201700257>) by means of a systematic shift in location of 10 [% of label claim] of both dissolution profile populations. SE.EQ.dissolution.profiles() checks whether the weighted mean of the differences of the expected values of both dissolution profile populations is statistically significantly smaller than 10 [% of label claim]. The weights are built up by the inverse variances. Details The DESCRIPTION file: Package: SE.EQ Type: Package Title: SE-Test for Equivalence Version: 1.0 Date: 2020-10-06 Author: <NAME> Maintainer: <NAME> <<EMAIL>> Description: Implements the SE-test for equivalence according to Hoffelder et al. 
(2015) <DOI:10.1080/10543406.2014.92 Imports: MASS License: GPL-3 Index of help topics: SE.EQ The SE-test for equivalence SE.EQ-package SE-Test for Equivalence SE.EQ.dissolution.profiles The SE-test for equivalence for dissolution profile similarity analyses ex_data_JoBS Example dataset from Hoffelder et al. (2015) Author(s) <NAME> Maintainer: <NAME> <<EMAIL>> References EMA (2010). Guidance on the Investigation of Bioequivalence. European Medicines Agency, CHMP, London. Doc. Ref.: CPMP/EWP/QWP/1401/98 Rev. 1/ Corr **. URL: https://www. ema.europa.eu/en/documents/scientific-guideline/guideline-investigation-bioequivalence-rev1_ en.pdf FDA (1997). Guidance for Industry: Dissolution Testing of Immediate Release Solid Oral Dosage Forms. Food and Drug Administration FDA, CDER, Rockville. URL: https://www.fda.gov/ media/70936/download <NAME>., <NAME>., <NAME>. (2015). Multivariate Equivalence Tests for Use in Phar- maceutical Development. Journal of Biopharmaceutical Statistics, 25:3, 417-437. URL: http: //dx.doi.org/10.1080/10543406.2014.920344 Hoffelder, T. (2019) Equivalence analyses of dissolution profiles with the Mahalanobis distance. Biometrical Journal, 61:5, 1120-1137. URL: https://doi.org/10.1002/bimj.201700257 <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2020). In Vitro Dissolution Profiles Similarity Assessment in Support of Drug Product Quality: What, How, When - Workshop Summary Report. The AAPS Journal, 22:74. URL: http://dx. doi.org/10.1208/s12248-020-00458-9 Examples # A reproduction of the three-dimensional SE example evaluation # in Hoffelder et al. (2015) can be done with the following code: data(ex_data_JoBS) REF_JoBS <- cbind(ex_data_JoBS[ which(ex_data_JoBS$Group=='REF'), ] [c("Diss_15_min","Diss_20_min","Diss_25_min")]) TEST_JoBS <- cbind(ex_data_JoBS[ which(ex_data_JoBS$Group=='TEST'), ] [c("Diss_15_min","Diss_20_min","Diss_25_min")]) equivalence_margin_SE_JoBS <- 0.74^2 test_SE_JoBS <- SE.EQ(X=REF_JoBS , Y=TEST_JoBS , eq_margin=equivalence_margin_SE_JoBS , print.results = TRUE) # Apart from simulation errors, a recalculation of the SE results # of some parts (normal distribution only) of the simulation study in # Hoffelder et al. (2015) can be done with the following code. Please note that # the simulation takes approximately 20 minutes for 50.000 simulation # runs (number_of_simu_runs <- 50000). To shorten calculation time for # test users, number_of_simu_runs is set to 100 here and can/should be adapted. # In the result of the simulation the variable empirical.size.se presents the # simulated size obtained by function \code{SE.EQ()} whereas variable # empirical.size.se.disso shows the # simulated size obtained by function \code{SE.EQ.dissolution.profiles()}. # A detailed analysis of the operating characteristics of the SE variant # implemented in \code{SE.EQ.dissolution.profiles()} is the content of # a future paper. 
library(MASS) number_of_simu_runs <- 100 set.seed(2020) mu1 <- c(41,76,97) mu2 <- mu1 - c(10,10,10) SIGMA_1 <- matrix(data = c(537.4 , 323.8 , 91.8 , 323.8 , 207.5 , 61.7 , 91.8 , 61.7 , 26.1) ,ncol = 3) SIGMA_2 <- matrix(data = c(324.1 , 233.6 , 24.5 , 233.6 , 263.5 , 61.4 , 24.5 , 61.4 , 32.5) ,ncol = 3) SIGMA <- matrix(data = c(430.7 , 278.7 , 58.1 , 278.7 , 235.5 , 61.6 , 58.1 , 61.6 , 29.3) ,ncol = 3) SIMULATION_SIZE_SE <- function(disttype , Hom , Var , mu_1 , mu_2 , n_per_group , n_simus ) { n_success_SE <- 0 n_success_SE_disso <- 0 if ( Hom == "Yes" ) { COVMAT_1 <- SIGMA COVMAT_2 <- SIGMA } else { COVMAT_1 <- SIGMA_1 COVMAT_2 <- SIGMA_2 } if ( Var == "Low" ) { COVMAT_1 <- COVMAT_1 / 4 COVMAT_2 <- COVMAT_2 / 4 } d <- ncol(COVMAT_1) Mean_diff <- mu_1 - mu_2 # Difference of both exp. values vars_X <- diag(COVMAT_1) # variances of first sample vars_Y <- diag(COVMAT_2) # variances of second sample dist_SE <- sum( (Mean_diff * Mean_diff) / (0.5 * (vars_X + vars_Y) ) ) # true SE distance and equivalence margin for SE.EQ if ( n_per_group == 10 ) { cat("Expected value sample 1:",mu_1,"\n", "Expected value sample 2:",mu_2,"\n", "Covariance matrix sample 1:",COVMAT_1,"\n", "Covariance matrix sample 2:",COVMAT_2,"\n", "EM_SE:",dist_SE,"\n") } for (i in 1:n_simus) { if ( disttype == "Normal" ) { REF <- mvrnorm(n = n_per_group, mu=mu_1, Sigma=COVMAT_1) TEST<- mvrnorm(n = n_per_group, mu=mu_2, Sigma=COVMAT_2) } n_success_SE_disso <- n_success_SE_disso + SE.EQ.dissolution.profiles( X = REF , Y = TEST , print.results = FALSE )$testresult.num n_success_SE <- n_success_SE + SE.EQ( X=REF , Y=TEST , eq_margin = dist_SE , print.results = FALSE )$testresult.num } empirical_succ_prob_SE <- n_success_SE / n_simus empirical_succ_prob_SE_disso <- n_success_SE_disso / n_simus simuresults <- data.frame(dist = disttype , Hom = Hom , Var = Var , dimension = d , em_se = dist_SE , sample.size = n_per_group , empirical.size.se = empirical_succ_prob_SE , empirical.size.se.disso = empirical_succ_prob_SE_disso) } SIMULATION_LOOP_SAMPLE_SIZE <- function(disttype , Hom , Var , mu_1 , mu_2 , n_simus ) { run_10 <- SIMULATION_SIZE_SE(disttype = disttype , Hom = Hom , Var = Var , mu_1 = mu_1 , mu_2 = mu_2 , n_per_group = 10 , n_simus = n_simus) run_30 <- SIMULATION_SIZE_SE(disttype = disttype , Hom = Hom , Var = Var , mu_1 = mu_1 , mu_2 = mu_2 , n_per_group = 30 , n_simus = n_simus) run_50 <- SIMULATION_SIZE_SE(disttype = disttype , Hom = Hom , Var = Var , mu_1 = mu_1 , mu_2 = mu_2 , n_per_group = 50 , n_simus = n_simus) run_100 <- SIMULATION_SIZE_SE(disttype = disttype , Hom = Hom , Var = Var , mu_1 = mu_1 , mu_2 = mu_2 , n_per_group = 100 , n_simus = n_simus) RESULT_MATRIX <- rbind(run_10 , run_30 , run_50 , run_100) RESULT_MATRIX } simu_1 <- SIMULATION_LOOP_SAMPLE_SIZE(disttype = "Normal", Hom = "Yes" , Var = "High" , mu_1 = mu1 , mu_2 = mu2 , n_simus = number_of_simu_runs) simu_2 <- SIMULATION_LOOP_SAMPLE_SIZE(disttype = "Normal", Hom = "Yes" , Var = "Low" , mu_1 = mu1 , mu_2 = mu2 , n_simus = number_of_simu_runs) simu_3 <- SIMULATION_LOOP_SAMPLE_SIZE(disttype = "Normal", Hom = "No" , Var = "High" , mu_1 = mu1 , mu_2 = mu2 , n_simus = number_of_simu_runs) simu_4 <- SIMULATION_LOOP_SAMPLE_SIZE(disttype = "Normal", Hom = "No" , Var = "Low" , mu_1 = mu1 , mu_2 = mu2 , n_simus = number_of_simu_runs) FINAL_RESULT <- rbind(simu_1 , simu_2 , simu_3 , simu_4) cat("****** Simu results n_simu_runs: ",number_of_simu_runs," ***** \n") FINAL_RESULT ex_data_JoBS Example dataset from Hoffelder et al. 
(2015) Description Multivariate example dataset of dissolution profiles. Dataset consists of two three-dimensional samples. The names of the three variables are "Diss_15_min","Diss_20_min" and "Diss_25_min". Variable "Group" discriminates between first sample (Group == "REF") and second sample (Group == "Test"). Sample size is 12 per group. Usage data("ex_data_JoBS") Format A data frame with 24 observations on the following 4 variables. Group a factor with levels REF TEST Diss_15_min a numeric vector Diss_20_min a numeric vector Diss_25_min a numeric vector Details Example dataset from Hoffelder et al. (2015). Source <NAME>., <NAME>., <NAME>. (2015), "Multivariate Equivalence Tests for Use in Pharmaceu- tical Development", Journal of Biopharmaceutical Statistics, 25:3, 417-437. References URL: http://dx.doi.org/10.1080/10543406.2014.920344 Examples data(ex_data_JoBS) SE.EQ The SE-test for equivalence Description The function SE.EQ() implements the SE-test for equivalence according to Hoffelder et al. (2015). It is a multivariate two-sample equivalence procedure. Distance measure of the test is the sum of standardized differences between the expected values or in other words: the sum of effect sizes of all components of the two multivariate samples. Usage SE.EQ(X, Y, eq_margin, alpha = 0.05, print.results = TRUE) Arguments X numeric data matrix of the first sample (REF). The rows of X contain the individ- ual observations of the REF sample, the columns contain the variables/components of the multivariate sample. Y numeric data matrix of the second sample (TEST). The rows of Y contain the in- dividual observations of the TEST sample, the columns contain the variables/components of the multivariate sample. eq_margin numeric (>0). The equivalence margin of the test. alpha numeric (0<alpha<1). The significance level of the SE-test for equivalence. Usually set to 0.05 which is the default. print.results logical; if TRUE (default) summary statistics and test results are printed in the output. If FALSE no output is created Details This function implements the SE-test for equivalence. Distance measure of the test is the sum of standardized differences between the expected values or in other words: the sum of effect sizes of all components of the two multivariate samples. The test is an asymptotically valid test for normally distributed data (see Hoffelder et al.,2015). Value a data frame; three columns containing the results of the test p.value numeric; the p-value of the SE test for equivalence testresult.num numeric; 0 (null hypothesis of nonequivalence not rejected) or 1 (null hypothesis of nonequivalence rejected, decision in favor of equivalence) testresult.text character; test result of the test in text mode Author(s) <NAME> <thomas.hoffelder at boehringer-ingelheim.com> References <NAME>., <NAME>., <NAME>. (2015). Multivariate Equivalence Tests for Use in Phar- maceutical Development. Journal of Biopharmaceutical Statistics, 25:3, 417-437. URL: http: //dx.doi.org/10.1080/10543406.2014.920344 Examples # A reproduction of the three-dimensional SE example evaluation # in Hoffelder et al. 
(2015) can be done with the following code: data(ex_data_JoBS) REF_JoBS <- cbind(ex_data_JoBS[ which(ex_data_JoBS$Group=='REF'), ] [c("Diss_15_min","Diss_20_min","Diss_25_min")]) TEST_JoBS <- cbind(ex_data_JoBS[ which(ex_data_JoBS$Group=='TEST'), ] [c("Diss_15_min","Diss_20_min","Diss_25_min")]) equivalence_margin_SE_JoBS <- 0.74^2 test_SE_JoBS <- SE.EQ(X=REF_JoBS , Y=TEST_JoBS , eq_margin=equivalence_margin_SE_JoBS , print.results = TRUE) SE.EQ.dissolution.profiles The SE-test for equivalence for dissolution profile similarity analyses Description The function SE.EQ.dissolution.profiles() implements a variant of the SE-test for equivalence with a concrete equivalence margin for analyses of dissolution profiles. It is a multivariate two- sample equivalence procedure. Distance measure of the test is the sum of standardized differences between the expected values or in other words: the sum of effect sizes of all components of the two multivariate samples. Usage SE.EQ.dissolution.profiles(X, Y, alpha = 0.05, print.results = TRUE) Arguments X numeric data matrix of the first sample (REF). The rows of X contain the individ- ual observations of the REF sample, the columns contain the variables/components of the multivariate sample. Y numeric data matrix of the second sample (TEST). The rows of Y contain the in- dividual observations of the TEST sample, the columns contain the variables/components of the multivariate sample. alpha numeric (0<alpha<1). The significance level of the SE-test for equivalence. Usually set to 0.05 which is the default. print.results logical; if TRUE (default) summary statistics and test results are printed in the output. If FALSE no output is created Details The function SE.EQ.dissolution.profiles() implements a variant of the SE-test for equiv- alence for similarity analyses of dissolution profiles as mentioned in Suarez-Sharp et al.(2020) <DOI:10.1208/s12248-020-00458-9>). The equivalence margin is analogically defined as for the T2EQ approach according to Hoffelder (2019) <DOI:10.1002/bimj.201700257>) by means of a sys- tematic shift in location of 10 [% of label claim] of both dissolution profile populations. SE.EQ.dissolution.profiles() checks whether the weighted mean of the differences between the expected values of both dissolu- tion profile populations is statistically significantly smaller than 10 [% of label claim]. The weights are built up by the inverse variances. The current regulatory standard approach for comparing dissolution profiles is the similarity factor f2 (see FDA, 1997, EMA, 2010, among others) with which the type I error cannot be controlled. According to EMA (2010) "similarity acceptance limits should be pre-defined and justified and not be greater than a 10% difference". The functions • SE.EQ.dissolution.profiles • EDNE.EQ.dissolution.profiles • T2EQ.dissolution.profiles.hoffelder and f2 have in common that they all check wether a kind of average difference between the expected values is smaller than 10 [% of label claim] (see Suarez-Sharp et al., 2020). Thus, all three methods • SE.EQ.dissolution.profiles • EDNE.EQ.dissolution.profiles • T2EQ.dissolution.profiles.hoffelder are compliant with current regulatory requirements. In contrast to the standard approach f2 they all allow (at least approximate) type I error control. 
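To make the distance measure concrete, the sketch below computes the SE distance (the sum of squared standardized mean differences) for the homogeneous-covariance simulation setting shown in the Examples, i.e. a systematic 10 [% of label claim] shift at each time point. It mirrors the dist_SE computation in the example code and is purely illustrative.

```r
# SE distance for a 10 [% of label claim] shift at each of the three time points,
# using the diagonal of SIGMA from the simulation settings in the Examples.
mean.diff <- c(10, 10, 10)               # mu1 - mu2
vars.x    <- c(430.7, 235.5, 29.3)       # diag(SIGMA), first population
vars.y    <- c(430.7, 235.5, 29.3)       # diag(SIGMA), second population
dist.SE   <- sum(mean.diff^2 / (0.5 * (vars.x + vars.y)))
dist.SE                                  # approximately 4.07
```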
Value
a data frame; three columns containing the results of the test:
p.value numeric; the p-value of the SE-test for equivalence
testresult.num numeric; 0 (null hypothesis of nonequivalence not rejected) or 1 (null hypothesis of nonequivalence rejected, decision in favor of equivalence)
testresult.text character; the test result in text form

Author(s)
<NAME> <thomas.hoffelder at boehringer-ingelheim.com>

References
EMA (2010). Guidance on the Investigation of Bioequivalence. European Medicines Agency, CHMP, London. Doc. Ref.: CPMP/EWP/QWP/1401/98 Rev. 1/ Corr **. URL: https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-investigation-bioequivalence-rev1_en.pdf
FDA (1997). Guidance for Industry: Dissolution Testing of Immediate Release Solid Oral Dosage Forms. Food and Drug Administration FDA, CDER, Rockville. URL: https://www.fda.gov/media/70936/download
<NAME>., <NAME>., <NAME>. (2015). Multivariate Equivalence Tests for Use in Pharmaceutical Development. Journal of Biopharmaceutical Statistics, 25:3, 417-437. URL: http://dx.doi.org/10.1080/10543406.2014.920344
<NAME>. (2019). Equivalence analyses of dissolution profiles with the Mahalanobis distance. Biometrical Journal, 61:5, 1120-1137. URL: https://doi.org/10.1002/bimj.201700257
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2020). In Vitro Dissolution Profiles Similarity Assessment in Support of Drug Product Quality: What, How, When - Workshop Summary Report. The AAPS Journal, 22:74. URL: http://dx.doi.org/10.1208/s12248-020-00458-9

Examples
# Apart from simulation errors, a recalculation of the SE results
# of some parts (normal distribution only) of the simulation study in
# Hoffelder et al. (2015) can be done with the following code. Please note that
# the simulation takes approximately 20 minutes for 50,000 simulation
# runs (number_of_simu_runs <- 50000). To shorten calculation time for
# test users, number_of_simu_runs is set to 100 here and can/should be adapted.
# In the result of the simulation the variable empirical.size.se presents the
# simulated size obtained by function \code{SE.EQ()} whereas variable
# empirical.size.se.disso shows the simulated size obtained by function
# \code{SE.EQ.dissolution.profiles()}. A detailed analysis of the operating
# characteristics of the SE variant implemented in
# \code{SE.EQ.dissolution.profiles()} is the content of a future paper.

library(MASS)
number_of_simu_runs <- 100
set.seed(2020)
mu1 <- c(41, 76, 97)
mu2 <- mu1 - c(10, 10, 10)
SIGMA_1 <- matrix(data = c(537.4, 323.8, 91.8,
                           323.8, 207.5, 61.7,
                           91.8,  61.7,  26.1), ncol = 3)
SIGMA_2 <- matrix(data = c(324.1, 233.6, 24.5,
                           233.6, 263.5, 61.4,
                           24.5,  61.4,  32.5), ncol = 3)
SIGMA   <- matrix(data = c(430.7, 278.7, 58.1,
                           278.7, 235.5, 61.6,
                           58.1,  61.6,  29.3), ncol = 3)

SIMULATION_SIZE_SE <- function(disttype, Hom, Var, mu_1, mu_2, n_per_group, n_simus) {
  n_success_SE <- 0
  n_success_SE_disso <- 0
  if (Hom == "Yes") {
    COVMAT_1 <- SIGMA
    COVMAT_2 <- SIGMA
  } else {
    COVMAT_1 <- SIGMA_1
    COVMAT_2 <- SIGMA_2
  }
  if (Var == "Low") {
    COVMAT_1 <- COVMAT_1 / 4
    COVMAT_2 <- COVMAT_2 / 4
  }
  d <- ncol(COVMAT_1)
  Mean_diff <- mu_1 - mu_2   # Difference of both exp. values
  vars_X <- diag(COVMAT_1)   # variances of first sample
  vars_Y <- diag(COVMAT_2)   # variances of second sample
  # true SE distance and equivalence margin for SE.EQ
  dist_SE <- sum((Mean_diff * Mean_diff) / (0.5 * (vars_X + vars_Y)))
  if (n_per_group == 10) {
    cat("Expected value sample 1:", mu_1, "\n",
        "Expected value sample 2:", mu_2, "\n",
        "Covariance matrix sample 1:", COVMAT_1, "\n",
        "Covariance matrix sample 2:", COVMAT_2, "\n",
        "EM_SE:", dist_SE, "\n")
  }
  for (i in 1:n_simus) {
    if (disttype == "Normal") {
      REF  <- mvrnorm(n = n_per_group, mu = mu_1, Sigma = COVMAT_1)
      TEST <- mvrnorm(n = n_per_group, mu = mu_2, Sigma = COVMAT_2)
    }
    n_success_SE_disso <- n_success_SE_disso +
      SE.EQ.dissolution.profiles(X = REF, Y = TEST,
                                 print.results = FALSE)$testresult.num
    n_success_SE <- n_success_SE +
      SE.EQ(X = REF, Y = TEST, eq_margin = dist_SE,
            print.results = FALSE)$testresult.num
  }
  empirical_succ_prob_SE <- n_success_SE / n_simus
  empirical_succ_prob_SE_disso <- n_success_SE_disso / n_simus
  simuresults <- data.frame(dist = disttype, Hom = Hom, Var = Var,
                            dimension = d, em_se = dist_SE,
                            sample.size = n_per_group,
                            empirical.size.se = empirical_succ_prob_SE,
                            empirical.size.se.disso = empirical_succ_prob_SE_disso)
}

SIMULATION_LOOP_SAMPLE_SIZE <- function(disttype, Hom, Var, mu_1, mu_2, n_simus) {
  run_10  <- SIMULATION_SIZE_SE(disttype = disttype, Hom = Hom, Var = Var,
                                mu_1 = mu_1, mu_2 = mu_2,
                                n_per_group = 10, n_simus = n_simus)
  run_30  <- SIMULATION_SIZE_SE(disttype = disttype, Hom = Hom, Var = Var,
                                mu_1 = mu_1, mu_2 = mu_2,
                                n_per_group = 30, n_simus = n_simus)
  run_50  <- SIMULATION_SIZE_SE(disttype = disttype, Hom = Hom, Var = Var,
                                mu_1 = mu_1, mu_2 = mu_2,
                                n_per_group = 50, n_simus = n_simus)
  run_100 <- SIMULATION_SIZE_SE(disttype = disttype, Hom = Hom, Var = Var,
                                mu_1 = mu_1, mu_2 = mu_2,
                                n_per_group = 100, n_simus = n_simus)
  RESULT_MATRIX <- rbind(run_10, run_30, run_50, run_100)
  RESULT_MATRIX
}

simu_1 <- SIMULATION_LOOP_SAMPLE_SIZE(disttype = "Normal", Hom = "Yes", Var = "High",
                                      mu_1 = mu1, mu_2 = mu2,
                                      n_simus = number_of_simu_runs)
simu_2 <- SIMULATION_LOOP_SAMPLE_SIZE(disttype = "Normal", Hom = "Yes", Var = "Low",
                                      mu_1 = mu1, mu_2 = mu2,
                                      n_simus = number_of_simu_runs)
simu_3 <- SIMULATION_LOOP_SAMPLE_SIZE(disttype = "Normal", Hom = "No", Var = "High",
                                      mu_1 = mu1, mu_2 = mu2,
                                      n_simus = number_of_simu_runs)
simu_4 <- SIMULATION_LOOP_SAMPLE_SIZE(disttype = "Normal", Hom = "No", Var = "Low",
                                      mu_1 = mu1, mu_2 = mu2,
                                      n_simus = number_of_simu_runs)
FINAL_RESULT <- rbind(simu_1, simu_2, simu_3, simu_4)
cat("****** Simu results n_simu_runs: ", number_of_simu_runs, " ***** \n")
FINAL_RESULT
near-metrics
rust
Rust
Crate near_metrics
===
A fork of the lighthouse_metrics crate used to implement prometheus.

A wrapper around the `prometheus` crate that provides a global, `lazy_static` metrics registry and functions to add and use the following components (more info at Prometheus docs):

* `Histogram`: used with `start_timer()` and `observe_duration()` or the `observe()` method to record durations (e.g., block processing time).
* `IntCounter`: used to represent an ideally ever-growing, never-shrinking integer (e.g., number of block processing requests).
* `IntGauge`: used to represent a varying integer (e.g., number of attestations per block).

### Important

Metric registration will fail if two items have the same `name`. All metrics must have a unique `name`. Because we use a global registry there is no namespace per crate; it's one big global space. See the Prometheus naming best practices when choosing metric names.

### Example

```
use once_cell::sync::Lazy;
use near_metrics::*;

// These metrics are "magically" linked to the global registry defined in `lighthouse_metrics`.
pub static RUN_COUNT: Lazy<IntCounter> = Lazy::new(|| {
    try_create_int_counter("runs_total", "Total number of runs").unwrap()
});
pub static CURRENT_VALUE: Lazy<IntGauge> = Lazy::new(|| {
    try_create_int_gauge("current_value", "The current value").unwrap()
});
pub static RUN_TIME: Lazy<Histogram> = Lazy::new(|| {
    try_create_histogram("run_seconds", "Time taken (measured to high precision)").unwrap()
});

fn main() {
    for i in 0..100 {
        RUN_COUNT.inc();
        let timer = RUN_TIME.start_timer();
        for j in 0..10 {
            CURRENT_VALUE.set(j);
            println!("Howdy partner");
        }
        timer.observe_duration();
    }
    assert_eq!(100, RUN_COUNT.get());
    assert_eq!(9, CURRENT_VALUE.get());
    assert_eq!(100, RUN_TIME.get_sample_count());
    assert!(0.0 < RUN_TIME.get_sample_sum());
}
```

Structs
---
HistogramA `Metric` counts individual observations from an event or sample stream in configurable buckets. Similar to a `Summary`, it also provides a sum of observations and an observation count.
HistogramOptsA struct that bundles the options for creating a `Histogram` metric. It is mandatory to set Name and Help to a non-empty string. All other fields are optional and can safely be left at their zero value.
OptsA struct that bundles the options for creating most `Metric` types.
TextEncoderAn implementation of an `Encoder` that converts a `MetricFamily` proto message into text format.

Traits
---
EncoderAn interface for encoding metric families into an underlying wire protocol.

Functions
---
do_create_int_counter_vecCreates an `IntCounterVec` - if it has trouble registering to Prometheus it will keep appending a number until the name is unique.
gatherCollect all the metrics for reporting.
try_create_counterAttempts to create a `Counter`, returning `Err` if the registry does not accept the counter (potentially due to naming conflict).
try_create_gaugeAttempts to create a `Gauge`, returning `Err` if the registry does not accept the gauge (potentially due to naming conflict).
try_create_gauge_vecAttempts to create a `GaugeVec`, returning `Err` if the registry does not accept the gauge (potentially due to naming conflict).
try_create_histogramAttempts to create a `Histogram`, returning `Err` if the registry does not accept the histogram (potentially due to naming conflict).
try_create_histogram_vecAttempts to create a `HistogramVec`, returning `Err` if the registry does not accept the histogram (potentially due to naming conflict).
try_create_int_counterAttempts to create an `IntCounter`, returning `Err` if the registry does not accept the counter (potentially due to naming conflict).
try_create_int_counter_vecAttempts to create an `IntCounterVec`, returning `Err` if the registry does not accept the counter (potentially due to naming conflict).
try_create_int_gaugeAttempts to create an `IntGauge`, returning `Err` if the registry does not accept the gauge (potentially due to naming conflict).
try_create_int_gauge_vecAttempts to create an `IntGaugeVec`, returning `Err` if the registry does not accept the gauge (potentially due to naming conflict).

Type Definitions
---
CounterA `Metric` represents a single numerical value that only ever goes up.
GaugeA `Metric` represents a single numerical value that can arbitrarily go up and down.
GaugeVecA `Collector` that bundles a set of `Gauge`s that all share the same `Desc`, but have different values for their variable labels. This is used if you want to count the same thing partitioned by various dimensions (e.g. number of operations queued, partitioned by user and operation type).
HistogramVecA `Collector` that bundles a set of Histograms that all share the same `Desc`, but have different values for their variable labels. This is used if you want to count the same thing partitioned by various dimensions (e.g. HTTP request latencies, partitioned by status code and method).
IntCounterThe integer version of `Counter`. Provides better performance if metric values are all positive integers (natural numbers).
IntCounterVecThe integer version of `CounterVec`. Provides better performance if metric values are all positive integers (natural numbers).
IntGaugeThe integer version of `Gauge`. Provides better performance if metric values are all integers.
IntGaugeVecThe integer version of `GaugeVec`. Provides better performance if metric values are all integers.
ResultA specialized Result type for prometheus.
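The crate-level example above sticks to the scalar metric types; for the label-partitioned `*Vec` variants, a usage sketch along the following lines seems plausible (the metric name, help text, label set and `handle_request` helper are illustrative assumptions, not part of the crate):

```
use near_metrics::{try_create_int_counter_vec, IntCounterVec};
use once_cell::sync::Lazy;

// Hypothetical label-partitioned counter; the name and labels are illustrative only.
static HTTP_REQUESTS: Lazy<IntCounterVec> = Lazy::new(|| {
    try_create_int_counter_vec(
        "http_requests_total",
        "Total HTTP requests, partitioned by method",
        &["method"],
    )
    .unwrap()
});

fn handle_request(method: &str) {
    // `with_label_values` comes from the underlying prometheus `MetricVec`.
    HTTP_REQUESTS.with_label_values(&[method]).inc();
}

fn main() {
    handle_request("GET");
    handle_request("POST");
    assert_eq!(1, HTTP_REQUESTS.with_label_values(&["GET"]).get());
}
```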
Struct near_metrics::Histogram === ``` pub struct Histogram { /* private fields */ } ``` A `Metric` counts individual observations from an event or sample stream in configurable buckets. Similar to a `Summary`, it also provides a sum of observations and an observation count. On the Prometheus server, quantiles can be calculated from a `Histogram` using the `histogram_quantile` function in the query language. Note that Histograms, in contrast to Summaries, can be aggregated with the Prometheus query language (see the prometheus documentation for detailed procedures). However, Histograms require the user to pre-define suitable buckets, (see `linear_buckets` and `exponential_buckets` for some helper provided here) and they are in general less accurate. The Observe method of a `Histogram` has a very low performance overhead in comparison with the Observe method of a Summary. Implementations --- source### impl Histogram source#### pub fn with_opts(opts: HistogramOpts) -> Result<Histogram, Error`with_opts` creates a `Histogram` with the `opts` options. source### impl Histogram source#### pub fn observe(&self, v: f64) Add a single observation to the `Histogram`. source#### pub fn start_timer(&self) -> HistogramTimer Return a `HistogramTimer` to track a duration. source#### pub fn observe_closure_duration<F, T>(&self, f: F) -> T where    F: FnOnce() -> T, Observe execution time of a closure, in second. source#### pub fn local(&self) -> LocalHistogram Return a `LocalHistogram` for single thread usage. source#### pub fn get_sample_sum(&self) -> f64 Return accumulated sum of all samples. source#### pub fn get_sample_count(&self) -> u64 Return count of all samples. Trait Implementations --- source### impl Clone for Histogram source#### fn clone(&self) -> Histogram Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Collector for Histogram source#### fn desc(&self) -> Vec<&Desc, Global>Notable traits for Vec<u8, A>`impl<A> Write for Vec<u8, A> where    A: Allocator,` Return descriptors for metrics. source#### fn collect(&self) -> Vec<MetricFamily, Global>Notable traits for Vec<u8, A>`impl<A> Write for Vec<u8, A> where    A: Allocator,` Collect metrics. source### impl Debug for Histogram source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. Read more source### impl Metric for Histogram source#### fn metric(&self) -> Metric Return the protocol Metric.
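A minimal usage sketch for the observation methods listed above (the metric name, help text and recorded values are illustrative assumptions):

```
use near_metrics::{try_create_histogram, Histogram};
use once_cell::sync::Lazy;

// Illustrative metric; the name and help text are not part of the crate.
static QUERY_SECONDS: Lazy<Histogram> = Lazy::new(|| {
    try_create_histogram("query_seconds", "Time spent running a query").unwrap()
});

fn run_queries() {
    // Explicit timer: records the elapsed time when `observe_duration` is called.
    let timer = QUERY_SECONDS.start_timer();
    // ... do some work ...
    timer.observe_duration();

    // Time a closure without managing the timer by hand.
    QUERY_SECONDS.observe_closure_duration(|| {
        // ... do some work ...
    });

    // Record a duration (in seconds) measured elsewhere.
    QUERY_SECONDS.observe(0.042);
}

fn main() {
    run_queries();
    assert_eq!(3, QUERY_SECONDS.get_sample_count());
    assert!(QUERY_SECONDS.get_sample_sum() > 0.0);
}
```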
Auto Trait Implementations --- ### impl RefUnwindSafe for Histogram ### impl Send for Histogram ### impl Sync for Histogram ### impl Unpin for Histogram ### impl UnwindSafe for Histogram Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct near_metrics::HistogramOpts === ``` pub struct HistogramOpts { pub common_opts: Opts, pub buckets: Vec<f64, Global>, } ``` A struct that bundles the options for creating a `Histogram` metric. It is mandatory to set Name and Help to a non-empty string. All other fields are optional and can safely be left at their zero value. Fields --- `common_opts: Opts`A container holding various options. `buckets: Vec<f64, Global>`Defines the buckets into which observations are counted. Each element in the slice is the upper inclusive bound of a bucket. The values must be sorted in strictly increasing order. There is no need to add a highest bucket with +Inf bound, it will be added implicitly. The default value is DefBuckets. Implementations --- source### impl HistogramOpts source#### pub fn new<S1, S2>(name: S1, help: S2) -> HistogramOpts where    S1: Into<String>,    S2: Into<String>, Create a `HistogramOpts` with the `name` and `help` arguments. source#### pub fn namespace<S>(self, namespace: S) -> HistogramOpts where    S: Into<String>, `namespace` sets the namespace. source#### pub fn subsystem<S>(self, subsystem: S) -> HistogramOpts where    S: Into<String>, `subsystem` sets the sub system. source#### pub fn const_labels(    self,     const_labels: HashMap<String, String, RandomState>) -> HistogramOpts `const_labels` sets the const labels. source#### pub fn const_label<S1, S2>(self, name: S1, value: S2) -> HistogramOpts where    S1: Into<String>,    S2: Into<String>, `const_label` adds a const label. 
source#### pub fn variable_labels(    self,     variable_labels: Vec<String, Global>) -> HistogramOpts `variable_labels` sets the variable labels. source#### pub fn variable_label<S>(self, name: S) -> HistogramOpts where    S: Into<String>, `variable_label` adds a variable label. source#### pub fn fq_name(&self) -> String `fq_name` returns the fq_name. source#### pub fn buckets(self, buckets: Vec<f64, Global>) -> HistogramOpts `buckets` set the buckets. Trait Implementations --- source### impl Clone for HistogramOpts source#### fn clone(&self) -> HistogramOpts Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for HistogramOpts source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. Read more source### impl Describer for HistogramOpts source#### fn describe(&self) -> Result<Desc, Error`describe` returns a `Desc`. source### impl From<Opts> for HistogramOpts source#### fn from(opts: Opts) -> HistogramOpts Converts to this type from the input type. Auto Trait Implementations --- ### impl RefUnwindSafe for HistogramOpts ### impl Send for HistogramOpts ### impl Sync for HistogramOpts ### impl Unpin for HistogramOpts ### impl UnwindSafe for HistogramOpts Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct near_metrics::Opts === ``` pub struct Opts { pub namespace: String, pub subsystem: String, pub name: String, pub help: String, pub const_labels: HashMap<String, String, RandomState>, pub variable_labels: Vec<String, Global>, } ``` A struct that bundles the options for creating most `Metric` types. 
Fields --- `namespace: String`namespace, subsystem, and name are components of the fully-qualified name of the `Metric` (created by joining these components with “_”). Only Name is mandatory, the others merely help structuring the name. Note that the fully-qualified name of the metric must be a valid Prometheus metric name. `subsystem: String`namespace, subsystem, and name are components of the fully-qualified name of the `Metric` (created by joining these components with “_”). Only Name is mandatory, the others merely help structuring the name. Note that the fully-qualified name of the metric must be a valid Prometheus metric name. `name: String`namespace, subsystem, and name are components of the fully-qualified name of the `Metric` (created by joining these components with “_”). Only Name is mandatory, the others merely help structuring the name. Note that the fully-qualified name of the metric must be a valid Prometheus metric name. `help: String`help provides information about this metric. Mandatory! Metrics with the same fully-qualified name must have the same Help string. `const_labels: HashMap<String, String, RandomState>`const_labels are used to attach fixed labels to this metric. Metrics with the same fully-qualified name must have the same label names in their ConstLabels. Note that in most cases, labels have a value that varies during the lifetime of a process. Those labels are usually managed with a metric vector collector (like CounterVec, GaugeVec). ConstLabels serve only special purposes. One is for the special case where the value of a label does not change during the lifetime of a process, e.g. if the revision of the running binary is put into a label. Another, more advanced purpose is if more than one `Collector` needs to collect Metrics with the same fully-qualified name. In that case, those Metrics must differ in the values of their ConstLabels. See the `Collector` examples. If the value of a label never changes (not even between binaries), that label most likely should not be a label at all (but part of the metric name). `variable_labels: Vec<String, Global>`variable_labels contains names of labels for which the metric maintains variable values. Metrics with the same fully-qualified name must have the same label names in their variable_labels. Note that variable_labels is used in `MetricVec`. To create a single metric must leave it empty. Implementations --- source### impl Opts source#### pub fn new<S1, S2>(name: S1, help: S2) -> Opts where    S1: Into<String>,    S2: Into<String>, `new` creates the Opts with the `name` and `help` arguments. source#### pub fn namespace<S>(self, namespace: S) -> Opts where    S: Into<String>, `namespace` sets the namespace. source#### pub fn subsystem<S>(self, subsystem: S) -> Opts where    S: Into<String>, `subsystem` sets the sub system. source#### pub fn const_labels(    self,     const_labels: HashMap<String, String, RandomState>) -> Opts `const_labels` sets the const labels. source#### pub fn const_label<S1, S2>(self, name: S1, value: S2) -> Opts where    S1: Into<String>,    S2: Into<String>, `const_label` adds a const label. source#### pub fn variable_labels(self, variable_labels: Vec<String, Global>) -> Opts `variable_labels` sets the variable labels. source#### pub fn variable_label<S>(self, name: S) -> Opts where    S: Into<String>, `variable_label` adds a variable label. source#### pub fn fq_name(&self) -> String `fq_name` returns the fq_name. 
Trait Implementations --- source### impl Clone for Opts source#### fn clone(&self) -> Opts Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for Opts source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. Read more source### impl Describer for Opts source#### fn describe(&self) -> Result<Desc, Error`describe` returns a `Desc`. source### impl From<Opts> for HistogramOpts source#### fn from(opts: Opts) -> HistogramOpts Converts to this type from the input type. Auto Trait Implementations --- ### impl RefUnwindSafe for Opts ### impl Send for Opts ### impl Sync for Opts ### impl Unpin for Opts ### impl UnwindSafe for Opts Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct near_metrics::TextEncoder === ``` pub struct TextEncoder; ``` An implementation of an `Encoder` that converts a `MetricFamily` proto message into text format. Implementations --- source### impl TextEncoder source#### pub fn new() -> TextEncoder Create a new text encoder. Trait Implementations --- source### impl Debug for TextEncoder source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. Read more source### impl Default for TextEncoder source#### fn default() -> TextEncoder Returns the “default value” for a type. Read more source### impl Encoder for TextEncoder source#### fn encode<W>(    &self,     metric_families: &[MetricFamily],     writer: &mutW) -> Result<(), Error> where    W: Write, `encode` converts a slice of MetricFamily proto messages into target format and writes the resulting lines to `writer`. It returns the number of bytes written and any error encountered. 
This function does not perform checks on the content of the metric and label names, i.e. invalid metric or label names will result in invalid text format output. Read more source#### fn format_type(&self) -> &str `format_type` returns target format. Auto Trait Implementations --- ### impl RefUnwindSafe for TextEncoder ### impl Send for TextEncoder ### impl Sync for TextEncoder ### impl Unpin for TextEncoder ### impl UnwindSafe for TextEncoder Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Trait near_metrics::Encoder === ``` pub trait Encoder { fn encode<W>(&self, &[MetricFamily], &mutW) -> Result<(), Error    where         W: Write; fn format_type(&self) -> &str; } ``` An interface for encoding metric families into an underlying wire protocol. Required Methods --- source#### fn encode<W>(&self, &[MetricFamily], &mutW) -> Result<(), Error> where    W: Write, `encode` converts a slice of MetricFamily proto messages into target format and writes the resulting lines to `writer`. It returns the number of bytes written and any error encountered. This function does not perform checks on the content of the metric and label names, i.e. invalid metric or label names will result in invalid text format output. source#### fn format_type(&self) -> &str `format_type` returns target format. Implementations on Foreign Types --- source### impl Encoder for ProtobufEncoder source#### fn encode<W>(    &self,     metric_families: &[MetricFamily],     writer: &mutW) -> Result<(), Error> where    W: Write, source#### fn format_type(&self) -> &str Implementors --- source### impl Encoder for TextEncoder Function near_metrics::do_create_int_counter_vec === ``` pub fn do_create_int_counter_vec(     name: &str,     help: &str,     labels: &[&str] ) -> IntCounterVec ``` Creates ‘IntCounterVec’ - if it has trouble registering to Prometheus it will keep appending a number until the name is unique. Function near_metrics::gather === ``` pub fn gather() -> Vec<MetricFamily>Notable traits for Vec<u8, A>`impl<A> Write for Vec<u8, A> where     A: Allocator,` ``` Collect all the metrics for reporting. 
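Together with the `Encoder`/`TextEncoder` items above, `gather()` is what a metrics endpoint would typically call; a small illustrative sketch, where the `render_metrics` helper is an assumption rather than part of the crate:

```
use near_metrics::{gather, Encoder, TextEncoder};

// Illustrative helper (not part of the crate): render the global registry in
// the Prometheus text exposition format, e.g. for a /metrics HTTP endpoint.
fn render_metrics() -> String {
    let mut buffer = Vec::new();
    TextEncoder::new()
        .encode(&gather(), &mut buffer)
        .expect("encoding metrics to the text format should not fail");
    String::from_utf8(buffer).expect("text exposition format is valid UTF-8")
}

fn main() {
    println!("{}", render_metrics());
}
```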
Function near_metrics::try_create_counter
===
```
pub fn try_create_counter(name: &str, help: &str) -> Result<Counter>
```
Attempts to create a `Counter`, returning `Err` if the registry does not accept the counter (potentially due to naming conflict).

Function near_metrics::try_create_gauge
===
```
pub fn try_create_gauge(name: &str, help: &str) -> Result<Gauge>
```
Attempts to create a `Gauge`, returning `Err` if the registry does not accept the gauge (potentially due to naming conflict).

Function near_metrics::try_create_gauge_vec
===
```
pub fn try_create_gauge_vec(
    name: &str,
    help: &str,
    labels: &[&str]
) -> Result<GaugeVec>
```
Attempts to create a `GaugeVec`, returning `Err` if the registry does not accept the gauge (potentially due to naming conflict).

Function near_metrics::try_create_histogram
===
```
pub fn try_create_histogram(name: &str, help: &str) -> Result<Histogram>
```
Attempts to create a `Histogram`, returning `Err` if the registry does not accept the histogram (potentially due to naming conflict).

Function near_metrics::try_create_histogram_vec
===
```
pub fn try_create_histogram_vec(
    name: &str,
    help: &str,
    labels: &[&str],
    buckets: Option<Vec<f64>>
) -> Result<HistogramVec>
```
Attempts to create a `HistogramVec`, returning `Err` if the registry does not accept the histogram (potentially due to naming conflict).

Function near_metrics::try_create_int_counter
===
```
pub fn try_create_int_counter(name: &str, help: &str) -> Result<IntCounter>
```
Attempts to create an `IntCounter`, returning `Err` if the registry does not accept the counter (potentially due to naming conflict).

Function near_metrics::try_create_int_counter_vec
===
```
pub fn try_create_int_counter_vec(
    name: &str,
    help: &str,
    labels: &[&str]
) -> Result<IntCounterVec>
```
Attempts to create an `IntCounterVec`, returning `Err` if the registry does not accept the counter (potentially due to naming conflict).

Function near_metrics::try_create_int_gauge
===
```
pub fn try_create_int_gauge(name: &str, help: &str) -> Result<IntGauge>
```
Attempts to create an `IntGauge`, returning `Err` if the registry does not accept the gauge (potentially due to naming conflict).

Function near_metrics::try_create_int_gauge_vec
===
```
pub fn try_create_int_gauge_vec(
    name: &str,
    help: &str,
    labels: &[&str]
) -> Result<IntGaugeVec>
```
Attempts to create an `IntGaugeVec`, returning `Err` if the registry does not accept the gauge (potentially due to naming conflict).

Type Definition near_metrics::Counter
===
```
pub type Counter = GenericCounter<AtomicF64>;
```
A `Metric` represents a single numerical value that only ever goes up.

Type Definition near_metrics::Gauge
===
```
pub type Gauge = GenericGauge<AtomicF64>;
```
A `Metric` represents a single numerical value that can arbitrarily go up and down.

Type Definition near_metrics::GaugeVec
===
```
pub type GaugeVec = MetricVec<GaugeVecBuilder<AtomicF64>>;
```
A `Collector` that bundles a set of `Gauge`s that all share the same `Desc`, but have different values for their variable labels. This is used if you want to count the same thing partitioned by various dimensions (e.g. number of operations queued, partitioned by user and operation type).

Type Definition near_metrics::HistogramVec
===
```
pub type HistogramVec = MetricVec<HistogramVecBuilder>;
```
A `Collector` that bundles a set of Histograms that all share the same `Desc`, but have different values for their variable labels.
This is used if you want to count the same thing partitioned by various dimensions (e.g. HTTP request latencies, partitioned by status code and method).

Type Definition near_metrics::IntCounter
===
```
pub type IntCounter = GenericCounter<AtomicU64>;
```
The integer version of `Counter`. Provides better performance if metric values are all positive integers (natural numbers).

Type Definition near_metrics::IntCounterVec
===
```
pub type IntCounterVec = MetricVec<CounterVecBuilder<AtomicU64>>;
```
The integer version of `CounterVec`. Provides better performance if metric values are all positive integers (natural numbers).

Type Definition near_metrics::IntGauge
===
```
pub type IntGauge = GenericGauge<AtomicI64>;
```
The integer version of `Gauge`. Provides better performance if metric values are all integers.

Type Definition near_metrics::IntGaugeVec
===
```
pub type IntGaugeVec = MetricVec<GaugeVecBuilder<AtomicI64>>;
```
The integer version of `GaugeVec`. Provides better performance if metric values are all integers.

Type Definition near_metrics::Result
===
```
pub type Result<T> = Result<T, Error>;
```
A specialized Result type for prometheus.
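As with the counter vectors, the integer gauge vector above is normally used through `with_label_values`; a brief sketch under assumed metric and label names:

```
use near_metrics::{try_create_int_gauge_vec, IntGaugeVec};
use once_cell::sync::Lazy;

// Illustrative gauge partitioned by a hypothetical "shard" label.
static QUEUE_LEN: Lazy<IntGaugeVec> = Lazy::new(|| {
    try_create_int_gauge_vec("queue_len", "Queued items per shard", &["shard"]).unwrap()
});

fn main() {
    QUEUE_LEN.with_label_values(&["0"]).set(17);
    QUEUE_LEN.with_label_values(&["1"]).inc();
    assert_eq!(17, QUEUE_LEN.with_label_values(&["0"]).get());
}
```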
shuttle-service
rust
Rust
Enum shuttle_service::error::Error === ``` pub enum Error { Io(Error), Database(String), BuildPanic(String), BindPanic(String), StringInterpolation(FmtError), Custom(CustomError), } ``` Variants --- ### Io(Error) ### Database(String) ### BuildPanic(String) ### BindPanic(String) ### StringInterpolation(FmtError) ### Custom(CustomError) Trait Implementations --- ### impl Debug for Error #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn source(&self) -> Option<&(dyn Error + 'static)The lower-level source of this error, if any. Read more1.0.0 · source#### fn description(&self) -> &str 👎Deprecated since 1.42.0: use the Display impl or to_string() Read more1.0.0 · source#### fn cause(&self) -> Option<&dyn Error👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting#### fn provide<'a>(&'a self, request: &mut Request<'a>) 🔬This is a nightly-only experimental API. (`error_generic_member_access`)Provides type based access to context intended for error reports. #### fn from(source: Error) -> Self Converts to this type from the input type.### impl From<Error> for Error #### fn from(source: CustomError) -> Self Converts to this type from the input type.### impl From<FmtError> for Error #### fn from(source: FmtError) -> Self Converts to this type from the input type.Auto Trait Implementations --- ### impl !RefUnwindSafe for Error ### impl Send for Error ### impl Sync for Error ### impl Unpin for Error ### impl !UnwindSafe for Error Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToString for Twhere T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Module shuttle_service::error === Types representing various errors that can occur in the process of building and deploying a service. 
Enums --- * Error Type Aliases --- * CustomError Struct shuttle_service::DeploymentMetadata === ``` pub struct DeploymentMetadata { pub env: Environment, pub project_name: ProjectName, pub service_name: String, pub storage_path: PathBuf, } ``` Fields --- `env: Environment``project_name: ProjectName``service_name: String`Typically your crate name `storage_path: PathBuf`Path to a folder that persists between deployments Trait Implementations --- ### impl Clone for DeploymentMetadata #### fn clone(&self) -> DeploymentMetadata Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. #### fn deserialize<__D>( __deserializer: __D ) -> Result<DeploymentMetadata, <__D as Deserializer<'de>>::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn serialize<__S>( &self, __serializer: __S ) -> Result<<__S as Serializer>::Ok, <__S as Serializer>::Error>where __S: Serializer, Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for DeploymentMetadata ### impl Send for DeploymentMetadata ### impl Sync for DeploymentMetadata ### impl Unpin for DeploymentMetadata ### impl UnwindSafe for DeploymentMetadata Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. T: for<'de> Deserialize<'de>, Struct shuttle_service::SecretStore === ``` pub struct SecretStore { /* private fields */ } ``` Store that holds all the secrets available to a deployment Implementations --- ### impl SecretStore #### pub fn new(secrets: BTreeMap<String, String, Global>) -> SecretStore #### pub fn get(&self, key: &str) -> Option<StringTrait Implementations --- ### impl Clone for SecretStore #### fn clone(&self) -> SecretStore Returns a copy of the value. 
Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn deserialize<__D>( __deserializer: __D ) -> Result<SecretStore, <__D as Deserializer<'de>>::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### type Item = (String, String) The type of the elements being iterated over.#### type IntoIter = <BTreeMap<String, String, Global> as IntoIterator>::IntoIter Which kind of iterator are we turning this into?#### fn into_iter(self) -> <SecretStore as IntoIterator>::IntoIter Creates an iterator from a value. #### fn serialize<__S>( &self, __serializer: __S ) -> Result<<__S as Serializer>::Ok, <__S as Serializer>::Error>where __S: Serializer, Serialize this value into the given Serde serializer. Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for SecretStore ### impl Send for SecretStore ### impl Sync for SecretStore ### impl Unpin for SecretStore ### impl UnwindSafe for SecretStore Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. T: for<'de> Deserialize<'de>, Enum shuttle_service::Type === ``` pub enum Type { Database(Type), Secrets, StaticFolder, Persist, Turso, Metadata, Custom, } ``` Variants --- ### Database(Type) ### Secrets ### StaticFolder ### Persist ### Turso ### Metadata ### Custom Trait Implementations --- ### impl Clone for Type #### fn clone(&self) -> Type Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. #### fn deserialize<__D>( __deserializer: __D ) -> Result<Type, <__D as Deserializer<'de>>::Error>where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. 
#### type Err = String The associated error which can be returned from parsing.#### fn from_str(s: &str) -> Result<Type, <Type as FromStr>::ErrParses a string `s` to return a value of this type. #### fn eq(&self, other: &Type) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for Type #### fn serialize<__S>( &self, __serializer: __S ) -> Result<<__S as Serializer>::Ok, <__S as Serializer>::Error>where __S: Serializer, Serialize this value into the given Serde serializer. ### impl Eq for Type ### impl StructuralEq for Type ### impl StructuralPartialEq for Type Auto Trait Implementations --- ### impl RefUnwindSafe for Type ### impl Send for Type ### impl Sync for Type ### impl Unpin for Type ### impl UnwindSafe for Type Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. T: for<'de> Deserialize<'de>, Trait shuttle_service::Service === ``` pub trait Service: Send { // Required method fn bind<'async_trait>( self, addr: SocketAddr ) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'async_trait>> where Self: 'async_trait; } ``` The core trait of the shuttle platform. Every crate deployed to shuttle needs to implement this trait. Use the [main][main] macro to expose your implementation to the deployment backend. Required Methods --- #### fn bind<'async_trait>( self, addr: SocketAddr ) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'async_trait>>where Self: 'async_trait, This function is run exactly once on each instance of a deployment. The deployer expects this instance of Service to bind to the passed SocketAddr. Implementors ---
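To make the required signature concrete, a hypothetical implementor might look like the sketch below; the `HelloService` type, the use of `tokio`, and the assumption that the `Io` variant of `shuttle_service::error::Error` wraps `std::io::Error` are all illustrative, not part of the documented API:

```
use std::future::Future;
use std::net::SocketAddr;
use std::pin::Pin;

use shuttle_service::{error::Error, Service};

// Hypothetical implementor used only for illustration; a real service would
// keep serving until shutdown instead of returning immediately.
struct HelloService;

impl Service for HelloService {
    fn bind<'async_trait>(
        self,
        addr: SocketAddr,
    ) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'async_trait>>
    where
        Self: 'async_trait,
    {
        Box::pin(async move {
            // Bind to the address handed over by the deployer (tokio assumed).
            let listener = tokio::net::TcpListener::bind(addr).await.map_err(Error::Io)?;
            println!("listening on {}", listener.local_addr().map_err(Error::Io)?);
            Ok(())
        })
    }
}
```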
DecorateR
cran
R
Package ‘DecorateR’
October 12, 2022

Type Package
Title Fit and Deploy DECORATE Trees
Version 0.1.2
Imports RWeka, RWekajars, rJava, stats
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description DECORATE (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples) builds an ensemble of J48 trees by recursively adding artificial samples of the training data (``Melville, P., & Mooney, R. J. (2005) <DOI:10.1016/j.inffus.2004.04.001>'').
License GPL (>= 2)
Depends R (>= 2.10.0)
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
NeedsCompilation no
Repository CRAN
Date/Publication 2020-11-20 11:20:02 UTC

R topics documented:
DECORAT... 2
predict.DECORAT... 3

DECORATE Binary classification with DECORATE (Melville and Mooney, 2005)

Description
DECORATE (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples) builds an ensemble of J48 trees by recursively adding artificial samples of the training data.

Usage
DECORATE(x, y, C = 15, I = 50, R = 1, verbose = FALSE)

Arguments
x a data frame of predictors (numeric, integer or factor). Character variables should be transformed to factors.
y a vector of response labels. Only {0, 1} is allowed.
C the desired ensemble size. Set to 15 as recommended by Melville and Mooney (2005).
I the maximum number of iterations. Set to 50 as recommended by Melville and Mooney (2005).
R the number of artificially generated examples, expressed as a fraction of the number of training examples. R is set to 1, meaning that the number of artificially created samples is equal to the training set size.
verbose TRUE or FALSE. Should information be printed on the screen?

Value
an object of class DECORATE.

Author(s)
Authors: <NAME>, Maintainer: <<EMAIL>>

References
Melville, P., & <NAME>. (2005). Creating diversity in ensembles using artificial data. Information Fusion, 6(1), 99-111. <doi:10.1016/j.inffus.2004.04.001>

See Also
predict.DECORATE

Examples
data(iris)
y <- as.factor(ifelse(iris$Species[1:100] == "setosa", 0, 1))
x <- iris[1:100, -5]
dec <- DECORATE(x = x, y = y)

predict.DECORATE Predict method for DECORATE objects

Description
Prediction of new data using DECORATE

Usage
## S3 method for class 'DECORATE'
predict(object, newdata, type = "prob", all = FALSE, ...)

Arguments
object an object of the class DECORATE, as created by the function DECORATE.
newdata a data frame containing the same predictors as in the training phase.
type character specifying whether to return the probabilities ('prob') or class ('class'). Default: prob.
all Return the predictions per tree instead of the average (default = FALSE).
... Not used currently.

Value
vector containing the response probabilities.

Examples
data(iris)
y <- as.factor(ifelse(iris$Species[1:100] == "setosa", 0, 1))
x <- iris[1:100, -5]
dec <- DECORATE(x = x, y = y)
predict(object = dec, newdata = x)
@cmsgov/ds-medicare-gov
npm
JavaScript
[Medicare.gov Design System](#medicaregov-design-system) === [>> **View the full documentation site here** <<](https://design.cms.gov/?theme=medicare) The *Medicare.gov Design System* contains shared design and front-end development resources for Medicare.gov applications, and is built on top of the [CMS Design System](https://design.cms.gov/) (CMSDS). As a *child design system*, it inherits base styles, components, and guidance from the CMS Design System, while also adding its own features and customizations. [Usage](#usage) --- `yarn add @cmsgov/ds-medicare-gov` For full documentation on installation and usage in your Medicare.gov product, please refer to [our documentation site](https://design.cms.gov/getting-started/developers/installation/?theme=medicare). [Contributing](#contributing) --- This site-wide design system has a much smaller group of users than the core CMS Design System. It's up to us to make it useful for our apps. It is a place to share code and collaborate across teams. It is our collective source of truth for design. If you want to contribute but need help getting started, shout in the [`#mgov-design-system` Slack channel](https://cmsgov.slack.com/archives/C010T7LE5RC) on the CMS Slack or open up an issue on this repo. [Development](#development) --- See the [root CMSDS README](https://github.com/CMSgov/design-system/blob/HEAD/README.md). [Design assets](#design-assets) --- You can find the Medicare design system Sketch file and related fonts in the [design-assets folder](https://github.com/CMSgov/design-system/blob/HEAD/packages/ds-medicare-gov/design-assets). You can also view the [Medicare InVision Design System Manager design assets](https://cms.invisionapp.com/dsm/cms/medicare?mode=edit). --- ### [Additional links](#additional-links) * For more information on the original Design System, check out [its GitHub page](https://github.com/cmsgov/design-system). Readme --- ### Keywords * design-system * medicare.gov
ml5-save
npm
JavaScript
***This project is currently in development.***

Friendly machine learning for the web!
---

ml5.js aims to make machine learning approachable for a broad audience of artists, creative coders, and students. The library provides access to machine learning algorithms and models in the browser, building on top of [TensorFlow.js](https://js.tensorflow.org/).

The library is supported by code examples, tutorials, and sample data sets with an emphasis on ethical computing. Bias in data, stereotypical harms, and responsible crowdsourcing are part of the documentation around data collection and usage.

ml5.js is heavily inspired by [Processing](https://processing.org/) and [p5.js](https://p5js.org/).

Usage
---

There are several ways you can use the ml5.js library:

* You can use the latest version (0.4.3) by adding it to the head section of your HTML document:

**v0.4.3**

```
<script src="https://unpkg.com/ml5@0.4.3/dist/ml5.min.js" type="text/javascript"></script>
```

* If you need to use an earlier version for any reason, you can change the version number. The [previous versions of ml5 can be found here](https://www.npmjs.com/package/ml5). You can use those previous versions by replacing `<version>` with the ml5 version of interest:

```
<script src="https://unpkg.com/ml5@<version>/dist/ml5.min.js" type="text/javascript"></script>
```

For example:

```
<script src="https://unpkg.com/[email protected]/dist/ml5.min.js" type="text/javascript"></script>
```

* You can also reference "latest", but we do not recommend this as your code may break as we update ml5.

```
<script src="https://unpkg.com/ml5@latest/dist/ml5.min.js" type="text/javascript"></script>
```

Resources
---

* [Getting Started](https://ml5js.org/getting-started/)
* [API Reference](https://ml5js.org/reference/)
* [Examples](https://github.com/ml5js/ml5-examples)
* [Community](https://ml5js.org/community)
* [FAQ](https://ml5js.org/getting-started/faq/)

Standalone Examples
---

You can find a collection of standalone examples in this repository: [github.com/ml5js/ml5-examples](https://github.com/ml5js/ml5-examples). These examples are meant to serve as an introduction to the library and machine learning concepts.

Code of Conduct
---

We believe in a friendly internet and community as much as we do in building friendly machine learning for the web. Please refer to our [CODE OF CONDUCT](https://github.com/ml5js/ml5-library/blob/HEAD/CODE_OF_CONDUCT.md) for our rules for interacting with ml5 as a developer, contributor, or user.

Contributing
---

Want to be a **contributor 🏗 to the ml5.js library**? If you are interested in submitting new features, fixing bugs, or helping develop the ml5.js ecosystem, please go to our [CONTRIBUTING](https://github.com/ml5js/ml5-library/blob/HEAD/CONTRIBUTING.md) documentation to get started. 🛠

Acknowledgements
---

ml5.js is supported by the time and dedication of open source developers from all over the world. Funding and support are generously provided by a [Google Education grant](https://edu.google.com/giving/?modal_active=none) at NYU's ITP/IMA program. Many thanks to [BrowserStack](https://www.browserstack.com/) for providing testing support.
Contributors --- Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)): | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | | [**<NAME>**](http://www.shiffman.net)[💻](https://github.com/ml5js/ml5-library/commits?author=shiffman "Code") [💡](#example-shiffman "Examples") [📆](#projectManagement-shiffman "Project Management") [👀](https://github.com/ml5js/ml5-library/pulls?q=is%3Apr+reviewed-by%3Ashiffman "Reviewed Pull Requests") [⚠️](https://github.com/ml5js/ml5-library/commits?author=shiffman "Tests") [📹](#video-shiffman "Videos") | [**<NAME>**](https://cvalenzuelab.com/)[💻](https://github.com/ml5js/ml5-library/commits?author=cvalenzuela "Code") [💡](#example-cvalenzuela "Examples") [👀](https://github.com/ml5js/ml5-library/pulls?q=is%3Apr+reviewed-by%3Acvalenzuela "Reviewed Pull Requests") [🔧](#tool-cvalenzuela "Tools") [⚠️](https://github.com/ml5js/ml5-library/commits?author=cvalenzuela "Tests") | [**<NAME>**](https://1023.io)[💻](https://github.com/ml5js/ml5-library/commits?author=yining1023 "Code") [💡](#example-yining1023 "Examples") [👀](https://github.com/ml5js/ml5-library/pulls?q=is%3Apr+reviewed-by%3Ayining1023 "Reviewed Pull Requests") [🔧](#tool-yining1023 "Tools") [⚠️](https://github.com/ml5js/ml5-library/commits?author=yining1023 "Tests") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Ayining1023 "Bug reports") | [**<NAME>**](http://www.hannahishere.com)[💻](https://github.com/ml5js/ml5-library/commits?author=handav "Code") [💡](#example-handav "Examples") | [**<NAME>**](https://jk-lee.com/)[💻](https://github.com/ml5js/ml5-library/commits?author=joeyklee "Code") [💡](#example-joeyklee "Examples") [👀](https://github.com/ml5js/ml5-library/pulls?q=is%3Apr+reviewed-by%3Ajoeyklee "Reviewed Pull Requests") [🖋](#content-joeyklee "Content") [⚠️](https://github.com/ml5js/ml5-library/commits?author=joeyklee "Tests") | [**AshleyJaneLewis**](https://github.com/AshleyJaneLewis)[📝](#blog-AshleyJaneLewis "Blogposts") [🎨](#design-AshleyJaneLewis "Design") [📋](#eventOrganizing-AshleyJaneLewis "Event Organizing") [🖋](#content-AshleyJaneLewis "Content") | [**<NAME>**](https://ellennickles.site/)[📝](#blog-ellennickles "Blogposts") [🖋](#content-ellennickles "Content") [🤔](#ideas-ellennickles "Ideas, Planning, & Feedback") [✅](#tutorial-ellennickles "Tutorials") | | [**<NAME>**](http://www.itayniv.com)[💻](https://github.com/ml5js/ml5-library/commits?author=itayniv "Code") [💡](#example-itayniv "Examples") | [**<NAME>**](http://nikitahuggins.com)[📝](#blog-nikitahuggins "Blogposts") [🖋](#content-nikitahuggins "Content") [🤔](#ideas-nikitahuggins "Ideas, Planning, & Feedback") | [**<NAME>y**](http://www.arnabchakravarty.com)[🖋](#content-AbolTaabol "Content") [📓](#userTesting-AbolTaabol "User Testing") | [**<NAME>**](http://www.aidanjnelson.com/)[💻](https://github.com/ml5js/ml5-library/commits?author=AidanNelson "Code") [💡](#example-AidanNelson "Examples") | [**WenheLI**](http://portfolio.steins.live)[💻](https://github.com/ml5js/ml5-library/commits?author=WenheLI "Code") [💡](#example-WenheLI "Examples") [🚧](#maintenance-WenheLI "Maintenance") [🤔](#ideas-WenheLI "Ideas, Planning, & Feedback") | [**<NAME>**](https://tinysubversions.com)[🤔](#ideas-dariusk "Ideas, Planning, & Feedback") [💬](#question-dariusk "Answering Questions") | [**<NAME>**](https://wangdingsu.com)[💻](https://github.com/ml5js/ml5-library/commits?author=Derek-Wds "Code") [💡](#example-Derek-Wds "Examples") | | [**garym140**](https://github.com/garym140)[🖋](#content-garym140 "Content") 
[📝](#blog-garym140 "Blogposts") [🤔](#ideas-garym140 "Ideas, Planning, & Feedback") [📓](#userTesting-garym140 "User Testing") | [**Gene Kogan**](http://genekogan.com)[💻](https://github.com/ml5js/ml5-library/commits?author=genekogan "Code") [💡](#example-genekogan "Examples") [🤔](#ideas-genekogan "Ideas, Planning, & Feedback") | [**<NAME>**](http://hhayeon.com)[💻](https://github.com/ml5js/ml5-library/commits?author=hhayley "Code") [💡](#example-hhayley "Examples") [🤔](#ideas-hhayley "Ideas, Planning, & Feedback") | [**<NAME>**](http://lisajamhoury.com)[💡](#example-lisajamhoury "Examples") [🤔](#ideas-lisajamhoury "Ideas, Planning, & Feedback") | [**<NAME>**](https://www.matamala.info)[🎨](#design-matamalaortiz "Design") [🖋](#content-matamalaortiz "Content") [📝](#blog-matamalaortiz "Blogposts") | [**<NAME>**](http://mayaontheinter.net)[💻](https://github.com/ml5js/ml5-library/commits?author=mayaman "Code") [💡](#example-mayaman "Examples") | [**<NAME>**](http://mimionuoha.com)[🤔](#ideas-MimiOnuoha "Ideas, Planning, & Feedback") [🖋](#content-MimiOnuoha "Content") [👀](https://github.com/ml5js/ml5-library/pulls?q=is%3Apr+reviewed-by%3AMimiOnuoha "Reviewed Pull Requests") | | [**<NAME>**](https://i.yuuno.cc/)[💻](https://github.com/ml5js/ml5-library/commits?author=NHibiki "Code") [💡](#example-NHibiki "Examples") [🚧](#maintenance-NHibiki "Maintenance") | [**<NAME>**](http://www.danioved.com/)[💻](https://github.com/ml5js/ml5-library/commits?author=oveddan "Code") [💡](#example-oveddan "Examples") [💬](#question-oveddan "Answering Questions") [🤔](#ideas-oveddan "Ideas, Planning, & Feedback") | [**<NAME>**](http://anothersideproject.co)[💻](https://github.com/ml5js/ml5-library/commits?author=stephkoltun "Code") [💡](#example-stephkoltun "Examples") [🖋](#content-stephkoltun "Content") [📝](#blog-stephkoltun "Blogposts") [🎨](#design-stephkoltun "Design") | [**<NAME>**](https://github.com/viztopia)[💻](https://github.com/ml5js/ml5-library/commits?author=viztopia "Code") [💡](#example-viztopia "Examples") [🤔](#ideas-viztopia "Ideas, Planning, & Feedback") | [**<NAME>**](https://www.wenqi.li)[💻](https://github.com/ml5js/ml5-library/commits?author=wenqili "Code") [💡](#example-wenqili "Examples") [🚇](#infra-wenqili "Infrastructure (Hosting, Build-Tools, etc)") | [**<NAME>**](http://brentlbailey.com)[⚠️](https://github.com/ml5js/ml5-library/commits?author=brondle "Tests") [💻](https://github.com/ml5js/ml5-library/commits?author=brondle "Code") [💡](#example-brondle "Examples") | [**Jonarod**](https://github.com/Jonarod)[💻](https://github.com/ml5js/ml5-library/commits?author=Jonarod "Code") | | [**Jasmine Otto**](https://jazztap.github.io)[💻](https://github.com/ml5js/ml5-library/commits?author=JazzTap "Code") [⚠️](https://github.com/ml5js/ml5-library/commits?author=JazzTap "Tests") [💡](#example-JazzTap "Examples") | [**<NAME>**](https://twitter.com/zaidalyafeai)[💻](https://github.com/ml5js/ml5-library/commits?author=zaidalyafeai "Code") [💡](#example-zaidalyafeai "Examples") [🤔](#ideas-zaidalyafeai "Ideas, Planning, & Feedback") [💬](#question-zaidalyafeai "Answering Questions") | [**<NAME>**](https://alca.tv)[💻](https://github.com/ml5js/ml5-library/commits?author=AlcaDesign "Code") [💡](#example-AlcaDesign "Examples") [⚠️](https://github.com/ml5js/ml5-library/commits?author=AlcaDesign "Tests") | [**Memo Akten**](http://www.memo.tv)[💻](https://github.com/ml5js/ml5-library/commits?author=memo "Code") [💡](#example-memo "Examples") | 
[**<NAME>**](https://thehidden1.github.io/)[💻](https://github.com/ml5js/ml5-library/commits?author=TheHidden1 "Code") [💡](#example-TheHidden1 "Examples") [🤔](#ideas-TheHidden1 "Ideas, Planning, & Feedback") [⚠️](https://github.com/ml5js/ml5-library/commits?author=TheHidden1 "Tests") | [**<NAME>**](http://meiamso.me)[💻](https://github.com/ml5js/ml5-library/commits?author=meiamsome "Code") [⚠️](https://github.com/ml5js/ml5-library/commits?author=meiamsome "Tests") | [**<NAME>**](https://marshalhayes.dev)[📖](https://github.com/ml5js/ml5-library/commits?author=marshalhayes "Documentation") | | [**<NAME>**](https://reiinakano.github.io)[💻](https://github.com/ml5js/ml5-library/commits?author=reiinakano "Code") [⚠️](https://github.com/ml5js/ml5-library/commits?author=reiinakano "Tests") [💡](#example-reiinakano "Examples") | [**<NAME>**](https://deeplearnjs.org/)[💻](https://github.com/ml5js/ml5-library/commits?author=nsthorat "Code") [💡](#example-nsthorat "Examples") [🤔](#ideas-nsthorat "Ideas, Planning, & Feedback") [🚇](#infra-nsthorat "Infrastructure (Hosting, Build-Tools, etc)") | [**<NAME>**](http://www.irenealvarado.com)[💻](https://github.com/ml5js/ml5-library/commits?author=irealva "Code") [💡](#example-irealva "Examples") [🚧](#maintenance-irealva "Maintenance") [🤔](#ideas-irealva "Ideas, Planning, & Feedback") | [**<NAME>**](http://www.vndrewlee.com/)[💻](https://github.com/ml5js/ml5-library/commits?author=vndrewlee "Code") [💡](#example-vndrewlee "Examples") [🤔](#ideas-vndrewlee "Ideas, Planning, & Feedback") | [**Jerhone**](https://medium.com/@fjcamillo.dev)[📖](https://github.com/ml5js/ml5-library/commits?author=fjcamillo "Documentation") | [**achimkoh**](https://scalarvectortensor.net/)[💻](https://github.com/ml5js/ml5-library/commits?author=achimkoh "Code") [💡](#example-achimkoh "Examples") [⚠️](https://github.com/ml5js/ml5-library/commits?author=achimkoh "Tests") | [**Jim**](http://ixora.io)[💡](#example-hx2A "Examples") [📖](https://github.com/ml5js/ml5-library/commits?author=hx2A "Documentation") [🖋](#content-hx2A "Content") | | [**<NAME>**](https://github.com/champierre/resume)[🚧](#maintenance-champierre "Maintenance") [💻](https://github.com/ml5js/ml5-library/commits?author=champierre "Code") | [**<NAME>**](http://naotohieda.com)[🚧](#maintenance-micuat "Maintenance") | [**<NAME>**](http://montoyamoraga.io)[🚧](#maintenance-montoyamoraga "Maintenance") [💡](#example-montoyamoraga "Examples") | [**b2renger**](http://b2renger.github.io/)[💻](https://github.com/ml5js/ml5-library/commits?author=b2renger "Code") [🚇](#infra-b2renger "Infrastructure (Hosting, Build-Tools, etc)") | [**<NAME>**](http://adityasharma.me)[🚧](#maintenance-adityaas26 "Maintenance") | [**okuna291**](https://github.com/okuna291)[🤔](#ideas-okuna291 "Ideas, Planning, & Feedback") | [**Jenna**](http://www.xujenna.com)[🤔](#ideas-xujenna "Ideas, Planning, & Feedback") | | [**nicoleflloyd**](https://github.com/nicoleflloyd)[🖋](#content-nicoleflloyd "Content") [🎨](#design-nicoleflloyd "Design") [📓](#userTesting-nicoleflloyd "User Testing") | [**jepster-dk**](http://jepster.dk)[💻](https://github.com/ml5js/ml5-library/commits?author=jepster-dk "Code") [🤔](#ideas-jepster-dk "Ideas, Planning, & Feedback") | [**<NAME>**](https://xanderjakeq.page/)[🤔](#ideas-xanderjakeq "Ideas, Planning, & Feedback") | [**<NAME>**](https://github.com/catarak)[🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Acatarak "Bug reports") [🚇](#infra-catarak "Infrastructure (Hosting, Build-Tools, etc)") [🤔](#ideas-catarak "Ideas, Planning, & Feedback") | 
[**<NAME>**](http://davebsoft.com)[🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Adcbriccetti "Bug reports") | [**Sblob1**](https://github.com/Sblob1)[🐛](https://github.com/ml5js/ml5-library/issues?q=author%3ASblob1 "Bug reports") | [**<NAME>**](https://www.jwilber.me/)[💡](#example-jwilber "Examples") [🤔](#ideas-jwilber "Ideas, Planning, & Feedback") [💻](https://github.com/ml5js/ml5-library/commits?author=jwilber "Code") | | [**danilo**](https://github.com/tezzutezzu)[💻](https://github.com/ml5js/ml5-library/commits?author=tezzutezzu "Code") [🤔](#ideas-tezzutezzu "Ideas, Planning, & Feedback") | [**<NAME>**](https://github.com/EmmaGoodliffe)[🤔](#ideas-EmmaGoodliffe "Ideas, Planning, & Feedback") [💬](#question-EmmaGoodliffe "Answering Questions") [🚧](#maintenance-EmmaGoodliffe "Maintenance") | [**Yang**](http://yangyang.blog)[💻](https://github.com/ml5js/ml5-library/commits?author=EonYang "Code") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3AEonYang "Bug reports") | [**<NAME>**](https://github.com/lydiajessup)[💻](https://github.com/ml5js/ml5-library/commits?author=lydiajessup "Code") [🤔](#ideas-lydiajessup "Ideas, Planning, & Feedback") [💡](#example-lydiajessup "Examples") | [**CJ R.**](https://coding.garden)[📖](https://github.com/ml5js/ml5-library/commits?author=w3cj "Documentation") [🖋](#content-w3cj "Content") | [**<NAME>**](https://github.com/badunit)[🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Abadunit "Bug reports") | [**<NAME>**](http://tnickel.de/)[🐛](https://github.com/ml5js/ml5-library/issues?q=author%3ATobiasNickel "Bug reports") [💻](https://github.com/ml5js/ml5-library/commits?author=TobiasNickel "Code") | | [**<NAME>**](https://wakatime.com/@barakplasma)[🖋](#content-barakplasma "Content") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Abarakplasma "Bug reports") | [**Rob**](http://sankeybuilder.com)[🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Aeformx "Bug reports") [💬](#question-eformx "Answering Questions") | [**<NAME>**](http://[email protected])[💡](#example-pujaarajan "Examples") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Apujaarajan "Bug reports") | [**<NAME>**](https://mcintyre.io)[⚠️](https://github.com/ml5js/ml5-library/commits?author=nickmcintyre "Tests") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Anickmcintyre "Bug reports") | [**<NAME>**](http://waxy.org/)[🖋](#content-waxpancake "Content") [🚧](#maintenance-waxpancake "Maintenance") | [**Wenqi Li**](https://www.wenqi.li)[🖋](#content-wenqili "Content") [💻](https://github.com/ml5js/ml5-library/commits?author=wenqili "Code") [🚇](#infra-wenqili "Infrastructure (Hosting, Build-Tools, etc)") [🚧](#maintenance-wenqili "Maintenance") [🤔](#ideas-wenqili "Ideas, Planning, & Feedback") | [**garym140**](https://github.com/garym140)[🎨](#design-garym140 "Design") | | [**nicoleflloyd**](https://github.com/nicoleflloyd)[🎨](#design-nicoleflloyd "Design") | [**Jim**](http://ixora.io)[🖋](#content-hx2A "Content") [🚧](#maintenance-hx2A "Maintenance") [🤔](#ideas-hx2A "Ideas, Planning, & Feedback") | [**Yeswanth**](https://medium.com/@s1thsv) [🚧](#maintenance-yeswanth "Maintenance") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Ayeswanth "Bug reports") | [**<NAME>**](http://psherlock.com.br)[🐛](https://github.com/ml5js/ml5-library/issues?q=author%3APettrus "Bug reports") [🚧](#maintenance-Pettrus "Maintenance") | [**danilo**](https://github.com/tezzutezzu)[🖋](#content-tezzutezzu "Content") | 
[**<NAME>**](http://andreasrefsgaard.dk)[🖋](#content-AndreasRef "Content") | [**<NAME>**](http://bcjordan.com)[🖋](#content-bcjordan "Content") | | [**<NAME>**](http://bradley.im)[🖋](#content-ratley "Content") | [**Kaushlend<NAME>ap**](https://github.com/Kaushl2208)[🖋](#content-Kaushl2208 "Content") | [**maxdevjs**](http://twitter.com/maxdevjs)[🖋](#content-maxdevjs "Content") | [**josher19**](http://about.me/joshuaweinstein/)[🖋](#content-josher19 "Content") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Ajosher19 "Bug reports") | [**<NAME>**](https://www.enigmeta.com)[🖋](#content-fdb "Content") | [**Violet**](https://github.com/violetcraze)[🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Avioletcraze "Bug reports") | [**<NAME>**](http://tirtawr.com)[💻](https://github.com/ml5js/ml5-library/commits?author=tirtawr "Code") [🖋](#content-tirtawr "Content") [🤔](#ideas-tirtawr "Ideas, Planning, & Feedback") | | [**<NAME>**](http://kruschel.dev)[💻](https://github.com/ml5js/ml5-library/commits?author=mikakruschel "Code") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3Amikakruschel "Bug reports") | [**<NAME>**](https://github.com/tasanuma)[🖋](#content-tasanuma "Content") | [**<NAME>**](https://www.linkedin.com/in/martinloesethjensen/)[🖋](#content-martinloesethjensen "Content") | [**<NAME>**](https://adaptive.link)[🖋](#content-adaptive "Content") | [**<NAME>**](https://github.com/RaisinTen)[🖋](#content-RaisinTen "Content") | [**Ludwig Stumpp**](https://twitter.com/ludwig_stumpp)[👀](https://github.com/ml5js/ml5-library/pulls?q=is%3Apr+reviewed-by%3ALudwigStumpp "Reviewed Pull Requests") [🐛](https://github.com/ml5js/ml5-library/issues?q=author%3ALudwigStumpp "Bug reports") [💡](#example-LudwigStumpp "Examples") | [**<NAME>**](http://bomani.xyz/)[🖋](#content-bomanimc "Content") [💻](https://github.com/ml5js/ml5-library/commits?author=bomanimc "Code") |

This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!

### Keywords

* ml5
* js
* save