Author

J Bridge

Published

February 9, 2024

This report contains all code used in the analysis, together with its output. It was created using R 4.1.2 (R Core Team, 2021) with R Markdown (Allaire et al., 2022; Xie et al., 2018, 2020). Several R packages were also used (Gagolewski, 2021; Gohel and Skintzos, 2023; Harrell Jr, 2022; Henry and Wickham, 2023; Heymans, 2022; Hofmann et al., 2022; Honaker et al., 2011; Ooms, 2022; Pedersen, 2022; Pedersen and Robinson, 2022; Robitzsch and Grund, 2022; van Buuren and Groothuis-Oudshoorn, 2011; Wickham, 2016; Wickham et al., 2022; Xie, 2014; Zhu, 2021).

Code
set.seed(1)
options(digits=2)
library(rlang)
library(dplyr) 
library(knitr)
library(ggplot2)
library(Hmisc)
library(ggpcp)
library(Amelia)
library(mice)
library(miceadds)
library(stringi)
library(miceafter)
library(patchwork)
library(flextable)
library(gganimate)
library(gifski)
library(stringr)
library(ggsci)
library(dataMeta)
library(DT)
set_flextable_defaults(
  na_str="Unknown",
  nan_str="Unknown"
)
load(file="misc_files/helpers.RData") # Some helper functions are saved in an RData file

l = readRDS("misc_files/TrueNTHLong.rds")
w = readRDS("misc_files/TrueNTHWide.rds")

EPIC-26

EPIC-26, a short-form version of the Expanded Prostate Cancer Index Composite (EPIC), was used to collect participant-reported outcome measures (PROMs) (Wei et al., 2000). We were mainly interested in the urinary incontinence and sexual function domains, as these were identified as important by participant feedback.

The urinary incontinence domain has four questions (EPIC-26 questions 23, 26, 27 and 28)

  • Over the past 4 weeks, how often have you leaked urine?
  • Which of the following best describes your control during the last 4 weeks?
  • How many pads or adult diapers per day did you usually use to control leakage during the last 4 weeks?
  • How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

The sexual function domain has six questions (EPIC-26 questions 57, 58, 59, 60, 64, and 68)

  • How would you rate each of the following during the last 4 weeks? Your ability to have an erection
  • How would you rate each of the following during the last 4 weeks? Your ability to reach orgasm (climax)
  • How would you describe the usual quality of your erections during the last 4 weeks?
  • How would you describe the frequency of your erections during the last 4 weeks?
  • Overall, how would you rate your ability to function sexually during the last 4 weeks?
  • Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

The responses are given numerical values on a scale from 0 to 100, according to the EPIC scoring criteria, and the mean of these values is taken to obtain a domain score. The scoring criteria allow up to 20% of a domain's items to be missing; for the four-item urinary incontinence domain this means all questions must be answered, whereas the six-item sexual function domain score may have one missing question (Wei et al., 2000).
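
This rule can be sketched as follows. Here `score_domain` is a hypothetical helper written for illustration only (it is not one of the project's functions), and the items are assumed to be already coded on the 0 to 100 scale.

Code
# Minimal sketch of the EPIC-26 domain scoring rule (hypothetical helper).
score_domain = function(items, max_missing = floor(length(items) * 0.2)) {
  if (sum(is.na(items)) > max_missing) return(NA_real_)
  mean(items, na.rm = TRUE)
}

# Urinary incontinence: 4 items, so floor(4 * 0.2) = 0 items may be missing
score_domain(c(100, 75, 100, NA))         # returns NA: too many missing items
# Sexual function: 6 items, so floor(6 * 0.2) = 1 item may be missing
score_domain(c(50, 75, NA, 100, 75, 50))  # mean of the 5 answered items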

Data

The .csv files extracted from the records consist of

  • PROM reports named Phase X Y.csv where X is either “1” or “2” and Y is one of “baseline”, “Month 1”, “Month 3”, “Month 6”, “Month 12”
  • Patient demographics in “Patient Demographics.csv”
  • Clinical data in “BAUS.csv”
  • Information on participant consent in “Consent.csv”
  • Mapping of hospital ID in “BAUS.csv” to the hospital name

The PROMs were in long format and needed to be converted to wide format (by question) and then concatenated, leaving the data in long format by time point. The study IDs were then checked against the consent file to ensure that only consented participants were included. Finally, the age at surgery was calculated for each participant from the date of birth and date of surgery; if either date was missing, the age was treated as missing.
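
These steps can be sketched as below. This is an illustrative sketch only: the column names `Question`, `Response`, and `Consented`, and the data frames `proms_long` and `consent`, are assumptions for illustration, not the actual names used in the extracts.

Code
# Illustrative sketch of the preprocessing steps (assumed column names).
library(dplyr)
library(tidyr)

w_sketch = proms_long %>%
  # one row per participant, time point and question -> one column per question
  pivot_wider(names_from = Question, values_from = Response) %>%
  # keep only participants recorded as consented in the consent file
  semi_join(filter(consent, Consented == "Yes"), by = "Study_ID") %>%
  # age at surgery in whole years; NA whenever either date is missing
  mutate(Age = floor(as.numeric(DoS - DoB) / 365.25))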

From this, the EPIC-26 domain scores were calculated by converting the responses into values and taking their mean; the urinary incontinence and sexual function scores were both calculated according to the EPIC-26 scoring instructions (Wei et al., 2000).

Some participants also reported baseline clinical variables (e.g., PSA, Gleason score, nerve sparing approach used, and D’Amico risk classification). The clinical data was contained in another file and needed to be added to the dataset.

This then created a dataset in wide format. Phase (1 or 2) is the study phase, time 0 is baseline and the other times (1, 3, 6, 12) are months, study IDs are the anonymised IDs, DoB and DoS are the date of birth and date of surgery, respectively.

Headings starting with A are answers and headings starting with C are coded answers. The heading numbers refer to the original EPIC question numbers. For example, A23 is the answer for EPIC question 23 (EPIC-26 question 1) and C23 is the coded answer (0, 25, 50, 75, 100), which was used to calculate the domain score.

The headings Umiss and Smiss show how many items are missing from the urinary (incontinence) and sexual (function) domains, respectively. The urinary domain allows no missing items, while the sexual domain allows one missing item. Uscore and Sscore are the urinary (incontinence) and sexual (function) domain scores; these are calculated as the mean of the coded items (Wei et al., 2000).

A total of 2,030 participants were used in the final analysis, with 1,118 of those reporting clinical variables. The table below summarises the participant numbers.

Code
# Count the number of participants
df_ccN = data.frame("Set"=c(
  "Full analytical cohort",
  "PROM and clinical data",
  "PROM data only"
  ),
  "Number"=c(
    n_distinct(l["Study_ID"]),
     sum(w$Clinical_data),
    sum(abs(1-w$Clinical_data))
  ))
tab = flextable(df_ccN)
ggplot()+
  gen_grob(tab)+
  plot_layout(nrow=2,heights=c(0,1))

We analyse participants with complete baseline and 12 month urinary incontinence data and participants with complete baseline and 12 month sexual function data separately. Here, these two groups are extracted and stored as separate data frames.

The number of participants with full baseline and 12 month data for urinary incontinence and sexual function outcomes is shown in the table below.

Code
# How many of the participants have complete data 
ub = w[w$Umiss.0==0 & !is.na(w$Umiss.0),] # Number with complete urinary data at baseline
uf = w[w$Umiss.12==0 & !is.na(w$Umiss.12),] # Number with complete urinary data at 12 months
sb = w[w$Smiss.0==0 & !is.na(w$Assist.0) & !is.na(w$Smiss.0),] # Number with complete sexual data at baseline
sf = w[w$Smiss.12==0 & !is.na(w$Assist.12) & !is.na(w$Smiss.12),] # Number with complete sexual data at 12 months
uf = uf %>% filter(Study_ID %in% unique(ub$Study_ID)) # Number with complete urinary data at baseline and 12 months
sf = sf %>% filter(Study_ID %in% unique(sb$Study_ID)) # Number with complete sexual data at baseline and 12 months

df_ccN = data.frame("Set"=c(
  "Full urinary incontinence data at baseline",
  "Full urinary incontinence data at baseline and 12 months",
  "Full sexual function data at baseline",
  "Full sexual function data at baseline and 12 months"
  ),
  "Number"=c(
    length(ub$Study_ID),
    length(uf$Study_ID),
    length(sb$Study_ID),
    length(sf$Study_ID)
  ))
tab=flextable(df_ccN)
ggplot()+
  gen_grob(tab)+
  plot_layout(nrow=2,heights=c(0,1))

Baseline characteristics

Here, we present the baseline characteristics for the PROM participants. These are reported for all participants, with missing data included; “Missing” or “NA” indicates the number of missing responses. These values were used in constructing Table 2 in the manuscript.

Baseline characteristics are reported for all participants, participants with PROM data only, and PROM with clinical data participants separately. For this, the participants first need to be split into these groups and stored as data frames.

Code
# w is the data frame with all participants
prom = w[w$Clinical_data==0,] # PROM only
promwc = w[w$Clinical_data==1,] # PROM with clinical data

The ages of participants at the time of surgery were calculated from the date of birth and date of surgery. Any dates of birth before 1900 or after 2000 were assumed to be entered incorrectly, as these participants would not be candidates for surgery. Any surgery dates before 2015 were also assumed to be entered incorrectly, as all surgeries were carried out after 2015. These values were treated as missing.
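
These rules can be written as below. This is a sketch against the `DoB` and `DoS` columns described earlier; the exact implementation used in the cleaning scripts may differ.

Code
# Sketch of the date-cleaning rules: implausible dates are set to missing.
library(dplyr)
w = w %>%
  mutate(
    DoB = if_else(between(as.numeric(format(DoB, "%Y")), 1900, 2000), DoB, as.Date(NA)),
    DoS = if_else(as.numeric(format(DoS, "%Y")) >= 2015, DoS, as.Date(NA)),
    Age = floor(as.numeric(DoS - DoB) / 365.25)  # NA when either date is missing
  )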

Ages ranged from 36 to 83 years, with a mean of 64.7 years; 73 ages were missing.

Code
# Table of the summary data
sum_tab(w$Age)

The frequencies of ages in different age categories are given in the table and histogram below.

Code
hist_age(w$Age, xlab='Range', ylab='Count')

Ages ranged from 40 to 79 years, with a median of 64.3 years; 17 ages were missing.

Code
sum_tab(promwc$Age)

The frequencies of ages in different age categories are given in the table and histogram below.

Code
hist_age(promwc$Age, xlab='Age', ylab='Count')

Ages ranged from 36 to 83 years, with a mean of 64.2 years; 56 ages were missing.

Code
sum_tab(prom$Age)

The frequencies of ages in different age categories are given in the table and histogram below.

Code
hist_age(prom$Age, xlab='Range', ylab='Count')

Here, we present the years that the surgeries were performed.

For all 2030 participants, the years of surgery are shown in the bar plot and table below.

Code
years = format(as.Date(w$DoS), format="%Y")
bar_tab(years, xlab = "Year of surgery", ylab = "Count")

For the participants with clinical and PROM data, the years of surgery are shown in the bar plot and table below.

Code
years = format(as.Date(promwc$DoS), format="%Y")
bar_tab(years, xlab = "Year of surgery", ylab = "Count")

For the participants with PROM data only, the years of surgery are shown in the bar plot and table below.

Code
years = format(as.Date(prom$DoS), format="%Y")
bar_tab(years, xlab = "Year of surgery", ylab = "Count")

A table of the frequencies of reported ethnicities for all participants is shown below.

Code
tab = proc_freq(w, "Race.0")
tab = set_header_labels(tab, Race.0="Race", Count="Number of participants")
tab

Race                                                     Count  Percent
White and Asian                                              5     0.2%
African                                                     44     2.2%
Any other Asian background                                   7     0.3%
Any other Black / African / Caribbean background             7     0.3%
Any other ethnic group                                       9     0.4%
Any other Mixed / Multiple ethnic background                 6     0.3%
Any other White background                                  45     2.2%
Arab                                                         4     0.2%
Bangladeshi                                                  1     0.0%
Caribbean                                                   39     1.9%
Chinese                                                      5     0.2%
English / Welsh / Scottish / Northern Irish / British    1,776    87.5%
Indian                                                      22     1.1%
Irish                                                       31     1.5%
Pakistani                                                    7     0.3%
White and Black African                                      9     0.4%
White and Black Caribbean                                    5     0.2%
Missing                                                      8     0.4%
Total                                                    2,030   100.0%

A table of the frequencies of reported ethnicities for participants with clinical data available is shown below.

Code
tab = proc_freq(promwc, "Race.0")
tab = set_header_labels(tab, Race.0="Race", Count="Number of participants")
tab

Race                                                     Count  Percent
White and Asian                                              3     0.3%
African                                                     32     2.9%
Any other Asian background                                   7     0.6%
Any other Black / African / Caribbean background             5     0.4%
Any other ethnic group                                       7     0.6%
Any other Mixed / Multiple ethnic background                 2     0.2%
Any other White background                                  33     3.0%
Arab                                                         2     0.2%
Bangladeshi                                                  1     0.1%
Caribbean                                                   27     2.4%
Chinese                                                      2     0.2%
English / Welsh / Scottish / Northern Irish / British      948    84.8%
Indian                                                      13     1.2%
Irish                                                       19     1.7%
Pakistani                                                    4     0.4%
White and Black African                                      6     0.5%
White and Black Caribbean                                    4     0.4%
Missing                                                      3     0.3%
Total                                                    1,118   100.0%

A table of the frequencies of reported ethnicities for participants with no clinical data available is shown below.

Code
tab = proc_freq(prom, "Race.0")
tab = set_header_labels(tab, Race.0="Race", Count="Number of participants")
tab

Race                                                     Count  Percent
White and Asian                                              2     0.2%
African                                                     12     1.3%
Any other Black / African / Caribbean background             2     0.2%
Any other ethnic group                                       2     0.2%
Any other Mixed / Multiple ethnic background                 4     0.4%
Any other White background                                  12     1.3%
Arab                                                         2     0.2%
Caribbean                                                   12     1.3%
Chinese                                                      3     0.3%
English / Welsh / Scottish / Northern Irish / British      828    90.8%
Indian                                                       9     1.0%
Irish                                                       12     1.3%
Pakistani                                                    3     0.3%
White and Black African                                      3     0.3%
White and Black Caribbean                                    1     0.1%
Missing                                                      5     0.5%
Total                                                      912   100.0%

For all participants, the number reporting marital status is shown below.

Code
tab = proc_freq(w, "Marital.0")
tab = set_header_labels(tab, Marital.0="Marital status", Count="Number of participants")
tab

Marital status                                           Count  Percent
In a significant relationship, but not living together      93     4.6%
Married or living with a partner                         1,749    86.2%
Other                                                       18     0.9%
Single / living alone                                      133     6.6%
Single / not living alone                                   22     1.1%
Missing                                                     15     0.7%
Total                                                    2,030   100.0%

For participants with PROM and clinical data, the number reporting marital status is shown below.

Code
tab = proc_freq(promwc, "Marital.0")
tab = set_header_labels(tab, Marital.0="Marital status", Count="Number of participants")
tab

Marital status                                           Count  Percent
In a significant relationship, but not living together      54     4.8%
Married or living with a partner                           954    85.3%
Other                                                       11     1.0%
Single / living alone                                       76     6.8%
Single / not living alone                                   16     1.4%
Missing                                                      7     0.6%
Total                                                    1,118   100.0%

For participants with PROM data only, the number reporting marital status is shown below.

Code
tab = proc_freq(prom, "Marital.0")
tab = set_header_labels(tab, Marital.0="Marital status", Count="Number of participants")
tab 

Marital status                                           Count  Percent
In a significant relationship, but not living together      39     4.3%
Married or living with a partner                           795    87.2%
Other                                                        7     0.8%
Single / living alone                                       57     6.2%
Single / not living alone                                    6     0.7%
Missing                                                      8     0.9%
Total                                                      912   100.0%

There were two categories for diabetes: “Diabetes” and “Diabetes”. One of them contained an extra white-space character, causing it to be recognised as a separate category. As there was no option for “no diabetes”, it is impossible to distinguish between participants without diabetes and participants with missing data.
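
The duplicate could be collapsed by trimming white space before tabulation. A minimal illustration with a toy vector follows; this was not applied to the data here, where the two raw categories are reported as extracted.

Code
# Toy illustration: categories differing only in surrounding white space
x = c("Diabetes", "Diabetes ", "Diabetes ", " Diabetes")
table(x)           # appears as three distinct categories
table(trimws(x))   # collapsed into one category after trimming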

The number of participants reporting diabetes and heart disease among all participants is shown below.

Code
table(w$Diabetes.0)

 Diabetes Diabetes  
       28       108 
Code
table(w$HeartD.0)

Heart disease (for example angina, heart attack or heart failure) 
                                                               99 

For those with PROM and clinical data, the number reporting diabetes and heart disease is shown below.

Code
table(promwc$Diabetes.0)

 Diabetes Diabetes  
       16        60 
Code
table(promwc$HeartD.0)

Heart disease (for example angina, heart attack or heart failure) 
                                                               45 

For those with PROM data only, the number reporting diabetes and heart disease is shown below.

Code
table(prom$Diabetes.0)

 Diabetes Diabetes  
       12        48 
Code
table(prom$HeartD.0)

Heart disease (for example angina, heart attack or heart failure) 
                                                               54 

Here, we show the reported baseline leakage (EPIC question 23) and pad use (EPIC question 27) (Wei et al., 2000).

Here are the results for all participants in the analytical cohort.

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(w$A23.0, xlab="Leakage", ylab="Count")

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(w$A27.0, xlab="Pad use", ylab="Count")

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(w$A23.0, w$A27.0, "Leakage", "Count", "Pad use")

The number reporting all four urinary incontinence domain responses to calculate the urinary incontinence score was

Code
as.numeric(colSums(!is.na(w["Uscore.0"])))
[1] 1996

The median and interquartile range among these participants was

Code
quantile(w["Uscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 86 100 100 

Here are the results for the participants who had clinical data available.

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(promwc$A23.0, xlab="Leakage", ylab="Count")

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(promwc$A27.0, xlab="Pad use", ylab="Count")

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(promwc$A23.0, promwc$A27.0, "Leakage", "Count", "Pad use")

For participants with PROM and clinical data, the number reporting all four urinary incontinence domain responses to calculate the urinary incontinence score was

Code
as.numeric(colSums(!is.na(promwc["Uscore.0"])))
[1] 1100

The median and interquartile range of the urinary incontinence score for these participants was

Code
quantile(promwc["Uscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 86 100 100 

Here are the results for participants who did not have clinical data available.

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(prom$A23.0, xlab="Leakage", ylab="Count")

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(prom$A27.0, xlab="Pad use", ylab="Count")

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(prom$A23.0, prom$A27.0, "Leakage", "Count", "Pad use")

For participants with PROM data only, the number with all four urinary incontinence domain responses to calculate the urinary incontinence score was

Code
as.numeric(colSums(!is.na(prom["Uscore.0"])))
[1] 896

The median and interquartile range of the urinary incontinence score for these participants was

Code
quantile(prom["Uscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 86 100 100 

Here, we show the reported baseline erection quality (EPIC question 59) and whether the participant used medication or devices to improve erections (this question is not included in EPIC) (Wei et al., 2000).

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Have you used any medications or devices to aid or improve erections?

Code
bar_tab2(w$A59.0, w$Assist.0, "Erection", "Count", "Assistance")

Here, we present the responses for specific assistance use.

Code
assist_use(w, 0)

Among all participants, the number with sufficient PROMs to calculate the sexual function score was

Code
as.numeric(colSums(!is.na(w["Sscore.0"])))
[1] 1953

The median score and interquartile range for these participants was

Code
quantile(w["Sscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 42  71  92 

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Have you used any medications or devices to aid or improve erections?

Code
bar_tab2(promwc$A59.0, promwc$Assist.0, "Erection", "Count", "Assistance")

Here, we present the responses for specific assistance use.

Code
assist_use(promwc, 0)

For participants with PROM and clinical data, the number with sufficient PROMs to calculate the sexual function score was

Code
as.numeric(colSums(!is.na(promwc["Sscore.0"])))
[1] 1079

The median score and interquartile range for these participants was

Code
quantile(promwc["Sscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 44  75  92 

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Have you used any medications or devices to aid or improve erections?

Code
bar_tab2(prom$A59.0, prom$Assist.0, "Erection", "Count", "Assistance")

Here, we present the responses for specific assistance use.

Code
assist_use(prom, 0)

For participants with PROM data only, the number with sufficient PROMs to calculate the sexual function score was

Code
as.numeric(colSums(!is.na(prom["Sscore.0"])))
[1] 874

The median score and interquartile range for these participants was

Code
quantile(prom["Sscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 39  70  92 

These are for the participants with PROMs and clinical data only.

Reported PSAs sorted into categories are shown in the table and bar chart below.

Code
tab = cut(as.numeric(promwc$PSA), c(0, 10, 20.00001, Inf), right=FALSE)
levels(tab)=c(
  "0 to less than 10",
  "10 to less than 20",
  "Over 20"
  )
bar_tab(tab, xlab="PSA", ylab="Count")

Reported Gleason scores are shown in the table and bar chart below. In the graph, “other” is grouped with missing.

Code
bar_tab(promwc$GS, xlab="Gleason Score", ylab="Count")

In the accompanying table, “other” was counted as unknown.

Reported nerve sparing approaches are shown in the table and bar chart below.

Code
bar_tab(promwc$NSA, xlab="Nerve Sparing Approach", ylab="Count")

Reported lymphadenectomies are shown in the table and bar chart below.

Code
bar_tab(promwc$Lymph, xlab = "Lymphadenectomy", ylab="Count")

Reported D’Amico categories are shown in the table and bar chart below.

Code
bar_tab(promwc$DAmico, "D'Amico risk", "Count")

Reported surgical approaches are shown in the table and bar chart below.

Code
bar_tab(promwc$Approach, "Surgical approach", "Count")

For the surgeon plot below, there were 587 missing values, which have been removed to make the graph easier to read. One of the categories shown is “other”, which may include more than one surgeon.

Code
bar_tab(w$Surgeon[complete.cases(w$Surgeon)], "Surgeon", "Count")

The number of participants at each centre is shown below.

Code
bar_tab(w$Hospital_ID.0, "Centre", "Count")

Histograms of domain scores

Histograms showing the distributions of urinary incontinence and sexual function domain scores for each of the time points are shown in this section.

The median and interquartile range among the 1,996 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Uscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 86 100 100 

A histogram of the available scores at baseline is shown below.

Code
hist(unlist(w["Uscore.0"]), main="Histogram of urinary incontinence scores at baseline", xlab="Score")

The median and interquartile range among the 1,415 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Uscore.1"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 27  50  65 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Uscore.1"]), main="Histogram of urinary incontinence scores at month 1", xlab="Score")

The median and interquartile range among the 1,405 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Uscore.3"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 46  60  77 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Uscore.3"]), main="Histogram of urinary incontinence scores at month 3", xlab="Score")

The median and interquartile range among the 1,360 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Uscore.6"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 58  71  92 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Uscore.6"]), main="Histogram of urinary incontinence scores at month 6", xlab="Score")

The median and interquartile range among the 1,409 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Uscore.12"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 60  77 100 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Uscore.12"]), main="Histogram of urinary incontinence scores at month 12", xlab="Score")

The median and interquartile range among the 1,953 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Sscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 42  71  92 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Sscore.0"]), main="Histogram of sexual function scores at baseline", xlab="Score")

The median and interquartile range among the 1,368 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Sscore.1"], c(.25, .5, .75), na.rm=T)
 25%  50%  75% 
 4.2 12.5 18.0 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Sscore.1"]), main="Histogram of sexual function scores at month 1", xlab="Score")

The median and interquartile range among the 1,343 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Sscore.3"], c(.25, .5, .75), na.rm=T)
 25%  50%  75% 
 4.2 13.8 25.0 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Sscore.3"]), main="Histogram of sexual function scores at month 3", xlab="Score")

The median and interquartile range among the 1,321 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Sscore.6"], c(.25, .5, .75), na.rm=T)
 25%  50%  75% 
 8.3 16.7 30.5 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Sscore.6"]), main="Histogram of sexual function scores at month 6", xlab="Score")

The median and interquartile range among the 1,350 participants with sufficient responses to calculate the domain score is

Code
quantile(w["Sscore.12"], c(.25, .5, .75), na.rm=T)
 25%  50%  75% 
 8.3 18.0 40.3 

A histogram of the available scores is shown below.

Code
hist(unlist(w["Sscore.12"]), main="Histogram of sexual function scores at month 12", xlab="Score")

Baseline complete case results

Here, the frequencies of the PROMs reported at baseline are presented. These results are used in the main body of the text. A complete case analysis using only participants with baseline data available is used so that the proportions can easily be compared.

Frequencies of the reported outcomes are shown in tables. Proportions of participants are then given with 95% confidence intervals constructed using Wilson’s method.
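
A Wilson interval for a proportion can be obtained with `binconf` from Hmisc (loaded above); the `prop_conf` helper presumably wraps something similar. The counts below are illustrative only, not taken from the data.

Code
# Wilson 95% confidence interval for a proportion (illustrative counts).
library(Hmisc)
binconf(x = 150, n = 200, alpha = 0.05, method = "wilson")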

Participants with complete urinary incontinence domain question data at baseline.

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(ub$A23.0, "Leakage", "Count")

Proportion rarely or never leaking urine.

Code
prop_conf(ub$A23.0, "Rarely or never")

PointEst  Lower  Upper
    0.77   0.75   0.79

Proportion with leakage

Code
prop_conf(ub$A23.0, "Rarely or never", T)

PointEst  Lower  Upper
    0.23   0.21   0.25

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(ub$A27.0, "Pad use", "Count")

Proportion using no pads per day.

Code
prop_conf(ub$A27.0, "None")

PointEst  Lower  Upper
    0.97   0.96   0.97

Proportion using pads

Code
prop_conf(ub$A27.0, "None", T)

PointEst  Lower  Upper
   0.034  0.027  0.043

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(ub$A23.0, ub$A27.0, "Leakage", "Count", "Pad use", missing=F)

Proportion who were leak free and pad free at baseline.

Code
prop_conf2(ub$A23.0, ub$A27.0, "Rarely or never", "None")

PointEst  Lower  Upper
    0.77   0.75   0.78

The median and interquartile range was

Code
quantile(ub["Uscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 86 100 100 

Participants with complete sexual function domain question data at baseline.

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Have you used any medications or devices to aid or improve erections?

Code
bar_tab2(sb$A59.0, sb$Assist.0, "Quality", "Count", "Medication and device Use", F, save="Figures/Figure3C.jpg")

Proportion with natural erections firm enough for intercourse.

Code
prop_conf2(sb$A59.0, sb$Assist.0, "Firm enough for intercourse", "No")

PointEst  Lower  Upper
    0.52   0.50   0.54

There are missing values here as use of medications and devices is not included as a question in the sexual function domain.

Code
assist_use(sb, 0)

The median and interquartile range was

Code
quantile(sb["Sscore.0"], c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 43  74  92 

Twelve month results

Here, the frequencies of the PROMs reported at 12 months are presented. These results are used in the main body of the text. A complete case analysis using only participants with baseline and 12 month data available is used so that the proportions can easily be compared.

Frequencies of the reported outcomes are shown in tables. Proportions of participants are then given with 95% confidence intervals constructed using Wilson’s method.

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(uf$A23.12, "Leakage", "Count")

Leak free proportion

Code
prop_conf(uf$A23.12, "Rarely or never")

PointEst  Lower  Upper
    0.45   0.42   0.47

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(uf$A27.12, "Pad use", "Count")

Pad free proportion

Code
prop_conf(uf$A27.12, "None")

PointEst  Lower  Upper
    0.65   0.63   0.68

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(uf$A23.12, uf$A27.12, "Leakage", "Count", "Pad use", missing=F)

Proportion pad free and leak free

Code
prop_conf2(uf$A23.12, uf$A27.12, "Rarely or never", "None")

PointEst  Lower  Upper
    0.42   0.39   0.45

Median and interquartile range of the urinary incontinence score

Code
quantile(uf$Uscore.12, c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 60  76 100 

Here is the spaghetti plot for all participants at all time points.

Code
spaghetti(l, l$Time, l$Uscore, l$Study_ID)

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(temp$A23.12, "Leakage", "Count")

Leak free proportion

Code
prop_conf(temp$A23.12, "Rarely or never")

PointEst  Lower  Upper
    0.50   0.47   0.53

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(temp$A27.12, "Pad use", "Count")

Pad free proportion

Code
prop_conf(temp$A27.12, "None")

PointEst  Lower  Upper
    0.70   0.67   0.72

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(temp$A23.12, temp$A27.12, "Leakage", "Count", "Pad use", missing=F)

Proportion pad free and leak free

Code
prop_conf2(temp$A23.12, temp$A27.12, "Rarely or never", "None")

PointEst  Lower  Upper
    0.48   0.45   0.51

Median and interquartile range of the urinary incontinence score

Code
quantile(temp$Uscore.12, c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 65  84 100 

Here is the spaghetti plot for these participants at all time points.

Code
d_tmp = l[l$Study_ID %in% temp$Study_ID,]
spaghetti(d_tmp, d_tmp$Time, d_tmp$Uscore, d_tmp$Study_ID)

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(temp$A23.12, "Leakage", "Count")

Leak free proportion

Code
prop_conf(temp$A23.12, "Rarely or never")

PointEst  Lower  Upper
    0.45   0.43   0.48

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(temp$A27.12, "Pad use", "Count")

Pad free proportion

Code
prop_conf(temp$A27.12, "None")

PointEst  Lower  Upper
    0.66   0.64   0.69

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(temp$A23.12, temp$A27.12, "Leakage", "Count", "Pad use", missing=F)

Proportion pad free and leak free

Code
prop_conf2(temp$A23.12, temp$A27.12, "Rarely or never", "None")

PointEst  Lower  Upper
    0.43   0.40   0.45

Median and interquartile range of the urinary incontinence score

Code
quantile(temp$Uscore.12, c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 60  77 100 

Here is the spaghetti plot for these participants at all time points.

Code
d_tmp = l[l$Study_ID %in% temp$Study_ID,]
spaghetti(d_tmp, d_tmp$Time, d_tmp$Uscore, d_tmp$Study_ID)

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(temp$A23.12, "Leakage", "Count")

Leak free proportion

Code
prop_conf(temp$A23.12, "Rarely or never")

PointEst  Lower  Upper
0.51      0.48   0.54

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(temp$A27.12, "Pad use", "Count")

Pad free proportion

Code
prop_conf(temp$A27.12, "None")

PointEst  Lower  Upper
0.7       0.67   0.72

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(temp$A23.12, temp$A27.12, "Leakage", "Count", "Pad use", missing=F)

Proportion pad free and leak free

Code
prop_conf2(temp$A23.12, temp$A27.12, "Rarely or never", "None")

PointEst  Lower  Upper
0.48      0.45   0.51

Median and interquartile range of the urinary function score

Code
quantile(temp$Uscore.12, c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 65  84 100 

Here is the spaghetti plot for all participants at all time points.

Code
d_tmp = l[l$Study_ID %in% temp$Study_ID,]
spaghetti(d_tmp, d_tmp$Time, d_tmp$Uscore, group=d_tmp$Study_ID)

Q23: Over the past four weeks, how often have you leaked urine?

Code
bar_tab(temp$A23.12, "Leakage", "Count")

Leak free proportion

Code
prop_conf(temp$A23.12, "Rarely or never")

PointEst  Lower  Upper
0.23      0.19   0.28

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab(temp$A27.12, "Pad use", "Count")

Pad free proportion

Code
prop_conf(temp$A27.12, "None")

PointEst  Lower  Upper
0.48      0.43   0.54

Q23: Over the past four weeks, how often have you leaked urine? AND Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
bar_tab2(temp$A23.12, temp$A27.12, "Leakage", "Count", "Pad use", missing=F)

Proportion pad free and leak free

Code
prop_conf2(temp$A23.12, temp$A27.12, "Rarely or never", "None")

PointEst  Lower  Upper
0.2       0.16   0.25

Median and interquartile range of the urinary function score

Code
quantile(temp$Uscore.12, c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 52  65  79 

Here is the spaghetti plot for all participants at all time points.

Code
d_tmp = l[l$Study_ID %in% temp$Study_ID,]
spaghetti(d_tmp, d_tmp$Time, d_tmp$Uscore, d_tmp$Study_ID)

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Have you used any medications or devices to aid or improve erections?

Code
bar_tab2(sf$A59.12, sf$Assist.12, "Quality", "Count", "Medication and device use", missing=F, save="Figures/Figure3D.jpg")

Proportion with adequate erections

Code
prop_conf(sf$A59.12, "Firm enough for intercourse")

PointEst  Lower  Upper
0.1       0.088  0.12

Proportion with natural adequate erections

Code
prop_conf2(sf$A59.12, sf$Assist.12, "Firm enough for intercourse", "No")

PointEst  Lower  Upper
0.035     0.026  0.047

Proportion with assisted adequate erections

Code
prop_conf2(sf$A59.12, sf$Assist.12, "Firm enough for intercourse", "Yes")

PointEst  Lower  Upper
0.069     0.056  0.085

Proportion without adequate erections

Code
prop_conf(sf$A59.12, "Firm enough for intercourse", T)

PointEst  Lower  Upper
0.9       0.88   0.91

Code
assist_use(sf, 12)

The median score and interquartile range were

Code
quantile(sf$Sscore.12, c(.25, .5, .75), na.rm=T)
 25%  50%  75% 
 8.3 18.0 40.3 

A spaghetti plot of the sexual function scores is shown below

Code
spaghetti(l, l$Time, l$Sscore, l$Study_ID)

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Have you used any medications or devices to aid or improve erections?

Code
bar_tab2(temp$A59.12, temp$Assist.12, "Erection", "Count", "Medication and device use", missing=F)

Proportion with adequate erections

Code
prop_conf(temp$A59.12, "Firm enough for intercourse")

PointEst  Lower  Upper
0.17      0.14   0.2

Proportion with natural adequate erections

Code
prop_conf2(temp$A59.12, temp$Assist.12, "Firm enough for intercourse", "No")

PointEst  Lower  Upper
0.059     0.043  0.08

Proportion with assisted adequate erections

Code
prop_conf2(temp$A59.12, temp$Assist.12, "Firm enough for intercourse", "Yes")

PointEst  Lower  Upper
0.11      0.086  0.13

Proportion without adequate erections

Code
prop_conf(temp$A59.12, "Firm enough for intercourse", T)

PointEst  Lower  Upper
0.83      0.8    0.86

Code
assist_use(temp, 12)

The median score and interquartile range were

Code
quantile(temp$Sscore.12, c(.25, .5, .75), na.rm=T)
25% 50% 75% 
 12  26  57 

A spaghetti plot of the sexual function scores is shown below

Code
d_tmp = l[l$Study_ID %in% temp$Study_ID,]
spaghetti(d_tmp, d_tmp$Time, d_tmp$Sscore, d_tmp$Study_ID)

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Have you used any medications or devices to aid or improve erections?

Code
bar_tab2(temp$A59.12, temp$Assist.12, "Quality", "Count", "Medication and device use", missing=F)

Proportion with adequate erections

Code
prop_conf(temp$A59.12, "Firm enough for intercourse")

PointEst  Lower  Upper
0.035     0.023  0.054

Proportion with natural adequate erections

Code
prop_conf2(temp$A59.12, temp$Assist.12, "Firm enough for intercourse", "No")

PointEst  Lower   Upper
0.0088    0.0038  0.02

Proportion with assisted adequate erections

Code
prop_conf2(temp$A59.12, temp$Assist.12, "Firm enough for intercourse", "Yes")

PointEst  Lower  Upper
0.026     0.016  0.043

Proportion without adequate erections

Code
prop_conf(temp$A59.12, "Firm enough for intercourse", T)

PointEst  Lower  Upper
0.96      0.95   0.98

Code
assist_use(temp, 12)

The median score and interquartile range were

Code
quantile(temp$Sscore.12, c(.25, .5, .75), na.rm=T)
 25%  50%  75% 
 8.3 16.7 27.8 

A spaghetti plot of the sexual function scores is shown below

Code
d_tmp = l[l$Study_ID %in% temp$Study_ID,]
spaghetti(d_tmp, d_tmp$Time, d_tmp$Sscore, d_tmp$Study_ID)

Score and definitions comparison

Table of the proportion leak free at each time point with upper and lower 95% confidence intervals.

Code
times_leakfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     691        0.78   0.755  0.81
1     92         0.10   0.086  0.13
3     190        0.22   0.190  0.24
6     337        0.38   0.351  0.41
12    411        0.47   0.433  0.50

Table of the proportion pad free at each time point with upper and lower 95% confidence intervals.

Code
times_padfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     862        0.98   0.965  0.99
1     101        0.11   0.095  0.14
3     266        0.30   0.272  0.33
6     465        0.53   0.494  0.56
12    571        0.65   0.615  0.68

Table of the proportion leak and pad free at each time point with upper and lower 95% confidence intervals.

Code
times_leakpadfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     687        0.779  0.750  0.805
1     64         0.073  0.057  0.092
3     165        0.187  0.163  0.214
6     302        0.342  0.312  0.374
12    385        0.437  0.404  0.469

A table showing the median and interquartile range at the different time points is given below

Code
times_Uscore(all_tmp)

Time      Median  Lower.IQR  Upper.IQR
Baseline  100     86         100
Month 1   50      27         65
Month 3   60      48         77
Month 6   71      58         94
Month 12  77      60         100

Table of the proportion leak free at each time point with upper and lower 95% confidence intervals.

Code
times_leakfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     691        1.00   0.994  1.00
1     82         0.12   0.097  0.14
3     167        0.24   0.211  0.27
6     290        0.42   0.383  0.46
12    364        0.53   0.490  0.56

Table of the proportion pad free at each time point with upper and lower 95% confidence intervals.

Code
times_padfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     687        0.99   0.99   1.00
1     87         0.13   0.10   0.15
3     214        0.31   0.28   0.35
6     391        0.57   0.53   0.60
12    476        0.69   0.65   0.72

Table of the proportion leak and pad free at each time point with upper and lower 95% confidence intervals.

Code
times_leakpadfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     687        0.994  0.985  1.00
1     60         0.087  0.068  0.11
3     143        0.207  0.178  0.24
6     260        0.376  0.341  0.41
12    345        0.499  0.462  0.54

A table showing the median and interquartile range at the different time points is given below

Code
times_Uscore(all_tmp)

Time      Median  Lower.IQR  Upper.IQR
Baseline  100     100        100
Month 1   50      29         65
Month 3   65      50         79
Month 6   73      60         94
Month 12  86      65         100

Table of the proportion leak free at each time point with upper and lower 95% confidence intervals.

Code
times_leakfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     687        0.80   0.769  0.82
1     92         0.11   0.088  0.13
3     189        0.22   0.193  0.25
6     335        0.39   0.357  0.42
12    408        0.47   0.440  0.51

Table of the proportion pad free at each time point with upper and lower 95% confidence intervals.

Code
times_padfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     862        1.00   0.996  1.00
1     101        0.12   0.097  0.14
3     264        0.31   0.276  0.34
6     463        0.54   0.504  0.57
12    568        0.66   0.627  0.69

Table of the proportion leak and pad free at each time point with upper and lower 95% confidence intervals.

Code
times_leakpadfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     687        0.797  0.769  0.822
1     64         0.074  0.059  0.094
3     164        0.190  0.165  0.218
6     301        0.349  0.318  0.382
12    383        0.444  0.411  0.478

A table showing the median and interquartile range at the different time points is given below

Code
times_Uscore(all_tmp)

Time      Median  Lower.IQR  Upper.IQR
Baseline  100     86         100
Month 1   50      27         65
Month 3   62      50         77
Month 6   71      58         94
Month 12  77      60         100

Table of the proportion leak free at each time point with upper and lower 95% confidence intervals.

Code
times_leakfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     687        1.00   0.994  1.00
1     82         0.12   0.097  0.15
3     167        0.24   0.212  0.28
6     290        0.42   0.386  0.46
12    364        0.53   0.492  0.57

Table of the proportion pad free at each time point with upper and lower 95% confidence intervals.

Code
times_padfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     687        1.00   0.99   1.00
1     87         0.13   0.10   0.15
3     214        0.31   0.28   0.35
6     391        0.57   0.53   0.61
12    476        0.69   0.66   0.73

Table of the proportion leak and pad free at each time point with upper and lower 95% confidence intervals.

Code
times_leakpadfree(all_tmp)

Time  Frequency  Point  Lower  Upper
0     687        1.000  0.994  1.00
1     60         0.087  0.068  0.11
3     143        0.208  0.179  0.24
6     260        0.378  0.343  0.42
12    345        0.502  0.465  0.54

A table showing the median and interquartile range at the different time points is given below

Code
times_Uscore(all_tmp)

Time      Median  Lower.IQR  Upper.IQR
Baseline  100     100        100
Month 1   50      29         65
Month 3   65      50         79
Month 6   73      60         94
Month 12  86      65         100

Table of the proportion of adequate natural erections at each time point with upper and lower 95% confidence intervals.

Code
times_natural(all_tmp)

Time  Frequency  Point  Lower  Upper
0     344        0.501  0.463  0.538
1     17         0.025  0.016  0.039
3     13         0.019  0.011  0.032
6     17         0.025  0.016  0.039
12    24         0.035  0.024  0.051

Table of the proportion of adequate assisted erections at each time point with upper and lower 95% confidence intervals.

Code
times_assist(all_tmp)

Time  Frequency  Point   Lower     Upper
0     42         0.0611  0.045545  0.0816
1     1          0.0015  0.000075  0.0082
3     10         0.0146  0.007925  0.0266
6     18         0.0262  0.016636  0.0410
12    38         0.0553  0.040561  0.0750

A table showing the median and interquartile range at the different time points is given below

Code
times_Sscore(all_tmp)

Time      Median  Lower.IQR  Upper.IQR
Baseline  75      44.5       92
Month 1   14      4.2        20
Month 3   17      5.5        26
Month 6   17      8.3        32
Month 12  20      12.5       42

Table of the proportion of adequate natural erections at each time point with upper and lower 95% confidence intervals.

Code
times_natural(all_tmp)

Time  Frequency  Point  Lower  Upper
0     344        1.000  0.989  1.000
1     16         0.047  0.029  0.074
3     13         0.038  0.022  0.064
6     16         0.047  0.029  0.074
12    22         0.064  0.043  0.095

Table of the proportion of adequate assisted erections at each time point with upper and lower 95% confidence intervals.

Code
times_assist(all_tmp)

Time  Frequency  Point   Lower    Upper
0     0          0.0000  0.00000  0.011
1     1          0.0029  0.00015  0.016
3     10         0.0291  0.01587  0.053
6     14         0.0407  0.02440  0.067
12    28         0.0814  0.05691  0.115

A table showing the median and interquartile range at the different time points is given below

Code
times_Sscore(all_tmp)

Time      Median  Lower.IQR  Upper.IQR
Baseline  91      83.3       100
Month 1   17      4.2        26
Month 3   17      8.3        36
Month 6   18      8.3        40
Month 12  28      12.5       56

All time points

Leakage at all time points for 915 participants who reported leakage at every time point.

Q23: Over the last 4 weeks, how often have you leaked urine?

Code
tl = w[!is.na(w$A23.0) & !is.na(w$A23.1) & !is.na(w$A23.3) & !is.na(w$A23.6) & !is.na(w$A23.12),] 
tl = l[l$Study_ID %in% tl$Study_ID,]
bar_tab2(tl$Time, tl$A23,"Time", "Count", "Leakage", missing=F)
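The repeated `!is.na()` chains used throughout this section to select participants with a response at every time point can be written more compactly with `complete.cases`. The sketch below is an equivalent, illustrative alternative rather than the project's own code:

```r
# Keep participants with a Q23 response at every time point
# (equivalent to the chained !is.na() filter above).
cols = paste0("A23.", c(0, 1, 3, 6, 12))
tl = w[complete.cases(w[, cols]), ]
tl = l[l$Study_ID %in% tl$Study_ID, ]
```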

Pad use at all time points for 939 participants who reported pad use at every time point.

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
tl = w[!is.na(w$A27.0) & !is.na(w$A27.1) & !is.na(w$A27.3) & !is.na(w$A27.6) & !is.na(w$A27.12),] 
tl = l[l$Study_ID %in% tl$Study_ID,]
bar_tab2(tl$Time, tl$A27,"Time", "Count", "Pad use", missing=F)

Erection quality at all time points for 822 participants who reported erection quality at every time point.

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks?

Code
tl = w[!is.na(w$A59.0) & !is.na(w$A59.1) & !is.na(w$A59.3) & !is.na(w$A59.6) & !is.na(w$A59.12),] 
tl = l[l$Study_ID %in% tl$Study_ID,]
bar_tab2(tl$Time, tl$A59,"Time", "Count", "Erection quality", missing=F)

Problem scores

Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
Lpcp(lp)

Twelve month results for leakage.

Code
tab = proc_freq(lp, "A28.12")
tab = set_header_labels(tab, A28.12="Problem", count="Number of participants")
tab

Problem             Number of participants  Percent
No Problem          543                     38.1%
Very Small Problem  554                     38.9%
Small Problem       197                     13.8%
Moderate Problem    78                      5.5%
Big Problem         52                      3.7%
Total               1,424                   100.0%

Proportion with no problems

Code
prop_conf(lp$A28.12, "No Problem")

PointEst  Lower  Upper
0.38      0.36   0.41

Proportion with small problem or lower

Code
prop_conf(lp$A28.12, c("No Problem", "Very Small Problem", "Small Problem"))

PointEst  Lower  Upper
0.91      0.89   0.92

Twelve month results for leakage.

Code
l0 = lp[lp$A28.0=="No Problem",]
tab = proc_freq(l0, "A28.12")
tab = set_header_labels(tab, A28.12="Problem", count="Number of participants")
tab

Problem             Number of participants  Percent
No Problem          462                     44.8%
Very Small Problem  389                     37.7%
Small Problem       109                     10.6%
Moderate Problem    42                      4.1%
Big Problem         30                      2.9%
Total               1,032                   100.0%

Proportion with no problems

Code
prop_conf(l0$A28.12, "No Problem")

PointEst  Lower  Upper
0.45      0.42   0.48

Proportion with small problem or lower

Code
prop_conf(l0$A28.12, c("No Problem", "Very Small Problem", "Small Problem"))

PointEst  Lower  Upper
0.93      0.91   0.94

Q34: Overall, how big a problem has your urinary function been for you during the last 4 weeks?

Code
Upcp(up)

Twelve month results for urinary function.

Code
tab = proc_freq(up, "A34.12")
tab = set_header_labels(tab, A34.12="Problem", count="Number of participants")
tab

Problem             Number of participants  Percent
No problem          671                     47.0%
Very small problem  451                     31.6%
Small problem       178                     12.5%
Moderate problem    94                      6.6%
Big problem         34                      2.4%
Total               1,428                   100.0%

Proportion with no problems

Code
prop_conf(up$A34.12, "No problem")

PointEst  Lower  Upper
0.47      0.44   0.5

Proportion with small problem or lower

Code
prop_conf(up$A34.12, c("No problem", "Very small problem", "Small problem"))

PointEst  Lower  Upper
0.91      0.89   0.92

Twelve month results for urinary function.

Code
u0 = up[up$A34.0=="No problem",]
tab = proc_freq(u0, "A34.12")
tab = set_header_labels(tab, A34.12="Problem", count="Number of participants")
tab

Problem             Number of participants  Percent
No problem          400                     63.4%
Very small problem  169                     26.8%
Small problem       38                      6.0%
Moderate problem    17                      2.7%
Big problem         7                       1.1%
Total               631                     100.0%

Proportion with no problems

Code
prop_conf(u0$A34.12, "No problem")

PointEst  Lower  Upper
0.63      0.6    0.67

Proportion with small problem or lower

Code
prop_conf(up$A34.12, c("No problem", "Very small problem", "Small problem"))

PointEst  Lower  Upper
0.91      0.89   0.92

Twelve month results for sexual function.

Code
tab = proc_freq(sp, "A68.12")
tab = set_header_labels(tab, A68.12="Problem", count="Number of participants")
tab

Problem             Number of participants  Percent
No problem          248                     17.9%
Very small problem  233                     16.8%
Small problem       302                     21.8%
Moderate problem    321                     23.1%
Big problem         284                     20.5%
Total               1,388                   100.0%

Proportion with no problems

Code
prop_conf(sp$A68.12, "No problem")

PointEst  Lower  Upper
0.18      0.16   0.2

Proportion with small problem or lower

Code
prop_conf(sp$A68.12, c("No problem", "Very small problem", "Small problem"))

PointEst  Lower  Upper
0.56      0.54   0.59

Twelve month results for sexual function.

Code
s0 = sp[sp$A68.0=="No problem",]
tab = proc_freq(s0, "A68.12")
tab = set_header_labels(tab, A68.12="Problem", count="Number of participants")
tab

Problem             Number of participants  Percent
No problem          180                     26.5%
Very small problem  125                     18.4%
Small problem       137                     20.2%
Moderate problem    131                     19.3%
Big problem         105                     15.5%
Total               678                     100.0%

Proportion with no problems

Code
prop_conf(sp$A68.12, "No problem")

PointEst  Lower  Upper
0.18      0.16   0.2

Proportion with small problem or lower

Code
prop_conf(sp$A68.12, c("No problem", "Very small problem", "Small problem"))

PointEst  Lower  Upper
0.56      0.54   0.59

Problem and function

This section shows the problems associated with each of the function domains.

Code
tl = w[!is.na(w$A23.0) & !is.na(w$A23.1) & !is.na(w$A23.3) & !is.na(w$A23.6) & !is.na(w$A23.12) &
  !is.na(w$A28.0) & !is.na(w$A28.1) & !is.na(w$A28.3) & !is.na(w$A28.6) & !is.na(w$A28.12),]
tp = w[!is.na(w$A27.0) & !is.na(w$A27.1) & !is.na(w$A27.3) & !is.na(w$A27.6) & !is.na(w$A27.12) &
  !is.na(w$A28.0) & !is.na(w$A28.1) & !is.na(w$A28.3) & !is.na(w$A28.6) & !is.na(w$A28.12),]
tq = w[!is.na(w$A59.0) & !is.na(w$A59.1) & !is.na(w$A59.3) & !is.na(w$A59.6) & !is.na(w$A59.12) &
  !is.na(w$A68.0) & !is.na(w$A68.1) & !is.na(w$A68.3) & !is.na(w$A68.6) & !is.na(w$A68.12),]
tf = w[!is.na(w$A60.0) & !is.na(w$A60.1) & !is.na(w$A60.3) & !is.na(w$A60.6) & !is.na(w$A60.12) &
  !is.na(w$A68.0) & !is.na(w$A68.1) & !is.na(w$A68.3) & !is.na(w$A68.6) & !is.na(w$A68.12),]

Comparison of leakage frequency and the leakage problem score for those reporting leakage and leakage problem at all time points.

Q23: Over the past four weeks, how often have you leaked urine? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tl$A23.0, tl$A28.0, "Leakage", "Count", "Problem", missing=F)

Q23: Over the past four weeks, how often have you leaked urine? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tl$A23.1, tl$A28.1, "Leakage", "Count", "Problem", missing=F)

Q23: Over the past four weeks, how often have you leaked urine? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tl$A23.3, tl$A28.3, "Leakage", "Count", "Problem", missing=F)

Q23: Over the past four weeks, how often have you leaked urine? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tl$A23.6, tl$A28.6, "Leakage", "Count", "Problem", missing=F)

Q23: Over the past four weeks, how often have you leaked urine? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tl$A23.12, tl$A28.12, "Leakage", "Count", "Problem", missing=F)

Comparison of the number of pads used and the leakage problem score for those reporting pad use and leakage problem at all time points.

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tp$A27.0, tp$A28.0, "Pads", "Count", "Problem", missing=F)

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tp$A27.1, tp$A28.1, "Pads", "Count", "Problem", missing=F)

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tp$A27.3, tp$A28.3, "Pads", "Count", "Problem", missing=F)

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tp$A27.6, tp$A28.6, "Pads", "Count", "Problem", missing=F)

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks? AND Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine

Code
bar_tab2(tp$A27.12, tp$A28.12, "Pads", "Count", "Problem", missing=F)

Comparison of erection quality and the sexual function problem score for those reporting both at all time points.

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tq$A59.0, tq$A68.0, "Quality", "Count", "Problem", missing=F)

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tq$A59.1, tq$A68.1, "Quality", "Count", "Problem", missing=F)

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tq$A59.3, tq$A68.3, "Quality", "Count", "Problem", missing=F)

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tq$A59.6, tq$A68.6, "Quality", "Count", "Problem", missing=F)

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tq$A59.12, tq$A68.12, "Quality", "Count", "Problem", missing=F)

Comparison of erection frequency and the sexual function problem score for those reporting both at every time point.

Q60: How would you describe the FREQUENCY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tf$A60.0, tf$A68.0, "Frequency", "Count", "Problem", missing=F)

Q60: How would you describe the FREQUENCY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tf$A60.1, tf$A68.1, "Frequency", "Count", "Problem", missing=F)

Q60: How would you describe the FREQUENCY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tf$A60.3, tf$A68.3, "Frequency", "Count", "Problem", missing=F)

Q60: How would you describe the FREQUENCY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tf$A60.6, tf$A68.6, "Frequency", "Count", "Problem", missing=F)

Q60: How would you describe the FREQUENCY of your erections during the last 4 weeks? AND Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(tf$A60.12, tf$A68.12, "Frequency", "Count", "Problem", missing=F)

Problem over time

Code
tl = w[!is.na(w$A28.0) & !is.na(w$A28.1) & !is.na(w$A28.3) & !is.na(w$A28.6) & !is.na(w$A28.12),]
tu = w[!is.na(w$A34.0) & !is.na(w$A34.1) & !is.na(w$A34.3) & !is.na(w$A34.6) & !is.na(w$A34.12),]
ts = w[!is.na(w$A68.0) & !is.na(w$A68.1) & !is.na(w$A68.3) & !is.na(w$A68.6) & !is.na(w$A68.12),] 
tl = l[l$Study_ID %in% tl$Study_ID,]
tu = l[l$Study_ID %in% tu$Study_ID,]
ts = l[l$Study_ID %in% ts$Study_ID,]
tl = tl[!is.na(tl$A28),]
tu = tu[!is.na(tu$A34),]
ts = ts[!is.na(ts$A68),]

Leakage problem at all time points for 934 participants who reported leakage problem at every time point.

Q28: How big a problem, if any, has each of the following been for you during the last 4 weeks? Dripping or leaking urine.

Code
bar_tab2(tl$Time, tl$A28, "Time", "Count", "Problem", missing=F)

Urinary problem at all time points for 934 participants who reported urinary problem at every time point.

Q34: Overall, how big a problem has your urinary function been for you during the last 4 weeks?

Code
bar_tab2(tu$Time, tu$A34, "Time", "Count", "Problem", missing=F)

Sexual problem at all time points for 890 participants who reported sexual problem at every time point.

Q68: Overall, how big a problem has your sexual function or lack of sexual function been for you during the last 4 weeks?

Code
bar_tab2(ts$Time, ts$A68, "Time", "Count", "Problem", missing=F)

Baseline only

Code
sprob0 =  ts[ts$Time==0,]
bar_tab(sprob0$A68, "Problem", "Count", save="Figures/Figure3A.jpg")

Twelve months only

Code
sprob12 =  ts[ts$Time==12,]
bar_tab(sprob12$A68, "Problem", "Count", save="Figures/Figure3B.jpg")

Comparisons

These values are used in the discussion to compare with previous studies.

Percentage with low-risk cancer for participants with risk category reported (according to D’Amico classification)

Code
tmp = promwc[complete.cases(promwc$DAmico),]
proc_freq(tmp, "DAmico")

DAmico        Count  Percent
Low           48     5.2%
Intermediate  256    27.6%
High          625    67.3%
Total         929    100.0%

The scores are skewed, so the median should be used instead of the mean (as we have shown in Section 3); the means are reported here only for comparison with previous work, which may have used means.
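To illustrate the point with toy numbers (not study data), a handful of very low scores pulls the mean well below the median:

```r
# Toy skewed scores (illustrative only, not study data)
scores = c(100, 100, 95, 90, 85, 20, 10, 5)
mean(scores)    # 63.125, dragged down by the low tail
median(scores)  # 87.5, closer to the typical value
```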

Mean urinary score

Code
mean(uf$Uscore.12)
[1] 76

Mean sexual function score

Code
mean(sf$Sscore.12)
[1] 28

This excludes the unknown participants.

Baseline erection quality

Code
tmp = sf[complete.cases(sf$A59.0),]
tab = proc_freq(tmp, "A59.0")
tab = set_header_labels(tab,A59.0="Erection quality")
tab

Erection quality                                Count  Percent
Firm enough for intercourse                     703    58.6%
Firm enough for masturbation and foreplay only  287    23.9%
Not firm enough for any sexual activity         107    8.9%
None at all                                     103    8.6%
Total                                           1,200  100.0%

Of the 703 with erections firm enough for intercourse at baseline, the erection quality and medication/device use at 12 months are

Code
baseSuff = sf[sf$A59.0=='Firm enough for intercourse',]
tmp = data.frame("Erection"=baseSuff$A59.12, "Assistance"=baseSuff$Assist.12)
proc_freq(tmp, "Erection", "Assistance", include.row_percent = F, include.column_percent = F, include.table_percent = F)

Erection                                        Assistance
                                                No    Yes   Total
Firm enough for intercourse                     38    74    112
Firm enough for masturbation and foreplay only  41    159   200
Not firm enough for any sexual activity         31    84    115
None at all                                     90    186   276
Total                                           200   503   703

Parallel coordinate plots

These plots show participant progress across time. Each line represents an individual participant, with the colour corresponding to the final outcome at 12 months.

Code
u = w[!is.na(w$A23.0) & !is.na(w$A27.0) &
      !is.na(w$A23.6) & !is.na(w$A27.6) &
      !is.na(w$A23.12) & !is.na(w$A27.12),]

s = w[!is.na(w$A59.0) & !is.na(w$A59.6) & !is.na(w$A59.12),]

Q23: Over the past 4 weeks, how often have you leaked urine?

Code
plt = u |>
  pcp_select(A23.0, A23.12) |>
  pcp_scale() |>
  pcp_arrange(method="from-right")|>
  ggplot(aes_pcp()) +
  geom_pcp_boxes(boxwidth=0.1) +
  geom_pcp(aes(colour = A23.12), alpha = 0.1, axiswidth = c(0,0)) +
  guides(colour=guide_legend(override.aes = list(alpha=1))) +
  geom_pcp_labels() +
  scale_x_discrete(expand = expansion(add=0.15),
                   labels=c("A23.0" = "Baseline leakage",
                            "A23.12" = "Twelve month leakage")) +
  labs(x='Item', y='Proportion', color='Twelve month leakage') +
  theme(legend.position="none")
plt

Q27: How many pads per day did you usually use to control leakage during the last 4 weeks?

Code
plt = u |>
  pcp_select(A27.0, A27.12) |>
  pcp_scale() |>
  pcp_arrange(method="from-right")|>
  ggplot(aes_pcp()) +
  geom_pcp_boxes(boxwidth=0.1) +
  geom_pcp(aes(colour = A27.12), alpha = 0.1, axiswidth = c(0,0)) +
  guides(colour=guide_legend(override.aes = list(alpha=1))) +
  geom_pcp_labels() +
  scale_x_discrete(expand = expansion(add=0.2),
                   labels=c("A27.0" = "Baseline pad use",
                            "A27.12" = "Twelve month pad use")) +
  labs(x='Item', y='Proportion', color='Twelve month pad use') +
  theme(legend.position="none")
plt

Q59: How would you describe the usual QUALITY of your erections during the last 4 weeks?

Code
plt = s |>
  pcp_select(A59.0, A59.12) |>
  pcp_scale() |>
  pcp_arrange(method="from-right")|>
  ggplot(aes_pcp()) +
  geom_pcp_boxes(boxwidth=0.1) +
  geom_pcp(aes(colour = A59.12), alpha = 0.1, axiswidth = c(0,0)) +
  guides(colour=guide_legend(override.aes = list(alpha=1))) +
  geom_pcp_labels() +
  scale_x_discrete(expand = expansion(add=0.5),
                   labels=c("A59.0" = "Baseline erections",
                            "A59.12" = "Twelve month erections")) +
  labs(x='Item', y='Proportion', color='Twelve month erections') +
  theme(legend.position="none")
plt

Missing data

Code for the multiple imputation is given here. Multiple imputation by chained equations was used. The coded question answers are treated as ordinal. Use of assistance is treated as binary. Age is treated as continuous and is given no transformation.
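The set-up described above might look like the following sketch, assuming `mice::make.method` together with the built-in methods "polr" (ordinal, requiring ordered factors), "logreg" (binary) and "pmm" (continuous). The variable-name patterns are illustrative and the actual call used for the analysis may differ:

```r
library(mice)

# Illustrative sketch only of the imputation model described above.
meth = make.method(w)
meth[grep("^A[0-9]+\\.", names(w))] = "polr"    # ordinal questionnaire items
meth[grep("^Assist\\.", names(w))]  = "logreg"  # binary assistance use
meth["Age"] = "pmm"                             # continuous age, untransformed
imp = mice(w, m = 5, method = meth, seed = 1)
```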

The percentage of participants with missing responses at different time points is summarised in the table below.

Code
n = 2030  # total number of participants
items = c("A23", "A26", "A27", "A28",
          "A57", "A58", "A59", "A60", "A64", "A68",
          "Assist")
times = c("0", "1", "3", "6", "12")

# Percentage of participants missing a response in any of the given columns
pct_miss = function(cols) sum(Reduce(`|`, lapply(w[cols], is.na))) / n * 100

# One column of the summary: Age, each item at this time point, and
# participants missing any of these
time_col = function(t) {
  cols = paste(items, t, sep=".")
  unname(c(pct_miss("Age"), sapply(cols, pct_miss), pct_miss(c("Age", cols))))
}

df_miss = data.frame(
  "Item"     = c("Age", items, "Any"),
  "Baseline" = time_col("0"),
  "Month 1"  = time_col("1"),
  "Month 3"  = time_col("3"),
  "Month 6"  = time_col("6"),
  "Month 12" = time_col("12"),
  "Any" = unname(c(
    pct_miss("Age"),
    sapply(items, function(it) pct_miss(paste(it, times, sep="."))),
    pct_miss(c("Age", as.vector(outer(items, times, paste, sep="."))))
  ))
)
flextable(df_miss)

Item    Baseline  Month 1  Month 3  Month 6  Month 12  Any
Age     3.60      3.6      3.6      3.6      3.6       3.6
A23     0.39      29.7     30.3     32.6     29.9      54.9
A26     0.39      29.7     30.1     32.2     29.8      54.6
A27     0.20      29.6     30.0     32.0     29.6      53.7
A28     0.89      29.5     30.3     32.3     29.4      54.0
A57     3.45      32.1     33.6     34.9     33.3      62.7
A58     5.07      33.4     34.6     35.7     34.4      65.3
A59     2.61      31.4     32.6     34.1     31.6      59.5
A60     2.51      31.9     32.1     33.6     31.0      58.1
A64     2.27      31.3     31.7     33.5     30.7      57.1
A68     1.87      30.6     31.2     33.0     30.6      56.2
Assist  2.32      31.0     32.3     34.2     32.5      59.3
Any     11.77     36.1     37.1     38.5     38.4      70.7

The missing responses in the urinary incontinence and sexual function domains lead to missing domain scores. The percentage of participants with missing domain scores is shown below.

Code
df_miss = data.frame(
  "Item"=c("Urinary", "Sexual", "Either"),
  
  "Baseline"=c(
    (length(w$Study_ID[is.na(w$Uscore.0)])/2030)*100,
    (length(w$Study_ID[is.na(w$Sscore.0)])/2030)*100,
    (length(w$Study_ID[is.na(w$Uscore.0) | is.na(w$Sscore.0)])/2030)*100
  ),
  "Month 1"=c(
    (length(w$Study_ID[is.na(w$Uscore.1)])/2030)*100,
    (length(w$Study_ID[is.na(w$Sscore.1)])/2030)*100,
    (length(w$Study_ID[is.na(w$Uscore.1) | is.na(w$Sscore.1)])/2030)*100
  ),
  "Month 3"=c(
    (length(w$Study_ID[is.na(w$Uscore.3)])/2030)*100,
    (length(w$Study_ID[is.na(w$Sscore.3)])/2030)*100,
    (length(w$Study_ID[is.na(w$Uscore.3) | is.na(w$Sscore.3)])/2030)*100
  ),
  "Month 6"=c(
    (length(w$Study_ID[is.na(w$Uscore.6)])/2030)*100,
    (length(w$Study_ID[is.na(w$Sscore.6)])/2030)*100,
    (length(w$Study_ID[is.na(w$Uscore.6) | is.na(w$Sscore.6)])/2030)*100
  ),
  "Month 12"=c(
    (length(w$Study_ID[is.na(w$Uscore.12)])/2030)*100,
    (length(w$Study_ID[is.na(w$Sscore.12)])/2030)*100,
    (length(w$Study_ID[is.na(w$Uscore.12) | is.na(w$Sscore.12)])/2030)*100
  ),
  "Any"=c(
    (length(w$Study_ID[is.na(w$Uscore.0) | is.na(w$Uscore.1) | is.na(w$Uscore.3)| 
                       is.na(w$Uscore.6) | is.na(w$Uscore.12)])/2030)*100,
    (length(w$Study_ID[is.na(w$Sscore.0) | is.na(w$Sscore.1) | is.na(w$Sscore.3)| 
                       is.na(w$Sscore.6) | is.na(w$Sscore.12)])/2030)*100,

    (length(w$Study_ID[
                       is.na(w$Uscore.0) | is.na(w$Uscore.1) | is.na(w$Uscore.3)| 
                       is.na(w$Uscore.6) | is.na(w$Uscore.12) |
                       is.na(w$Sscore.0) | is.na(w$Sscore.1) | is.na(w$Sscore.3)| 
                       is.na(w$Sscore.6) | is.na(w$Sscore.12)
                       ])/2030)*100
  )
)
flextable(df_miss)

Item     Baseline  Month 1  Month 3  Month 6  Month 12  Any
Urinary  1.7       30       31       33       31        57
Sexual   3.8       33       34       35       33        63
Either   5.2       33       34       36       35        65

The missmap function in Amelia can be used to produce a missingness map.

Code
set.seed(1)



imp = subset(w, select=c(
               'C23.0', 'C26.0', 'C27.0', 'C28.0',
               'C57.0', 'C58.0', 'C59.0', 'C60.0', 'C64.0', 'C68.0',
               'C23.1', 'C26.1', 'C27.1', 'C28.1',
               'C57.1', 'C58.1', 'C59.1', 'C60.1', 'C64.1', 'C68.1',
               'C23.3', 'C26.3', 'C27.3', 'C28.3',
               'C57.3', 'C58.3', 'C59.3', 'C60.3', 'C64.3', 'C68.3',
               'C23.6', 'C26.6', 'C27.6', 'C28.6',
               'C57.6', 'C58.6', 'C59.6', 'C60.6', 'C64.6', 'C68.6',
               'C23.12', 'C26.12', 'C27.12', 'C28.12',
               'C57.12', 'C58.12', 'C59.12', 'C60.12', 'C64.12', 'C68.12',
               'Assist.0', 'Assist.1', 'Assist.3', 'Assist.6', 'Assist.12',
               'Age'

))

toOrdinal <- c('C23.0', 'C26.0', 'C27.0', 'C28.0',
               'C57.0', 'C58.0', 'C59.0', 'C60.0', 'C64.0', 'C68.0',
               'C23.1', 'C26.1', 'C27.1', 'C28.1',
               'C57.1', 'C58.1', 'C59.1', 'C60.1', 'C64.1', 'C68.1',
               'C23.3', 'C26.3', 'C27.3', 'C28.3',
               'C57.3', 'C58.3', 'C59.3', 'C60.3', 'C64.3', 'C68.3',
               'C23.6', 'C26.6', 'C27.6', 'C28.6',
               'C57.6', 'C58.6', 'C59.6', 'C60.6', 'C64.6', 'C68.6',
               'C23.12', 'C26.12', 'C27.12', 'C28.12',
               'C57.12', 'C58.12', 'C59.12', 'C60.12', 'C64.12', 'C68.12'
               )
imp[toOrdinal] <- lapply(imp[toOrdinal], ordered)

toFactors <- c('Assist.0', 'Assist.1', 'Assist.3', 'Assist.6', 'Assist.12')
imp[toFactors] <- lapply(imp[toFactors], as.factor)

missmap(imp)

This shows that 25% of the total data is missing.
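The 25% figure can also be checked directly from the data frame prepared above, using a one-line sketch (this assumes the imp object defined in the previous block):

```r
# Percentage of all cells in imp that are missing
round(mean(is.na(imp)) * 100)  # approximately 25
```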

Multiple imputation by chained equations was performed using the domain responses together with assistance use (yes/no) and participant age at surgery. As 25% of the data is missing, 25 imputed datasets are created.

Code
# This code block will take a long time to render (possibly days).
# Use {r, MI1, eval=FALSE} after initial run to prevent this running. 
# After the first run the required files should already be saved.

MI = mice(imp, m=25, seed=1, maxit=20)
write.datlist(MI, 'MI/', type='csv')
Code
folder1 = 'MI/MI'
folder2 = 'MI/UAll'
folder3 = 'MI/UBaselineLPFree'
folder4 = 'MI/UNBaselineLPFree'

for (i in 1:25){

  mi_file1 = paste(folder1,'/__IMPDATA',i,'.csv', sep='')
  mi_file2 = paste(folder2,'/MI__IMPDATA',i,'.csv', sep='')
  mi_file3 = paste(folder3,'/MI__IMPDATA',i,'.csv', sep='')
  mi_file4 = paste(folder4,'/MI__IMPDATA',i,'.csv', sep='')

  mi_csv = read.csv(mi_file1, header=T)

  mi_csv$LeakFree = ifelse(mi_csv$C23.12==100,1,0)
  mi_csv$PadFree = ifelse(mi_csv$C27.12==100,1,0)
  mi_csv$LeakPadFree = ifelse(mi_csv$C23.12==100 & mi_csv$C27.12==100,1,0)

  mi_lpfree = subset(mi_csv, C23.0==100 & C27.0==100)
  mi_nlpfree = subset(mi_csv, C23.0<100 | C27.0<100)

  write.csv(mi_csv, mi_file2, row.names = F)
  write.csv(mi_lpfree, mi_file3, row.names = F)
  write.csv(mi_nlpfree, mi_file4, row.names = F)

}

Complete case results from section 5.

Code
df_tmp = data.frame("Type"=c("Leak Free", "Pad Free", "Leak and Pad Free"),
                    "Point"=c(0.4459654,0.6512968,0.4200288 ),
                    "Lower"=c(0.4200002,0.6258394,0.394319),
                    "Upper"=c(0.4722289,0.6759191,0.4461801)
                    )

flextable(df_tmp)

Type               Point  Lower  Upper
Leak Free          0.45   0.42   0.47
Pad Free           0.65   0.63   0.68
Leak and Pad Free  0.42   0.39   0.45

Imputed results

Code
# Read the 25 imputed datasets
dataList = lapply(1:25, function(i)
  read.csv(paste0('MI/UAll/MI__IMPDATA', i, '.csv'), header=TRUE))
dat = list2milist(dataList)

LeakFree <- with(dat, expr=prop_wald(LeakFree ~ 1))
LeakFree <- pool_prop_wilson(LeakFree)
PadFree <- with(dat, expr=prop_wald(PadFree ~ 1))
PadFree <- pool_prop_wilson(PadFree)
LeakPadFree <- with(dat, expr=prop_wald(LeakPadFree ~ 1))
LeakPadFree <- pool_prop_wilson(LeakPadFree)



df_tmp = data.frame("Type"=c("Leak Free", "Pad Free", "Leak and Pad Free"),
                    "Point"=c(LeakFree[1],PadFree[1],LeakPadFree[1]),
                    "Lower"=c(LeakFree[2],PadFree[2],LeakPadFree[2]),
                    "Upper"=c(LeakFree[3],PadFree[3],LeakPadFree[3])
                    )

flextable(df_tmp)

Type               Point  Lower  Upper
Leak Free          0.42   0.39   0.45
Pad Free           0.62   0.60   0.65
Leak and Pad Free  0.36   0.34   0.39

Plot comparing the complete case and imputed results

Code
Outcome_order <- c(                   
                   'Leak and pad-free',
                   'Pad free',
                   'Leak free')

df1 <- data.frame(Outcome=c(
                            'Leak and pad-free',
                            'Pad free',
                            'Leak free'),
                  Point=c(0.36144, 0.62201, 0.41980),
                  Lower=c(0.33842, 0.59763, 0.39393),
                  Upper=c(0.38511, 0.64579, 0.44613)
                  )

df1$Group <- "Imputed"
df2 <- data.frame(Outcome=c(
                            'Leak and pad-free',
                            'Pad free',
                            'Leak free'),
                  Point=c(0.4200288 ,0.6512968  ,0.4459654  ),
                  Lower=c(0.3943190 ,0.6258394  ,0.4200002  ),
                  Upper=c(0.4461801 ,0.6759191  ,0.4722289)
                  )
df2$Group <- "Complete"
df = rbind(df1,df2)
df$Outcome = factor(df$Outcome, levels=Outcome_order)

dotCOLS = c("white","white")
barCOLS = c("darkorange2","blue2")


p <- ggplot(df, aes(x=Outcome, y=Point, ymin=Lower, ymax=Upper,col=Group,fill=Group)) +
  geom_linerange(size=5,position=position_dodge(width = 0.5)) +
  geom_point(size=3, shape=20, colour="white", stroke = 0.5,position=position_dodge(width = 0.5)) +
  scale_fill_manual(values=dotCOLS)+
  scale_color_manual(values=barCOLS)+
  scale_x_discrete(name="Outcome") +
  scale_y_continuous(name="Proportion", limits = c(0, 1)) +
  coord_flip() +
  theme_light()
p

Complete case results from section 5

Code
df_tmp = data.frame("Type"=c("Leak Free", "Pad Free", "Leak and Pad Free"),
                    "Point"=c(0.5055249, 0.6979742, 0.4815838),
                    "Lower"=c(0.4758222, 0.6700086, 0.4519838),
                    "Upper"=c(0.5351885, 0.7245442, 0.5113136)
                    )

flextable(df_tmp)

Type               Point  Lower  Upper
Leak Free          0.51   0.48   0.54
Pad Free           0.70   0.67   0.72
Leak and Pad Free  0.48   0.45   0.51

Imputed results

Code
# Read the 25 imputed datasets
dataList = lapply(1:25, function(i)
  read.csv(paste0('MI/UBaselineLPFree/MI__IMPDATA', i, '.csv'), header=TRUE))

dat = list2milist(dataList)

LeakFree <- with(dat, expr=prop_wald(LeakFree ~ 1))
LeakFree <- pool_prop_wilson(LeakFree)
PadFree <- with(dat, expr=prop_wald(PadFree ~ 1))
PadFree <- pool_prop_wilson(PadFree)
LeakPadFree <- with(dat, expr=prop_wald(LeakPadFree ~ 1))
LeakPadFree <- pool_prop_wilson(LeakPadFree)

df_tmp = data.frame("Type"=c("Leak Free", "Pad Free", "Leak and Pad Free"),
                    "Point"=c(LeakFree[1],PadFree[1],LeakPadFree[1]),
                    "Lower"=c(LeakFree[2],PadFree[2],LeakPadFree[2]),
                    "Upper"=c(LeakFree[3],PadFree[3],LeakPadFree[3])
                    )

flextable(df_tmp)

Type               Point  Lower  Upper
Leak Free          0.47   0.44   0.50
Pad Free           0.66   0.63   0.69
Leak and Pad Free  0.41   0.39   0.44

Plot comparing the complete case and imputed results

Code
Outcome_order <- c(                   
                   'Leak and pad-free',
                   'Pad free',
                   'Leak free')

df1 <- data.frame(Outcome=c(
                            'Leak and pad-free',
                            'Pad free',
                            'Leak free'),
                  Point=c(0.41182, 0.65992, 0.46808),
                  Lower=c(0.38519, 0.63207, 0.43904),
                  Upper=c(0.43898, 0.68670, 0.49734)
                  )

df1$Group <- "Imputed"
df2 <- data.frame(Outcome=c(
                            'Leak and pad-free',
                            'Pad free',
                            'Leak free'),
                  Point=c(0.4815838, 0.6979742, 0.5055249),
                  Lower=c(0.4519838, 0.6700086, 0.4758222),
                  Upper=c(0.5113136, 0.7245442, 0.5351885)
                  )
df2$Group <- "Complete"
df = rbind(df1,df2)
df$Outcome = factor(df$Outcome, levels=Outcome_order)

dotCOLS = c("white","white")
barCOLS = c("darkorange2","blue2")


p <- ggplot(df, aes(x=Outcome, y=Point, ymin=Lower, ymax=Upper,col=Group,fill=Group)) +
  geom_linerange(size=5,position=position_dodge(width = 0.5)) +
  geom_point(size=3, shape=20, colour="white", stroke = 0.5,position=position_dodge(width = 0.5)) +
  scale_fill_manual(values=dotCOLS)+
  scale_color_manual(values=barCOLS)+
  scale_x_discrete(name="Outcome") +
  scale_y_continuous(name="Proportion", limits = c(0, 1)) +
  coord_flip() +
  theme_light()
p

Complete case results from section 5

Code
df_tmp = data.frame("Type"=c("Leak Free", "Pad Free", "Leak and Pad Free"),
                    "Point"=c(0.2317881, 0.4834437, 0.1986755),
                    "Lower"=c(0.1877452, 0.4276456, 0.1575830),
                    "Upper"=c(0.2825686, 0.5396577, 0.2473375)
                    )

flextable(df_tmp)

Type               Point  Lower  Upper
Leak Free          0.23   0.19   0.28
Pad Free           0.48   0.43   0.54
Leak and Pad Free  0.20   0.16   0.25

Imputed results

Code
# Read the 25 imputed datasets
dataList = lapply(1:25, function(i)
  read.csv(paste0('MI/UNBaselineLPFree/MI__IMPDATA', i, '.csv'), header=TRUE))

dat = list2milist(dataList)

LeakFree <- with(dat, expr=prop_wald(LeakFree ~ 1))
LeakFree <- pool_prop_wilson(LeakFree)
PadFree <- with(dat, expr=prop_wald(PadFree ~ 1))
PadFree <- pool_prop_wilson(PadFree)
LeakPadFree <- with(dat, expr=prop_wald(LeakPadFree ~ 1))
LeakPadFree <- pool_prop_wilson(LeakPadFree)

df_tmp = data.frame("Type"=c("Leak Free", "Pad Free", "Leak and Pad Free"),
                    "Point Estimate"=c(LeakFree[1],PadFree[1],LeakPadFree[1]),
                    "Lower"=c(LeakFree[2],PadFree[2],LeakPadFree[2]),
                    "Upper"=c(LeakFree[3],PadFree[3],LeakPadFree[3])
                    )

flextable(df_tmp)

Type               Point Estimate  Lower  Upper
Leak Free          0.26            0.22   0.31
Pad Free           0.50            0.45   0.56
Leak and Pad Free  0.20            0.16   0.24

Plot comparing the complete case and imputed results

Code
Outcome_order <- c(                   
                   'Leak and pad-free',
                   'Pad free',
                   'Leak free')

df1 <- data.frame(Outcome=c(
                            'Leak and pad-free',
                            'Pad free',
                            'Leak free'),
                  Point=c(0.19748, 0.49865, 0.26270),
                  Lower=c(0.15997, 0.44946, 0.21757),
                  Upper=c(0.24127, 0.54787, 0.31344)
                  )

df1$Group <- "Imputed"
df2 <- data.frame(Outcome=c(
                            'Leak and pad-free',
                            'Pad free',
                            'Leak free'),
                  Point=c(0.1986755, 0.4834437, 0.2317881),
                  Lower=c(0.1575830, 0.4276456, 0.1877452),
                  Upper=c(0.2473375, 0.5396577, 0.2825686)
                  )
df2$Group <- "Complete"
df = rbind(df1,df2)
df$Outcome = factor(df$Outcome, levels=Outcome_order)

dotCOLS = c("white","white")
barCOLS = c("darkorange2","blue2")


p <- ggplot(df, aes(x=Outcome, y=Point, ymin=Lower, ymax=Upper,col=Group,fill=Group)) +
  geom_linerange(size=5,position=position_dodge(width = 0.5)) +
  geom_point(size=3, shape=20, colour="white", stroke = 0.5,position=position_dodge(width = 0.5)) +
  scale_fill_manual(values=dotCOLS)+
  scale_color_manual(values=barCOLS)+
  scale_x_discrete(name="Outcome") +
  scale_y_continuous(name="Proportion", limits = c(0, 1)) +
  coord_flip() +
  theme_light()
p

Code
folder1 = 'MI/MI'
folder2 = 'MI/SAll'
folder3 = 'MI/SBaselineNatAd'
folder4 = 'MI/SNBaselineNatAd'

for (i in 1:25){

  mi_file1 = paste(folder1,'/__IMPDATA',i,'.csv', sep='')
  mi_file2 = paste(folder2,'/MI__IMPDATA',i,'.csv', sep='')
  mi_file3 = paste(folder3,'/MI__IMPDATA',i,'.csv', sep='')
  mi_file4 = paste(folder4,'/MI__IMPDATA',i,'.csv', sep='')

  mi_csv = read.csv(mi_file1, header=T)

  mi_csv$UAd = ifelse(mi_csv$C59.12==100 & mi_csv$Assist.12=='No',1,0)
  mi_csv$AAd = ifelse(mi_csv$C59.12==100 & mi_csv$Assist.12=='Yes',1,0)
  mi_csv$Inad = ifelse(mi_csv$C59.12<100,1,0)

  mi_UnAd = subset(mi_csv, C59.0==100 & Assist.0=='No')
  mi_NUnAd = subset(mi_csv, C59.0<100 | Assist.0=='Yes')

  write.csv(mi_csv, mi_file2, row.names = F)
  write.csv(mi_UnAd, mi_file3, row.names = F)
  write.csv(mi_NUnAd, mi_file4, row.names = F)

}

Complete case results from section 5

Code
df_tmp = data.frame("Type"=c("Unassisted Adequate", "Assisted adequate", "Inadequate"),
                    "Point"=c(0.0350000, 0.0691667, 0.8958333),
                    "Lower"=c(0.0259968, 0.0561423, 0.8772680),
                    "Upper"=c(0.0469708, 0.0849406, 0.9118724)
                    )

flextable(df_tmp)

Type                 Point  Lower  Upper
Unassisted Adequate  0.035  0.026  0.047
Assisted adequate    0.069  0.056  0.085
Inadequate           0.896  0.877  0.912

Imputed results

Code
# Read the 25 imputed datasets
dataList = lapply(1:25, function(i)
  read.csv(paste0('MI/SAll/MI__IMPDATA', i, '.csv'), header=TRUE))
dat = list2milist(dataList)

UAd <- with(dat, expr=prop_wald(UAd ~ 1))
UAd <- pool_prop_wilson(UAd)
AAd <- with(dat, expr=prop_wald(AAd ~ 1))
AAd <- pool_prop_wilson(AAd)
Inad <- with(dat, expr=prop_wald(Inad ~ 1))
Inad <- pool_prop_wilson(Inad)

df_tmp = data.frame("Type"=c("Unassisted Adequate", "Assisted adequate", "Inadequate"),
                    "Point Estimate"=c(UAd[1],AAd[1],Inad[1]),
                    "Lower"=c(UAd[2],AAd[2],Inad[2]),
                    "Upper"=c(UAd[3],AAd[3],Inad[3])
                    )

flextable(df_tmp)

Type                 Point Estimate  Lower  Upper
Unassisted Adequate  0.038           0.030  0.049
Assisted adequate    0.066           0.054  0.079
Inadequate           0.896           0.880  0.910

Plot comparing the complete case and imputed results

Code
Outcome_order <- c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate')

df1 <- data.frame(Outcome=c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate'),
                  Point=c(0.896, 0.066, 0.038),
                  Lower=c(0.880, 0.054, 0.030),
                  Upper=c(0.910, 0.079, 0.049)
                  )

df1$Group <- "Imputed"
df2 <- data.frame(Outcome=c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate'),
                  Point=c(0.8958333, 0.0691667, 0.0350000),
                  Lower=c(0.8772680, 0.0561423, 0.0259968),
                  Upper=c(0.9118724, 0.0849406, 0.0469708)
                  )
df2$Group <- "Complete"
df = rbind(df1,df2)
df$Outcome = factor(df$Outcome, levels=Outcome_order)

dotCOLS = c("white","white")
barCOLS = c("darkorange2","blue2")


p <- ggplot(df, aes(x=Outcome, y=Point, ymin=Lower, ymax=Upper,col=Group,fill=Group)) +
  geom_linerange(size=5,position=position_dodge(width = 0.5)) +
  geom_point(size=3, shape=20, colour="white", stroke = 0.5,position=position_dodge(width = 0.5)) +
  scale_fill_manual(values=dotCOLS)+
  scale_color_manual(values=barCOLS)+
  scale_x_discrete(name="Outcome") +
  scale_y_continuous(name="Proportion", limits = c(0, 1)) +
  coord_flip() +
  theme_light()
p

Complete case results from section 5

Code
df_tmp = data.frame("Type"=c("Unassisted Adequate", "Assisted adequate", "Inadequate"),
                    "Point"=c(0.0587302, 0.1079365, 0.8333333),
                    "Lower"=c(0.0429062, 0.0860392, 0.8022300),
                    "Upper"=c(0.0799028, 0.1345861, 0.8603963)
                    )

flextable(df_tmp)

Type                 Point  Lower  Upper
Unassisted Adequate  0.059  0.043  0.08
Assisted adequate    0.108  0.086  0.13
Inadequate           0.833  0.802  0.86

Imputed results

Code
# Read the 25 imputed datasets
dataList = lapply(1:25, function(i)
  read.csv(paste0('MI/SBaselineNatAd/MI__IMPDATA', i, '.csv'), header=TRUE))

dat = list2milist(dataList)

UAd <- with(dat, expr=prop_wald(UAd ~ 1))
UAd <- pool_prop_wilson(UAd)
AAd <- with(dat, expr=prop_wald(AAd ~ 1))
AAd <- pool_prop_wilson(AAd)
Inad <- with(dat, expr=prop_wald(Inad ~ 1))
Inad <- pool_prop_wilson(Inad)

df_tmp = data.frame("Type"=c("Unassisted Adequate", "Assisted adequate", "Inadequate"),
                    "Point Estimate"=c(UAd[1],AAd[1],Inad[1]),
                    "Lower"=c(UAd[2],AAd[2],Inad[2]),
                    "Upper"=c(UAd[3],AAd[3],Inad[3])
                    )

flextable(df_tmp)
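For context, `pool_prop_wilson` combines the per-imputation proportions via Rubin's rules: the pooled point estimate is the mean of the per-imputation estimates, and the pooled variance adds the between-imputation spread to the average within-imputation variance. A minimal sketch with made-up numbers (not the study data):

```r
# Toy illustration of Rubin's rules for a pooled proportion
# (hypothetical values, not the study data).
p_hat <- c(0.050, 0.055, 0.052)      # per-imputation proportion estimates
n     <- 1000                        # hypothetical sample size
v_w   <- p_hat * (1 - p_hat) / n     # within-imputation (Wald) variances

m      <- length(p_hat)              # number of imputations
pooled <- mean(p_hat)                # pooled point estimate
W      <- mean(v_w)                  # average within-imputation variance
B      <- var(p_hat)                 # between-imputation variance
total  <- W + (1 + 1/m) * B          # total variance (Rubin's rules)
c(Point = pooled, SE = sqrt(total))
```

The sketch only shows the pooling step; `pool_prop_wilson` then derives a Wilson-type interval from the pooled quantities rather than the simple Wald limits.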

Type                 Point.Estimate  Lower  Upper
Unassisted Adequate  0.052           0.039  0.069
Assisted adequate    0.096           0.077  0.120
Inadequate           0.852           0.824  0.875

Plot comparing the complete case and imputed results

Code
Outcome_order <- c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate')

df1 <- data.frame(Outcome=c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate'),
                  Point=c(0.85174, 0.09949, 0.04877),
                  Lower=c(0.82548, 0.08032, 0.03547),
                  Upper=c(0.87466, 0.12263, 0.06670)
                  )

df1$Group <- "Imputed"
df2 <- data.frame(Outcome=c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate'),
                  Point=c(0.8333333, 0.1079365, 0.0587302),
                  Lower=c(0.8022300, 0.0860392, 0.0429062),
                  Upper=c(0.8603963, 0.1345861, 0.0799028)
                  )
df2$Group <- "Complete"
df = rbind(df1,df2)
df$Outcome = factor(df$Outcome, levels=Outcome_order)

dotCOLS = c("white","white")
barCOLS = c("darkorange2","blue2")


p <- ggplot(df, aes(x=Outcome, y=Point, ymin=Lower, ymax=Upper,col=Group,fill=Group)) +
  geom_linerange(size=5,position=position_dodge(width = 0.5)) +
  geom_point(size=3, shape=20, colour="white", stroke = 0.5,position=position_dodge(width = 0.5)) +
  scale_fill_manual(values=dotCOLS)+
  scale_color_manual(values=barCOLS)+
  scale_x_discrete(name="Outcome") +
  scale_y_continuous(name="Proportion", limits = c(0, 1)) +
  coord_flip() +
  theme_light()
p

Complete case results from section 5

Code
df_tmp = data.frame("Type"=c("Unassisted Adequate", "Assisted adequate", "Inadequate"),
                    "Point"=c(0.0087719, 0.0263158, 0.9649123),
                    "Lower"=c(0.0037525, 0.0160114, 0.9464269),
                    "Upper"=c(0.0203682, 0.0429621, 0.9771731)
                    )

flextable(df_tmp)

Type                 Point   Lower   Upper
Unassisted Adequate  0.0088  0.0038  0.020
Assisted adequate    0.0263  0.0160  0.043
Inadequate           0.9649  0.9464  0.977

Imputed results

Code
# Read the 25 imputed datasets
dataList = lapply(1:25, function(i) {
  read.csv(sprintf('MI/SNBaselineNatAd/MI__IMPDATA%d.csv', i), header=TRUE)
})

dat = list2milist(dataList)

UAd <- with(dat, expr=prop_wald(UAd ~ 1))
UAd <- pool_prop_wilson(UAd)
AAd <- with(dat, expr=prop_wald(AAd ~ 1))
AAd <- pool_prop_wilson(AAd)
Inad <- with(dat, expr=prop_wald(Inad ~ 1))
Inad <- pool_prop_wilson(Inad)

df_tmp = data.frame("Type"=c("Unassisted Adequate", "Assisted adequate", "Inadequate"),
                    "Point Estimate"=c(UAd[1],AAd[1],Inad[1]),
                    "Lower"=c(UAd[2],AAd[2],Inad[2]),
                    "Upper"=c(UAd[3],AAd[3],Inad[3])
                    )

flextable(df_tmp)

Type                 Point.Estimate  Lower  Upper
Unassisted Adequate  0.024           0.015  0.038
Assisted adequate    0.034           0.023  0.051
Inadequate           0.942           0.923  0.957

Plot comparing the complete case and imputed results

Code
Outcome_order <- c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate')

df1 <- data.frame(Outcome=c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate'),
                  Point=c(0.94404, 0.03571, 0.02026),
                  Lower=c(0.92177, 0.02424, 0.01027),
                  Upper=c(0.96024, 0.05231, 0.03958)
                  )

df1$Group <- "Imputed"
df2 <- data.frame(Outcome=c(                   
                   'Inadequate',
                   'Assisted adequate',
                   'Unassisted adequate'),
                  Point=c(0.9649123, 0.0263158, 0.0087719),
                  Lower=c(0.9464269, 0.0160114, 0.0037525),
                  Upper=c(0.9771731, 0.0429621, 0.0203682)
                  )
df2$Group <- "Complete"
df = rbind(df1,df2)
df$Outcome = factor(df$Outcome, levels=Outcome_order)

dotCOLS = c("white","white")
barCOLS = c("darkorange2","blue2")


p <- ggplot(df, aes(x=Outcome, y=Point, ymin=Lower, ymax=Upper,col=Group,fill=Group)) +
  geom_linerange(size=5,position=position_dodge(width = 0.5)) +
  geom_point(size=3, shape=20, colour="white", stroke = 0.5,position=position_dodge(width = 0.5)) +
  scale_fill_manual(values=dotCOLS)+
  scale_color_manual(values=barCOLS)+
  scale_x_discrete(name="Outcome") +
  scale_y_continuous(name="Proportion", limits = c(0, 1)) +
  coord_flip() +
  theme_light()
p

References

Allaire, J., Xie, Y., McPherson, J., et al. (2022) rmarkdown: Dynamic Documents for R. Available at: https://github.com/rstudio/rmarkdown.
Gagolewski, M. (2021) stringi: Fast and Portable Character String Processing in R. Available at: https://stringi.gagolewski.com/.
Gohel, D. and Skintzos, P. (2023) flextable: Functions for Tabular Reporting. Available at: https://CRAN.R-project.org/package=flextable.
Harrell Jr, F. E. (2022) Hmisc: Harrell Miscellaneous. Available at: https://CRAN.R-project.org/package=Hmisc.
Henry, L. and Wickham, H. (2023) rlang: Functions for Base Types and Core R and ’Tidyverse’ Features. Available at: https://CRAN.R-project.org/package=rlang.
Heymans, M. (2022) miceafter: Data and Statistical Analyses After Multiple Imputation. Available at: https://CRAN.R-project.org/package=miceafter.
Hofmann, H., VanderPlas, S. and Ge, Y. (2022) ggpcp: Parallel Coordinate Plots in the ’ggplot2’ Framework. Available at: https://CRAN.R-project.org/package=ggpcp.
Honaker, J., King, G. and Blackwell, M. (2011) Amelia II: A program for missing data. Journal of Statistical Software, 45, 1–47. Available at: https://www.jstatsoft.org/v45/i07/.
Ooms, J. (2022) gifski: Highest Quality GIF Encoder. Available at: https://CRAN.R-project.org/package=gifski.
Pedersen, T. L. (2022) patchwork: The Composer of Plots. Available at: https://CRAN.R-project.org/package=patchwork.
Pedersen, T. L. and Robinson, D. (2022) gganimate: A Grammar of Animated Graphics. Available at: https://CRAN.R-project.org/package=gganimate.
R Core Team (2021) R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. Available at: https://www.R-project.org/.
Robitzsch, A. and Grund, S. (2022) miceadds: Some Additional Multiple Imputation Functions, Especially for ’mice’. Available at: https://CRAN.R-project.org/package=miceadds.
van Buuren, S. and Groothuis-Oudshoorn, K. (2011) mice: Multivariate imputation by chained equations in R. Journal of Statistical Software, 45, 1–67. DOI: 10.18637/jss.v045.i03.
Wei, J. T., Dunn, R. L., Litwin, M. S., et al. (2000) Development and validation of the expanded prostate cancer index composite (EPIC) for comprehensive assessment of health-related quality of life in men with prostate cancer. Urology, 56, 899–905. Elsevier.
Wickham, H. (2016) ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. Available at: https://ggplot2.tidyverse.org.
Wickham, H., François, R., Henry, L., et al. (2022) dplyr: A Grammar of Data Manipulation. Available at: https://CRAN.R-project.org/package=dplyr.
Xie, Y. (2014) knitr: A comprehensive tool for reproducible research in R. In Implementing Reproducible Computational Research (eds. V. Stodden, F. Leisch, and R. D. Peng). Chapman & Hall/CRC. Available at: http://www.crcpress.com/product/isbn/9781466561595.
Xie, Y., Allaire, J. J. and Grolemund, G. (2018) R Markdown: The Definitive Guide. Boca Raton, Florida: Chapman & Hall/CRC. Available at: https://bookdown.org/yihui/rmarkdown.
Xie, Y., Dervieux, C. and Riederer, E. (2020) R Markdown Cookbook. Boca Raton, Florida: Chapman & Hall/CRC. Available at: https://bookdown.org/yihui/rmarkdown-cookbook.
Zhu, H. (2021) kableExtra: Construct Complex Table with ’kable’ and Pipe Syntax. Available at: https://CRAN.R-project.org/package=kableExtra.