Commit dad84ae

Merge pull request #1454 from rstudio/keras3-1.0

Prepare CRAN release of keras3 1.0

2 parents: 9611031 + 81d920c

File tree

731 files changed: +3025, -2532 lines

DESCRIPTION

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 Package: keras3
 Type: Package
 Title: R Interface to 'Keras'
-Version: 0.2.0.9000
+Version: 1.0.0
 Authors@R: c(
     person("Tomasz", "Kalinowski", role = c("aut", "cph", "cre"),
            email = "[email protected]"),
```

NEWS.md

Lines changed: 14 additions & 1 deletion

```diff
@@ -1,10 +1,23 @@
-# keras3 (development version)
+# keras3 1.0.0
 
 - Chains of `layer_*` calls with `|>` now instantiate layers in the
   same order as `%>%` pipe chains: left-hand-side first (#1440).
 
 - `iterate()`, `iter_next()` and `as_iterator()` are now reexported from reticulate.
 
+
+User facing changes with upstream Keras v3.3.3:
+
+- new functions: `op_slogdet()`, `op_psnr()`
+
+- `clone_model()` gains new args: `call_function`, `recursive`
+  Updated example usage.
+
+- `op_ctc_decode()` strategy argument has new default: `"greedy"`.
+  Updated docs.
+
+- `loss_ctc()` default name fixed, changed to `"ctc"`
+
 User facing changes with upstream Keras v3.3.2:
 
 - new function: `op_ctc_decode()`
```
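
The headline changes above are easiest to see in use. A minimal sketch, assuming keras3 1.0.0 with a configured backend; the shapes and values are illustrative:

```r
library(keras3)

# #1440: `|>` now instantiates layers left-to-right, matching `%>%`, so
# implicit layer names ("dense", "dense_1", ...) follow the order written.
model <- keras_model_sequential(input_shape = 784) |>
  layer_dense(units = 64, activation = "relu") |>
  layer_dense(units = 10, activation = "softmax")

# op_slogdet(): sign and log-absolute-determinant of a square matrix.
x <- op_convert_to_tensor(matrix(c(2, 0, 0, 3), nrow = 2))
op_slogdet(x)  # sign = 1, log-determinant = log(6)

# op_psnr(): peak signal-to-noise ratio between two images, in dB.
a <- op_convert_to_tensor(array(runif(48), dim = c(4, 4, 3)))
b <- op_convert_to_tensor(array(runif(48), dim = c(4, 4, 3)))
op_psnr(a, b, max_val = 1)
```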

R/applications.R

Lines changed: 2 additions & 2 deletions

```diff
@@ -2642,7 +2642,7 @@ function (input_shape = NULL, alpha = 1, include_top = TRUE,
 #'
 #' # Reference
 #' - [Searching for MobileNetV3](
-#'   https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)
+#'   https://arxiv.org/pdf/1905.02244) (ICCV 2019)
 #'
 #' The following table describes the performance of MobileNets v3:
 #' ------------------------------------------------------------------------
@@ -2788,7 +2788,7 @@ function (input_shape = NULL, alpha = 1, minimalistic = FALSE,
 #'
 #' # Reference
 #' - [Searching for MobileNetV3](
-#'   https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)
+#'   https://arxiv.org/pdf/1905.02244) (ICCV 2019)
 #'
 #' The following table describes the performance of MobileNets v3:
 #' ------------------------------------------------------------------------
```
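
The roxygen edited here documents the MobileNetV3 application constructors. A hedged sketch of typical use, assuming the standard keras3 application interface (pretrained weights download on first call):

```r
library(keras3)

# Build MobileNetV3-Large with ImageNet weights and the classification head.
model <- application_mobilenet_v3_large(weights = "imagenet", include_top = TRUE)
```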

R/losses.R

Lines changed: 5 additions & 5 deletions

```diff
@@ -131,7 +131,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
 #' Computes focal cross-entropy loss between true labels and predictions.
 #'
 #' @description
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
 #' helps to apply a focal factor to down-weight easy examples and focus more on
 #' hard examples. By default, the focal tensor is computed as follows:
 #'
@@ -157,7 +157,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
 #' when `from_logits=TRUE`) or a probability (i.e, value in `[0., 1.]` when
 #' `from_logits=FALSE`).
 #'
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
 #' helps to apply a "focal factor" to down-weight easy examples and focus more
 #' on hard examples. By default, the focal tensor is computed as follows:
 #'
@@ -274,13 +274,13 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
 #' @param alpha
 #' A weight balancing factor for class 1, default is `0.25` as
 #' mentioned in reference [Lin et al., 2018](
-#' https://arxiv.org/pdf/1708.02002.pdf). The weight for class 0 is
+#' https://arxiv.org/pdf/1708.02002). The weight for class 0 is
 #' `1.0 - alpha`.
 #'
 #' @param gamma
 #' A focusing parameter used to compute the focal factor, default is
 #' `2.0` as mentioned in the reference
-#' [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf).
+#' [Lin et al., 2018](https://arxiv.org/pdf/1708.02002).
 #'
 #' @param from_logits
 #' Whether to interpret `y_pred` as a tensor of
@@ -450,7 +450,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
 #' `class_weights`. We expect labels to be provided in a `one_hot`
 #' representation.
 #'
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
 #' helps to apply a focal factor to down-weight easy examples and focus more on
 #' hard examples. The general formula for the focal loss (FL)
 #' is as follows:
```
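
The roxygen here repeatedly cites Lin et al., 2018 for the focal factor, which down-weights easy examples by `(1 - p_t)^gamma` and balances classes with `alpha`. A minimal sketch using the documented defaults; the inputs are illustrative:

```r
library(keras3)

y_true <- rbind(c(0, 1), c(0, 0))
y_pred <- rbind(c(0.6, 0.4), c(0.4, 0.6))

# alpha = 0.25 and gamma = 2 are the defaults cited from Lin et al., 2018;
# class balancing weights class 1 by alpha and class 0 by 1 - alpha.
loss_binary_focal_crossentropy(y_true, y_pred,
                               apply_class_balancing = TRUE,
                               alpha = 0.25, gamma = 2)
```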

R/metrics.R

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@
 #' Computes the binary focal crossentropy loss.
 #'
 #' @description
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
 #' helps to apply a focal factor to down-weight easy examples and focus more on
 #' hard examples. By default, the focal tensor is computed as follows:
 #'
```

R/model-training.R

Lines changed: 20 additions & 15 deletions

```diff
@@ -269,17 +269,20 @@ function (object, x = NULL, y = NULL, ..., batch_size = NULL,
             verbose = as_model_verbose_arg),
        ignore = "object",
        force = "verbose")
-  args[["return_dict"]] <- FALSE
+
+  ## return_dict=TRUE because object$metrics_names returns wrong value
+  ## (e.g., "compile_metrics" instead of "mae")
+  args[["return_dict"]] <- TRUE
 
   if(inherits(args$x, "tensorflow.python.data.ops.dataset_ops.DatasetV2") &&
      !is.null(args$batch_size))
     stop("batch_size can not be specified with a TF Dataset")
 
   result <- do.call(object$evaluate, args)
-  if (length(result) > 1L) {
-    result <- as.list(result)
-    names(result) <- object$metrics_names
-  }
+  # if (length(result) > 1L) { ## if return_dict=FALSE
+  #   result <- as.list(result)
+  #   names(result) <- object$metrics_names
+  # }
 
   tfruns::write_run_metadata("evaluation", unlist(result))
 
@@ -761,11 +764,12 @@ function (object, x, y = NULL, sample_weight = NULL, ...)
   result <- object$test_on_batch(as_array(x),
                                  as_array(y),
                                  as_array(sample_weight), ...,
-                                 return_dict = FALSE)
-  if (length(result) > 1L) {
-    result <- as.list(result)
-    names(result) <- object$metrics_names
-  } else if (is_scalar(result)) {
+                                 return_dict = TRUE)
+  # if (length(result) > 1L) {
+  #   result <- as.list(result)
+  #   names(result) <- object$metrics_names
+  # } else
+  if (is_scalar(result)) {
     result <- result[[1L]]
   }
   result
@@ -824,11 +828,12 @@ function (object, x, y = NULL, sample_weight = NULL, class_weight = NULL)
                                   as_array(y),
                                   as_array(sample_weight),
                                   class_weight = as_class_weight(class_weight),
-                                  return_dict = FALSE)
-  if (length(result) > 1L) {
-    result <- as.list(result)
-    names(result) <- object$metrics_names
-  } else if (is_scalar(result)) {
+                                  return_dict = TRUE)
+  # if (length(result) > 1L) {
+  #   result <- as.list(result)
+  #   names(result) <- object$metrics_names
+  # } else
+  if (is_scalar(result)) {
     result <- result[[1L]]
   }
 
```
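
With `return_dict = TRUE`, the Python side returns results keyed by metric name, so the commented-out renaming via `object$metrics_names` is no longer needed. A hedged sketch of the user-visible effect; the model and data are illustrative:

```r
library(keras3)

model <- keras_model_sequential(input_shape = 10) |>
  layer_dense(units = 1)
model |> compile(optimizer = "adam", loss = "mse", metrics = "mae")

x <- matrix(rnorm(1000), ncol = 10)
y <- rnorm(100)
model |> fit(x, y, epochs = 1, verbose = 0)

# Returns a list named by the compiled metrics (e.g. $loss and $mae),
# rather than entries mislabeled "compile_metrics" via object$metrics_names.
model |> evaluate(x, y, verbose = 0)
```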

R/optimizers-schedules.R

Lines changed: 1 addition & 1 deletion

```diff
@@ -7,7 +7,7 @@
 #' SGDR: Stochastic Gradient Descent with Warm Restarts.
 #'
 #' For the idea of a linear warmup of our learning rate,
-#' see [Goyal et al.](https://arxiv.org/pdf/1706.02677.pdf).
+#' see [Goyal et al.](https://arxiv.org/pdf/1706.02677).
 #'
 #' When we begin training a model, we often want an initial increase in our
 #' learning rate followed by a decay. If `warmup_target` is an int, this
```
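
The doc being edited describes a linear warmup (Goyal et al.) ahead of a cosine decay. A minimal sketch, assuming the `warmup_target`/`warmup_steps` arguments documented here; the step counts are illustrative:

```r
library(keras3)

schedule <- learning_rate_schedule_cosine_decay(
  initial_learning_rate = 0,  # rate at step 0; warmup climbs linearly from here
  warmup_target = 1e-3,       # peak rate reached after `warmup_steps`
  warmup_steps = 1000,
  decay_steps = 10000         # cosine decay toward 0 over these steps
)
optimizer <- optimizer_sgd(learning_rate = schedule)
```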

docs/404.html

Lines changed: 1 addition & 1 deletion (generated file; diff not rendered)

docs/LICENSE-text.html

Lines changed: 1 addition & 1 deletion (generated file; diff not rendered)
