API references
ClassCondFlow
Bases: Module
Class-conditional normalizing flow model, which provides the class to be conditioned on only to the base distribution, as done e.g. in Glow
Source code in normflows/core.py
__init__(q0, flows)
Constructor
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| q0 | Base distribution | required |
| flows | List of flows | required |

Source code in normflows/core.py
forward_kld(x, y)
Estimates forward KL divergence, see arXiv 1912.02762
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch sampled from target distribution | required |
| y | Classes of x | required |

Returns: Estimate of forward KL divergence averaged over batch

Source code in normflows/core.py
load(path)
Load model from state dict
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| path | Path including filename where to load model from | required |

Source code in normflows/core.py
log_prob(x, y)
Get log probability for batch
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch | required |
| y | Classes of x | required |

Returns: Log probability

Source code in normflows/core.py
sample(num_samples=1, y=None)
Samples from flow-based approximate distribution
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| num_samples | Number of samples to draw | 1 |
| y | Classes to sample from; sampled uniformly if None | None |

Returns: Samples, log probability

Source code in normflows/core.py
save(path)
Save state dict of model
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| path | Path including filename where to save model | required |

Source code in normflows/core.py
ConditionalNormalizingFlow
Bases: NormalizingFlow
Conditional normalizing flow model, which provides the condition (also called context) to both the base distribution and the flow layers
Source code in normflows/core.py
forward(z, context=None)
Transforms latent variable z to the flow variable x
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| z | Batch in the latent space | required |
| context | Batch of conditions/context | None |

Returns: Batch in the space of the target distribution

Source code in normflows/core.py
forward_and_log_det(z, context=None)
Transforms latent variable z to the flow variable x and computes log determinant of the Jacobian
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| z | Batch in the latent space | required |
| context | Batch of conditions/context | None |

Returns: Batch in the space of the target distribution, log determinant of the Jacobian

Source code in normflows/core.py
forward_kld(x, context=None)
Estimates forward KL divergence, see arXiv 1912.02762
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch sampled from target distribution | required |
| context | Batch of conditions/context | None |

Returns: Estimate of forward KL divergence averaged over batch

Source code in normflows/core.py
inverse(x, context=None)
Transforms flow variable x to the latent variable z
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch in the space of the target distribution | required |
| context | Batch of conditions/context | None |

Returns: Batch in the latent space

Source code in normflows/core.py
inverse_and_log_det(x, context=None)
Transforms flow variable x to the latent variable z and computes log determinant of the Jacobian
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch in the space of the target distribution | required |
| context | Batch of conditions/context | None |

Returns: Batch in the latent space, log determinant of the Jacobian

Source code in normflows/core.py
log_prob(x, context=None)
Get log probability for batch
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch | required |
| context | Batch of conditions/context | None |

Returns: Log probability

Source code in normflows/core.py
reverse_kld(num_samples=1, context=None, beta=1.0, score_fn=True)
Estimates reverse KL divergence, see arXiv 1912.02762
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| num_samples | Number of samples to draw from base distribution | 1 |
| context | Batch of conditions/context | None |
| beta | Annealing parameter, see arXiv 1505.05770 | 1.0 |
| score_fn | Flag whether to include the score function in the gradient, see arXiv 1703.09194 | True |

Returns: Estimate of the reverse KL divergence averaged over latent samples

Source code in normflows/core.py
sample(num_samples=1, context=None)
Samples from flow-based approximate distribution
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| num_samples | Number of samples to draw | 1 |
| context | Batch of conditions/context | None |

Returns: Samples, log probability

Source code in normflows/core.py
MultiscaleFlow
Bases: Module
Normalizing Flow model with multiscale architecture, see RealNVP or Glow paper
Source code in normflows/core.py
__init__(q0, flows, merges, transform=None, class_cond=True)
Constructor
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| q0 | List of base distributions | required |
| flows | List of flows for each level | required |
| merges | List of merge/split operations (forward pass must do merge) | required |
| transform | Initial transformation of inputs | None |
| class_cond | Flag indicating whether the model has class-conditional base distributions | True |

Source code in normflows/core.py
forward(x, y=None)
Get negative log-likelihood for maximum likelihood training
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch of data | required |
| y | Batch of classes to condition on, if applicable | None |

Returns: Negative log-likelihood of the batch

Source code in normflows/core.py
forward_and_log_det(z)
Get observed variable x from list of latent variables z
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| z | List of latent variables | required |

Returns: Observed variable x, log determinant of the Jacobian

Source code in normflows/core.py
forward_kld(x, y=None)
Estimates forward KL divergence, see arXiv 1912.02762
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch sampled from target distribution | required |
| y | Batch of classes to condition on, if applicable | None |

Returns: Estimate of forward KL divergence averaged over batch

Source code in normflows/core.py
inverse_and_log_det(x)
Get latent variable z from observed variable x
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Observed variable | required |

Returns: List of latent variables z, log determinant of the Jacobian

Source code in normflows/core.py
load(path)
Load model from state dict
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| path | Path including filename where to load model from | required |

Source code in normflows/core.py
log_prob(x, y=None)
Get log probability for batch
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch | required |
| y | Classes of x; must be passed in if the model is class conditional | None |

Returns: Log probability

Source code in normflows/core.py
reset_temperature()
Set temperature values of base distributions back to None
Source code in normflows/core.py
sample(num_samples=1, y=None, temperature=None)
Samples from flow-based approximate distribution
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| num_samples | Number of samples to draw | 1 |
| y | Classes to sample from; sampled uniformly if None | None |
| temperature | Temperature parameter for temperature-annealed sampling | None |

Returns: Samples, log probability

Source code in normflows/core.py
save(path)
Save state dict of model
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| path | Path including filename where to save model | required |

Source code in normflows/core.py
set_temperature(temperature)
Set temperature for temperature-annealed sampling
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| temperature | Temperature parameter | required |

Source code in normflows/core.py
NormalizingFlow
Bases: Module
Normalizing Flow model to approximate target distribution
Source code in normflows/core.py
__init__(q0, flows, p=None)
Constructor
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| q0 | Base distribution | required |
| flows | List of flows | required |
| p | Target distribution | None |

Source code in normflows/core.py
forward(z)
Transforms latent variable z to the flow variable x
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| z | Batch in the latent space | required |

Returns: Batch in the space of the target distribution

Source code in normflows/core.py
forward_and_log_det(z)
Transforms latent variable z to the flow variable x and computes log determinant of the Jacobian
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| z | Batch in the latent space | required |

Returns: Batch in the space of the target distribution, log determinant of the Jacobian

Source code in normflows/core.py
forward_kld(x)
Estimates forward KL divergence, see arXiv 1912.02762
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch sampled from target distribution | required |

Returns: Estimate of forward KL divergence averaged over batch

Source code in normflows/core.py
inverse(x)
Transforms flow variable x to the latent variable z
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch in the space of the target distribution | required |

Returns: Batch in the latent space

Source code in normflows/core.py
inverse_and_log_det(x)
Transforms flow variable x to the latent variable z and computes log determinant of the Jacobian
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch in the space of the target distribution | required |

Returns: Batch in the latent space, log determinant of the Jacobian

Source code in normflows/core.py
load(path)
Load model from state dict
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| path | Path including filename where to load model from | required |

Source code in normflows/core.py
log_prob(x)
Get log probability for batch
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Batch | required |

Returns: Log probability

Source code in normflows/core.py
reverse_alpha_div(num_samples=1, alpha=1, dreg=False)
Alpha divergence when sampling from q
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| num_samples | Number of samples to draw | 1 |
| alpha | Alpha parameter of the divergence | 1 |
| dreg | Flag whether to use the doubly reparameterized gradient estimator, see arXiv 1810.04152 | False |

Returns: Alpha divergence

Source code in normflows/core.py
reverse_kld(num_samples=1, beta=1.0, score_fn=True)
Estimates reverse KL divergence, see arXiv 1912.02762
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| num_samples | Number of samples to draw from base distribution | 1 |
| beta | Annealing parameter, see arXiv 1505.05770 | 1.0 |
| score_fn | Flag whether to include the score function in the gradient, see arXiv 1703.09194 | True |

Returns: Estimate of the reverse KL divergence averaged over latent samples

Source code in normflows/core.py
|
sample(num_samples=1)
Samples from flow-based approximate distribution
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| num_samples | Number of samples to draw | 1 |

Returns: Samples, log probability

Source code in normflows/core.py
save(path)
Save state dict of model
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| path | Path including filename where to save model | required |

Source code in normflows/core.py
NormalizingFlowVAE
Bases: Module
VAE using normalizing flows to express approximate distribution
Source code in normflows/core.py
__init__(prior, q0=distributions.Dirac(), flows=None, decoder=None)
Constructor
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| prior | Prior distribution of the VAE, e.g. Gaussian | required |
| q0 | Base encoder | Dirac() |
| flows | Flows to transform output of base encoder | None |
| decoder | Optional decoder | None |

Source code in normflows/core.py
forward(x, num_samples=1)
Takes data batch, samples num_samples for each data point from base distribution
Parameters:

| Name | Description | Default |
| --- | --- | --- |
| x | Data batch | required |
| num_samples | Number of samples to draw for each data point | 1 |

Returns: Latent variables for each batch and sample, log_q, and log_p

Source code in normflows/core.py
core
ClassCondFlow
Bases: Module
Class conditional normalizing Flow model, providing the class to be conditioned on only to the base distribution, as done e.g. in Glow
Source code in normflows/core.py
369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 |
|
__init__(q0, flows)
Constructor
Parameters:
Name | Type | Description | Default |
---|---|---|---|
q0 |
Base distribution |
required | |
flows |
List of flows |
required |
Source code in normflows/core.py
376 377 378 379 380 381 382 383 384 385 |
|
forward_kld(x, y)
Estimates forward KL divergence, see arXiv 1912.02762
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch sampled from target distribution |
required |
Returns:
Type | Description |
---|---|
Estimate of forward KL divergence averaged over batch |
Source code in normflows/core.py
387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 |
|
load(path)
Load model from state dict
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path |
Path including filename where to load model from |
required |
Source code in normflows/core.py
446 447 448 449 450 451 452 |
|
log_prob(x, y)
Get log probability for batch
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch |
required | |
y |
Classes of x |
required |
Returns:
Type | Description |
---|---|
log probability |
Source code in normflows/core.py
420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 |
|
sample(num_samples=1, y=None)
Samples from flow-based approximate distribution
Parameters:
Name | Type | Description | Default |
---|---|---|---|
num_samples |
Number of samples to draw |
1
|
|
y |
Classes to sample from, will be sampled uniformly if None |
None
|
Returns:
Type | Description |
---|---|
Samples, log probability |
Source code in normflows/core.py
404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 |
|
save(path)
Save state dict of model
Parameters:
Name | Type | Description | Default |
---|---|---|---|
param |
path
|
Path including filename where to save model |
required |
Source code in normflows/core.py
438 439 440 441 442 443 444 |
|
ConditionalNormalizingFlow
Bases: NormalizingFlow
Conditional normalizing flow model, providing condition, which is also called context, to both the base distribution and the flow layers
Source code in normflows/core.py
216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 |
|
forward(z, context=None)
Transforms latent variable z to the flow variable x
Parameters:
Name | Type | Description | Default |
---|---|---|---|
z |
Batch in the latent space |
required | |
context |
Batch of conditions/context |
None
|
Returns:
Type | Description |
---|---|
Batch in the space of the target distribution |
Source code in normflows/core.py
222 223 224 225 226 227 228 229 230 231 232 233 234 |
|
forward_and_log_det(z, context=None)
Transforms latent variable z to the flow variable x and computes log determinant of the Jacobian
Parameters:
Name | Type | Description | Default |
---|---|---|---|
z |
Batch in the latent space |
required | |
context |
Batch of conditions/context |
None
|
Returns:
Type | Description |
---|---|
Batch in the space of the target distribution, |
|
log determinant of the Jacobian |
Source code in normflows/core.py
236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 |
|
forward_kld(x, context=None)
Estimates forward KL divergence, see arXiv 1912.02762
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch sampled from target distribution |
required | |
context |
Batch of conditions/context |
None
|
Returns:
Type | Description |
---|---|
Estimate of forward KL divergence averaged over batch |
Source code in normflows/core.py
320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 |
|
inverse(x, context=None)
Transforms flow variable x to the latent variable z
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch in the space of the target distribution |
required | |
context |
Batch of conditions/context |
None
|
Returns:
Type | Description |
---|---|
Batch in the latent space |
Source code in normflows/core.py
254 255 256 257 258 259 260 261 262 263 264 265 266 |
|
inverse_and_log_det(x, context=None)
Transforms flow variable x to the latent variable z and computes log determinant of the Jacobian
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch in the space of the target distribution |
required | |
context |
Batch of conditions/context |
None
|
Returns:
Type | Description |
---|---|
Batch in the latent space, log determinant of the |
|
Jacobian |
Source code in normflows/core.py
268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 |
|
log_prob(x, context=None)
Get log probability for batch
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch |
required | |
context |
Batch of conditions/context |
None
|
Returns:
Type | Description |
---|---|
log probability |
Source code in normflows/core.py
302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 |
|
reverse_kld(num_samples=1, context=None, beta=1.0, score_fn=True)
Estimates reverse KL divergence, see arXiv 1912.02762
Parameters:
Name | Type | Description | Default |
---|---|---|---|
num_samples |
Number of samples to draw from base distribution |
1
|
|
context |
Batch of conditions/context |
None
|
|
beta |
Annealing parameter, see arXiv 1505.05770 |
1.0
|
|
score_fn |
Flag whether to include score function in gradient, see arXiv 1703.09194 |
True
|
Returns:
Type | Description |
---|---|
Estimate of the reverse KL divergence averaged over latent samples |
Source code in normflows/core.py
338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 |
|
sample(num_samples=1, context=None)
Samples from flow-based approximate distribution
Parameters:
Name | Type | Description | Default |
---|---|---|---|
num_samples |
Number of samples to draw |
1
|
|
context |
Batch of conditions/context |
None
|
Returns:
Type | Description |
---|---|
Samples, log probability |
Source code in normflows/core.py
286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 |
|
MultiscaleFlow
Bases: Module
Normalizing Flow model with multiscale architecture, see RealNVP or Glow paper
Source code in normflows/core.py
455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 |
|
__init__(q0, flows, merges, transform=None, class_cond=True)
Constructor
Args:
q0: List of base distribution flows: List of flows for each level merges: List of merge/split operations (forward pass must do merge) transform: Initial transformation of inputs class_cond: Flag, indicated whether model has class conditional base distributions
Source code in normflows/core.py
460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 |
|
forward(x, y=None)
Get negative log-likelihood for maximum likelihood training
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch of data |
required | |
y |
Batch of classes to condition on, if applicable |
None
|
Returns:
Type | Description |
---|---|
Negative log-likelihood of the batch |
Source code in normflows/core.py
492 493 494 495 496 497 498 499 500 501 502 |
|
forward_and_log_det(z)
Get observed variable x from list of latent variables z
Parameters:
Name | Type | Description | Default |
---|---|---|---|
z |
List of latent variables |
required |
Returns:
Type | Description |
---|---|
Observed variable x, log determinant of Jacobian |
Source code in normflows/core.py
504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 |
|
forward_kld(x, y=None)
Estimates forward KL divergence, see arXiv 1912.02762
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch sampled from target distribution |
required | |
y |
Batch of classes to condition on, if applicable |
None
|
Returns:
Type | Description |
---|---|
Estimate of forward KL divergence averaged over batch |
Source code in normflows/core.py
480 481 482 483 484 485 486 487 488 489 490 |
|
inverse_and_log_det(x)
Get latent variable z from observed variable x
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Observed variable |
required |
Returns:
Type | Description |
---|---|
List of latent variables z, log determinant of Jacobian |
Source code in normflows/core.py
528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 |
|
load(path)
Load model from state dict
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path |
Path including filename where to load model from |
required |
Source code in normflows/core.py
626 627 628 629 630 631 632 |
|
log_prob(x, y=None)
Get log probability for batch
Parameters:
Name | Type | Description | Default |
---|---|---|---|
x |
Batch |
required | |
y |
Classes of x. Must be passed in if |
None
|
Returns:
Type | Description |
---|---|
log probability |
Source code in normflows/core.py
588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 |
|
reset_temperature()
Set temperature values of base distributions back to None
Source code in normflows/core.py
649 650 651 652 653 |
|
sample(num_samples=1, y=None, temperature=None)
Samples from flow-based approximate distribution
Parameters:
Name | Type | Description | Default |
---|---|---|---|
num_samples |
Number of samples to draw |
1
|
|
y |
Classes to sample from, will be sampled uniformly if None |
None
|
|
temperature |
Temperature parameter for temp annealed sampling |
None
|
Returns:
Type | Description |
---|---|
Samples, log probability |
Source code in normflows/core.py
553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 |
|
save(path)
Save state dict of model
Parameters:
Name | Type | Description | Default |
---|---|---|---|
path |
Path including filename where to save model |
required |
Source code in normflows/core.py
618 619 620 621 622 623 624 |
|
set_temperature(temperature)
Set temperature for temperature a annealed sampling
Parameters:
Name | Type | Description | Default |
---|---|---|---|
temperature |
Temperature parameter |
required |
Source code in normflows/core.py
634 635 636 637 638 639 640 641 642 643 644 645 646 647 |
|
NormalizingFlow
Bases: Module
Normalizing Flow model to approximate target distribution
Source code in normflows/core.py
9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 |
|
__init__(q0, flows, p=None)
Constructor
Parameters:
Name | Type | Description | Default |
---|---|---|---|
q0 |
Base distribution |
required | |
flows |
List of flows |
required | |
p |
Target distribution |
None
|
Source code in normflows/core.py
14 15 16 17 18 19 20 21 22 23 24 25 |
|
forward(z)
Transforms latent variable z to the flow variable x
Parameters:

Name | Description | Default
---|---|---
z | Batch in the latent space | required

Returns:

Batch in the space of the target distribution

Source code in normflows/core.py, lines 27-38
forward_and_log_det(z)
Transforms latent variable z to the flow variable x and computes log determinant of the Jacobian
Parameters:

Name | Description | Default
---|---|---
z | Batch in the latent space | required

Returns:

Batch in the space of the target distribution, log determinant of the Jacobian

Source code in normflows/core.py, lines 40-55
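For intuition, here is what the forward/inverse transforms and their log determinants look like for a single 1D affine flow x = a*z + b; a hedged sketch in plain Python, not the library's internals:

```python
import math

def affine_forward_and_log_det(z, a, b):
    # forward map x = a*z + b; the Jacobian is the constant a,
    # so log|det J| = log|a|
    return a * z + b, math.log(abs(a))

def affine_inverse_and_log_det(x, a, b):
    # inverse map z = (x - b) / a, with the inverse Jacobian's
    # log determinant -log|a|
    return (x - b) / a, -math.log(abs(a))
```

Chaining several flows composes the maps and sums their log determinants; the forward and inverse log determinants of the same flow always cancel.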
forward_kld(x)
Estimates forward KL divergence, see arXiv 1912.02762
Parameters:

Name | Description | Default
---|---|---
x | Batch sampled from target distribution | required

Returns:

Estimate of forward KL divergence averaged over batch

Source code in normflows/core.py, lines 87-102
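Up to the (constant) entropy of the target, the forward KLD estimate reduces to the negative mean log-likelihood of the batch under the model, which is why minimizing it amounts to maximum likelihood training. A sketch with a 1D standard normal as the model (names are illustrative, not the library API):

```python
import math

def std_normal_log_prob(x):
    # log density of a 1D standard normal
    return -0.5 * math.log(2 * math.pi) - 0.5 * x * x

def forward_kld_estimate(batch, log_q):
    """Monte Carlo estimate of KL(p || q) up to the constant entropy
    term of p: the negative mean log q(x) over a batch x ~ p."""
    return -sum(log_q(x) for x in batch) / len(batch)
```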
inverse(x)
Transforms flow variable x to the latent variable z
Parameters:

Name | Description | Default
---|---|---
x | Batch in the space of the target distribution | required

Returns:

Batch in the latent space

Source code in normflows/core.py, lines 57-68
inverse_and_log_det(x)
Transforms flow variable x to the latent variable z and computes log determinant of the Jacobian
Parameters:

Name | Description | Default
---|---|---
x | Batch in the space of the target distribution | required

Returns:

Batch in the latent space, log determinant of the Jacobian

Source code in normflows/core.py, lines 70-85
load(path)
Load model from state dict
Parameters:

Name | Description | Default
---|---|---
path | Path including filename where to load model from | required

Source code in normflows/core.py, lines 207-213
log_prob(x)
Get log probability for batch
Parameters:

Name | Description | Default
---|---|---
x | Batch | required

Returns:

log probability

Source code in normflows/core.py, lines 182-197
reverse_alpha_div(num_samples=1, alpha=1, dreg=False)
Alpha divergence when sampling from q
Parameters:

Name | Description | Default
---|---|---
num_samples | Number of samples to draw | 1
alpha | Alpha parameter of the divergence | 1
dreg | Flag whether to use Double Reparametrized Gradient estimator, see arXiv 1810.04152 | False

Returns:

Alpha divergence

Source code in normflows/core.py, lines 133-165
reverse_kld(num_samples=1, beta=1.0, score_fn=True)
Estimates reverse KL divergence, see arXiv 1912.02762
Parameters:

Name | Description | Default
---|---|---
num_samples | Number of samples to draw from base distribution | 1
beta | Annealing parameter, see arXiv 1505.05770 | 1.0
score_fn | Flag whether to include score function in gradient, see arXiv 1703.09194 | True

Returns:

Estimate of the reverse KL divergence averaged over latent samples

Source code in normflows/core.py, lines 104-131
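The reverse KLD objective is estimated by sampling from the model itself: draw z ~ q and average log q(z) - beta * log p(z). A minimal sketch (the score-function handling of arXiv 1703.09194 is omitted, and the names are illustrative, not the library internals):

```python
import math
import random

def std_normal_log_prob(z):
    # log density of a 1D standard normal
    return -0.5 * math.log(2 * math.pi) - 0.5 * z * z

def reverse_kld_estimate(num_samples, sample_q, log_q, log_p, beta=1.0):
    # Monte Carlo estimate of E_q[log q(z) - beta * log p(z)]
    total = 0.0
    for _ in range(num_samples):
        z = sample_q()
        total += log_q(z) - beta * log_p(z)
    return total / num_samples
```

When q equals p, every per-sample term vanishes, so the estimate is exactly zero.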
sample(num_samples=1)
Samples from flow-based approximate distribution
Parameters:

Name | Description | Default
---|---|---
num_samples | Number of samples to draw | 1

Returns:

Samples, log probability

Source code in normflows/core.py, lines 167-180
save(path)
Save state dict of model
Parameters:

Name | Description | Default
---|---|---
path | Path including filename where to save model | required

Source code in normflows/core.py, lines 199-205
NormalizingFlowVAE
Bases: Module
VAE using normalizing flows to express approximate distribution
Source code in normflows/core.py, lines 656-700
__init__(prior, q0=distributions.Dirac(), flows=None, decoder=None)
Constructor of normalizing flow model
Parameters:

Name | Description | Default
---|---|---
prior | Prior distribution of the VAE, e.g. Gaussian | required
decoder | Optional decoder | None
flows | Flows to transform output of base encoder | None
q0 | Base encoder | Dirac()

Source code in normflows/core.py, lines 661-674
forward(x, num_samples=1)
Takes data batch, samples num_samples for each data point from base distribution
Parameters:

Name | Description | Default
---|---|---
x | data batch | required
num_samples | number of samples to draw for each data point | 1

Returns:

latent variables for each batch and sample, log_q, and log_p

Source code in normflows/core.py, lines 676-700
distributions
base
AffineGaussian
Bases: BaseDistribution
Diagonal Gaussian with an affine constant transformation applied to it; can be class conditional or not
Source code in normflows/distributions/base.py, lines 474-570
__init__(shape, affine_shape, num_classes=None)
Constructor
Parameters:

Name | Description | Default
---|---|---
shape | Shape of the variables | required
affine_shape | Shape of the parameters in the affine transformation | required
num_classes | Number of classes if the base is class conditional, None otherwise | None

Source code in normflows/distributions/base.py, lines 480-506
BaseDistribution
Bases: Module
Base distribution of a flow-based model. Parameters do not depend on the target variable (as they would for a VAE encoder)
Source code in normflows/distributions/base.py, lines 8-49
forward(num_samples=1)
Samples from base distribution and calculates log probability
Parameters:

Name | Description | Default
---|---|---
num_samples | Number of samples to draw from the distribution | 1

Returns:

Samples drawn from the distribution, log probability

Source code in normflows/distributions/base.py, lines 17-26
log_prob(z)
Calculate log probability of batch of samples
Parameters:

Name | Description | Default
---|---|---
z | Batch of random variables to determine log probability for | required

Returns:

log probability for each batch element

Source code in normflows/distributions/base.py, lines 28-37
sample(num_samples=1, **kwargs)
Samples from base distribution
Parameters:

Name | Description | Default
---|---|---
num_samples | Number of samples to draw from the distribution | 1

Returns:

Samples drawn from the distribution

Source code in normflows/distributions/base.py, lines 39-49
ClassCondDiagGaussian
Bases: BaseDistribution
Class conditional multivariate Gaussian distribution with diagonal covariance matrix
Source code in normflows/distributions/base.py, lines 273-344
__init__(shape, num_classes)
Constructor
Parameters:

Name | Description | Default
---|---|---
shape | Tuple with shape of data; if int, shape has one dimension | required
num_classes | Number of classes to condition on | required

Source code in normflows/distributions/base.py, lines 278-297
ConditionalDiagGaussian
Bases: BaseDistribution
Conditional multivariate Gaussian distribution with diagonal covariance matrix. The parameters are produced by a context encoder, where the context is the variable to condition on
Source code in normflows/distributions/base.py, lines 106-155
__init__(shape, context_encoder)
Constructor
Parameters:

Name | Description | Default
---|---|---
shape | Tuple with shape of data; if int, shape has one dimension | required
context_encoder | Computes mean and log of the standard deviation | required

Source code in normflows/distributions/base.py, lines 112-130
DiagGaussian
Bases: BaseDistribution
Multivariate Gaussian distribution with diagonal covariance matrix
Source code in normflows/distributions/base.py, lines 52-103
__init__(shape, trainable=True)
Constructor
Parameters:

Name | Description | Default
---|---|---
shape | Tuple with shape of data; if int, shape has one dimension | required
trainable | Flag whether to use trainable or fixed parameters | True

Source code in normflows/distributions/base.py, lines 57-78
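The log density a diagonal Gaussian evaluates is the standard closed form; a self-contained sketch in plain Python (parameter names mirror the docs, but the function itself is illustrative, not the module's code):

```python
import math

def diag_gaussian_log_prob(z, loc, log_scale):
    """log N(z; loc, diag(exp(log_scale))^2), summed over dimensions."""
    d = len(z)
    return (
        -0.5 * d * math.log(2 * math.pi)
        - sum(log_scale)
        - 0.5 * sum(
            ((zi - m) / math.exp(ls)) ** 2
            for zi, m, ls in zip(z, loc, log_scale)
        )
    )
```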
GaussianMixture
Bases: BaseDistribution
Mixture of Gaussians with diagonal covariance matrix
Source code in normflows/distributions/base.py, lines 573-659
__init__(n_modes, dim, loc=None, scale=None, weights=None, trainable=True)
Constructor
Parameters:

Name | Description | Default
---|---|---
n_modes | Number of modes of the mixture model | required
dim | Number of dimensions of each Gaussian | required
loc | List of mean values | None
scale | List of diagonals of the covariance matrices | None
weights | List of mode probabilities | None
trainable | Flag; if True, parameters will be optimized during training | True

Source code in normflows/distributions/base.py, lines 578-614
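The mixture log density is a log-sum-exp over the per-mode Gaussian log densities weighted by the mode probabilities. A hedged sketch of that computation (illustrative names, not the module's internals):

```python
import math

def mixture_log_prob(z, locs, scales, weights):
    """log sum_k w_k * N(z; loc_k, diag(scale_k)^2), computed stably
    via the log-sum-exp trick."""
    log_terms = []
    for loc, scale, w in zip(locs, scales, weights):
        lp = sum(
            -0.5 * math.log(2 * math.pi) - math.log(s)
            - 0.5 * ((zi - m) / s) ** 2
            for zi, m, s in zip(z, loc, scale)
        )
        log_terms.append(math.log(w) + lp)
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))
```

A single mode with weight 1 reduces to the plain diagonal Gaussian, and splitting one mode into two identical copies with halved weights leaves the density unchanged.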
GaussianPCA
Bases: BaseDistribution
Gaussian distribution resulting from linearly mapping a normally distributed latent variable describing the "content" of the target
Source code in normflows/distributions/base.py, lines 662-719
__init__(dim, latent_dim=None, sigma=0.1)
Constructor
Parameters:

Name | Description | Default
---|---|---
dim | Number of dimensions of the flow variables | required
latent_dim | Number of dimensions of the latent "content" variable; if None, it is set equal to dim | None
sigma | Noise level | 0.1

Source code in normflows/distributions/base.py, lines 668-687
GlowBase
Bases: BaseDistribution
Base distribution of the Glow model, i.e. Diagonal Gaussian with one mean and log scale for each channel
Source code in normflows/distributions/base.py, lines 347-471
__init__(shape, num_classes=None, logscale_factor=3.0)
Constructor
Parameters:

Name | Description | Default
---|---|---
shape | Shape of the variables | required
num_classes | Number of classes if the base is class conditional, None otherwise | None
logscale_factor | Scaling factor for mean and log variance | 3.0

Source code in normflows/distributions/base.py, lines 353-395
Uniform
Bases: BaseDistribution
Multivariate uniform distribution
Source code in normflows/distributions/base.py, lines 158-195
__init__(shape, low=-1.0, high=1.0)
Constructor
Parameters:

Name | Description | Default
---|---|---
shape | Tuple with shape of data; if int, shape has one dimension | required
low | Lower bound of uniform distribution | -1.0
high | Upper bound of uniform distribution | 1.0

Source code in normflows/distributions/base.py, lines 163-180
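A multivariate uniform on [low, high]^d has constant density 1/(high - low)^d on its support and zero outside, so the log density is a constant or -inf. A minimal sketch (illustrative, not the module's code):

```python
import math

def uniform_log_prob(z, low=-1.0, high=1.0):
    """Log density of a multivariate uniform on [low, high]^d;
    outside the support the density is zero, hence -inf."""
    if any(zi < low or zi > high for zi in z):
        return float("-inf")
    return -len(z) * math.log(high - low)
```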
UniformGaussian
Bases: BaseDistribution
Distribution of a 1D random variable with some entries having a uniform and others a Gaussian distribution
Source code in normflows/distributions/base.py, lines 198-270
__init__(ndim, ind, scale=None)
Constructor
Parameters:

Name | Description | Default
---|---|---
ndim | Int, number of dimensions | required
ind | Iterable, indices of uniformly distributed entries | required
scale | Iterable, standard deviation of Gaussian or width of uniform distribution | None

Source code in normflows/distributions/base.py, lines 204-239
decoder
BaseDecoder
Bases: Module
Source code in normflows/distributions/decoder.py, lines 6-31
forward(z)
Decodes z to x
Parameters:

Name | Description | Default
---|---|---
z | latent variable | required

Returns:

x, std of x

Source code in normflows/distributions/decoder.py, lines 10-19
log_prob(x, z)
Log probability
Parameters:

Name | Description | Default
---|---|---
x | observable | required
z | latent variable | required

Returns:

log(p) of x given z

Source code in normflows/distributions/decoder.py, lines 21-31
NNBernoulliDecoder
Bases: BaseDecoder
BaseDecoder representing a Bernoulli distribution with mean parametrized by a NN
Source code in normflows/distributions/decoder.py, lines 73-102
__init__(net)
Constructor
Parameters:

Name | Description | Default
---|---|---
net | Neural network parametrizing the mean of the Bernoulli (mean = sigmoid(nn_out)) | required

Source code in normflows/distributions/decoder.py, lines 78-85
NNDiagGaussianDecoder
Bases: BaseDecoder
BaseDecoder representing a diagonal Gaussian distribution with mean and std parametrized by a NN
Source code in normflows/distributions/decoder.py, lines 34-70
__init__(net)
Constructor
Parameters:

Name | Description | Default
---|---|---
net | Neural network parametrizing mean and standard deviation of diagonal Gaussian | required

Source code in normflows/distributions/decoder.py, lines 39-46
distribution_test
DistributionTest
Bases: TestCase
Generic test case for distribution modules
Source code in normflows/distributions/distribution_test.py, lines 6-52
encoder
BaseEncoder
Bases: Module
Base distribution of a flow-based variational autoencoder. Parameters of the distribution depend on the target variable x
Source code in normflows/distributions/encoder.py, lines 6-36
forward(x, num_samples=1)
Parameters:

Name | Description | Default
---|---|---
x | Variable to condition on, first dimension is batch size | required
num_samples | number of samples to draw per element of mini-batch | 1

Returns:

sample of z for x, log probability for sample

Source code in normflows/distributions/encoder.py, lines 15-24
log_prob(z, x)
Parameters:

Name | Description | Default
---|---|---
z | Primary random variable, first dimension is batch size | required
x | Variable to condition on, first dimension is batch size | required

Returns:

log probability of z given x

Source code in normflows/distributions/encoder.py, lines 26-36
ConstDiagGaussian
Bases: BaseEncoder
Source code in normflows/distributions/encoder.py, lines 74-127
__init__(loc, scale)
Multivariate Gaussian distribution with diagonal covariance and parameters being constant wrt x
Parameters:

Name | Description | Default
---|---|---
loc | mean vector of the distribution | required
scale | vector of the standard deviations on the diagonal of the covariance matrix | required

Source code in normflows/distributions/encoder.py, lines 75-89
forward(x=None, num_samples=1)
Parameters:

Name | Description | Default
---|---|---
x | Variable to condition on, will only be used to determine the batch size | None
num_samples | number of samples to draw per element of mini-batch | 1

Returns:

sample of z for x, log probability for sample

Source code in normflows/distributions/encoder.py, lines 91-109
log_prob(z, x)
Parameters:

Name | Description | Default
---|---|---
z | Primary random variable, first dimension is batch dimension | required
x | Variable to condition on, first dimension is batch dimension | required

Returns:

log probability of z given x

Source code in normflows/distributions/encoder.py, lines 111-127
NNDiagGaussian
Bases: BaseEncoder
Diagonal Gaussian distribution with mean and variance determined by a neural network
Source code in normflows/distributions/encoder.py, lines 130-188
__init__(net)
Constructor
Parameters:

Name | Description | Default
---|---|---
net | net computing mean (first n/2 outputs) and standard deviation (second n/2 outputs) | required

Source code in normflows/distributions/encoder.py, lines 135-142
forward(x, num_samples=1)
Parameters:

Name | Description | Default
---|---|---
x | Variable to condition on | required
num_samples | number of samples to draw per element of mini-batch | 1

Returns:

sample of z for x, log probability for sample

Source code in normflows/distributions/encoder.py, lines 144-165
log_prob(z, x)
Parameters:

Name | Description | Default
---|---|---
z | Primary random variable, first dimension is batch dimension | required
x | Variable to condition on, first dimension is batch dimension | required

Returns:

log probability of z given x

Source code in normflows/distributions/encoder.py, lines 167-188
linear_interpolation
LinearInterpolation
Linear interpolation of two distributions in the log space
Source code in normflows/distributions/linear_interpolation.py, lines 1-27
__init__(dist1, dist2, alpha)
Constructor
Interpolation parameter alpha:
log_p = alpha * log_p_1 + (1 - alpha) * log_p_2
Parameters:

Name | Description | Default
---|---|---
dist1 | First distribution | required
dist2 | Second distribution | required
alpha | Interpolation parameter | required

Source code in normflows/distributions/linear_interpolation.py, lines 6-22
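The interpolation formula above is a one-liner; a sketch in plain Python (the function name is illustrative):

```python
def interpolated_log_prob(log_p1, log_p2, alpha, z):
    """Interpolation in log space: log_p = alpha * log_p_1
    + (1 - alpha) * log_p_2. Note the result is generally an
    unnormalized log density."""
    return alpha * log_p1(z) + (1 - alpha) * log_p2(z)
```

At alpha = 1 this recovers the first distribution, at alpha = 0 the second; intermediate values give a geometric bridge between the two densities, as used in annealed sampling schemes.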
mh_proposal
DiagGaussianProposal
Bases: MHProposal
Diagonal Gaussian distribution with previous value as mean as a proposal for Metropolis Hastings algorithm
Source code in normflows/distributions/mh_proposal.py, lines 47-83
__init__(shape, scale)
Constructor
Parameters:

Name | Description | Default
---|---|---
shape | Shape of variables to sample | required
scale | Standard deviation of distribution | required

Source code in normflows/distributions/mh_proposal.py, lines 53-63
MHProposal
Bases: Module
Proposal distribution for the Metropolis Hastings algorithm
Source code in normflows/distributions/mh_proposal.py, lines 6-44
forward(z)
Draw samples given z and compute log probability difference
log(p(z | z_new)) - log(p(z_new | z))
Parameters:

Name | Description | Default
---|---|---
z | Previous samples | required

Returns:

Proposal, difference of log probability ratio

Source code in normflows/distributions/mh_proposal.py, lines 31-44
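A Gaussian random-walk proposal centered at the previous value is symmetric in (z, z_new), so the Metropolis-Hastings correction log p(z | z_new) - log p(z_new | z) is exactly zero. A hedged sketch of both quantities (illustrative names, not the module's internals):

```python
import math
import random

def gaussian_proposal(z, scale, rng=random):
    """Random-walk proposal z_new ~ N(z, scale^2) with the (zero)
    log probability difference of a symmetric proposal."""
    z_new = [rng.gauss(zi, scale) for zi in z]
    log_ratio = 0.0  # symmetric proposal: forward and reverse densities match
    return z_new, log_ratio

def log_prob_proposal(z_new, z, scale):
    # log N(z_new; z, scale^2), summed over dimensions
    return sum(
        -0.5 * math.log(2 * math.pi) - math.log(scale)
        - 0.5 * ((a - b) / scale) ** 2
        for a, b in zip(z_new, z)
    )
```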
log_prob(z_, z)
Parameters:

Name | Description | Default
---|---|---
z_ | Potential new sample | required
z | Previous sample | required

Returns:

Log probability of proposal distribution

Source code in normflows/distributions/mh_proposal.py, lines 20-29
sample(z)
Sample new value based on previous z
Source code in normflows/distributions/mh_proposal.py, lines 14-18
prior
ImagePrior
Bases: Module
Intensities of an image determine probability density of prior
Source code in normflows/distributions/prior.py, lines 21-104
__init__(image, x_range=[-3, 3], y_range=[-3, 3], eps=1e-10)
Constructor
Parameters:

Name | Description | Default
---|---|---
image | image as np matrix | required
x_range | x range to position image at | [-3, 3]
y_range | y range to position image at | [-3, 3]
eps | small value to add to image to avoid log(0) problems | 1e-10

Source code in normflows/distributions/prior.py, lines 26-57
log_prob(z)
Parameters:

Name | Description | Default
---|---|---
z | value or batch of latent variable | required

Returns:

log probability of the distribution for z

Source code in normflows/distributions/prior.py, lines 59-69
rejection_sampling(num_steps=1)
Perform rejection sampling on image distribution
Parameters:

Name | Description | Default
---|---|---
num_steps | Number of rejection sampling steps to perform | 1

Returns:

Accepted samples

Source code in normflows/distributions/prior.py, lines 71-88
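Rejection sampling of the kind used here accepts a proposal z with probability target(z) / (bound * proposal(z)), where bound is an upper bound on that ratio. A generic, hedged sketch of the loop (illustrative names, not the module's code):

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf,
                     bound, num_samples, rng=random):
    """Accept z ~ proposal with probability
    target_pdf(z) / (bound * proposal_pdf(z)); repeat until
    num_samples points have been accepted."""
    samples = []
    while len(samples) < num_samples:
        z = proposal_sample()
        u = rng.random()
        if u * bound * proposal_pdf(z) < target_pdf(z):
            samples.append(z)
    return samples
```

For example, sampling a uniform on [0, 0.5] (density 2) from a uniform proposal on [0, 1] (density 1) with bound 2 accepts roughly half the proposals and never returns a point outside the support.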
sample(num_samples=1)
Sample from image distribution through rejection sampling
Parameters:

Name | Description | Default
---|---|---
num_samples | Number of samples to draw | 1

Returns:

Samples

Source code in normflows/distributions/prior.py, lines 90-104
PriorDistribution
Source code in normflows/distributions/prior.py, lines 6-18
log_prob(z)
Parameters:

Name | Description | Default
---|---|---
z | value or batch of latent variable | required

Returns:

log probability of the distribution for z

Source code in normflows/distributions/prior.py, lines 10-18
Sinusoidal
Bases: PriorDistribution
Source code in normflows/distributions/prior.py, lines 152-194
__init__(scale, period)
Distribution 2d with sinusoidal density given by
w_1(z) = sin(2*pi / period * z[0])
log(p) = - 1/2 * ((z[1] - w_1(z)) / (2 * scale)) ** 2
Parameters:

Name | Description | Default
---|---|---
scale | scale of the distribution, see formula | required
period | period of the sinusoidal | required

Source code in normflows/distributions/prior.py, lines 153-167
log_prob(z)
log(p) = - 1/2 * ((z[1] - w_1(z)) / (2 * scale)) ** 2
w_1(z) = sin(2*pi / period * z[0])
Parameters:

Name | Description | Default
---|---|---
z | value or batch of latent variable | required

Returns:

log probability of the distribution for z

Source code in normflows/distributions/prior.py, lines 169-194
Sinusoidal_gap
Bases: PriorDistribution
Source code in normflows/distributions/prior.py, lines 197-245
__init__(scale, period)
Distribution 2d with sinusoidal density with gap given by
w_1(z) = sin(2*pi / period * z[0])
w_2(z) = 3 * exp(-0.5 * ((z[0] - 1) / 0.6) ** 2)
log(p) = -log(exp(-0.5 * ((z[1] - w_1(z)) / 0.35) ** 2) + exp(-0.5 * ((z[1] - w_1(z) + w_2(z)) / 0.35) ** 2))
Parameters:

Name | Description | Default
---|---|---
scale | scale of the distribution, see formula | required
period | period of the sinusoidal | required

Source code in normflows/distributions/prior.py, lines 198-216
log_prob(z)
Parameters:

Name | Description | Default
---|---|---
z | value or batch of latent variable | required

Returns:

log probability of the distribution for z

Source code in normflows/distributions/prior.py, lines 218-245
Sinusoidal_split
Bases: PriorDistribution
Source code in normflows/distributions/prior.py, lines 248-296
__init__(scale, period)
Distribution 2d with sinusoidal density with split given by
w_1(z) = sin(2*pi / period * z[0])
w_3(z) = 3 * sigmoid((z[0] - 1) / 0.3)
log(p) = -log(exp(-0.5 * ((z[1] - w_1(z)) / 0.4) ** 2) + exp(-0.5 * ((z[1] - w_1(z) + w_3(z)) / 0.35) ** 2))
Parameters:

Name | Description | Default
---|---|---
scale | scale of the distribution, see formula | required
period | period of the sinusoidal | required

Source code in normflows/distributions/prior.py, lines 249-267
log_prob(z)
Parameters:

Name | Description | Default
---|---|---
z | value or batch of latent variable | required

Returns:

log probability of the distribution for z

Source code in normflows/distributions/prior.py, lines 269-296
Smiley
Bases: PriorDistribution
Source code in normflows/distributions/prior.py, lines 299-327
__init__(scale)
Distribution 2d of a smiley :)
Parameters:

Name | Description | Default
---|---|---
scale | scale of the smiley | required

Source code in normflows/distributions/prior.py, lines 300-307
log_prob(z)
Parameters:

Name | Description | Default
---|---|---
z | value or batch of latent variable | required

Returns:

log probability of the distribution for z

Source code in normflows/distributions/prior.py, lines 309-327
TwoModes
Bases: PriorDistribution
Source code in normflows/distributions/prior.py, lines 107-149
__init__(loc, scale)
Distribution 2d with two modes at
z[0] = -loc
and z[0] = loc
following the density
log(p) = -1/2 * ((norm(z) - loc) / (2 * scale)) ** 2
+ log(exp(-1/2 * ((z[0] - loc) / (3 * scale)) ** 2) + exp(-1/2 * ((z[0] + loc) / (3 * scale)) ** 2))
Parameters:

Name | Description | Default
---|---|---
loc | distance of modes from the origin | required
scale | scale of modes | required

Source code in normflows/distributions/prior.py, lines 108-124
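The two-mode density combines a quadratic penalty pulling the norm of z toward the ring of radius loc with two Gaussian bumps on z[0] at ±loc. A sketch of the log density for a single 2D point, assuming the standard sign convention that log p decreases away from the modes (illustrative, not the module's code):

```python
import math

def two_modes_log_prob(z, loc, scale):
    """Unnormalized log density of a two-mode 2D distribution
    for a single point z = (z0, z1)."""
    norm_z = math.hypot(z[0], z[1])
    return (
        # quadratic penalty toward the ring of radius loc;
        # the leading minus makes log p decrease away from it
        -0.5 * ((norm_z - loc) / (2 * scale)) ** 2
        + math.log(
            math.exp(-0.5 * ((z[0] - loc) / (3 * scale)) ** 2)
            + math.exp(-0.5 * ((z[0] + loc) / (3 * scale)) ** 2)
        )
    )
```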
log_prob(z)
log(p) = -1/2 * ((norm(z) - loc) / (2 * scale)) ** 2
+ log(exp(-1/2 * ((z[0] - loc) / (3 * scale)) ** 2) + exp(-1/2 * ((z[0] + loc) / (3 * scale)) ** 2))
Parameters:

Name | Description | Default
---|---|---
z | value or batch of latent variable | required

Returns:

log probability of the distribution for z

Source code in normflows/distributions/prior.py, lines 126-149
target
CircularGaussianMixture
Bases: Module
Two-dimensional Gaussian mixture arranged in a circle
Source code in normflows/distributions/target.py
__init__(n_modes=8)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| n_modes | | Number of modes | 8 |

Source code in normflows/distributions/target.py
ConditionalDiagGaussian
Bases: Target
Gaussian distribution conditioned on its mean and standard deviation
The first half of the entries of the condition, also called context, are the mean, while the second half are the standard deviation.
Source code in normflows/distributions/target.py
RingMixture
Bases: Target
Mixture of ring distributions in two dimensions
Source code in normflows/distributions/target.py
Target
Bases: Module
Sample target distributions to test models
Source code in normflows/distributions/target.py
__init__(prop_scale=torch.tensor(6.0), prop_shift=torch.tensor(-3.0))
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prop_scale | | Scale for the uniform proposal | tensor(6.0) |
| prop_shift | | Shift for the uniform proposal | tensor(-3.0) |

Source code in normflows/distributions/target.py
log_prob(z)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| z | | value or batch of latent variable | required |

Returns:

| Type | Description |
|---|---|
| | log probability of the distribution for z |

Source code in normflows/distributions/target.py
rejection_sampling(num_steps=1)
Perform rejection sampling on image distribution
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_steps | | Number of rejection sampling steps to perform | 1 |

Returns:

| Type | Description |
|---|---|
| | Accepted samples |

Source code in normflows/distributions/target.py
sample(num_samples=1)
Sample from image distribution through rejection sampling
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_samples | | Number of samples to draw | 1 |

Returns:

| Type | Description |
|---|---|
| | Samples |

Source code in normflows/distributions/target.py
TwoIndependent
Bases: Target
Target distribution that combines two independent distributions of equal size into one distribution. This is needed for Augmented Normalizing Flows, see https://arxiv.org/abs/2002.07101
Source code in normflows/distributions/target.py
TwoMoons
Bases: Target
Bimodal two-dimensional distribution
Source code in normflows/distributions/target.py
log_prob(z)
log(p) = -1/2 * ((norm(z) - 2) / 0.2) ** 2
+ log(exp(-1/2 * ((z[0] - 2) / 0.3) ** 2) + exp(-1/2 * ((z[0] + 2) / 0.3) ** 2))
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| z | | value or batch of latent variable | required |

Returns:

| Type | Description |
|---|---|
| | log probability of the distribution for z |

Source code in normflows/distributions/target.py
flows
affine
autoregressive
Autoregressive
Bases: Flow
Transforms each input variable with an invertible elementwise transformation.
The parameters of each invertible elementwise transformation can be functions of previous input variables, but they must not depend on the current or any following input variables.
NOTE Calculating the inverse transform is D times slower than calculating the forward transform, where D is the dimensionality of the input to the transform.
Source code in normflows/flows/affine/autoregressive.py
MaskedAffineAutoregressive
Bases: Autoregressive
Masked affine autoregressive flow, mostly referred to as Masked Autoregressive Flow (MAF), see arXiv 1705.07057.
Source code in normflows/flows/affine/autoregressive.py
__init__(features, hidden_features, context_features=None, num_blocks=2, use_residual_blocks=True, random_mask=False, activation=F.relu, dropout_probability=0.0, use_batch_norm=False)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| features | | Number of features/input dimensions | required |
| hidden_features | | Number of hidden units in the MADE network | required |
| context_features | | Number of context/conditional features | None |
| num_blocks | | Number of blocks in the MADE network | 2 |
| use_residual_blocks | | Flag whether residual blocks should be used | True |
| random_mask | | Flag whether to use random masks | False |
| activation | | Activation function to be used in the MADE network | relu |
| dropout_probability | | Dropout probability in the MADE network | 0.0 |
| use_batch_norm | | Flag whether batch normalization should be used | False |

Source code in normflows/flows/affine/autoregressive.py
coupling
AffineConstFlow
Bases: Flow
Scales and shifts with learned constants per dimension. The scaling layer in the NICE paper is a special case of this where t is None.
Source code in normflows/flows/affine/coupling.py
__init__(shape, scale=True, shift=True)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| shape | | Shape of the coupling layer | required |
| scale | | Flag whether to apply scaling | True |
| shift | | Flag whether to apply shift | True |
| logscale_factor | | Optional factor which can be used to control the scale of the log scale factor | required |

Source code in normflows/flows/affine/coupling.py
AffineCoupling
Bases: Flow
Affine coupling layer as introduced in the RealNVP paper, see arXiv: 1605.08803
Source code in normflows/flows/affine/coupling.py
__init__(param_map, scale=True, scale_map='exp')
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| param_map | | Maps features to shift and scale parameter (if applicable) | required |
| scale | | Flag whether scale shall be applied | True |
| scale_map | | Map to be applied to the scale parameter, can be 'exp' as in RealNVP or 'sigmoid' as in Glow; 'sigmoid_inv' uses multiplicative sigmoid scale when sampling from the model | 'exp' |

Source code in normflows/flows/affine/coupling.py
forward(z)
z is a list of z1 and z2; z = [z1, z2]. z1 is left constant, and the affine map, with parameters depending on z1, is applied to z2.
Source code in normflows/flows/affine/coupling.py
AffineCouplingBlock
Bases: Flow
Affine Coupling layer including split and merge operation
Source code in normflows/flows/affine/coupling.py
__init__(param_map, scale=True, scale_map='exp', split_mode='channel')
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| param_map | | Maps features to shift and scale parameter (if applicable) | required |
| scale | | Flag whether scale shall be applied | True |
| scale_map | | Map to be applied to the scale parameter, can be 'exp' as in RealNVP or 'sigmoid' as in Glow | 'exp' |
| split_mode | | Splitting mode, for possible values see Split class | 'channel' |

Source code in normflows/flows/affine/coupling.py
CCAffineConst
Bases: Flow
Affine constant flow layer with class-conditional parameters
Source code in normflows/flows/affine/coupling.py
MaskedAffineFlow
Bases: Flow
RealNVP as introduced in arXiv: 1605.08803
Masked affine flow:
f(z) = b * z + (1 - b) * (z * exp(s(b * z)) + t(b * z))
- AffineHalfFlow (a class found in other implementations) corresponds to MaskedAffineFlow with an alternating bit mask
- NICE is a masked affine flow with only shifts (volume preserving)
Source code in normflows/flows/affine/coupling.py
__init__(b, t=None, s=None)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| b | | mask for features, i.e. tensor of same size as latent data point filled with 0s and 1s | required |
| t | | translation mapping, i.e. neural network, where first input dimension is batch dim; if None, no translation is applied | None |
| s | | scale mapping, i.e. neural network, where first input dimension is batch dim; if None, no scale is applied | None |

Source code in normflows/flows/affine/coupling.py
glow
GlowBlock
Bases: Flow
Glow: Generative Flow with Invertible 1×1 Convolutions, arXiv: 1807.03039
One Block of the Glow model, comprised of
- MaskedAffineFlow (affine coupling layer)
- Invertible1x1Conv (dropped if there is only one channel)
- ActNorm (first batch used for initialization)
Source code in normflows/flows/affine/glow.py
__init__(channels, hidden_channels, scale=True, scale_map='sigmoid', split_mode='channel', leaky=0.0, init_zeros=True, use_lu=True, net_actnorm=False)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| channels | | Number of channels of the data | required |
| hidden_channels | | Number of channels in the hidden layer of the ConvNet | required |
| scale | | Flag whether to include scale in affine coupling layer | True |
| scale_map | | Map to be applied to the scale parameter, can be 'exp' as in RealNVP or 'sigmoid' as in Glow | 'sigmoid' |
| split_mode | | Splitting mode, for possible values see Split class | 'channel' |
| leaky | | Leaky parameter of LeakyReLUs of ConvNet2d | 0.0 |
| init_zeros | | Flag whether to initialize last conv layer with zeros | True |
| use_lu | | Flag whether to parametrize weights through the LU decomposition in invertible 1x1 convolution layers | True |
| logscale_factor | | Factor which can be used to control the scale of the log scale factor, see source | required |

Source code in normflows/flows/affine/glow.py
base
Composite
Bases: Flow
Composes several flows into one, in the order they are given.
Source code in normflows/flows/base.py
__init__(flows)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| flows | | Iterable of flows to composite | required |

Source code in normflows/flows/base.py
Flow
Bases: Module
Generic class for flow functions
Source code in normflows/flows/base.py
forward(z)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| z | | input variable, first dimension is batch dim | required |

Returns:

| Type | Description |
|---|---|
| | transformed z and log of absolute determinant |

Source code in normflows/flows/base.py
Reverse
Bases: Flow
Switches the forward transform of a flow layer with its inverse and vice versa
Source code in normflows/flows/base.py
__init__(flow)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| flow | | Flow layer to be reversed | required |

Source code in normflows/flows/base.py
flow_test
FlowTest
Bases: TestCase
Generic test case for flow modules
Source code in normflows/flows/flow_test.py
mixing
Invertible1x1Conv
Bases: Flow
Invertible 1x1 convolution introduced in the Glow paper. Assumes 4d input/output tensors of the form NCHW.
Source code in normflows/flows/mixing.py
__init__(num_channels, use_lu=False)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_channels | | Number of channels of the data | required |
| use_lu | | Flag whether to parametrize weights through the LU decomposition | False |

Source code in normflows/flows/mixing.py
InvertibleAffine
Bases: Flow
Invertible affine transformation without shift, i.e. one-dimensional version of the invertible 1x1 convolutions
Source code in normflows/flows/mixing.py
__init__(num_channels, use_lu=True)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_channels | | Number of channels of the data | required |
| use_lu | | Flag whether to parametrize weights through the LU decomposition | True |

Source code in normflows/flows/mixing.py
LULinearPermute
Bases: Flow
Fixed permutation combined with a linear transformation parametrized using the LU decomposition, used in https://arxiv.org/abs/1906.04032
Source code in normflows/flows/mixing.py
__init__(num_channels, identity_init=True)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_channels | | Number of dimensions of the data | required |
| identity_init | | Flag, whether to initialize linear transform as identity matrix | True |

Source code in normflows/flows/mixing.py
Permute
Bases: Flow
Permutes features along the channel dimension
Source code in normflows/flows/mixing.py
__init__(num_channels, mode='shuffle')
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_channels | | Number of channels | required |
| mode | | Mode of permuting features, can be 'shuffle' for random permutation or 'swap' for interchanging upper and lower part | 'shuffle' |

Source code in normflows/flows/mixing.py
neural_spline
autoregressive
Implementations of autoregressive transforms. Code taken from https://github.com/bayesiains/nsf
autoregressive_test
Tests for the autoregressive transforms. Code partially taken from https://github.com/bayesiains/nsf
coupling
Implementations of various coupling layers. Code taken from https://github.com/bayesiains/nsf
Coupling
Bases: Flow
A base class for coupling layers. Supports 2D inputs (NxD), as well as 4D inputs for images (NxCxHxW). For images the splitting is done on the channel dimension, using the provided 1D mask.
Source code in normflows/flows/neural_spline/coupling.py
__init__(mask, transform_net_create_fn, unconditional_transform=None)
Constructor.
mask: a 1-dim tensor, tuple or list. It indexes inputs as follows:
- if mask[i] > 0, input[i] will be transformed
- if mask[i] <= 0, input[i] will be passed unchanged
Source code in normflows/flows/neural_spline/coupling.py
coupling_test
Tests for the coupling Transforms. Code partially taken from https://github.com/bayesiains/nsf
wrapper
AutoregressiveRationalQuadraticSpline
Bases: Flow
Neural spline flow coupling layer, wrapper for the implementation of Durkan et al., see sources
Source code in normflows/flows/neural_spline/wrapper.py
__init__(num_input_channels, num_blocks, num_hidden_channels, num_context_channels=None, num_bins=8, tail_bound=3, activation=nn.ReLU, dropout_probability=0.0, permute_mask=False, init_identity=True)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_input_channels | int | Flow dimension | required |
| num_blocks | int | Number of residual blocks of the parameter NN | required |
| num_hidden_channels | int | Number of hidden units of the NN | required |
| num_context_channels | int | Number of context/conditional channels | None |
| num_bins | int | Number of bins | 8 |
| tail_bound | int | Bound of the spline tails | 3 |
| activation | Module | Activation function | ReLU |
| dropout_probability | float | Dropout probability of the NN | 0.0 |
| permute_mask | bool | Flag, permutes the mask of the NN | False |
| init_identity | bool | Flag, initialize transform as identity | True |

Source code in normflows/flows/neural_spline/wrapper.py
CircularAutoregressiveRationalQuadraticSpline
Bases: Flow
Neural spline flow coupling layer, wrapper for the implementation of Durkan et al., see sources
Source code in normflows/flows/neural_spline/wrapper.py
__init__(num_input_channels, num_blocks, num_hidden_channels, ind_circ, num_context_channels=None, num_bins=8, tail_bound=3, activation=nn.ReLU, dropout_probability=0.0, permute_mask=True, init_identity=True)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_input_channels | int | Flow dimension | required |
| num_blocks | int | Number of residual blocks of the parameter NN | required |
| num_hidden_channels | int | Number of hidden units of the NN | required |
| ind_circ | Iterable | Indices of the circular coordinates | required |
| num_context_channels | int | Number of context/conditional channels | None |
| num_bins | int | Number of bins | 8 |
| tail_bound | int | Bound of the spline tails | 3 |
| activation | torch module | Activation function | ReLU |
| dropout_probability | float | Dropout probability of the NN | 0.0 |
| permute_mask | bool | Flag, permutes the mask of the NN | True |
| init_identity | bool | Flag, initialize transform as identity | True |

Source code in normflows/flows/neural_spline/wrapper.py
CircularCoupledRationalQuadraticSpline
Bases: Flow
Neural spline flow coupling layer with circular coordinates
Source code in normflows/flows/neural_spline/wrapper.py
__init__(num_input_channels, num_blocks, num_hidden_channels, ind_circ, num_context_channels=None, num_bins=8, tail_bound=3.0, activation=nn.ReLU, dropout_probability=0.0, reverse_mask=False, mask=None, init_identity=True)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_input_channels | int | Flow dimension | required |
| num_blocks | int | Number of residual blocks of the parameter NN | required |
| num_hidden_channels | int | Number of hidden units of the NN | required |
| num_context_channels | int | Number of context/conditional channels | None |
| ind_circ | Iterable | Indices of the circular coordinates | required |
| num_bins | int | Number of bins | 8 |
| tail_bound | float or Iterable | Bound of the spline tails | 3.0 |
| activation | torch module | Activation function | ReLU |
| dropout_probability | float | Dropout probability of the NN | 0.0 |
| reverse_mask | bool | Flag whether the reverse mask should be used | False |
| mask | torch tensor | Mask to be used; if None, an alternating mask is generated | None |
| init_identity | bool | Flag, initialize transform as identity | True |

Source code in normflows/flows/neural_spline/wrapper.py
CoupledRationalQuadraticSpline
Bases: Flow
Neural spline flow coupling layer, wrapper for the implementation of Durkan et al., see source
Source code in normflows/flows/neural_spline/wrapper.py
__init__(num_input_channels, num_blocks, num_hidden_channels, num_context_channels=None, num_bins=8, tails='linear', tail_bound=3.0, activation=nn.ReLU, dropout_probability=0.0, reverse_mask=False, init_identity=True)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| num_input_channels | int | Flow dimension | required |
| num_blocks | int | Number of residual blocks of the parameter NN | required |
| num_hidden_channels | int | Number of hidden units of the NN | required |
| num_context_channels | int | Number of context/conditional channels | None |
| num_bins | int | Number of bins | 8 |
| tails | str | Behaviour of the tails of the distribution, can be linear, circular for periodic distribution, or None for distribution on the compact interval | 'linear' |
| tail_bound | float | Bound of the spline tails | 3.0 |
| activation | torch module | Activation function | ReLU |
| dropout_probability | float | Dropout probability of the NN | 0.0 |
| reverse_mask | bool | Flag whether the reverse mask should be used | False |
| init_identity | bool | Flag, initialize transform as identity | True |

Source code in normflows/flows/neural_spline/wrapper.py
normalization
ActNorm
Bases: AffineConstFlow
An AffineConstFlow with data-dependent initialization: on the very first batch, s and t are initialized such that the output is unit Gaussian, as described in the Glow paper.
Source code in normflows/flows/normalization.py
BatchNorm
Bases: Flow
Batch normalization without considering the derivatives of the batch statistics, see arXiv: 1605.08803
Source code in normflows/flows/normalization.py
forward(z)
Do batch norm over batch and sample dimension
Source code in normflows/flows/normalization.py
periodic
PeriodicShift
Bases: Flow
Shift and wrap periodic coordinates
Source code in normflows/flows/periodic.py
__init__(ind, bound=1.0, shift=0.0)
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ind | Iterable | Indices of coordinates to be mapped | required |
| bound | float or iterable | Bound of interval | 1.0 |
| shift | Tensor | Shift to be applied | 0.0 |

Source code in normflows/flows/periodic.py
PeriodicWrap
Bases: Flow
Map periodic coordinates to fixed interval
Source code in normflows/flows/periodic.py
__init__(ind, bound=1.0)
Constructor
ind: Iterable, indices of coordinates to be mapped
bound: Float or iterable, bound of interval
Source code in normflows/flows/periodic.py
planar
Planar
Bases: Flow
Planar flow as introduced in arXiv: 1505.05770
f(z) = z + u * h(w * z + b)
Source code in normflows/flows/planar.py
__init__(shape, act='tanh', u=None, w=None, b=None)
Constructor of the planar flow
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| shape | | shape of the latent variable z | required |
| h | | nonlinear function h of the planar flow (see definition of f above) | required |
| u, w, b | | optional initialization for parameters | required |

Source code in normflows/flows/planar.py
radial
Radial
Bases: Flow
Radial flow as introduced in arXiv: 1505.05770
f(z) = z + beta * h(alpha, r) * (z - z_0)
Source code in normflows/flows/radial.py
__init__(shape, z_0=None)
Constructor of the radial flow
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| shape | | shape of the latent variable z | required |
| z_0 | | parameter of the radial flow | None |

Source code in normflows/flows/radial.py
reshape
Merge
Bases: Split
Same as Split but with forward and backward pass interchanged
Source code in normflows/flows/reshape.py
Split
Bases: Flow
Split features into two sets
Source code in normflows/flows/reshape.py
__init__(mode='channel')
Constructor
The splitting mode can be:
- channel: Splits first feature dimension, usually channels, into two halves
- channel_inv: Same as channel, but with z1 and z2 flipped
- checkerboard: Splits features using a checkerboard pattern (last feature dimension must be even)
- checkerboard_inv: Same as checkerboard, but with inverted coloring

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | | splitting mode | 'channel' |

Source code in normflows/flows/reshape.py
Squeeze
Bases: Flow
Squeeze operation of multi-scale architecture, RealNVP or Glow paper
Source code in normflows/flows/reshape.py
__init__()
Constructor
Source code in normflows/flows/reshape.py
residual
Residual
Bases: Flow
Invertible residual net block, wrapper to the implementation of Chen et al., see sources
Source code in normflows/flows/residual.py
__init__(net, reverse=True, reduce_memory=True, geom_p=0.5, lamb=2.0, n_power_series=None, exact_trace=False, brute_force=False, n_samples=1, n_exact_terms=2, n_dist='geometric')
Constructor
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| net | | Neural network, must be Lipschitz continuous with L < 1 | required |
| reverse | | Flag, if true the map | True |
| reduce_memory | | Flag, if true the Neumann series and precomputations for the backward pass are done in the forward pass | True |
| geom_p | | Parameter of the geometric distribution used for the Neumann series | 0.5 |
| lamb | | Parameter of the geometric distribution used for the Neumann series | 2.0 |
| n_power_series | | Number of terms in the Neumann series | None |
| exact_trace | | Flag, if true the trace of the Jacobian is computed exactly | False |
| brute_force | | Flag, if true the Jacobian is computed exactly in 2D | False |
| n_samples | | Number of samples used to estimate power series | 1 |
| n_exact_terms | | Number of terms always included in the power series | 2 |
| n_dist | | Distribution used for the power series, either "geometric" or "poisson" | 'geometric' |

Source code in normflows/flows/residual.py
iResBlock
Bases: Module
Source code in normflows/flows/residual.py, lines 78-261
__init__(nnet, geom_p=0.5, lamb=2.0, n_power_series=None, exact_trace=False, brute_force=False, n_samples=1, n_exact_terms=2, n_dist='geometric', neumann_grad=True, grad_in_forward=False)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| nnet | | A nn.Module | required |
| n_power_series | | Number of terms in the power series; if not None, a biased approximation to the logdet is used | None |
| exact_trace | | If False, uses a Hutchinson trace estimator; otherwise computes the exact full Jacobian | False |
| brute_force | | Computes the exact logdet; only available for 2D inputs | False |
Source code in normflows/flows/residual.py, lines 79-116
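The log-determinant that `iResBlock` estimates can be illustrated with a toy calculation. The sketch below (plain Python, not the normflows implementation) evaluates the power series log|det(I + J)| = Σₖ (-1)^(k+1) tr(Jᵏ)/k for a small contractive Jacobian, the same series the Neumann/Hutchinson machinery above approximates stochastically:

```python
import math

# Toy 2x2 demonstration of the power series behind iResBlock's logdet:
# log|det(I + J)| = sum_{k>=1} (-1)^(k+1) * tr(J^k) / k, valid for ||J|| < 1.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def logdet_power_series(jac, n_terms):
    term, total = jac, 0.0
    for k in range(1, n_terms + 1):
        total += (-1) ** (k + 1) * trace(term) / k
        term = mat_mul(term, jac)  # advance to J^(k+1)
    return total

jac = [[0.1, 0.05], [0.02, 0.2]]  # contractive toy Jacobian of a residual net
exact = math.log((1 + jac[0][0]) * (1 + jac[1][1]) - jac[0][1] * jac[1][0])
approx = logdet_power_series(jac, 20)
```

With a contractive Jacobian the series converges geometrically, which is why a modest number of terms (here 20) already matches the exact value.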
stochastic
HamiltonianMonteCarlo
Bases: Flow
Flow layer using the HMC proposal in Stochastic Normalising Flows
Source code in normflows/flows/stochastic.py, lines 52-109
__init__(target, steps, log_step_size, log_mass, max_abs_grad=None)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| target | | The stationary distribution of this Markov transition, i.e. the target distribution to sample from | required |
| steps | | The number of leapfrog steps | required |
| log_step_size | | The log step size used in the leapfrog integrator, shape (dim) | required |
| log_mass | | The log mass determining the variance of the momentum samples, shape (dim) | required |
| max_abs_grad | | Maximum absolute value of the gradient of the target distribution's log probability; if None, no gradient clipping is applied. Useful for improving numerical stability | None |
Source code in normflows/flows/stochastic.py, lines 58-72
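The leapfrog integrator at the heart of an HMC transition can be sketched in a few lines of plain Python (unit mass, 1-D; `grad_u` is a hypothetical gradient of the potential U(q) = -log p(q), not the normflows API):

```python
# Minimal leapfrog integrator sketch: half momentum step, alternating full
# position/momentum steps, and a final half momentum step.

def leapfrog(q, p, grad_u, step_size, steps):
    p -= 0.5 * step_size * grad_u(q)       # initial half step for momentum
    for i in range(steps):
        q += step_size * p                 # full position step
        if i < steps - 1:
            p -= step_size * grad_u(q)     # full momentum step
    p -= 0.5 * step_size * grad_u(q)       # final half step for momentum
    return q, p

# Standard normal target: U(q) = q^2 / 2, so grad U(q) = q.
q1, p1 = leapfrog(1.0, 0.0, lambda q: q, step_size=0.1, steps=10)
h0 = 0.5 * 1.0 ** 2 + 0.5 * 0.0 ** 2       # initial Hamiltonian U(q) + p^2 / 2
h1 = 0.5 * q1 ** 2 + 0.5 * p1 ** 2         # Hamiltonian after integration
```

The Hamiltonian is nearly conserved (error of order step_size²), which is what makes the subsequent Metropolis correction accept most proposals.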
MetropolisHastings
Bases: Flow
Sampling through Metropolis Hastings in Stochastic Normalizing Flow
Source code in normflows/flows/stochastic.py, lines 6-49
__init__(target, proposal, steps)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| target | | The stationary distribution of this Markov transition, i.e. the target distribution to sample from | required |
| proposal | | Proposal distribution | required |
| steps | | Number of MCMC steps to perform | required |
Source code in normflows/flows/stochastic.py, lines 12-23
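The accept/reject rule inside each Metropolis-Hastings step can be sketched in plain Python (a symmetric proposal is assumed; `mh_step` and the injected uniform draw `u` are illustrative, not the normflows API):

```python
import math

# One Metropolis-Hastings accept/reject decision for a symmetric proposal:
# accept x' with probability min(1, p(x') / p(x)). The uniform draw u is
# passed in explicitly so the example is deterministic.

def mh_step(log_prob, x, x_proposed, u):
    log_ratio = log_prob(x_proposed) - log_prob(x)
    return x_proposed if math.log(u) < log_ratio else x

log_p = lambda x: -0.5 * x * x  # standard normal, up to an additive constant

accepted = mh_step(log_p, 2.0, 0.5, u=0.999)  # uphill move: always accepted
rejected = mh_step(log_p, 0.0, 2.0, u=0.5)    # large downhill move: rejected for this u
```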
nets
cnn
ConvNet2d
Bases: Module
Convolutional Neural Network with leaky ReLU nonlinearities
Source code in normflows/nets/cnn.py, lines 5-63
__init__(channels, kernel_size, leaky=0.0, init_zeros=True, actnorm=False, weight_std=None)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| channels | | List of channels of conv layers, first entry is in_channels | required |
| kernel_size | | List of kernel sizes, same for height and width | required |
| leaky | | Leaky part of ReLU | 0.0 |
| init_zeros | | Flag whether the last layer shall be initialized with zeros | True |
| scale_output | | Flag whether to scale output with a log scale parameter | required |
| logscale_factor | | Constant factor to be multiplied to log scaling | required |
| actnorm | | Flag whether activation normalization shall be done after each conv layer except the output | False |
| weight_std | | Fixed std used to initialize every layer | None |
Source code in normflows/nets/cnn.py, lines 10-60
lipschitz
LipschitzCNN
Bases: Module
Convolutional neural network which is Lipschitz continuous with Lipschitz constant L < 1
Source code in normflows/nets/lipschitz.py, lines 70-129
__init__(channels, kernel_size, lipschitz_const=0.97, max_lipschitz_iter=5, lipschitz_tolerance=None, init_zeros=True)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| channels | | Integer list with the number of channels of the layers | required |
| kernel_size | | Integer list of kernel sizes of the layers | required |
| lipschitz_const | | Maximum Lipschitz constant of each layer | 0.97 |
| max_lipschitz_iter | | Maximum number of iterations used to ensure that layers are Lipschitz continuous with L smaller than the set maximum; if None, tolerance is used | 5 |
| lipschitz_tolerance | | Float, tolerance used to ensure Lipschitz continuity if max_lipschitz_iter is None, typically 1e-3 | None |
| init_zeros | | Flag, whether to initialize the last layer approximately with zeros | True |
Source code in normflows/nets/lipschitz.py, lines 76-126
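The Lipschitz constraint behind these networks is usually enforced by estimating each layer's largest singular value and rescaling the weights. The sketch below shows the idea on a toy 2x2 weight matrix via power iteration (illustrative only; the library applies this machinery to conv and linear layers):

```python
import math

# Estimate the spectral norm of a small matrix by power iteration on W^T W,
# then rescale W so its spectral norm is at most lipschitz_const.

def mat_vec(w, v):
    return [sum(w[i][j] * v[j] for j in range(len(v))) for i in range(len(w))]

def transpose(w):
    return [[w[j][i] for j in range(len(w))] for i in range(len(w[0]))]

def spectral_norm(w, iters=100):
    v = [1.0, 1.0]
    for _ in range(iters):
        v = mat_vec(transpose(w), mat_vec(w, v))  # one power-iteration step
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]                 # renormalize the iterate
    wv = mat_vec(w, v)
    return math.sqrt(sum(x * x for x in wv))      # ||W v|| -> largest singular value

w = [[2.0, 0.3], [0.0, 0.5]]
sigma = spectral_norm(w)
lipschitz_const = 0.97
scale = min(1.0, lipschitz_const / sigma)
w_scaled = [[x * scale for x in row] for row in w]
```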
LipschitzMLP
Bases: Module
Fully connected neural net which is Lipschitz continuous with Lipschitz constant L < 1
Source code in normflows/nets/lipschitz.py, lines 14-67
__init__(channels, lipschitz_const=0.97, max_lipschitz_iter=5, lipschitz_tolerance=None, init_zeros=True)
Constructor

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| channels | | Integer list with the number of channels of the layers | required |
| lipschitz_const | | Maximum Lipschitz constant of each layer | 0.97 |
| max_lipschitz_iter | | Maximum number of iterations used to ensure that layers are Lipschitz continuous with L smaller than the set maximum; if None, tolerance is used | 5 |
| lipschitz_tolerance | | Float, tolerance used to ensure Lipschitz continuity if max_lipschitz_iter is None, typically 1e-3 | None |
| init_zeros | | Flag, whether to initialize the last layer approximately with zeros | True |
Source code in normflows/nets/lipschitz.py, lines 17-64
projmax_(v)
Inplace argmax on absolute value.
Source code in normflows/nets/lipschitz.py, lines 651-656
made
Implementation of MADE. Code taken from https://github.com/bayesiains/nsf
MADE
Bases: Module
Implementation of MADE.
It can use either feedforward blocks or residual blocks (default is residual). Optionally, it can use batch norm or dropout within blocks (default is neither).
Source code in normflows/nets/made.py, lines 217-304
MaskedFeedforwardBlock
Bases: Module
A feedforward block based on a masked linear module.
NOTE In this implementation, the number of output features is taken to be equal to the number of input features.
Source code in normflows/nets/made.py, lines 84-137
MaskedLinear
Bases: Linear
A linear module with a masked weight matrix.
Source code in normflows/nets/made.py, lines 19-81
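The masking idea behind MADE's masked linear layers can be sketched with plain lists: each input and output unit gets a degree, and a connection is allowed only when the output degree is at least the input degree (a strict inequality is used for the final output layer). The helper below is illustrative, not the normflows API:

```python
# Degree-based autoregressive mask: entry (i, j) is 1 iff output degree i
# may see input degree j. Hidden layers use >=, the output layer uses >.

def autoregressive_mask(in_degrees, out_degrees, strict=False):
    ok = (lambda dout, din: dout > din) if strict else (lambda dout, din: dout >= din)
    return [[1 if ok(dout, din) else 0 for din in in_degrees] for dout in out_degrees]

hidden_mask = autoregressive_mask([1, 2, 3], [1, 2, 3])
output_mask = autoregressive_mask([1, 2, 3], [1, 2, 3], strict=True)
```

The strict output mask guarantees that output i depends only on inputs with a lower degree, which is exactly the autoregressive property MADE needs.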
MaskedResidualBlock
Bases: Module
A residual block containing masked linear modules.
Source code in normflows/nets/made.py, lines 140-214
made_test
Tests for MADE. Code partially taken from https://github.com/bayesiains/nsf
mlp
MLP
Bases: Module
A multilayer perceptron with Leaky ReLU nonlinearities
Source code in normflows/nets/mlp.py, lines 5-58
__init__(layers, leaky=0.0, score_scale=None, output_fn=None, output_scale=None, init_zeros=False, dropout=None)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| layers | | List of layer sizes from start to end | required |
| leaky | | Slope of the leaky part of the ReLU; if 0.0, standard ReLU is used | 0.0 |
| score_scale | | Factor to apply to the scores, i.e. the output before output_fn | None |
| output_fn | | String, function to be applied to the output, either None, "sigmoid", "relu", "tanh", or "clampexp" | None |
| output_scale | | Rescale outputs if output_fn is specified, i.e. scale * output_fn(out / scale) | None |
| init_zeros | | Flag; if true, weights and biases of the last layer are initialized with zeros (helpful for deep models, see arXiv 1807.03039) | False |
| dropout | | Float; if specified, dropout is done before the last layer; if None, no dropout is done | None |
Source code in normflows/nets/mlp.py, lines 10-55
resnet
ResidualBlock
Bases: Module
A general-purpose residual block. Works only with 1-dim inputs.
Source code in normflows/nets/resnet.py, lines 7-50
ResidualNet
Bases: Module
A general-purpose residual network. Works only with 1-dim inputs.
Source code in normflows/nets/resnet.py, lines 53-104
sampling
hais
HAIS
Class which performs HAIS
Source code in normflows/sampling/hais.py, lines 8-49
__init__(betas, prior, target, num_leapfrog, step_size, log_mass)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| betas | | Annealing schedule; entry j defines the jth intermediate target distribution, interpolating between prior and target | required |
| prior | | The prior distribution to start the HAIS chain | required |
| target | | The target distribution from which we would like to draw weighted samples | required |
| num_leapfrog | | Number of leapfrog steps in the HMC transitions | required |
| step_size | | Step size to use for HMC transitions | required |
| log_mass | | Log mass to use for HMC transitions | required |
Source code in normflows/sampling/hais.py, lines 13-35
sample(num_samples)
Run HAIS to draw samples from the target with appropriate weights.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| num_samples | | The number of samples to draw | required |
Source code in normflows/sampling/hais.py, lines 37-49
transforms
Logit
Bases: Flow
Logit mapping of image tensor, see RealNVP paper
logit(alpha + (1 - alpha) * x) where logit(x) = log(x / (1 - x))
Source code in normflows/transforms.py, lines 8-47
__init__(alpha=0.05)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| alpha | | Alpha parameter, see above | 0.05 |
Source code in normflows/transforms.py, lines 17-24
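The mapping above and its inverse can be written out directly in plain Python (a sketch of the formula, not the library code; which direction is "forward" in the flow API is left to the source):

```python
import math

# z = logit(alpha + (1 - alpha) * x), with inverse via the sigmoid.

def logit_forward(x, alpha=0.05):
    s = alpha + (1 - alpha) * x
    return math.log(s / (1 - s))

def logit_inverse(z, alpha=0.05):
    s = 1.0 / (1.0 + math.exp(-z))   # sigmoid undoes the logit
    return (s - alpha) / (1 - alpha)

z = logit_forward(0.3)
x_back = logit_inverse(z)
```

The alpha offset keeps the argument of the logit away from 0 and 1, so pixel values at the boundary of [0, 1] stay finite after the transform.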
Shift
Bases: Flow
Shift data by a fixed constant
Default is -0.5 to shift data from interval [0, 1] to [-0.5, 0.5]
Source code in normflows/transforms.py, lines 50-76
__init__(shift=-0.5)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| shift | | Shift to apply to the data | -0.5 |
Source code in normflows/transforms.py, lines 57-64
utils
eval
bitsPerDim(model, x, y=None, trans='logit', trans_param=[0.05])
Computes the bits per dim for a batch of data
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | | Model to compute bits per dim for | required |
| x | | Batch of data | required |
| y | | Class labels for batch of data if base distribution is class conditional | None |
| trans | | Transformation to be applied to images during training | 'logit' |
| trans_param | | List of parameters of the transformation | [0.05] |

Returns:

| Type | Description |
|---|---|
| | Bits per dim for data batch under model |
Source code in normflows/utils/eval.py, lines 5-34
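The core conversion bitsPerDim performs is simple arithmetic: a log-likelihood in nats is divided by the number of dimensions and by ln 2. The sketch below shows just that conversion (the library additionally corrects for the data transformation, which is omitted here):

```python
import math

# Convert a log-likelihood in nats into bits per dimension:
# bpd = -log_prob / (num_dims * ln 2).

def bits_per_dim(log_prob_nats, num_dims):
    return -log_prob_nats / (num_dims * math.log(2))

# A 32x32x3 image whose log-likelihood works out to 3 bits per dimension:
num_dims = 32 * 32 * 3
bpd = bits_per_dim(-3.0 * num_dims * math.log(2), num_dims)
```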
bitsPerDimDataset(model, data_loader, class_cond=True, trans='logit', trans_param=[0.05])
Computes average bits per dim for an entire dataset given by a data loader
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | | Model to compute bits per dim for | required |
| data_loader | | Data loader of dataset | required |
| class_cond | | Flag indicating whether model is class-conditional | True |
| trans | | Transformation to be applied to images during training | 'logit' |
| trans_param | | List of parameters of the transformation | [0.05] |

Returns:

| Type | Description |
|---|---|
| | Average bits per dim for dataset |
Source code in normflows/utils/eval.py, lines 37-63
masks
create_alternating_binary_mask(features, even=True)
Creates a binary mask of a given dimension which alternates its masking.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| features | | Dimension of mask | required |
| even | | If True, even values are assigned 1s, odd 0s; if False, vice versa | True |

Returns:

| Type | Description |
|---|---|
| | Alternating binary mask of type torch.Tensor |
Source code in normflows/utils/masks.py, lines 4-17
create_mid_split_binary_mask(features)
Creates a binary mask of a given dimension which splits its masking at the midpoint.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| features | | Dimension of mask | required |

Returns:

| Type | Description |
|---|---|
| | Binary mask split at midpoint, of type torch.Tensor |
Source code in normflows/utils/masks.py, lines 20-32
create_random_binary_mask(features, seed=None)
Creates a random binary mask of a given dimension with half of its entries randomly set to 1s.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| features | | Dimension of mask | required |
| seed | | Seed to be used | None |

Returns:

| Type | Description |
|---|---|
| | Binary mask with half of its entries set to 1s, of type torch.Tensor |
Source code in normflows/utils/masks.py, lines 35-57
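The two deterministic mask helpers above can be sketched in plain Python (the library returns torch tensors; plain lists are used here, and the convention that the first half or even positions receive 1s is an assumption based on the descriptions above):

```python
# Pure-Python sketches of the deterministic binary mask patterns.

def alternating_mask(features, even=True):
    start = 0 if even else 1
    mask = [0] * features
    for i in range(start, features, 2):
        mask[i] = 1          # 1s at every other position
    return mask

def mid_split_mask(features):
    midpoint = features // 2 + features % 2  # odd sizes put the extra 1 in the first half
    return [1] * midpoint + [0] * (features - midpoint)

a = alternating_mask(4)
b = alternating_mask(4, even=False)
c = mid_split_mask(5)
```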
nn
ActNorm
Bases: Module
ActNorm layer with just one forward pass
Source code in normflows/utils/nn.py, lines 26-43
__init__(shape)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| shape | | Same as shape in flows.ActNorm | required |
| logscale_factor | | Same as logscale_factor in flows.ActNorm | required |
Source code in normflows/utils/nn.py, lines 30-39
ClampExp
Bases: Module
Nonlinearity min(exp(lam * x), 1)
Source code in normflows/utils/nn.py, lines 46-61
__init__()
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| lam | | Lambda parameter | required |
Source code in normflows/utils/nn.py, lines 51-57
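The nonlinearity itself is a one-liner, sketched here in plain Python: it behaves like the exponential for negative inputs and saturates at 1, giving a positive, bounded output:

```python
import math

# ClampExp nonlinearity: min(exp(lam * x), 1).

def clamp_exp(x, lam=1.0):
    return min(math.exp(lam * x), 1.0)

saturated = clamp_exp(0.5)   # positive input saturates at 1
small = clamp_exp(-1.0)      # negative input follows exp
```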
ConstScaleLayer
Bases: Module
Scaling features by a fixed factor
Source code in normflows/utils/nn.py, lines 7-23
__init__(scale=1.0)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| scale | | Scale to apply to features | 1.0 |
Source code in normflows/utils/nn.py, lines 12-20
PeriodicFeaturesCat
Bases: Module
Converts a specified part of the input to periodic features by replacing those features f with [sin(scale * f), cos(scale * f)].
Note that this decreases the number of features and their order is changed.
Source code in normflows/utils/nn.py, lines 133-178
__init__(ndim, ind, scale=1.0)
Constructor

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ndim | Int | Number of dimensions | required |
| ind | Iterable | Indices of input elements to convert to periodic features | required |
| scale | | Scalar or iterable, used to scale inputs before converting them to periodic features | 1.0 |
Source code in normflows/utils/nn.py, lines 142-169
PeriodicFeaturesElementwise
Bases: Module
Converts a specified part of the input to periodic features by replacing those features f with w1 * sin(scale * f) + w2 * cos(scale * f).
Note that this operation is done elementwise and, therefore, some information about the feature can be lost.
Source code in normflows/utils/nn.py, lines 64-130
__init__(ndim, ind, scale=1.0, bias=False, activation=None)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ndim | int | Number of dimensions | required |
| ind | iterable | Indices of input elements to convert to periodic features | required |
| scale | | Scalar or iterable, used to scale inputs before converting them to periodic features | 1.0 |
| bias | | Flag, whether to add a bias | False |
| activation | | Function or None, activation function to be applied | None |
Source code in normflows/utils/nn.py, lines 74-118
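The elementwise map can be sketched in plain Python to show the periodicity it buys (w1 and w2 are learned in the library; fixed numbers are used here for illustration):

```python
import math

# A selected feature f is replaced by w1 * sin(scale * f) + w2 * cos(scale * f),
# so the result is invariant under shifts of f by 2 * pi / scale.

def periodic_feature(f, w1=0.8, w2=0.3, scale=1.0):
    return w1 * math.sin(scale * f) + w2 * math.cos(scale * f)

period = 2.0 * math.pi
y1 = periodic_feature(0.3)
y2 = periodic_feature(0.3 + period)  # one full period away: same output
```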
sum_except_batch(x, num_batch_dims=1)
Sums all elements of x
except for the first num_batch_dims
dimensions.
Source code in normflows/utils/nn.py, lines 190-193
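What this reduction computes can be sketched on nested lists (the library operates on torch tensors; `total` is an illustrative helper): keep the batch dimension and sum over everything else, producing one scalar per sample.

```python
# Sum all elements of each sample, keeping only the batch dimension.

def sum_except_batch(batch):
    def total(v):
        return sum(total(e) for e in v) if isinstance(v, list) else v
    return [total(sample) for sample in batch]

out = sum_except_batch([[[1, 2], [3, 4]], [[0, 0], [5, 5]]])  # one scalar per sample
```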
optim
clear_grad(model)
Set gradients of model parameters to None, as this speeds up training; see YouTube

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | | Model to clear gradients of | required |
Source code in normflows/utils/optim.py, lines 16-25
set_requires_grad(module, flag)
Sets the requires_grad flag of all parameters of a torch.nn.Module

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| module | | torch.nn.Module | required |
| flag | | Value to set requires_grad to | required |
Source code in normflows/utils/optim.py, lines 4-13
preprocessing
Jitter
Transform for dataloader, adds uniform jitter noise to data
Source code in normflows/utils/preprocessing.py, lines 28-42
__init__(scale=1.0 / 256)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| scale | | Scaling factor for noise | 1.0 / 256 |
Source code in normflows/utils/preprocessing.py, lines 31-37
Logit
Transform for dataloader
logit(alpha + (1 - alpha) * x) where logit(x) = log(x / (1 - x))
Source code in normflows/utils/preprocessing.py, lines 4-25
__init__(alpha=0)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| alpha | | See above | 0 |
Source code in normflows/utils/preprocessing.py, lines 12-18
Scale
Transform for dataloader, scales the data by a fixed factor
Source code in normflows/utils/preprocessing.py, lines 45-57
__init__(scale=255.0 / 256.0)
Constructor
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| scale | | Scaling factor applied to the data | 255.0 / 256.0 |
Source code in normflows/utils/preprocessing.py, lines 48-54
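These three transforms are typically chained for image data. The sketch below shows the assumed pipeline on a single 8-bit pixel value in plain Python (a hypothetical `preprocess` helper, not the library code): scale the integer value toward [0, 1), dequantize with uniform jitter, then apply the logit transform.

```python
import math
import random

# Assumed dequantization pipeline: Scale, then Jitter, then Logit.

def preprocess(pixel, alpha=0.05, rng=None):
    rng = rng or random.Random(0)
    x = pixel * (255.0 / 256.0) / 255.0  # Scale: map {0, ..., 255} into [0, 255/256]
    x = x + rng.random() / 256.0         # Jitter: uniform noise in [0, 1/256) keeps x < 1
    s = alpha + (1 - alpha) * x          # Logit transform with offset alpha
    return math.log(s / (1 - s))

z_low = preprocess(0)
z_high = preprocess(255)
```

The scaling and jitter together keep x strictly inside [0, 1), so the logit is finite even at the extreme pixel values 0 and 255.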