
Conversation

daniyalaliev (Contributor)

No description provided.

// Which of the blobs will be used during learning
int blobsForLearn;
// The name of the first adapter is saved to connect to the necessary head during serialization
CString firstAdapter;
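For illustration, a minimal sketch of how these fields might be serialized, assuming NeoML's usual Serialize pattern; the version constant and the field order are assumptions of this sketch, not the actual implementation.

    // Sketch only: version constant and field order are assumed
    static const int DnnHeadAdapterLayerVersion = 0;

    void CDnnHeadAdapterLayer::Serialize( CArchive& archive )
    {
        archive.SerializeVersion( DnnHeadAdapterLayerVersion );
        CBaseLayer::Serialize( archive );
        archive.Serialize( blobsForLearn );
        // The stored name lets loading code find the shared head
        // through its first adapter and reattach this one to it
        archive.Serialize( firstAdapter );
    }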
Contributor:
What if the first adapter is removed from the dnn?
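To illustrate the concern (the layer name and the serialization calls here are hypothetical):

    // Assumed: "adapter1" is the adapter whose name was remembered in firstAdapter
    dnn.DeleteLayer( "adapter1" );

    // Storing now writes a firstAdapter name that resolves to no layer on load
    CArchiveFile file( "net.arch", CArchive::store );
    CArchive archive( &file, CArchive::store );
    dnn.Serialize( archive );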


// Feed the adapter's input to the head's composite source layer
// and run the inner network once
auto headSource = static_cast<CCompositeSourceLayer*>( internalDnn->GetLayer( "source" ).Ptr() );
headSource->SetBlob( inputBlobs[0] );
internalDnn->runOnce( 0 );
Contributor:
There should be a special test where the CDnnHeadAdapterLayer is inside the CRecursiveLayer.
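A rough skeleton of such a test, as an illustration only: it assumes the recurrent wrapper is CRecurrentLayer with the CCompositeLayer input/output mapping API, and that the adapter exposes a SetDnnHead setter matching the GetDnnHead getter; layer sizes and the repeat count are placeholders.

    // Sketch: a shared head called from inside a recurrent layer
    CPtr<CDnnHead> head = new CDnnHead<CFullyConnectedLayer, CReLULayer>(
        random, mathEngine, FullyConnected( 16 ), Relu() );

    CPtr<CDnnHeadAdapterLayer> adapter = new CDnnHeadAdapterLayer( mathEngine );
    adapter->SetName( "adapter" );
    adapter->SetDnnHead( head ); // assumed setter

    CPtr<CRecurrentLayer> recurrent = new CRecurrentLayer( mathEngine, "recurrent" );
    recurrent->AddLayer( *adapter );
    recurrent->SetInputMapping( *adapter );  // recurrent input -> adapter
    recurrent->SetOutputMapping( *adapter ); // adapter -> recurrent output
    recurrent->SetRepeatCount( 3 );
    dnn.AddLayer( *recurrent );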


using namespace NeoML;
using namespace NeoMLTest;

@favorart (Contributor), May 7, 2024:

Please add one more test with the exact scenario suggested by the user of this feature:

    IMathEngine& mathEngine = MathEngine();
    CRandom random( 42 );
    CDnn dnn( random, mathEngine );

    // To be able to run the same set of layers 3 times
    // on 3 different inputs and get 3 different output links
    CPtr<CDnnHead> head = new CDnnHead<
        CFullyConnectedLayer,
        CGELULayer,
        CFullyConnectedLayer,
        CReLULayer,
        CDropoutLayer,
        CFullyConnectedLayer
    >(
        random, mathEngine,
        FullyConnected( 128 ),
        Gelu(),
        FullyConnected( 64 ),
        Relu(),
        Dropout( 0.5f ),
        FullyConnected( 1 )
    );

    CBaseLayer* x = Source( dnn, "srcX" );
    x = FullyConnected( 512 )( x );
    x = Gelu()( x );
    x = DnnHeadAdapter( head )( x ); // #1 head dnn call

    CBaseLayer* y = Source( dnn, "srcY" );
    y = FullyConnected( 512 )( y );
    y = Gelu()( y );
    y = DnnHeadAdapter( head )( y ); // #2 head dnn call

    CBaseLayer* z = Source( dnn, "srcZ" );
    z = FullyConnected( 512 )( z );
    z = Gelu()( z );
    z = DnnHeadAdapter( head )( z ); // #3 head dnn call

    CBaseLayer* out = ConcatChannels()( x, y, z );

    CBaseLayer* labels = Source( dnn, "labels" );
    BinaryCrossEntropyLoss()( out, labels );
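A possible continuation of the test body to actually exercise the net; the input width, the placeholder labels, and the solver choice are assumptions of this sketch, not part of the suggested scenario.

    // Hypothetical driver: fill the sources and labels, then train a few iterations
    const int batchSize = 4;
    const int inputSize = 256; // assumed width of the srcX/srcY/srcZ inputs

    const char* sourceNames[] = { "srcX", "srcY", "srcZ" };
    for( const char* name : sourceNames ) {
        CPtr<CDnnBlob> blob = CDnnBlob::CreateDataBlob( mathEngine, CT_Float, 1, batchSize, inputSize );
        blob->Fill( 0.5f );
        static_cast<CSourceLayer*>( dnn.GetLayer( name ).Ptr() )->SetBlob( blob );
    }

    // One label channel per head call (the three outputs are concatenated)
    CPtr<CDnnBlob> labelBlob = CDnnBlob::CreateDataBlob( mathEngine, CT_Float, 1, batchSize, 3 );
    labelBlob->Fill( 1.f ); // placeholder labels
    static_cast<CSourceLayer*>( dnn.GetLayer( "labels" ).Ptr() )->SetBlob( labelBlob );

    dnn.SetSolver( new CDnnAdaptiveGradientSolver( mathEngine ) );
    for( int i = 0; i < 10; ++i ) {
        dnn.RunAndLearnOnce(); // each iteration runs the shared head three times
    }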

@favorart marked this pull request as draft on May 7, 2024 16:29
CDnnHeadAdapterLayer* head = dynamic_cast<CDnnHeadAdapterLayer*>( layer );
if( head != nullptr ) {
    // Recursively optimize the shared head's inner network as well
    OptimizeDnn( *( head->GetDnnHead()->GetInnerDnn() ) );
}
@favorart (Contributor), May 8, 2024:

If there is only one adapter for this exact sub-network, the adapter should be removed and the sub-net reconnected directly, as is done for a composite in optimization::UnpackComposites.
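As a first step toward that, a sketch of counting the adapters that share a head; the return type of GetDnnHead and the iteration over layer names are assumptions of this sketch.

    // Sketch: count adapters sharing the given head; if exactly one, the
    // adapter could be inlined the way optimization::UnpackComposites inlines
    // composites (the reconnection itself is not shown)
    static int countAdaptersForHead( CDnn& dnn, const CDnnHead* sharedHead )
    {
        CArray<const char*> layerNames;
        dnn.GetLayerList( layerNames );
        int count = 0;
        for( int i = 0; i < layerNames.Size(); ++i ) {
            CDnnHeadAdapterLayer* adapter
                = dynamic_cast<CDnnHeadAdapterLayer*>( dnn.GetLayer( layerNames[i] ).Ptr() );
            if( adapter != nullptr && adapter->GetDnnHead() == sharedHead ) {
                ++count;
            }
        }
        return count;
    }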

@daniyalaliev force-pushed the headAdapter branch 2 times, most recently from c77fccb to dcbedff on May 30, 2024 22:18
@favorart changed the title from "[NeoML] CDnnHead draft implementation" to "[NeoML] CDnnHeadAdapterLayer" on Jun 2, 2024
@favorart added the "enhancement" (New feature or request) label on Aug 16, 2024